\begin{abstract}
Pairs of misalignment-produced axions with nearby masses can experience a
nonlinear resonance that leads to enhanced direct and astrophysical signatures
of axion dark matter. In much of the relevant parameter space,
self-interactions cause axion fluctuations to become nonperturbative and to
collapse in the early Universe. We investigate the observational consequences
of such nonperturbative structure in this ``friendly axion'' scenario with
$3+1$ dimensional simulations. Critically, we find that nonlinear dynamics work
to equilibrate the abundance of the two axions, making it easier than
previously expected to experimentally confirm the existence of a resonant pair.
We also compute the gravitational wave emission from friendly axion dark
matter; while the resulting stochastic background is likely undetectable for
axion masses above $10^{-22} \, \text{eV}$, the polarization of the cosmic
microwave background does constrain possible hyperlight, friendly
subcomponents. Finally, we demonstrate that dense, self-interaction--bound
oscillons formed during the period of strong nonlinearity are driven by the
homogeneous axion background, enhancing their lifetime beyond the in-vacuum
expectation.
\end{abstract}
% Source: https://export.arxiv.org/pdf/2208.05501
\raggedbottom
\title{Nonperturbative structure in coupled axion sectors \\ and implications for direct detection}
\author{David Cyncynates\orcid{0000-0002-2660-8407}}
\email{davidcyn@uw.edu}
\affiliation{Department of Physics, University of Washington, Seattle, WA 98195, U.S.A.}
\affiliation{Stanford Institute for Theoretical Physics, Stanford University, Stanford, CA 94305, U.S.A.}
\author{Olivier Simon\orcid{0000-0003-2718-2927}}
\email{osimon@stanford.edu}
\affiliation{Stanford Institute for Theoretical Physics, Stanford University, Stanford, CA 94305, U.S.A.}
\author{Jedidiah O. Thompson\orcid{0000-0002-7342-0554}}
\email{jedidiah@stanford.edu}
\affiliation{Stanford Institute for Theoretical Physics, Stanford University, Stanford, CA 94305, U.S.A.}
\author{Zachary J. Weiner\orcid{0000-0003-1755-2277}}
\email{zweiner@uw.edu}
\affiliation{Department of Physics, University of Washington, Seattle, WA 98195, U.S.A.}
\date{\today}
\section{Introduction} \label{sec:intro}
Axions are some of the best-motivated extensions to the Standard Model (SM).
The simplest such extension, the QCD axion, was originally proposed to solve the strong CP
problem~\cite{Peccei:1977hh,Peccei:1977ur,Weinberg:1977ma,Wilczek:1977pj}, but it has
since been realized that axions are common in many theories beyond the SM
(BSM)~\cite{Witten:1984dg,Banks:1996ss,Svrcek:2006yi}.
One particularly important example is string theory, which generically predicts a large number of light axions coupled weakly to the SM~\cite{Arvanitaki:2009fg}.
The possibility of such a ``string axiverse'' is of particular interest because it offers a
potential low-energy window into extremely high-energy physics.
The simplest nonthermal production mechanism for a cosmological abundance of axions is the
misalignment mechanism~\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah,Turner:1983he,Marsh:2015xka}.
Any axion with a mass lighter than the Hubble scale during inflation would be seeded in an
approximately homogeneous state displaced from the vacuum.
It would then remain frozen at this ``misaligned'' field value until the expansion rate drops
below its mass, at which point it begins to coherently oscillate about the minimum of its potential.
Barring substantial sources of isocurvature, axions have large-scale density perturbations that track
the adiabatic fluctuations also seeded during inflation and are thus a viable candidate for the
observed dark matter (DM) or a subcomponent thereof.
An axion's potential is generically nonlinear, but at late times all axions with a mass $m$ much
larger than the present-day Hubble rate ($m \gg H_0$) oscillate near the bottom of their potential
and may be treated as free, massive fields.
This is not, however, a valid assumption at early times, and it has become increasingly apparent that nonlinearities in an axion's potential can have an outsized impact on many late-time observables (see, e.g., Refs.~\cite{Daido:2015bva,Kitajima:2018zco,Arvanitaki:2019rax,Co:2019jts,Cyncynates:2021xzw,Co:2020dya, Eroncel:2022vjg,Eroncel:2022efc}).
If the dark matter comprises a single axion, these early-time dynamics can strongly enhance structure on scales that enter the horizon when the Hubble rate $H$ is approximately the axion mass $m$~\cite{Arvanitaki:2019rax}.
More generally, a string axiverse may consist of many axions interacting with each other through a joint potential, and recent work has shown that when any two of these have similar masses (within a factor of roughly 2) a new type of efficient, resonant energy transfer is possible~\cite{Cyncynates:2021xzw}.
This mechanism, dubbed ``friendship'' due to the necessary mild coincidence of masses, transfers
energy from an axion with a high decay constant to one with a lower decay constant.
Since an axion's couplings to the SM are generically inversely proportional to its decay constant, the mechanism boosts the abundance of the more strongly coupled axion.
In other words, friendly axion dark matter can be significantly more visible to direct detection
experiments than would be expected for either axion individually.
In this paper, we follow up on the work of Ref.~\cite{Cyncynates:2021xzw} with a suite of $3+1$
dimensional numerical simulations, corroborating its findings and extending the results to the
strongly nonlinear regime.
As anticipated in that work, large spatial inhomogeneities significantly modify the results of a
homogeneous analysis.
Nonperturbative fluctuations collapse into dense \textit{oscillons}, nontopological field
configurations bound by self-interactions~\cite{kudryavtsev1975solitonlike,Makhankov:1978rg,Gleiser:1993pt,Kolb:1993hw,Salmi:2012ta,Amin:2011hj,Kawasaki:2019czd,Olle:2020qqy,Zhang:2020bec,Cyncynates:2021rtf}.
The oscillons quench the resonant amplification and mediate energy transfer between the friendly
pair, leading to approximate energy density equipartition over a broad range of parameters.
In contrast to expectations from a homogeneous analysis, the enhanced visibility of one axion
therefore does \textit{not} come at the expense of the other's detectability.
In sum, nonlinear dynamics make the friendly axion model both more predictive (by being less
parameter-dependent) and uniquely identifiable (because both axions would be detectable).
This paper is divided as follows.
In \cref{sec:review} we review the friendly axion model and results within the spatially homogeneous approximation.
\cref{sec:results} presents the extension of these results into the nonlinear regime using numerical
simulations, with a primary focus on the late-time abundance as relevant to direct detection
experiments.
We also investigate gravitational wave signatures in these scenarios, which, while not promising if
the friendly axions make up all of the dark matter, are relevant for hyperlight subcomponents.
Finally, we study a novel driving effect in which oscillons resonantly siphon energy from the
axion background, parametrically enhancing their lifetime.
We conclude in \cref{sec:discussion}, putting this work into the broader context of the
landscape of nonlinear axion models.
For completeness and ease of readability, we relegate an extended discussion of methodology and
additional results to the appendices.
\cref{app:numerical-details} enumerates the system of evolution equations and the details of
our numerical implementation, and \cref{app:oscillons} expands upon our discussion of bound axion states.
\section{Review of friendly axions} \label{sec:review}
As a concrete and illustrative model, Ref.~\cite{Cyncynates:2021xzw} focuses on a simple two-axion
potential with two instanton contributions:\footnote{
Throughout, we work in units where $\hbar = c = 1$.
We also define the reduced Planck mass
$\Mpl = 1 / \sqrt{8 \pi G} \approx 2.44 \times 10^{18} \, \mathrm{GeV}$.
}
\begin{align} \label{eq:twoAxionPotentialPhi}
\begin{split}
V(\phi_S,\phi_L)
&= \Lambda_1^4 \left[ 1 - \cos \left( \frac{\phs}{f_S} + \frac{\phl}{f_L} \right) \right] \\
&\hphantom{{}={}}
+ \Lambda_2^4 \left( 1 - \cos \frac{\phl}{f_L} \right).
\end{split}
\end{align}
The canonically normalized axion field variables $\phi_S$ and $\phi_L$ are naturally recast as
angular variables via the definition $\ths \equiv \phs / f_S$ and $\thl \equiv \phl / f_L$.
Redefining $\Lambda_1^4 \equiv m^2 f^2$ and $\Lambda_2^4 \equiv \mu^2 m^2 \calF^2 f^2$,
the axion masses are\footnote{
The interaction-basis axions $\phi_S$ and $\phi_L$ are not exact mass eigenstates, making this
definition ambiguous.
For $\calF \gg 1$, the distinction between the two bases is small, so we often neglect it in our heuristic discussions. The effect is not, however, quantitatively
negligible throughout the parameter space we consider, and it is always included in our results.
} $m_S = m$ and $m_L = \mu m_S$ and their decay constants are $f_S = f$ and $f_L = \calF f$,
respectively.
In terms of these variables, \cref{eq:twoAxionPotentialPhi} takes the form
\begin{align} \label{eq:twoAxionPotential}
\begin{split}
V( \thl , \ths )
&= m^2 f^2 \Big[
\left( 1 - \cos \left( \ths + \thl \right) \right) \\
&\hphantom{{}={} m^2 f^2 \Big[}
+ \mu^2 \calF^2 \left( 1 - \cos \thl \right) \Big].
\end{split}
\end{align}
We focus on the range $\calF > 1$ where $f_S < f_L$, and we
refer to $\phs$ and $\phl$ as the ``short'' and ``long'' axion respectively in reference to the size
of their decay constants.
(The regime with $f_S > f_L$ does not exhibit nonlinear resonances.)
The short and long axions then form a ``friendly pair'' when $0.7 \lesssim \mu < 1$,
corresponding to an $\mathcal{O}(1)$ coincidence in their masses.
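To make the model concrete, the dimensionless potential of \cref{eq:twoAxionPotential} can be evaluated directly; the following is a minimal sketch (the function name and fiducial parameter values are ours, chosen for illustration), working in units of $m^2 f^2$:

```python
import numpy as np

def V(theta_s, theta_l, mu, F):
    """Two-axion potential of Eq. (2), in units of m^2 f^2.

    theta_s, theta_l : misalignment angles phi_S / f_S and phi_L / f_L
    mu               : mass ratio m_L / m_S (friendly range ~ 0.7 to 1)
    F                : decay-constant ratio f_L / f_S (> 1 in this work)
    """
    return (1 - np.cos(theta_s + theta_l)) + mu**2 * F**2 * (1 - np.cos(theta_l))

# Near the origin the potential is approximately quadratic,
# V ~ (theta_s + theta_l)^2 / 2 + mu^2 F^2 theta_l^2 / 2,
# so small oscillations of the two angles mix.
print(V(1e-3, 0.0, mu=0.75, F=10))  # ~ 5e-7, i.e. theta_s^2 / 2
```

Note that the cross term $\cos(\ths + \thl)$ is what couples the two fields; dropping it leaves two independent cosine potentials.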
While \cref{eq:twoAxionPotentialPhi} might represent a subsector of a much larger axiverse, the
dynamics of the friendly pair of interest are insensitive to possible couplings to other axions
barring additional coincidences in mass.
Namely, only the relative frequency of coupled oscillators determines the efficiency of energy transfer between them, so the actual instanton scales $\Lambda_i$ and decay constants $f_i$ matter only insofar as they (together) determine the axion masses.
In the early Universe, the misalignment mechanism initializes each axion at an approximately
spatially homogeneous value away from the late-time minimum; a natural assumption, barring anthropic
and other considerations, is that $\theta_I(t_\text{initial}) = \mathcal{O}(1)$, where the capital
index $I$ runs over axion flavors.
The axions remain frozen at their misaligned values until the Hubble rate $H$ drops below their
masses.
Since the two axion masses are comparable, the long axion initially has $\mathcal{O}(\calF^2)$ times
more energy than the short axion.
In the absence of couplings between the axions, this imbalance would persist to their present-day
abundance.
The same conclusion holds for coupled axions as well, so long as the masses of the axions are well
separated.
At large field values, however, interactions can substantially shift the axion oscillation frequency from its ground state value.
Ref.~\cite{Cyncynates:2021xzw} showed that coupled axions with a decay constant hierarchy $\calF
\gtrsim 3$ and sufficiently close masses $0.75 \lesssim \mu < 1$ tend to align their frequencies in
a process called autoresonance, illustrated in \cref{fig:homogeneousExpectation}.
Specifically, interactions drive the short axion (with the smaller decay constant) to dynamically
adjust its oscillation amplitude to a fixed value in order to match its frequency to the long axion's,
as evident in the lower panel of \cref{fig:homogeneousExpectation}.
Consequently, the short axion energy density does not dilute like cold matter but instead remains fixed
(as in the top panel of \cref{fig:homogeneousExpectation}) by siphoning energy from the long axion.
If the fields remain spatially homogeneous, this energy transfer runs until backreaction disrupts the precise phase locking of the two fields.
Autoresonance then ends when
$\bar{\rho}_S / \bar{\rho}_L \simeq 2\calF^2(1-\mu)^2 $ for $\calF^2 \gg (1-\mu^2)^{-1}$,
representing a near-complete transfer of the available energy density to the short axion.
In other words, when autoresonance runs to completion, the energy density at late times in the dark
sector is virtually entirely in the short axion---an outcome opposite to what one would
expect from free evolution.
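The quoted end-of-autoresonance condition is easy to evaluate numerically; this small sketch (function name ours) checks the homogeneous-analysis estimate for the fiducial parameters used later in the paper:

```python
def autoresonance_end_ratio(mu, F):
    # Homogeneous-analysis estimate of rho_S / rho_L at the end of
    # autoresonance, valid for F**2 >> 1 / (1 - mu**2); see the text.
    return 2 * F**2 * (1 - mu)**2

# For mu = 0.75 and F = 10 the ratio is 2 * 100 * 0.0625 = 12.5: the
# short axion ends up carrying most of the sector's energy density.
print(autoresonance_end_ratio(0.75, 10))  # 12.5
```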
The boost to the late-time energy density of the short axion relative to the scenario of independent axions is of great importance for direct detection experiments.
Laboratory haloscopes probe the couplings of axion DM to SM states, which are typically higher-dimension operators suppressed by the axion decay constant $f_a$.
For example, axions are expected to couple to SM photons via an interaction of the form:
\begin{equation}
\mathcal{L} \supset - \frac{\gagg}{4} \phi F_{\mu \nu} \tilde{F}^{\mu \nu}
\end{equation}
where $\gagg \simeq \alpha_\text{QED} / 2 \pi f_a$ is the axion-photon coupling~\cite{Marsh:2015xka,ParticleDataGroup:2020ssz}.
As discussed in Ref.~\cite{Cyncynates:2021xzw}, when all axions evolve independently from $\mathcal O(1)$ initial misalignment angles, the final energy density $\rho_{a,0}$ of each axion is proportional to $f_a^2$.
In this case, the signal strength $\rho_{a,0}g_{a\gamma\gamma}^2$ is roughly independent of $f_a$;
as such, at a given mass any axion produced by the standard misalignment mechanism would be similarly
hard to see.
In a scenario with friendship, however, the boosted late-time energy density of the short axion is $\bar\rho_{S,0} \propto f_L^2$ when autoresonance completes, while its coupling to the SM photon remains $g_{S\gamma\gamma} \simeq \alpha / 2 \pi f_S$.
Thus the signal strength $\bar \rho_{S,0} g_{S\gamma\gamma}^2$ of the short axion is enhanced by $\calF^2$, making it much more accessible to axion haloscopes.
The effect, however, would be reversed for the long axion: its energy density is suppressed by
$\sim \calF^2$ compared to standard misalignment scenarios.
In this picture, seeing \textit{both} friendly axions would therefore be difficult.
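The scaling argument above can be summarized in a few lines of Python (a toy comparison in arbitrary units; the function and constant names are ours):

```python
from math import pi

ALPHA = 1 / 137.036  # fine-structure constant

def signal_strength(rho, f_a):
    # Haloscope figure of merit rho * g^2, with g ~ alpha / (2 pi f_a).
    g = ALPHA / (2 * pi * f_a)
    return rho * g**2

# Standard misalignment gives rho ~ f_a^2, so rho * g^2 is roughly
# f_a-independent.  Friendship instead endows the short axion with
# rho ~ (F f)^2 while its coupling still scales as 1/f, boosting the
# signal by F^2.
F, f = 10.0, 1.0
boost = signal_strength((F * f)**2, f) / signal_strength(f**2, f)
print(boost)  # = F**2 = 100
```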
The description of autoresonance given so far assumes the fields remain approximately spatially
homogeneous, but large spatial fluctuations in the axions can prevent the completion of the energy
transfer.
The coherent oscillations of the short axion induce a time-dependent effective mass that
resonantly amplifies fluctuations of the short axion, much like that which characterizes preheating
after inflation~\cite{Traschen:1990sw, Kofman:1994rk, Kofman:1997yn}
(see Refs.~\cite{Bassett:2005xm, Allahverdi:2010xz, Amin:2014eta, Lozanov:2019jxc} for reviews) and large misalignment~\cite{Arvanitaki:2019rax}.
Large-amplitude fluctuations of the short axion can collapse under attractive self-interactions
into oscillons---finite-lifetime, nontopological bound structures with densities of $\mathcal{O}(m^2
f^2)$ and radii of $\mathcal{O}(1/m)$.
Such oscillons explore large field values for the short axion and thus continue to experience large
interactions, but, being nonperturbative objects, are difficult to treat analytically.
Ref.~\cite{Cyncynates:2021xzw} presented preliminary evidence that oscillon nucleation occurs for
$\calF \gtrsim 6$ and that oscillons quench autoresonance if they form early enough, setting a limit on the energy density transfer for $\calF\gtrsim 20$.
The remainder of this paper investigates the impact of the nonlinear dynamics of autoresonance and
oscillon formation on the predictions of friendly axion scenarios through the use of $3+1$ dimensional
numerical simulations.
\section{Results} \label{sec:results}
We now present numerical solutions for the fully nonlinear, friendly axion system.
We implement numerical simulations of the axions' classical equations of motions with
\textsf{pystella}~\cite{Adshead:2019lbr,Adshead:2019igv,pystella}, discretizing these equations onto
a 3D, periodic, regularly spaced grid, computing spatial derivatives via fourth-order centered
differencing, and utilizing a fourth-order Runge-Kutta method for time integration.
Further details are provided in \cref{app:numerical-details}.
Except where otherwise stated, all results use grids with $N^3 = 1024^3$ points, a comoving side
length $L = 1.5 \, \pi / m$ and conformal timestep $\Delta \tau = \Delta x / 10 = L / 10 N$.
The simulations begin with a numerical solution to the linearized system of equations starting
at a time when the Hubble rate $H \ll m$ (see \cref{app:numerical-details} for further details).
The 3D evolution begins when $H = m$, corresponding to a conformal time $m \tau_m = 1$
and cosmic time $m t_m = 1 / 2$.
The scale factor is normalized relative to $a_m \equiv a(t_m)$.
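The discretization just described can be sketched as follows. This is not the \textsf{pystella} implementation itself, only an illustration of the stated grid parameters and the fourth-order centered stencil (we use a reduced $N$ here; production runs use $N = 1024$):

```python
import numpy as np

# Grid parameters quoted in the text (reduced N for illustration).
m = 1.0                      # axion mass fixes the time/length units
N = 64
L = 1.5 * np.pi / m          # comoving box side length
dx = L / N                   # grid spacing
dtau = dx / 10               # conformal timestep, Delta tau = Delta x / 10

def d_dx(field):
    # Fourth-order centered first derivative on a periodic grid:
    # [f(i-2) - 8 f(i-1) + 8 f(i+1) - f(i+2)] / (12 dx).
    return (8 * (np.roll(field, -1) - np.roll(field, 1))
            - (np.roll(field, -2) - np.roll(field, 2))) / (12 * dx)

# Accuracy check on a well-resolved mode: d/dx sin(kx) = k cos(kx).
k = 2 * np.pi / L
x = np.arange(N) * dx
err = np.max(np.abs(d_dx(np.sin(k * x)) - k * np.cos(k * x)))
print(err)  # truncation error, O((k dx)^4)
```

The same stencil applies along each axis of the 3D grid; time integration then advances the fields with a fourth-order Runge-Kutta step of size $\Delta\tau$.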
Of the free parameters in the model, the decay constant ratio $\calF$ has the strongest effect
on the dynamics.
The mass ratio and initial misalignments mainly determine whether or not autoresonance occurs at
all, whereas the decay constant ratio determines the size of nonlinear backreaction and even whether
fluctuations are sizeably enhanced at all.
Therefore, for most simulations we pick fiducial values $\mu = 0.75$, $\theta_{L}(0, \mathbf{x}) = 0.8 \, \pi$, and
$\theta_{S}(0, \mathbf{x}) = 0$, and run simulations for varying values of $\calF$.\footnote{
So long as we choose $\theta_L(0, \mathbf{x})$ large enough that the axions experience autoresonance, the initial misalignment angles are essentially inconsequential~\cite{Cyncynates:2021xzw}.
On the other hand, the choice of the relatively detuned mass ratio $\mu = 0.75$ is made to reduce the runtime of the simulations, as a smaller $\mu$ causes perturbations to grow faster (see Appendix~C of Ref.~\cite{Cyncynates:2021xzw}) and shortens the oscillon lifetime
(explained in \cref{sec:drivenOscillons} below).
}
For $\calF \lesssim 6$, spatial perturbations do not grow large enough to form oscillons and the
results of the simulations are described completely by Ref.~\cite{Cyncynates:2021xzw}.
For larger $\calF$, fluctuations of $\phi_S$ indeed collapse into oscillons as anticipated by
Ref.~\cite{Cyncynates:2021xzw}.
We present a broad overview of the dynamics of oscillon formation in \cref{fig:delta-S-over-time},
plotting two dimensional projections of the energy density in the short axion at various times over
the course of a simulation.\footnote{
To be explicit, we display the energy density projected (averaged) along one axis of the
simulation volume, e.g.,
\begin{align}\label{eqn:def-projected-density-contrast}
\rho_{S}(x, y)
&= \frac{1}{\left\langle \rho_S(x, y, z) \right\rangle}
\frac{1}{L} \int_{0}^{L} \mathrm{d} z \, \rho_{S}(x, y, z).
\end{align}
Such a projected quantity presents more information about the full volume than a single two
dimensional slice but also underestimates the magnitude of overdensities (since, e.g.,
any given oscillon occupies only a small fraction of space along the $z$ axis).
}
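The projection of \cref{eqn:def-projected-density-contrast} is straightforward to implement; a minimal sketch (function name ours) also illustrates the dilution of overdensities mentioned in the footnote:

```python
import numpy as np

def projected_density(rho):
    # Normalized projection along the z axis, per the footnote's
    # definition: average over the line of sight, divided by the
    # volume mean.
    return rho.mean(axis=2) / rho.mean()

# Toy illustration: a single compact overdensity (an "oscillon") is
# diluted by projection, since it occupies only a small fraction of
# the z extent.
rho = np.ones((32, 32, 32))
rho[16, 16, 16] = 1000.0
proj = projected_density(rho)
print(proj.max(), rho.max() / rho.mean())  # projected peak << 3D peak
```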
The field begins in a nearly homogeneous state in the first panel, in which the initial adiabatic
fluctuations are too small to be seen.
The second and third panels depict the linear enhancement of fluctuations by parametric resonance,
visible as the amplification of local overdensities.
Fluctuations become nonlinear at a time $m t_\mathrm{nl} \sim 100$, resulting in large overdensities
that quickly collapse under attractive self-interactions into the oscillons apparent in the fifth
panel.
These oscillons radiate energy and begin to dissipate one by one around $mt \gtrsim 2000$.
Eventually, no bound objects remain and nonlinear interactions cease to dominate the dynamics,
although significant density fluctuations remain.
The interplay between the persistence of homogeneous autoresonance and the onset of nonlinearity
has important consequences for the final distribution of energy between the two axions.
We discuss these dynamics in \cref{sec:energyDensity}, comparing to the results of
Ref.~\cite{Cyncynates:2021xzw}.
In \cref{sec:gravitationalWaves} we compute the gravitational wave production from friendly axions,
finding possible signatures for hyperlight subcomponents in the CMB $B$-mode polarization.
Finally, in \cref{sec:drivenOscillons} we demonstrate that the oscillons that form continue to
experience autoresonance long after the spatially averaged fields cease to resonate, and we discuss the
implications for oscillon lifetimes.
\subsection{Evolution of energy densities}
\label{sec:energyDensity}
Having established the importance of nonlinear dynamics for a large portion of parameter space, we
now investigate how nonlinear density fluctuations impact the final distribution of energy between
the two axions (and, as a consequence, their relic abundances today).
We first study the evolution of each axion's energy density in \cref{fig:rho-evolution} for three
representative values of $\calF$, comparing the result of simulations to that of a homogeneous
analysis.
To avoid ambiguities in the final partition of energy densities we work in the mass basis
\begin{subequations}\label{eqn:mass-basis-def}
\begin{align}
\nu_h
&\equiv \phi_S\cos\eta + \phi_L\sin\eta, \\
\nu_l
&\equiv -\phi_S\sin\eta + \phi_L\cos\eta, \\
\cos2\eta
&\equiv \frac{1 - \mu^2 - \calF^{-2}}{\sqrt{4 \calF^{-2} + (1 - \mu^2 - \calF^{-2})^2}},
\end{align}
\end{subequations}
where the heavy state $\nu_h$ is composed mostly of the short axion, and the light state $\nu_l$ is composed mostly of the long axion in the limit $\calF\gg 1$ (see Appendix~A of Ref.~\cite{Cyncynates:2021xzw} for a complete discussion).
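As a sketch (function names ours), the rotation to the mass basis can be written as follows; we normalize the mixing angle so that $\lvert \cos 2\eta \rvert \le 1$:

```python
import numpy as np

def mixing_angle(mu, F):
    # Mixing angle eta between the interaction and mass bases; the
    # square-root normalization keeps |cos(2 eta)| <= 1.
    d = 1 - mu**2 - F**-2
    return 0.5 * np.arccos(d / np.sqrt(4 * F**-2 + d**2))

def to_mass_basis(phi_s, phi_l, mu, F):
    eta = mixing_angle(mu, F)
    nu_h = phi_s * np.cos(eta) + phi_l * np.sin(eta)
    nu_l = -phi_s * np.sin(eta) + phi_l * np.cos(eta)
    return nu_h, nu_l

# For F >> 1 the rotation is small, so the heavy state nu_h is mostly
# the short axion and the light state nu_l mostly the long axion.
nu_h, nu_l = to_mass_basis(1.0, 0.0, mu=0.75, F=10)
print(nu_h, nu_l)  # nu_h close to 1, nu_l subdominant
```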
Each panel exhibits an initial phase of homogeneous, autoresonant energy transfer and the onset of
nonlinearity that quenches autoresonance, at which point the energy density departs from the trend
of the homogeneous result.
From analytic estimates of growth rate for the fastest growing mode~\cite{Cyncynates:2021xzw}, nonlinearity occurs at approximately\footnote{This result accounts for both Hubble friction and the slight decay of the initial metric perturbation before the fastest-growing mode starts growing (see Eqs.~C17 and C18 in Ref.~\cite{Cyncynates:2021xzw} and the surrounding discussion, fixing $\delta\omega = \mu - 1$).}
\begin{align}\label{eqn:tnl-approximation}
m t_\mathrm{nl}
&\approx 17.6 \frac{1 - 0.1 \log(1 - \mu)}{1 - \mu},
\end{align}
in good agreement with $m t_\mathrm{nl} \approx 80$ observed in \cref{fig:rho-evolution}.
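Evaluating \cref{eqn:tnl-approximation} numerically (function name ours) confirms the quoted agreement and shows how nonlinearity is delayed as the masses approach each other:

```python
from math import log

def m_t_nl(mu):
    # Estimated onset of nonlinearity, in units of 1/m, from the
    # analytic growth rate of the fastest-growing mode (natural log).
    return 17.6 * (1 - 0.1 * log(1 - mu)) / (1 - mu)

# The fiducial mass ratio mu = 0.75 gives m t_nl ~ 80, in agreement
# with the simulations; closer masses (mu -> 1) delay nonlinearity.
print(m_t_nl(0.75), m_t_nl(0.9))  # ~ 80, larger for mu = 0.9
```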
The ultimate partitioning of energy depends primarily on the precise timing of nonlinearity and
oscillon formation relative to the (would-be) completion of autoresonance, a point which we detail
below.
We now describe these two regimes of $\calF$ in detail.\footnote{
The precise $\calF$ where these regimes meet depends on the exact value of $t_\mathrm{nl}$,
which varies with $\mu$ via \cref{eqn:tnl-approximation}.
\label{footnote:tnl-mu-dependence}
}
For $6 \lesssim \calF \lesssim 20$, oscillons nucleate \textit{after} the short axion's energy
density first exceeds the long axion's.
At roughly the same time, autoresonance ends and $\bar{\rho}_h$ ceases to be roughly constant, instead
decaying approximately as $a^{-3}$ like nonrelativistic matter.
Contrary to the homogeneous analysis of Ref.~\cite{Cyncynates:2021xzw}, however, we observe in this
range that nonperturbative dynamics in fact enable energy transfer from the short axion back to the long axion,
resulting in late-time \mbox{(near-)equilibration} of the two axion energy densities.
This phenomenon is most evident in the top panel of \cref{fig:rho-evolution} ($\calF = 10$),
where the heavy and light axions' energy densities asymptote toward a common value.
Interactions between the two axions are strongest where the field values are largest, suggesting
that oscillons play a key role in reversing energy transfer.
Inside an oscillon, the field oscillates at a frequency $\omega < m$ due to its binding energy.
Since the long axion's natural frequency is $\mu m < m$, an oscillon can provide a locus for more
efficient energy transfer from the short axion back to the long axion.\footnote{
In fact, during autoresonance the short axion is driven at exactly the frequency $\mu m$.
When fluctuations grow nonperturbative the oscillon frequencies will thus remain close to $\mu m$.
}
Indeed, for most decay constant ratios $6 \lesssim \mathcal{F} \lesssim 20$, the end of autoresonance
and formation of oscillons is associated with a substantial transfer of energy to the light axion.
For $6 \lesssim \calF \lesssim 10$, the final stage of energy transfer to the light axion occurs
in discrete jumps that appear to coincide with the death of individual oscillons.
In all cases, we observe that most of the radiation from the heavy axion into the light axion
is into semirelativistic modes, as one would expect if oscillons are responsible for equilibration.
However, at larger $\calF$ equipartition is nearly achieved by the time oscillons form anyway;
the subsequent evolution is more continuous, obfuscating any association between oscillon death
and energy transfer.
While nonlinear effects are evidently crucial, identifying the specific mechanism for energy flow in general
is challenging.
The middle panel with $\calF = 20$ represents the marginal case where oscillons form at nearly
the exact time that the heavy axion's energy density first reaches that of the light axion.
For larger values $\calF \gtrsim 20$, $t_\mathrm{nl}$ and oscillon formation occur before the heavy
axion dominates the sector's energy density, terminating autoresonant energy transfer to the heavy
axion.
As shown in the bottom panel of \cref{fig:rho-evolution} ($\calF = 40$), the energy density in both
axions then decays as approximately $a^{-3}$.
In this case the backreaction effects at play for smaller $\calF$ are too suppressed to enable
substantial energy transfer by the oscillons.
The trends for yet larger $\calF$ are qualitatively similar: parametric resonance proceeds at the
same rate and oscillons form at a similar time.
The final ratio of energy densities $\bar{\rho}_h / \bar{\rho}_l$ thus receives a constant boost due
to the period of autoresonance but still decreases as $1 / \calF^2$.
Having discussed the dynamics that control the distribution of energy between the two axions,
we now summarize the full $\calF$-dependence of the late-time energy fractions,
\begin{align}\label{eqn:energy-partition-def}
\Xi_{I}
&\equiv \left. \frac{\bar{\rho}_I}{\bar{\rho}_h + \bar{\rho}_l} \right\vert_\text{late time},
\end{align}
where $I = h, l$. The final partitioning changes qualitatively at a critical decay constant ratio $\calF_\star$
for which nonlinearities become important (at $t_\mathrm{nl}$) just as the heavy axion's energy
density first matches the light one's (via autoresonant energy transfer).
From our simulations we find $\calF_\star \approx 20$ for $\mu = 0.75$; this value depends on
the mass ratio in the same manner as $t_\mathrm{nl}$ (cf.\ \cref{eqn:tnl-approximation}).
This timing separates two distinct regimes: one of near-equilibration due to nonlinear effects
at $\calF < \calF_\star$ and a $1/\calF^2$ suppression of the heavy-axion abundance via the early end of
autoresonance at larger $\calF$.
Both regimes are well captured by
\begin{subequations}\label{eqn:energy-partition-estimate}
\begin{align}
\label{eq:frach}
\Xi_h
&\sim
\begin{dcases}
\frac{1}{2}
& 6 \lesssim \calF \lesssim \calF_\star \\
\frac{1}{1 + 1.3 (\mathcal{F} / \mathcal{F}_\star)^2}
\hphantom{1 - }
& \calF \gtrsim \calF_\star
\end{dcases} \\
\label{eq:fracl}
\Xi_l
&\sim
\begin{dcases}
\frac{1}{2}
& 6 \lesssim \calF \lesssim \calF_\star \\
1 - \frac{1}{1 + 1.3 (\mathcal{F} / \mathcal{F}_\star)^2}
& \calF \gtrsim \calF_\star
\end{dcases},
\end{align}
\end{subequations}
including an empirical factor of $1.3$ that best fits the results from simulations.
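The piecewise fit of \cref{eqn:energy-partition-estimate} can be encoded directly (function names ours; $\calF_\star = 20$ is the measured value for $\mu = 0.75$):

```python
def xi_h(F, F_star=20.0):
    # Late-time fraction of the sector's energy in the heavy axion,
    # per the empirical fit in the text; valid for F >~ 6.
    return 0.5 if F <= F_star else 1 / (1 + 1.3 * (F / F_star)**2)

def xi_l(F, F_star=20.0):
    return 1 - xi_h(F, F_star) if F > F_star else 0.5

# Near-equipartition below F_star; ~1/F^2 suppression of the heavy
# axion's share above it.
print(xi_h(10.0), xi_h(40.0))
```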
We display the corresponding quantities computed directly from simulations in
\cref{fig:final-rho-vs-F}.
For $\mathcal{F} \lesssim 6$, the energy densities indeed match those predicted by the
homogeneous theory, which are computed in full in Ref.~\cite{Cyncynates:2021xzw}.
For $6 \lesssim \calF \lesssim 20$, the energy density of the light axion, instead of being entirely
depleted, remains within a factor of 1 to 4 of the heavy axion's, with a precise $\calF$
dependence beyond the sophistication of \cref{eqn:energy-partition-estimate}.
At larger $\calF \gtrsim 20$, the $1 / \calF^2$ scaling takes over, parametrically suppressing
the heavy axion's abundance relative to the light axion's.
Nonetheless, in this range the heavy axion carries approximately $\calF_\star^2/1.3 \sim 310$ times more energy
density than it would have had in the absence of autoresonance.
The light axion's enhanced abundance has important consequences for direct detection experiments.
Although the near-even partitioning of energy for $6 \lesssim \calF \lesssim \calF_\star$ implies
that the heavy axion is slightly harder to detect than predicted by Ref.~\cite{Cyncynates:2021xzw},
it also implies that the light axion requires only twice the experimental sensitivity it would for
an independent axion of that mass.
This is in contrast to the homogeneous expectation, which would be that the light axion requires
$\calF^2$ times the experimental sensitivity.
The nonperturbative equalization of energy density in the sector thus serves to make the sector
\emph{as a whole} more visible to direct detection experiments.
Observing two axions with similar masses---one substantially more visible than expectations for
single-axion misalignment, the other comparably so---is a unique signature of friendly dynamics of
the form described here.
For the mass range $10^{-10} \eV \lesssim m \lesssim 10^{-3} \eV$, many near-future experimental
efforts will probe relevant parameter space (see, e.g.,
Refs.~\cite{Brouwer:2022bwo,Alesini:2017ifp,Stern:2016bbw,BRASS,Lasenby:2019prg,Berlin:2019ahk,Berlin:2020vrk,DMRadio:2022pkf,Beurthey:2020yuq,McAllister:2017lkb,Nagano:2019rbw,Liu:2018icu}).
To close, we connect the partitioning of \cref{eqn:energy-partition-estimate} to present-day
abundances by estimating the net present-day energy density in the sector.
The energy density at horizon crossing is dominated by the light axion, i.e., $\rhotot ( t_m ) \sim
\mu^2 m^2 \calF^2 f^2 \Theta_{L,0}^2$, which subsequently redshifts as $a^{-3}$.\footnote{
The $a^{-3}$ redshifting assumes that the axions are always non-interacting and nonrelativistic.
In reality, at early times the axion interactions during autoresonance cause $\rhotot$ to redshift slightly slower than $a^{-3}$, and at later times oscillons radiate mildly relativistic axions such that $\rhotot$ redshifts slightly faster
than $a^{-3}$ (until all axions become nonrelativistic).
Together, these effects amount to only an $\mathcal{O}(1)$ factor which we neglect for simplicity.
}
Combined with \cref{eqn:energy-partition-estimate}, the present-day abundance of each axion is
\begin{align}\label{eqn:final-relic-abundance}
\begin{split}
\frac{\Omega_{I, 0}}{0.13}
&\approx
\Xi_I
\left(
\frac{m}{10^{-19} \, \mathrm{eV}}
\right)^{1/2}
\left(
\frac{\mathcal{F} f}{10^{16} \, \mathrm{GeV}}
\right)^2
\left( \mu \Theta_{L, 0} \right)^2,
\end{split}
\end{align}
with $\Xi_I$ set by \cref{eqn:energy-partition-estimate}.
Factors accounting for the thermal history of the SM (i.e., the number of effective relativistic
degrees of freedom in the SM entropy and energy density) change the above result by only an
$\mathcal{O}(1)$ factor over the mass range of interest and are omitted for simplicity.
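The scaling of \cref{eqn:final-relic-abundance} can be evaluated directly; this sketch (function name ours) neglects the $g_*$ factors, exactly as stated in the text:

```python
from math import pi

def omega_0(Xi, m_eV, Ff_GeV, mu, Theta_L0):
    # Present-day density parameter of one mass eigenstate, following
    # the scaling relation above (SM g_* factors neglected, as in the
    # text; Xi is the late-time energy fraction of that eigenstate).
    return 0.13 * Xi * (m_eV / 1e-19)**0.5 \
        * (Ff_GeV / 1e16)**2 * (mu * Theta_L0)**2

# Fiducial simulation parameters with an even split (Xi = 1/2) of the
# sector's energy between the two axions:
print(omega_0(0.5, 1e-19, 1e16, 0.75, 0.8 * pi))
```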
\subsection{Gravitational waves}
\label{sec:gravitationalWaves}
Rapidly growing fluctuations during and after autoresonance and the resulting oscillons can
both source gravitational waves, again much like the parametric resonance and oscillon formation
that can occur during preheating~\cite{Khlebnikov:1997di, Easther:2006gt, Easther:2006vd,
Easther:2007vj, Garcia-Bellido:2007nns, Dufaux:2007pt, Zhou:2013tsa, Lozanov:2019ylm, Amin:2018xfe,
Hiramatsu:2020obh} and single-axion misalignment~\cite{Arvanitaki:2019rax}.
In this section we compute the signal strength generated by friendly axions and discuss the
corresponding constraining power of existing and future observations.
\Cref{app:gravitational-waves} briefly reviews stochastic gravitational wave backgrounds and
the transfer functions required to relate their spectra at emission to that at the present day.
We begin by estimating the scaling of the peak amplitude of the gravitational wave spectrum with the
short-axion mass $m$ and long axion's decay constant $\mathcal{F} f$.
Gravitational waves are sourced by the anisotropic part of the stress tensor (via
\cref{eqn:gw-eom}), whose components scale like the energy density of the source.
The time (relative to $t_m$) and wavenumber (relative to $m$) of peak emission vary weakly with
model parameters (and are entirely independent of $f$).
The spectral abundance of gravitational waves (\cref{eqn:omega-gw-spectrum-ito-power-spectrum})
therefore scales with two powers of the fractional energy density of the source at the time of
emission, $\bar{\rho}_\mathrm{source}(t) / \bar{\rho}(t)$.
The short axion is the dominant source of anisotropic stress, which we expect to peak near the time
when the system becomes nonlinear, $t_\mathrm{nl}$, when axion gradients are largest and power is
scattered to smaller scales.
To proceed, we follow the model-independent heuristics of Refs.~\cite{Giblin:2014gra, Amin:2014eta}.
Approximating the gravitational wave source as a Gaussian peaking at
momentum $k_\star$ with width $\sigma$, one may estimate the peak amplitude to
be~\cite{Giblin:2014gra}:
\begin{align}\label{eqn:rule-of-thumb-1}
\Omega_{\mathrm{GW}}(k_\star)
&= \frac{27 \gamma^2 \nu^2}{\sqrt{\pi}}
\frac{k_\star}{\sigma}
\left( \frac{a H_p}{k_\star} \right)^2,
\end{align}
at the time the source is maximized.
Here $\gamma$ is the fraction of the Universe's energy contained in the source at the time of the
process, $\nu$ measures how anisotropic the source is, and $H_p$ is the Hubble parameter at the
time of the process.\footnote{
Note that our $\nu^2$ corresponds to $\beta w^2$ in terms of the parameters of
Ref.~\cite{Giblin:2014gra}.
}
The peak wavenumber $k_\star$ and width $\sigma$ are straightforward to approximate (or read off of
simulation results), but the anisotropy coefficient $\nu$ is harder to estimate;
Ref.~\cite{Giblin:2014gra} motivates $\nu \sim 10^{-2}$ to $10^{-1}$ for typical processes.
Evaluating \cref{eqn:rule-of-thumb-1} at $t_\mathrm{nl}$ and plugging in
$a H = 1 / \sqrt{2 t_\mathrm{nl}}$,
\begin{align}\label{eqn:rule-of-thumb-2}
\Omega_{\mathrm{GW}}(k_\star)
&\approx
\frac{27 \nu^2}{\sqrt{\pi}}
\frac{k_\star}{\sigma}
\left( \frac{m}{k_\star} \right)^{2}
\frac{\left( \mu \mathcal{F} \Theta_{L, 0} f / \Mpl \right)^4
}{
\left[ 1 + 1.3 \left( \mathcal{F} / \mathcal{F}_\star \right)^2 \right]^2
},
\end{align}
where we have used that the energy density in the short axion at $\tnl$ is approximately
$1 / [1 + 1.3 (\calF / \calF_\star)^2]$
of the total axion energy density, and the total axion energy density is
given by $\rhotot(t_m) (t_m / \tnl )^{3/2}$.
Notice that the suppression from how far inside the horizon the peak is (the factor of
$[a H_p / k_\star]^2$ in \cref{eqn:rule-of-thumb-1}) is exactly compensated by the growth in time
of $\gamma$ (since the homogeneous energy available to source the short axion redshifts with one
fewer power of the scale factor than the SM radiation).
We therefore expect gravitational wave signals from friendly axions to be only weakly sensitive to
the time of nonlinearity $t_\mathrm{nl}$.
By comparing \cref{eqn:rule-of-thumb-2} with the relic abundance (\cref{eqn:final-relic-abundance})
we may estimate the peak of the gravitational wave signal as a function of the mass $m$
and relic abundance of the heavy axion $\Omega_h(t_0)$:
\begin{align}
\begin{split}
\Omega_{\mathrm{GW}, 0} h^2
&=
\frac{27 \nu^2}{\sqrt{\pi}}
\frac{k_\star}{\sigma}
\left( \frac{m}{k_\star} \right)^{2}
\left[ \Omega_{\mathrm{rad}}(t_0) h^2 \right]^{-1/2}
\\ &\hphantom{ {}={} }
\times
\left[ \Omega_h(t_0) h^2 \right]^2
\left(
\frac{m}{H_{100}}
\right)^{-1}
\end{split}
\end{align}
where we have dropped factors of the relativistic degrees of freedom, which reduce the amplitude by
at most a factor of two at early enough times $t_m$ such that all SM species are in thermal equilibrium.
Here, $H_{100} = 100 \, \mathrm{km} / \mathrm{s} / \mathrm{Mpc} = 2.13 \times 10^{-33} \, \mathrm{eV}$ and $h = H_0/H_{100}$.
From our simulations we observe that $k_\star / m \approx 9$ and $\sigma \approx k_\star / 3$.
Taking $\nu = 1/20$ (for which the estimates agree well with the simulation results)
and considering
the regime $\mathcal{F} \lesssim 20$ for which the signal is not suppressed, the peak amplitude is
\begin{align}\label{eqn:omega-gw-scaling}
\Omega_{\mathrm{GW}, 0}(k_\star) h^2
&\approx 10^{-15}
\left(
\frac{\Omega_S(t_0) h^2}{0.06}
\right)^{2}
\left(
\frac{m}{10^{-21} \, \mathrm{eV}}
\right)^{-1},
\end{align}
with the peak located at a present-day frequency of
\begin{align}\label{eqn:gw-frequency-scaling}
\frac{f_{\mathrm{GW}, \star}}{\mathrm{Hz}}
&\approx 2.8 \times 10^{-14}
\frac{k_\star}{m}
\left(
\frac{m}{10^{-21} \, \mathrm{eV}}
\right)^{1/2}.
\end{align}
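The scalings of \cref{eqn:omega-gw-scaling,eqn:gw-frequency-scaling} can be packaged into a short numerical sketch (illustrative only; the function and argument names are ours):

```python
# Peak amplitude and present-day frequency of the friendly-axion GW signal,
# per the scalings quoted in the text (k_*/m ~ 9 from our simulations).

def omega_gw_peak(omega_s_h2, m_eV):
    """Peak Omega_GW,0 h^2 for short-axion abundance omega_s_h2 and mass m_eV."""
    return 1e-15 * (omega_s_h2 / 0.06) ** 2 * (1e-21 / m_eV)

def f_gw_peak(m_eV, k_star_over_m=9.0):
    """Present-day peak frequency in Hz."""
    return 2.8e-14 * k_star_over_m * (m_eV / 1e-21) ** 0.5

# Lowering the mass raises the amplitude but also lowers the frequency:
for m in (1e-21, 1e-19):
    print(m, omega_gw_peak(0.06, m), f_gw_peak(m))
```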
The amplitude estimate \cref{eqn:omega-gw-scaling} agrees quite well---within a factor of a few---with
the spectra from simulations when evaluated at $t_\mathrm{nl}$.
However, the spectra evaluated at the end of the simulation (after all gravitational wave
production has concluded) are about an order of magnitude larger, owing to effects not captured by
these simple estimates, such as the continued evolution of the source after $t_\mathrm{nl}$.
From \cref{eqn:omega-gw-scaling,eqn:gw-frequency-scaling} we see that adjusting the mass
$m$ to change the peak frequency by some factor modulates the gravitational wave power spectrum
by two powers of that factor.
Signals that could be visible at pulsar timing arrays (with frequencies of order
$10^{-9} \, \mathrm{Hz}$) could therefore not exceed amplitudes of $10^{-23}$
(about eight orders of magnitude below the projected sensitivity of the Square Kilometer Array~\cite{Janssen:2014dka,Schmitz:2020syl})
without requiring an axion abundance that would overclose the Universe.
This is an unfortunate consequence of the source both being short-lived and
redshifting more slowly than the SM plasma from the time of gravitational wave production
to the present day.\footnote{
Despite the dearth of direct gravitational-wave probes at frequencies between $10^{-15}$
and $10^{-9}$ Hz, two \textit{indirect} constraints apply around this range.
Precision CMB measurements of the energy density in radiation in the Universe provide an upper
bound on the present-day gravitational wave spectrum (gravitational waves themselves contributing to
expansion like radiation)~\cite{Maggiore:1999vm, Dvorkin:2022jyg}, currently of order
$\Omega_{\mathrm{GW}, 0} h^2 \sim 10^{-6}$~\cite{Planck:2018vyg,Pagano:2015hma}.
In addition, recent work has argued that spectral distortions of the CMB blackbody also probe
gravitational waves in this frequency range~\cite{Kite:2020uix}.
While far-future experiments could provide tighter constraints than
$N_\mathrm{eff}$ measurements, they are still unlikely to be useful probes of friendly axions.
}
Note that the stochastic background from single-field inflation, if detectable by future CMB
experiments, is nearly scale invariant and of order $10^{-16}$~\cite{Smith:2005mm,Caprini:2018mtu},
far larger than any background possible from friendly axion DM.
On the other hand, existing measurements of the $B$-mode polarization of the CMB already constrain
gravitational waves at frequencies between $10^{-18}$ and $10^{-16}$ Hz, with a most stringent upper
limit of $\Omega_{\mathrm{GW}, 0} h^2 \sim 10^{-16}$ at
$f_\mathrm{GW} \sim 10^{-17} \, \mathrm{Hz}$~\cite{Clarke:2020bil}.
Importantly, the polarization is sourced almost exclusively at recombination, when the photon visibility
function spikes.
As a result, CMB constraints are only relevant for scenarios where gravitational waves are sourced
before this time, i.e., when the Hubble scale is
$H \gtrsim H_\mathrm{rec} \sim 3 \times 10^{-29} \, \mathrm{eV}$~\cite{Planck:2018vyg}.
In the scenarios we consider here, the anisotropic stress maximizes after about ten field oscillations,
so the smallest relevant mass is of order $10^{-27} \, \mathrm{eV}$.
Consulting \cref{eqn:gw-frequency-scaling}, the peak of the signal at such a mass corresponds to a
frequency of order $2 \times 10^{-16} \, \mathrm{Hz}$.
Such hyperlight axion dark matter is well ruled out by fuzzy dark matter constraints,
but could make up some subcomponent of the total dark matter (depending on the mass)~\cite{Irsic:2017yje,Armengaud:2017nkf,Dentler:2021zij,Bozek:2014uqa,Hlozek:2014lca,Kobayashi:2017jcf,DES:2018zzu,Lague:2021frh,Hlozek:2017zzf,Dalal:2022rmp,Flitter:2022pzf}.
By the preceding argument, CMB measurements mainly probe the infrared tail of the gravitational wave
background from friendly axions.
Simulating a large enough volume to capture these modes while also resolving the nonlinear
dynamics at small scales would require orders of magnitude more computational resources than are
available to us.
Fortunately, on causal grounds the gravitational
wave spectrum on scales much larger than the relevant dynamical scales (i.e., $k / a \ll m$) follows a
simple power law, independent of the underlying dynamics~\cite{Hook:2020phx}.
We therefore extrapolate the signals computed in simulations as decaying with $f_\mathrm{GW}^3$ at smaller
frequencies, appropriate for infrared, ``causal'' modes generated inside the horizon and those generated
while superhorizon that reenter during the radiation-dominated era.
The lowest-frequency modes in the simulation do nearly follow an $f_\mathrm{GW}^3$ scaling, so we expect
this approximation to be at worst conservative.
Constraints were similarly derived on the infrared tail of gravitational waves from a model
of early dark energy in Ref.~\cite{Weiner:2020sxn}.
However, note that recombination occurs shortly after matter-radiation equality, and superhorizon
causal modes that reenter the horizon during the matter era instead grow with one
power of $f_\mathrm{GW}$.\footnote{
See Ref.~\cite{Hook:2020phx} for a thorough presentation of the imprint of the
expansion history and presence of free-streaming radiation on causal gravitational waves.
}
Since the CMB becomes increasingly less sensitive to $\Omega_\mathrm{GW}$ on scales larger than
the horizon at equality, we do not expect the break in the power law to improve constraints.
(On the other hand, the break would likely be detectable for causal gravitational wave
backgrounds that are themselves observable in the CMB.)
Our simulations themselves also do not account for the transition to the matter-dominated era,
instead assuming a radiation-dominated Universe;
we expect this to be entirely sufficient for our estimates here.
We investigate the possibility of CMB constraints in \cref{fig:gws-at-cmb}, which displays
the possible signals from friendly axion subcomponents with $\calF = 20$.
To illustrate the constraining power on the present abundance of friendly axions,
we consider masses varying from $10^{-27}$ to $10^{-20} \, \mathrm{eV}$, each
taking the fraction of dark matter in friendly axions to saturate limits on ultralight axions
from CMB and large scale structure data~\cite{Lague:2021frh}.
Because the CMB is highly sensitive to low-frequency tensor modes (in terms of their effective
energy density)~\cite{Clarke:2020bil}, the CMB can place tight constraints on the fraction
of dark matter in hyperlight friendly axions.
For $m = 10^{-27} \, \mathrm{eV}$, Planck and BICEP2/\textit{Keck}~\cite{Planck:2018jri,BICEP2:2018kqh}
allow only a $0.1 \%$ friendly subcomponent.\footnote{
Note that the most recent BICEP/\textit{Keck} data release further improved constraints
on the tensor-to-scalar ratio by a factor of $\sim 1.6$, and that CMB-S4 projects to
provide upwards of a further factor of $\sim 30$~\cite{CMB-S4:2020lpa}.
}
We may obtain a heuristic bound as a function of mass by extrapolating the $f_\mathrm{GW}^3$
tail to the frequency corresponding to the horizon size at matter-radiation equality,
$f_\mathrm{eq} = 1.54 \times 10^{-15} \, \mathrm{Hz}$, where Ref.~\cite{Clarke:2020bil} reports
an upper limit of $\Omega_{\mathrm{GW}, 0}^\mathrm{bound} h^2 = 3.2 \times 10^{-16}$.
Applying the scalings of \cref{eqn:omega-gw-scaling,eqn:gw-frequency-scaling}, the CMB probes
\begin{align}
\frac{\Omega_h + \Omega_l}{\Omega_\mathrm{DM}}
\lesssim 10^{-2}
\left( \frac{m}{5 \times 10^{-27} \, \mathrm{eV}} \right)^{5/4}
\sqrt{
\frac{\Omega_{\mathrm{GW}, 0}^\mathrm{bound}(f_\mathrm{eq}) h^2}{ 3.2 \times 10^{-16} }
},
\end{align}
which is only relevant for masses $m \gtrsim 10^{-27} \, \mathrm{eV}$ for which the signals are
produced before recombination.
With the bound of Ref.~\cite{Clarke:2020bil}, the limits become irrelevant (i.e., order unity)
for masses $m \gtrsim 10^{-25} \, \mathrm{eV}$, well below the present lower limits on the mass
of fuzzy dark matter.
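A hypothetical helper makes the mass dependence of this bound concrete (the function is our own; the pivot values and $m^{5/4}$ scaling are those of the displayed equation, capped at order unity):

```python
def friendly_fraction_bound(m_eV, bound_h2=3.2e-16):
    """CMB-derived bound on (Omega_h + Omega_l)/Omega_DM, capped at 1."""
    return min(1.0, 1e-2 * (m_eV / 5e-27) ** 1.25 * (bound_h2 / 3.2e-16) ** 0.5)

# At the pivot mass the bound is a 1% subcomponent; by ~2e-25 eV it is order unity,
# consistent with the limits becoming irrelevant above ~1e-25 eV.
print(friendly_fraction_bound(5e-27), friendly_fraction_bound(2e-25))
```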
\subsection{Driven oscillons}
\label{sec:drivenOscillons}
Beyond their important role in the nonperturbative dynamics of friendly axions, oscillons have long been a subject of interest~\cite{kudryavtsev1975solitonlike,Makhankov:1978rg,Kolb:1993hw,Salmi:2012ta,Amin:2011hj,Kawasaki:2019czd}.
As nontopological excitations, oscillons generically decay by radiating semirelativistic modes, making their lifetime an interesting object of study~\cite{Olle:2020qqy,Zhang:2020bec,Cyncynates:2021rtf}.
In models with friendly axions, the lifetime of an oscillon can be extended beyond its in-vacuum expectation because of energy transfer from the long axion~\cite{Cyncynates:2021xzw}.
In this section, we show that short axion oscillons are driven by the long axion via autoresonance, obeying equations analogous to those for the homogeneous fields.\footnote{
Other related nonlinear wave equations exhibit oscillons/solitons driven by autoresonance, e.g., Refs.~\cite{friedland2005excitation,friedland2006emergence,maslov2007breather,batalov2011control,karachalios2019excitation}.
}
We then provide analytic results to quantify the oscillon lifetime enhancement and numerical results to support our analysis.
Because of the importance of nonlinear dynamics, it is more convenient to discuss oscillons in the interaction basis of the short and long axion, which we adopt consistently throughout this section.
An oscillon is a quasi-periodic, quasi-localized excitation of the axion field, and its
fundamental frequency $\omega$ is smaller than the rest mass of the axion $m$.
After its initial formation, the binding energy per particle inside an isolated oscillon,
\begin{align}
E_\te{bind} = m - \omega,
\end{align}
is a decreasing function of time, and consequently the characteristic size of the oscillon
\begin{align}
r_\te{osc} \approx \f{\pi}{\sqrt{2 m E_\te{bind}}},
\end{align}
is an increasing function of time.
Over time its rate of radiation falls off approximately exponentially with this growing separation of scales~\cite{Cyncynates:2021rtf}:
\begin{align}
P_\te{rad}/f^2\sim \exp\ps{-\f{r_\te{osc}}{\lambda_\te{rad}}}\sim \exp\ps{-\f{3\omega}{\sqrt{2 m (m - \omega)}}},
\end{align}
where $\lambda_\te{rad} = \pi / 3\omega$ is the wavelength of the radiation produced by $3\to1$ processes, which dominate the decay for most $\omega$ in potentials with parity symmetry.
Thus as oscillons age and $\omega$ increases toward $m$, the oscillon radiates more and more slowly.
However, the oscillon is not infinitely long-lived: a geometrical consequence of living in three dimensions
is that an oscillon must die at a frequency $\omega_\te{crit} < m$ (see e.g., Refs.~\cite{Fodor:2009kf,Fodor:2019ftc,Cyncynates:2021rtf}).
An oscillon therefore typically spends the majority of its lifetime at a frequency close to $\omega_\te{crit}$.
The mechanism of radiation (which is discussed in detail in Refs.~\cite{Fodor:2019ftc,Zhang:2020bec,Cyncynates:2021rtf}) does not play an important role in our analysis, so we may simply characterize the decay rate of the oscillon by its instantaneous lifetime:
\begin{align}
T_\te{inst}(\omega)&= \f{E_\te{osc}(\omega)}{P_\te{rad}(\omega)}\equiv \f{1}{\Gamma_\te{inst}(\omega)},
\end{align}
where $E_\te{osc}$ is the total bound energy in the oscillon and $P_\te{rad}$ is the power radiated by the oscillon, all measured while the oscillon is at a frequency $\omega$.
[The precise values of $E_\mathrm{osc}(\omega)$ and $P_\mathrm{rad}(\omega)$ may be calculated using the software package \href{https://github.com/SimpleOscillon/Code}{\faGithub\hspace{0.1cm}\textsf{Simple Oscillon}}.]
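To illustrate how steeply the radiated power falls as $\omega \to m$, one can evaluate the exponential suppression factor directly (a sketch of the scaling only, with the model-dependent prefactor set to one):

```python
import math

def prad_suppression(omega, m=1.0):
    """Suppression of P_rad/f^2 ~ exp(-3 omega / sqrt(2 m (m - omega)));
    omega < m, in units of the axion mass. Prefactor omitted (illustrative)."""
    return math.exp(-3.0 * omega / math.sqrt(2.0 * m * (m - omega)))

# The oscillon radiates ever more slowly as its frequency approaches m:
for w in (0.8, 0.9, 0.95):
    print(w, prad_suppression(w))
```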
As shown in earlier sections, friendly axions admit oscillon solutions too, although only the short axion is likely to form them.
To study this scenario, we work in the limit of a homogeneous long axion, $\phi_L = \Phi_L(t)$, and model the short axion as a spherically symmetric configuration with a fixed radial profile $\phi_S(r, t) = \Phi_S(t) R_S(r)$.
To be self-consistent, we work in the limit where the short axion does not backreact onto the long axion, i.e., $f_S \ll f_L$.
The fixed-profile ansatz neglects radiation, which we reintroduce via an artificial damping term with coefficient $\Gamma_\mathrm{inst}$.
In the small $\Phi_L/f_L$ limit, keeping only linear terms,
\begin{subequations}\label{eq:oscillonEffectiveEOM}
\begin{align}
\ddot{\Phi}_L
+ \f{3}{2t}\dot{\Phi}_L
&= - m^2\mu^2\Phi_L,
\\
\ddot{\Phi}_S
+ \p{\f{3}{2t} + \Gamma_\te{inst}} \dot{\Phi}_S
&= - V_0'(\Phi_S) - V_1'(\Phi_S)\f{\Phi_L}{f_L}.
\end{align}
\end{subequations}
Here $V_0$ and $V_1$ are effective potentials derived by integrating out $R_S$.
Ultimately, the precise form of these potentials is unimportant, their salient feature being that they possess attractive nonlinearities.
These equations are thus precisely in the same form as those for the homogeneous system studied in Ref.~\cite{Cyncynates:2021xzw},
and autoresonance between $\Phi_S$ and $\Phi_L$ therefore occurs for a broad range of initial conditions (although the likelihood that any given patch of space leads to oscillon formation and autoresonance is most easily assessed with simulations).
\Cref{fig:phaseLock} demonstrates that autoresonance between the short oscillons and the long axion occurs in our full $3+1D$ simulations.
The short-axion oscillon oscillates at the same frequency $\omega = \mu$ as the driver
$\left\langle \phi_L \right\rangle$, conclusively demonstrating local nonlinear resonance
inside the oscillon.
The phase offset between the short and long axion evolves from roughly $\pi/3$ at early times
(left side of \cref{fig:phaseLock}) to about $\pi/2$ just before oscillon death (right side),
indicating an increasingly efficient energy transfer from the long axion to the short oscillon.
The energy transfer rate peaks when the phase shift reaches $\pi / 2$, and subsequently decreases
as the long axion's amplitude redshifts.
In analogy to homogeneous autoresonance, the long axion can then no longer support the short
axion against its own radiation.
Using the equations of motion \cref{eq:oscillonEffectiveEOM} for the spherically symmetric system, we can solve for the dynamics of the short oscillon and the long driver to arrive at the driven oscillon lifetime $t_\te{death}$ in terms of its instantaneous vacuum lifetime $T_\te{inst}(\mu)$.
The details of this calculation are described in \cref{app:oscillons}, and we summarize the results here.
First, the long driver amplitude must be large enough that it supplies sufficient energy to the oscillon, leading to
\begin{align}
\label{eq:driveLimit}
m \mu t_\te{death}\approx \left[ m \mu T_\te{inst}(\mu) \right]^{4/3}.
\end{align}
We compare this analytic scaling to simulations in \cref{fig:lifetime}, verifying the $\left[ \mu T_\mathrm{inst}(\mu) \right]^{4/3}$
dependence in the range $0.73 \lesssim \mu \leq 0.83$.
At lower frequencies the driven oscillon lifetime is shorter than the vacuum lifetime: once
the oscillon falls off autoresonance, it rapidly dumps its energy into the long axion field,
cutting its life short.
Larger values of $\mu$ require longer ($3+1D$) simulation runtimes than our computational resources permit,
but spherically symmetric simulations verify the scaling of \cref{eq:driveLimit} out to $\mu = 0.89$.
The oscillon also backreacts on the driver, inducing spatial perturbations.
These fluctuations remain small only until
\begin{align}
\label{eq:BRLimit}
m \mu t_\te{death}
\lesssim \calF^{8/3}.
\end{align}
This scaling is demonstrated in \cref{fig:Backreaction}: the spherically symmetric solutions to the full coupled nonlinear wave equations (see \cref{eq:OscillonEOM}) exhibit $\calF^{8/3}$ behavior for $\calF \lesssim 40$,
at which point the lifetime saturates the driving limit, \cref{eq:driveLimit}.
Finally, the oscillon siphons energy from the long axion, depleting the latter's local energy density.
Nearby regions of space then resupply this region with long axion; requiring that this resupply rate exceeds the
depletion rate due to the oscillon leads to the final constraint
\begin{align}
\label{eq:DepletionLimit}
t_\te{death}\lesssim \calF^2 T_\te{inst}(\mu).
\end{align}
One can check that there is no region of the $\calF$--$T_\mathrm{inst}$ parameter space where \cref{eq:DepletionLimit} dominates the lifetime.
Taken together, the lifetime is
\begin{align}
\label{eq:lifetime}
m \mu t_\te{death}
&\approx \min\left( \left[ m \mu T_\te{inst}(\mu) \right]^{4/3}, \calF^{8/3} \right).
\end{align}
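The competition between the drive and backreaction limits can be sketched numerically (a toy evaluation of \cref{eq:lifetime}; the function and argument names are ours):

```python
def driven_lifetime(m_mu_T_inst, F):
    """m*mu*t_death: minimum of the drive limit (m mu T_inst)^{4/3}
    and the backreaction limit F^{8/3}."""
    return min(m_mu_T_inst ** (4.0 / 3.0), F ** (8.0 / 3.0))

# Drive-limited when F^2 exceeds m*mu*T_inst, backreaction-limited otherwise:
print(driven_lifetime(1e3, 40.0))   # drive limit, ~1e4
print(driven_lifetime(1e6, 10.0))   # backreaction limit, ~464
```

The crossover between the two branches occurs at $m \mu T_\te{inst} \approx \calF^2$, matching \cref{eq:DepletionLimit} never dominating.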
While the lifetimes of these driven oscillons are parametrically enhanced relative to their in-vacuum expected lifespans, they are still far too short-lived to be of any cosmological relevance.
Nonetheless, these novel dynamics---interesting in their own right---potentially broaden the class of scalar field theories that admit oscillons surviving into the present day.
We discuss this possibility and associated challenges in App.~\ref{app:generalPotentialLongevity}.
\section{Discussion} \label{sec:discussion}
Nonlinear effects in the early Universe can have a drastic impact on the late-time distribution of energy in dark sectors.
The ``friendly'' axion system of Ref.~\cite{Cyncynates:2021xzw} provides a concrete example model, where nonlinearities dominate the dynamics of both the background and fluctuations and have important consequences for direct detection experiments.
In this paper, we numerically evolved the full system, showing that large axion fluctuations---in particular the nucleation of oscillons---work to equilibrate the relic densities of the two axions for a moderate ratio of the heavy axion's decay constant to that of the light one
($6 \lesssim \calF \lesssim 20$).
For smaller decay constant ratios $\calF \lesssim 6$, spatial fluctuations have a negligible effect
on the dynamics, and we recover the results of homogeneous computations from
Ref.~\cite{Cyncynates:2021xzw}. At larger ratios $\calF \gg 20$, oscillon nucleation prevents the heavy axion from ever
attaining a substantial abundance.
The novel dynamics in the intermediate-$\calF$ regime position friendly axions to be positively identified
as two-component dark matter by direct detection experiments.
The lighter axion's abundance is reduced by no more than a factor of about two, in sharp contrast to expectations based on homogeneous approximations in which its abundance would be parametrically depleted~\cite{Cyncynates:2021xzw}.
The heavier axion's abundance (and therefore detection prospects) is still parametrically enhanced (by a factor of $\approx \calF^2 / 2$), but only at a moderate cost to the visibility of the lighter axion.
Many upcoming axion direct detection experiments~\cite{Brouwer:2022bwo,Alesini:2017ifp,Stern:2016bbw,DMRadio:2022pkf} would potentially be sensitive to \textit{both} axions in a friendly pair having masses within the experiment's sensitivity band.
Direct detection of axion dark matter with a decay constant substantially smaller than that
expected in standard misalignment scenarios should prompt a search for a second, more weakly
coupled axion at a nearby mass.
We also computed the stochastic gravitational wave background produced by oscillon nucleation in a friendly axion sector.
If friendly axions compose all of the dark matter, the present-day strain is well out of reach of near-future gravitational wave experiments, but the cosmic microwave background polarization does constrain (and in the future may probe) hyperlight friendly pairs making up a subcomponent of dark matter.
Density and vector perturbations are also produced in these scenarios; their effect on the CMB
(and other cosmological observables) is less straightforward to evaluate, but they may well provide
even more stringent constraints than just the (as-yet unobserved) primordial $B$-mode polarization.
Finally, for $\calF \gtrsim 20$, although autoresonance is quenched by oscillon production (preventing the axions' energies from equalizing) our simulations demonstrate that short-axion oscillons produced in the early Universe are driven by the long axion background, parametrically extending their lifetimes.
For the specific friendly axion potential studied here (\cref{eq:twoAxionPotential}), driven oscillons can live one to two orders of magnitude longer than their undriven counterparts.
Though they are still not long-lived enough to be astrophysically relevant, even at the lightest possible axion masses, this may not be the case for other scalar potentials.
Similar dynamics in other coupled axion theories may lead to driven oscillons that could naturally live until the present day, with numerous possible observational signatures including gravitational lensing~\cite{VanTilburg:2018ykj}, optical lensing~\cite{Prabhu:2020pzm}, and electromagnetic bursts~\cite{Prabhu:2020yif,Buckley:2020fmh,Amin:2020vja,Amin:2021tnq}.
Our results are qualitatively insensitive to the amplitude of the initial primordial curvature perturbations.
However, their amplitude does determine the
precise minimum and maximum decay constant ratios for which the two axion energy densities equalize.
For simplicity, we used a scale-invariant initial power spectrum with magnitude set by the \textit{Planck} measurements at CMB scales, but in reality adiabatic fluctuations are red-tilted on large scales and are much less constrained on smaller scales.
If there is less initial power at small scales, the time to nonlinearity and oscillon formation increases, allowing friendly pairs with larger decay constant ratios $\calF \gg 20$ to achieve equipartition.
We also note that whether the initial axion perturbations are adiabatic as above or seeded directly in the axion field (i.e., isocurvature perturbations) does not measurably affect any of our results, as verified by a set of simulations with purely isocurvature initial conditions.
The string axiverse is a rare example of a low-energy signature of quantum gravity,
most of whose novel predictions reside at the grand-unified or string scales, far outside experimental reach.
In general, an axiverse can comprise a multitude of light, coupled axions; this work provides further
evidence of the outsized impact nonlinear effects and interactions have on the phenomenology of the axiverse.
The friendly model considered here, for which nonperturbative dynamics revise predictions by multiple orders of magnitude, is only a prototypical example; further work is necessary to understand the phenomenology of fully realistic axiverses and the critical role played by nonlinear dynamics.
\begin{acknowledgments}
We thank Masha Baryakhtar and Davide Racco for thoughtful feedback on this manuscript, Savas Dimopoulos for useful discussions on ``oscillon longevity centers,'' and Dmitriy Zhigunov for helpful conversations about strongly nonlinear fields.
D.C.\ is grateful for the support of the Stanford Institute for Theoretical Physics (SITP), the National Science Foundation under Grant No.\ PHY-2014215, and the Gordon and Betty Moore Foundation under Grant No.\ GBMF7946. O.S.\ is supported by a DARE fellowship from the Vice Provost for Graduate Education at Stanford University. J.O.T.\ is supported by the ARCS Foundation, and is thankful to the Perimeter Institute for its hospitality during the final stages of completing this manuscript.
This work used the Extreme Science and Engineering Discovery Environment (XSEDE)~\cite{xsede}, which
is supported by National Science Foundation grant number ACI-1548562; simulations were run through allocation TG-PHY200037 on the Anvil cluster at Purdue University, Bridges-2 at the Pittsburgh Supercomputing Center which is supported by NSF award number ACI-1928147, and
Expanse at the San Diego Supercomputer Center. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research, Innovation and Science.
Simulations in this work were implemented with \textsf{pystella}~\cite{pystella}, which is available at
\href{https://github.com/zachjweiner/pystella}{github.com/zachjweiner/pystella} and makes use of
the Python packages \textsf{PyOpenCL}~\cite{kloeckner_pycuda_2012},
\textsf{Loopy}~\cite{kloeckner_loopy_2014}, \textsf{mpi4py}~\cite{DALCIN2008655,DALCIN20051108},
\textsf{mpi4py-fft}~\cite{jpdc_fft}, and \textsf{NumPy}~\cite{Harris:2020xlr}.
This work also made use of the packages \textsf{SciPy}~\cite{Virtanen:2019joe}, \textsf{matplotlib}~\cite{Hunter:2007ouj}, \textsf{SymPy}~\cite{Meurer:2017yhf}, and
\textsf{CMasher}~\cite{cmasher}.
\end{acknowledgments}
\appendix
\section{Equations of motion and numerical implementation}\label{app:numerical-details}
For completeness, we enumerate the evolution equations as implemented in simulations.
We use a conformal-time, ``mostly plus'' FLRW metric in the conformal Newtonian gauge with line
element
\begin{align}
\begin{split}
\mathrm{d} s^2
&= - a(\tau)^2 \left[ 1 + 2 \Phi(\tau, \mathbf{x}) \right] \mathrm{d} \tau^2
\\ &\hphantom{ {}={} }
+ a(\tau)^2 \left[
\left\{ 1 - 2 \Phi(\tau, \mathbf{x}) \right\} \delta_{ij}
+ h_{ij}(\tau, \mathbf{x})
\right]
\mathrm{d} x^i \mathrm{d} x^j.
\end{split}
\end{align}
Here $\Phi$ is the Newtonian potential and $h_{ij}$ the transverse ($\partial_i h_{ij} = 0$) and
traceless ($\delta^{ij} h_{ij} = 0$) tensor perturbation.
We neglect scalar anisotropic stress and vector perturbations.
We define the comoving Hubble parameter $\mathcal{H} \equiv \partial_\tau a / a$ (while the standard
Hubble parameter is $H \equiv \partial_t a / a = \mathcal{H} / a$).
Well before matter-radiation equality, the axions make a negligible contribution to the expansion of
the Universe; the solution to the Friedmann equations in a radiation Universe is
$a(\tau) / a(\tau_m) = \tau / \tau_m$.
We take $\tau_m = m^{-1}$ so that the scale factor is normalized to the time when $H = m$.
Because tensor perturbations (i.e., gravitational waves) from inflation are very small and their
subsequent sourcing by the axions is suppressed by $(f_L / \Mpl)^2$, we neglect
their backreaction onto the axion fields.
For the same reason, we neglect the contribution of the axions to the Newtonian potential.
\subsection{Equations of motion}
The Euler-Lagrange equation for the axions reads
\begin{align}
\begin{split}\label{eqn:phi-I-eom}
\partial_\tau^2 \phi_I
&= - \left[
2 \mathcal{H}(\tau)
- 4 \partial_\tau \Phi
\right]
\partial_\tau \phi_I
+ \left( 1 + 4 \Phi \right) \partial_i \partial_i \phi_I
\\ &\hphantom{ {}={} }
- a(\tau)^2 \left( 1 + 2 \Phi \right) \frac{\partial V}{\partial \phi_I},
\end{split}
\end{align}
where the potential $V$ is defined in \cref{eq:twoAxionPotentialPhi} and $I = S$ or $L$.
(We use repeated Latin indices $i$, $j$, $k$, etc., to denote spatial components that are contracted
with the Kronecker delta regardless of their placement.)
In a Universe dominated by a single fluid with equation of state $w$ and a sound speed
$c_s^2 \equiv \delta P / \delta \rho$, the Einstein equations for $\Phi$ may be rearranged into
\begin{align}
\begin{split}\label{eqn:newtonian-eom}
\partial_\tau^2 \Phi
&= - 3 \left( 1 + c_s^2 \right) \mathcal{H} \partial_\tau \Phi
\\ &\hphantom{ {}={} }
- 3 \left( c_s^2 - w \right) \mathcal{H}^2 \Phi
+ c_s^2 \partial_i \partial_i \Phi,
\end{split}
\end{align}
where both $w$ and $c_s^2$ are $1/3$ in the radiation-dominated era.
Finally, the tensor perturbations evolve according to
\begin{align}\label{eqn:gw-eom}
\partial_\tau^2 h_{ij} + 2 \mathcal{H} \partial_\tau h_{ij} - \partial_k \partial_k h_{ij}
&= \frac{2 a(\tau)^2}{\Mpl^2} \Pi^{i}_{\hphantom{i}j}.
\end{align}
Gravitational waves are sourced by the transverse and traceless anisotropic stress tensor
$\Pi_{ij}$ whose Fourier modes are given in terms of the full stress tensor
$T_{ij}$ by
\begin{align}
\Pi_{ij}(\tau, \mathbf{k})
&= \left(
P_{il}(\mathbf{k}) P_{jm}(\mathbf{k})
- \frac{1}{2} P_{ij}(\mathbf{k}) P_{lm}(\mathbf{k})
\right) T_{lm}(\tau, \mathbf{k})
\end{align}
with
\begin{align}
P_{ij}(\mathbf{k})
&= \delta_{ij} - \frac{k_i k_j}{k^2}.
\end{align}
Unlike the Newtonian potential, which is dominated by the standard adiabatic perturbations in the radiation fluid, the gravitational waves are only sourced by the axions.
However, the sourced gravitational waves are also suppressed by $(f_L / \Mpl)^2$, so it is
entirely sufficient to evaluate the stress tensor as if in a homogeneous FLRW Universe:
\begin{align}\label{eqn:full-stress-tensor}
\begin{split}
T^i_{\hphantom{i}j}
&= \sum_I \partial_i \phi_I \partial_j \phi_I
\\ &\hphantom{{}={}}
- \delta^i_{\hphantom{i}j}
\left(
\frac{1}{2} \sum_I \partial_\mu \phi_I \partial^\mu \phi_I
+ V(\phi_S,\phi_L)
\right).
\end{split}
\end{align}
The final term contributes only to the trace of the stress-energy tensor and may be dropped when computing the tensor metric perturbations.
Because the backreaction of gravitational waves on the axions is negligible, we may simply ignore any
initial amplitude generated during inflation and consider the evolution of \cref{eqn:gw-eom} from
zero initial conditions.
\subsection{Initial conditions}
Imposing that $\Phi$ was frozen outside of the horizon ($k \tau \ll 1$) to its primordial value
generated during inflation sets $\Phi(\tau \ll 1/k, \mathbf{k}) = \Phi_0(\mathbf{k})$
and $\partial_\tau \Phi(\tau \ll 1/k, \mathbf{k}) = 0$.
In this case, we find the solution
\begin{align}\label{eqn:Phi-analytic-solution}
\Phi(\tau, \mathbf{k})
&= 3 \Phi_0(\mathbf{k}) \frac{\sin y - y \cos y}{y^3},
\end{align}
where $y \equiv \sqrt{w} k \tau$.
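For reference, this transfer function (normalized so that $\Phi \to \Phi_0$ far outside the horizon) is straightforward to evaluate numerically; the following sketch, assuming NumPy and with a function name of our choosing, guards the removable singularity at $y = 0$ with the series expansion:

```python
import numpy as np

def phi_transfer(y):
    """Radiation-era transfer function for the Newtonian potential,
    normalized so that Phi -> Phi_0 far outside the horizon (y -> 0)."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    small = np.abs(y) < 1e-4
    # Series expansion avoids the 0/0 at y = 0: 3(sin y - y cos y)/y^3 ~ 1 - y^2/10.
    out[small] = 1.0 - y[small] ** 2 / 10.0
    ys = y[~small]
    out[~small] = 3.0 * (np.sin(ys) - ys * np.cos(ys)) / ys**3
    return out

print(phi_transfer(np.array([0.0, 1.0, 10.0])))
```

Deep inside the horizon the potential oscillates with an envelope decaying as $1/y^2$, as expected for a radiation fluid.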
The primordial curvature perturbation is characterized by a dimensionless power spectrum
$\Delta_\Phi^2(k)$ defined by
\begin{align}\label{eqn:def-dimless-power-spectrum-Phi}
\frac{k^3}{2 \pi^2}
\left\langle \Phi_0(\mathbf{k}_1) \Phi_0(\mathbf{k}_2) \right\rangle
&= (2 \pi)^3 \delta(\mathbf{k}_1 + \mathbf{k}_2)
\Delta_\Phi^2(k),
\end{align}
which, for standard slow-roll inflation, is nearly scale invariant with amplitude $\sim 10^{-9}$~\cite{Planck:2018vyg}.
To avoid specifying the mass scale of the problem, we neglect the small spectral tilt
and simply take a scale-invariant spectrum.
We begin the simulations when $H = m$ (at $\tau = \tau_1$), solving the linearized system of equations from an early time when all relevant wavenumbers are well outside the horizon (i.e., when $k \ll a H$) to determine initial conditions.
The axions are initialized as Gaussian-random fields with mean field value and velocity at $\tau_1$ set according to the solution to the homogeneous system.
The fluctuations are set in Fourier space to match the linearized solution $\phi_{I, k}(\tau_1)$, multiplied by a Gaussian-random complex number (normalized such that the mean squared modulus is unity).
For each axion and wavevector $\mathbf{k}$ on the grid, we therefore set
\begin{align}
\phi_I(\tau_1, \mathbf{k})
&= \phi_{I, k}(\tau_1) \sqrt{- \ln U_1(\mathbf{k})} e^{2 \pi i U_2(\mathbf{k})} \\
\partial_\tau \phi_I(\tau_1, \mathbf{k})
&= \partial_\tau \phi_{I, k}(\tau_1) \sqrt{- \ln U_1(\mathbf{k})} e^{2 \pi i U_2(\mathbf{k})},
\end{align}
where $U_1(\mathbf{k})$ and $U_2(\mathbf{k})$ are uniform random variates on $(0, 1]$, each of which is the same for both axions.
The Newtonian potential is initialized analogously (and with random numbers for each wavevector matching those for the axions) using the analytic solution \cref{eqn:Phi-analytic-solution}.
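The random factor above is simple to realize numerically; a minimal sketch (function name ours, NumPy assumed) that draws $\sqrt{-\ln U_1}\,e^{2\pi i U_2}$ and exhibits its unit mean squared modulus:

```python
import numpy as np

def random_mode_factor(shape, rng):
    """Complex random factor sqrt(-ln U1) * exp(2*pi*i*U2), with unit mean
    squared modulus; the same draw multiplies both the field and its velocity."""
    u1 = 1.0 - rng.uniform(size=shape)  # uniform on (0, 1], avoids log(0)
    u2 = rng.uniform(size=shape)
    return np.sqrt(-np.log(u1)) * np.exp(2j * np.pi * u2)

rng = np.random.default_rng(0)
factors = random_mode_factor((256, 256), rng)
print(np.mean(np.abs(factors) ** 2))  # ~1 by construction
```

Since $|{\cdot}|^2 = -\ln U_1$ is exponentially distributed with unit mean, this reproduces Gaussian-random field statistics mode by mode.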
\subsection{Gravitational wave backgrounds}\label{app:gravitational-waves}
Gravitational waves carry an effective energy density which, deep inside the horizon (i.e.,
$k \gg \mathcal{H}$), is~\cite{Abramo:1997hu,Brandenberger:2018fte,Clarke:2020bil}
\begin{align}
\rho_\mathrm{GW}(\tau)
= \frac{\Mpl^2}{4 a^2}
\left\langle
\overbar{\partial_\tau h_{ij} \partial_\tau h_{ij}}
\right\rangle,
\label{eqn:rho-gw-subhorizon}
\end{align}
where the bar denotes a time average (i.e., over oscillations).
The relic abundance of gravitational waves today per logarithmic wavenumber is
\begin{align}\label{eqn:omega-gw-spectrum-def}
\Omega_\mathrm{GW}(\tau, k)
= \frac{1}{\bar{\rho}(\tau)} \dd{\rho_\mathrm{GW}(\tau)}{\ln k}.
\end{align}
Substituting the inverse Fourier transform of $h_{ij}$ into \cref{eqn:rho-gw-subhorizon} permits
rewriting \cref{eqn:omega-gw-spectrum-def} in terms of the dimensionless power spectrum of
$\partial_\tau h_{i j}$ (defined in analogy to \cref{eqn:def-dimless-power-spectrum-Phi}) as
\begin{align}\label{eqn:omega-gw-spectrum-ito-power-spectrum}
\Omega_\mathrm{GW}(\tau, k)
= \frac{1}{12 \mathcal{H}(\tau)^2}
\sum_{i, j} \Delta^2_{\partial_\tau h_{ij}}(\tau, k),
\end{align}
after plugging in $\bar{\rho}(\tau) = 3 \Mpl^2 H(\tau)^2$.
The spectrum evaluated in the early Universe is related to that at the present day (at $\tau_0$) by
the transfer function
\begin{align}
\frac{
\Omega_{\mathrm{GW}}(\tau_0, k) h^2
}{
\Omega_{\mathrm{GW}}(\tau, k)
}
&= \Omega_{\mathrm{rad}}(\tau_0) h^2
\frac{g_{\star}(\tau)}{g_{\star}(\tau_0)}
\left( \frac{g_{\star S}(\tau)}{g_{\star S}(\tau_0)} \right)^{-4/3},
\label{eqn:gw-amplitude-transfer-function}
\end{align}
and would be observed at present-day frequencies related to the wavenumber $k$ by
\begin{align}
\begin{split}
f_\mathrm{GW}
&= \frac{k / 2 \pi a(\tau)}{\sqrt{H(\tau) \Mpl}}
\left[
\Omega_{\mathrm{rad}}(\tau_0)
H_0^2 \Mpl^2
\right]^{1/4}
\\ &\hphantom{{}={} }
\times
\left( \frac{g_{\star}(\tau)}{g_{\star}(\tau_0)} \right)^{1/4}
\left( \frac{g_{\star S}(\tau)}{g_{\star S}(\tau_0)} \right)^{-1/3}.
\end{split}
\end{align}
Here $g_{\star}$ and $g_{\star S}$ are the numbers of relativistic degrees of freedom in energy and
entropy density, respectively.
Note that the present-day abundance of radiation is
$\Omega_{\mathrm{rad}}(\tau_0) h^2 \approx 4.2 \times 10^{-5}$~\cite{Planck:2018vyg} and that
$H_0 / h \equiv 100 \, \mathrm{km} \, \mathrm{s}^{-1} / \mathrm{Mpc} \approx 3.24 \times 10^{-18} \, \mathrm{Hz}$.
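As a numerical illustration of \cref{eqn:gw-amplitude-transfer-function}, the following sketch evaluates the dilution factor for emission while the Standard Model plasma has $g_\star = g_{\star S} = 10.75$; these values and the present-day ones ($3.38$ and $3.91$) are illustrative assumptions, not taken from the text:

```python
# Dilution of a GW background produced during radiation domination to today,
# per the transfer function above. The g_* values are assumed, illustrative
# Standard Model numbers; OMEGA_RAD_H2 is the quoted radiation abundance.
OMEGA_RAD_H2 = 4.2e-5
g_star_emit, g_star_today = 10.75, 3.38      # energy-density d.o.f.
g_starS_emit, g_starS_today = 10.75, 3.91    # entropy-density d.o.f.

def gw_transfer():
    """Omega_GW(today) h^2 / Omega_GW(emission)."""
    return (OMEGA_RAD_H2
            * (g_star_emit / g_star_today)
            * (g_starS_emit / g_starS_today) ** (-4.0 / 3.0))

print(gw_transfer())  # roughly 3e-5
```

The smallness of this factor is simply the usual redshifting of radiation-like energy density relative to the matter that comes to dominate later.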
\subsection{Numerical implementation}
We discretize the evolution equations, \cref{eqn:phi-I-eom,eqn:newtonian-eom,eqn:gw-eom},
onto a three-dimensional, regularly spaced grid with periodic boundary conditions.
Following Refs.~\cite{Adshead:2019igv,Adshead:2019lbr,Weiner:2020sxn}, we evolve the gravitational
wave equation of motion (\cref{eqn:gw-eom}) under the replacement of the transverse-traceless
stress tensor (which is only easily calculated in Fourier space) with the full stress tensor
(\cref{eqn:full-stress-tensor}).
The transverse-traceless projection is instead performed on the tensor field itself only when
outputting gravitational wave spectra, drastically reducing the required number of Fast Fourier
Transforms (which in distributed-memory contexts are a major bottleneck).
For similar reasons, rather than use the analytic solution \cref{eqn:Phi-analytic-solution} for the
Newtonian potential (which requires forward and inverse Fourier transforms at each step), we simply
evolve \cref{eqn:newtonian-eom} in position space.
Spatial derivatives are approximated with fourth-order centered differencing, while integration in time is implemented with a ``low-storage,'' fourth-order Runge-Kutta method~\cite{carpenter1994fourth}.
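As a one-dimensional illustration of the spatial discretization, the standard fourth-order centered stencil for the second derivative can be sketched as follows (a minimal example, with `np.roll` supplying the periodic boundary):

```python
import numpy as np

def laplacian_1d_fourth_order(f, h):
    """Fourth-order centered approximation to d^2 f/dx^2 on a periodic grid;
    np.roll implements the periodic boundary conditions."""
    return (-np.roll(f, 2) + 16.0 * np.roll(f, 1) - 30.0 * f
            + 16.0 * np.roll(f, -1) - np.roll(f, -2)) / (12.0 * h**2)

# Sanity check on sin(x), whose exact second derivative is -sin(x).
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
h = x[1] - x[0]
err = np.max(np.abs(laplacian_1d_fourth_order(np.sin(x), h) + np.sin(x)))
print(err)  # should scale as h^4
```

Doubling the resolution should reduce the error by roughly $2^4 = 16$, a useful convergence check for any such implementation.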
We have verified that the results are insensitive to the precise choices of resolution and box length $L$: they are consistent with simulations of the same physical volume at $1280^3$ and $1536^3$ gridpoints (compared to the $1024^3$ used for main results), as well as with simulations at the same resolution with box lengths $1.25$ and $1.5$ times larger.
We have also checked for a representative case that results at a fixed resolution and volume are qualitatively
insensitive to the random seed used to generate the stochastic initial conditions.
\section{Details of driven oscillons} \label{app:oscillons}
In \cref{sec:drivenOscillons}, we demonstrated that short-axion oscillons remain in autoresonance with the nearly-homogeneous long axion, often extending their lifetime well beyond the in-vacuum expectation.
The physics of this localized autoresonance is mostly captured by the effective equations of motion (\cref{eq:oscillonEffectiveEOM})
obtained by integrating out the spatial profile of the oscillon, demonstrating that the short oscillon is well approximated as a one dimensional driven nonlinear oscillator.
However, \cref{eq:oscillonEffectiveEOM} fails to capture some important effects that ultimately limit the lifetime of the driven oscillon.
In what follows, we derive the corresponding lifetime bounds, \cref{eq:driveLimit,eq:BRLimit,eq:DepletionLimit}, in \cref{app:lifetimeBounds} and discuss the possibility of cosmologically long-lived oscillons in more general multi-axion potentials in \cref{app:generalPotentialLongevity}.
\subsection{Lifetime bounds}
\label{app:lifetimeBounds}
Most long-lived oscillons are well approximated by a single-harmonic ansatz, so $\Phi_S$ oscillates predominantly at a single frequency.
On autoresonance, this frequency approximately matches the long-axion mass ($\mu m$) up to transient oscillations around this stable point.
Thus on autoresonance we may approximate
\begin{align}
\Phi_S &\approx f_S\Theta_S^0\sin(m\mu t + \delta),\\
\label{eq:oscillonLongAmplitude}
\Phi_L &\approx f_L\Theta_L^0\p{\f{t}{t_0}}^{-3/4}\sin(m\mu t),
\end{align}
where $\delta$ is a phase offset.
While $\delta$ does have a time dependence, as discussed in App.~B.3 of~\cite{Cyncynates:2021xzw}, it is slow compared to the oscillatory timescale and thus we can approximate $\delta$ as a constant.
The power transferred from the long to the short axion is
\begin{align}
\f{P_{L\to S}}{V_\te{osc}}&=\p{\dot \Phi_S\partial_{\Phi_S} - \dot \Phi_L\partial_{\Phi_L}} V_\te{int},
\end{align}
where for our purposes we may approximate the interaction potential by a mass-mixing term,
\begin{align}
V_\te{int} \sim m^2 f_S \Phi_S\f{\Phi_L}{f_L}.
\end{align}
Thus, the time-averaged power transfer is
\begin{align}
\gen{P_{L\to S}} \approx -m\mu E_\te{osc} \f{\Theta_L^0}{\Theta_S^0}\p{\f{t}{t_0}}^{-3/4}\sin\delta,
\end{align}
where we've taken $m^2 f_S^2 (\Theta_S^0)^2 V_\te{osc}\approx E_\te{osc}$.
As shown in Ref.~\cite{Cyncynates:2021xzw}, the phase $\delta$ tends to $-\pi/2$ toward the end of autoresonance, and thus we take $\delta = -\pi/2$ when calculating the driven oscillon lifetime.
The oscillon is supported by the driver when its radiated power $P_\mathrm{rad}$ is smaller than the maximum power transfer from the long axion $\gen{P_{L\to S}}$, from which we obtain the approximate time of death $t_\mathrm{death}$,
\begin{align}
\label{eq:drivenLifetimeApp}
m\mu t_\te{death} \approx \p{m\mu\f{\Theta_L^0}{\Theta_S^0}T_\te{inst}(\mu)}^{4/3},
\end{align}
taking $t_0 \approx 1/m\mu$.
Here $T_\mathrm{inst}(\omega) = E_\te{osc}(\omega)/P_\te{rad}(\omega)$ is the instantaneous oscillon lifetime at the frequency $\omega$, which may be longer or shorter than the in-vacuum oscillon lifetime.
Provided that the driving frequency $m\mu$ corresponds to a slow-decaying part of the oscillon lifecycle, this result shows that the driven oscillon lifetime is parametrically enhanced relative to its vacuum lifetime.
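Explicitly, equating $P_\te{rad} = E_\te{osc}/T_\te{inst}(\mu)$ to the magnitude of $\gen{P_{L\to S}}$ at $\delta = -\pi/2$ gives
\begin{align}
\f{E_\te{osc}}{T_\te{inst}(\mu)}
&= m\mu E_\te{osc}\f{\Theta_L^0}{\Theta_S^0}\p{\f{t_\te{death}}{t_0}}^{-3/4},
\end{align}
and solving for $t_\te{death}$ with $t_0 \approx 1/m\mu$ reproduces the $4/3$ power in \cref{eq:drivenLifetimeApp}.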
So far, we have neglected the backreaction of $\theta_S$ onto $\theta_L$ in order to approximate $\theta_L$ as a homogeneous field.
While the details of backreaction are complicated, it is simple to obtain an estimate for when $\theta_S$ causes ${\cal O}(1)$ fluctuations in $\theta_L$ which then terminate the autoresonance.
To make this estimate, we observe in the equations of motion,
\begin{subequations}
\label{eq:OscillonEOM}
\begin{align}
\label{eq:longOscillonEOM}
\square\theta_L + m^2 \calF^{-2}\sin(\theta_S + \theta_L) + m^2 \mu^2\sin\theta_L
&= 0 \\
\square\theta_S + m^2 \mu^2\sin(\theta_S + \theta_L)
&= 0,
\end{align}
\end{subequations}
that $\theta_L$'s evolution depends on $\theta_S$ multiplied by $\calF^{-2}$.
Comparing this term to the final term in \cref{eq:longOscillonEOM}, we see that backreaction becomes important when
\begin{align}
\label{eq:backreactionLifetime}
\theta_L \sim \calF^{-2}\theta_S.
\end{align}
Assuming that $\theta_S$ has a constant $\mathcal{O}(1)$ amplitude, as inside an oscillon, and that $\theta_L$ decays like cold matter as in \cref{eq:oscillonLongAmplitude}, we find that spatial fluctuations in $\theta_L$ become order one at the time
\begin{align}
m\mu t_\te{death} \sim \p{\f{\Theta_L^0}{\Theta_S^0}}^{4/3}\calF^{8/3}.
\end{align}
After this time, the long axion no longer serves as a good driver and quickly dephases with the short axion oscillon, siphoning its energy and causing its rapid death.
We plot this predicted $\calF^{8/3}$ scaling versus the observed lifetime in a spherically symmetric driven oscillon simulation in \cref{fig:Backreaction}.
The $\calF^{8/3}$ scaling at small $\calF$ is eventually replaced by approximately constant scaling at larger $\calF$ when the lifetime bound \cref{eq:drivenLifetimeApp} takes over.
As the oscillon siphons energy from the long axion, it depletes the energy density in its local environment of volume
\begin{align}
V_\te{dep} \sim \f{P_\te{rad} t}{\rho_L}.
\end{align}
The surrounding long axion flows into the depleted volume at a velocity which we approximate assuming the inner depleted region is diminished by ${\cal O}(1)$:
\begin{align}
v
= \frac{\mathrm{d} \omega}{\mathrm{d} k}
= \f{k}{m_L} \sim \f{2\pi}{m \mu R_\te{dep}},
\end{align}
where $V_\te{dep} = (4\pi/3) R_\te{dep}^3$.
Using this velocity, we calculate the power flowing from the environment into the depleted region:
\begin{align}
P_{\te{env}\to\te{dep}} = 4\pi R_\te{dep}^2 v\rho_L.
\end{align}
Solving for the time at which $P_{\te{env}\to\te{dep}} = P_\te{rad}$, we find the following upper bound on the oscillon lifetime
\begin{align}
m\mu t_\te{death}\lesssim\p{\f{\Theta_L^0}{\Theta_S^0}}^2\calF^2 m T_\te{inst}(\mu).
\end{align}
\subsection{General potentials and cosmological longevity}
\label{app:generalPotentialLongevity}
For the simplest two-axion potentials, the lifetime of driven oscillons is still far shorter than the age of the Universe.
Generalizations of \cref{eq:twoAxionPotential} of the form
\begin{align}
V(\ths,\thl) = V_S(\ths + \thl) + V_L(\thl),
\end{align}
could plausibly be constructed so that driven oscillons survive into the present day.
The ``vacuum lifetime'' of the short oscillon $T_\mathrm{inst}(\mu)$ is then the instantaneous lifetime of single-field oscillons in the $V_S(\ths)$ potential.
During hierarchical structure formation, a driven short-axion oscillon finds itself in long-axion halos of increasing size, which could make it energetically possible for the oscillon to live indefinitely, although there are some significant uncertainties.
A stationary oscillon in a static long axion halo would eventually deplete its local environment of long axion, starving itself of the energy it needs to survive.
In a realistic galactic halo, however, the region inside the oscillon is constantly being replenished via the virial motion of the long axion.
To take advantage of this energy source, the oscillon must also adjust its phase to match that of the driver over one virial timescale; it is not clear whether particularly long-lived oscillons can perform this phase alignment without some efficient dissipation mechanism (such as dynamical friction due to photon radiation).
We thus leave the question of cosmologically long-lived driven oscillons to future work.
\bibliography{bibliography}
|
Title:
Packing the sky: coverage optimization and evaluation for large telescope arrays |
Abstract: Recent advancements in low-cost astronomical equipment, including
high-quality medium-aperture telescopes and low-noise CMOS detectors, have made
the deployment of large optical telescope arrays both financially feasible and
scientifically interesting. The Argus Optical Array is one such system,
composed of 900 eight-inch telescopes, which is planned to cover the entire
night sky in each exposure and is capable of being the deepest and fastest
Northern Hemisphere sky survey. With this new class of telescope comes new
challenges: determining optimal individual telescope pointings to achieve
required sky coverage and overlaps for large numbers of telescopes, and
realizing those pointings using either individual mounts, larger mounting
structures containing telescope subarrays, or the full array on a single mount.
In this paper, we describe a method for creating a pointing pattern, and an
algorithm for rapidly evaluating sky coverage and overlaps given that pattern,
and apply it to the Argus Array. Using this pattern, telescopes are placed into
a hemispherical arrangement, which can be mounted as a single monolithic array
or split into several smaller subarrays. These methods can be applied to other
large arrays where sky packing is challenging and evenly spaced array
subdivisions are necessary for mounting.
| https://export.arxiv.org/pdf/2208.08794 |
\keywords{Argus Optical Array, telescope, large arrays, wide-field surveys, sky coverage}
\section{INTRODUCTION}
\label{sec:intro}
The Evryscopes are a northern and southern pair of all-sky telescope arrays composed of 24 6.1-cm-aperture telescopes on a shared mount. By sacrificing survey depth and sky sampling for extreme field of view (FoV) and high-cadence observations, each Evryscope monitors over 8,000 square degrees in every two-minute exposure\cite{law_2015, ratzloff_2019}. The Evryscopes observe the sky in a series of ``ratchets,'' tracking the sky for two hours before ratcheting back to start another set of observations. These telescopes have been used in conventional surveys searching for long-term variability (e.g.\ Ref.~\citenum{ratzloff2019variables, ratzloff_2020_hot, galliher_2020}), while their innovative design allows for a new parameter space to be explored: the short-timescale transient sky. The Evryscopes have detected rapid transient events ranging from flares and superflares (e.g.\ Ref.~\citenum{howard_superflare, howard_evryflare_1, howard_evryflare_2, howard_evryflare_3, glazier_trappist}) to optical reflections of space debris (e.g.\ Ref.~\citenum{corbett_2020}).
The Argus Optical Array is a proposed next-generation system designed for deep, high-cadence, and all-sky observations\cite{law_2022}. The Argus Array will be comprised of 900 mass-produced medium-aperture telescopes and high-speed sensors. Argus, just like the Evryscopes, takes advantage of recent technological advancements that make it more cost effective to build an array of many smaller telescopes, compared to a single large-aperture telescope\cite{LAST}. The array will have the collecting area of a 5-m telescope, an étendue approaching that of VRO (LSST), and the ability to follow the evolution of every $m_g<23.6$ time-variable source across the sky simultaneously.
The Argus Array is currently in the prototyping phase. The soon-to-be deployed prototype, called the Argus Pathfinder, will be an array of 38 telescopes that monitor a stripe of declination (dec.), from -20$^{\circ}$ to 72$^{\circ}$, for 15 minutes at a time; over the course of an observing night Pathfinder will build up observations for most of the visible sky\cite{argus_spie}. The Argus Array and Pathfinder telescopes will be rigidly mounted in hemispherical ``bowl'' mounts that will track the sky in 15-minute ratchets. The bowls will be weather-sealed inside a building, with the virtual center of the hemispherical bowl positioned on a window through which all of the telescopes look (Fig.~\ref{fig:argus}).
The Evryscopes, Argus, and Pathfinder all share a common design constraint that greatly reduces their mechanical complexity: the telescopes in each of the arrays are rigidly coupled to their shared mounts. The mounts hold the arrays in a hemispherical arrangement, so that all the telescope apertures lie on the surface of a sphere, mimicking the shape of the sky. The telescopes' pointings are normal to this pseudo-surface. Telescopes mounted in the Argus and Pathfinder arrays point radially inward while telescopes mounted in the Evryscope domes point radially outward. Therefore, the location of a telescope in the mount determines its on-sky pointing. By using fixed array pointings, the number of moving parts is reduced by several orders of magnitude. While simplifying the construction and maintenance of the system, this constraint requires careful planning of the ``telescope packing,'' i.e.\ the positioning of telescopes within this hemispherical arrangement. Determining the best packing pattern can be difficult, particularly in next-generation systems, because as the number and size of telescopes increases, and their FoVs decrease, much larger mounting structures are required to achieve continuous sky coverage. For very large arrays, the size of Argus or bigger, where the diameters of the mounts become many tens of feet, it can be beneficial to spread the array over several smaller mounts.
In this paper, we develop an algorithm for rapidly evaluating the on-sky performance metric for a large array (Section \ref{sec:eval}) and discuss how it can be used to calculate performance metrics (Section \ref{sec:metrics}). Next, we discuss the on-sky packing geometry for a large array (Section \ref{sec:geometry}), optimizing the array shape (Section \ref{sec:shape}), application of these tools to the Argus Array (Section \ref{sec:app}), and dividing the array into several similar subarrays (Section \ref{sec:multi}). Finally, we summarize conclusions from our work (Section \ref{sec:conc}).
\section{SKY COVERAGE EVALUATION}
\label{sec:cov}
Performance metrics include: total FoV, overlap between telescopes, and limiting magnitudes as the array builds up observations over time. The term ``sky coverage'' refers to the portions of the sky that are observed by the array.
\subsection{Evaluation Algorithm}
\label{sec:eval}
First, we evaluate the sky coverage of an individual telescope at an arbitrary pointing on the sky. The large FoVs of telescopes typically used in arrays make a careful evaluation challenging because the resultant spherical geometry distortion cannot be neglected. Our evaluation algorithm uses the on-sky pointings for all of the telescopes in the array and their FoVs to determine key survey metrics. The telescopes will be oriented so that their rectangular FoVs align with the celestial grid, with either their short or long axis along the direction of R.A., a helpful constraint for the data production pipeline. We define the z-axis to point along the center of the telescope's FoV. The algorithm proceeds as follows:
\begin{enumerate}
\item We create a mesh grid representing the area of the sky we expect our array to observe. For example, a representative grid for the entire sky would be a mesh grid of values ranging from 0 to 360 in the x-direction (R.A.) and -90 to 90 in the y-direction (dec.). The density of points (``sky pixels'') in this grid will impact the run time; we typically use $1000\times1000$ points.
\item We now convert our 2D sky pixel array into 3D coordinates. We define the coordinates as:
\begin{equation}
\begin{gathered}
X = \cos{(\alpha)}\sin{(\delta + \frac{\pi}{2})}\\
Y = \cos{(\delta + \frac{\pi}{2})}\\
Z = \sin{(\alpha)}\sin{(\delta + \frac{\pi}{2})}
\end{gathered}
\end{equation}
with $\alpha$ and $\delta$ being the sky pixel R.A. and dec.\ arrays in radians, respectively. These coordinates place the north celestial pole (NCP) at $(0,-1,0)$.
\item Next, we rotate our 3D coordinates about two axes; first around the y-axis and then around the x-axis:
\begin{multicols}{2}
\noindent
\begin{equation}
\begin{gathered}
X' = X\sin{\omega_1}+Z\cos{\omega_1}\\
Y' = Y\\
Z' = X\cos{\omega_1}-Z\sin{\omega_1}
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
X'' = X'\\
Y'' = Y'\cos{\omega_2}-Z'\sin{\omega_2}\\
Z'' = Y'\sin{\omega_2}+Z'\cos{\omega_2}
\end{gathered}
\end{equation}
\end{multicols}
where $\omega_1 = -$R.A$_{\text{Tele}}$ and $\omega_2 = -$dec$_{\text{Tele}}$, the negative of the telescope's R.A. and dec.\ pointings in radians, respectively.
\item After these rotations, the sky pixel corresponding to the center of the telescope's view will be located at $(0,0,1)$. The positive edge of the telescope's FoV in the R.A. direction will be given by $\tan{({\text{FoV}_{\text{x}}}/{2})} = {X''}/{Z''}$, where $\text{FoV}_{\text{x}}$ is the known telescope FoV in the x (R.A.) direction, in radians. A similar equality holds for the dec.\ coverage (y-direction). With this, we can now determine which sky pixels are being observed by the telescope using the following conditions:
\begin{equation}
\begin{gathered}
\tan({\text{FoV}_{x}}/{2}) > \left| \frac{X''}{Z''} \right|\\
\tan({\text{FoV}_{y}}/{2}) > \left| \frac{Y''}{Z''} \right|\\
Z'' > 0
\end{gathered}
\end{equation}
Wherever these statements hold for the 3D coordinate array, the telescope observes the sky pixels in the 2D mesh grid (Fig.~\ref{fig:eval_fig}). We store these results as a grid of ones and zeroes.
\end{enumerate}
This algorithm is then repeated for every telescope in the array, using batch multiprocessing. By summing all of the resulting grids, we obtain a map of the sky coverage for the total array.
\subsection{Calculating Performance Metrics}
\label{sec:metrics}
For a given telescope arrangement, we use the sky-coverage grid generated above to calculate key performance metrics as follows:
\textbf{Overlap fraction:} The overlap fraction is the percentage of the array's sky coverage that is viewed by more than one telescope in a single exposure. The overlap fraction depends on the value of two weighted sums, one for the number of sky pixels observed by multiple telescopes and one for sky pixels covered by at least one telescope. Both sums use the coverage grid, C. We define the first sum as: $\chi = \sum_{C_{i,j}>1} \cos{\delta_{i,j}}$, and the second sum: $\xi = \sum_{C_{i,j}>0} \cos{\delta_{i,j}}$. The overlap fraction is then: $F={\chi}/{\xi}$.
\textbf{Total FoV coverage:} In terms of the overlap fraction $F$ and the number of telescopes $N_{\text{tel}}$, the total FoV (in sq.\ deg.\@) is given by: FoV$_{\text{tot}}=N_{\text{tel}}\times\text{FoV}_{x}\times\text{FoV}_{y}\times(1-F)$.
\textbf{Survey depth over time:} An important performance metric is depth of observations over time. The Evryscopes and Argus, for example, gain signal-to-noise by summing (``coadding'') the images from successive exposures over the course of many ratchets and observing nights. We can use the coverage evaluation algorithm to create grids representative of these exposures and ratchets. A summation of these grids shows the number of times a telescope views a specific part of the sky during that series of observations. By pairing this information with signal-to-noise calculations, we obtain the limiting magnitude of the array as a function of sky coverage. As an example, the survey depth as a function of coverage for the Argus Pathfinder is shown in Fig.~\ref{fig:pathfinder} as it builds coverage over a single exposure, an hour of observations, and five nights of coadds.
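A sketch of the overlap-fraction and total-FoV calculation from a coverage grid (names ours; the $N_{\text{tel}}$ factor converts the per-telescope footprint into an array total):

```python
import numpy as np

def overlap_and_total_fov(C, dec_deg, fov_x, fov_y, n_tel):
    """Overlap fraction F and total array FoV (sq. deg.) from a coverage grid.

    C: integer counts of telescopes per sky pixel, dec varying along axis 0.
    dec_deg: 1-D array of pixel declinations (degrees), one per row of C.
    n_tel: number of telescopes in the array.
    """
    w = np.cos(np.deg2rad(dec_deg))[:, None] * np.ones_like(C, dtype=float)
    chi = w[C > 1].sum()  # solid angle seen by more than one telescope
    xi = w[C > 0].sum()   # solid angle seen by at least one telescope
    F = chi / xi
    return F, n_tel * fov_x * fov_y * (1.0 - F)
```

The $\cos\delta$ weighting accounts for sky pixels near the pole subtending less solid angle than equatorial ones on an equirectangular grid.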
\section{TELESCOPE PACKING}
\label{sec:packing}
There is no single optimal solution for packing rectangular telescope FoVs across the spherical sky. For the Argus Array, as with the Evryscopes, we reduce this parameter space by requiring the telescopes to form stripes of continuous declination. We arrange the stripes so that the resulting sky coverage is mostly free of gaps and contains slight overlaps, allowing for photometric solutions between the fields. The ``stripes pattern'' is used because it is a close match of the geometry of the rectangular telescope FoVs. It is also beneficial for data analysis because fields maintain roughly the same positioning in subsequent cameras from one ratchet to the next, at least in equatorial regions. Here, we describe the method for creating the stripes pattern, determine whether a hemispherical array shape is optimal for this pattern, and finally discuss how to split the array into several smaller subarrays with similar arrangements.
\subsection{Monolithic Mount}
\subsubsection{Packing Geometry}
\label{sec:geometry}
The stripes pattern used by the Evryscopes, and the planned Argus Array, is a simple overlapping grid of telescopes with slight adjustments in separations near the pole to remove gaps from distortion effects. Stripes begin at the greatest observable dec.\ from the survey deployment site, set either by the pole or the maximum allowable viewing airmass, and are separated in angular spacing by $\text{FoV}_y\times(1-F_y)$. Here, F$_y$ is the specified overlap in the dec.\ direction. The dec.\ of the last stripe is also set by the minimum allowable observing altitude. In each row, telescopes are separated by $\text{FoV}_x\times(1-F_x)\times\sec{\delta}$, where F$_x$ is the specified overlap in the R.A. direction and $\delta$ is the declination of the stripe. Around the pole, each row will contain only one or a few telescopes.
The stripes extend in both the positive and negative R.A. direction starting from an R.A. of 180 degrees, which we have arbitrarily defined to be the local meridian and center of the array, down to the observing altitude cutoff. We center each stripe on the meridian (assuming the array is pointed directly overhead), such that the two central telescopes' overlap is centered at an R.A. of 180 degrees. This makes the mount symmetric, and allows for the array to be more easily divided between multiple mounts if necessary. Depending on the parameters of the array, this method may produce a grid with more telescopes than available in the array. Points closest to the horizon can then be chosen by hand to be eliminated from the grid.
Near the pole, the NCP for northern-hemisphere surveys and the south celestial pole (SCP) for southern surveys, packing becomes more difficult. The distortion associated with projecting the rectangular FoVs onto that part of the sky can create gaps in sky coverage. To handle this, telescopes at decs.\ greater than $\sim$80 degrees require some manual adjustment. Gaps can be filled by creating more overlap in the y-direction. For the Argus Array, we arranged the rows above 80 degrees dec.\ so that they are separated by only 60\%-85\% of the FoV$_y$. A similar tactic can be used to achieve the same results by reducing the separation of the telescopes in R.A. Decreasing the separation of the telescopes does sacrifice total coverage area, so it is important to balance having small gaps with losing sky coverage due to large overlaps.
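The stripe construction can be sketched as follows (names ours; the manual polar adjustments described above are omitted, so stripes very near the pole may be left empty):

```python
import numpy as np

def stripe_pointings(fov_x, fov_y, overlap_x, overlap_y,
                     dec_max=90.0, dec_min=-20.0):
    """Generate (ra, dec) centers, in degrees, for the stripes pattern.

    Stripes step down from dec_max in increments of fov_y*(1 - overlap_y);
    within each stripe, telescopes are spaced fov_x*(1 - overlap_x)*sec(dec)
    apart, with the central overlap centered on the R.A. = 180 deg meridian.
    """
    pointings = []
    dec = dec_max - fov_y * (1.0 - overlap_y) / 2.0  # first stripe below the pole
    while dec >= dec_min:
        step = fov_x * (1.0 - overlap_x) / np.cos(np.deg2rad(dec))
        n_side = int(np.floor(180.0 / step))
        # Symmetric centers at R.A. = 180 + (k + 1/2)*step, k = -n_side..n_side-1.
        for k in np.arange(-n_side, n_side):
            ra = 180.0 + (k + 0.5) * step
            if 0.0 <= ra < 360.0:
                pointings.append((ra, dec))
        dec -= fov_y * (1.0 - overlap_y)
    return pointings
```

The half-integer offsets place the two central telescopes' overlap exactly on the meridian, preserving the mount symmetry used for splitting the array into subarrays.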
\subsubsection{Optimizing the Array Shape}
\label{sec:shape}
Using the Nelder-Mead optimization method, we evaluated non-spherical array shapes. We hypothesized that telescopes with rectangular FoVs arranged into a smaller ellipsoidal array might provide sky coverage and overlap similar to a larger spherical arrangement. For the optimization, we began with the stripes pattern of packing telescopes in a hemisphere as described above, but allowed the telescope pointings to vary as the shape evolved from spherical to ellipsoidal. We optimized for sky coverage and overlaps and found no significant decrease in arrangement size for similar coverage metrics. While in some configurations an ellipsoidal mount helped reduce coverage gaps near the pole, it introduced distortions that often left gaps in sky coverage in central-sky regions. We conclude that the best array shape is hemispherical.
\subsubsection{Application: The Argus Array}
\label{sec:app}
We now explore the packing of the Argus Array. Argus will contain 900 8-in telescopes. Each telescope has a 2.45 deg.\ by 3.67 deg.\ FoV. We require $\sim$1\% overlap in both the dec.\ and R.A. directions between telescopes, and will have a maximum observing airmass of two. Argus will be a northern-hemisphere survey located at a latitude above 30 degrees, and therefore will be able to observe the NCP. Using the stripes pattern outlined above, and manually adjusting pointings near the pole to minimize gaps, we achieve a total sky coverage of over 7,850 square degrees with 3.4\% total overlaps (Fig.~\ref{fig:coverage_example}). By placing telescopes in a hemispherical array, and requiring that they point normal to the spherical surface formed by their apertures, we find the minimum radius defining the arrangement to be $r\approx\frac{d}{\text{FoV}_\text{min}}=21.5$ ft, where $d$ is the telescope tube diameter and $\text{FoV}_\text{min}$ is the smaller of the two FoV angles, expressed in radians. A structural mount holding the telescopes in a hemisphere\cite{law_2022} must be at least a few feet larger than this radius, although the new Argus pseudofocal bowl design\cite{argus_spie} would be somewhat smaller because it does not require structure extending over the full hemisphere.
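As a quick arithmetic check of this expression, the quoted $\sim$21.5 ft follows from $\text{FoV}_\text{min}=2.45$ deg and a tube diameter near 11 in. Note that the 11 in figure is our assumption, chosen to reproduce the quoted radius rather than taken from the Argus specifications:

```python
import math

# Minimum hemisphere radius from the packing condition r ~ d / FoV_min,
# with FoV_min in radians. The 11 in tube diameter is an assumed value
# consistent with the quoted 21.5 ft, not an Argus specification.
d_tube_in = 11.0                          # assumed tube diameter [in]
fov_min_rad = math.radians(2.45)          # smaller FoV axis, 2.45 deg
r_in = d_tube_in / fov_min_rad            # minimum radius [in]
r_ft = r_in / 12.0
print(round(r_ft, 1))                     # ~21.4 ft, close to the quoted 21.5 ft
```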
\subsection{Multiple Mount Configurations}
\label{sec:multi}
When splitting the Argus Array into several smaller subarrays, we require that each subset contain similar numbers of telescopes and produce similar on-sky coverages. The method detailed below requires testing different numbers of mounts and determining the optimal value for one key parameter, the ``pattern constant''. This parameter, when specified for a set number of N subarrays, determines the pattern with which telescopes are placed into subarrays, and is described in more detail below.
We start with the same sky packing as the monolithic array, and create a list of values ranging from 1 to N (1,2,...,N), representing the possible subarrays telescopes can be placed into. This list will be updated throughout the following algorithm. The first value in this list is the ``central subarray.'' This value sets which subarray the first telescope from each row will be placed into. Starting with the lowest declination stripe, the algorithm proceeds as follows:
\begin{enumerate}
\item Place the first telescope on the positive side of the meridian into the central subarray for that row.
\item Move across the row in the direction of positive R.A., placing subsequent telescopes into subsequent subarrays until each telescope on the positive side of the meridian is placed. When all subarrays have one telescope from this row, start back at the beginning of the subarray list, placing the next telescope into subarray 1.
\item Mirror this pattern across the meridian. Every telescope in this dec.\ stripe should now be in a subarray.
\item Move up to the next dec.\ row and advance through the subarray list by the pattern constant, wrapping around the list when necessary. The head of this updated list defines the new central subarray. For example, with 5 mounts and a pattern constant of 2, the first telescope in the second dec.\ row would be placed in subarray 3, the following row would start with subarray 5, and the row above that with subarray 2.
\item Repeat steps 1-4 for all declination stripes until every telescope is distributed into a subarray.
\end{enumerate}
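The round-robin assignment in steps 1--4 can be sketched as follows; the function name and the `row_counts` input (per-stripe telescope counts on the positive-R.A. side) are illustrative, not taken from the Argus software:

```python
def assign_subarrays(row_counts, n_sub, pattern_const):
    """Round-robin subarray assignment for the multi-mount configuration.

    row_counts[i]: number of telescopes on the positive-R.A. side of the
    meridian in declination stripe i (lowest dec. first). Returns, per
    stripe, the subarray indices (1..n_sub) for those telescopes; the
    negative-R.A. side mirrors each list across the meridian.
    """
    order = list(range(1, n_sub + 1))       # subarray cycle; head = "central"
    assignments = []
    for count in row_counts:
        # Cycle through the subarray list, wrapping back to its head.
        row = [order[i % n_sub] for i in range(count)]
        assignments.append(row)
        # Advance the cycle by the pattern constant before the next stripe.
        shift = pattern_const % n_sub
        order = order[shift:] + order[:shift]
    return assignments
```

With 5 mounts and a pattern constant of 2, successive stripes start with subarrays 1, 3, 5, 2, \ldots, matching the example in step 4.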
For the Argus Array, we found that using seven mounts with a pattern constant of five allowed significantly smaller arrangements, with radii of 10 ft. Each dome would contain $\sim$130 telescopes. A comparison between the single- and multi-mount configurations can be seen in Fig.~\ref{fig:arr_comp}. For the Argus pattern, each mount also contained at least one telescope at every dec.\@ (except near the pole, where one telescope was placed manually), which allows each mount to cover the entire sky over the observing night (Fig.~\ref{fig:multi_dome}).
\section{CONCLUSIONS}
\label{sec:conc}
Deployment of large optical arrays, such as the Argus Array, is becoming more feasible with recent advancements in commercially available astronomy equipment. These optical arrays will contain hundreds, or even thousands, of medium-aperture telescopes, and determining their best arrangement is challenging because it requires large mounting structures. In this work, we explore the methods used to optimize telescope packing for the Argus Array in both single- and multiple-mount configurations. We develop an algorithm for rapidly evaluating the resulting sky-coverage performance metrics: the total array FoV, the overlap fraction, and plots of sky coverage. With these metrics we can rapidly test different packing layouts and input parameters. We develop a method for packing telescopes in a hemispherical array, which we confirm to be the optimal arrangement. For the Argus Array we achieve continuous sky coverage of over 7,850 square degrees, with few-percent overlaps in coverage between telescopes, using a mount spanning $\sim$45 ft in diameter.
We find that by distributing the array over seven mounts we can reduce the size of each subarray by half, while allowing each subarray to observe the entire sky over the course of a single night. Building several smaller mounting structures could prove more cost-effective, at the cost of added complexity. The reduced size of the structures could allow the multi-mount configuration to be constructed at sites with limited accessibility, where building a monolithic array may not be possible. Perhaps the most important benefit of a multi-mount array is that it enables a ``full-sized'' prototype system to be constructed initially; completion of the array then only requires replicating the prototype several times over. The Argus Pathfinder will help determine whether this strategy should be followed when building the full Argus Array.
\acknowledgments
This paper was supported by NSF MSIP (AST-2034381) and by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program. This research, and the construction of the Argus prototypes, is undertaken with the collaboration of the Be A Maker (BeAM) network of makerspaces at UNC Chapel Hill and the UNC BeAM Design Center.
\bibliography{report} %
\bibliographystyle{spiebib} %
|
Title:
Reconstructing the Assembly of Massive Galaxies. II. Galaxies Develop Massive and Dense Stellar Cores as They Evolve and Head Toward Quiescence at Cosmic Noon |
Abstract: We use the SED-fitting code Prospector to reconstruct the nonparametric star
formation history (SFH) of massive ($\log M_*>10.3$) star-forming galaxies
(SFGs) and quiescent galaxies (QGs) at redshift $z_{\rm{obs}}\sim2$ to
investigate the joint evolution of star-formation activity and structural
properties. We find significant correlations between the SFH of the galaxies
and their morphology. Compared to extended SFGs, compact SFGs are more likely
to have experienced multiple star-formation episodes, with the fractional mass
formed during the older ($\ge1$ Gyr) episode being larger, suggesting that
high-redshift SFGs assembled their central regions earlier and then kept
growing in central mass as they become more compact. The SFH of compact QGs
does not significantly differ from the average for this category, and shows an
early burst followed by a gradual decline of the star formation rate. The SFH
of extended QGs, however, is similar to that of post-starburst galaxies and
their morphology is also frequently disturbed. Knowledge of the SFH also
enables us to empirically reconstruct the structural evolution of individual
galaxies. While the progenitor effect is clearly observed and accounted for in
our analysis, it alone is insufficient to explain the observed structural
evolution. We show that, as they evolve from star-forming phase to quiescence,
galaxies grow massive dense stellar cores. Quenching begins at the center and
then propagates outward to the rest of the structure. We discuss possible
physical scenarios for the observed evolution and find that our empirical
constraints are in good quantitative agreement with the model predictions from
dissipative accretion of gas to the center followed by massive starbursts
before final quiescence (wet compaction).
| https://export.arxiv.org/pdf/2208.04325 |
\defcitealias{Ji2022}{Paper~I}
\title{Reconstructing the Assembly of Massive Galaxies. II. Galaxies Develop Massive and Dense Stellar Cores as They Evolve and Head Toward Quiescence at Cosmic Noon}
\correspondingauthor{Zhiyuan Ji}
\email{zhiyuanji@astro.umass.edu}
\author[0000-0001-7673-2257]{Zhiyuan Ji}
\affiliation{University of Massachusetts Amherst, 710 North Pleasant Street, Amherst, MA 01003-9305, USA}
\author[0000-0002-7831-8751]{Mauro Giavalisco}
\affiliation{University of Massachusetts Amherst, 710 North Pleasant Street, Amherst, MA 01003-9305, USA}
\keywords{Galaxy formation(595); Galaxy evolution(594); Galaxy structure(622); High-redshift galaxies(734); Galaxy quenching (2040)}
\section{Introduction} \label{sec:intro}
Observations over a wide range of cosmic time, from the present day up to the so-called epoch of ``cosmic noon'' at $1<z<4$ \citep{Madau2014}, have shown that galaxies are characterized by a bimodality of colors at UV through NIR wavelengths (e.g. see \citealt{Baldry2004, Bell2004,Brammer2009,Williams2009}, among many others). Blue galaxies on average are actively forming stars and have higher star formation rates (SFRs) than red galaxies, which generally have substantially lower levels of star formation activity, leading to a natural classification into star-forming galaxies (SFGs) and quiescent galaxies (QGs). Understanding the transition from SFGs to QGs, a process referred to as galaxy quenching, remains a key missing piece toward a complete picture of galaxy evolution.
Broadly, two general categories of quenching mechanisms -- mass quenching and environmental quenching -- have been identified \citep{Peng2010}. Environmental quenching is associated with the external environment of a galaxy, and it might be able to alter the structural evolution of galaxies \citep[e.g.,][]{Valentinuzzi2010,Papovich2012,Cappellari2013,HuertasCompany2013,Strazzullo2013,Newman2014,Matharu2019}. Observations generally show that the environmental effect on galaxy evolution primarily happens in relatively low-mass (stellar mass $\log M_*<10$), low-redshift ($z<1.5$) galaxies in dense environments such as groups/clusters of galaxies \citep{Peng2010,Guo2017,Kawinwanichakij2017, Ji2018}, which, however, are not the subject of the investigation presented here. In this work, we specifically focus on massive galaxies at the cosmic noon epoch, defined here as having stellar mass $\log M_*\ge 10.3$ and being in the redshift range $1<z<4$ (see section \ref{sec:sample}). Observations show that at this epoch massive galaxies begin to quench in large numbers \citep[e.g.,][]{Muzzin2013} and suggest that the quenching should be mainly due to physical processes internal to the galaxies themselves, i.e., mass quenching (see a recent review by \citealt{Man2018} and references therein).
Strong correlations are observed between the star-formation activity and the structural properties of galaxies, such as size, light profile and central mass density. It is now well-established that these correlations persist at least out to $z\sim3$. These include the emergence of the Hubble Sequence at $z>2$ \citep[e.g.][]{Franx2008,Kriek2009,Wuyts2011}, and the dependence of the mass-size relation and \sersic index $n$ on specific star formation rate (sSFR, e.g. \citealt{Williams2010,Wuyts2011,Newman2012,Barro2013,Patel2013,vanderWel2014,Shibuya2015}). On average, QGs have much smaller sizes, steeper light profiles (i.e. larger $n$) and larger central mass densities than SFGs of similar mass and redshift. The mass-size relation of QGs is also substantially steeper than that of SFGs. Phenomenologically, these correlations could be interpreted as evidence of a causal link between the structural transformations and the quenching of galaxies, but because the relative timing of these two events remains empirically unconstrained, whether the causality is real remains unknown. In fact, some evidence suggests that at $z\sim2$ galaxies quench first and then secularly transform their structures later. This is based on the observations that (1) a substantial number of massive QGs at redshift $z= 2\sim3$ appear to have a disk-like morphology \citep[e.g.][]{McGrath2008,Bundy2010,vanderWel2011,Bruce2012,Chang2013}, and (2) the spatially resolved spectra of a handful of strongly lensed QGs show disk-like kinematics, namely systems that are predominantly supported by rotation, although with a larger velocity dispersion than modern disks \citep{Newman2015,Toft2017,Newman2018}.
Imaging observations, especially those with high angular resolution taken with the \textit{Hubble Space Telescope} ({\it HST}), have unveiled what appears to be a threshold of central stellar-mass surface density ($\Sigma$) above which galaxies are found to be predominately quiescent. This observational finding seemingly suggests that the presence of a dense central core in massive galaxies might be a necessary condition for, or a consequence of, quenching \citep[e.g.][]{Kauffmann2003,Franx2008,Cheung2012,Fang2013,Lang2014,Whitaker2017,Barro2017,Lee2018}, and this consideration motivated a number of theoretical studies to explore possible physical processes that would result in such a phenomenology. For example, one such mechanism is actually a class of processes, generically referred to as ``wet compaction'', whose main feature is highly dissipative gas accretion toward the central regions of a galaxy, which, in turn, promotes star formation at a rate enhanced relative to the average SFR \citep{Dekel2009}. Specifically, the larger gas mass fraction of galaxies at high redshifts compared to local ones (\citealt{Tacconi2020} and references therein) makes the early galactic disks gravitationally unstable on scales of $\lesssim1$ kpc \citep{Toomre1964}. The disks can thus fragment into star-forming clumps \citep[e.g.][]{Genzel2008,Guo2015} which then migrate toward the center of galaxies, trigger central starbursts and form dense central cores. In the absence of further gas inflow, this leads to central gas depletion and, finally, the quenching of central star formation \citep{Ceverino2010,Dekel2014,Wellons2015,Zolotov2015,Tacchella2016}. Simulations suggest that any mechanism that causes a drastic dissipative loss of angular momentum would lead to hydrodynamical instabilities and compaction, such as wet major mergers \citep{Zolotov2015,Inoue2016} or collisions of counter-rotating streams that feed the galaxy \citep{Danovich2015}.
An apparent correlation between quiescence and central density, including the apparent $\Sigma$ threshold, however, does not necessarily imply any physical causality between the two properties, because it can also be the result of the ``progenitor effect'' \citep{Lilly2016,Ji2022}. In an expanding universe, bound systems should keep memory of the cosmic density at the time when the halo collapses, implying that halos formed earlier should have higher densities than those formed later \citep{Mo2010}. Therefore, at a fixed epoch QGs are expected to be smaller and denser than SFGs, as the star-forming progenitors of what are being observed as QGs have necessarily formed earlier. The implication is that the apparent correlations between structural and star-formation properties reflect the collective effects of the interplay among different physical mechanisms, e.g., the progenitor effect vs. the physical compaction. This makes the empirical investigation of the relative contribution from each of them critical for understanding the structural transformations of galaxies and their relationships with quenching.
A substantial body of existing observational studies has investigated the redshift evolution of the mass-size and mass-$\Sigma$ relations of SFGs and QGs. Both relations contain, \textit{in an integral form}, information about the structural evolution: a galaxy population observed at redshift \zobs includes galaxies that formed at all epochs $z>$ \zobs. The progenitor effect, therefore, is inherently mixed with whatever other physical mechanism (e.g., wet compaction) is at play. This makes the interpretation of the observations model-dependent (see e.g., section 3.3 of \citealt{Barro2017}).
To account for the progenitor effect on the apparent structural evolution, the key is to identify galaxies that formed at different epochs, so that we can avoid aggregating all galaxies together. Some early attempts achieved this by selecting galaxies at a constant number density \citep[e.g., ][]{vanDokkum2010,Patel2013}, based on the idea of selecting dark matter halos of the same mass, which at high redshifts would be roughly coeval. In our view, a further step forward is to utilize the star-formation history (SFH) of galaxies, i.e. their full stellar-mass assembly histories, and to select galaxies following a common evolutionary path, namely galaxies that formed at a similar epoch, have a similar stellar mass, and are observed at a similar evolutionary stage. In this way any form of the progenitor effect is naturally taken into account, and we can statistically reconstruct the structural evolution of individual galaxies. By looking at how the average properties of galaxies that formed at the same time and are observed at the same evolutionary stage change as a function of time, we can empirically constrain the physics behind the structural transformations of galaxies.
With the modelling of Spectral Energy Distributions (SEDs) growing in accuracy and sophistication, and with the availability of high-quality data that cover truly panchromatic swathes of wavelengths, significant progress has been made to reconstruct the SFH of galaxies at high redshifts, both in parametric \citep[e.g.][]{Maraston2010,Papovich2011,Ciesla2016,Lee2018,Carnall2018,Carnall2019} and nonparametric forms \citep[e.g.][]{Ocvirk2006,Tojeiro2007,Pacifici2012,Leja2017,Leja2019,Belli2019,Iyer2019}. The flexibility of the nonparametric form, in particular, is critical for not only an unbiased inference of physical parameters such as stellar mass, SFR and stellar age \citep[e.g][]{Leja2019,Lower2020}, but also for the fidelity of the reconstruction of the SFH itself, which can have arbitrarily complex forms generally not captured by parametric models \citep{Leja2019, Johnson2021, Tacchella2022}.
In the first paper of this series (\citealt{Ji2022}, hereafter \citetalias{Ji2022}), we have utilized the fully Bayesian SED fitting code \prospector \citep{Leja2017,Johnson2021} to reconstruct the nonparametric SFH of a sample of massive QGs at $z\sim2$ to quantify the progenitor effect. We found that the progenitor effect is strong in QGs with $\log_{10}(M_*/M_\sun)=10.3\sim11$, while much milder in more massive QGs, implying that the post-quenching mass and size growths of the latter happen via mergers, which reduce the memory of the time of their formation in their structural properties. What remains to be explored is if, in addition to the progenitor effect, the central regions of galaxies grow denser and more massive relative to the outskirts as they evolve toward quenching. In this second paper, we expand our investigation to all massive galaxies at $z\sim2$, selected from the CANDELS/COSMOS and GOODS fields, regardless of their star-formation activity. We use the SFHs to empirically and quantitatively estimate the contributions from different physical mechanisms to the apparent correlations between star-formation and structural properties. Throughout this paper, we adopt the AB magnitude system and the $\Lambda$CDM cosmology with parameters $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$ and $\rm{h = H_0/(100 kms^{-1}Mpc^{-1}) = 0.7}$.
\section{The sample} \label{sec:sample}
The parent sample considered in this work is extracted from the portions of the COSMOS, GOODS-South and GOODS-North fields observed by the CANDELS program \citep{Grogin2011,Koekemoer2011}. The original CANDELS photometric catalogs in each field already include \hst, {\it Spitzer} and ground-based broadband data, and have been augmented by additional narrow- and medium-band data from deep ground-based imaging.
The photometric catalogs used in this work are the same as those in \citetalias{Ji2022}. In brief, for the CANDELS/COSMOS field, we use the photometric catalog from \citet{Nayyeri2017}, which includes the intermediate- and narrow-band optical photometry from Subaru \citep{Taniguchi2015} and medium-band NIR photometry from Mayall NEWFIRM \citep{Whitaker2011}. For the GOODS-North field, we use the photometric catalog from \citet{Barro2019}, which includes photometry in twenty-five optical medium bands acquired during the SHARDS survey \citep{PerezGonzalez2013}. Finally, for the GOODS-South field, we use the latest photometric catalog from the ASTRODEEP project \citep{Merlin2021}. This expands the original \citet{Guo2013} catalog in the GOODS-South with eighteen new medium-band photometric data points obtained with the Subaru SuprimeCAM \citep{Cardamone2010}, and five additional NIR photometric bands observed with the Magellan Baade FourStar during the ZFOURGE survey \citep{Straatman2016}. In all cases the photometry has been aperture-matched, and low angular-resolution images were de-blended based on positional priors from high angular-resolution ones. Overall, densely sampled ($\approx$ 40 bands), deep photometry covering the spectral range from the rest-frame UV to NIR is available for galaxies at $z\sim1-4$ in all three fields considered here.
Starting from the CANDELS photometric catalog, we first impose a cut on the integrated isophotal signal-to-noise ratio, S/N $\ge10$, in the \hst/WFC3 \H band to ensure good-quality photometry and morphology. We then remove galaxies that have clear evidence of AGN activity using the \texttt{AGNFlag} in the aforementioned catalogs. Because the focus of this work is on massive galaxies at cosmic noon, we then select galaxies with redshift $1.2<$ \zobs $<4$ and stellar mass $\log_{10}(M_*/M_\odot)>10$ using the existing measurements of \citet{Nayyeri2017} for the COSMOS field and of \citet{Lee2018} for the GOODS fields. This initial cut on stellar mass also ensures high stellar-mass completeness, namely $>80\%$, in all fields considered in this work (see \citealt{Ji2018} for the CANDELS/GOODS fields and \citealt{Nayyeri2017} for the CANDELS/COSMOS field). In principle, these cuts on \zobs and $M_*$ would not be needed, because we could use the results from our new SED modeling with \prospector (section \ref{sec:prosp}) from the outset. In practice, however, running \prospector is computationally very intensive\footnote{In our case, the typical running time for a single galaxy is $\approx10-20$ hours}, making it impractical to run it on the entire CANDELS catalogs for the primary selection. Thus, we first make the cuts on \zobs and $M_*$ from previous measurements, and then run \prospector to derive the SFHs of a substantially smaller sample.
Following \citetalias{Ji2022}, to ensure the quality of the adopted de-blending procedures of low-resolution images and hence the reliability of the photometry, we have visually inspected the galaxies of our sample in all bands in order to select galaxies with high-quality photometry (see Figure 2 of \citetalias{Ji2022}). We retain in the sample only galaxies with \texttt{GALFITFlag = 0} from \cite{vanderWel2012} to ensure that morphological measurements are reliable (section \ref{sec:morph}). Finally, using the more refined measures of the stellar mass from our \prospector fitting (section \ref{sec:prosp}), we adjust the stellar-mass cut to $\log_{10}(M_*/M_\sun)>10.3$ such that the environmental effect should only play a relatively minor role in the evolution and quenching of galaxies \citep[e.g.,][]{Ji2018}. Our final sample contains 1317 galaxies in total, with 492 located in the COSMOS, 448 in the GOODS-South and 377 in the GOODS-North. In Figure \ref{fig:uvj}, we show the distribution of the final sample in the rest-frame UVJ diagram \citep{Williams2009}.
\section{Measurements}
\subsection{SED Fitting with \prospector}\label{sec:prosp}
We have derived the integrated properties of the stellar populations of the galaxies in our sample by modeling their SEDs with the \prospector package \citep{Leja2017, Johnson2021}.
\prospector is built upon a fully Bayesian framework, and is ideally positioned to exploit the vast improvement in quality, quantity and wavelength coverage of modern (mostly) photometric data accumulated over the last two decades in legacy fields, such as the three fields that are subjects of this work, i.e. the COSMOS and the two GOODS fields. Packages such as \prospector now make it possible to fit the SED of high-redshift galaxies with advanced, complex stellar population synthesis models that provide robust constraints on the stellar age of galaxies. One key feature of \prospector is that it allows flexible parameterizations of galaxies' SFHs, including a piece-wise, nonparametric form which we have elected to adopt in this work.
Adopting a nonparametric SFH means that the assembly history of galaxies can be arbitrarily complex and yet the code yields an unbiased inference of the properties of stellar populations. Although the reconstruction of SFHs is still a relatively new development, several recent studies have tested its robustness with \prospector using synthetic observations generated from cosmological simulations \citep{Leja2019, Johnson2021,Tacchella2022, Ji2022}. These tests have demonstrated that, at least for the SFHs of the synthetic galaxies encountered in the simulations, \prospector is capable of recovering the nonparametric SFHs with high fidelity, provided high-quality, panchromatic coverage of the SED is available.
In this work, we assume the same model priors and settings of \prospector as we did in \citetalias{Ji2022}, and we refer readers to that paper for a detailed description and the quality tests that we have already conducted. Here we only briefly summarize the key assumptions. For the basic setup of the fitting procedure, we adopt the Flexible Stellar Population Synthesis (FSPS) code \citep{Conroy2009,Conroy2010} with the MIST stellar isochrone libraries \citep{Choi2016,Dotter2016} and the MILES stellar spectral libraries \citep{Falcon-Barroso2011}. We assume the \citet{Kroupa2001} initial mass function (IMF), the \citet{Calzetti2000} dust attenuation law, for which we fit the V-band dust optical depth with a uniform prior $\tau_V\in(0,2)$, and the \citet{Byler2017} nebular emission model. During the SED modeling, we assume that all stars in a galaxy have the same metallicity $Z_*$, and fit it with a uniform prior in logarithmic space, $\log_{10}(Z_*/Z_\sun)\in(-2,0.19)$, where $Z_\sun=0.0142$ is the solar metallicity; the upper limit of the prior is the highest metallicity available in the MILES library. We do not keep track of the time evolution of the metallicity. Finally, we also fit the redshift \zobs as a free parameter, but with a strong prior in the form of a normal distribution centered on the best available redshift from the CANDELS catalogs (spectroscopic when available, photometric otherwise), with a width equal to the corresponding redshift uncertainty, i.e. either the spectroscopic redshift error bar or the width of the posterior distribution function of the photometric redshift.
Following recent studies \citep{Leja2017,Leja2019, Leja2021, EstradaCarpenter2020, Tacchella2022, Ji2022}, we model the nonparametric SFH with a piece-wise step function with nine bins of lookback time. Specifically, the first and second bins are fixed to be 0 - 30 and 30 - 100 Myr, the last bin is assumed to be 0.9$\rm{t_H}$ - $\rm{t_H}$ where $\rm{t_H}$ is the Hubble Time at \zobs, and the remaining six bins are evenly spaced in logarithmic lookback time between 100 Myr and 0.9$\rm{t_H}$. Also, as done in \citetalias{Ji2022}, our fiducial measures throughout the paper have been obtained assuming the nonparametric SFH with the Dirichlet prior, which (1) results in a symmetric prior on stellar age and sSFR, and a constant SFH with $\rm{SFR(t) = M_* /t_H}$; (2) has been tested with nearby galaxies across all different types \citep{Leja2017,Leja2018}; and (3) has been validated with simulated observations of synthetic galaxies from cosmological simulations \citep[][]{Leja2019,Ji2022}. However, we stress that the choice of the prior for the nonparametric SFH still remains somewhat arbitrary and the validity of each assumption still needs to be further investigated. For example, some other recent work \citep[e.g][]{EstradaCarpenter2020,Tacchella2022} adopted a so-called Continuity prior that is very similar to the Dirichlet prior, except that it strongly disfavours sudden changes of SFR in adjacent time bins, such as those resulting from powerful bursts of star formation. Which prior should be used for high-redshift galaxies remains to be tested with future spectroscopic data that can provide independent measures of parameters such as stellar ages and the timescales of star formation and of quenching. It is very likely that we will need a more complex prior than the aforementioned two to better capture different episodes of galaxies' assembly histories \citep{Suess2022}, and the choice of such priors might also depend on galaxy type.
Here, without further constraints from other independent measurements, we adopt the well-tested Dirichlet prior as our fiducial one. We have also run another set of \prospector fits for the entire sample with the Continuity prior, to check whether the choice of prior introduces significant differences in our results and conclusions. As Appendix \ref{app:cont_diri} shows, the choice of the nonparametric SFH prior leaves the key conclusions of this work unchanged. All results presented in the following, therefore, are obtained using the Dirichlet prior.
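The lookback-time binning described above can be reconstructed in a few lines; this is an illustrative sketch of the bin definition, not code from \prospector itself:

```python
import numpy as np

def sfh_bin_edges(t_H_gyr):
    """Edges (in Gyr of lookback time) of the nine SFH bins described in
    the text: 0-30 Myr, 30-100 Myr, six bins evenly log-spaced between
    100 Myr and 0.9 t_H, and a final bin from 0.9 t_H to t_H.
    Illustrative reconstruction, not Prospector code.
    """
    # 7 log-spaced edges delimit the 6 interior bins.
    inner = np.logspace(np.log10(0.1), np.log10(0.9 * t_H_gyr), 7)
    return np.concatenate([[0.0, 0.03], inner, [t_H_gyr]])
```

For a galaxy observed at $z\sim2$ (Hubble time $\approx3.3$ Gyr), this yields ten edges defining the nine bins.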
Finally, once we have obtained the SFHs of the individual galaxies in our sample, we use a number of metrics, introduced in \citetalias{Ji2022}, to characterize the overall shape of the SFHs, including different relevant timescales. We refer readers to \citetalias{Ji2022} for the definitions and characteristics of these metrics. Here we only briefly outline the ones discussed in the main text of this work:
\begin{itemize}
\item Mass-weighted age \tage
\item Formation redshift \zf: the redshift at which the time interval between \zf and \zobs equals \tage
\item Asymmetry \tausf/\tauq (see equation 5 in \citetalias{Ji2022}): the ratio of the mass-weighted time width of the period \tage $\rm{<t<t_H}$ (\tausf) to that of the period $0<t<$ \tage (\tauq)
\end{itemize}
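A schematic computation of the first metric from a binned SFH is shown below (a sketch using the standard mass-weighted definition $\langle t\rangle = \int t\,{\rm SFR}\,dt / \int {\rm SFR}\,dt$ in lookback time; the asymmetry function is a simplified stand-in, not the exact equation 5 of \citetalias{Ji2022}):

```python
import numpy as np

def mass_weighted_age(t_mid, sfr, dt):
    """<t> = sum(t * SFR * dt) / sum(SFR * dt), with t_mid the lookback
    time at each SFH bin center and dt the bin widths."""
    mass = sfr * dt
    return np.sum(t_mid * mass) / np.sum(mass)

def asymmetry(t_mid, sfr, dt):
    """Schematic tau_sf/tau_q: mass-weighted time widths of the SFH on
    either side of the mass-weighted age (illustrative only)."""
    t_age = mass_weighted_age(t_mid, sfr, dt)
    mass = sfr * dt
    older, younger = t_mid > t_age, t_mid <= t_age
    tau_sf = np.sum((t_mid[older] - t_age) * mass[older]) / np.sum(mass[older])
    tau_q = np.sum((t_age - t_mid[younger]) * mass[younger]) / np.sum(mass[younger])
    return tau_sf / tau_q
```

For a constant SFH the asymmetry is unity by construction, matching the intuition that \tausf/\tauq measures how lopsided the mass assembly is around \tage.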
\subsubsection{Comparing with Existing Measurements} \label{sec:prosp_compare}
We first compare our measurements of $M_*$, SFR and sSFR with previous ones. Specifically, for the galaxies in the CANDELS/COSMOS field we compare our measures with those of \citet{Nayyeri2017}, who assumed a parametric, exponentially declining SFH, i.e. $\rm{SFR(t)\propto e^{-t/\tau}}$. Their SFRs at \zobs were estimated by combining the UV and IR luminosities, i.e. $\rm{SFR_{UV}^{Obs} + SFR_{IR}}$, the sum of the observed far-UV emission and the reprocessed thermal dust emission in the IR, after calibration onto a scale of SFR \citep{Kennicutt1998}. For the galaxies in the CANDELS/GOODS fields we compare our measures with those of \citet{Lee2018}, who treated the functional form of the SFH as a free parameter, chosen from a set of five parametric analytical models. Their SFRs at \zobs were also estimated using $\rm{SFR_{UV}^{Obs} + SFR_{IR}}$ when a galaxy is detected in the 24$\micron$ \textit{Spitzer}/MIPS images and/or at longer wavelengths, or using the SED-derived SFRs otherwise. In Figure \ref{fig:compare_pre} we show the comparisons in the CANDELS/COSMOS and GOODS fields, respectively. We also run Pearson correlation tests between our \prospector measurements and the previous ones for each parameter. In all cases we find a rather tight correlation between the two sets of measurements, with a Pearson correlation coefficient $\gtrsim0.6$. Yet, systematic deviations are also clearly seen.
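The comparison amounts to a correlation test plus a median offset between two catalogs of log-space measurements; a minimal sketch (function and variable names are illustrative, not from our pipeline):

```python
import numpy as np
from scipy.stats import pearsonr

def compare_measurements(logx_this_work, logx_literature):
    """Pearson correlation coefficient and median systematic offset
    (in dex) between two sets of log-space measurements, e.g. log M*
    from this work vs. a literature catalog."""
    r, _p = pearsonr(logx_this_work, logx_literature)
    offset = np.median(logx_this_work - logx_literature)
    return r, offset
```

The offset directly quantifies systematic shifts such as the $0.3-0.4$ dex difference in $M_*$ discussed below.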
As Figure \ref{fig:compare_pre} shows, on average our \prospector fitting returns a $0.3-0.4$ dex larger $M_*$ than the previous measures. This systematically larger $M_*$ has been extensively discussed and is now well understood \citep{Leja2019,Leja2021}. Using nonparametric SFHs returns older stellar ages, and hence larger stellar masses, than using parametric ones, because the former can more easily accommodate a larger fraction of older stellar populations for a given SED \citep{Carnall2019,Leja2019}. There is also evidence that this systematically larger $M_*$ is more accurate than that found when using parametric forms for the SFH: (1) the evolution of the galaxy stellar-mass function inferred using nonparametric SFHs during SED fitting is more consistent with direct observations \citep{Leja2020}; and (2) tests using synthetic observations generated from cosmological simulations show that $M_*$ derived from SED fitting with nonparametric SFHs is unbiased, and in much better agreement with the intrinsic value than that derived with parametric SFHs \citep{Lower2020}.
Regarding the SFR measures, our \prospector fitting procedure returns values that are $0.3-0.8$ dex smaller than the previous measures, in quantitative agreement with the findings of \citet{Leja2020} and \citet{Leja2021}. These authors have shown that this is primarily due to older, evolved stellar populations in massive galaxies. Specifically, the older stellar populations can contribute to the IR flux via dust heating, which consequently leads to an overestimated $\rm{SFR_{IR}}$. This systematic effect is particularly important in low-sSFR galaxies \citep{Conroy2013}, and it becomes increasingly important at higher redshifts, when the evolved stellar populations were younger and hence more luminous. For QGs, the SFR and sSFR measures are thus arguably more accurate via panchromatic SED fitting than via $\rm{SFR_{UV}^{Obs} + SFR_{IR}}$, considering that SED fitting takes into account the full shape of the SED and self-consistently models stellar populations with different ages. In fact, this point has been demonstrated using mock observations of galaxies from semi-analytic models (e.g., see Figure 6 of \citealt{Lee2018}). It is also supported by the fact that we find a larger Pearson correlation coefficient, i.e. a stronger correlation, when comparing our new SFR and sSFR measures with those from \citet{Lee2018} than with those from \citet{Nayyeri2017}. As Figure \ref{fig:compare_pre} shows, the improvement in correlation is mainly driven by galaxies with low sSFRs, because \citet{Lee2018} used SED-derived SFRs for galaxies without MIR/FIR detections, and most of the low-sSFR galaxies are not detected in the MIR/FIR bands.
We then proceed to compare the results from our fitting procedure with the star-forming main sequence measured by \citet{Leja2021}, who also used \prospector. Instead of separating galaxy populations according to their star-formation properties prior to measuring the star-forming main sequence, \citet{Leja2021} utilized the normalizing-flow technique to model the density distribution of the full galaxy population, from which they measured the redshift evolution of the ridge (i.e. mode) and of the mean of the density distribution in the $M_*$-SFR parameter space, i.e. the star-forming main sequence. In this way they demonstrated that systematic errors on the star-forming main sequence, introduced by the selection methods of SFGs, can be effectively mitigated. More importantly, \citet{Leja2021} showed that within this framework they were able to substantially mitigate, if not resolve, a long-standing systematic offset of the star-forming main sequence between observations and the predictions of cosmological simulations (see their section 7.1). In Figure \ref{fig:sfms}, we compare the distribution of our sample with the density ridge\footnote{We have checked that using the mean density of \citet{Leja2021} as the main sequence does not change any of the conclusions of this work.} (their equation 9) of \citet{Leja2021}. Because the star-forming main sequence evolves with redshift, for a better comparison, instead of plotting \citet{Leja2021}'s star-forming main sequence at the median redshift of our entire sample, we first bin the galaxies in our sample according to their $M_*$, and then within each $M_*$ range we plot the star-forming main sequence at the median redshift of that $M_*$ bin (blue solid line in Figure \ref{fig:sfms}). Our measurement is in excellent agreement with that of \citet{Leja2021}. In Figure \ref{fig:sfms} we also plot the subsample of galaxies which are classified as QGs with the UVJ technique.
They are well below the star-forming main sequence, again showing the consistency between the SED fitting and the UVJ selection that we already saw in Figure \ref{fig:uvj}.
Another way to compare our measurements with \citet{Leja2021}'s is through the distribution of \dms\ \citep{Elbaz2011}, i.e. the distance to the star-forming main sequence, defined as
\begin{equation}
\rm{\Delta MS = \frac{SFR}{SFR_{MS}}} \label{eq:dms}
\end{equation}
where $\rm{SFR_{MS}}$ is the star formation rate of a galaxy on the star-forming main sequence of \citet{Leja2021}. We show the comparison in Figure \ref{fig:rsb_dist}, where we also fit a Gaussian distribution to the galaxies with \dms $\ge0.1$. While this study and that work both use \prospector, we note that our fiducial SED model is not the same as \citet{Leja2021}'s, because they used the Continuity prior for the nonparametric SFH reconstructions and assumed a different dust attenuation law. Yet, the Gaussian fit yields on average \dms $\approx 1$, indicating no systematic shift between our star-forming main sequence and \citet{Leja2021}'s. The Gaussian fit also shows that the width (i.e. dispersion) of the star-forming main sequence is $0.3-0.4$ dex, broadly consistent with \citet{Leja2021} and other recent studies \citep[e.g.][]{Whitaker2012,Speagle2014,Schreiber2015,Lee2018}.
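The steps above can be sketched as follows (a minimal sketch; since the width is quoted in dex, the Gaussian is fit in $\log_{10}$\dms\ space, which is an assumption of this illustration):

```python
import numpy as np
from scipy.stats import norm

def delta_ms(sfr, sfr_ms):
    """Distance to the star-forming main sequence, dMS = SFR / SFR_MS."""
    return sfr / sfr_ms

def fit_ms_width(dms):
    """Maximum-likelihood Gaussian fit to log10(dMS) of galaxies with
    dMS >= 0.1; returns the mean and dispersion in dex."""
    x = np.log10(dms[dms >= 0.1])
    return norm.fit(x)  # (mu, sigma)
```

A mean near zero (\dms $\approx 1$) and a dispersion of $0.3-0.4$ dex correspond to the values quoted in the text.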
\subsubsection{Examples of Results of the SED Fitting Procedure}
In Figure \ref{fig:examples} we show the SED fitting results, together with the \hst images, of four examples of massive galaxies at \zobs$\sim2$ with $\log_{10}(M_*/M_\sun)\sim11$ in our sample, whose SFHs and morphological properties are illustrative of the variety that we have encountered in this study. The first row shows a QG with sSFR $<0.01$ Gyr$^{-1}$, whose morphology is extended, with \re $\approx$ 4.5 kpc in the \H band. The SFH shows that the galaxy is a post-starburst one, because it experienced a recent burst of star formation $\approx 0.1$ Gyr prior to \zobs and has a relatively low level of ongoing SFR, which in this case is the SFR within 30 Myr, i.e. the first lookback time bin of the nonparametric SFH. Interestingly, its \hst images in bluer bands, e.g. the \I band that probes rest-frame $\sim 2800$\AA, show features consistent with a merging event, fully in line with it being a post-starburst galaxy. The second row shows a very compact QG with \re $\approx$ 0.7 kpc, whose SFH shows a relatively gradual decline over the past 1 Gyr. The third and fourth rows show an extended SFG and a compact SFG, respectively. While in the \H band the size (\re) of the former is about two times larger than that of the latter, the images of both galaxies show a very compact central component in the \I band, and their reconstructed SFHs suggest they are experiencing an ongoing burst of star formation. These four galaxies are representative of the types of correlations between morphological properties and SFHs at the core of this study, and we will discuss and quantify them for the entire sample in the remainder of this paper.
\subsection{Morphological Measurements} \label{sec:morph}
As in \citetalias{Ji2022}, the morphological measures of the sample galaxies are taken from \citet{vanderWel2012}, where the light profile of galaxies was modelled with a 2-dimensional \sersic function using \galfit \citep{galfit}. We take the measurements in the \H band, because it better probes the stellar morphology for galaxies at $z\sim2$, being the reddest \hst imaging available in the three fields. To secure good quality, we follow the recommendation of \citet{vanderWel2012} and only use the measurements with \texttt{GALFITFlag = 0}. This means that galaxies with unreliable single-\sersic fitting results are excluded, namely those for which (1) the \galfit-derived total flux significantly deviates from that derived with \sextractor \citep{sextractor}, (2) at least one parameter reaches either bound of the range of allowed values set prior to the fitting (e.g. $0.2<n<8$), or (3) the \sersic fitting fails to converge.
Morphological parameters considered in this work include the
\begin{itemize}
\item \sersic index $n$
\item circularized effective/half-light radius \re $=R_{e,maj}\times \sqrt{b/a}$, where $R_{e,maj}$ is the effective semi-major axis and $b/a$ is the axis ratio
\item stellar-mass surface density within the effective radius \Se $=M_*/(2\pi R_e^2)$
\item stellar-mass surface density within the central radius of 1kpc, \Sone \citep{Cheung2012}. As shown in \citet{Ji2022a}, because \Sone is derived as the combination of \re and $n$, this helps to reduce the uncertainty stemming from the strong covariance between the two parameters, making \Sone a more robust parameter to quantify galaxy compactness than $n$ or \re individually.
\item fractional mass within the central radius of 1 kpc, \Mone \citep{Ji2022a}. Similar to the stellar-mass surface density, \Mone is a good metric of the compactness of galaxies. Since $\Sigma$ is strongly correlated with $M_*$, correlations of any property with $\Sigma$ are hard to interpret, as they could be driven by $M_*$ rather than by compactness. Because the dependence of \Mone on $M_*$ is very weak \citep{Ji2022a}, it provides a more direct view of the links between compactness and other physical properties.
\end{itemize}
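The definitions above can be sketched as follows (a sketch, not our pipeline; the \Sone and \Mone functions integrate an idealized \sersic profile using the common $b_n \approx 2n - 1/3$ approximation, which is an assumption of this illustration):

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma

def circularized_re(re_maj_kpc, axis_ratio):
    """Circularized effective radius R_e = R_e,maj * sqrt(b/a)."""
    return re_maj_kpc * np.sqrt(axis_ratio)

def sigma_e(mstar, re_kpc):
    """Stellar-mass surface density within R_e: M* / (2 pi R_e^2)."""
    return mstar / (2.0 * np.pi * re_kpc ** 2)

def mass_fraction_within(r_kpc, re_kpc, n):
    """Fraction of a Sersic profile's projected mass within radius r,
    using b_n ~ 2n - 1/3 (accurate for n >~ 0.5)."""
    b_n = 2.0 * n - 1.0 / 3.0
    return gammainc(2.0 * n, b_n * (r_kpc / re_kpc) ** (1.0 / n))

def sigma_1(mstar, re_kpc, n):
    """Sigma_1: stellar mass within 1 kpc divided by pi * (1 kpc)^2."""
    return mstar * mass_fraction_within(1.0, re_kpc, n) / np.pi
```

By construction of $b_n$, half of the mass lies within \re, which provides a quick sanity check on the profile integration.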
\section{Results}
\subsection{The Progenitor Effect} \label{sec:progenitor}
To begin, we investigate the progenitor effect on the apparent structural evolution of galaxies by studying the correlations between structural properties and \zf. We first divide the entire sample into UVJ-selected SFG and QG subsamples, and then divide each subsample at its median $M_*$, motivated by \citetalias{Ji2022}, where we found that among QGs the strength of the progenitor effect depends on $M_*$. In Figure \ref{fig:morph_zf}, we show the relationships of \zf with \re, \Sone, \Se and \Mone, respectively. For each relationship we also fit a power-law function, i.e. (1+\zf)$^{-\beta}$, which is plotted as a dashed line in Figure \ref{fig:morph_zf}. To estimate the uncertainty of the best-fit power-law relations, similarly to what we did in \citetalias{Ji2022}, we bootstrap the sample 1000 times, each time resampling the value of each measurement from a normal distribution with a width equal to the measurement uncertainty. The best-fit power-law relation and the corresponding uncertainty are labeled in the legend of each panel of Figure \ref{fig:morph_zf}.
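The bootstrap procedure described above can be sketched as follows (a sketch in which the power law is fit as a straight line in log-log space and measurement uncertainties are assumed to be in dex; both are assumptions of this illustration):

```python
import numpy as np

def fit_powerlaw_bootstrap(zf, y, yerr_dex, n_boot=1000, seed=0):
    """Fit y = A (1+zf)^(-beta) as a line in log-log space, bootstrapping
    the sample n_boot times and perturbing each measurement by its
    (dex) uncertainty; returns the mean and scatter of beta."""
    rng = np.random.default_rng(seed)
    x = np.log10(1.0 + zf)
    logy = np.log10(y)
    betas = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(y), len(y))                 # resample galaxies
        y_pert = logy[idx] + rng.normal(0.0, yerr_dex[idx])   # perturb by errors
        slope, _ = np.polyfit(x[idx], y_pert, 1)
        betas[i] = -slope
    return betas.mean(), betas.std()
```

The scatter of the bootstrap slopes serves as the quoted uncertainty on $\beta$.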
As Figure \ref{fig:morph_zf} shows, all considered structural properties depend on the formation epoch (\zf) of galaxies, which, in essence, is the progenitor effect. Regarding the relationship between \re and \zf, galaxies formed earlier, i.e. having larger \zf, tend to have smaller sizes. A clear dependence on $M_*$ is found for QGs, where the relationship is steeper for lower-mass QGs than for higher-mass ones. This is most likely because of the increasing importance of repeated minor merging events that take place in more massive galaxies after quenching. The minor mergers drive the after-quenching size growth that flattens the \re-\zf relationship of more massive QGs, as we have already discussed in detail in \citetalias{Ji2022}. Compared to QGs, the \re of SFGs decreases with (1+\zf) apparently at a slower rate. Quantitatively interpreting this relationship of SFGs requires the full knowledge\footnote{Because rejuvenation of star formation apparently is rare, we consider the whole history of mass assembly of any galaxy as the period that starts from the Big Bang and ends at the time when a galaxy becomes quiescent.} of their mass assembly histories, which we do not have because (1) by the time of \zobs SFGs are still undergoing active star formation and have not yet completed assembling their masses, and (2) we cannot predict their SFHs after \zobs. Nevertheless, the decreasing trend of \re with \zf shows that, even for SFGs, the progenitor effect plays a role in their apparent size evolution.
Because the measurements of \Sone, \Se and \Mone directly depend on \re (section \ref{sec:morph}), it is naturally expected that these metrics of compactness also depend on \zf, a relationship that we indeed observe and show in the left three columns of Figure \ref{fig:morph_zf}. Regardless of star-formation properties at \zobs, galaxies that formed earlier, i.e. having larger \zf, tend to have a more compact morphology, i.e. have larger \Sone, \Se and \Mone.
Unlike selecting samples by \zobs, which groups together galaxies formed at different epochs $z\ge$ \zobs, selecting by \zf naturally separates galaxies formed at different epochs, which automatically allows one to investigate, and account for, the progenitor effect. Figure \ref{fig:morph_zf} shows that the progenitor effect at least in part contributes to the apparent evolution with \zobs of the size and of the compactness metrics considered in this work, if galaxies are mixed together with no accounting for their formation epochs. As Figure \ref{fig:morph_zf} shows, variations of up to 50\% in \re and up to one full dex in \Sone, depending on the stellar-mass range, can be accounted for solely by the progenitor effect. This highlights the importance of accounting for its contribution before attempting any interpretation or modeling of the apparent structural evolution in terms of different physical mechanisms.
\subsection{Correlations Between the Structural and Star-formation Properties of Galaxies Formed at a Similar Epoch} \label{sec:pistol}
Taking advantage of the reconstructed SFHs, we can group galaxies according to their formation epochs. Once the formation epoch is fixed, the progenitor effect should then be essentially eliminated from the apparent evolution. If correlations between star-formation and structural properties are still observed, then these will provide strong empirical constraints on physical mechanisms of the observed structural evolution. In what follows, we use \zf as a proxy for the formation epoch of galaxies. Specifically, we divide the entire sample into four subsamples using quartiles of \zf, and study the relationships of the compactness metrics of galaxies with \dms (equation \ref{eq:dms}). The results are plotted in Figure \ref{fig:pistol}.
In all bins of \zf, the majority of galaxies are distributed following a ``pistol''-shaped pattern, both in the diagram of \dms vs. \Sone (the top row of Figure \ref{fig:pistol}) and in the diagram of \dms vs. \Mone (the middle row of Figure \ref{fig:pistol}). At a fixed \zf, galaxies with smaller \dms, i.e. more quiescent ones, have larger \Sone and \Mone, i.e. are more compact, while at the same time the distributions of \Sone and \Mone also become narrower. In the diagram of \dms vs. \Se (the bottom row of Figure \ref{fig:pistol}), a similar trend is also observed for the QGs, which in general have larger \Se, whose distribution also is narrower than that of the SFGs, although the knee of the pistol pattern is not as obvious as in the diagrams of \Sone and of \Mone.
A similar pistol pattern between the star-formation properties and the central density of galaxies has been observed in the recent literature \citep[e.g.][]{Barro2013,Barro2017,Lee2018}. However, in those studies the pistol pattern was produced \textit{without} regard to the formation epoch of galaxies, meaning that the quantitative details and overall significance of the observed patterns were at least in part due to the progenitor effect. In this work we refine the characterization of the evolutionary trends, in the sense that we utilize the information extracted from the reconstructed SFHs to eliminate the contribution of the progenitor effect to the observed correlations. As Figure \ref{fig:pistol} shows, the pistol pattern, although different in shape, is still observed after the elimination of the progenitor effect, demonstrating the existence of a physical link between the effective radius, central mass density and compactness of galaxies and their star-formation properties. Overall, as galaxies quench, their effective radii become smaller, their central mass densities increase, and thus their compactness also increases.
\subsection{Automated Morphological Classification: the Dawn of Spheroids?}
While on average galaxies with smaller \dms are more compact, Figure \ref{fig:pistol} shows that there are also substantial numbers of galaxies with suppressed star formation, i.e. \dms$<0.1$, that have an extended morphology. Morphological metrics derived from single-\sersic fitting, such as the ones considered above, have been extensively adopted to characterize the overall shape of the light distribution of galaxies. However, those metrics entirely ignore substructures such as clumps and nonaxisymmetric features like bars and tidal features, which contain key information about the evolutionary mechanisms of galaxies' structures. In fact, such substructures are commonly observed not only in SFGs at high redshifts \citep[e.g.][]{Elmegreen2007,ForsterSchreiber2011, Guo2012} but also in some QGs (Giavalisco \& Ji in preparation). Although in the nearby universe visual classifications are effective in identifying them \citep[e.g.][]{Lintott2008}, at high redshifts identifying these substructures is very challenging because of (1) the cosmological dimming of surface brightness and (2) the limitations imposed by the available angular resolution, which even with \hst, and now JWST, is such that the substructures are still hard to identify visually. Rapidly developing techniques based on machine learning and artificial intelligence can hopefully greatly improve the morphological classification of galaxies in the early universe. For example, utilizing deep learning techniques, \citet{HuertasCompany2015} trained convolutional neural networks to classify the morphological types of CANDELS galaxies brighter than 24.5 magnitude (AB) in the \H band. The quality of their machine-based classifications is very high: they show no bias (with a scatter of $\sim10\%$) compared to classifications done by humans, and the fraction of mis-classifications is below $1\%$.
We cross-match our sample with the catalog of \citeauthor{HuertasCompany2015}, and then plot the successfully matched subsample in Figure \ref{fig:pistol_ML}. Overall, a clear trend can be observed such that the morphology of galaxies changes from disk-like to spheroidal/bulge-like as they become more compact and quench. With the progenitor effect eliminated, this apparent morphological transformation of galaxies is closer to the intrinsic structural transformations that galaxies undergo as they evolve and quench in time. Thus, morphologically spheroidal structures (no dynamical information is included in this discussion), whether bulges or elliptical galaxies, emerge as galaxies suppress their star formation and terminate the assembly of the bulk of their stellar masses. We shall return to this point later from a different perspective.
Regarding galaxies with suppressed star formation in particular, the morphology of the extended ones is not only more disky, but also more disturbed compared to the compact ones. In particular, we use the median values of \Sone, \Se and \Mone of SFGs to divide the sample galaxies into compact and extended ones. The key result is that, regardless of the metric adopted to classify galaxies' compactness, the probability of a QG having a disturbed morphology is $\sim3-4$ times higher when it is extended than when it is compact. Specifically, for galaxies with \dms $<0.1$, the fraction of those having a disturbed morphology, namely being classified as an irregular disk or a merger according to \citeauthor{HuertasCompany2015}'s criteria (see their section 6 for details), is $68.6\pm12.9\%$ (48/70) for those with $\log_{10}$\Sone$<9.5$ compared to $18.2\pm0.3\%$ (42/231) for those with $\log_{10}$\Sone$>9.5$. If we use \Mone$=0.1$ as the dividing line between extended and compact galaxies, the fraction is $71.7\pm16.4\%$ (33/46) for the former compared to $22.4\pm0.3\%$ (57/255) for the latter. Similarly, if we use $\log_{10}\,$\Se$=9.2$ instead, the fraction is $64.3\pm12.3\%$ (45/70) for the extended ones compared to $19.5\pm0.3\%$ (45/231) for the compact ones. This distinct morphological difference \textit{within} the population of QGs already indicates the possibility that they reflect markedly different formation and evolutionary paths. We elaborate on this point in the next section.
\subsection{The Relationship Between the Morphology and the Assembly History of Galaxies} \label{sec:morph_SFH}
So far, we have only considered the formation epoch (\zf) of galaxies as a key piece of information stemming from the knowledge of SFHs. In this section we take the full shape of reconstructed SFHs into account to further study the relationship between the morphological and star-formation properties. To better illustrate our findings, we divide the entire sample into four galaxy groups, namely (1) extended quiescent galaxies (eQGs); (2) compact quiescent galaxies (cQGs); (3) extended star-forming galaxies (eSFGs) and (4) compact star-forming galaxies (cSFGs).
We have verified that the results presented below are not sensitive to the exact criteria adopted to select the four galaxy groups. Specifically, to separate SFGs and QGs, we have tested our results both using the UVJ technique and using the distance from the star-forming main sequence, with the dividing line \dms $=0.1$. To separate compact and extended galaxies, we have tested our results using both the fixed dividing line $\log_{10}$\Sone$=9.5$ and the varying dividing line given by the median \Sone of individual \zf bins. We have not observed any substantial changes in our results. Our results also appear to be insensitive to the exact choice of the compactness metric adopted to separate compact and extended galaxies, as we demonstrate in Appendix \ref{app:stack_SFH_other} using \Mone and \Se. In what follows, we show the results of using \dms$=0.1$ to separate star-forming and quiescent galaxies, and $\log_{10}$\Sone$=9.5$ to separate compact and extended galaxies.
We first measure the \textit{average} SFHs. Similarly to what we did in the previous sections, we first bin the galaxies according to their \zf. In each bin of \zf, instead of calculating the median/mean of the best-fit SFHs of the individual galaxy groups, we stack the posteriors derived from the \prospector fittings, so that the full probability density distributions of the individual fits are taken into account. Because in this work we only consider galaxies within a relatively narrow range of stellar mass, i.e. $\log_{10}(M_*/M_\sun)=10.3\sim11.7$, to emphasize the shape of the SFHs, before stacking we normalize each SFH by the $M_*$ observed at \zobs, so that
\begin{equation}
\int_{0}^{t_H}\widetilde{\rm{SFR}}(t)\,dt = 1,
\end{equation}
where $\widetilde{\rm{SFR}}(t)=\rm{SFR}(t)/M_*$. We then calculate the median and standard deviation (1$\sigma$) from the stacked posteriors, and finally obtain the average SFHs.
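The normalization and stacking can be sketched as follows (a sketch; the array layout and the simplification that $\int \rm{SFR}\,dt = M_*$, i.e. ignoring stellar mass return, are assumptions of this illustration):

```python
import numpy as np

def stack_normalized_sfhs(sfh_draws, mstar):
    """sfh_draws: array (n_gal, n_draws, n_bins) of posterior SFR draws;
    mstar: (n_gal,) stellar masses at z_obs. Normalize each draw by M*
    so that (ignoring mass return) the SFH integrates to unity, pool
    all draws, and return the median and 16th/84th percentiles per bin."""
    norm = sfh_draws / mstar[:, None, None]      # SFR(t) / M*
    pooled = norm.reshape(-1, norm.shape[-1])    # stack the posteriors
    med = np.median(pooled, axis=0)
    lo, hi = np.percentile(pooled, [16, 84], axis=0)
    return med, lo, hi
```

Pooling the posterior draws, rather than averaging best fits, propagates each galaxy's full SFH uncertainty into the stacked curve.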
In Figure \ref{fig:stack_SFH_S1}, we show the average SFHs of eQGs (magenta), cQGs (red), eSFGs (cyan) and cSFGs (blue), respectively. Clear differences in the shape of the SFH of each subgroup are observed. While SFGs have either an overall flat or rising SFH, QGs show a clear decline of recent SFR, as expected. The novel aspect is the dependence of the SFHs on morphology: while cQGs underwent an early phase of intense star formation followed by a gradual decline and eventual quiescence, eQGs followed the opposite trend, namely a gradual rise of star formation toward a relatively recent peak followed by a rapid decrease toward quiescence. Among SFGs, the differences in the average SFH as a function of morphology seem to be more subtle, but we observe that the overall shape of their average SFH is similar to that of the eQGs minus the decrease toward quiescence. We also observe that the rapid rise toward the very early peak seems to be absent among SFGs, suggesting that this type of SFH is typical of early-type galaxies but becomes rare or absent in galaxies that are still forming stars later on in the cosmic evolution, even among cSFGs. In the following, we study in detail the relationships between the SFH and the compactness of QGs (section \ref{sec:sfh_morp_qg}) and of SFGs (section \ref{sec:sfh_morp_sfg}), respectively.
\subsubsection{The Case of QGs} \label{sec:sfh_morp_qg}
Clear trends are observed between the assembly history and the morphology of QGs. As Figure \ref{fig:stack_SFH_S1} shows, the stacked SFH of eQGs peaks at a much later time, i.e. a smaller lookback time of $\lesssim 0.5$ Gyr, compared to the cQGs, whose stacked SFH peaks $\gtrsim1$ Gyr ago. Thus, while the stacked SFH of cQGs is very similar to that of typical QGs, which assembled most of their mass early on followed by a gradual decline of their SFRs, the stacked SFH of eQGs is very similar to that of post-starburst galaxies, showing a recent (i.e. at a short lookback time from \zobs) burst of star formation followed by a rapid decline of SFR.
In Figure \ref{fig:QG_sfh_shape_dist} we compare the distributions of \tage and of \tausf/\tauq. The eQGs have smaller \tage (i.e. are younger) and larger \tausf/\tauq compared to the cQGs, consistent with the shape of the stacked SFHs. We also run both the Kolmogorov–Smirnov and the Anderson-Darling tests on the null hypothesis that the distributions of the two QG groups are the same. We can reject the null hypothesis at a $\gtrsim3\sigma$ confidence level, except in the largest \zf bin, where we can reject it at a $\sim2.5\sigma$ confidence level. Thus, compared to the eQGs, in general the cQGs have both a larger mass-weighted age and a shorter star-formation timescale relative to the quenching timescale.
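Both two-sample tests are available in \texttt{scipy}; a minimal sketch (the function name and inputs are illustrative):

```python
from scipy.stats import anderson_ksamp, ks_2samp

def compare_qg_groups(x_eqg, x_cqg):
    """Two-sample KS and Anderson-Darling tests of the null hypothesis
    that the eQG and cQG values (e.g. mass-weighted ages) are drawn
    from the same distribution; returns the two p-values."""
    ks = ks_2samp(x_eqg, x_cqg)
    ad = anderson_ksamp([x_eqg, x_cqg])
    return ks.pvalue, ad.significance_level
```

Note that \texttt{anderson\_ksamp} caps its returned significance level (floored at 0.001), so very strong rejections are reported only as upper limits on the p-value.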
Taken together with the higher frequency of finding eQGs with a disturbed morphology, as we have already observed in Figure \ref{fig:pistol_ML} and discussed in section \ref{sec:pistol}, the finding that the eQGs' average SFH is consistent with that of post-starburst galaxies supports a scenario in which some eQGs might be merging transients, namely merger remnants observed at a stage when the SFR rapidly declines after a merger-induced recent starburst. The stacked SFH of eQGs suggests that the decline of SFR happened $\lesssim 0.5$ Gyr ago, in broad agreement with studies of galaxy mergers based on hydrodynamical simulations \citep[e.g.][]{Springel2005,Hopkins2008}. A complete merging event very likely includes multiple episodes of starburst, and the whole process can help galaxies form dense central cores, because strong gravitational torques induced by the merging galaxy can effectively drive gas to the center and then trigger central star formation \citep[e.g.][]{Sanders1988,Hopkins2010}. Because the eQGs do not seem to have built up their central cores by the time of observation\footnote{It is possible that some of the eQGs might have highly dust-obscured dense cores. Unfortunately, the sensitivity of existing MIR/FIR observations in the three fields is relatively low. This possibility can be tested with future observations such as the upcoming \jwst surveys.}, if they indeed come from mergers, then they likely have just finished one earlier episode of merger-induced starburst, meaning that their star formation can possibly rejuvenate. Interestingly, $23\pm3\%$ of the QGs in our sample are eQGs, and this fraction is quantitatively consistent with the rejuvenation rate measured in recent spectroscopic studies of QGs at lower redshift $z=0.5\sim1$ \citep[e.g.][]{Belli2017,Chauke2019}, although we note that using different criteria for the rejuvenation event can result in quantitatively different rates \citep[e.g., section 6.5 of][]{Tacchella2022}.
\subsubsection{The Case of SFGs} \label{sec:sfh_morp_sfg}
Although \textit{on average} the SFHs of eSFGs and of cSFGs are very similar (Figure \ref{fig:stack_SFH_S1}), a correlation is indeed found between the central stellar-mass surface density of SFGs and a more detailed SFH classification that we describe in what follows. Observations of nearby elliptical galaxies\footnote{Given the stellar mass range, the majority of the SFGs considered in this study have likely become massive elliptical galaxies by $z=0$.} suggest that their masses were assembled via multiple episodes that are responsible for the formation of different structures \citep[e.g.][]{Huang2013}. Motivated by this, we visually inspect individual SFHs to identify the SFGs with multiple, prominent episodes of star formation, i.e. two or more peaks that are clearly shown in a reconstructed SFH. As Figure \ref{fig:visual_sfh} illustrates, both the best-fit SFH and the corresponding uncertainty are taken into account during the visual classification. We then study the relationship between \Sone and the fraction of SFGs with multiple star-formation episodes, i.e.
\begin{equation}
\mathcal{F}_{\rm{multi-SF}}=\frac{N_{\rm{multi-SF}}}{N_{\rm{tot}}},
\end{equation}
where $N_{\rm{multi-SF}}$ is the number of SFGs with multiple star-formation episodes and $N_{\rm{tot}}$ is the total number of SFGs.
Before proceeding to discuss the results, we clarify our visual classification procedure in more detail, and we also caution about possible systematics affecting \fmulsf. First, given the limitations imposed by the data, the SFH reconstruction in this work is done with the \textit{integrated} photometry and thus refers to the galaxy as a whole. Also, the time resolution of the SFH reconstruction remains relatively low compared to the timescale of starbursts, i.e. Gyr vs. tens or hundreds of Myr. The combination of these two effects means that every star-formation feature, such as a peak or a burst, identified in a reconstructed SFH might in fact consist of more than one independent star-formation event happening at approximately the same epoch but not physically associated, e.g., two or more episodes of star formation in different, physically uncorrelated HII regions. As a result, for each galaxy we are unable to identify all independent star-formation events; this, however, is not our goal. Our goal simply is to find SFGs that by the time of \zobs have had more than one enhanced and distinct epoch of mass assembly (as opposed to independent star-formation events). Because we only have nine bins of look-back time (section \ref{sec:prosp}), for nearly all of our SFGs ``multiple star-formation episodes'' really means two well-separated episodes, in the form of peaks of SFR enhancement, a younger one and an older one with a typical dividing line of 1 Gyr. Because all SFGs, by definition, include the younger, on-going star-formation episode in their SFHs, what \fmulsf really measures can be considered as the fraction of SFGs with a clear presence of older stellar populations created in previous episodes of star formation.
Second, ambiguous cases certainly exist, such as the ``possible'' example illustrated in Figure \ref{fig:visual_sfh}, where the uncertainty of the reconstructed SFHs makes it hard to assess the robustness of individual star-formation episodes. While the measures presented below only include the ``secure'' cases, we have checked that, qualitatively, our findings do not change if the ``possible'' cases are also included. Finally, although for each galaxy we have $\approx40$ bands of photometry, most of these cover the spectral range bluer than the rest-frame V-band for galaxies at $z\sim2$. In this wavelength range young, bright stars outshine older stars, meaning that either a large mass fraction of older stellar populations must be present, or a comparatively high S/N of the photometric data (especially in the rest-frame optical and NIR) is required, to robustly recover the older stellar populations in the reconstructed SFHs \citep[e.g.][]{Papovich2001}. Therefore, even for the ``unlikely'' cases illustrated in Figure \ref{fig:visual_sfh} it is still possible that the galaxies in fact have older stellar populations which, however, cannot be recovered by our \prospector fitting given the current wavelength coverage and data quality. This can lead to an underestimated \fmulsf.
The key result of this investigation is shown in Figure \ref{fig:F_mulSF}. We find that \fmulsf generally increases with \Sone, implying that the probability of finding SFGs that have a sizeable stellar mass in older stellar populations increases as their central stellar mass densities grow. The grey line in Figure \ref{fig:F_mulSF} illustrates the same point in a different way by plotting, as a function of \Sone, the probability of finding SFGs that have a \sersic index \n between 1.5 and 3, i.e. whose light profiles in the \H band indicate that their morphology is likely a combination of a dense central core plus an extended outskirt. We have checked that this trend does not change if we use a slightly different range of \n, e.g., between 1.5 and 2.5. We stress that the two trends with \Sone that we have just presented, one with \fmulsf and the other with the shape of the light profiles, rely on entirely independent observables, the former based on the SFH and the latter based on morphology. Yet the two trends depict a consistent picture, which points on the one hand to the substantial robustness of the SFH reconstructions, and on the other to the power of incorporating the SFHs to study the structural evolution of galaxies.
For the SFGs with multiple episodes of star formation, we have further studied the fractional mass of stellar populations assembled earlier, namely the mass of the stellar populations older than $\frac{1}{3}t_{\rm H}$ divided by the total stellar mass observed at \zobs. The choice of $\frac{1}{3}t_{\rm H}$ is based on results from hydrodynamical simulations, which suggest that the characteristic timescale of galaxy compaction via dissipative processes at the redshifts considered here is roughly a constant fraction ($0.3\sim0.4\times$) of the Hubble time \citep[see section 5.2 of][]{Zolotov2015}. We have also checked our results using a fixed threshold of 1 Gyr, and found no substantial changes. Figure \ref{fig:old_mass_frac} shows the cumulative distribution of the fractional mass of older stellar populations in bins of \Sone. The fractional mass increases with \Sone. Taken together with the strong, increasing trend of \fmulsf with \Sone, this provides strong evidence that massive galaxies at cosmic noon assembled their central regions first, and those regions kept growing both in mass and density as the outer regions developed. This makes the galaxies appear more compact and nucleated, with a shrinking effective radius. This picture is broadly consistent with conclusions reached using independent observables such as the radial profile of SFR obtained from spatially-resolved maps of H$\alpha$ emission \citep[e.g.][]{Nelson2012,Nelson2016}.
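The fractional mass of older stellar populations can be illustrated with a minimal sketch, assuming a piecewise-constant SFH over look-back-time bins as in the nonparametric reconstruction; the function name is ours, and the neglect of stellar mass-loss (return-fraction) corrections is a simplification of this sketch, not of the paper's pipeline:

```python
import numpy as np

def old_mass_fraction(sfr, bin_edges, t_thresh):
    """Fractional stellar mass formed earlier than a look-back-time
    threshold, for a piecewise-constant SFH.

    sfr       : SFR in each look-back-time bin [Msun/yr]
    bin_edges : bin edges in look-back time, increasing [yr]
    t_thresh  : age threshold, e.g. t_H/3 at z_obs [yr]
    """
    sfr = np.asarray(sfr, float)
    edges = np.asarray(bin_edges, float)
    widths = np.diff(edges)
    mass = sfr * widths                               # mass formed per bin
    # overlap of each bin with look-back times older than t_thresh
    old_width = np.clip(edges[1:] - t_thresh, 0.0, widths)
    return float((sfr * old_width).sum() / mass.sum())
```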
\section{Discussion}
Galaxies form and evolve accompanied by changes of their structural properties which, if their evolutionary tracks can be reconstructed, contain key empirical information on the physical drivers of structural evolution. Unlike cosmological simulations, where the evolutionary path of galaxies and the contributions from different physical processes are known at any time, the morphology and other properties of real galaxies can \textit{only} be known at the time of observation, \zobs. Here we propose the following methodology to statistically reconstruct the structural evolution of individual galaxies using their reconstructed SFHs. Because our method relies on good morphological measures, which at the moment are only available at $z\lesssim3$, only galaxies with \zf $\le 3$ are included in the following analysis.
\subsection{The Method} \label{sec:diss_method}
The method is straightforward, and it is built upon two basic assumptions. Thanks to \hst's high angular resolution, over the last two decades significant progress has been made in measuring the morphology of massive galaxies of all spectral types up to $z\sim3$ \citep[][just to name a few]{vanderWel2014,Shibuya2015,Mowla2019}. A common result from these studies is that, at least for massive galaxies, structural properties are strongly correlated with $M_*$ (e.g. the mass-size relation, \citealt{Shen2003, vanderWel2014}), star-formation activity (e.g. SFR, \citealt{Salim2007,Elbaz2007, Whitaker2012}) and redshift. At least to first order, it is thus reasonable to assume that the structure of a galaxy is determined once its redshift, $M_*$ and SFR are known. We must keep in mind, however, that substantial, non-negligible intrinsic scatter exists in such relationships. For example, the intrinsic scatter of the mass-size relation has been found to be $\sim0.2$ dex at $0<z<3$ \citep{vanderWel2014}. This suggests that additional parameters may also determine the structure of galaxies. Nonetheless, because little information is available on how additional parameters drive the scatter, here we ignore this issue entirely. The second assumption of the method is that existing galaxy samples observed at any given redshift are representative of the population of massive ($>10^{10}M_\sun$) galaxies at that redshift. We do not see any strong evidence that this assumption is grossly violated at the redshifts discussed here. We note, however, that given the relatively small area covered by the three legacy fields considered in this study, the results could still be biased if significant cosmic variance exists, which remains to be tested with future larger-area surveys.
The method includes three steps. First, for any given galaxy observed at \zobs we reconstruct its SFH, use it to measure \zf, and from this obtain the galaxy's stellar mass and star-formation rate at \zf, i.e. $M_*^{\rm{z_{form}}}$ and SFR$\rm{^{z_{form}}}$. This first step requires accurate multi-band photometry (section \ref{sec:sample}), which is critical for the robust reconstruction of the SFH and thus of the evolutionary track of individual galaxies in the plane of SFR vs. $M_*$. On this point, we recall that the use of \prospector with nonparametric SFHs \citep{Leja2020} has essentially eliminated the long-standing tension between the observed cosmic star-formation rate density and the cosmic stellar-mass density, where the time integral of the former used to over-predict the latter by $\approx60\%$ \citep{Madau2014}. In Figure \ref{fig:predict_ms} we demonstrate this point in a more direct way using our SFH reconstructions. In particular, we use the SFHs of the UVJ-selected QGs in our sample to predict their distribution in the SFR-$M_*$ diagram at \zf, and then compare the distribution with the star-forming main sequence \textit{measured} at that redshift. Similarly to what we did in section \ref{sec:prosp_compare}, the star-forming main sequence used for the comparison is the one from \citet{Leja2021}. As Figure \ref{fig:predict_ms} shows, there is excellent agreement between the prediction from the reconstructed SFHs and the observed star-forming main sequence, demonstrating that \prospector is able to robustly reconstruct the evolutionary tracks of galaxies in the $M_*$ vs. SFR plane.
The second step is to select from the existing samples those galaxies whose (\zobs, $M_*$, SFR) are similar to the values of (\zf, $M_*^{\rm{z_{form}}}$, SFR$\rm{^{z_{form}}}$). We utilize the posteriors from individual \prospector fittings, and select all galaxies whose (\zobs, $M_*$, SFR) are within the 2$\sigma$ posterior contours of (\zf, $M_*^{\rm{z_{form}}}$, SFR$\rm{^{z_{form}}}$). We have also tested our results using 1$\sigma$ contours, and found no substantial changes.
We note that using different SED fitting procedures generally results in systematic shifts in the measured physical properties of galaxies (section \ref{sec:prosp_compare}). Here we use the entire sample of this work as the default, because all physical properties are measured consistently, namely using the same SED fitting procedure and under the same SED assumptions. However, our selection criteria are very strict (section \ref{sec:sample}) because of our emphasis on the robustness of SFH reconstructions, which requires high-quality, densely sampled photometric data. Because all we need for this step are (\zobs, $M_*$, SFR), which are less sensitive to the data quality than the measurement of SFHs, it is possible to use an enlarged sample to increase the statistics, at the potential cost of introducing systematics stemming from blending measures derived from different SED fitting procedures. Specifically, we have tested the robustness of our results using the enlarged samples of the CANDELS/COSMOS and GOODS fields, where \textit{all} galaxies with reliable \galfit fitting results have been included. Because running \prospector is computationally quite expensive and most galaxies in the enlarged sample do not have available \prospector fitting results, we decided to empirically correct the existing measurements of $M_*$ and SFR from the enlarged CANDELS catalogs using the median systematic shifts found between previous measurements and the ones we obtained with \prospector (section \ref{sec:prosp_compare} and Figure \ref{fig:compare_pre}). To ensure a uniform correction, for all galaxies in the enlarged sample we use the SFRs estimated with the UV+IR ladder of \citet{Barro2019}, corrected by $-0.7$ dex (the middle panel of Figure \ref{fig:compare_pre}), and the median stellar masses of the CANDELS catalogs computed with the Hodges–Lehmann method \citep{Santini2015}, corrected by $+0.3$ dex (the left panel of Figure \ref{fig:compare_pre}).
As one can see in the inset of Figure \ref{fig:time_evo_morph}, using the enlarged sample does not substantially change our results. In the following, we therefore choose the results from the sample of our own selection as the fiducial ones, although we do also report and compare the results from the enlarged sample in the following discussions.
The final step of our method is to compute the weighted average of the structural properties of the galaxies selected in step two, using the corresponding posteriors as the weights, and to adopt it as the inferred structural properties of a galaxy at its formation epoch \zf. Accordingly, the uncertainties of the inferred structural properties are computed as the weighted standard deviation over the selected galaxies. By comparing the inferred properties at \zf with the observed ones at \zobs, we obtain the evolutionary track of the structural properties.
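Steps two and three can be sketched as follows; for simplicity the $2\sigma$ posterior contours used in the paper are replaced here by per-axis tolerances, and all names (the `sample` dictionary, the function itself) are illustrative rather than the actual pipeline:

```python
import numpy as np

def inferred_structure_at_zform(zf, m_zf, sfr_zf, sample, sigma, prop):
    """Select analog galaxies whose observed (z, logM*, logSFR) lie near
    the target galaxy's inferred values at z_form, then return the
    weighted mean and standard deviation of a structural property.

    sample : dict of arrays with keys 'z', 'logm', 'logsfr',
             'weight' (posterior weight) and the property `prop`
    sigma  : per-axis tolerances standing in for the 2-sigma
             posterior contours (a simplification)
    """
    sel = (
        (np.abs(sample["z"] - zf) < sigma[0])
        & (np.abs(sample["logm"] - m_zf) < sigma[1])
        & (np.abs(sample["logsfr"] - sfr_zf) < sigma[2])
    )
    w = sample["weight"][sel]
    x = sample[prop][sel]
    mean = np.average(x, weights=w)          # inferred property at z_form
    std = np.sqrt(np.average((x - mean) ** 2, weights=w))  # its uncertainty
    return float(mean), float(std)
```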
Finally, before proceeding to discuss the results, we highlight the important differences of our method from earlier studies. Previous works have attempted to reconstruct the evolutionary trajectories of galaxies in planes defined by observables that include morphological (structural) properties, $M_*$ and star-formation properties \citep[e.g.][]{Barro2013,Barro2017}, with the ultimate goal of identifying the underlying physics that drives specific evolutionary phases. The method of reconstructing the evolutionary tracks described above provides another such attempt. What is different here, and provides a crucial advantage, is that thanks to the reconstruction of SFHs we are able to statistically estimate the true tracks of individual galaxies from the time of their formation, estimated by \zf, to that of their observation, \zobs. This is an important step forward in the sense that we avoid bundling together galaxies that formed at different epochs and experienced different assembly histories, which means we can mitigate, in fact eliminate, any form of the progenitor effect solely based on empirical quantities. Moreover, the population averages, which define the general trends, are naturally obtained from the individual galaxies as they evolve, and are not predictions from models.
\subsection{The Structural Evolution of Galaxies Since \zf} \label{sec:diss_change}
Figure \ref{fig:time_evo_morph} shows the change of structural parameters, namely the observed structural properties at \zobs divided by the inferred ones at \zf, as a function of the star-formation properties of individual galaxies. Specifically, the structural changes considered are: $\rm{\Delta log R_e}$, $\rm{\Delta log \Sigma_e}$, $\rm{\Delta log \Sigma_{1kpc}}$ and $\rm{\Delta log (M_{1kpc}/M_*)}$. We list the inferred median changes of these structural properties as a function of \dms in Table \ref{tab:struc_change}, where we also include the results from the enlarged sample. Also listed in Table \ref{tab:struc_change} are the uncertainties of the median values, which we estimate by (1) bootstrapping the sample 1000 times and (2) during each bootstrap resampling the inferred structural properties at \zf from a normal distribution with the estimated uncertainties described in the final step of section \ref{sec:diss_method}.
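The bootstrap procedure described above can be sketched as follows; this is a simplified stand-in for the actual analysis, with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_median_error(values, errors, n_boot=1000):
    """Uncertainty on the median of an inferred structural change,
    combining (1) bootstrap resampling of the sample with (2) Gaussian
    resampling of each value within its measurement uncertainty."""
    values = np.asarray(values, float)
    errors = np.asarray(errors, float)
    n = values.size
    medians = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                        # resample galaxies
        perturbed = rng.normal(values[idx], errors[idx])   # resample values
        medians[i] = np.median(perturbed)
    return float(medians.std())
```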
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\dms & $\rm{\Delta log\, R_e}$ & $\rm{\Delta log\, \Sigma_e}$ & $\rm{\Delta log\, \Sigma_{1kpc}}$ & $\rm{\Delta log \,(M_{1kpc}/M_*)}$ \\
\hline
$>1$ & $0.11\pm0.03$ ($0.17\pm0.03$) & $-0.07\pm0.04$ ($0.08\pm0.05$) & $-0.01\pm0.04$ ($0.21\pm0.03$) & $-0.27\pm0.04$ ($-0.40\pm0.03$) \\
$0.1-1$ & $0.02\pm 0.02$ ($0.07\pm 0.02$) & $0.15\pm 0.05$ ($0.22\pm 0.05$) & $0.24\pm 0.04$ ($0.36\pm0.03$) & $-0.08\pm 0.03$ ($-0.19\pm 0.03$) \\
$0.01-0.1$ & $-0.19\pm 0.05$ ($-0.11\pm 0.04$) & $0.55\pm 0.09$ ($0.65\pm 0.07$) & $0.48\pm0.05$ ($0.65\pm 0.05$) & $0.14\pm 0.05$ ($0.05\pm 0.04$) \\
$<0.01$ & $-0.34\pm 0.04$ ($-0.28\pm 0.04$) & $0.91\pm 0.08$ ($0.93\pm 0.07$) & $0.72\pm 0.05$ ($0.76\pm 0.04$) & $0.29\pm 0.04$ ($0.21\pm 0.04$) \\
\hline
\end{tabular}
\caption{The median inferred structural changes since \zf for galaxies with different \dms (also see Figure \ref{fig:time_evo_morph}). The values in the parentheses are the results of using the enlarged sample for the reconstruction of structural evolution (section \ref{sec:diss_method}). The errors represent the uncertainties of the median.}
\label{tab:struc_change}
\end{table*}
A clear trend of the structural evolution since \zf with \dms, with an absolute correlation coefficient of $\sim0.5$ for both the Spearman rank and Pearson tests, is observed for all parameters considered. $\rm{\Delta log R_e}$ increases with \dms: galaxies observed to have on-going star formation (i.e. large \dms) grew in size since \zf. For galaxies with suppressed star formation we see that \re decreases, and the decrease continuously becomes more prominent as \dms becomes smaller, i.e. as the galaxies become more quiescent: galaxies do shrink as they approach quiescence, at least when the size is measured by \re.
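For reference, the two correlation statistics quoted above can be computed with a few lines of NumPy (the helper names are ours; SciPy's `scipy.stats` offers equivalent routines):

```python
import numpy as np

def pearson(x, y):
    """Pearson linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (this simple ranking assumes no tied values)."""
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    return pearson(rank(x), rank(y))
```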
In general, $\rm{\Delta log \Sigma_e}$, $\rm{\Delta log \Sigma_{1kpc}}$ and $\rm{\Delta log (M_{1kpc}/M_*)}$ decrease with increasing \dms. On the one hand, for galaxies above the star-forming main sequence, i.e. \dms$>1$, \Se does not significantly evolve since \zf. Depending on the exact sample selection (step two described in section \ref{sec:diss_method}), \Sone either remains constant using the reference sample adopted in this work, or exhibits a very mild increase of $\approx0.2$ dex using the enlarged sample. On the other hand, since \zf, for galaxies with very little or no on-going star formation by the time of \zobs, \Se and \Sone significantly increase by $\approx0.9$ dex and $\approx0.7$ dex, respectively, regardless of the adopted sample. More importantly, as Figure \ref{fig:delta1kpc_deltare} shows, we find that the inferred evolution of \Sone is larger than that of \Se for galaxies with \dms$>0.1$, while for those with \dms$<0.1$ the opposite holds, namely $\rm{\Delta log \Sigma_e}$ is larger than $\rm{\Delta log \Sigma_{1kpc}}$, and the difference, $\rm{\Delta log \Sigma_{1kpc}-\Delta log \Sigma_{e}}$, becomes more negative as the galaxies become more quiescent.
The findings above have two important implications. First, galaxies grow in mass preferentially in their central regions while they are still in the star-forming phase, which explains the $\rm{\Delta log \Sigma_{1kpc}\gtrsim\Delta log \Sigma_{e}}$ that we find for \dms$>0.1$ galaxies. This is consistent with what we found earlier in section \ref{sec:sfh_morp_sfg}, namely that compact SFGs are not only more likely to have a sizeable presence of older stellar populations (Figure \ref{fig:F_mulSF}) but also have a larger fractional mass in them (Figure \ref{fig:old_mass_frac}) than extended SFGs. We note that this conclusion is also consistent with that reached by \citet{Barro2017} (see their section 4), which, however, is based upon totally different assumptions. \citeauthor{Barro2017} showed that the scatter of the relationship between $\Sigma$ and $M_*$ is smaller than the expected growth in mass within the redshift intervals covered in their analysis. Because of this, they argued that it was reasonable to take the observed $\Sigma$-$M_*$ relationship as a proxy for the evolutionary track of SFGs. Because the observed slope of the \Sone-$M_*$ relationship is steeper than that of the \Se-$M_*$ relationship, \citeauthor{Barro2017} concluded that high-redshift galaxies preferentially build up their central regions within 1 kpc. Our approach also remains entirely empirical, but it includes the substantial improvement of dispensing with the key assumption of \citet{Barro2017}: we use the SFHs to statistically reconstruct the structural evolution of individual galaxies, including their central densities, from the data alone. It also avoids the progenitor effect by not bundling together galaxies with different formation histories, which would add a spurious contribution to the intrinsic structural evolution, as we already discussed in section \ref{sec:progenitor}.
The second important implication is that the growth in central stellar-mass surface density after galaxies go through their last major episode of star formation, i.e. after they quench, is slower within the central 1 kpc than within \re. We interpret this as a consequence of two effects. First, while still in the star-forming phase, galaxies continuously grow their central stellar mass densities. When quenching starts, the central density, e.g. \Sone, is about to reach its maximum value (i.e. the knee of the pistol pattern seen in Figure \ref{fig:pistol}), and its growth rate slows down. Meanwhile, after star formation quenches in the center, galaxies can still keep growing their masses and sizes, in part through circum-nuclear, in-situ star formation, and in part through galaxy mergers \citep[e.g.][]{Bezanson2009,vanDokkum2010,Oser2012,Ji2022}. However, unlike the growth of the central regions, which is likely fed by dissipative accretion of gas (see next section), this after-quenching growth also includes, and very likely is dominated by, less dissipative processes such as minor mergers that are more likely to affect the outer regions. These two effects combined can lead to the observed $\rm{\Delta log \Sigma_{1kpc}\lesssim\Delta log \Sigma_{e}}$ that we find for the galaxies with small \dms.
\subsection{On the Co-happening of Structural Transformations and Quenching} \label{sec:diss_imp}
The physics behind the empirically well-established correlations between galaxy structural and star-formation properties has long eluded us. Critical but still open questions are whether a causal link exists between structural transformations and quenching, or whether both are the result of a third, unidentified process, and what the relative timing of the two phenomena is \citep[e.g.][]{Khochfar2006,Wellons2015,Zolotov2015,Tacchella2016,Lilly2016}.
To empirically disentangle the timing sequence, and to distinguish the possibility of a direct causal link between quenching and structural transformations from other possibilities, the first key step is to eliminate any contribution from the progenitor effect \citep{Lilly2016, Ji2022a}, as we already explained in section \ref{sec:intro}, namely to select galaxies with similar \zf and assembly histories.
So far, we have used the SFHs to separate galaxies formed at different epochs and having different assembly histories. We have shown that, while the progenitor effect clearly exists and is observed (section \ref{sec:progenitor}), it alone is not enough to explain the observed correlations among \dms, \Sone, \Se, \Mone and \re (sections \ref{sec:pistol} and \ref{sec:diss_change}), pointing to the possibility of a physical link between quenching and structural transformations. Our findings are in broad agreement with the theoretical study of \citet{Wellons2015}, who found that both the physical compaction of galaxies and the progenitor effect are responsible for the formation of compact QGs in the Illustris cosmological simulations.
Generally speaking, galaxies at cosmic noon have much larger gas fractions than local ones, around 50\% and reaching up to $\approx 80$\% \citep{Tacconi2020}, and the rate of gas inflow is also higher \citep[e.g.][]{Dekel2013,Scoville2017}. As a result of direct accretion, dynamical instabilities or wet mergers, inflowing gas sinks to the center of the gravitational potential, triggers central starbursts and builds up the dense central regions of galaxies, a process often referred to as wet compaction. Recent ALMA observations of galaxies at this epoch indeed found that the distribution of cold gas is more compact than that of stars \citep[e.g.][]{Barro2016,Spilker2016,Tadaki2017,Kaasinen2020,GomezGuijarro2022}, supporting the notion that high-redshift galaxies build up their centers through dissipative processes associated with cold gas, and hence adding empirical support to the scenario of wet compaction.
Thanks to the reconstruction of the structural evolution of individual galaxies described in sections \ref{sec:diss_method} and \ref{sec:diss_change}, we are able to empirically derive their trajectories, from \zf to \zobs, in the planes of $M_*$ vs. structural property proxies, shown in Figure \ref{fig:time_evo_diagrams}. While there is a relatively large dispersion among the individual evolutionary vectors, the median trajectories gradually change as a function of \dms toward the direction expected from wet compaction processes, consistent with what we found in section \ref{sec:diss_change} and Figure \ref{fig:time_evo_morph}. The slope of the evolutionary vectors, which encodes the relationship between the growth in stellar mass since \zf, i.e. $\rm{\Delta logM_*}$, and the change of structural properties, can be compared quantitatively with theoretical predictions. Intriguingly, Figure \ref{fig:time_evo_diagrams} shows that for galaxies with \dms$<0.1$ our reconstructed median evolutionary trajectory of the central density, both \Se (top-right panel) and \Sone (bottom-left panel), closely mirrors the one predicted by the simulations of wet compaction \citep{Zolotov2015}. We show this in a more quantitative way in Figure \ref{fig:slope_dist}, where we plot the distributions of the slopes of the inferred evolutionary vectors, i.e. $\rm{\Delta log\Sigma_e/\Delta logM_*}$ and $\rm{\Delta log\Sigma_{1kpc}/\Delta logM_*}$, for galaxies with \dms$<0.1$. To estimate the uncertainties of the distributions, similarly to what we did in section \ref{sec:diss_change}, we bootstrap the sample 1000 times and during each bootstrap we also resample the individual values using the measurement uncertainties. As Figure \ref{fig:slope_dist} shows, the median evolutionary slope measured by our method is in quantitative agreement with that predicted by the hydrodynamical simulations of wet compaction.
Although the consistency between our empirical constraints and the simulations shows that wet compaction is a viable mechanism, we stress that we do not know what exact process(es) is responsible for the compaction. Physically, wet compaction can be triggered by several physical processes, including galaxy mergers \citep{Hopkins2006}, counter-rotating streams \citep{Danovich2015}, recycled gas with low angular momentum \citep{Dekel2014} and violent disc instabilities \citep{Dekel2009}. A common feature of all these mechanisms, however, is that once the inflow of gas ends, it is the central regions that first become gas-depleted, halting further stellar mass growth there (e.g. as traced by \Sone) and leading to inside-out quenching \citep{Ceverino2015,Zolotov2015,Tacchella2016}. This scenario provides a broad explanation for the correlations observed among $M_*$, $\Sigma$ and SFR \citep{Barro2013,Barro2016,Barro2017}. It also is in good quantitative agreement with what we found in Figure \ref{fig:delta1kpc_deltare}.
In conclusion, with the progenitor effect accounted for, the findings described here suggest a direct connection, at least in the form of simultaneous occurrence, between the process of quenching and the structural transformations. This temporal coincidence does not by itself imply a causal link between the two phenomena. We suggest that a link is provided by the mechanism that first triggers gas accretion into the central regions, with subsequent star formation, and then shuts it down, eventually resulting in quenching. The accreted mass is large and becomes compact (likely due to dissipative gaseous accretion), changing the dynamical state of the central region (Ji \& Giavalisco 2022, in preparation), and it also causes adiabatic contraction (Giavalisco \& Ji 2022, in preparation). As we have seen, the magnitude of the structural transformations that appear to take place is quantitatively consistent with the predictions of wet compaction models, where this terminology generically represents a family of processes that grow the central mass of galaxies through dissipative gaseous accretion. Regardless of the exact physical mechanism (or mechanisms) behind compaction, one important conclusion of our study is that quenching takes place at the same time as the formation of a compact, dense central core, which reaches its maximum density as star formation terminates.
\section{Caveats}
We now explicitly address a few caveats of this study. First, the current nonparametric SFH reconstruction requires assuming a prior on how the SFR changes with time on small timescales to enable the SED fitting to converge. Using different priors can lead to systematic shifts in the measured physical parameters. We have run a number of tests to constrain the magnitude of this uncertainty. As we already mentioned in section \ref{sec:prosp} and in \citetalias{Ji2022}, however, at the moment we unfortunately do not know what the optimal prior is, a situation that can hopefully be improved with future large spectroscopic surveys at high redshift. Fortunately, as one can see in Appendix \ref{app:cont_diri}, tight correlations are observed between the parameters of interest to this study obtained with different priors. There is therefore evidence that the results of this work are not overly sensitive to the assumed nonparametric SFH priors. Second, throughout the paper we used \zf as the proxy for the formation epoch of galaxies. This is purely an empirical choice, and there could be other, better characteristic timescales (redshifts) to use. For example, in \citetalias{Ji2022} we argued that the best characteristic redshift to mitigate the progenitor effect should be the one that minimizes the scatter of the distribution of \Sone (see section 3.3 of \citetalias{Ji2022}). Perhaps due to the relatively small sample size and large scatter (both intrinsic and introduced by the current measurements), the best characteristic redshift to use remains inconclusive, and we plan to investigate this aspect further in the future. Third, in this work SED modeling was done with the integrated photometry, and we ignored possible color gradients inside the galaxies.
Such gradients can make the light distribution different from the mass distribution, and hence introduce systematics in our interpretation of the evolution of morphological properties. We plan to expand this study to include spatially resolved measurements using upcoming data from JWST. Finally, we note that we entirely ignore environmental effects on galaxy evolution in our interpretation of the observations, because in this work we only focus on massive galaxies whose evolution, based on current observations, should be primarily driven by internal processes rather than by the external environment, as mentioned in section \ref{sec:intro}. That said, a detailed understanding of when and how environmental effects on galaxy evolution become important remains incomplete. A few recent studies found evidence that a dense environment might even be able to affect the evolution of very massive galaxies ($\log_{10}(M_*/M_\sun)\ge10.5$) at cosmic noon by altering their molecular gas content \citep[e.g.,][]{Wang2018,Zavala2019}. However, these studies were based on a very limited number of known galaxy (proto-)clusters at $z>2$, and statistical samples are still required to confirm these findings. Therefore, we cannot exclude that environmental effects might still contribute to some part of the apparent structural evolution described here, and this will need further investigation in the future.
\section{Summary}
In this work we studied the relationships among the structural properties, the star-formation properties, and the mass assembly histories of a carefully characterized sample of galaxies with stellar mass $\log_{10}(M_*/M_\sun)\ge10.3$ at the cosmic noon epoch, \zobs$\sim2$. We reconstructed high-fidelity, nonparametric SFHs of these galaxies using the fully Bayesian SED fitting code \prospector. We found strong correlations between the structural properties of galaxies and their assembly histories. Specifically,
\begin{itemize}
\item for QGs (section \ref{sec:sfh_morp_qg}), we found that the SFH of compact QGs, i.e. those having larger \Sone, \Se and \Mone, is similar to that of the average QG, which assembled most of its mass through an old burst of star formation whose SFR then gradually declined toward the quenching phase. The SFH of extended QGs, however, is very similar to that of post-starburst galaxies, namely galaxies that experienced a major, massive burst of star formation $<1$ Gyr ago. Their morphology is also more disturbed than that of the compact ones.
\item for SFGs (section \ref{sec:sfh_morp_sfg}), we found that as they become more compact, their SFHs are more likely to show multiple, prominent star-forming episodes. In addition, among those with multiple star-formation episodes, we found that the fractional mass formed during the older episode, typically with an age $>1$ Gyr, also increases as the SFGs become more compact.
\end{itemize}
With these SFHs in hand, we were able to separate galaxies formed at different epochs and with different assembly histories, and hence to mitigate any form of the progenitor effect (e.g. the dependence of the size and central mass density on the epoch of formation), which otherwise biases the observed correlations between galaxies' structures and star-formation activities. We showed that
\begin{itemize}
\item the progenitor effect is clearly observed in the sample galaxies discussed here (section \ref{sec:progenitor}): galaxies that formed earlier, i.e. with larger \zf, tend to have smaller sizes and larger densities.
\item once the formation epoch (\zf) of the galaxies is controlled for, the distributions of galaxies in the diagrams of \dms, i.e. the distance from the star-forming main sequence, versus various compactness metrics, including \Sone, \Se and \Mone, still exhibit the well-known pistol pattern (section \ref{sec:pistol}): as galaxies become quiescent they also become more compact, and the distributions of the compactness metrics become narrower. This suggests that the progenitor effect alone is not enough to explain the apparent correlations between the structural and star-formation properties of galaxies at cosmic noon.
\end{itemize}
Finally, we introduced a novel, purely empirical approach that exploits the SFHs to reconstruct the evolution of the structural properties of individual galaxies (section \ref{sec:diss_method}). Unlike in earlier studies, our method naturally takes the assembly history of galaxies into account, and the population averages that define the general trends are obtained from the individual galaxies as they evolve, rather than being predictions from models. We showed that, since \zf,
\begin{itemize}
\item the changes of structural properties are strongly correlated with \dms (section \ref{sec:diss_change}). In particular, SFGs, i.e. galaxies with large \dms, grew in size, while their compactness (\Sone, \Se and \Mone) is consistent with either remaining constant or slightly increasing. QGs, i.e. those with small \dms, exhibit a significant decrease in \re and an increase in compactness. Importantly, on the one hand, the change of \Se is smaller than that of \Sone for SFGs, suggesting that galaxies preferentially grow their central regions while still in the star-forming phase. On the other hand, the opposite trend, namely a change of \Se larger than that of \Sone, is found for QGs, suggesting that, after galaxies go through quenching, they grow more slowly within their central 1 kpc than within \re.
\item the evolutionary vectors in the diagrams of stellar mass vs. structural properties enable us to compare with the predictions from cosmological simulations (section \ref{sec:diss_imp}). Our reconstructed evolutions of \Se and of \Sone for QGs are in quantitative agreement with the predictions of wet compaction made by simulations.
\end{itemize}
Combining all these results together, we converge to a consistent picture where the quenching of star formation in massive galaxies and their structural transformations in the form of decreasing \re and increasing central stellar-mass density take place roughly at the same time. To see this, first the progenitor effect must be accounted for when interpreting any correlation between the structural and star-formation properties. Second, the progenitor effect can only be partly responsible for the observed evolution, implying the existence of some physical link between galaxies' structural transformations and quenching. Our reconstructed structural evolution suggests that the wet compaction is one viable such link that helps build up the central dense regions of galaxies via highly dissipative gaseous accretion at cosmic noon epoch. After galaxies go through their final major episode of star formation and eventually become quiescent, the growth of their central regions (i.e. $<1$ kpc) slows down, while galaxy mergers can continue growing their outskirts.
\section{Acknowledgments}
We thank the anonymous referee for their useful comments. This work was completed in part with resources provided by the University of Massachusetts' Green High Performance Computing Cluster (GHPCC).
\software{GALFIT \citep{Peng2002,galfit}, Prospector \citep{Leja2017,Johnson2021}, FSPS \citep{Conroy2009,Conroy2010}, MIST \citep{Choi2016,Dotter2016}, MILES \citep{Falcon-Barroso2011}}
\appendix
\section{Comparison of the physical parameters derived using the Dirichlet prior and the Continuity prior} \label{app:cont_diri}
In Figure \ref{fig:D_v_C} we compare the physical properties derived using the Dirichlet prior (the fiducial one in the main text) with the ones derived using the Continuity prior, which, unlike the former, strongly disfavors sharp changes of the SFR between adjacent lookback time bins. The comparison covers not only the basic properties such as $M_*$ and SFR, but also the parameters that we introduced in \citetalias{Ji2022} to quantify the shape of the reconstructed SFHs. Systematic offsets and scatter notwithstanding, for all parameters we see strong correlations between the two measurements, suggesting that the results of this work should not be qualitatively sensitive to the assumed prior.
\section{The stacked SFH of individual galaxy groups divided using \Se and \Mone} \label{app:stack_SFH_other}
In section \ref{sec:morph_SFH} of the main text, we use \Sone to separate compact and extended galaxies. Here we use \Se and \Mone to do the separation, and then stack the SFHs of the individual groups of galaxies. The results are shown in Figure \ref{fig:stack_SFH_other}; no substantial change to our results is found.
\bibliography{ji_2022_compaction_sfh}{}
\bibliographystyle{aasjournal}
\title{FilDReaMS \\
1. Presentation of a new method
for \textbf{Fil}ament \textbf{D}etection and \textbf{Re}construction \textbf{a}t \textbf{M}ultiple \textbf{S}cales.}
\titlerunning{FilDReaMS. 1. A new method to detect filaments and determine their orientations}
\author{J.-S. Carrière
\inst{1}
\and
L. Montier\inst{1}
\and
K. Ferrière\inst{1}
\and
I. Ristorcelli\inst{1}
}
\institute{IRAP, Université de Toulouse, CNRS, 9 avenue du Colonel Roche, BP 44346, 31028 Toulouse Cedex 4, France\\
\email{jeansebastienpaulcarriere@gmail.com}
}
\abstract
{Filamentary structures appear to be ubiquitous in the interstellar medium. Being able to detect and characterize them is the first step toward understanding their origin, their evolution, and their role in the Galactic cycle of matter.}
{We present a new method, called {\tt FilDReaMS}, to detect and analyze filaments in a given image. This method is meant to be fast, user-friendly, multi-scale, and suited for statistical studies.}
{%
The input image is scanned with a rectangular model bar, which makes it possible to uncover structures that can be locally approximated by this bar and to derive their orientations. The bar width can be varied over a broad range of values to probe filaments of different widths.}
{We performed several series of tests to validate the method and to assess its sensitivity to the level of noise, the filament aspect ratios, and the dynamic range of filament intensities. We found that the method exhibits very good performance at recovering the orientation of the filamentary structures, with an accuracy of $0.5^{\circ}$ in nominal conditions, up to $3^{\circ}$ in the worst case scenario with a high level of noise. The filament widths are recovered with uncertainties better than $0.5\,\rm{px}$ (pixels) in most cases, which can extend up to $3\,\rm{px}$ in the case of low signal-to-noise ratios. An attempt to build a correspondence between Plummer-type filament profiles and outcomes of the method is proposed, but remains sensitive to the local environment.}
{Our method is found to be robust and adapted to the identification and the reconstruction of filamentary structures in various environments, from diffuse to dense media. It allows us to explore the hierarchical scales of these filamentary structures with high reliability, especially when dealing with their orientations.}
\keywords{ISM: clouds --
ISM: structures --
ISM: magnetic fields --
dust --
infrared: ISM --
submillimeter: ISM --
techniques: image processing
}
\section{Introduction}
\label{sec:introduction}
A large variety of methods have already been developed to extract elongated structures in two-dimensional (2D) maps. Their approaches may be divided into three main categories: some focus on a purely \textit{local} analysis of the structures, based on local derivatives at each pixel; others adopt a \textit{non-local} approach, exploring a larger space around each pixel to look for specific scales; the third category of methods proposes a \textit{global} analysis of the whole map, applying a multi-scale decomposition.
\textit{Local} methods compute either the gradient (first-order derivatives) \citep[e.g.,][]{Soler_2013, Soler_2016}
or the Hessian matrix (second-order derivatives) \citep[e.g.,][]{Polychroni_2013, Schisano_2014, Bracco_2016} at each pixel
of the considered map. %
In some cases, the main purpose is to derive the orientations of elongated structures for statistical purposes \citep[e.g.,][]{Soler_2013, Soler_2016, Bracco_2016}.
In other cases, filament skeletons are extracted by connecting contiguous pixels along the crests of the (intensity or column-density) distribution.
For instance, the {\tt DisPerSe} method, originally developed to recover filament skeletons in cosmic web maps \citep{Sousbie_2011}, was successfully applied to {\it Herschel} column density maps \citep[][]{Arzoumanian_2011, Arzoumanian_2019, Peretto_2012, Palmeirim_2013} and to $^{13}{\rm CO}$ intensity maps \citep{Panopoulou_2014}.
A limitation of the local approach is the difficulty of detecting faint structures such as striations.\footnote{
Here, we use the term striations to refer to the faint and periodic structures seen in the {\it Herschel} maps. These are similar to the periodic magnetic-field-aligned structures detected in the diffuse $^{12}$CO emission from the Taurus molecular cloud \citep{Goldsmith_2008, Narayanan_2008}.}
\textit{Non-local} methods focus on a given scale around each pixel. The template-matching approach \citep{Juvela_2016} allows one to look for any specific morphology of given dimensions and to build the probability of finding such oriented structures through a kernel convolution on the map. Another efficient approach is the Rolling Hough Transform \citep[{\tt RHT},][]{Clark_2014} method, which computes an estimator of the level of linearity of the structures in the neighbourhood of a pixel at a given scale, making use of the Hough transform. This method was extensively used in recent studies of HI, {\it Herschel}, and {\it Planck} maps \citep[][]{Clark_2015, Clark_2019, Malinen_2016, Panopoulou_2016, Alina_2019}.
A third approach is the {\tt filfinder} method, which extracts filament skeletons \citep{Koch_215}. However, in contrast to {\tt DisPerSe}, {\tt filfinder} starts with a spatial filtering at a given scale and covers a dynamic range broad enough to include striations.
Finally \textit{global} methods offer a multi-scale and complete analysis of a field.
The {\tt getfilaments} method \citep{Men'shchikov_2013} extracts a filament network with the help of statistical tools and morphological filtering to remove background noise. It also extracts point sources and is able to perform a multi-waveband analysis. This method is very complete, but it requires some fine-tuning and additional tools to extract the filament orientations and scales. It was already applied to {\it Herschel} maps \citep{Cox_2016, Rivera-Ingraham_2016, Rivera_Ingraham_2017, Arzoumanian_2019}.
The wavelet-based methods by \citet{Robitaille_2019} and by \citet{Ossenkopf-Okada_2019} use an anisotropic-wavelet analysis to extract a whole filament network by analyzing fluctuations in the map as functions of spatial scale.
This kind of method may circumvent the well-known biases of other commonly used methods \citep{Panopoulou_2017} and remains quite fast. However, it requires additional steps to extract filament orientations. Moreover, although this method is multi-scale, the wavelet analysis implies logarithmically spaced scales, which results in a lower resolution at larger scales.
A full comparison between these different methods would be extremely useful, %
but to date only a few limited comparisons exist.
For instance, \citet[][]{Juvela_2016} compared the template-matching and {\tt RHT} methods applied to simulation data. He found that both methods give equally good results, except in simulations with significant noise and background fluctuations, for which template matching performs better.
More recently, \citet{Micelotta_2021} compared the gradient and {\tt RHT} methods, again on simulation data.
They found similarities, but also disparities, in the results, and they attributed the disparities to intrinsic differences in the filamentary structures selected by both methods.
Here, we would like to study the relative orientations between filaments and the local magnetic field.
To that end, we need a filament extraction method that can operate in a broad range of Galactic environments, from dense and complex structures to the more diffuse (neutral) medium, and can provide robust and homogeneous filament detections in various fields for a multi-scale statistical analysis. While none of the methods described above is fully satisfactory for our purpose, the closest is probably {\tt RHT}, which is easy to use and has already given promising results \citep[see, e.g.,][]{Clark_2014, Malinen_2016, Panopoulou_2016, Alina_2019}. Therefore, we decided to start from {\tt RHT} and adapt it %
to match our requirements.
This led us to develop a new method, called {\tt FilDReaMS} ({\bf Fil}ament {\bf D}etection and {\bf Re}construction {\bf a}t {\bf M}ultiple {\bf S}cales).
In Sect.~\ref{sec:overview}, we present the main features of our new {\tt FilDReaMS} method, with reference to {\tt RHT}.
In Sect.~\ref{sec:Methodology}, we provide a detailed description of the {\tt FilDReaMS} methodology.
In Sect.~\ref{sec:Validation}, we present the results of several series of simulations designed to validate {\tt FilDReaMS}.
In Sect.~\ref{sec:comparison}, we apply {\tt FilDReaMS} to the {\it Herschel} G210 field and compare our results to those previously obtained with {\tt RHT}.
In Sect.~\ref{sec:conclusion}, we conclude our paper.
\section{General overview}
\label{sec:overview}
\subsection{Introduction to \texttt{FilDReaMS}}
\label{sec:FilDReaMS_intro}
Let us consider a map of a given quantity $\I$, which, in the astrophysical context, can represent intensity, column density, or temperature. This map will be our reference to present the {\tt FilDReaMS} method. For simplicity, we will refer to $\I$ as being an intensity, keeping in mind that $\I$ actually has a broader meaning.
The purpose of applying {\tt FilDReaMS} to the map of $\I$ is to identify filamentary structures over a range of spatial scales and intensities, from the largest and brightest filaments down to striations. By considering that filaments can be locally approximated by rectangular bars, {\tt FilDReaMS} is able to extract two of their characteristics: the widths and the orientations of the associated bars.
In the following, the rectangular bar used by {\tt FilDReaMS} is referred to as the model bar.
It is characterized by a width $W_{\rm b}$, a length $L_{\rm b}$, and an aspect ratio $r_{\rm b} = L_{\rm b} / W_{\rm b}$.
For any filament detected with a model bar of width $W_{\rm b}$, $W_{\rm b}$ is referred to as the bar width of the filament.
The orientation angle of the model bar, $\psi_{\rm b}$, is defined with respect to a given north direction (e.g., Galactic north for an astrophysical map) and taken to increase counterclockwise from north.
This definition is consistent with the IAU convention for polarisation angles.
Since the model bar is symmetric, $\psi_{\rm b}$ can be defined over a $180^{\circ}$ range, which we choose to be $[-90^{\circ}, +90^{\circ}]$.
We adopt the same convention for the orientation angle of a filament, $\psi_{\rm f}$, defined in Sect.~\ref{sec:orientation_angle}.
Furthermore, when considering the difference between two angles defined in $[-90^{\circ}, +90^{\circ}]$, we require the result to also lie in the range $[-90^{\circ}, +90^{\circ}]$, possibly by adding or subtracting $180^{\circ}$.
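The wrapping rule above can be written compactly. The following Python snippet is an illustrative sketch of this convention (the function name is ours, not part of {\tt FilDReaMS}):

```python
def wrap_angle_diff(psi1, psi2):
    """Difference psi1 - psi2 between two orientation angles given in
    degrees in [-90, +90], brought back into the same range by adding
    or subtracting 180 (orientations are defined modulo 180 degrees)."""
    return (psi1 - psi2 + 90.0) % 180.0 - 90.0
```

For instance, the difference between $+80^{\circ}$ and $-80^{\circ}$ is not $160^{\circ}$ but $-20^{\circ}$, since the two orientations differ by only $20^{\circ}$ once the $180^{\circ}$ symmetry of the bar is taken into account.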
All the parameters related to {\tt FilDReaMS}
are listed in Table~\ref{tab:FilDReaMS_notations}.
\begin{table*}
\caption{List of all the symbols used in the paper.}
\centering
\begin{threeparttable}
\begin{tabular}{m{3.0cm} m{12.5cm}}
\midrule\midrule
A & Initial map of intensity $\I$\\
\cmidrule(l r ){1-2}
$\I$ & Intensity in a broad sense (intensity, column density, or temperature)\\
\cmidrule(l r ){1-2}
B & Smoothed map resulting from the convolution of A with a 2D top-hat kernel of radius $R$\\
\cmidrule(l r ){1-2}
$R$ & Radius of the 2D top-hat kernel\\
\cmidrule(l r ){1-2}
C & Binary map derived from A$-$B\\
\cmidrule(l r ){1-2}
$i$ & Index of the considered pixel of map C for the detection of bar-like filaments\\
\cmidrule(l r ){1-2}
$W_{\rm b}$ & Width of the model bar\\
\cmidrule(l r ){1-2}
$L_{\rm b}$ & Length of the model bar\\
\cmidrule(l r ){1-2}
$r_{\rm b} = L_{\rm b}/W_{\rm b}$ & Aspect ratio of the model bar\\
\cmidrule(l r ){1-2}
$\psi_{\rm b}$ & Orientation angle of the model bar\\
\cmidrule(l r ){1-2}
$\psi_{\rm f}$ & Orientation angle of a filament\\
\cmidrule(l r ){1-2}
$f^{\rm M}$ & Measured orientation function\\
\cmidrule(l r ){1-2}
$\sigma_f$ & Median absolute deviation of $f^{\rm M}$\\
\cmidrule(l r ){1-2}
$f^{\rm I}$ & Ideal orientation function\\
\cmidrule(l r ){1-2}
$\Delta\psi$ & Width of the angular window over which $f^{\rm M}$ is compared to $f^{\rm I}$\\
\cmidrule(l r ){1-2}
$\chi_{\rm r}^{\rm M}$ & Measure of the normalized difference between $f^{\rm M}$ and $f^{\rm I}$ (Eq.~\ref{eq:delta})\\
\cmidrule(l r ){1-2}
$\I_0$ & Central intensity of synthetic filaments\\
\cmidrule(l r ){1-2}
$j$ & Index of the considered pixel of map A in one Monte-Carlo iteration\\
\cmidrule(l r ){1-2}
$\sigma_{{\rm A}_j}$ & Standard deviation of sub-region A$_j$ of map A, centered on pixel $j$\\
\cmidrule(l r ){1-2}
$\SNRfil = \I_0 / \sigma_{{\rm A}_j}$ & Signal-to-noise ratio of the ideal filament in the Monte-Carlo simulations\\
\cmidrule(l r ){1-2}
$f^{\rm S}$ & Synthetic orientation function\\
\cmidrule(l r ){1-2}
$\chi_{\rm r}^{\rm S}$ & Measure of the normalized difference between $f^{\rm S}$ and $f^{\rm I}$\\
\cmidrule(l r ){1-2}
$(\chi_{\rm r})_{\rm th}$ & Statistical threshold on $\chi_{\rm r}$\\
\cmidrule(l r ){1-2}
$S = (\chi_{\rm r})_{\rm th} / \chi_{\rm r}$ & Significance of filament detection\\
\cmidrule(l r ){1-2}
C' & Binary map composed of the model bars associated with all the significant filaments\\
\cmidrule(l r ){1-2}
R & Map of reconstructed filaments\\
\cmidrule(l r ){1-2}
$i'$ & Index of the considered pixel of map R\\
\cmidrule(l r ){1-2}
$\psi_{\rm f}^{\star}$ & Orientation angle of the most significant filament\\
\cmidrule(l r ){1-2}
$W_{\rm b}^{\star}$ & Bar width of the most significant filament\\
\cmidrule(l r ){1-2}
$\sigma_{\mathcal{W}}$ & Standard deviation of white noise in the sets of simulations\\
\cmidrule(l r ){1-2}
$\sigma_{\mathcal{B}}$ & Standard deviation of Brownian noise in the sets of simulations\\
\cmidrule(l r ){1-2}
$N_{\rm pix}$ & Number of pixels whose most significant bar width is $W_{\rm b}^\star$\\
\cmidrule(l r ){1-2}
$N_{\rm map}$ & Total number of pixels in the map\\
\cmidrule(l r ){1-2}
$W_{\rm b}^{\star{\rm peak}}$ & Most prevalent bar width for the entire map\\
\midrule\midrule
\end{tabular}
\end{threeparttable}
\label{tab:FilDReaMS_notations}
\end{table*}
To provide a first application of {\tt FilDReaMS}, we selected one of the {\it Herschel} fields observed in the Galactic cold core (GCC) key-program \citep[]{Juvela_GCCI_2010, Juvela_GCCIII_2012}, the so-called G210 field, which corresponds to the high Galactic latitude star-forming region L1642. This region was in particular studied in detail by \citet{Malinen_2016}, who investigated the relative orientations between the magnetic field (traced with {\it Planck} polarisation data) and filaments extracted from the G210 {\it Herschel} map using the {\tt RHT} method.
G210 is located at Galactic longitude $l = 210.90^{\circ}$, Galactic latitude $b = -36.55^{\circ}$, and distance $d = 140\pm20\,{\rm pc}$ \citep{Montillaud_2015}.
Its ${\rm H_2}$ column density map has dimensions $\Delta l \times \Delta b = 1.28^{\circ}\times1.22^{\circ}$, corresponding to $3.1\,{\rm pc}\times3.0\,{\rm pc}$.
The angular size of a pixel, equal to one-third of the beam size, is $12''$, corresponding to $0.0081\,{\rm pc}$.
The ${\rm H_2}$ column density, $N_{\rm H_2}$, varies over the range $[0.2, 7.5]\times10^{21}\,{\rm cm}^{-2}$.
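As a quick consistency check of the quoted scales, the small-angle conversion from angular to physical size at the distance of G210 can be reproduced as follows (a simple sketch; the variable names are ours):

```python
import math

d_pc = 140.0                                # distance to G210 (Montillaud et al. 2015)
arcsec_per_rad = 3600.0 * 180.0 / math.pi   # ~206265 arcsec per radian

# 12" pixel at 140 pc -> ~0.0081 pc, as quoted in the text
pix_pc = d_pc * 12.0 / arcsec_per_rad

# 1.28 deg field extent at 140 pc -> ~3.1 pc
field_pc = d_pc * 1.28 * math.pi / 180.0
```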
\subsection{Overview of \texttt{RHT}}
\label{sec:RHT_overview}
The {\tt RHT} method developed by \citet{Clark_2014} is based on the Hough transform, designed to search for straight lines in images, even when these are partially filled or dominated by noise. Defining a straight line with two parameters (the orientation $\theta_{\tt H}$ of its normal and the minimal distance $\rho_{\tt H}$ to the origin), the Hough transform makes it possible to pass from the pixel space ($x,y$) to the parameter space ($\rho_{\tt H}$, $\theta_{\tt H}$) by counting the number of pixels located on the same linear feature. In the {\tt RHT} method, in order to focus on straight features and to suppress large-scale structures, a binary version of the original image is first built by subtracting a top-hat-smoothed version of the image and thresholding the result at zero. The Hough transform is then applied locally on this binary image: for each pixel, a simplified version of the Hough transform (with $\rho_{\tt H}=0$) is applied inside a circular area centered on this pixel, in order to pass from pixel space to $\theta_{\tt H}$ space and to build the distribution of orientations of the linear features around this central pixel. Once this has been done iteratively for every pixel of the image, the last step of the method consists in choosing a common threshold above which the orientation distributions are retained and used to derive the local average orientation or the total {\tt RHT} intensity over the whole image.
This method is extremely powerful for estimating the level of linearity inside local regions of an image, irrespective of the overall brightness of the region. It nevertheless suffers from a few limitations. Firstly, because the Hough transform maps pixel space to ($\rho_{\tt H}$, $\theta_{\tt H}$) space, the {\tt RHT} orientation distributions directly depend on the pixelisation of the image, which could subtly bias the results. Secondly, the choice of the threshold used to cut the orientation distributions is arbitrary and may impact the analyses from one image to another.
\subsection{\texttt{RHT} versus \texttt{FilDReaMS}}
\label{sec:RHT_FilDReaMS}
{\tt FilDReaMS} tries to overcome some of the limitations of {\tt RHT}, as explained in the rest of this section. It also makes it possible to access additional information about the widths (more exactly, the bar widths) of the detected filaments.
The sensitivity of the algorithm to pixelisation is inherent in the basic implementation of the {\tt RHT}, which uses the positions of pixel centers relative to a given pixel (centered on $x,y$) to build a $\theta_{\tt H}$ representation. {\tt FilDReaMS} solves this problem through a fundamental change of philosophy: it starts from a discretization of the $\theta_{\tt H}$ space and computes the intersection of a rotated bar with any pixel area centered on ($x,y$).
In {\tt FilDReaMS}, the arbitrary choice of threshold in {\tt RHT} for determining linearity significance is alleviated by a comparison to a random distribution obtained locally using ideal template bars. This process takes into account the noise level and the complexity of the region, and provides a robust assessment of the reliability of the detections.
The determination of a preferred orientation based on the orientation distribution in each pixel is improved in {\tt FilDReaMS} by performing a search for local maxima, allowing us to derive multiple peaks in the orientation distribution with individual significances, instead of averaging the orientation angle estimated over the whole orientation distribution.
\subsection{The main steps of \texttt{FilDReaMS}}
\label{sec:overview_RHT}
In brief, the main steps of {\tt FilDReaMS} are the following:
\begin{enumerate}
\item {\bf Spatial filtering}: a 2D top-hat filtering is applied to the input image, which is then converted to a binary map that contains only structures narrower than a given width,
see Sect.~\ref{sec:binary_map}.\\
\item {\bf Building an orientation distribution}: the matching between the binary map and
a model bar with given width $W_{\rm b}$ and variable orientation
is evaluated at each pixel $i$, in order to build an orientation distribution, see Sect.~\ref{sec:histogram_of_orientation}.\\
\item {\bf Detection of preferred orientations}: local maxima %
in the orientation distribution at each pixel $i$ are identified with preferred orientations, see Sect.~\ref{sec:orientation_angle}.\\
\item {\bf Reliability assessment}: the significance of each preferred orientation is assessed by comparison with an ideal orientation distribution, see Sect.~\ref{sec:significance}.\\
\item {\bf Reconstruction of physical filaments}: the true shape and the intensity of physical filaments is reconstructed from the initial image masked by the binary map, and a filament orientation angle at each pixel $i'$, $(\psi_{\rm f}^{\star})_{i'}$, is derived, see Sect.~\ref{sec:filament_visualisation}.\\
\item {\bf Iteration over various bar widths}: the procedure is repeated for a range of values of $W_{\rm b}$ in order to derive the most significant bar width at each pixel $i'$, $(W_{\rm b}^\star)_{i'}$, as well as the most prevalent bar widths for the entire map, $W_{\rm b}^{\star{\rm peak}}$, see Sect.~\ref{sec:signif_bar_width}.
\end{enumerate}
While step 1 above is identical to the {\tt RHT} processing, steps 2 and 3 have the same objectives as in {\tt RHT} but a totally different implementation, allowing us to address the sensitivity to pixelisation (see Sect.~\ref{sec:histogram_of_orientation}) and the multiplicity of the local preferred orientations in cases where linear structures cross (see Sect.~\ref{sec:orientation_angle}). Finally, steps 4 to 6 are entirely new and specific to {\tt FilDReaMS}.
\section{Detailed methodology}
\label{sec:Methodology}
In this section, we describe in more detail the successive steps of the {\tt FilDReaMS} method applied to a map of intensity $\I$. As explained at the beginning of Sect.~\ref{sec:FilDReaMS_intro}, the word "intensity", used in connection with the symbol $\I$, is to be understood in a broad sense, which includes quantities such as column density and temperature. To illustrate the method, we provide detailed figures that rely on an ${\rm H_2}$ column density ($N_{\rm H_2}$) map of the {\it Herschel} G210 field.
\subsection{Spatial filtering}
\label{sec:binary_map}
Let us start with a map of intensity $\I$, which we will refer to as the initial map A (top left panel of Fig.~\ref{fig:FilDReaMS_method}).
Our purpose is to identify in this map filaments that can be locally approximated by a rectangular bar of width $W_{\rm b}$.
The first step of {\tt FilDReaMS} is to filter out structures wider than $W_{\rm b}$.
The spatial filtering is performed with the help of a 2D top-hat kernel of radius $R$, whose value is adjusted to the value of $W_{\rm b}$ in the manner explained after the next paragraph.
To begin with, the initial map A is convolved with the 2D top-hat kernel to produce a smoothed map B (top middle panel of Fig.~\ref{fig:FilDReaMS_method}).
Roughly speaking, this smoothing removes structures with widths smaller than $\sim 2R$.
The smoothed map B is then subtracted from the initial map A to produce a map A$-$B from which structures with widths larger than $\sim 2R$ are removed.
Finally, the map A$-$B is transformed into a binary map C by setting all the pixels with positive values to 1 (yellow pixels in the right panels of Fig.~\ref{fig:FilDReaMS_method}) and all the pixels with negative values to 0 (dark pixels).
The adjustment of $R$ to $W_{\rm b}$ is done iteratively by considering increasing values of $R$, starting at $R = 2\,{\rm px}$. For each value of $R$, the initial map A is convolved with a kernel of radius $R$ (as explained above), and the binary map C is examined in search of regions of non-zero pixels wider than $W_{\rm b}$. In practice, this is done by convolving C with a 2D top-hat kernel of diameter $(W_{\rm b}+1\,{\rm px})$; when the normalized convolution reaches a value of 1 at any pixel, we may conclude that this pixel is the center of a disk of diameter $(W_{\rm b}+1\,{\rm px})$ filled with non-zero pixels, which in turn implies that C contains a region of non-zero pixels wider than $W_{\rm b}$. At that point, the iteration stops.
Thus, the binary map C represents the contrasted structures (yellow in Fig.~\ref{fig:FilDReaMS_method}) from the initial map A that are not wider than $W_{\rm b}$.
The convolution gives rise to border effects, with a blank band of width $R$ adjacent to the border of the convolved map B. As a result, map B is somewhat smaller than the initial map A. For large values of $W_{\rm b}$, the blank band may represent a significant fraction of map A; in that case, large filaments close to the edge of map A may escape detection.
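As an illustration, the construction of maps B and C from map A can be sketched in a few lines of Python (a minimal sketch using NumPy and SciPy; the iterative adjustment of $R$ to $W_{\rm b}$ and the handling of map borders are omitted):

```python
import numpy as np
from scipy.ndimage import convolve

def tophat_kernel(radius):
    """Normalized 2D top-hat (disk) kernel of the given radius in pixels."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    return disk / disk.sum()

def binary_map(map_a, radius):
    """Map A -> smoothed map B -> binary map C (1 where A - B > 0, else 0)."""
    map_b = convolve(map_a, tophat_kernel(radius), mode="nearest")
    return (map_a - map_b > 0).astype(np.uint8)
```

Structures narrower than $\sim 2R$ survive the subtraction A$-$B and end up as non-zero pixels of C.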
\subsection{Orientation distribution}
\label{sec:histogram_of_orientation}
Let us consider a given pixel $i$ in the full binary map C (top right panel of Fig.~\ref{fig:FilDReaMS_method}) as well as the surrounding sub-region C$_i$ of size $L_{\rm b}$, centered on pixel $i$ (bottom right panel).
Obviously, pixel $i$ must be more distant than $L_{\rm b}/2$ from the border of map C.
Our purpose is to find structures in C$_i$ that can be locally matched to a model bar of width $W_{\rm b}$.
We consider values of the orientation angle of the model bar, $\psi_{\rm b}$ (defined with the conventions described in Sect.~\ref{sec:FilDReaMS_intro}), spanning the range -90$^{\circ}$ to +90$^{\circ}$ in 1$^{\circ}$ steps. For each value of $\psi_{\rm b}$, the model bar is centered on the considered pixel $i$, and we measure the fraction $f_i\,(\psi_{\rm b})$ of
the bar area covered by non-zero pixels
(yellow pixels). This gives us the "measured" orientation function, $f^{\mathrm{M}}_i$ (blue curve in the bottom-middle panel of Fig.~\ref{fig:FilDReaMS_method}).
The computation of the area resulting from the intersection of every pixel with the model bar rotated by $\psi_{\rm b}$ is performed only once, at the beginning of the processing, for all pixels of the $L_{\rm b} \times L_{\rm b}$ domain; it relies on a numerical drizzling approach. In practice, each original pixel is subdivided into $N_{\rm{drizz}}$ sub-pixels, allowing us to estimate the exact intersection of the bar and the pixel at the desired accuracy. We adopt a value of $N_{\rm{drizz}}=101$, scaled by $5/W_{\rm b}$, so as to keep an accuracy better than 0.2\% in the computation of the area intersecting the model bar.
Once computed, this kernel defined over $L_{\rm b} \times L_{\rm b}$ can be translated at any pixel $i$ location and multiplied with the full binary map to obtain the "measured" orientation function for this pixel.
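In simplified form, replacing the drizzled sub-pixel areas by a nearest-pixel membership test for the rotated bar, the measured orientation function of a sub-region C$_i$ can be sketched as:

```python
import numpy as np

def bar_mask(L, W, psi_deg):
    """Boolean mask of a rotated L x W bar centered in an L x L patch."""
    c = (L - 1) / 2.0
    y, x = np.mgrid[0:L, 0:L].astype(float)
    psi = np.deg2rad(psi_deg)
    u = (x - c) * np.cos(psi) + (y - c) * np.sin(psi)   # along the bar
    v = -(x - c) * np.sin(psi) + (y - c) * np.cos(psi)  # across the bar
    return (np.abs(u) <= L / 2.0) & (np.abs(v) <= W / 2.0)

def orientation_function(patch_c, W):
    """f_i(psi_b): fraction of the bar area covered by non-zero pixels of C_i."""
    L = patch_c.shape[0]
    psis = np.arange(-90, 91)
    f = np.array([patch_c[bar_mask(L, W, p)].mean() for p in psis])
    return psis, f
```

For a patch containing a perfect bar, this function reaches 1 at the bar's orientation and drops away from it.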
\subsection{Detection of potential bar-like filaments}
\label{sec:orientation_angle}
The measured orientation function, $f^{\rm M}_i$, makes it possible to detect potential filaments of bar width $W_{\rm b}$ centered on pixel $i$ and to estimate their orientation angle, $(\psi_{\rm f})_i$. We stress that the measured orientation function is relatively smooth over the range of orientation angles, since it results from a kind of convolution with an extended bar at each orientation angle, so that the determination of local maxima is not affected by local pixel-scale fluctuations (see bottom-right panel of Fig.~\ref{fig:RHT_vs_FilDReaMS}).
{\tt FilDReaMS} identifies the local maxima of the measured orientation function, $f^{\rm M}_i$, through an iterative process. It assigns the first peak to the maximum value of $f^{\rm M}_i$ over the whole range of orientation angles. It then assigns to this peak an angular window of width $\Delta\psi$, centered on the peak orientation angle and corresponding to the expected angular extent of the model bar, defined as twice the angle between the bar's diagonals:
\begin{equation}
\label{eq:comparison_window}
\Delta\psi = 4 \ \arctan \left( \frac{1}{r_{\rm b}} \right) \, ,
\end{equation}
where $r_{\rm b} = L_{\rm b}/W_{\rm b}$ is the aspect ratio of the model bar.
\noindent
Once the first peak is identified with its own angular window, {\tt FilDReaMS} looks for any other local maximum over the remaining angular domain, and assigns to this second peak its orientation angle and its own $\Delta\psi$ angular window. This procedure is repeated until all the local maxima in $f^{\rm M}_i$ are identified. In the event that two consecutive peaks, at $\psi_1$ and $\psi_2$, are so close that their windows overlap ($\vert \psi_1 - \psi_2 \vert < \Delta\psi$), they are considered to actually be part of one and the same potential filament; in that case, only the higher peak is retained together with its window, while the lower peak is ignored. All the peaks remaining after the handling of overlapping windows correspond to potential filaments, whereas the discarded peaks are considered to be part of the background.
In the case of Fig.~\ref{fig:RHT_vs_FilDReaMS}, {\tt FilDReaMS} detects two potential filaments with orientation angles $\psi_1$ and $\psi_2$ (bottom-right panel), while the common practice with {\tt RHT} orientation functions is to compute the expectation value above a given threshold, which in this specific case leads to a single preferred orientation (bottom-left panel).
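The iterative selection of peaks with their $\Delta\psi$ windows (Eq.~\ref{eq:comparison_window}) can be sketched as follows (a simplified version that ignores the wrap-around of orientation angles at $\pm 90^{\circ}$):

```python
import numpy as np

def detect_orientations(psis, f, r_b):
    """Return the orientation angles of the retained peaks of f.

    Local maxima are visited in decreasing order of f; a peak is kept only
    if its window of width delta_psi does not overlap the window of a
    higher, already retained peak."""
    delta_psi = np.degrees(4 * np.arctan(1.0 / r_b))
    # indices of local maxima, sorted by decreasing f
    loc = [k for k in range(1, len(f) - 1)
           if f[k] >= f[k - 1] and f[k] >= f[k + 1]]
    loc.sort(key=lambda k: -f[k])
    kept = []
    for k in loc:
        if all(abs(psis[k] - psis[j]) >= delta_psi for j in kept):
            kept.append(k)
    return sorted(psis[k] for k in kept)
```

For a strongly elongated bar (large $r_{\rm b}$), $\Delta\psi$ is narrow and nearby orientations can be resolved; for $r_{\rm b}=3$, $\Delta\psi \simeq 74^{\circ}$ and close peaks are merged.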
\subsection{Reliability assessment of potential bar-like filaments}
\label{sec:significance}
\subsubsection{The significance criterion}
\label{sec:significance_criterion}
To test the reality of a potential filament detected at the considered pixel $i$, we compare the measured orientation function, $f^{\rm M}_i$, to the ideal orientation function, $f^{\rm I}_i$ (right panel of Fig.~\ref{fig:Ideal_hist_window}), that would be obtained for an ideal filament -- i.e., a filament having the exact same shape as the model bar -- superposed on an empty background, at the orientation angle $(\psi_{\rm f})_i$ of the potential filament (left panel). Clearly, at $\psi_{\rm b}=(\psi_{\rm f})_i$, the model bar coincides exactly with the ideal filament, such that all the pixels of the model bar have a value of 1 in the binary map of the ideal filament (middle panel).
As a result, $f^{\rm I}_i$ reaches its peak value, $f^{\rm I}_i = 1$, at $\psi_{\rm b}=(\psi_{\rm f})_i$ (right panel).
Quite expectedly, the width of the peak in $f^{\rm I}_i$ is $\sim \Delta\psi$ (Eq.~\ref{eq:comparison_window}).
{\tt FilDReaMS} then compares the measured and ideal orientation functions over an angular window of width $\Delta\psi$ centered on the peak associated with the potential filament.
This window is both broad enough to capture the relevant characteristics of the potential filament and narrow enough to avoid contamination by a nearby filament at a slightly different angle. The comparison is performed with the help of the parameter
\begin{equation}
\label{eq:delta}
(\chi_{\rm r}^{\rm M})_i = \sqrt{\frac{\chi^2}{N_\psi}} = \sqrt{\frac{1}{N_\psi} \ \sum^{(\psi_{\rm f})_i + \Delta\psi/2}_{\psi_{\rm b} = (\psi_{\rm f})_i - \Delta\psi/2}\frac{\left( f^{\rm I}_i\,(\psi_{\rm b}) - f^{\rm M}_i\,(\psi_{\rm b}) \right)^2}{\sigma_f^2}}\,,
\end{equation}
\noindent where
\begin{equation}
\label{eq:MAD}
\sigma_f = \mathrm{median}\left(\left|f^{\rm M}_i\,(\psi_{\rm b}) - \mathrm{median}\left(f^{\rm M}_i\,(\psi_{\rm b})\right)\right|\right)\,,
\end{equation}
\noindent $N_\psi$ is the number of $\psi_{\rm b}$ values within the comparison window $\Delta\psi$, and $\sigma_f$ is the intrinsic error in $f^{\rm M}_i\,(\psi_{\rm b})$, defined as the median absolute deviation computed over all the angles $\psi_{\rm b}$ of all the pixels $i$ of the initial map A. $(\chi_{\rm r}^{\rm M})_i$, which is akin to the square root of a reduced chi-squared, quantifies how close a potential filament is to a rectangular bar on an empty background.
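Eqs.~(\ref{eq:delta}) and (\ref{eq:MAD}) translate directly into code (a sketch; `window` holds the indices of the $N_\psi$ angles inside the comparison window):

```python
import numpy as np

def mad(x):
    """Median absolute deviation, used as the intrinsic error sigma_f."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x)))

def chi_r(f_meas, f_ideal, window, sigma_f):
    """Square root of the reduced chi-squared between the measured and
    ideal orientation functions, evaluated over the comparison window."""
    d = f_ideal[window] - f_meas[window]
    return np.sqrt(np.mean((d / sigma_f) ** 2))
```

A small $\chi_{\rm r}$ means the potential filament closely resembles a rectangular bar on an empty background.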
We assess the significance of the potential filament by comparing $(\chi_{\rm r}^{\rm M})_i$ to a threshold, $(\chi_{\rm r})_{\rm th}$, derived through Monte-Carlo simulations. Since these simulations need to be run only once (for any given $W_{\rm b}$) for the entire map A, not for each pixel $i$ separately, we describe them in a separate subsection (Sect.~\ref{sec:montecarlo}).
We consider that a potential filament detected at pixel $i$ is significant if
\begin{equation}
\label{eq:chi_r}
(\chi_{\rm r}^{\rm M})_i < (\chi_{\rm r})_{\rm th}\,,
\end{equation}
\noindent or, equivalently,
\begin{equation}
\label{eq:criterion_significance}
S_i>1 \, ,
\end{equation}
\noindent where
\begin{equation}
\label{eq:significance}
S_i = \frac{(\chi_{\rm r})_{\rm th}}{(\chi_{\rm r}^{\rm M})_i}\,
\end{equation}
\noindent is defined as the significance of the detection.
The definition of the threshold $(\chi_{\rm r})_{\rm th}$ in Sect.~\ref{sec:montecarlo}
implies that there is a 5$\,\%$ chance of mistakenly rejecting an ideal filament that was actually significant.
\subsubsection{Description of the Monte-Carlo simulations}
\label{sec:montecarlo}
The purpose of the Monte-Carlo simulations (illustrated in Fig.~\ref{fig:binary_image_significance}) is to derive the threshold, $(\chi_{\rm r})_{\rm th}$, below which a potential filament can be considered significant against the background (see Eq.~\ref{eq:chi_r}). This threshold depends only on the bar width, $W_{\rm b}$, and it applies to the entire map A. At each iteration (top row), a pixel $j$ is drawn at random in map A, and a sub-region A$_j$ of size $L_{\rm b}$, centered on pixel $j$, is cut out (leftmost panel of Fig.~\ref{fig:binary_image_significance}). Pixel $j$ must lie far enough from the border of map A to ensure that A$_j$ is entirely contained within A and that convolution with a 2D top-hat function of radius $R$ will be possible (as explained in Sect.~\ref{sec:binary_map}). A synthetic map A$^{\rm S}_j$ is then created (middle left panel) by superposing on the sub-region A$_j$ an ideal filament with uniform intensity $\I_0$, centered on pixel $j$ and oriented at random (over a uniform distribution in angle). For convenience, $\I_0$ is written in terms of the standard deviation of sub-region A$_j$, $\sigma_{{\rm A}_j}$, and the signal-to-noise ratio of the filament, $\SNRfil$: $\I_0 = \sigma_{{\rm A}_j} \ \SNRfil$. $\SNRfil$ is a free parameter, whose value is discussed in Sect.~\ref{sec:instructions_for_use}.
Applying {\tt FilDReaMS} to the synthetic map A$^{\rm S}_j$ (surrounded by a band of width $R$ from map A to allow convolution with a 2D top-hat function of radius $R$
\footnote{For each value of the bar width, $W_{\rm b}$, the value of $R$ is calculated once and for all for the entire map A (see Sect.~\ref{sec:binary_map}); it is not recalculated at each Monte-Carlo iteration.}) leads to a synthetic binary map C$^{\rm S}_j$ (middle right panel of Fig.~\ref{fig:binary_image_significance}) followed by a synthetic orientation function, $f^{\rm S}_j$ (blue curve in the right panel of Fig.~\ref{fig:binary_image_significance}). Adding a contrasted synthetic filament in A$^{\rm S}_j$ implies that all the pixels close to the filament are now part of a less contrasted background (with respect to the filament), and together form a thin dark region surrounding the filament in C$^{\rm S}_j$ (see Sect.~\ref{sec:binary_map}). The synthetic orientation function $f^{\rm S}_j$ can be compared to the corresponding ideal orientation function, $f^{\rm I}_j$, i.e., the orientation function obtained for the same ideal filament superposed on an empty background (orange curve).
The associated $(\chi_{\rm r}^{\rm S})_j$ is then derived from Eq.~\ref{eq:delta} with $f^{\rm M}$ replaced by $f^{\rm S}$. Clearly, $(\chi_{\rm r}^{\rm S})_j$ quantifies the impact of the background on the ideal filament by comparing the orientation functions $f^{\rm I}$ and $f^{\rm S}$ obtained from maps without (map A$_i$ in Fig.~\ref{fig:Ideal_hist_window}, for example) and with (map A$^{\rm S}_j$ in Fig.~\ref{fig:binary_image_significance}) background, respectively.
This Monte-Carlo iteration is repeated ten thousand times (each time with a new random region A$_j$ and a new random orientation of the ideal filament). The resulting Monte-Carlo distribution $D$ of $\chi_{\rm r}^{\rm S}$ is shown in the bottom panel of Fig.~\ref{fig:binary_image_significance}. The threshold $(\chi_{\rm r})_{\rm th}$ is chosen to be the 95th percentile of the distribution of $\chi_{\rm r}^{\rm S}$.
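Given the Monte-Carlo sample of $\chi_{\rm r}^{\rm S}$ values, the threshold and the significance of Eqs.~(\ref{eq:chi_r})--(\ref{eq:significance}) reduce to a percentile and a ratio (sketch):

```python
import numpy as np

def significance(chi_meas, chi_samples, percentile=95.0):
    """S_i = (chi_r)_th / (chi_r^M)_i, with the threshold (chi_r)_th taken
    as the 95th percentile of the Monte-Carlo distribution of chi_r^S;
    a detection is significant when S_i > 1."""
    chi_th = np.percentile(chi_samples, percentile)
    return chi_th / chi_meas
```

The 95th percentile encodes the 5\,\% chance of mistakenly rejecting an ideal filament.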
\subsection{Reconstruction of physical filaments}
\label{sec:filament_visualisation}
So far, we have used a model bar with given width $W_{\rm b}$, and we have applied {\tt FilDReaMS} to a given pixel $i$ of the initial map A.
More specifically, we have detected potential bar-like filaments centered at pixel $i$ (Sect.~\ref{sec:orientation_angle}) and we have retained the significant filaments, i.e., filaments with significance $S_i > 1$ (Sect.~\ref{sec:significance}).
To detect all the bar-like filaments of bar width $W_{\rm b}$ in map A, we repeat the procedure for all the pixels
$i$ of map C that are more distant than $L_{\rm b}/2$ from the border (see beginning of Sect.~\ref{sec:histogram_of_orientation}). The significance at all the pixels of C (for $W_{\rm b}=14\,{\rm px}$) is shown in the top panel of Fig.~\ref{fig:significance_maps}, while the significance at the centers of significant filaments ($S>1$) is shown in the bottom panel. We can see that most pixels of map C are not significant ($S<1$). The most significant filaments have $S\simeq6$.
By combining all the significant bar-like filaments, we can then reconstruct the true shape and the intensity of physical filaments, as illustrated in Fig.~\ref{fig:filament_reconstruction}.
We first produce a binary map C' in which the model bars associated with all the significant filaments have their pixels set to 1, while all the other pixels are set to 0. We then create a filament mask by multiplying the binary map C introduced in Sect.~\ref{sec:binary_map} by the new binary map C', and we apply this mask to the initial map A, by computing a simple product of both images.
The resulting map R (rightmost panel of Fig.~\ref{fig:filament_reconstruction}) reveals the network of physical filaments of bar width $W_{\rm b}$.
At this point, a filament orientation angle (denoted by $(\psi_{\rm f})_i$ at pixel $i$) is defined only at pixels at which one or more significant bar-like filaments are found (bottom-left panel of Fig.~\ref{fig:filament_reconstruction}).
We now assign a filament orientation angle (denoted by $(\psi_{\rm f}^\star)_{i'}$ at pixel $i'$) to all non-zero pixels of map R
(rightmost panel of Fig.~\ref{fig:filament_reconstruction}).
For each pixel $i'$, we consider all the significant filaments whose associated model bars pass through $i'$, and we define $(\psi_{\rm f}^\star)_{i'}$ as the orientation angle of the most significant filament, i.e., the filament with the highest significance, $S_i$ (Eq.~\ref{eq:significance}).
It is important to realize that for pixels $i'$ at which $(\psi_{\rm f})_{i'}$ is defined, $(\psi_{\rm f}^\star)_{i'}$ generally differs slightly from $(\psi_{\rm f})_{i'}$, because the most significant bar-like filament passing through $i'$ is generally not the filament centered on $i'$, but a filament centered on a neighboring pixel $i$. Also keep in mind that $(\psi_{\rm f}^\star)_{i'}$ is a function of the bar width, $W_{\rm b}$.
\subsection{Derivation of most significant bar widths and most prevalent bar widths}
\label{sec:signif_bar_width}
Now that we have laid out the entire procedure for a given bar width, $W_{\rm b}$ (Sects~\ref{sec:binary_map} to \ref{sec:filament_visualisation}), we iterate over a range of values of $W_{\rm b}$
(see Sect.~\ref{sec:instructions_for_use} for a recommendation on the optimal range). This enables us to assign a "most significant bar width", $(W_{\rm b}^\star)_{i'}$, to every non-zero pixel $i'$ of the different reconstructed maps R$(W_{\rm b})$.
Namely, for every pixel $i'$, we consider all the significant filaments whose associated model bars pass through $i'$, and we define $(W_{\rm b}^\star)_{i'}$ as the bar width of the most significant filament.
We can then construct a histogram of $W_{\rm b}^\star$ over all pixels, namely,
the number of pixels, $N_{\rm pix}$, whose most significant bar width is equal to $W_{\rm b}^\star$.
In practice, to make it easier to compare different maps, we propose to work with a normalized histogram, where $N_{\rm pix}$ is divided by the total number of pixels in the map, $N_{\rm map}$.
If the histogram exhibits clear peaks (i.e., local maxima) the corresponding bar widths are denoted by $W_{\rm b}^{\star{\rm peak}}$ and referred to as "most prevalent bar widths".
Examples of histograms $(N_{\rm pix}/N_{\rm map})$ versus $W_{\rm b}^\star$ are shown in Figs.~\ref{fig:power_spectrum} and \ref{fig:plummer_power_spectrum}.
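The normalized histogram and its peaks can be sketched as follows (assuming a map of $W_{\rm b}^\star$ values, with NaN where no significant filament passes; for simplicity, the endpoint bins are not tested for peaks):

```python
import numpy as np

def prevalent_widths(w_star_map, widths):
    """Normalized histogram N_pix/N_map of the most significant bar widths,
    and its local maxima (the 'most prevalent bar widths')."""
    counts = np.array([(w_star_map == w).sum() for w in widths]) / w_star_map.size
    peaks = [widths[k] for k in range(1, len(widths) - 1)
             if counts[k] > 0
             and counts[k] >= counts[k - 1] and counts[k] >= counts[k + 1]]
    return counts, peaks
```

Pixels with NaN (no significant filament) fail every equality test and simply do not contribute to the counts.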
\begin{table*}
\caption{Free parameters of {\tt FilDReaMS}, with their definitions (second column) and the corresponding characteristics of the filaments to be detected by {\tt FilDReaMS} (third column).}
\centering
\begin{threeparttable}
\newcolumntype{M}[1]{%
>{\vbox to 2ex\bgroup\vfill}%
p{#1}%
<{\egroup}}
\begin{tabular}{m{2.0cm} | M{6.5cm} | M{6.5cm}}
\toprule\toprule
$W_{\rm b}$ & Width of the {\tt FilDReaMS} model bar & Width of the detected filaments\\
\cmidrule(l r ){1-3}
$r_{\rm b}=L_{\rm b}$/$W_{\rm b}$ & Aspect ratio of the {\tt FilDReaMS} model bar & Minimum elongation of the detected filaments\\
\cmidrule(l r ){1-3}
$\SNRfil$ & Signal-to-noise ratio of the ideal filament in the Monte-Carlo simulations & Minimum contrast of the detected filaments\\
\bottomrule\bottomrule
\end{tabular}
\end{threeparttable}
\label{tab:FilDReaMS_parameters}
\end{table*}
\subsection{Instructions for use}
\label{sec:instructions_for_use}
We now discuss the optimal values, or ranges of values, of the three important free parameters of {\tt FilDReaMS} (see Table~\ref{tab:FilDReaMS_parameters}): the width of the model bar, $W_{\rm b}$, the aspect ratio of the model bar, $r_{\rm b} = L_{\rm b} / W_{\rm b}$, and the signal-to-noise ratio of the ideal filament in the Monte-Carlo simulations, $\SNRfil$.
The aspect ratio of the model bar, $r_{\rm b}$, sets an approximate lower limit to the aspect ratio of elongated structures that can be detected with {\tt FilDReaMS}. Here, we consider that an elongated structure can be qualified as a filament if its aspect ratio is at least 3. Accordingly, we recommend adopting $r_{\rm b}=3$. This value was also used by \citet{Panopoulou_2014,Arzoumanian_2019}.
The signal-to-noise ratio of the ideal filament in the Monte-Carlo simulations, $\SNRfil$, sets an approximate lower limit to the intensity $\I$ (in a broad sense) of filaments that have a high probability of being detected with {\tt FilDReaMS}. If $\SNRfil$ is too low, some of the small-scale noise structures might be mistaken for physical filaments. On the other hand, if $\SNRfil$ is too high, some physical filaments might escape detection. As a compromise, we recommend adopting $\SNRfil=3$.
The width of the model bar, $W_{\rm b}$, is only constrained by the pixel size at the low end and by the size of the initial map A at the high end.
For the lower limit, a good choice is typically $(W_{\rm b})_{\rm min} = 5\,{\rm px}$; this choice is particularly relevant in the case of the {\it Herschel} fields, where the beam size equals three times the pixel size.
For the upper limit, we take $(W_{\rm b})_{\rm max} = (L_{\rm b})_{\rm max} / r_{\rm b}$, with $(L_{\rm b})_{\rm max}$ equal to one-third the size of map A;
in the {\it Herschel} G210 field, $(W_{\rm b})_{\rm max} = 27\,{\rm px}$.
\section{Validation}
\label{sec:Validation}
{\tt FilDReaMS} is in principle able to detect filaments with different sizes and orientations in a given map of intensity $\I$. Before applying {\tt FilDReaMS} to real scientific data, we need to validate the method and estimate its reliability.
In the following, we explore the impact of the filament profile, the noise level, and the aspect ratio on the bar width and orientation estimates. We finally investigate the case of superposed filaments with variable intensities, in combination with the above parameters.
We first describe in Sect.~\ref{sec:simulations} the set of simulations used to perform these analyses. We then test the ability of {\tt FilDReaMS} to recover the widths of the input filaments in Sect.~\ref{sec:fil_scales}, and their orientations in Sect.~\ref{sec:fil_orientations}.
\subsection{Set of Simulations}
\label{sec:simulations}
We create several series of simulated maps composed of synthetic filaments embedded in realistic environments, including noise and incoherent structures. To study the impact of the filament radial profile, we consider in these simulations two types of synthetic filaments, which, by default (i.e., unless explicitly stated otherwise), have the following characteristics:
\begin{itemize}
\item Ideal filaments: They have the shape of the rectangular model bar, with width $W_{\rm b}$, length $L_{\rm b}$, and aspect ratio $r_{\rm b} = L_{\rm b} / W_{\rm b}$, and they have uniform intensity, $\I_0$. \\
\item Plummer-type filaments: Their intensity can be described by a 2D Plummer-type profile with half-width $R_{\rm flat}$ and aspect ratio $r_{\rm b}$:
\begin{equation}
\label{eq:plummer}
\I_{\rm P}(x, y) = \I_0\left[ 1 + \left( \frac{x}{R_{\rm flat}} \right)^2 + \left( \frac{y}{r_{\rm b} \, R_{\rm flat}} \right)^2 \right]^{-(p-1)/2}\,,
\end{equation}
\noindent where $x$ and $y$ are the coordinates across and along the long axis of the filament, $p$ is the Plummer power-law index, and $\I_0$ is the central intensity. We adopt $p=2.2$ \citep[median value obtained by][for a sample of 599 filaments including G300]{Arzoumanian_2019}.
\end{itemize}
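For reference, a map of a single Plummer-type filament following Eq.~(\ref{eq:plummer}) can be generated as follows (sketch; the filament is centered in the map, and `psi_deg` sets its orientation):

```python
import numpy as np

def plummer_filament(shape, I0, R_flat, r_b, p=2.2, psi_deg=0.0):
    """Intensity map of a Plummer-type filament with central intensity I0,
    half-width R_flat, aspect ratio r_b, and power-law index p."""
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    x -= (nx - 1) / 2.0
    y -= (ny - 1) / 2.0
    psi = np.deg2rad(psi_deg)
    u = x * np.cos(psi) + y * np.sin(psi)    # across the filament
    v = -x * np.sin(psi) + y * np.cos(psi)   # along the long axis
    return I0 * (1.0 + (u / R_flat) ** 2
                 + (v / (r_b * R_flat)) ** 2) ** (-(p - 1) / 2)
```

At the crest ($u=v=0$) the profile reaches $\I_0$, and at $u = R_{\rm flat}$ it drops to $\I_0\,2^{-(p-1)/2}$.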
The realistic environment is simulated as a combination of white and power-law (Brownian) noises, with r.m.s. intensity $\sigma_{\mathcal{W}}$ and $\sigma_{\mathcal{B}}$, respectively. The white noise is meant to represent instrumental noise, which was estimated to be $\leq 7\%$ of the intensity signal for {\it Herschel} SPIRE data \citep[][]{Juvela_GCCIII_2012}, while the Brownian noise (with a power-law index of $-2$) is meant to represent astrophysical fluctuations \citep[e.g.][]{Miville-Deschenes_2003_brownian}. Both types of noise have different realisations in each map. In the following, we consider two different configurations, denoted the 'default' and 'high' noise levels. In the first case, we choose $\sigma_{\mathcal{W}} = 0.05\,\I_0$ and $\sigma_{\mathcal{B}} = 0.3\,\I_0$, while in the second case we choose $\sigma_{\mathcal{W}} = 0.1\,\I_0$ and $\sigma_{\mathcal{B}} = \I_0$.
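The Brownian component can be sketched by shaping white noise in Fourier space so that the power spectrum falls as $k^{-2}$ (a simplified version with random phases, normalized to zero mean and unit r.m.s.):

```python
import numpy as np

def brownian_noise(shape, rng):
    """Random field with a power-law power spectrum P(k) ~ k^-2."""
    ny, nx = shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                       # no power in the mean
    amplitude = k ** -1.0                  # sqrt(P), with P ~ k^-2
    phases = np.exp(2j * np.pi * rng.random(shape))
    field = np.fft.ifft2(amplitude * phases).real
    return (field - field.mean()) / field.std()
```

The map is then scaled by the desired r.m.s. intensity, e.g. $0.3\,\I_0$ for the default noise level, and co-added with the white-noise component.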
We perform four different sets of simulations: \begin{itemize}[noitemsep,topsep=0pt]
\item \textbf{SimSet 1}: $r_{\rm b}$=3 and default noise level.
\item \textbf{SimSet 2}: $r_{\rm b}$=3 and high noise level.
\item \textbf{SimSet 3}: $r_{\rm b}$=10 and default noise level.
\item \textbf{SimSet 4}: Overlapping filaments with variable aspect ratios and intensities, with default noise level.
\end{itemize}
Each of the first three sets of simulations consists of the co-addition of a map of synthetic filaments with realisations of realistic noise. Each original map is made of 91 non-overlapping synthetic filaments of common width, $W_{\rm b}$, and aspect ratio, $r_{\rm b}$, but different orientations spanning 0$^{\circ}$ to +90$^{\circ}$ in 1$^{\circ}$ steps. A series of 16 maps is thus created, one for each value of the width $W_{\rm b}$ ranging from 5 to 20\,px (in steps of 1\,px), on each of which 2000 realisations of realistic noise are co-added.
The last set of simulations, SimSet~4, consists of a single series of 100 maps containing 64 synthetic filaments: 48 filaments with $W_{\rm b}=9\,{\rm px}$ and 16 filaments with $W_{\rm b}=15\,{\rm px}$. Both groups of filaments cover approximately the same total surface area. Each filament is randomly assigned one out of 8 possible values of the filament aspect ratio, $r_{\rm b}$, in the linear range $[3,10]$ and (independently) one out of 8 possible values of the filament intensity, $\I_0$, in the logarithmic range $[0.2,5]$. Each value of $r_{\rm b}$ and each value of $\I_0$ are assigned to exactly eight filaments. The filaments are located at random positions, they have random orientations, and they can possibly overlap. For each map realisation, we co-add a noise realisation using the default configuration.
All sets of simulations are finally repeated for ideal and Plummer-type filaments, and their simulation parameters are listed in Table~\ref{tab:simulation_sets}. In the case of SimSet~1, we also produce two other sets of simulations using Plummer-type filaments with power-law indices $p=1.2$ and $4$. The value $p = 4$ corresponds to an isolated filament in isothermal, non-magnetic hydrostatic balance \citep{Ostriker_1964}; this value was also used by \cite{Suri_2019} to fit filament profiles. The value $p = 1.2$ is the lower limit explored by \cite{Arzoumanian_2019} in their study. The larger and smaller values yield more and less contrasted filaments, respectively.
\begin{table*}
\centering
\caption{Description of the parameters of the various sets of simulations performed to test the robustness of the width and orientation estimates recovered by {\tt FilDReaMS}.\\}
\begin{threeparttable}
\begin{tabular}{c | cccccc }
\toprule\toprule
\multirow{3}{*}{\begin{tabular}{c} \textbf{\large SimSet} \\ \textbf{ID} \\ \end{tabular}}
& \multicolumn{6}{c}{Simulation parameters}\\
\cline{2-7}
& width & orientation & aspect ratio & central intensity & overlap & noise level \\
& $[\rm{px}]$ & & $r_{\rm{b}}$ & $\I_0$ & & $(\sigma_{\mathcal{W}}, \sigma_{\mathcal{B}}) \% \times \I_0$ \\
\hline
\textbf{SimSet 1} & $[5,20]$ & $[0^{\circ},90^{\circ}]$\tnote{a} & 3 & 1 & no & $(5,30)$ \\
\textbf{SimSet 2} & $[5,20]$ & $[0^{\circ},90^{\circ}]$\tnote{a} & 3 & 1 & no & $(10,100)$ \\
\textbf{SimSet 3} & $[5,20]$ & $[0^{\circ},90^{\circ}]$\tnote{a} & 10 & 1 & no & $(5,30)$ \\
\textbf{SimSet 4} & $9; 15$ & $[0^{\circ},90^{\circ}]$\tnote{b} & $[3,10]$\tnote{c} & $[0.2,5]$\tnote{d} & yes & $(5,30)$
\\
\bottomrule\bottomrule
\end{tabular}
\begin{tablenotes}
\item {\bf Notes.} The 'width' (in px), 'orientation', 'aspect ratio' ($r_{\rm{b}}$), and 'central intensity' ($\I_0$) columns give the ranges of those parameters used to build the simulated bars, which may or may not 'overlap'; the 'noise level' is described by two components, the levels of white noise ($\sigma_{\mathcal{W}}$) and of Brownian noise ($\sigma_{\mathcal{B}}$), expressed in percent of $\I_0$.
\item[a] All values are considered within the interval with a step of $1^{\circ}$.
\item[b] Randomly picked in the considered range.
\item[c] Randomly assigned to one out of 8 possible values in the interval, in linear steps of 1.
\item[d] Randomly assigned to one out 8 possible values in the interval, distributed in logarithmic scale.
\end{tablenotes}
\end{threeparttable}
\label{tab:simulation_sets}
\end{table*}
\subsection{Filament bar widths}
\label{sec:fil_scales}
In the following, we construct the histogram $(N_{\rm pix}/N_{\rm map})$ versus $W_{\rm b}^\star$ of every map (as explained in Sect.~\ref{sec:signif_bar_width}), and we average over the 2000 noise realisations for the first three sets of simulations (SimSet~1, 2, and 3) and over the 100 noise realisations for the fourth set (SimSet~4). From this analysis, we can identify a most prevalent bar width, $W_{\rm b}^{\star{\rm peak}}$, averaged over the multiple noise realisations, together with its uncertainty (or precision), taken as the standard deviation of the histogram. The most prevalent bar width is then compared with the input width of the simulation.
\subsubsection{Impact of the noise level}
\label{sec:fil_scales_noise}
For this study we focus on the first two sets of simulations, SimSet~1 \& 2.
In the case of ideal filaments, we find for each input value of $W_{\rm b}$ that the average histogram is dominated by a well-defined peak at $W_{\rm b}^{\star{\rm peak}} = W_{\rm b}$, which means that {\tt FilDReaMS} is very accurate, easily recovering the correct width of the input filament even in the presence of high noise. However, the peak does not stand out as clearly at the high noise level (SimSet~2) as at the default noise level (SimSet~1). The precision of the width estimate is very stable, degrading only from $0.01\,{\rm px}$ to $0.4\,{\rm px}$ with increasing noise level (see Table~\ref{tab:uncertainties}).
This is illustrated for $W_{\rm b}= 12\,{\rm px}$ in Fig.~\ref{fig:test_histo}, where the histogram at the high noise level (middle panel, blue) can be compared to the histogram at the default noise level (top panel, blue).
We show the case of the Plummer-type filaments for the same sets of simulations in the top and middle panels of Fig.~\ref{fig:test_histo} (orange histograms), with $R_{\rm flat}= 12\,{\rm px}$. At the default noise level (top panel), the histogram of $W_{\rm b}^{\star}$ is clearly shifted from the input value and more spread out than in the case of the ideal filament. This shift is expected, and it depends on the power-law index of the Plummer profile, as illustrated in Sect.~\ref{sec:correspondance}. The larger spread can be explained by the larger impact of the noise on the tails of the Plummer profile.
In addition, the central intensity $\I_0$ is reached only along the crest of the Plummer-type filament. The integral of the transverse profile of a Plummer-type filament over a bar width $W_{\rm b} = 2R_{\rm flat}$ represents $75\,\%$ of the corresponding integral for the ideal filament (when computed with the default Plummer power-law index of 2.2). Therefore, the degradation due to noise is slightly stronger away from the crest.
Despite the shift observed in the case of the Plummer-type filament with default noise, we still obtain a precision of $0.4\,{\rm px}$. However, the histograms $(N_{\rm pix}/N_{\rm map})$ versus $W_{\rm b}^\star$ of Plummer-type filaments may exhibit one or two spurious (lower) peaks.
The impact of increasing the noise level is much more pronounced than in the ideal-filament case (see middle panel), leading to much larger uncertainties, $2.7\,{\rm px}$, stable over the whole range of input widths (see Table~\ref{tab:uncertainties}). Increasing the noise level increases the frequency and the strength of the spurious peaks. The observed shift between the widths $2R_{\rm flat}$ and $W_{\rm b}^{\star{\rm peak}}$ can be empirically predicted, as we discuss in Sect.~\ref{sec:correspondance}.
\subsubsection{Impact of the filament aspect ratio}
\label{sec:fil_scales_aspectratio}
For this analysis we use two sets of simulations, SimSet~1 \& 3, between which the aspect ratio changes from 3 to 10. The peak becomes increasingly dominant as $r_{\rm b}$ increases, leading to a gain in precision by a factor of 2 to 5, for the Plummer and ideal cases respectively (see Table~\ref{tab:uncertainties}). This is in accordance with our expectation that more elongated filaments are easier to recognize. In the case of Plummer-type filaments, the shift between $2R_{\rm flat}$ and $W_{\rm b}^{\star{\rm peak}}$ appears unchanged. This is illustrated in Fig.~\ref{fig:test_histo}, where the histograms obtained for $r_{\rm b}= 10$ (bottom panel) can be compared to those obtained for $r_{\rm b}= 3$ (top panel).
\subsubsection{Plummer-to-ideal width relation}
\label{sec:correspondance}
The sets of simulations with Plummer-type filaments allow us to derive the empirical relation between $2R_{\rm flat}$ and $W_{\rm b}^{\star{\rm peak}}$ for different noise levels and also for different values of the Plummer index, $p$.
The curves linking the input $2R_{\rm flat}$ to the output $W_{\rm b}^{\star{\rm peak}}$ are plotted in the top panel of Fig.~\ref{fig:scale_plum_rect}, for the default and high noise levels, SimSet~1 (blue triangles) and SimSet~2 (dark blue triangles) respectively. Also plotted are the line $W_{\rm b}^{\star{\rm peak}} = 2R_{\rm flat}$ (dashed line), the linear fits to the two curves $W_{\rm b}^{\star{\rm peak}}$ versus $2R_{\rm flat}$ (solid lines in the corresponding colors), as well as the standard deviations computed on each average histogram $(N_{\rm pix}/N_{\rm map})$ versus $W_{\rm b}^\star$ (error bars on each symbol in the corresponding colors).
For future reference, the parameters $a$ and $b$ of the linear fits $W_{\rm b}^{\rm \star peak} = a \, (2R_{\rm flat}) + b$ are given in Table~\ref{tab:linear_fits}.
The error in the fit for SimSet~1 can be roughly estimated at $\lesssim 1\,{\rm px}$ over the range $R_{\rm flat} = [5, 20]\,{\rm px}$, from a comparison with a linear fit constrained to pass through the origin.
This error adds to the statistical uncertainty of $\simeq 0.5\,{\rm px}$ associated with the dispersion of the measured points about the fits, and to the standard deviations (see Table~\ref{tab:uncertainties}).
In the case of high noise level (SimSet~2), $a$ slightly decreases with increasing noise, whereas $b$ takes on larger values.
This, combined with the high statistical uncertainty and standard deviations (error bars), indicates that {\tt FilDReaMS} is not well suited to recover the width of input Plummer-type filaments in the case of high noise level.
\begin{table*}
\centering
\caption{Parameters $a$ and $b$ of the linear fits $W_{\rm b}^{\rm \star peak} = a \, (2R_{\rm flat}) + b$ to the curves plotted in Fig.~\ref{fig:scale_plum_rect}, for two noise levels (SimSet~1 and SimSet~2) and three Plummer power-law indices, $p=1.2, 2.2,$ and $4$.}
\begin{tabular}{c | cccc }
\toprule\toprule
\multirow{2}{*}{\begin{tabular}{c} \textbf{\large Linear fit} \\ \textbf{\large parameters} \\\end{tabular}}
& \multicolumn{4}{c}{SimSet ID \& Plummer power-law index $p$}\\
\cline{2-5}
& \textbf{SimSet 1 ($p=2.2$)} & \textbf{SimSet 2 ($p=2.2$)} & \textbf{SimSet 1 ($p=4$)} & \textbf{SimSet 1 ($p=1.2$)} \\
\hline
Slope $a$ & 0.70 & 0.64 & 0.56 & 0.83 \\
Origin $b$ & -1.27 & -1.45 & -0.18 & -1.09 \\
\bottomrule\bottomrule
\end{tabular}
\label{tab:linear_fits}
\end{table*}
For completeness, we also look at the impact of the Plummer index on the curves $W_{\rm b}^{\star{\rm peak}}$ versus $2R_{\rm flat}$.
Plotted in the bottom panel of Fig.~\ref{fig:scale_plum_rect} are the curves obtained for $p = 1.2$ (orange squares), $p = 2.2$ (blue triangles), and $p = 4$ (green circles) with their respective standard deviations (error bars in the corresponding colors).
The linear fits to the three curves $W_{\rm b}^{\star{\rm peak}}$ versus $2R_{\rm flat}$ are again shown in solid lines, and their parameters given in Table~\ref{tab:linear_fits}.
The parameter $a$ increases with decreasing $p$. The increase in slope can be understood by noting that, for a given $R_{\rm flat}$, a smaller $p$ yields a more spread-out Plummer-type filament (see Eq.~\ref{eq:plummer}), which in turn results in a detection at larger $W_{\rm b}^\star$.
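This trend can be checked directly on the profile itself; a minimal sketch, assuming the usual Plummer-type form $\I(r) = \I_0\,[1+(r/R_{\rm flat})^2]^{-(p-1)/2}$ for Eq.~\ref{eq:plummer}:

```python
# Plummer-type filament profile; this functional form is our assumption for
# Eq. (plummer), with the same parameters I0, R_flat, and index p as in the text.
def plummer_profile(r, I0=1.0, R_flat=5.0, p=2.2):
    return I0 / (1.0 + (r / R_flat) ** 2) ** ((p - 1.0) / 2.0)

# At fixed R_flat the wings fall off more slowly for smaller p: at r = 4 R_flat
# the p = 1.2 profile still retains ~75% of I0, the p = 4 profile only ~1%.
wings = [plummer_profile(20.0, p=p) for p in (1.2, 2.2, 4.0)]
```

The shallower wings at small $p$ are what push the detection towards larger $W_{\rm b}^\star$.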
As for $p=2.2$, the error in the fits for the other two values of $p$ can be roughly estimated at $\lesssim 1\,{\rm px}$ over the range $R_{\rm flat} = [5, 20]\,{\rm px}$.
Finally, the curves $W_{\rm b}^{\star{\rm peak}}$ versus $2R_{\rm flat}$ obtained for increasing values of $r_{\rm b}$ are identical, with the standard deviations (error bars in Fig.~\ref{fig:scale_plum_rect}) decreasing down to half those obtained for $r_{\rm b}= 3$. From this we conclude that the empirical relation between $2R_{\rm flat}$ and $W_{\rm b}^{\star{\rm peak}}$ derived in SimSet~3 remains valid independently of $r_{\rm b}$.
\subsubsection{Overlapping filaments of different properties}
\label{sec:fil_scales_overlap}
To study the reliability of the width estimates in a more realistic case, we now focus on the fourth set of simulations, SimSet~4, for ideal and Plummer-type filaments.
In the case of ideal filaments, {\tt FilDReaMS} is able to detect and reconstruct all of them (top right panel of Fig.~\ref{fig:power_spectrum}).
The histogram $(N_{\rm pix}/N_{\rm map})$ versus $W_{\rm b}^\star$, averaged over the 100 maps, has two pronounced peaks at $W_{\rm b}^{\star{\rm peak}}=9\,{\rm px}$ and $W_{\rm b}^{\star{\rm peak}}=15\,{\rm px}$ (bottom panel of Fig.~\ref{fig:power_spectrum}).
Hence, {\tt FilDReaMS} recovers again the widths of the input ideal filaments.
The histogram also contains non-negligible values away from the peaks because the superposition of two input filaments can either produce a structure that is identified as a thicker filament or, conversely, cause the broader filament to effectively hide part of the narrower one.
In the case of Plummer-type filaments, {\tt FilDReaMS} is also able to detect and reconstruct all of them (top right panel of Fig.~\ref{fig:plummer_power_spectrum}).
The histogram $(N_{\rm pix}/N_{\rm map})$ versus $W_{\rm b}^\star$, averaged over the 100 test maps, has three peaks, at $W_{\rm b}^{\star{\rm peak}}=9$, $11$, and $19\,{\rm px}$, which show a larger spread than in the ideal case (bottom panel of Fig.~\ref{fig:plummer_power_spectrum}). The two peaks at $W_{\rm b}^{\star{\rm peak}}=9$ and $11\,{\rm px}$ are associated with the same input filament width ($R_{\rm flat}=9\,{\rm px}$). This can be explained by the log-scale distribution of the central intensity of the filaments, $I_0$, which leads to two different noise regimes, with $1/3$ of the filaments at the high noise level and $2/3$ at the default noise level. Following the top panel of Fig.~\ref{fig:scale_plum_rect}, a $2R_{\rm flat}$ value of $18\,{\rm px}$ roughly corresponds to bar widths $W_{\rm b}=9$ and $11\,{\rm px}$ at the high and default noise levels, respectively, which is consistent with the two observed peaks. The same kind of behaviour is not observed in this simulation for the population of larger filaments, as they are more affected by overlapping filaments, leading to a spread-out distribution that masks this effect.
The corresponding values of $2R_{\rm flat}$ can be derived from the linear fits $W_{\rm b}^{\rm \star peak} = a \, (2R_{\rm flat}) + b$, with the values of $a$ and $b$ given in Table~\ref{tab:linear_fits}. We use the case $p=2.2$ with the high noise level for the first peak at $9\,{\rm px}$ (as explained above), and the case $p=2.2$ with the default noise level for the others.
This leads to $2R_{\rm flat}=16.3\,{\rm px}$, $17.3\,{\rm px}$, and $28.5\,{\rm px}$, in good agreement with the input values $2R_{\rm flat}=18\,{\rm px}$ and $30\,{\rm px}$ (i.e., $R_{\rm flat}=9\,{\rm px}$ and $15\,{\rm px}$). This agreement supports the general validity of the empirical relation between $2R_{\rm flat}$ and $W_{\rm b}^{\star{\rm peak}}$ obtained in Sect.~\ref{sec:correspondance}. As in the case of ideal filaments, the fact that the histogram contains more peaks than expected (beyond the split of the $R_{\rm flat}=9\,{\rm px}$ peak explained above) is not an indication that {\tt FilDReaMS} performs poorly, but rather a consequence of the subjective way of identifying filaments.
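The conversion used here can be written out explicitly; a minimal sketch, assuming the rounded $p=2.2$ coefficients of Table~\ref{tab:linear_fits} (which reproduce the quoted $2R_{\rm flat}$ values to within $\sim 0.5\,{\rm px}$):

```python
# Invert the empirical fit W_peak = a * (2 R_flat) + b to recover the Plummer
# width from a detected histogram peak. The (a, b) pairs below are the rounded
# p = 2.2 values from the table of linear fits, treated as illustrative inputs.
FITS_P22 = {"default": (0.70, -1.27), "high": (0.64, -1.45)}

def plummer_width(w_peak_px, noise="default"):
    """Return 2*R_flat (px) corresponding to a peak bar width (px)."""
    a, b = FITS_P22[noise]
    return (w_peak_px - b) / a

# SimSet 4 peaks: 9 px in the high-noise regime, 11 and 19 px in the default one.
widths = [plummer_width(9, noise="high"), plummer_width(11), plummer_width(19)]
```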
\subsection{Filament orientations}
\label{sec:fil_orientations}
Again, to study the impact of the noise level and aspect ratio on the reliability of the filament orientation estimates, we use the first three sets of simulations, SimSet~1, 2, and 3, for ideal and Plummer-type filaments. For each series, the derived orientation angles, $\psi_{\rm f}^{\star}$, are compared to the input orientations, $\psi_{\rm f}$, over the 2000 noise realisations of each set of simulations. This allows us to estimate both the bias and the standard deviation $\sigma_{\psi}$ of $\psi_{\rm f}^{\star}$.
We first observe that the orientations of the reconstructed filaments show no average deviation from the input orientation angles, nor any specific dependence on the input value of the orientation angle.
We also observe from these simulations that $\sigma_{\psi}$ decreases with increasing filament width, so that the results obtained with the smallest widths, $W_{\rm b} = 5\,{\rm px}$ (ideal) and $R_{\rm flat} = 5\,{\rm px}$ (Plummer), can be considered as upper limits. We adopt these conservative values as our final uncertainties on the filament orientation, displayed in Table~\ref{tab:uncertainties}.
We can see that the precision is very good: $\leq 0.2^{\circ}$ for ideal filaments and $\leq 1.8^{\circ}$ for Plummer-type filaments in the set of simulations SimSet~1 (default noise level and short aspect ratio). These uncertainties increase with increasing noise (SimSet~2), up to $\leq 0.4^{\circ}$ (ideal) and $\leq 3.1^{\circ}$ (Plummer). The impact of the noise level on the filament orientation estimates is much less important than that observed for the width, with a degradation of only a factor of $\leq 2$ for both kinds of synthetic filament profiles.
The uncertainties decrease with increasing aspect ratio (SimSet~3), down to $\leq 0.1^{\circ}$ (ideal) and $\leq 1.2^{\circ}$ (Plummer), which is again in accordance with our expectation that more elongated filaments are easier to recognize.
\begin{table*}\centering
\caption{Uncertainties of {\tt FilDReaMS} obtained in three different cases ({\bf left}) in the estimation of width ({\bf center}) and orientation ({\bf right}).}
\begin{tabular}{m{3cm} | m{0.01cm} m{2cm} m{2cm} m{0.01cm} | m{0.01cm} m{2cm} m{2cm} m{0.01cm}}
\toprule\toprule
\textbf{\large SimSet ID} & & \multicolumn{2}{c}{Width} & & & \multicolumn{2}{c}{Orientation} & \\
& & \centering Ideal & \centering Plummer & & & \centering Ideal & \centering Plummer & \\
\midrule
SimSet~1 & & \centering $\simeq 0.01\,{\rm px}$ & \centering $\simeq 0.4\,{\rm px}$ & & & \centering $\leq 0.2^{\circ}$ & \centering $\leq 1.8^{\circ}$ & \\
\midrule
SimSet~2 & & \centering $\simeq 0.4\,{\rm px}$ & \centering $\simeq 2.7\,{\rm px}$ & & & \centering $\leq 0.4^{\circ}$ & \centering $\leq 3.1^{\circ}$ & \\
\midrule
SimSet~3 & & \centering $\simeq 0.002\,{\rm px}$ & \centering $\simeq 0.2\,{\rm px}$ & & & \centering $\leq 0.1^{\circ}$ & \centering $\leq 1.2^{\circ}$ & \\
\bottomrule\bottomrule
\end{tabular}
\label{tab:uncertainties}
\end{table*}
\section{First astrophysical application and comparison with previous work}
\label{sec:comparison}
As a first astrophysical illustration, we apply {\tt FilDReaMS} to the {\it Herschel} $250\,{\rm \mu m}$ intensity map of the G210 field (top panel of Fig.~\ref{fig:G210_previous_results}). The left, middle, and right panels in the second row of Fig.~\ref{fig:G210_previous_results} show the networks of reconstructed filaments with bar widths $W_{\rm b} = 6\,{\rm px}$, $12\,{\rm px}$, and $17\,{\rm px}$. The corresponding radius of the 2D top-hat kernel used to filter out the large scales in the initial map (to obtain map B; see top middle panel of Fig.~\ref{fig:FilDReaMS_method}) is $R=11\,{\rm px}$, $24\,{\rm px}$, and $45\,{\rm px}$, respectively. We immediately see that the majority of the small filaments are sub-structures of larger ones, while only a small fraction of them covers areas not already included in larger structures, stressing the hierarchical organisation of these filamentary structures in the interstellar medium.
The map of all the reconstructed filaments with bar widths between $W_{\rm b} = 5\,{\rm px}$ and $27 \,{\rm px}$ is displayed in the third row of Fig.~\ref{fig:G210_previous_results}.
This map bears some obvious resemblance to the initial map (top row), while a direct by-eye inspection indicates that {\tt FilDReaMS} recovers most of the filamentary structures independently of the intensity.
The striations in the northern-central part of the region are better reconstructed (i.e., over longer path lengths) when integrating all bar widths than for any single value of $W_{\rm b}$. These striations are in fact composed of filaments of different bar widths, mostly parallel to each other, with the thinner ones corresponding to the crests of the larger ones.
We now compare the above results to those of \citet[][]{Malinen_2016}, who applied the {\tt RHT} method to the {\it Herschel} $250\,{\rm \mu m}$ intensity map of G210 (top panel of Fig.~\ref{fig:G210_previous_results}), using a spatial filtering with $R=11\,{\rm px}$.
The filaments detected with {\tt RHT} are displayed in the bottom row of Fig.~\ref{fig:G210_previous_results}, where they can be compared to the filaments detected with {\tt FilDReaMS} (left panel in the second row).
It appears that both methods yield similar filamentary networks, with however a few noticeable differences: {\tt RHT} detects more filaments in the most diffuse part of the region, while in the densest part {\tt FilDReaMS} detects more strands and structures branching out from the main central filament.
The long and thin striations that seem to escape detection with {\tt FilDReaMS} for $R=11\,{\rm px}$ could be missed because of the short aspect ratio of the {\tt FilDReaMS} model bar. However, they are recovered when considering a short range of values around $R=11\,{\rm px}$.
\section{Concluding remarks}
\label{sec:conclusion}
In this paper, we presented a new method to detect filaments of different widths in a map of intensity (in a broad sense) $\I$.
We called our method {\tt FilDReaMS}, for {\bf Fil}ament {\bf D}etection and {\bf Re}construction {\bf a}t {\bf M}ultiple {\bf S}cales. In brief, {\tt FilDReaMS} uses a rectangular model bar of width $W_{\rm b}$ and aspect ratio $r_{\rm b}$,
where $W_{\rm b}$ is meant to cover a broad range of values and $r_{\rm b}$ is a free parameter (typically set to $r_{\rm b} = 3$).
For any given value of $W_{\rm b}$, {\tt FilDReaMS} (1) detects potential filaments that can be locally approximated by the model bar, (2) retains the potential filaments with significance $S>1$ (Eq.~\ref{eq:criterion_significance}), (3) reconstructs the true shape and the intensity of physical filaments from the initial map of $\I$ together with the associated binary map (see Fig.~\ref{fig:filament_reconstruction}), and (4) assigns a filament orientation angle, $(\psi_{\rm f}^{\star})_i$, to each pixel $i$ of a reconstructed filament of width $W_{\rm b}$.
After repeating the procedure for all the values of $W_{\rm b}$, a most significant bar width, $(W_{\rm b}^\star)_i$, is derived for each pixel $i$ of all the reconstructed filaments, and the most prevalent bar widths of the map, $W_{\rm b}^{\star{\rm peak}}$, are inferred from the peaks of the histogram of $W_{\rm b}^\star$.
The $W_{\rm b}^{\star{\rm peak}}$ can then be cautiously converted to the widths of the often-used Plummer-type profiles, $2R_{\rm flat}$ (see Fig.~\ref{fig:scale_plum_rect} and Table~\ref{tab:linear_fits}).
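The aggregation across bar widths can be sketched as follows; a minimal illustration with stand-in significance maps (random arrays in place of the actual {\tt FilDReaMS} bar-fit output):

```python
import numpy as np

widths = np.array([5, 9, 15, 27])               # trial bar widths W_b (px)
rng = np.random.default_rng(0)
S = 2.0 * rng.random((widths.size, 64, 64))     # stand-in significance maps

S = np.where(S > 1.0, S, -np.inf)               # step (2): keep only S > 1

detected = np.isfinite(S.max(axis=0))           # significant at some W_b
W_star = widths[np.argmax(S, axis=0)][detected] # most significant width/pixel

# Histogram N_pix / N_map, whose peaks give the prevalent widths W_b^(star,peak):
hist = np.array([(W_star == w).sum() for w in widths]) / float(S[0].size)
```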
Thus {\tt FilDReaMS} makes it possible to detect filaments of a given bar width, to identify the most prevalent bar widths (and corresponding Plummer widths) in a given map, and to derive the local orientation angles of the detected filaments. The main assets of {\tt FilDReaMS} are
\begin{itemize}
\item the ability to detect filaments over a broad range of widths;
\item the small number of free parameters (only three; see Table~\ref{tab:FilDReaMS_parameters});
\item the speed of execution: typically, for a given map, running {\tt FilDReaMS} takes about $20-30\,{\rm sec}$ for each value of $W_{\rm b}$ and roughly $10-20\,{\rm min}$ to cover the entire range of $W_{\rm b}$;
\item the user-friendliness, which makes it particularly suited for statistical studies.
\end{itemize}
{\tt FilDReaMS} opens broad prospects for application to astrophysical data, in order to study the processes of star formation within interstellar filamentary structures. As a first step, we investigate the interplay between the orientation of filaments and the Galactic magnetic field in a companion paper \citep{Carriere_2022b}, generalising the analysis of \citet{Malinen_2016} to four other fields.
A broader statistical analysis over 116 {\it Herschel} fields is also in preparation.
\begin{acknowledgements}
We extend our deepest thanks to our referee, Gina Panopoulou, for her careful reading of our paper and for her many constructive comments and suggestions.
We also acknowledge useful discussions with Dana Alina, Susan Clark, Mika Juvela, and Julien Montillaud.
Herschel SPIRE has been developed by a consortium of institutes led by Cardiff University (UK) and including University Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, University Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, University Sussex (UK); Caltech, JPL, NHSC, University Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC (UK); and NASA (USA).
\end{acknowledgements}
\bibliographystyle{aa}
\bibliography{biblio}
|
Title:
Gaia DR2 and EDR3 data and evolutionary status of post-AGB stars with high radial velocities |
Abstract: Using the Gaia DR2 and EDR3 data and list of post-AGB candidates, we
investigate the parallax, proper motion and binarity for twenty post-AGB stars
and candidates having high radial velocities. From their Gaia distances their
luminosities and kinematics are derived. The evolutionary status of these stars
is discussed from their location on the post-AGB evolutionary tracks. Nine
stars are confirmed to be post-AGB stars that have their initial main-sequence
mass around one or two solar masses. From their kinematics information, two
objects among them are identified to clearly belong to the halo population.
We discuss the origin and evolutionary status of the other objects with high
radial velocities in the sample of this work.
| https://export.arxiv.org/pdf/2208.12971 |
\title{Gaia DR2 and EDR3 data and evolutionary status of post-AGB stars with high radial velocities}
\author{Wako \textsc{Aoki}\altaffilmark{1} \altaffilmark{2}}
\altaffiltext{1}{National Astronomical Observatory,
2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan }
\altaffiltext{2}{Department of Astronomical Science, School of Physical Sciences, The Graduate University of Advanced Studies (SOKENDAI), 2-21-1 Osawa, Mitaka,
Tokyo 181-8588, Japan}
\email{aoki.wako@nao.ac.jp}
\author{Tadafumi \textsc{Matsuno}\altaffilmark{3}}
\altaffiltext{3}{Kapteyn Astronomical Institute, University of Groningen \\ Landleven 12, 9747 AD Groningen, The Netherlands}
\email{matsuno@astro.rug.nl}
\author{Mudumba \textsc{Parthasarathy}\altaffilmark{1} \altaffilmark{4} \altaffilmark{5}}
\altaffiltext{4}{Indian Institute of Astrophysics, II Block, Koramangala, Bangalore 560 034, INDIA}
\altaffiltext{5}{Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235, USA}
\email{m-partha@hotmail.com}
\KeyWords{stars:evolution --- stars:AGB and post-AGB --- stars:high-velocity --- stars:distances}
\section{Introduction}
Post-AGB stars are transition objects evolving from the tip of the AGB
horizontally towards the left in the H-R diagram into early stages of
young planetary nebulae (PNe). The post-AGB evolutionary stage is
short-lived, depending on the core-mass \citep{schoenberner83,iben83}. During
the transition from the tip of the AGB to early stages of young PNe
phase they appear as M-,K-, G-, F-, A-, and OB-type post-AGB
supergiants for a short period \citep{partha86, partha89, pottasch88, partha93a,partha93b}. They
mimic the spectra of supergiants because of their extended thin
atmospheres around the white-dwarf like C-O core (after severe
mass-loss and the termination of the AGB phase of evolution). Before
the advent of the IRAS satellite, very few post-AGB supergiant candidates
were known. Analysis of IRAS data has revealed many cool to hot
post-AGB supergiants \citep{preite-marthinez88,kwok89}.
The list of post-AGB stars detected from the analysis of IRAS data by
several investigators can be found in the paper of \citet{vickers15} and references therein.
Progress in the understanding of post-AGB stars is summarized in review papers
(e.g., \cite{vanwinckel03, kamath22u}).
Multi-wavelength studies of a significant sample of post-AGB candidates
were carried out by several investigators during the past 35 years
which enabled us to understand their chemical composition, circumstellar
shells, s-process nucleosynthesis and late stages of evolution of low-mass
stars (e.g., \cite{desmedt12,desmedt16,kamath22,partha22}, and references therein). However, for a better understanding of these stars, their
distances, radial velocities and accurate proper-motion measurements
are required. With the advent of {\it Gaia} (DR2 and EDR3; \cite{gaia18,lindegren18}), accurate parallaxes (distances), radial velocities, and proper-motion measurements of a
large sample of post-AGB stars became available.
In an earlier paper
\citep{partha20}, we studied the Gaia DR2 data and
evolutionary status of eight high velocity hot post-AGB stars. Finding high-velocity objects among post-AGB stars is useful to constrain the final stage of stellar evolution of low-metallicity, low-mass stars in the halo population.
Such stars are very rare and are old low-mass stars in advanced stages of evolution. Some
of them may belong to the Galactic halo.
In this paper we present an analysis of {\it Gaia} DR2 and EDR3 data of
stars listed as post-AGB stars or candidates in literature.
\section{Data and analysis}
We investigate the list of stars given in the paper of \citet{vickers15} as likely or possible post-AGB stars, and search the {\it Gaia} DR2
and EDR3 catalogues for post-AGB stars with radial velocities and with accurate parallaxes. The sample of \citet{vickers15} contains almost all the known post-AGB stars. Here we define
post-AGB stars as those in the transition region between the tip of the AGB and the very early stages of
planetary nebulae (PNe). These objects are often termed proto-planetary nebulae.
We select 20 objects that have absolute values of radial velocities larger than 45~km~s$^{-1}$ ($|RV|>45$~km~s$^{-1}$).
The galactic longitudes and latitudes, parallaxes, radial velocities,
$G$ ({\it Gaia} $G$-band magnitude), $V$, $(B-V)$, and spectral types are given in
tables \ref{tab:obj} and \ref{tab:param}. The spectral types and the $V$ and $B$ magnitudes are taken from SIMBAD.
Among the twenty stars twelve are high galactic latitude stars, eleven
stars have high negative radial velocities and nine have high positive
radial velocities.
The parallaxes taken from {\it Gaia} DR2 and EDR3 are given in table~\ref{tab:param}.
The distances derived from the parallaxes are also listed in the table.
Four stars have a large relative parallax uncertainty ($>$20\%) in {\it Gaia} EDR3. For these stars, only the lower limit of the distance is presented. The table also gives the distances estimated by \citet{Bailer-Jones21} using a prior constructed from a three-dimensional model of the Galaxy. Excluding the above four objects, the two estimates of the distance for each object agree within 5\%. We adopt the distances simply obtained from the parallaxes in the present work.
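The distance bookkeeping above amounts to the following; a minimal sketch, in which the use of the $+1\sigma$ parallax for the lower limit is our assumption (the paper does not specify how the limit is defined):

```python
# Parallax-to-distance selection sketched from the text: invert the parallax
# when the relative error is below 20%, otherwise quote only a lower limit.
def parallax_to_distance(plx_mas, err_mas):
    """Return (distance_kpc, is_lower_limit) for a parallax in mas."""
    if err_mas / plx_mas > 0.20:                 # relative uncertainty > 20%
        return 1.0 / (plx_mas + err_mas), True   # assumed +1-sigma lower limit
    return 1.0 / plx_mas, False
```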
The renormalised unit weight error (RUWE) values are also given in the
table. The RUWE values also indicate the reliability of the parallaxes. Whereas RUWE values are sensitive to the photocentric motions of unresolved companions \citep{lindegren21, stassun21}, they could also be affected by other factors, including the nebulosity of proto-planetary nebulae.
The RUWE values of nine stars, including the above four
objects with large uncertainties of parallaxes, are larger than 1.4.
Among them, four objects are likely to be
post-AGB stars according to previous studies of stellar properties,
including infrared excess and metal depletion; see below for more details
on these stars. The remaining five stars have large uncertainties in
parallaxes and/or very large RUWE values and, hence, are not
regarded as candidate post-AGB stars in this paper.
The other eleven stars in our sample have RUWE values smaller than 1.4. Among them, BD+33 2642 is suggested to belong to a binary system
\citep{vanwinckel14}.
For the remaining ten objects, there is no signature of binarity from the Gaia astrometry.
The kinematic information that is calculated based on the {\it Gaia} EDR3 astrometry is presented in table 3, excluding the four objects with large uncertainties of parallaxes.
$E(B-V)$ values are obtained from dust maps in literature or by comparing expected intrinsic colors with observed ones (table \ref{tab:obj}).
We use the three-dimensional dust extinction maps from \citet{chen19} and \citet{green18} and the two-dimensional dust extinction map from \citet{schlegel98}.
For a star to have an $E(B-V)$ estimate from three-dimensional maps, it needs to have a precise parallax measurement (relative uncertainty smaller than 20\%) and be in the sky coverage of the maps.
Since \citet{chen19} focus on low Galactic latitude field ($|b|<10^\circ$), we prioritize values from \citet{chen19} over \citet{green18} for objects with $|b|<10^\circ$.
We note that \citet{green18} only covers the sky with declination larger than $-30^\circ$ and hence we could not derive $E(B-V)$ from three-dimensional maps for HD 16745 and HD 178443 despite precise parallax measurements available for these objects.
The extinction coefficients from \citet{green18} and \citet{chen19} are converted to $E(B-V)$ using values provided in \citet{green18},
\citet{schlafly11}, and \citet{casagrande19}.
In addition to the interstellar extinctions considered in these dust maps, some objects could be affected by circumstellar dust extinction given the evolutionary status of the objects.
Thirteen objects are indeed IRAS sources
and their $(B-V)$ colours are likely affected by circumstellar reddening.
For instance, the $E(B-V)$ of HD~56126 (IRAS~07134+1005) estimated from the spectral type is 0.56, whereas the $E(B-V)$ from the dust map is quite small (0.08 or less).
For these stars we used the
observed $(B-V)$ values from SIMBAD and intrinsic $(B-V)_{0}$ values
estimated from their spectral types using Table 15.7 of Allen's Astrophysical Quantities \citep{cox00} with interpolation to derive $E(B-V)$ values.
The $E(B-V)$ values derived in this way are prioritized over the values from dust maps.
On the other hand, the $E(B-V)$ values of IRAS~07140-2321, IRAS~07227-1320, and IRAS~14325-2321 estimated from the spectral types are significantly smaller than those from the dust map. This suggests that the estimate of the reddening from the dust map or spectral types could be uncertain for these objects. For these three stars, we adopt $E(B-V)$ from the dust map.
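The adopted-reddening logic described above can be summarized as follows; a sketch of the decision rule only (the dust-map queries and the spectral-type colour estimates are assumed to be given):

```python
def adopt_ebv(ebv_map, ebv_spectral_type=None):
    """Return (adopted E(B-V), assumed uncertainty)."""
    if ebv_spectral_type is not None and ebv_spectral_type > ebv_map:
        # e.g. HD 56126: the spectral-type estimate (0.56) far exceeds the
        # map value (<= 0.08), so circumstellar reddening likely dominates
        return ebv_spectral_type, 0.2
    # no spectral-type estimate, or it is smaller than the map value
    # (e.g. IRAS 07140-2321): keep the dust-map value
    return ebv_map, 0.05
```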
The typical error of $E(B-V)$ from the dust maps is 0.02--0.03 for high Galactic latitude objects. We adopt 0.05 as the uncertainty of the reddening of these objects to determine the luminosity. This is consistent with the error of $E(B-V)$ given in \citet{vickers15} for our sample (0.043 on average). For objects with large reddening, in particular objects that could be affected by circumstellar reddening, we assume the error of $E(B-V)$ to be 0.2, including the uncertainty of the subclass of the spectral type, which results in a difference of $E(B-V)$. The $E(B-V)$ values and the errors adopted are given in table 1. Taking the errors into account, the $E(B-V)$ values estimated in this study agree well with those obtained for HD~56126 (0.43) and IRAS~14325--6428 (1.07) by \citet{kamath22}. Although our values are slightly larger than those for IRAS~05208--2035 (0.01) and HD~46703 (0.23) by \citet{oomen18}, and for IRAS~08187--1905 (0.07) and HD~161796 (0.13) by \citet{kamath22}, the discrepancy is smaller than 0.1 if the error ranges are taken into account.
The absolute $V$ magnitudes are calculated from the apparent magnitudes and distances given in tables 1 and 2, respectively. The luminosity is calculated from the absolute magnitude and the bolometric
corrections taken from \citet{cox00}. The values
of bolometric corrections in \citet{flower96} are 0.1--0.25 mag larger
than those of \citet{cox00}, resulting in differences in $\log (L/L_{\odot})$ of
less than 0.1 dex. Changes of $E(B-V)$ by 0.05 and 0.2 result in differences of $\log(L/L_{\odot})$ of 0.08 and 0.25, respectively. The $T_{\rm eff}$ values and spectral
types of most stars are available from the literature. These values are given in Table~2.
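The luminosity computation described above uses only standard relations; a minimal sketch, where $R_V = 3.1$ and $M_{\rm bol,\odot} = 4.74$ are our assumptions rather than values quoted in the paper:

```python
import math

def log_luminosity(V, d_kpc, ebv, bc, R_V=3.1, M_BOL_SUN=4.74):
    """log10(L/Lsun) from apparent V, distance, reddening, and bol. correction."""
    A_V = R_V * ebv                                  # extinction in the V band
    M_V = V - 5.0 * math.log10(d_kpc * 100.0) - A_V  # 5 log10(d / 10 pc)
    return (M_BOL_SUN - (M_V + bc)) / 2.5

# With these constants, Delta E(B-V) = 0.2 shifts log(L/Lsun) by
# 0.2 * 3.1 / 2.5 ~ 0.25, matching the sensitivity quoted above.
```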
Kinematics are calculated using the {\it Gaia} EDR3 parallaxes and
proper motion measurements. We adopt 8.21 kpc as the distance between
the Sun and the Galactic center \citep{mcmillan17} and 0.021 kpc as the
vertical offset of the Sun \citep{bennett19}. The solar motion is
adopted from \citet{schonrich10} for the radial and vertical
velocities (11.1~km~s$^{-1}$ and 7.25~km~s$^{-1}$, respectively), and the
total azimuthal velocity of the Sun is calculated as 245.34~km~s$^{-1}$ using
the proper motion measurements of \citet{brunthaler04}.
The orbital energy is calculated assuming the Milky Way potential from \citet{mcmillan17}.
These results are given in table 3. The kinematics of the post-AGB star candidates in the Galactocentric frame are presented in figure~\ref{fig:kinematics}.
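For the velocities themselves, the transformation to the Galactic rest frame is a simple shift by the adopted solar motion; a minimal sketch (illustrative only; the orbital quantities in table 3 use the full \citet{mcmillan17} potential):

```python
import math

U_SUN, W_SUN = 11.1, 7.25   # adopted solar radial and vertical motion (km/s)
V_SUN_TOTAL = 245.34        # total solar azimuthal velocity (km/s)

def rest_frame_velocity(U, V, W):
    """Heliocentric (U, V, W) -> Galactic rest-frame components (km/s)."""
    return U + U_SUN, V + V_SUN_TOTAL, W + W_SUN

def rest_frame_speed(U, V, W):
    u, v, w = rest_frame_velocity(U, V, W)
    return math.sqrt(u * u + v * v + w * w)
```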
We note that the sample selection of high velocity post-AGB stars
would not be affected if they belong to low mass binaries because the
radial velocity variations expected for low mass binaries are not as
large as the radial velocities of the stars studied in this paper
(tables 1 and 2).
\section{Notes on the twenty high velocity post-AGB candidates}
Figure~\ref{fig:track} shows the luminosity of the objects as a function of effective temperature with the post-AGB evolution
tracks \citep{miller16}. The four objects with large uncertainties in the parallaxes (see \S~2) are excluded. This figure indicates that the typical luminosity range of low-mass post-AGB stars is $3<\log(L/L_{\odot})<4$.
We find that nine objects have luminosities in this range, among which five have reliable Gaia parallaxes (RUWE values smaller than 1.4). BD$-12$ 4970, which has a very high luminosity ($\log(L/L_{\odot})=5.5$), is also regarded as a post-AGB star. Among the remaining ten stars with luminosities above or below the post-AGB range, the five with small RUWE values have lower luminosities, $\log(L/L_{\odot})<3$, and the other five have large RUWE values.
Here we report some detailed information for individual objects separately with the above grouping.
It should be noted that the luminosities of binary stars are still uncertain due to the uncertainty of parallaxes. Although most of them are found in the groups with large RUWE values in this section, some known binary stars are also included in the groups with small RUWE values. Information on the binarity is given in the following notes for individual objects when available.
\subsection{Post-AGB stars with small RUWE values}
\begin{itemize}
\item{HD 56126 (IRAS 07134+1005)}
It is a high Galactic latitude, high velocity, metal-poor, F-type post-AGB star with a 21-micron emission feature. It is overabundant in carbon and s-process elements \citep{partha92}. \citet{desmedt16} report detailed abundances, including [Fe/H]$=-0.91$ and large excesses of s-process elements (e.g., [Ba/Fe]$=1.82$). More recently, \citet{kamath22} list this object as a single post-AGB star with s-process enrichment.
\item{ IRAS 07140-2321 (V421 CMa)}
\citet{gielen11} derived $T_{\rm eff}$ = 7000~K, $\log g$ = 1.5, and [Fe/H] = $-0.8$.
\item{ HD 116745 (CD$-46^{\circ} 8644$, Fehrenbach’s star, ROA 24)}
It is a high galactic latitude, high velocity, metal-poor halo post-AGB star. \citet{gonzalez92} found it to be overabundant in carbon and s-process elements, and derived $T_{\rm eff}$ = 6950~K, $\log g$ = 1.15, and [Fe/H] = $-1.77$. It is a member of the globular cluster Omega Cen.
\item{ IRAS 17436+5003 (HD 161796)}
It is a high galactic latitude and high velocity F-type post-AGB supergiant \citep{partha86}. \citet{luck90} derived $T_{\rm eff}$ = 6500~K, $\log g$ = 0.70, and [Fe/H] = $-0.32$. This object is listed as a single post-AGB star without s-process enrichment by \citet{kamath22}.
\item{ BD$-12^{\circ} 4970$ (LS IV -12 13)}
It is a high velocity, hot (B0.5Ia) post-AGB candidate. It is not an IRAS source. A high-resolution spectroscopic study of this star is important.
\item{ BD+33 2642}
It is a high Galactic latitude, high velocity, metal-poor, hot post-AGB star. This object is also classified as a protoplanetary nebula; however, its nebula is not bright, and it is not an IRAS source. \citet{napiwotzki94} studied this star and derived $T_{\rm eff}$ = 20,000~K, $\log g$ = 2.9, and [Fe/H] = $-2.0$. Its chemical composition indicates depletion of refractory elements, while [O/H] $=-0.8$ indicates that it is intrinsically metal-poor. This star belongs to a binary system with a low-mass, faint companion; a high-resolution spectrum shows no spectral features of the secondary. The orbital period determined by \citet{vanwinckel14} is 1105 days. The binarity of this object does not appear to affect the RUWE value, which is smaller than 1.4 (1.295).
\end{itemize}
\subsection{Post-AGB stars and candidates with large RUWE values}
\begin{itemize}
\item{HD 46703 (IRAS 06338+5333)}
This star is a high Galactic latitude, high velocity, metal-poor, F-type population II post-AGB star \citep{luck84}. \citet{luck84} derived $T_{\rm eff}$ = 6000~K, $\log g$ = 0.4, and [Fe/H] = $-1.57$. \citet{partha86} were the first to find that it is a weak IRAS source with far-IR colours and a flux distribution similar to those of the high Galactic latitude post-AGB star HD 161796. \citet{hrivnak08} also report [M/H]$=-0.6$ for this object, discussing depletion and binarity. It belongs to a binary system; \citet{oomen18} report an orbital period of 597 days. The RUWE value of this star is 1.622, clearly higher than 1.4, as expected from its binarity.
\item{IRAS 08187-1905 (HD 70379, V552 Pup)}
It is a high Galactic latitude and high velocity F6 post-AGB supergiant \citep{reddy96}. \citet{reddy96} derived the chemical composition of this star from an analysis of high resolution spectra, obtaining $T_{\rm eff}$ = 6500~K, $\log g$ = 1.0, and [Fe/H] = $-0.5$. This star is listed as a single post-AGB star without s-process enrichment by \citet{kamath22}. The RUWE value of this object is 1.695, even higher than that of HD~46703. Although this does not by itself establish that the object belongs to a binary system, further investigation of its binarity will be useful.
\item{ IRAS 14325-6428}
It is a high velocity F5I star with IRAS colours and a flux distribution similar to those of post-AGB stars and PNe. \citet{desmedt16} report [Fe/H]$=-0.56$ with excesses of s-process elements, whereas \citet{kamath22} regard it as a single post-AGB star without s-process enrichment. The RUWE value of this object is quite large (2.181). While further study of its binarity would be useful, this star can be treated as a post-AGB star with excesses of s-process elements according to the literature.
\item{ HD 137569 (IRASF 15240+1452)}
It is a high Galactic latitude and high velocity post-AGB star. No Fe lines are detected in its spectrum by \citet{martin04} and \citet{martin06}, who classified it as a metal-poor, hot post-Horizontal Branch (post-HB) star.
\end{itemize}
\subsection{Objects with low luminosity with small RUWE}
\begin{itemize}
\item{ IRAS 07227-1320}
This star is listed as a possible post-AGB star by \citet{vickers15}. Its spectral type is M1I, and no chemical composition study is available. The luminosity of this object ($\log(L_{*}/L_{\odot})=2.58$) is lower than that of the post-AGB stars in our sample.
\item{ BD+32 2754}
This star is also listed as a possible post-AGB star by \citet{vickers15}. It is a high Galactic latitude and high velocity F-type star with no chemical composition analysis available; it may belong to the Galactic halo. The luminosity of this object ($\log(L_{*}/L_{\odot})=1.10$) is clearly lower than that of post-AGB stars.
\item{ HD 178443 (LSE 182)}
It is not an IRAS source. It is a high Galactic latitude and high velocity (343.5 km~s$^{-1}$) star. \citet{mcwilliam95} derived $T_{\rm eff}$ = 5180~K, $\log g$ = 1.65, and [Fe/H] = $-2.07$, and classify it as a red-HB star. It is a Galactic halo star (see next section). The luminosity of this object ($\log(L_{*}/L_{\odot})=1.99$) is lower than that of the post-AGB stars in our sample.
\item{ PHL 1580}
It is a high Galactic latitude and high velocity, hot post-AGB star. \citet{mccausland92} derived $T_{\rm eff}$ = 24,000~K, $\log g$ = 3.6, and [Fe/H] $=-0.6$, and find it to be carbon deficient. This star may have left the AGB before the third dredge-up. The luminosity of this object ($\log(L_{*}/L_{\odot})=1.12$) is clearly lower than that of post-AGB stars.
\item{LS III +52 5}
It is a high velocity ($-232.8$ km~s$^{-1}$) and high proper motion star. In the LS catalogue its spectral type is given as OB- \citep{hardorp64}. It is not an IRAS source, and a detailed spectroscopic study of this star is important. The luminosity of this object ($\log(L_{*}/L_{\odot})=1.22$) is clearly lower than that of post-AGB stars.
\end{itemize}
\subsection{Others with uncertain luminosity and large RUWE}
\begin{itemize}
\item{IRAS 02143+5852}
It is a high radial velocity F7Iae star with the H$\alpha$ line in emission. $T_{\rm eff}$ is estimated from its spectral type to be 6000~K. \citet{fujii02} made $BVRIJHK$ photometry, and \citet{omont93} classified it as a carbon-rich post-AGB star. The error in its parallax is large (tables 1 and 2).
\item{IRAS 05089+0459}
It is a high Galactic latitude and high velocity M3I post-AGB candidate. \citet{iyengar97} made near-IR photometric observations ($R$ = 12.68, $I$ = 11.62). There is no chemical composition analysis of this star. The error in its {\it Gaia} EDR3 parallax is large (table 2).
\item{ IRAS 05208-2035 (BD$-20^{\circ} 1073$, AY Lep)}
It is a high Galactic latitude and high velocity post-AGB candidate. \citet{gielen11} derived $T_{\rm eff}$ = 4000~K, $\log g$ = 0.5, and [Fe/H] = 0.0. The observed $B-V$ colour indicates that it may be a G-type star; the spectral type is not available in SIMBAD. On the other hand, [Fe/H]$=-0.7$ and a small overabundance of s-process elements are derived by \citet{rao12}. \citet{oomen18} derive the orbital period of this binary system to be 23 days. Although the luminosity of this star is still uncertain, it is likely to be a binary post-AGB star.
\item{ IRAS 15210-6554}
From {\it Gaia} DR2 data we find it to be a high velocity star. Its spectral type is K2I, and its Galactic latitude is $b=-7.7$ degrees. Based on its IRAS colours and flux distribution, it is classified as a post-AGB star. This star does not have an accurate {\it Gaia} DR2 parallax.
\item{ IRAS 18075-0924}
It is a high velocity star. Its {\it Gaia} DR2 parallax is not accurate, and spectroscopic and photometric studies of this star are needed. Based on its IRAS colours and flux distribution, it is classified as a post-AGB candidate.
\end{itemize}
\section{Discussion and concluding remarks}\label{sec:discussion}
\subsection{Populations of post-AGB stars with high radial velocities}
Nine objects in our sample, HD 46703, HD 56126, IRAS 07140--2321, IRAS 08187--1905, HD 116745, IRAS 14325--6428, HD 137569, BD$+33^{\circ} 2642$, and IRAS 17436+5003, are identified as post-AGB stars with high radial velocities (tables 1 and 2). The very luminous object BD$-12^{\circ} 4970$ is discussed separately. Their computed absolute luminosities and comparisons with post-AGB evolutionary tracks (figure 2) indicate that their initial main-sequence masses are less than 2 solar masses. Among them, only two stars, HD 116745 and BD$+33^{\circ} 2642$, clearly belong to the Galactic halo population (table 3, figure 1). IRAS 07140-2321 has the largest $L_{z}$ and $E$ (table 3). The other six post-AGB stars are not separated from disk stars in figures 1 and 2, although they have relatively high radial velocities. This indicates that the radial velocity criterion ($|V_{\rm Helio}|>45$~km~s$^{-1}$) is not sufficient to effectively select halo post-AGB stars. The radial velocities of the clear halo objects identified by this work, HD~116745 and BD$+33^{\circ} 2642$, are $V_{\rm Helio}=240$ and $-94$ km~s$^{-1}$, respectively. It should be noted that the above criterion is adopted in this work so that we do not miss halo objects from the sample of \citet{vickers15}.
IRAS~07140-2321 is a unique object with a high total orbital energy and a high $z$-component of angular momentum. It seems to belong to the disk population rather than the halo, given its prograde rotation with small $v_{R}$ and $v_{z}$. The distance and the high total energy suggest that it is an outer disk object.
BD$-12^{\circ} 4970$ (LS IV -12 13) is a hot, high velocity star with an accurate parallax. Its computed absolute luminosity indicates that its initial main-sequence mass may be 4.0 solar masses.
The kinematics of this object suggest that it belongs to the disk population.
Among the objects studied in \citet{partha20}, three objects (LS~3593, LSE~148, and HD~214539) have clear kinematic features of halo objects (figure 1), whereas those of three other stars are not distinguished from disk stars. Another object, LS~5107, has a high total orbital energy and a high $z$-component of angular momentum, as found for IRAS~07140-2321 in the current sample. As LSE~148 is a less luminous object, the clear halo post-AGB stars identified by that study are LS~3593 and HD~214539.
\subsection{Comments on other objects}
The two less luminous stars HD 178443 and PHL 1580 also belong to the halo population (figure 1). The high velocity, hot, metal-poor star PHL 1580, with an accurate parallax, is found to have a very low luminosity (table 2) compared to post-AGB stars. It may be a hot subdwarf star. LSE 182 (HD 178443) is a high velocity, metal-poor star in the Galactic halo and could be a red HB star \citep{mcwilliam95}. BD$+32^{\circ} 2754$ also has a low absolute luminosity and may be a subdwarf. IRAS 07227--1320, with the spectral type M1, may be a cool post-AGB star; further study is needed to understand its chemical composition and evolutionary stage. The computed absolute luminosity of the post-AGB star HD 161796 (table 2) and its location in figure 2 indicate that its initial main-sequence mass may be around 2 solar masses.
\citet{kamath15} found dusty post-red giant branch (post-RGB) stars in the LMC and SMC. These stars have mid-IR excesses and stellar parameters ($T_{\rm eff}$, $\log g$, [Fe/H]) similar to those of post-AGB stars, but their luminosities are less than 2500 L$_{\odot}$, indicating lower masses and radii. Some of the stars in our sample also have luminosities less than 2500~L$_{\odot}$ (table 2, figure 2), and they may be post-RGB stars similar to those found by \citet{kamath15}. Very low luminosity stars like PHL 1580 mentioned above are a puzzle; they may be post-HB stars or may be evolving towards the AGB-manqu\'e stage.
Recently, \citet{bond20} found BD$+14^{\circ} 3061$ to be a luminous, metal-poor, yellow post-AGB supergiant star in the Galactic halo.
He found it to be a very high-velocity star moving in a retrograde Galactic orbit. It is not an IRAS source.
The Galactic halo post-AGB stars have relatively low core masses. They evolve slowly and, by the time they reach the G- and F-type post-AGB stage, their circumstellar dust shells have dispersed into the interstellar medium. They never become PNe. Galactic halo post-AGB supergiants are very rare, and discovering them is a challenging task.
\citet{bond20} derived the absolute visual magnitude of this star from its {\it Gaia} DR2 parallax to be $M_{V} = -3.44$.
Since its bolometric correction is close to zero (i.e., $M_{V} \simeq M_{\rm bol}$), \citet{bond20} proposed that these Galactic halo A- and F-supergiants are useful as standard candles, as they are luminous and have nearly the same absolute luminosity. Some of the Galactic halo post-AGB stars in our sample seem to be similar to BD$+14^{\circ} 3061$.
An extensive survey is needed to detect more Galactic halo post-AGB supergiants.
\section{Summary}
This paper investigates the list of post-AGB star candidates of \citet{vickers15}, selecting objects with high radial velocities. We identify two clear examples of high-velocity, low-mass post-AGB stars and a few candidates from the evolutionary status and kinematics derived from {\it Gaia} DR2 and EDR3. Through this study and the previous one \citep{partha20}, four clear halo post-AGB stars are identified (HD~116745, BD+33$^{\circ}$2642, LS~3525 and HD~214539).
We also find that the list of \citet{vickers15} includes objects that are not classified as post-AGB stars, given the new luminosity estimates based on {\it Gaia} parallax measurements. Further spectroscopic studies of the \citet{vickers15} sample to determine radial velocities would be useful to obtain statistics of post-AGB stars as well as information on individual objects.
\begin{ack}
MP was supported by the NAOJ Visiting Fellow Program of the Research
Coordination Committee, National Astronomical Observatory of Japan
(NAOJ), National Institutes of Natural Sciences (NINS).
\end{ack}
\clearpage
\scriptsize
\begin{table*}
\tabcolsep 4pt
\tbl{Basic data of twenty high velocity post-AGB candidates
\label{tab:obj}}{
\begin{tabular}{lrrlrrrrrrrrrrl}
\hline\noalign{\vskip 3pt}
star & $l$ & $b$ & Sp. & $V$ & $G$ & $(B-V)$ & $(B-V)_{0}$ & \multicolumn{4}{c}{$E(B-V)$} & $T_{\rm eff}$ & B.C. & Ref. \\
\cline{9-12}
& (deg) & (deg) & & & & & Sp & Sp & SFD & 3D & adopted & (K) & & \\
\hline\noalign{\vskip3pt}
1)IRAS 02143+5852 & 133.8 & -1.93 & F7Ie & 13.8 & 13.51 & 1.22 & 0.48 & 0.74 & 1.05 & & ... & 6000 & -0.07 & 1,2,3\\
2)IRAS 05089+0459 & 196.3 & -19.5 & M3I & 14.08 & 13.13 & 1.74 & & & 0.14 & & ... &3200 & -2.24 &1,4 \\
3)IRAS 05208-2035 & 222.8 & -28.3 & & 9.48 & 8.98 & 1.04 & 0.7: & 0.3: & 0.06 & 0.08 & 0.3$\pm0.2$ &4900 & -0.33 & 5,6,7 \\
4)HD 46703 & 162.0 & +19. & F7I &9.04 & 8.84 & 0.48 & 0.02 & 0.46 & 0.08 & 0.10 & 0.46$\pm0.20$ & 6000 & -0.06 & 1,7,8,9\\
(IRAS 06338+5333) &&&&&&&&&&&& & \\
5)HD 56126 & 206.7 & +10.0 & F5Ia & 8.32 & 8.06 & 0.88 & 0.32 & 0.56 & 0.08 & 0.00 & 0.56$\pm0.20$ & 6500 & -0.03 & 1,10,11,12 \\
(IRAS 07134+1005) &&&&&&&&&&&& & \\
6)IRAS 07140-2321 & 236.6 & -5.4 & F5I & 10.73 & 10.49 & 0.43 & 0.23 & 0.20 & 0.59 & 0.57 & 0.57$\pm0.20$ & 7000 & 0.0 & 1,13 \\
7)IRAS 07227-1320 & 228.7 & +1.2 & M1I & 12.55 & 11.6 & 1.96 & 1.69 & 0.27 & 0.51 & 0.43 & 0.43$\pm0.20$ & 3500 & -1.45 & 1,14 \\
8)IRAS 08187-1905 & 240.6 & +9.8 & F6Ib/II & 9.02 & 8.83 & 0.61 & 0.40 & 0.21 & 0.11 & 0.15 & 0.21$\pm0.05$ & 6150 & -0.06 & 1,12,15 \\
9)HD 116745 & 309.1 & +15.2 & A7/A9e & 10.79 & 10.68 & 0.29 & 0.13 & 0.16 & 0.13 & & 0.16$\pm0.05$ & 6950 & -0.0 & 1,16 \\
10)IRAS 14325-6428 & 313.9 & +4.1 & F5I & 12.0 & 11.27 & 0.56 & 0.32 & 0.24 & 0.64 &0.89 & 0.89$\pm0.20$ & 6400 & -0.03 & 1,11,12 \\
11)IRAS 15210-6554 & 317.7 & -7.7 & K2I & 11.85 & 11.72 & (0.03)*& 1.36 & & 0.20 & & ... & 4310 & -0.61 & 1 \\
12)HD 137569 & 21.9 & +51.9 & B9Iab:p& 7.91 & 7.89 & -0.05 & ... & 0.0 & 0.05 & 0.01 & 0.01$\pm0.05$ & 10,500& -0.53 & 1,17,18\\
13)BD+33 2642 & 52.7 & +50.8 & O7p & 10.73 & 10.78 & -0.12 & -0.27 & 0.15 & 0.02 & 0.06 & 0.15$\pm0.05$ & 20,000& -1.66 &1,19,20 \\
14)BD+32 2754 & 53.6 & +41.5 & F8 & 9.55 & 9.46 & 0.57 & 0.56 & 0.01 & 0.02 & 0.02 & 0.01$\pm0.05$ & 5750 & -0.09 & 1,14 \\
15)HD161796 & 77.1 &+30.9 & F3Ib & 7.21 & 7.08 & 0.47 & 0.26 & 0.21 & 0.03 & 0.04 & 0.21$\pm0.05$ & 6500 & -0.03 & 1,9,12,21 \\
( IRAS 17436+5003) &&&&&&&&&&&&& & \\
16)BD-12 4970 & 018.0 & +1.6 & B0.5Ia& 8.78 & 8.30 & 1.02 & -0.21 & 1.23 & 2.71 & 1.58 & 1.23$\pm0.20$ & 27,000& -2.40 & 1\\
17)IRAS 18075-0924 & 019.8 & +4.7 & ---- & 13.9 & 12.47 & 1.4 & & & 1.41 & & ... & & & 1\\
18)HD 178443 & 354.2 & -21.5 & F8 & 10.02 & 9.80 & 0.673 & 0.56 & 0.11 & 0.09 & & 0.11$\pm0.05$ & 5180 & -0.09 &1,22\\
19)PHL 1580 & 031.3 & -43.5 & B0I & 12.33 & 12.19 & (0.14)* &-0.22 & & 0.04 &0.03 & 0.03$\pm0.05$ & 24,000& -2.8 &1,23\\
20)LS III +52 5 & 095.1 & +0.8 & OB- & (12.2)* & 11.74 & (0.46)* &-0.22 & & 2.91 & 0.03 & 0.03$\pm0.05$ & 25,000& -2.9 &1,24\\
\hline\noalign{\vskip 3pt}
\end{tabular}
}
\begin{tabnote}
Notes: ()* indicates (V-R) for 11)IRAS 15210-6554, (V-G) for 19)PHL 1580, and B mag and (B-G) for 20)LS III +52 5. $E(B-V)_{\rm Sp}$ indicates $(B-V)-(B-V)_{0}$, where $(B-V)_{0}$ is estimated from the spectral type. $E(B-V)_{\rm SFD}$ is from the 2D dust extinction map of Schlegel et al. (1998). $E(B-V)_{\rm 3D}$ is taken from 3D dust maps of Green et al. (2019) if $|b|>10^\circ$ and Chen et al. (2019) if $|b|<10^\circ$. References: 1)SIMBAD; 2)\citet{fujii02}; 3)\citet{omont93}; 4)\citet{iyengar97}; 5)\citet{gielen11}; 6)\citet{rao12}; 7)\citet{oomen18}; 8)\citet{luck84}; 9)\citet{partha86}; 10)\citet{partha92}; 11)\citet{desmedt16}; 12)\citet{kamath22}; 13)\citet{gielen11}; 14)\citet{vickers15}; 15)\citet{reddy96}; 16)\citet{gonzalez92}; 17)\citet{martin04}; 18)\citet{martin06}; 19)\citet{napiwotzki94}; 20)\citet{vanwinckel14}; 21)\citet{luck90}; 22)\citet{mcwilliam95}; 23)\citet{mccausland92}; 24)\citet{hardorp64}
\end{tabnote}
\end{table*}
\begin{table*}
\tbl{Gaia EDR3 parallaxes and derived luminosities of twenty high velocity post-AGB candidates
\label{tab:param}}{
\begin{tabular}{lrrrrrrrl}
\hline\noalign{\vskip3pt}
Star & parallax\footnotemark[$*$] & Distance & Distance (BJ) & $\log(L/L_{\odot})$\footnotemark[$\ddagger$] & $\log$($T_{\rm eff}$/K) & RV\footnotemark[$\dagger$] & RUWE & Subsection \\
& (mas) & (kpc) & (kpc) & & & (km~s$^{-1}$) & & in Sect. 3 \\
\hline\noalign{\vskip3pt}
1)IRAS 02143+5852 & 1.364$\pm$0.289 & $>$0.510 & & $>$0.74 & 3.778 & -49.02$\pm$14.69 & 18.689 &3.4 \\
2)IRAS 05089+0459 & 0.754$\pm$0.345 & $>$0.684 & & $>$0.88 & 3.477 & 85.92$\pm$1.49 & 21.908 &3.4 \\
3)IRAS 05208-2035 & 0.687$\pm$0.030 & 1.420$\pm$0.064 & 1.403 $^{+0.053}_{-0.059}$ & 2.91$\pm$0.25 & 3.690 & 52.84$\pm$3.68 & 2.184 &3.4 \\
4)HD 46703 & 0.268$\pm$0.024 & 3.512$\pm$0.330 & 3.399 $^{+0.276}_{-0.278}$ & 3.92$\pm$0.26 & 3.778 & -83.53$\pm$7.71 & 1.622 &3.2 \\
5)HD 56126 & 0.454$\pm$0.024 & 2.124$\pm$0.114 & 2.099 $^{+0.108}_{-0.110}$ & 3.93$\pm$0.25 & 3.813 & 93.71$\pm$3.54 & 0.922 &3.1 \\
6)IRAS 07140-2321 & 0.178$\pm$0.012 & 5.116$\pm$0.343 & 5.122 $^{+0.377}_{-0.404}$ & 3.74$\pm$0.26 & 3.845 & 62.38$\pm$4.13 & 1.029 &3.1 \\
7)IRAS 07227-1320 & 0.489$\pm$0.021 & 1.975$\pm$0.087 & 1.982 $^{+0.068}_{-0.073}$ & 2.58$\pm$0.25 & 3.544 & 70.68$\pm$0.39 & 1.176 &3.3 \\
8)IRAS 08187-1905 & 0.288$\pm$0.033 & 3.280$\pm$0.403 & 3.259 $^{+0.342}_{-0.391}$ & 3.60$\pm$0.12 & 3.789 & 65.44$\pm$1.87 & 1.695 &3.2 \\
9)HD 116745 & 0.177$\pm$0.020 & 5.154$\pm$0.597 & 4.893 $^{+0.379}_{-0.444}$ & 3.20$\pm$0.11 & 3.842 & 240.11$\pm$0.54 & 0.933 &3.1 \\
10)IRAS 14325-6428 & 0.192$\pm$0.037 & 4.795$\pm$1.033 & 4.883 $^{+0.622}_{-0.928}$ & 2.77$\pm$0.30 & 3.806 &-76.54$\pm$10.08 & 2.181 &3.2 \\
11)IRAS 15210-6554 & -0.152$\pm$0.143& $>$6.623 & & $>$3.29 & 3.634 &-83.90$\pm$0.87 & 2.279 & 3.4 \\
12)HD 137569 & 0.752$\pm$0.079 & 1.301$\pm$0.150 & 1.316 $^{+0.132}_{-0.197}$ & 3.19$\pm$0.11 & 4.021 & -45.0 & 2.023 &3.2 \\
13)BD+33 2642 & 0.271$\pm$0.032 & 3.474$\pm$0.434 & 3.467 $^{+0.308}_{-0.466}$ & 3.54$\pm$0.12 & 4.301 &-94.7$\pm$2.5 & 1.295 &3.1 \\
14)BD+32 2754 & 3.239$\pm$0.014 & 0.307$\pm$0.001 & 0.307 $^{+0.001}_{-0.001}$ & 1.10$\pm$0.06 & 3.760 & -60.50$\pm$0.50 & 1.161 &3.3 \\
15)HD 161796 & 0.502$\pm$0.024 & 1.926$\pm$0.091 & 1.921 $^{+0.091}_{-0.095}$& 3.85$\pm$0.07 & 3.813 & -54.17$\pm$1.78 & 1.216 &3.1 \\
16)BD-12 4970 & 0.467$\pm$0.020 & 2.065$\pm$0.090 & 1.984 $^{+0.072}_{-0.101}$& 5.50$\pm$0.25 & 4.431 & 124.95$\pm$9.43 & 0.956 &3.1 \\
17)IRAS 18075-0924 & -0.171$\pm$0.192 & $>$4.348 & & & & -59.68$\pm$0.68 & 7.580 & 3.4 \\
18)HD 178443 & 1.034$\pm$0.016 & 0.951$\pm$0.014 & 0.939 $^{+0.014}_{-0.015}$ & 1.99$\pm$0.06 & 3.714 & 343.55$\pm$0.28 & 1.102 &3.3 \\
19)PHL 1580 & 3.156$\pm$0.018 & 0.315$\pm$0.002 & 0.314 $^{+0.001}_{-0.002}$ & 1.12$\pm$0.06 & 4.380 & -70.53$\pm$0.72 & 0.954 &3.3 \\
20)LS III +52 5 & 3.119$\pm$0.011 & 0.319$\pm$0.001 & 0.315 $^{+0.001}_{-0.001}$ & 1.22$\pm$0.06 & 4.398 &-232.83$\pm$0.67 & 0.817 &3.3 \\
\hline\noalign{\vskip 3pt}
\end{tabular}
}
\begin{tabnote}
\footnotemark[$*$] From {\it Gaia} EDR3 (Lindegren et al. 2021, A\&A, in press; arXiv:2012.03380).\\
\footnotemark[$\dagger$] From {\it Gaia} DR2, except for IRAS 07140--2321, HD 137569, and BD+33~2642, for which the values are taken from RAVE DR6 (Steinmetz et al. 2020, AJ, 160, 82), Duflot et al. (1995, \aaps, 114, 269), and Gontcharov (2006, Astronomy Letters, 32, 759), respectively.\\
\footnotemark[$\ddagger$] The luminosity uncertainty includes the uncertainty in distance and reddening (\S~2). In case the relative parallax measurement uncertainty is larger than 20$\%$, we provide $2\sigma$ lower limits.
\end{tabnote}
\end{table*}
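The luminosities in table 2 follow from the parallax distance, the adopted reddening, and the bolometric correction in table 1. Below is a minimal sketch in Python; we assume a standard $R_V = 3.1$ extinction law and $M_{\rm bol,\odot} = 4.74$ (our assumptions, not stated explicitly in the text), which reproduces the tabulated value for HD 56126:

```python
import math

def log_luminosity(v_mag, distance_pc, ebv, bc, r_v=3.1, mbol_sun=4.74):
    """log10(L/Lsun) from apparent V magnitude, distance (pc),
    reddening E(B-V), and bolometric correction BC."""
    dist_mod = 5.0 * math.log10(distance_pc) - 5.0
    a_v = r_v * ebv                  # V-band extinction
    m_v = v_mag - dist_mod - a_v     # absolute V magnitude
    m_bol = m_v + bc
    return (mbol_sun - m_bol) / 2.5

# HD 56126: V = 8.32, d = 2124 pc, E(B-V) = 0.56, BC = -0.03 (tables 1, 2)
print(round(log_luminosity(8.32, 2124.0, 0.56, -0.03), 2))  # -> 3.93
```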
\begin{table}
\tbl{Kinematics information of twenty high velocity post-AGB candidates
\label{tab:kinematics}}{
\begin{tabular}{lrrrrrr}\hline
& $v_{T}$\footnotemark[$*$] & $v_{\phi}$ & $v_R$ & $v_z$ & $L_z$ & $E$ \\
& (km~s$^{-1}$) & (km~s$^{-1}$) & (km~s$^{-1}$) & (km~s$^{-1}$) & (kpc km~s$^{-1}$) & (km$^{2}$~s$^{-2}$) \\
\hline
1)IRAS 02143+5852 & $>$8.6 & & & & & \\
2)IRAS 05089+0459 & $>$18.5 & & & & & \\
3)IRAS 05208-2035 & 16.2 & 216.3 & 13.0 & -4.9 & 1983.8 & -153156 \\
4)HD 46703 & 66.5 & 162.6 & -51.8 & -17.7 & 1857.2 & -149824 \\
5)HD 56126 & 5.1 & 207.8 & 52.1 & 18.9 & 2104.9 & -148503 \\
6)IRAS 07140-2321 & 63.0 & 240.1 & -11.9 & -3.8 & 2839.3 & -134521 \\
7)IRAS 07227-1320 & 15.0 & 206.9 & 15.1 & 8.0 & 1992.9 & -152988 \\
8)IRAS 08187-1905 & 36.9 & 207.3 & -8.7 & -1.0 & 2119.8 & -149469 \\
9)HD 116745 & 185.4 & -84.6 & -56.6 & -74.8 & -541.3 & -184843 \\
10)IRAS 14325-6428 & 207.3 & 241.2 & 55.9 & 44.0 & 1457.7 & -167529 \\
11)IRAS 15210-6554 & $>$273.0 & & & & & \\
12)HD 137569 & 90.9 & 226.4 & -50.2 & -79.3 & 1688.5 & -156543 \\
13)BD+33 2642 & 238.3 & 13.5 & 142.2 & 85.1 & 98.0 & -169581 \\
14)BD+32 2754 & 66.9 & 159.4 & 20.9 & 12.8 & 1286.9 & -170975 \\
15)HD 161796 & 109.6 & 193.8 & -69.7 & -17.5 & 1551.1 & -161792 \\
16)BD-12 4970 & 27.0 & 272.0 &-111.3 & 3.6 & 1706.0 & -154487 \\
17)IRAS 18075-0924 & $>$137.4 & & & & & \\
18)HD 178443 & 233.1 & -21.9 &-300.4 & -131.0 & -160.5 & -135111 \\
19)PHL 1580 & 47.6 & 186.6 & 51.6 & 24.1 & 1495.6 & -165370 \\
20)LS III +52 5 & 79.7 & 7.0 & 33.3 & -42.0 & 57.6 & -181508 \\\hline
\end{tabular}
}
\begin{tabnote}
\footnotemark[$*$] Tangential velocity computed from the proper motion and parallax. In case the relative parallax measurement uncertainty is larger than 20\%, we provide 2$\sigma$ lower limits.
\end{tabnote}
\end{table}
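The tangential velocities in table 3 follow from the standard conversion $v_{T}\,[{\rm km\,s^{-1}}] = 4.74\,\mu\,[{\rm mas\,yr^{-1}}]/\varpi\,[{\rm mas}]$. A minimal sketch; the proper-motion value below is a made-up illustration, not a measurement from this paper:

```python
KAPPA = 4.740471  # km/s corresponding to 1 AU/yr; standard astrometric constant

def tangential_velocity(pm_total_mas_yr, parallax_mas):
    """Tangential velocity in km/s from total proper motion (mas/yr)
    and parallax (mas)."""
    return KAPPA * pm_total_mas_yr / parallax_mas

# Hypothetical star: 10 mas/yr at parallax 1 mas (1 kpc)
print(round(tangential_velocity(10.0, 1.0), 1))  # -> 47.4
```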
Title:
Constraining effective neutrino species with bispectrum of large scale structures
Abstract: Relativistic and free-streaming particles like neutrinos leave imprints in
large scale structures (LSS), providing probes of the effective number of
neutrino species $N_{\rm eff}$. In this paper, we use the Fisher formalism to
forecast $N_{\rm eff}$ constraints from the bispectrum (B) of LSS for current
and future galaxy redshift surveys, specifically using information from the
baryon acoustic oscillations (BAOs). Modeling the galaxy bispectrum at the
tree-level, we find that adding the bispectrum constraints to current CMB
constraints from Planck can improve upon the Planck-only constraints on $N_{\rm
eff}$ by about 10\% -- 40\% depending on the survey. Compared to the Planck +
power spectrum (P) constraints previously explored in the literature, using
Planck+P+B provides a further improvement of about 5\% -- 30\%. Besides using
BAO wiggles alone, we also explore using the total information which includes
both the wiggles and the broadband information (which is subject to systematics
challenges), generally yielding better results. Finally, we exploit the
interference feature of the BAOs in the bispectrum to select a subset of
triangles with the most information on $N_{\rm eff}$. This allows for the
reduction of computational cost while keeping most of the information, as well
as for circumventing some of the shortcomings of applying directly to the
bispectrum the current wiggle extraction algorithm valid for the power
spectrum. In sum, our study validates that the current Planck constraint on
$N_{\rm eff}$ can be significantly improved with the aid of galaxy surveys
before the next-generation CMB experiments like CMB-Stage 4.
https://export.arxiv.org/pdf/2208.10560
\preprint{APS/123-QED}
\title{Constraining effective neutrino species with bispectrum of large scale structures}%
\author{Yanlong Shi}
\email{yanlong@caltech.edu}
\affiliation{\caltech}
\author{Chen Heinrich}%
\affiliation{\caltech}
\author{Olivier Dor\'e}
\affiliation{\caltech}
\affiliation{\jpl}
\date{\today}%
\section{Introduction}
Large scale structure (LSS) surveys have proved useful in furthering our understanding of the Universe, constraining the initial conditions, energy content, and evolution of the Universe. Current and future spectroscopic surveys such as BOSS~\cite{DawsonSchlegel2013}, eBOSS~\cite{DawsonKneib2016},
DESI~\cite{DESICollaborationAghamousa2016a,DESICollaborationAghamousa2016b}, Euclid~\cite{LaureijsAmiaux2011}, PFS~\cite{TakadaEllis2014}, SPHEREx~\cite{DoreBock2014}, and Roman~\cite{SpergelGehrels2015} are designed to measure the distribution of galaxies in redshift space, which is especially well-suited for measuring properties of the baryon acoustic oscillations (BAOs)~\cite{EisensteinHu1998,SeoEisenstein2003,BlakeGlazebrook2003,ColePercival2005,EisensteinZehavi2005}.
The BAOs are imprints left behind by the propagation of sound waves in the photon-baryon plasma before recombination. They induce a strong correlation between galaxies separated by the sound horizon scale $r_{\rm s}$ ($\sim 100\, h^{-1}{\rm Mpc}$), while in Fourier space they show up as oscillatory features with a period in wavenumber of roughly $2\pi/r_{\rm s}$. This sound horizon (also called the BAO scale) is sensitive to various cosmological parameters, such as the baryon and dark matter densities, and can be used to constrain those parameters. Moreover, in the precision cosmology era, it is possible to extract information not only from the frequency of the baryon acoustic oscillations in Fourier space, but also from their amplitude envelope and phases. Studies have shown that the phases of the BAO in LSS survive nonlinear and local gravitational evolution~\cite{BaumannGreen2017}, which makes them a robust probe of physical phenomena that alter the BAO phases.
One notable example of physics that induces phase shifts in the BAO is the effective number of neutrino species $N_{\rm eff}$. This parameter quantifies the contribution to the radiation energy density from any dark radiation relic after the Big Bang; its fiducial value in the standard cosmology with three neutrino species is predicted to be 3.046. For various beyond-Standard-Model physics scenarios such as axions~\cite{PecceiQuinn1977}, light sterile neutrinos~\cite{AbazajianAcero2012}, or dark photons~\cite{Holdom1986}, the predictions for $N_{\rm eff}$ would differ, so precision measurements of $N_{\rm eff}$ can provide evidence for either the Standard Model or new physics.
Observationally, a positive deviation $\Delta N_{\rm eff}$ from the fiducial value (from either neutrinos or other light particles) would result in a stronger damping envelope for the BAO wiggles in Fourier space due to diffusion damping~\cite{HouKeisler2013}, which arises from photon diffusion erasing anisotropies on scales smaller than the photon mean free path. Because neutrinos free-stream, propagating ahead of the sound horizon, they also gravitationally perturb the acoustic oscillations in the plasma at early times, resulting in a predictable phase shift~\cite{BashinskySeljak2004}. Past studies have used these effects to constrain $N_{\rm eff}$ using either the cosmic microwave background (CMB)~\cite{HouKeisler2013,FollinKnox2015} or galaxy power spectrum measurements~\cite{BaumannGreen2018,BaumannBeutler2019}.
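These two effects can be visualized with a toy wiggle template (our own illustration, not the template used in this paper): damped wiggles $\propto e^{-(k/k_{\rm D})^{2}}\sin(k r_{\rm s}+\phi)$, where a larger $N_{\rm eff}$ strengthens the damping (smaller $k_{\rm D}$) and shifts the phase $\phi$. The numbers below are illustrative, except $r_{\rm s}=147.49$ Mpc, the fiducial value quoted later in this section:

```python
import math

def toy_bao_wiggles(k, r_s=147.49, k_damp=0.13, phase=0.0, amp=1.0):
    """Toy BAO wiggle component of P(k): a damped sinusoid.
    r_s in Mpc, k and k_damp in 1/Mpc; k_damp and phase are
    illustrative stand-ins for the diffusion-damping and
    neutrino-induced phase-shift effects discussed in the text."""
    return amp * math.exp(-(k / k_damp) ** 2) * math.sin(k * r_s + phase)

# The wiggle period in wavenumber is 2*pi/r_s:
print(round(2 * math.pi / 147.49, 4))  # -> 0.0426 (1/Mpc)
```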
Besides the power spectrum, the bispectrum, which is the three-point correlation function of the density field in Fourier space, often contains additional information on the cosmological parameters (e.g.,~\cite{YankelevichPorciani2019,IvanovPhilcox2022}).
As the bispectrum describes how densities at three different scales are
correlated, it is known to be a probe of properties of the primordial density field, such as primordial non-Gaussianity, as well as of the late-time nonlinear growth of structures~\cite{2008PhRvD..77l3514D, Scoccimarro2000,SefusattiCrocce2006}.
The BAO scale information is also contained in the bispectrum measured from spectroscopic surveys and has been detected in the BOSS data using the total bispectrum, which includes both the broadband and wiggle components~\cite{PearsonSamushia2018} (as well as in the real-space three-point correlations~\cite{GaztanagaCabre2009,SlepianEisenstein2017a,SlepianEisenstein2017b}).
Later, in Ref.~\cite{ChildTakada2018}, the authors showed that one can also extract the BAO wiggles from the bispectrum, instead of using the entire broadband-plus-wiggles information, using the technique of ``bispectrum interference''. Bispectrum interference refers to the interplay of BAO wiggles between the different wavevectors in the bispectrum signal, leading to constructive or destructive interference that is made manifest in a new parametrization of the triangle configurations.
In these new coordinates, it becomes clear that the BAO information is concentrated in a subset of triangle configurations, which can be used to reduce computational cost.
More importantly, the interference is sensitive to amplitude and phase shift effects, which makes it ideal for constraining $N_{\rm eff}$.
In this paper, we use the bispectrum interference technique and apply it for the first time to study the constraints on $N_{\rm eff}$ from the BAO wiggles in the bispectrum. We also investigate, for comparison, the constraints from using the total bispectrum (broadband + wiggles). In both cases, the bispectrum yields better constraints than the power spectrum. Although, as in the case of the power spectrum study in Ref.~\cite{BaumannGreen2018}, the current LSS bispectrum constraints by themselves are not as competitive as the current CMB constraints, we find that, when combined, the Planck + LSS results improve significantly upon the Planck-alone constraints. This can be useful for achieving better $N_{\rm eff}$ constraints before CMB-Stage 4 (CMB-S4), improving upon which would require a more futuristic LSS survey (or possibly modeling to higher $k_{\rm max}$ than our fiducial $k_{\rm max} = 0.2\ h\,\mathrm{Mpc}^{-1}$ with the upcoming LSS surveys).
Finally we show that the bispectrum interference is helpful in reducing computational costs by effectively reducing the triangle configurations used.
The paper is structured as follows. In Sec.~\ref{sec:background}, we introduce the background on neutrino physics and its effects on the matter power spectrum and bispectrum; we also review the technique of bispectrum interference. In Sec.~\ref{sec:modeling}, we describe the modeling of our observables, namely the galaxy power spectrum and the galaxy bispectrum. In Sec.~\ref{sec:fisher}, we present the Fisher matrix formalism used to obtain the forecast constraints. In Sec.~\ref{sec:results}, we present the results, comparing the bispectrum to the power spectrum constraints and showing how the bispectrum interference can be used to decrease the computational cost. Finally, in Sec.~\ref{sec:conclusions}, we summarize and discuss our conclusions.
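The Fisher forecast of Sec.~\ref{sec:fisher} has the generic form $F_{ij} = \partial_i \mathbf{O}^{T} \, \mathbf{C}^{-1} \, \partial_j \mathbf{O}$, with marginalized errors $\sigma_i = \sqrt{(F^{-1})_{ii}}$. A generic numpy sketch of this formalism (not the paper's actual pipeline; the toy derivatives and covariance below are invented for illustration):

```python
import numpy as np

def fisher_matrix(derivs, cov):
    """derivs: (n_params, n_obs) array of dO/dtheta_i;
    cov: (n_obs, n_obs) observable covariance.
    Returns F_ij = dO_i . C^-1 . dO_j."""
    cinv = np.linalg.inv(cov)
    return derivs @ cinv @ derivs.T

def marginalized_errors(fisher):
    """1-sigma marginalized errors sqrt((F^-1)_ii)."""
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# Toy example: 2 parameters, 3 observables (all numbers invented)
derivs = np.array([[1.0, 0.5, 0.2],
                   [0.0, 1.0, 0.4]])
cov = np.diag([0.1, 0.1, 0.2])
print(marginalized_errors(fisher_matrix(derivs, cov)))
```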
Throughout the paper, we use the fiducial $\Lambda$CDM cosmology based on Planck 2018 results with the data \emph{TT,TE,EE+lowE+lensing}~\cite{PlanckCollaborationAghanim2020} with the initial spectrum amplitude and tilt $A_{\rm s}=2.207\times 10^{-9}$ and $n_{\rm s}=0.9645$, the baryon and cold dark matter densities $\omega_{\rm b}\equiv \Omega_{\rm b} h^2=0.0223$ and $\omega_{\rm c}\equiv \Omega_{\rm c} h^2=0.1188$, the sound horizon angular extent at recombination $\theta_\star \equiv r_{\rm s}(z_\star)/D(z_\star) = 1.0411\times 10^{-2}$, and the reionization optical depth $\tau=0.0544$. The resulting fiducial value of the sound horizon at recombination is $r_{\rm s} = 147.49\,{\rm Mpc}$.
Finally, the fiducial value of $N_{\rm eff}$ used is 3.046. We use a helium fraction $Y_{\rm p} = 0.239$ consistent with BBN results.
\section{Background}
\label{sec:background}
In this section, we first briefly review the neutrino-induced effects in the matter power spectrum,
before introducing the matter bispectrum and its corresponding response to $N_{\rm eff}$. Then we review the technique of bispectrum interference developed in Ref.~\cite{ChildTakada2018} and apply it to the specific case of $N_{\rm eff}$.
\subsection{Effects of $N_{\rm eff}$ on the matter power spectrum}
We now briefly review the neutrino-induced phase shifts in the BAOs and describe how we model these effects in the matter power spectrum. For more details, we refer the reader to Ref.~\cite{BaumannGreen2018}.
At very early times ({$\sim 1~{\rm s}$ after the Big Bang}), when the temperature of the Universe was high ($\gtrsim 3~{\rm MeV}$), neutrinos were kept in equilibrium with the rest of the plasma; they decoupled from the plasma when their interaction rate dropped below the expansion rate of the Universe. Around the same time, electrons and positrons annihilated, and their entropy was mostly transferred to the photons. While this event increased the photon temperature, it did not affect the neutrino temperature as much. Assuming that neutrinos decoupled instantaneously, the neutrino temperature would be lower than that of the photons by a factor of $T_{\nu}/T_{\gamma} = (4/11)^{1/3}$. {The effective number of neutrino species $N_{\rm eff}$ is then defined from}
\beq
\epsilon_\nu = \frac{\rho_\nu}{\rho_{\rm \gamma} + \rho_{\nu}} = \frac{N_{\rm eff}}{\alpha_\nu + N_{\rm eff}},
\label{equ:epsilon_nu}
\eeq
which is the neutrino energy density relative to the total radiation, and
\beq
\alpha_{\nu} = %
\frac{8}{7} \left(\frac{11}{4}\right)^{4/3}.
\eeq
In reality, neutrino decoupling was not instantaneous; taking this into account along with various QED corrections, we have $N_{\rm eff} = 3.046$ (corresponding to $\epsilon_{\nu} = 0.409$) in the Standard Model~\cite{Steigman2001, ManganoMiele2005}. Because any additional light particles that were relativistic at early times would simply add to the effective number of neutrinos measured, detecting deviations from $N_{\rm eff}=3.046$ could hint at new physics beyond the Standard Model.
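As a quick numerical check, Eq.~\eqref{equ:epsilon_nu} can be evaluated directly; the short Python sketch below (ours, purely illustrative and not part of any analysis pipeline) confirms that the formula gives $\epsilon_\nu \approx 0.41$ for $N_{\rm eff} = 3.046$.

```python
# Illustrative sketch (not from the paper's pipeline): evaluate
# Eq. (epsilon_nu), the neutrino fraction of the radiation density.
alpha_nu = (8.0 / 7.0) * (11.0 / 4.0) ** (4.0 / 3.0)  # ~4.40

def epsilon_nu(N_eff):
    """Neutrino energy fraction rho_nu / (rho_gamma + rho_nu)."""
    return N_eff / (alpha_nu + N_eff)

# Standard Model: N_eff = 3.046 gives epsilon_nu ~ 0.41
print(round(epsilon_nu(3.046), 3))
```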
Because free-streaming particles like neutrinos alter the BAO signatures observed in the CMB and in galaxy surveys, BAOs can be used to probe $N_{\rm eff}$. %
These oscillations originate from before recombination, when the photons and baryons were tightly coupled in a photon-baryon plasma (through Thomson scattering between photons and free electrons and Coulomb interactions between electrons and protons). Acoustic perturbations propagated inside this plasma at the sound speed $c_{\rm s}\sim c/\sqrt{3}$. When the Universe cooled enough to form stable neutral hydrogen from protons and electrons (at around $T\sim 0.3\, \mathrm{eV}$, $z\sim 1100$), the photons and baryons decoupled, and the acoustic oscillations froze. This pattern of overdensities frozen in space gave rise to the anisotropies observed in the CMB; it also seeded dark matter perturbations by gravitationally attracting dark matter, which later caused a preferential formation of galaxies around the sound horizon scale that is observable in galaxy surveys~\cite{EisensteinZehavi2005,ColePercival2005}.
Since neutrinos had already decoupled from the photon-baryon plasma, they free-streamed at nearly the speed of light, faster than the sound speed of the plasma at the time of recombination. As a result, their perturbations traveled ahead of the sound waves, altering the gravitational potential perturbations that drive the acoustic oscillations~\cite{Baumann2018}. This change left observational signatures in both the amplitude and the phase of the acoustic oscillations. The most remarkable effect is a nearly constant phase shift on small scales, proportional to the neutrino energy fraction $\epsilon_{\nu}$~\cite{BashinskySeljak2004,FollinKnox2015,BaumannGreen2018}. %
More specifically, let the comoving matter density contrast be defined as
$\delta(\vec{x}) = (\rho(\vec{x}) - \bar{\rho})/\bar{\rho}$
where $\bar{\rho}$ is the mean matter density in the Universe.
The matter power spectrum $P_{\rm m}(k)$ is defined as the correlation of the density contrast $\delta(\Vec{k})$ in Fourier space:
\begin{align}
\langle\delta (\Vec{k}) \delta(\Vec{k}') \rangle =(2 \pi)^{3} \delta_{\mathrm{D}}(\Vec{k}+\Vec{k}') P_{\rm m}\left(k\right),
\end{align}
where the Dirac delta $\delta_{\mathrm{D}}$ arises due to statistical homogeneity and isotropy.
We can decompose the linear matter power spectrum into a smooth (non-wiggle) part $P_{\rm m }^{\rm nw}(k)$ and a wiggle part $P_{\rm m}^{\rm w}(k)$ which contains the BAO, and further define $O(k)$ as the ratio $P_{\rm m}^{\rm w}(k)/P_{\rm m}^{\rm nw}(k)$ such that
\begin{align}
P_{\rm m}(k)=P_{\rm m }^{\rm nw}(k)\left[1+O\left(k\right)\right].
\label{eq:linear_power_spectrum}
\end{align}
To understand the effects of $N_{\rm eff}$ on the matter power spectrum, let us approximate the oscillatory part as $O(k)=A(k) \sin (r_{\rm s} k + \phi(k))$~\cite{BaumannGreen2018,BaumannBeutler2019}, where $A(k)$ is the scale-dependent amplitude, $r_{\rm s}$ is the sound horizon, and $\phi(k)$ is the phase shift term. %
The most visible impact of $N_{\rm eff}$ is on the damping envelope of the oscillations $A(k)$, as a result of diffusion damping during recombination. The finite mean free path for Thomson scattering between electrons and photons allows the photons to diffuse and erase anisotropies below the diffusion scale. More specifically, the damping can be described as an exponential factor $\exp(-k^2/k_{\rm d}^2)$ applied to the undamped wiggles in the power spectrum, where the damping scale $k_{\rm d}$ is related to the number density $n_{\rm e}$ of the free electrons responsible for scattering the photons. The strength of the damping can be characterized by the ratio $r_{\rm d}/r_{\rm s} \propto \sqrt{H/n_{\rm e}}$, where $r_{\rm d}\equiv 2\pi/k_{\rm d}$.
When $a_{\rm eq}$ is fixed, we have $r_{\rm d}/r_{\rm s} \propto \sqrt{1/[n_{\rm e}(1-\epsilon_\nu)]}$~\cite{HouKeisler2013}, which means that the diffusion damping is stronger when $N_{\rm eff}$ increases~\cite{BaumannGreen2018}. Moreover, since $n_{\rm e} \propto 1-Y_{\rm p}$, there is a degeneracy between $N_{\rm eff}$ and $Y_{\rm p}$ when constrained from the diffusion damping alone; we therefore expect the $N_{\rm eff}$ constraints from BAO wiggles to degrade when $Y_{\rm p}$ is marginalized over~\cite{HouKeisler2013, BaumannGreen2018}. %
Besides the damping envelope, another important effect of $N_{\rm eff}$ is on the scale-dependent phase shift $\phi(k)$.
{As shown in Ref.~\cite{BaumannGreen2017}, the phase shift of the BAO in the power spectrum remains a robust probe of additional species of light particles even in the presence of nonlinear evolution of the matter density field.} If the phase shift is due only to $N_{\rm eff}$, it can be used to relieve part of the degeneracy between $N_{\rm eff}$ and $Y_{\rm p}$ mentioned above. %
The authors of Ref.~\cite{BaumannGreen2018} found that the oscillations can be well described as
\begin{align}
O^{\rm temp}(k) = O^{\rm fid}\left(\frac{k}{\alpha}+(\beta-1)\frac{f(k)}{r_{\rm s}^{\rm fid}}\right), \label{equ:phase_shift}
\end{align}
where $O^{\rm fid}(k)$ is the oscillatory piece of the power spectrum in the fiducial cosmology. Here $\alpha = r_{\rm s}^{\rm fid}/r_{\rm s}$ accounts for the ``stretching'' or ``compressing'' of the BAO oscillations in Fourier space, as the sound horizon may differ in the given cosmology from that of the fiducial cosmology. The additional phase shift due to the deviation from the fiducial $N_{\rm eff}^{\rm fid}$ is proportional to $\beta-1$, where $\beta \equiv \epsilon_\nu/\epsilon_\nu^{\rm fid}$ is normalized such that $\beta=1$ for $N_{\rm eff}=N_{\rm eff}^{\rm fid}$. The function $f(k)$ describes the shape of the scale-dependent phase shift and can be approximated using the template derived from simulations in Ref.~\cite{BaumannGreen2018}:
\beq
f(k) = \frac{\phi_\infty}{1+(k_\star/k)^\xi},
\eeq
with $\phi_\infty=0.227$, $k_\star=0.0324\,h\,{\rm Mpc}^{-1}$, and $\xi=0.872$. Later, in Ref.~\cite{BaumannBeutler2019}, the amplitude of the phase shift $\beta$ was successfully measured in the BOSS DR12 data {(e.g., $\beta=2.22\pm 0.75$ when marginalizing over the $\Lambda$CDM+$N_{\rm eff}$ parameters, using a prior on $\alpha$ from Planck)}.
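For concreteness, the template $f(k)$ with the fit parameters above can be evaluated as follows (an illustrative Python sketch, ours, not part of the forecast code); the shift turns on around $k_\star$, where $f = \phi_\infty/2$, and saturates at $\phi_\infty$ on small scales.

```python
# Sketch of the phase-shift template f(k), using the fit parameters
# quoted in the text (Baumann & Green 2018).
phi_inf, k_star, xi = 0.227, 0.0324, 0.872  # k_star in h/Mpc

def f_template(k):
    """Scale-dependent BAO phase shift; saturates at phi_inf for k >> k_star."""
    return phi_inf / (1.0 + (k_star / k) ** xi)

# Half the asymptotic shift at k = k_star; nearly saturated at k = 1 h/Mpc
print(f_template(k_star), f_template(1.0))
```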
\subsection{Effects of $N_{\rm eff}$ on the matter bispectrum}
The bispectrum is the three-point function of the density contrast
in Fourier space
\begin{align}
\langle\delta (\Vec{k}_{1}) \delta(\Vec{k}_{2}) \delta(\Vec{k}_{3}) \rangle& = (2 \pi)^{3} \delta_{\mathrm{D}}(\Vec{k}_{1}+\Vec{k}_{2}+\Vec{k}_{3}) B_{\rm m}\left(k_{1}, k_{2}, k_{3}\right),
\end{align}
where $\Vec{k}_1, \Vec{k}_2$ and $\Vec{k}_3$ form a closed triangle. In the standard perturbation theory (SPT)~\cite{BernardeauColombi2002}, the tree-level contribution to the matter bispectrum is
\begin{align}
B_{\mathrm{m}}(\vec k_1, \vec k_2, \vec k_3) = 2 F_{2}(\Vec{k}_1, \Vec{k}_2) P_{\rm m}(k_1) P_{\rm m}(k_2) + 2\; \mathrm{cyc.,}
\label{equ:matter-bis-tree}
\end{align}
where
\begin{align}
F_{2}(\Vec{k}_1, \Vec{k}_2) = \frac{5}{7} + \frac{\hat{k}_1 \cdot \hat{k}_2}{2} \left(\frac{k_1}{k_2} + \frac{k_2}{k_1} \right) + \frac{2}{7}(\hat{k}_1\cdot \hat{k}_2)^2
\end{align}
is the second-order density kernel in SPT, and $P_{\rm m}$ in Eq.~\eqref{equ:matter-bis-tree} is the linear matter power spectrum. The tree-level expression is only valid in the linear regime, $k\lesssim 0.2\,h\, {\rm Mpc}^{-1}$ {at $z=0$ (see tests in Ref.~\cite{BaldaufMercolli2015}). At higher redshifts, the linear regime extends to higher $k$ since there is less nonlinearity, but we choose $k_{\rm max}=0.2\,h\,{\rm Mpc}^{-1}$ in our conservative forecast.}
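The tree-level expression of Eq.~\eqref{equ:matter-bis-tree} is straightforward to implement; the sketch below (ours, for a user-supplied power spectrum $P(k)$) also illustrates the well-known result $F_2 = 2/7$ for equilateral configurations.

```python
import numpy as np

# Illustrative sketch: SPT kernel F2 and the tree-level matter bispectrum.
def F2(k1, k2):
    """Second-order SPT density kernel; k1 and k2 are 3-vectors."""
    a, b = np.linalg.norm(k1), np.linalg.norm(k2)
    mu = np.dot(k1, k2) / (a * b)
    return 5.0 / 7.0 + 0.5 * mu * (a / b + b / a) + (2.0 / 7.0) * mu ** 2

def B_tree(k1, k2, k3, P):
    """Tree-level bispectrum; k1 + k2 + k3 = 0 must hold."""
    n = np.linalg.norm
    return (2 * F2(k1, k2) * P(n(k1)) * P(n(k2))
            + 2 * F2(k2, k3) * P(n(k2)) * P(n(k3))
            + 2 * F2(k3, k1) * P(n(k3)) * P(n(k1)))

# Equilateral triangle: adjacent sides meet at 120 degrees, F2 = 2/7.
k1 = np.array([1.0, 0.0, 0.0])
k2 = np.array([-0.5, np.sqrt(3.0) / 2.0, 0.0])
k3 = -(k1 + k2)
print(F2(k1, k2))  # 2/7
```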
In the top panel of Fig.~\ref{fig:bispectrum}, we show the tree-level matter bispectrum at $z=0$, calculated using the linear matter power spectra from \texttt{CAMB}~\cite{LewisChallinor2000}. We order the triangle configurations first by increasing $k_1$, then by increasing $k_2$, and then by increasing $k_3$. To avoid double counting, we only include triangle configurations that satisfy $k_1\le k_2 \le k_3$. The grey lines mark where $k_1$ steps up and the orange dots mark where $k_2$ steps up. Between orange dots, $k_3$ increases from $k_3=k_2$ up to $k_{\rm max}$ before dropping back down at the next orange dot. The green dots show increasing $k_3$ for fixed $k_1 = k_2$.
In the lower panels, we examine the changes in the different parts of the matter bispectrum corresponding to a step $\Delta N_{\rm eff}=1$ from its fiducial value. In this process, we keep $a_{\rm eq}$ fixed to break the degeneracy between $N_{\rm eff}$ and $\omega_{\rm c}$. %
The second row shows the change in the total matter bispectrum, which includes both the broadband and the BAO wiggles. %
The third row shows the changes in $B_{\rm m}^{\rm w}$, the wiggle part of the bispectrum $B_{\rm m}^{\rm w} = B_{\rm m} - B_{\rm m}^{\rm nw}$ relative to the total bispectrum, where the non-wiggle bispectrum $B_{\rm m}^{\rm nw}$ is defined as in Eq.~\eqref{equ:matter-bis-tree} but using the smooth part of the matter power spectrum $P_{\rm m}^{\rm nw}$, so that
\begin{align}
B^{\rm w}_{\rm m} & = 2 F_{2}(\Vec{k}_1, \Vec{k}_2) P_{\rm m}^{\rm nw}(k_1;N_{\rm eff}^{\rm fid}) P_{\rm m}^{\rm nw}(k_2;N_{\rm eff}^{\rm fid})
\nonumber\\
& \times [O(k_1;N_{\rm eff}) + O(k_2;N_{\rm eff}) + O(k_1;N_{\rm eff})O(k_2;N_{\rm eff})] \nonumber\\
& + 2\; \mathrm{cyc.\, perm.}\label{equ:matter-bis-wiggles}
\end{align}
Finally, the last row of Fig.~\ref{fig:bispectrum} shows the change in the phase shift part of the matter bispectrum $B^{\phi}_{\rm m}$ relative to the total matter bispectrum, where
\begin{align}
B^{\phi}_{\rm m} & = 2 F_{2}(\Vec{k}_1, \Vec{k}_2) P_{\rm m}^{\rm nw}(k_1;\beta^{\rm fid}) P_{\rm m}^{\rm nw}(k_2;\beta^{\rm fid})
\nonumber\\
& \times [O^{\rm temp}(k_1;\beta) + O^{\rm temp}(k_2;\beta) \nonumber\\
& + O^{\rm temp}(k_1;\beta)O^{\rm temp}(k_2;\beta)] \nonumber\\
& + 2\; \mathrm{cyc.\, perm.}\label{equ:matter-bis-phase}
\end{align}
Here $P^{\rm nw}_{\rm m}(k;\beta^{\rm fid})$ is obtained with the fiducial $N_{\rm eff}$, while $O^{\rm temp}(k;\beta)$ is the template defined in Eq.~\eqref{equ:phase_shift} with $O^{\rm fid}$ fixed, so that varying $\beta(N_{\rm eff})$ in $B^{\phi}_{\rm m}$ represents only the phase-shift effects induced by $N_{\rm eff}$ while ignoring other effects in the BAO wiggles as well as the broadband effects. %
Comparing the three signals, we find that the total fractional change $\Delta B_{\rm m}/B_{\rm m}$ is always positive {since $\Omega_{\rm m}$ increases when we increase $N_{\rm eff}$ while keeping $a_{\rm eq}$ fixed, which means the amplitude of matter density fluctuations entering the horizon during the matter-dominated era is larger}. The fractional changes in the BAO wiggles $\Delta B_{\rm m}^{\rm w}/B_{\rm m}$ and in the BAO phase shifts $\Delta B_{\rm m}^{\phi}/B_{\rm m}$ are oscillatory. The amplitudes of the deviations are also indicators of the information contained: {the change in the total bispectrum $\Delta B_{\rm m}/B_{\rm m}$} contains all the information one can extract from the matter bispectrum, so it has the largest amplitude, {up to a few times that of the fractional changes from the wiggles alone illustrated in the third row.}
The total signal generally increases with the triangle configuration index, which corresponds to going to larger $k$, {since it is dominated by the effects of $N_{\rm eff}$ on the broadband matter power spectrum,} while the amplitude of the wiggle part in the third row stays mostly stable over the range of triangle configurations we consider, although it should damp out at high enough $k$ (not shown here) as the BAO wiggles become suppressed. Finally, the phase-induced BAO deviation is an order of magnitude smaller in overall amplitude than the other two cases, so it is expected to give much less stringent constraints on $N_{\rm eff}$. We study the $N_{\rm eff}$ constraints from the phase shift alone in the appendix, only for the purpose of literature comparison.
\subsection{Bispectrum interference}
\label{sec:interference}
In this study we will extract information from the BAO wiggles in the bispectrum in order to constrain $N_{\rm eff}$. As we have seen in the previous subsection, the $N_{\rm eff}$ signal in the BAO part of the bispectrum oscillates around zero across triangle configurations. Using the technique of bispectrum interference, first introduced in Ref.~\cite{ChildTakada2018}, we will identify later in Section~\ref{sec:notes_on_interference} the subset of triangles that contribute the most to the $N_{\rm eff}$ constraint and show how this can increase computational efficiency. We now briefly review the concept of bispectrum interference and show its effects for $N_{\rm eff}$.
In Ref.~\cite{ChildTakada2018}, the authors proposed a new set of coordinates $(k_1, \delta, \theta)$ (hereafter ``Child18 coordinates'') where
\begin{align}
k_2 = k_1+ \pi \delta/r_{\rm s} \qquad \mathrm{and}\qquad \cos\theta = \hat{k}_1 \cdot \hat{k}_2. \label{equ:child18}
\end{align}
The parameter $\delta$ parametrizes the phase difference between $k_1$ and $k_2$ in terms of the number of half periods, given an oscillation frequency of $\pi/r_{\rm s}$.
The angle $\theta$ is defined as the angle between $\vec k_1$ and $\vec k_2$ %
and is confined to $0 \leq \theta < \pi$. See Fig.~\ref{fig:coordinate} for an example of configurations with $\theta > \pi/2$ and $\theta < \pi/2$.
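The coordinate change of Eq.~\eqref{equ:child18} is simple to invert to triangle side lengths; the following sketch (ours, illustrative only, with $r_{\rm s}$ passed as an argument) makes the geometry explicit.

```python
import numpy as np

# Illustrative sketch: map the Child18 coordinates (k1, delta, theta)
# to the triangle side lengths (k1, k2, k3).
def child18_to_sides(k1, delta, theta, r_s):
    """theta is the angle between vec k1 and vec k2; k3 closes the triangle."""
    k2 = k1 + np.pi * delta / r_s
    # |k3| = |vec k1 + vec k2| since vec k3 = -(vec k1 + vec k2)
    k3 = np.sqrt(k1 ** 2 + k2 ** 2 + 2.0 * k1 * k2 * np.cos(theta))
    return k1, k2, k3

# delta = 0 gives k2 = k1; increasing delta steps k2 by half BAO periods
print(child18_to_sides(0.1, 0.0, np.pi / 4.0, 100.0))
```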
{When $k_3$ is significantly larger than $k_1$ and $k_2$, the first of the three permutations}, $2F_2(\vec{k}_1, \vec{k}_2)P_{\rm m}(k_1)P_{\rm m}(k_2)$, dominates over the other cyclic permutations due to the weighting by $F_2(\vec{k}_1, \vec{k}_2)$ (see Fig.~2 of Ref.~\cite{ChildTakada2018}).
{In this case, omitting the second and third permutations, we can approximate the ratio $O^{\rm bis}$ as }
\begin{align}
O^{\rm bis}(k_1, \delta, \theta) \equiv \frac{B^{\rm w}(k_1, \delta, \theta)}{B^{\rm nw}(k_1, \delta, \theta) } \approx O(k_1) + O(k_2) + \mathcal{O}(O^2),
\label{equ:bispectrum_wiggle}
\end{align}
where the second-order term in $O$ is negligible since the BAO wiggles are only a small fraction of the broadband matter power spectrum with $O \ll 1$. This prediction can be verified explicitly by plotting $B^{\rm w}/B^{\rm nw}$ in the Child18 coordinates $(k_1, \delta, \theta)$ \cite{ChildTakada2018}.
In Fig.~\ref{fig:beta-bis} we plot $B^{\rm w}/B^{\rm nw}$ as a function of $k_1$ for fixed $\delta$ and $\theta$ to show the effect of bispectrum interference for various values of $N_{\rm eff}$.
It is clear that the wiggle part of the bispectrum for the ``constructive'' triangle configuration ($\delta = 0$) looks significantly different from that of the ``destructive'' configuration ($\delta=1$).
For our choice of $\theta = \pi/4$, we have $k_1 \leq k_2 < k_3$, so the permutations containing $P_{\rm m}(k_3)$ are further suppressed compared to the $(k_1, k_2)$ permutation, and
$B_{\rm m}^{\rm w}/B_{\rm m}^{\rm nw} \approx O(k_1)+O(k_2)$. %
Indeed, we have verified that for the constructive interference in the top panel for which $k_1 = k_2$, the amplitude is approximately twice that of $O(k_1)$.
In the destructive interference case with $\delta = 1$ in the bottom panel, the cancellation is not perfect, since the amplitude envelope decays and the oscillation period is not exactly constant, but the amplitude is an order of magnitude lower than that of the constructive interference with the same $\theta$. We also show that shifting $N_{\rm eff}$ by $\pm 1$ around the fiducial value introduces phase shifts small enough that the definitions of constructive and destructive interference based on the fiducial $r_{\rm s}$ remain largely unaffected.
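The interference mechanism can be illustrated with a toy pure-sine BAO model, $O(k) = A\sin(r_{\rm s}k)$ with a constant amplitude (illustrative values of $A$ and $r_{\rm s}$, not our fiducial ones): the $\delta=0$ configuration doubles the wiggle amplitude, while $\delta=1$ cancels it exactly.

```python
import numpy as np

# Toy illustration of bispectrum interference with a pure sine-wave
# BAO model: delta = 0 doubles the wiggles, delta = 1 cancels them.
A, r_s = 0.05, 100.0  # arbitrary illustrative values

def O(k):
    return A * np.sin(r_s * k)

k1 = np.linspace(0.05, 0.2, 400)
constructive = O(k1) + O(k1 + np.pi * 0 / r_s)  # delta = 0
destructive = O(k1) + O(k1 + np.pi * 1 / r_s)   # delta = 1
print(np.abs(constructive).max(), np.abs(destructive).max())
```

In the realistic case the envelope $A(k)$ decays with $k$, which is why the destructive cancellation above is only approximate in Fig.~\ref{fig:beta-bis}.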
\section{Modeling}
\label{sec:modeling}
We have introduced the effects of $N_{\rm eff}$ on the matter power spectrum and bispectrum, and now we will describe our modeling of the observables, the galaxy power spectrum and bispectrum.
\subsection{Galaxy power spectrum}
We observe galaxies rather than the matter distribution in the Universe. We model the galaxies as biased tracers of the underlying matter distribution and account for redshift space distortions (RSD), since we can only measure galaxy redshifts rather than their true distances.
We follow Ref.~\cite{BaumannGreen2018} in modeling the observed power spectrum as:
\begin{align}
P_{\rm g}(\vec{k}) = \frac{Z_1^2(\mu')}{q^3} P_{\rm m }^{\rm nw}(k') \left(1 + O(k')\mathcal{D}(k', \mu') \right) + \frac{1}{n_{\rm g}},
\label{eq:galaxy_ps}
\end{align}
where we omitted the redshift bin dependence. Let us now walk through each effect considered.
\begin{enumerate}
\item \textit{Redshift space distortions and galaxy bias}
The linear redshift space distortion effects and the galaxy bias are grouped into one kernel
\beq
Z_1(\mu) = b_1 + f\mu^2,
\eeq
where $f(a) =\dd \ln D/\dd \ln a$ is the linear growth rate, and $\mu$ is the cosine of the angle between the line-of-sight vector and the wavevector $\vec{k}$.
We model the galaxy bias to linear order using the linear bias $b_1$ as a function of redshift which we assume to scale as $1/D(z)$, as is appropriate for the evolution of samples in which the galaxy number is conserved:
\begin{align}
b_1 (z) = \frac{D(0)}{D(z)} b_1(0).
\label{equ:b1}
\end{align}
\item \textit{Nonlinear damping of BAO wiggles and its reconstruction}
Baryon acoustic oscillations $O(k)$ are damped by nonlinear structure formation, and we model the damping as
\begin{eqnarray}
\mathcal{D}(k, \mu) = \exp\left[-\frac{1}{2}\left(k^2\mu^2\Sigma_{\parallel}^2 + k^2 (1-\mu^2) \Sigma_{\perp}^2 \right) \right]. \notag\\
\label{equ:bao-damping}
\end{eqnarray}
Here $\Sigma_\parallel$ and $\Sigma_\perp$ describe respectively the damping scales for directions parallel and perpendicular to the line-of-sight and they are redshift dependent:
\begin{align}
\Sigma_\perp(z) & = 9.4 r\left( \frac{\sigma_8 (z) }{0.9}\right) \, h^{-1}{\rm Mpc}, \notag \\
\Sigma_\parallel(z) & = \left[1+f(z)\right] \Sigma_\perp(z),
\label{equ:bao-damping-sigma}
\end{align}
where $\sigma_8(z)$ is the variance of the matter density field within 8 $h^{-1}$Mpc at redshift $z$. BAO reconstruction techniques~\cite{EisensteinSeo2007a,PadmanabhanWhite2009,WangYu2017,ShiCautun2018} are often used to revert some of the damping effects due to nonlinear evolution, rendering sharper BAO features.
Here we model the reconstruction with a fraction $r$:
$r = 1$ means no reconstruction, whereas $r = 0$ means full reconstruction. In practice, $r$ is modeled as a function of galaxy number density (following Eq.~3.13 of Ref.~\cite{BaumannGreen2018}) and satisfies $0.5\le r \le 1$.
\item \textit{Alcock-Paczynski effect}
A reference cosmology needs to be assumed when converting the observed galaxy redshifts to distances. If the true cosmology differs from the reference, the mapping from redshifts to distances changes, so the true wavenumber $k'$ is related to the wavenumber $k$ inferred in the reference cosmology by
\begin{align}
k'(k, \mu, z) & = k\, \sqrt{\frac{\mu^2}{q_{\parallel}^2(z)}+\frac{1-\mu^2}{q_{\perp}^2(z)}}\, ,
\label{equ:ap} \\
\mu' (\mu, z) & = \frac{\mu}{q_{\parallel}(z)}\left[\frac{\mu^2}{q_{\parallel}^2(z)}+\frac{1-\mu^2}{q_{\perp}^2(z)}\right]^{-1/2},
\end{align}
where
\bea
q_{\parallel}(z) &=& H^{\rm ref} (z)/H(z), \\
q_{\perp}(z) &=& D_A(z)/D_A^{\rm ref}(z).
\eea
Here, $D_A$ is the angular diameter distance and $H(z)$ the Hubble parameter.
All the functions appearing in Eq.~\eqref{eq:galaxy_ps} are evaluated at
$k'(k, \mu)$ and $\mu'(\mu)$. %
Furthermore, a volume factor multiplies the power spectrum because the comoving survey volumes inferred in the two cosmologies differ:
\begin{align}
q &= q_\parallel^{1/3} q_\perp^{2/3}.
\label{equ:ap_volume_factor}
\end{align}
We choose the reference cosmology to be the same as our fiducial cosmology. We use $h^{-1}\,{\rm Mpc}$ as our distance unit, so in practice all the AP factors are rescaled by $h/h^{\rm ref}$. %
\item \emph{Systematics}
To account for measurement systematics in the broadband, such as stellar contamination, we add extra terms that are polynomials in $k$ to the non-wiggle power spectrum and marginalize over their amplitudes~\cite{BaumannGreen2018}:
\begin{align}
P_{\rm m}^{\rm nw} (k) \to \tilde{B}(k)P_{\rm m}^{\rm nw} + \tilde{A}(k),
\label{equ:ps-poly-broadband}
\end{align}
where
\beq
\tilde{A}(k) = \sum_{n} \tilde{a}_{n}k^n, \quad \tilde{B}(k) = \sum_{m} \tilde{b}_{m}k^{2m}.
\label{equ:ps-poly-broadband-coeff}
\eeq
In the fiducial case we have $\tilde{b}_0=1$, $\tilde{b}_{m\neq 0}=0$ and $\tilde{a}_n=0$.
Note that $\tilde{b}_0$ is degenerate with the linear galaxy bias $b_1$, so we do not vary it in the Fisher forecast.
For the BAO-only forecast, we apply a similar relation to the oscillations to account for systematics, such as those arising from modeling uncertainties in the nonlinear damping of the wiggles (we assumed a particular model in Eqs.~\eqref{equ:bao-damping} and~\eqref{equ:bao-damping-sigma}) or from the wiggle extraction algorithm:
\begin{align}
O(k) \to B'(k)O(k) + A'(k),\label{equ:ps-poly-wiggle}
\end{align}
where $A'(k)$ and $B'(k)$ are defined similarly as in Eq.~\eqref{equ:ps-poly-broadband-coeff}. %
Note that the choice of polynomial terms to marginalize over can sometimes have a significant impact on the result. We study and discuss this in more detail in Sec.~\ref{sec:forecast_dependencies}.
\end{enumerate}
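The model of Eq.~\eqref{eq:galaxy_ps} assembles the four effects above; the sketch below (ours, with placeholder fiducial values for $b_1$, $f$, $\sigma_8(z)$, and $n_{\rm g}$, and without the systematics polynomials) shows how the pieces combine.

```python
import numpy as np

# Illustrative sketch of the observed galaxy power spectrum:
# linear RSD + bias, BAO damping, AP rescaling, and shot noise.
# All default parameter values are placeholders, not our forecast values.
def P_galaxy(k, mu, P_nw, O, b1=2.0, f=0.8, r=1.0, sigma8_z=0.6,
             q_par=1.0, q_perp=1.0, n_g=1e-3):
    # AP mapping (k, mu) -> (k', mu')
    root = np.sqrt(mu ** 2 / q_par ** 2 + (1.0 - mu ** 2) / q_perp ** 2)
    kp, mup = k * root, (mu / q_par) / root
    q = q_par ** (1.0 / 3.0) * q_perp ** (2.0 / 3.0)  # volume factor
    Z1 = b1 + f * mup ** 2                             # RSD + linear bias
    Sig_perp = 9.4 * r * (sigma8_z / 0.9)              # h^-1 Mpc
    Sig_par = (1.0 + f) * Sig_perp                     # BAO damping scales
    D = np.exp(-0.5 * kp ** 2 * (mup ** 2 * Sig_par ** 2
                                 + (1.0 - mup ** 2) * Sig_perp ** 2))
    return Z1 ** 2 / q ** 3 * P_nw(kp) * (1.0 + O(kp) * D) + 1.0 / n_g
```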
When calculating the covariance of the observed power spectrum, we include the shot noise term $1/n_{\rm g}$, which arises from the sampling of the underlying matter density field with galaxies, assuming Poisson statistics. We assume a constant galaxy density $n_{\rm g}^i$ for the $i$-th redshift bin with central redshift $z_i$;
for the case of cosmic variance we set $1/n_{\rm g} = 0$.
\subsection{Galaxy bispectrum}
To model the observed galaxy bispectrum in redshift space, we follow Ref.~\cite{YankelevichPorciani2019} to include RSD, galaxy biases, and the nonlinear damping of BAOs, and use a new set of polynomial terms to account for systematics. The tree-level galaxy bispectrum is modeled as
\begin{align}
B_{\rm g}(\vec k_1, \vec k_2, \vec k_3) & = 2 \frac{1}{q^6} Z_2(\vec k_1', \vec k_2') Z_1(\vec k_1') Z_1 (\vec k_2')\nonumber\\
& \times P_{\rm m }^{\rm nw}(k'_1) \left(1 + O(k'_1)\mathcal{D}(k'_1, \mu'_1) \right) \nonumber\\
& \times P_{\rm m }^{\rm nw}(k'_2) \left(1 + O(k'_2)\mathcal{D}(k'_2, \mu'_2) \right) \nonumber\\ & + \mathrm{2 \ cyc.\ perm.} \label{equ:galaxy-bispectrum}
\end{align}
The redshift kernel $Z_2$ encodes the RSD and the
second-order bias effects:
\begin{align}
Z_2 (\vec k_i, \vec k_j) = \frac{b_2}{2} + b_1 F_2 (\vec k_i,\vec k_j) + f \mu_{ij}^2 G_2(\vec k_i, \vec k_j) \nonumber \\
+ \frac{f \mu_{ij} k_{ij}}{2} \left[\frac{\mu_i}{k_i} Z_1(\vec k_j) + \frac{\mu_j}{k_j} Z_1(\vec k_i) \right] + \frac{b_{s^2}}{2} S_2(\vec k_i, \vec k_j),
\end{align}
where $\mu_{i} = \hat{k}_{i} \cdot \hat{n}$, $\vec k_{ij} = \vec k_i + \vec k_j$, and $\mu_{ij} = \hat{k}_{ij} \cdot \hat{n}$, and where $G_2$ is the second-order velocity-divergence kernel in SPT and $S_2$ is the tidal tensor:
\begin{align}
G_{2}(\Vec{k}_1, \Vec{k}_2) & = \frac{3}{7} + \frac{\hat{k}_1 \cdot \hat{k}_2}{2} \left(\frac{k_1}{k_2} + \frac{k_2}{k_1} \right) + \frac{4}{7}(\hat{k}_1\cdot \hat{k}_2)^2,\\
S_2(\vec{k}_1, \vec{k}_2) & = (\hat{k}_1\cdot \hat{k}_2)^2 - \frac{1}{3}.
\end{align}
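These kernels are easy to check numerically; the sketch below (ours, illustrative only) verifies the equilateral values $G_2 = 1/14$ and $S_2 = -1/12$, where adjacent sides of the triangle meet at $120^\circ$.

```python
import numpy as np

# Illustrative sketch of the second-order velocity kernel G2 and the
# tidal operator S2 entering the redshift-space kernel Z2.
def _cos(k1, k2):
    return np.dot(k1, k2) / (np.linalg.norm(k1) * np.linalg.norm(k2))

def G2(k1, k2):
    a, b, mu = np.linalg.norm(k1), np.linalg.norm(k2), _cos(k1, k2)
    return 3.0 / 7.0 + 0.5 * mu * (a / b + b / a) + (4.0 / 7.0) * mu ** 2

def S2(k1, k2):
    return _cos(k1, k2) ** 2 - 1.0 / 3.0

# Equilateral configuration: G2 = 1/14, S2 = -1/12
ka = np.array([1.0, 0.0, 0.0])
kb = np.array([-0.5, np.sqrt(3.0) / 2.0, 0.0])
print(G2(ka, kb), S2(ka, kb))
```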
The second-order bias $b_2$ in the fiducial model is calculated using the relation fit from simulations~\cite{LazeyrasWagner2016}
\begin{align}
b_2 (z) & = 0.412 - 2.143 \ b_1(z) + 0.929 \ b_1^2 (z)\nonumber\\& + 0.008 \ b_1^3(z),
\label{equ:b2}
\end{align}
and the fiducial tidal bias $b_{s^2}$ is modeled with~\cite{SaitoBaldauf2014}
\begin{align}
b_{s^2} (z) = \frac{4}{7}\left(1-b_1(z)\right).
\label{equ:bs2}
\end{align}
Both are evaluated at the center of the redshift bin and assumed to be constant within the bin.
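For reference, the fiducial bias relations of Eqs.~\eqref{equ:b2} and~\eqref{equ:bs2} can be evaluated as follows (an illustrative sketch; note that an unbiased tracer, $b_1 = 1$, has vanishing tidal bias).

```python
# Illustrative sketch: fiducial second-order and tidal biases
# from the linear bias b1.
def b2_of_b1(b1):
    return 0.412 - 2.143 * b1 + 0.929 * b1 ** 2 + 0.008 * b1 ** 3

def bs2_of_b1(b1):
    return (4.0 / 7.0) * (1.0 - b1)

print(b2_of_b1(1.0), bs2_of_b1(1.0))
```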
Note that, in addition to the linear galaxy bias in the power spectrum, we also model the second-order bias contributions to the bispectrum in order to account for all contributions at tree level. This is not consistent with the power spectrum in the sense that we are not truncating the galaxy density at the same order $\delta_{\rm m}^{(n)}$ in perturbation theory for both observables (which would require, e.g., including the one-loop terms in the power spectrum induced by second-order terms in $\delta_{\rm g}$). But in terms of modeling the lowest-order contributions as a good approximation for each observable in the linear regime, this is a reasonable choice.
Similarly to the power spectrum, the nonlinear damping of BAOs is accounted for by multiplying the wiggle part of the matter power spectrum $O(k)$ by the damping factor $\mathcal{D}$ of Eq.~\eqref{equ:bao-damping}. The AP effect is also included just as in the power spectrum: each wavevector $\vec{k}_i$ is mapped to $\vec{k}_i'$ following Eq.~\eqref{equ:ap}, and there is a volume factor $1/q^6$, different from that in the power spectrum, where $q$ was defined in Eq.~\eqref{equ:ap_volume_factor}.
To mimic the effects of marginalizing over systematics in the measurements of the broadband bispectrum, we introduce a new set of polynomials $\tilde{A}$ and $\tilde{B}$, different from those used for the power spectrum, such that the galaxy bispectrum becomes %
\begin{align}
B_{\rm g}({\vec k_1, \vec k_2, \vec k_3}) \rightarrow \tilde{B}(k_1, k_2, k_3) B_{\rm g}({\vec k_1, \vec k_2, \vec k_3}) + \tilde{A}(k_1, k_2, k_3).
\end{align}
They allow for different powers of $k_1$, $k_2$ and $k_3$ and are composed of terms proportional to $k_1^r k_2^s k_3^t$: %
\bea
\tilde{A}(k_1, k_2, k_3) &=& \sum_{n=0} \sum_{(r, s, t) \in S(n)} \tilde{a}_{n}^{rst} \left(k_1^r k_2^s k_3^t + {\rm 2\ cyc.\ perm.}\right), \notag\\
\tilde{B}(k_1, k_2, k_3) &=& \sum_{n=1} \sum_{(r, s, t) \in S(n)} \tilde{b}_{n}^{rst} \left(k_1^r k_2^s k_3^t + {\rm 2\ cyc.\ perm.}\right). \notag\\
\label{equ:bs-poly-broadband}
\eea
At each total power $n=r+s+t$, we sum over all $(r, s, t)$ combinations in the set $S(n) = \{(r, s, t)\, |\, r+s+t =n;\, r\ge s \ge t\}$. For example, $S(3) = \{(3,0,0), (2, 1, 0), (1, 1, 1)\}$. For each $(r,s,t)$ combination, all cyclic permutations of the term are included and assumed to be affected in the same way, i.e., assigned the same coefficient $\tilde{a}_n^{rst}$. This is purely for simplicity, as there could be systematic terms that affect the different permutations differently.
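The exponent set $S(n)$ is simple to enumerate; the sketch below (ours) reproduces the $S(3)$ example above.

```python
# Illustrative sketch: enumerate the exponent set
# S(n) = {(r, s, t) : r + s + t = n, r >= s >= t >= 0}
# used in the broadband-systematics polynomials.
def S(n):
    return [(r, s, n - r - s)
            for r in range(n, -1, -1)
            for s in range(min(r, n - r), -1, -1)
            if n - r - s <= s]

print(S(3))  # [(3, 0, 0), (2, 1, 0), (1, 1, 1)]
```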
As suggested in Ref.~\cite{ChildTakada2018}, one can also extract the BAO wiggles from the bispectrum using the interference coordinates $(k_1, \delta, \theta)$. Recall that $O^{\rm bis}(k_1,\delta, \theta)$ from Eq.~\eqref{equ:bispectrum_wiggle} is the equivalent fractional contribution of the BAO wiggles to the bispectrum. For a general triangle shape in which the two other cyclic permutations also contribute significantly, it contains additional terms beyond those shown on the right-hand side of Eq.~\eqref{equ:bispectrum_wiggle}. We assume that an algorithm can be developed to successfully extract $O^{\rm bis}(k_1,\delta, \theta)$ for all configurations (for a list of problems that must be solved to justify this assumption, see Appendix~\ref{app:bao-extraction}).
Working under the assumption that the BAO wiggles can be successfully extracted to match the theory $O^{\rm bis}$, we now use a similar technique to marginalize over the modeling uncertainties in the nonlinear damping term for the bispectrum measurement of the BAO wiggles, as well as those that may arise during the wiggle extraction procedure for the bispectrum. As in the power spectrum case, we apply the polynomials $\tilde{A}'(k_1)$ and $\tilde{B}'(k_1)$ to the undamped part of the oscillations:
\begin{align}
O^{\rm bis}(k_1,\delta, \theta) \to \mathcal{D}(k_1, \mu_1) \left[\tilde{B}'(k_1) O^{\rm bis}(k_1, \delta,\theta) + \tilde{A}'(k_1) \right],
\end{align}
where $\mathcal{D}(k_1, \mu_1)$ accounts for the damping of wiggles at $k_1$ due to the nonlinear evolution. Note that this is not an exact treatment: to first order in $O$, if the first cyclic permutation in the tree-level expression dominates, $O^{\rm bis}(k_1, \delta, \theta) \approx O(k_1) + O(k_2)$ with $k_2 = k_1 + \pi\delta/r_{\rm s}$, so the damping treatment applied as a function of $k_1$ (but not $k_2$) is only approximate. Yet we expect that marginalizing over the polynomials $\tilde{A}'$ and $\tilde{B}'$ accounts for some of the uncertainty in the damping treatment, as in the power spectrum case. The definitions of $\tilde{A}'(k)$ and $\tilde{B}'(k)$ are just as in the power spectrum case:
\beq
\tilde{A}'(k)=\sum_n \tilde{a}'_n k^n; \quad \tilde{B}'(k)=\sum_m \tilde{b}'_m k^{2m}.\label{equ:bs-poly-wiggle}
\eeq
In the rest of this paper, we drop for simplicity the tilde and prime symbols that distinguish the $a_n$ and $b_m$ coefficients of the various observables; the intended coefficients should be clear from context.
\section{Fisher matrix setups}
\label{sec:fisher}
In this section, we use the Fisher matrix formalism to study the constraining power on $N_{\rm eff}$ and other cosmological parameters from the power spectrum, bispectrum and their combination.
Let the data vector $\vec{d}$ denote the set of measurements taken by the observer. The data can be modeled given a set of theory parameters $\vec{p}$ (the parameter vector) with a likelihood function $\mathcal{L}(\vec{d}|\vec{p})$. For an unbiased estimator $\hat{p}$ of the parameters, the variance $\mathrm{Var}(\hat{p})$ is bounded by the Cram\'er-Rao inequality $\mathrm{Var}(\hat{p}_i) \ge (-\partial^2 \ln \mathcal{L}(\vec{d}|\vec{p}) / \partial p_i^2 )^{-1}$. In the limit where the likelihood function is well approximated by a Gaussian, the Fisher matrix
\begin{align}
F_{ij} = - \frac{\partial^2 \ln \mathcal{L}(\vec{d}|\vec{p})}{\partial p_i \partial p_j} = \frac{\partial \vec{d}}{\partial p_i} C^{-1} \frac{\partial \vec{d}}{\partial p_j},
\end{align}
where $C_{ab} = \mathrm{Cov}(d_a, d_b)$ is the covariance matrix of the data vector, gives the best possible constraint on the parameter $i$: $\sigma_{p_i} \ge \sqrt{(F^{-1})_{ii}}$.
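As a concrete numerical illustration of this formalism, the Fisher matrix and the resulting marginalized Cram\'er-Rao bounds can be assembled as in the sketch below; the two-parameter linear model, noise level, and data grid are placeholders, not survey quantities:

```python
import numpy as np

# Toy data model d_a(p) = p1 * x_a + p2 * x_a**2 with Gaussian noise
x = np.linspace(0.1, 1.0, 50)
sigma_noise = 0.1
Cinv = np.eye(len(x)) / sigma_noise**2   # inverse covariance of the data

# Derivatives of the data vector with respect to (p1, p2)
dd_dp = np.stack([x, x**2])              # shape (n_params, n_data)

# F_ij = (d d_vec/d p_i) C^{-1} (d d_vec/d p_j)
F = dd_dp @ Cinv @ dd_dp.T

# Cramer-Rao: marginalized 1-sigma errors from the inverse Fisher matrix
sigma_marg = np.sqrt(np.diag(np.linalg.inv(F)))
```

Note that the marginalized error $\sqrt{(F^{-1})_{ii}}$ is always at least as large as the conditional error $1/\sqrt{F_{ii}}$, with equality only when the parameters are uncorrelated.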
\subsection{Power spectrum}
Using the power spectrum as our data vector, and for a single survey volume in which the galaxy number density $\bar{n}_{\rm g}$ is constant, the Fisher matrix is given by~\cite{BaumannGreen2018,YankelevichPorciani2019}
\begin{align}
F_{i j}=\int_{-1}^{1} \frac{\mathrm{d} \mu}{2} \int_{k_{\min }}^{k_{\max }} \frac{\mathrm{d} k k^{2}}{(2 \pi)^{2}} \frac{\partial \ln P_{\rm g}(k, \mu)}{\partial p_{i}} \frac{\partial \ln P_{\rm g}(k, \mu)}{\partial p_{j}} V_{\mathrm{eff}},
\label{equ:ps-fisher}
\end{align}
where
\begin{align}
V_{\mathrm{eff}} = \left(\frac{\bar{n}_{\rm g} P_{\rm g}(k, \mu)}{\bar{n}_{\rm g} P_{\rm g}(k, \mu)+1}\right)^{2} V,
\end{align}
is the effective survey volume (which is smaller than the true comoving volume $V$), and where we have also assumed a Gaussian covariance matrix.
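A schematic numerical evaluation of Eq.~\eqref{equ:ps-fisher} on a $(k, \mu)$ grid might look as follows; the power spectrum shape, bias, growth rate, volume and number density below are illustrative stand-ins, not the survey or \texttt{CAMB} quantities used in our forecasts:

```python
import numpy as np

# Placeholder survey and model inputs (illustrative numbers only)
V = 1.0e9            # comoving volume in (Mpc/h)^3
n_g = 3.0e-4         # galaxy number density in (h/Mpc)^3
k = np.linspace(0.01, 0.2, 200)
mu = np.linspace(-1.0, 1.0, 51)
K, MU = np.meshgrid(k, mu, indexing="ij")

# Toy anisotropic galaxy power spectrum (Kaiser-like, illustrative only)
b1, f = 2.0, 0.8
P_g = (b1 + f * MU**2) ** 2 * 2.0e4 * (K / 0.05) ** -1.5

# Effective volume downweights shot-noise-dominated modes
V_eff = (n_g * P_g / (n_g * P_g + 1.0)) ** 2 * V

# Log-derivatives of P_g with respect to two toy parameters (b1 and f)
dlnP = [2.0 / (b1 + f * MU**2), 2.0 * MU**2 / (b1 + f * MU**2)]

# F_ij = int dmu/2 int dk k^2/(2 pi)^2 (dlnP_i)(dlnP_j) V_eff
dk, dmu = k[1] - k[0], mu[1] - mu[0]
F = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        integrand = dlnP[i] * dlnP[j] * V_eff * K**2 / (2.0 * np.pi) ** 2
        F[i, j] = integrand.sum() * dk * dmu / 2.0
```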
In realistic surveys, one often measures the power spectrum in multiple redshift bins. We will treat such cases with the galaxy number density assumed to be constant within each bin, but different from bin to bin. We also assume that there is no correlation between galaxies of different redshift bins, in which case the total Fisher matrix is just the sum over that of all the redshift bins.
To evaluate the Fisher matrix, we need to compute the derivatives of the galaxy power spectrum with respect to the cosmological, bias and systematics parameters. We consider two different ways of evaluating the derivatives, which we loosely call the total and the BAO wiggle constraints, reflecting where the information is drawn from.
The authors of Ref.~\cite{BaumannGreen2018} also derived constraints on $N_{\rm eff}$ from the phase of the BAO wiggles (see also Ref.~\cite{BaumannBeutler2019}); for the sake of comparison with previous literature, we also include this method in Appendix \ref{app:phase-shift} along with its description and constraints from the power spectrum and bispectrum. %
We now detail the total and BAO wiggle measurements which we focus on in the main text.
\subsubsection{Total constraints}
We start by considering the effect of the parameters $p_i$ on the entire power spectrum including both the broadband shape and the BAO wiggles.
Our parameter vector is
\begin{align}
\vec{p}=(N_{\rm eff}, \theta_\star, \omega_b, \omega_c, A_s, n_s, \tau, Y_p;\; \vec b_1;\; {a}_n, {b}_m). \nonumber
\end{align}
The first set of parameters are the cosmological parameters that we label $\Lambda$CDM+$Y_{\rm p}$+$N_{\rm eff}$. We use \texttt{CAMB}~\cite{LewisChallinor2000} to evaluate their derivatives numerically. To do so, we change the fiducial value of each parameter one at a time by a step size of $\pm h$:
\beq
\frac{\partial P_g}{\partial p_i} \approx \frac{P_g(p_i=p_i^{\rm fid}+h)-P_g(p_i=p_i^{\rm fid}-h)}{2h}.
\eeq
Note that we did not fix $\theta_\star$ or $a_{\rm eq}$ here when varying $N_{\rm eff}$.
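The two-sided finite difference above can be sketched as follows, with a toy power-law spectrum standing in for the \texttt{CAMB} output:

```python
import numpy as np

def central_diff(P_of, p_fid, h):
    """Two-sided finite-difference derivative dP/dp evaluated at p_fid."""
    return (P_of(p_fid + h) - P_of(p_fid - h)) / (2.0 * h)

# Toy stand-in for a Boltzmann-code spectrum P_g(k; p); not an actual CAMB call
k = np.linspace(0.01, 0.5, 100)
P_of_p = lambda p: p**2 * (k / 0.05) ** -1.5

# For this quadratic toy model the central difference is exact: 2p * (k/0.05)^-1.5
dP = central_diff(P_of_p, p_fid=1.0, h=0.01)
```

In practice the step size $h$ must balance truncation error (too large) against numerical noise in the Boltzmann-code output (too small).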
The second set of parameters are the galaxy biases. We treat the biases in different redshift bins as independent parameters: if a survey has $n_z$ redshift bins, there are $n_z$ bias parameters in the Fisher matrix. %
Finally, for the polynomial coefficients ${a}_n$ and ${b}_m$ (defined in Eq.~\ref{equ:ps-poly-broadband})
the derivatives are calculated analytically. The polynomial terms are also dependent on the redshift bin, so there will also be $n_z$ coefficients to consider for every $n$ or $m$ in ${a}_n$ or ${b}_m$. For our fiducial setup, we choose $b_{m\le 1}$ following Ref.~\cite{BaumannGreen2018}, amounting to $n_z$ polynomial parameters in the Fisher matrix. Later in Section~\ref{sec:results} we will explore the effects of using a different set of polynomial parameters on our results.
In realistic surveys, the broadband measurement is often prone to systematics like stellar contaminations and those in our modeling of the nonlinear evolution effects and galaxy bias~\cite{DesjacquesJeong2018}. On the other hand, the scale and phase of BAO wiggles are more robust to nonlinear evolution~\cite{EisensteinSeo2007b}, which can also be partly reversed by reconstruction techniques~\cite{EisensteinSeo2007a,PadmanabhanWhite2009,WangYu2017,ShiCautun2018}. For this reason, we will also consider extracting information from only the BAO wiggles in the next subsection.
For our fiducial forecast for the total power spectrum, we set $k_{\rm min}=0.01h\,{\rm Mpc}^{-1}$ and $k_{\rm max} = 0.2h\,{\rm Mpc}^{-1}$.
\subsubsection{BAO wiggles}
We now consider using the information from the BAO wiggles alone. Here we take the wiggle part of the power spectrum as the data vector, so that the derivatives are computed as follows, with $P_{\rm g}^{\rm nw}$ held fixed at the fiducial cosmology and the derivatives $\partial O(k|\vec{p})/\partial p_i$ calculated numerically:
\begin{align}
\frac{\partial P^{\rm w}_{\rm g}}{\partial p_i} = P_{\rm g}^{\rm nw} (k, \mu) \mathcal{D}(k, \mu)\frac{\partial O(k|\vec{p})}{\partial p_i}.
\label{equ:power-spectrum-derivative-bao}
\end{align}
To do so, we need to calculate the matter power spectrum for different cosmological parameters and separate its smooth and oscillatory parts. We follow Ref.~\cite{BaumannGreen2018} to extract the no-wiggle power spectrum by applying a discrete sine transform, cutting the characteristic ``bump'' of the BAO, and then performing an inverse discrete sine transform. Details of the algorithm may be found in Appendix C of Ref.~\cite{BaumannGreen2018}.
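A heavily simplified sketch of this wiggle/no-wiggle split is shown below; the toy spectrum, the assumed harmonic range of the BAO ``bump'', and the linear interpolation are placeholders for the more careful procedure of Ref.~\cite{BaumannGreen2018}:

```python
import numpy as np
from scipy.fft import dst, idst

# Toy spectrum: smooth power law times BAO-like wiggles (sound horizon ~105 Mpc/h)
k = np.linspace(0.005, 1.0, 1024)
P_smooth = 2.0e4 * (k / 0.05) ** -1.5
P = P_smooth * (1.0 + 0.05 * np.sin(105.0 * k) * np.exp(-k / 0.3))

# Sine transform of log P: the BAO oscillation maps to a localized 'bump'
# in harmonic space (here around harmonic n ~ 105 * k_range / pi ~ 33)
s = dst(np.log(P), type=2)

# Excise the bump by interpolating across the affected harmonics
i_lo, i_hi = 20, 50                      # placeholder harmonic range
idx = np.arange(len(s))
keep = (idx < i_lo) | (idx > i_hi)
s_nw = np.interp(idx, idx[keep], s[keep])

# Inverse transform gives the no-wiggle spectrum; the ratio isolates the wiggles
P_nw = np.exp(idst(s_nw, type=2))
O = P / P_nw - 1.0
```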
For power spectrum BAO wiggles, we set $k_{\rm min}=0.01h\,{\rm Mpc}^{-1}$ and $k_{\rm max} = 0.5h\,{\rm Mpc}^{-1}$. The polynomial terms to include are $a_{n\le 3}, b_{ m\le 4}$~\cite{BaumannGreen2018}.
\begin{table}[!htbp]
\begin{subtable}{\linewidth}
\begin{tabularx}{\textwidth}{XXXXXXX}
\toprule
$z_{\rm mid}$ &0.050 &0.150 &0.250 &0.350 &0.450 &0.550 \\
$10^3n_{\rm g}$ &0.289 &0.290 &0.300 &0.304 &0.276 &0.323 \\
\midrule
$z_{\rm mid}$ &0.650 &0.750 & & & & \\
$10^3n_{\rm g}$ &0.120 &0.010 & & & & \\
\bottomrule
\end{tabularx}
\caption{BOSS: $f_{\rm sky}=0.242$ (10000 deg$^2$).}
\end{subtable}
\medskip
\begin{subtable}{\linewidth}
\begin{tabularx}{\textwidth}{XXXXXXX}
\toprule
$z_{\rm mid}$ &0.150 &0.250 &0.350 &0.450 &0.550 &0.650 \\
$10^3n_{\rm g}$ &2.380 &1.070 &0.684 &0.568 &0.600 &0.696 \\
\midrule
$z_{\rm mid}$ &0.750 &0.850 &0.950 &1.050 &1.150 &1.250 \\
$10^3n_{\rm g}$ &0.810 &0.720 &0.560 &0.520 &0.510 &0.450 \\
\midrule
$z_{\rm mid}$ &1.350 &1.450 &1.550 &1.650 &1.750 &1.850 \\
$10^3n_{\rm g}$ &0.360 &0.240 &0.130 &0.070 &0.030 &0.010 \\
\bottomrule
\end{tabularx}
\caption{DESI: $f_{\rm sky}=0.339$ (14000 deg$^2$).}
\end{subtable}
\medskip
\begin{subtable}{\linewidth}
\begin{tabularx}{\textwidth}{XXXXXXX}
\toprule
$z_{\rm mid}$ &0.650 &0.750 &0.850 &0.950 &1.050 &1.150 \\
$10^3n_{\rm g}$ &0.640 &1.460 &1.630 &1.500 &1.330 &1.140 \\
\midrule
$z_{\rm mid}$ &1.250 &1.350 &1.450 &1.550 &1.650 &1.750 \\
$10^3n_{\rm g}$ &1.000 &0.840 &0.650 &0.510 &0.360 &0.250 \\
\midrule
$z_{\rm mid}$ &1.850 &1.950 &2.050 & & & \\
$10^3n_{\rm g}$ &0.150 &0.090 &0.070 & & & \\
\bottomrule
\end{tabularx}
\caption{Euclid: $f_{\rm sky}=0.364$ (15000 deg$^2$).}
\end{subtable}
\medskip
\begin{subtable}{\linewidth}
\begin{tabularx}{\textwidth}{XXXXXXX}
\toprule
$z_{\rm mid}$ &0.700 &0.900 &1.100 &1.300 &1.500 & \\
$10^3n_{\rm g}$ &0.300 &0.300 &0.400 &0.400 &0.400 & \\
\bottomrule
\end{tabularx}
\caption{PFS: $f_{\rm sky}=0.048$ (2000 deg$^2$).}
\end{subtable}
\medskip
\begin{subtable}{\linewidth}
\begin{tabularx}{\textwidth}{XXXXXXX}
\toprule
$z_{\rm mid}$ &0.100 &0.300 &0.500 &0.700 &0.900 &1.300 \\
$10^3n_{\rm g}$ &9.970 &4.110 &0.501 &0.071 &0.032 &0.016 \\
\midrule
$z_{\rm mid}$ &1.900 &2.500 &3.100 &3.700 &4.300 & \\
$10^3n_{\rm g}$ &0.004 &0.001 &0.002 &0.002 &0.001 & \\
\bottomrule
\end{tabularx}
\caption{SPHEREx: $f_{\rm sky}=0.750$ (30940 deg$^2$).}
\end{subtable}
\medskip
\begin{subtable}{\linewidth}
\begin{tabularx}{\textwidth}{XXXXXXX}
\toprule
$z_{\rm mid}$ &0.425 &0.475 &0.525 &0.575 &0.625 &0.675 \\
$10^3n_{\rm g}$ &0.482 &0.638 &0.862 &0.975 &1.134 &1.242 \\
\midrule
$z_{\rm mid}$ &0.725 &0.775 &0.825 &0.875 &0.925 &0.975 \\
$10^3n_{\rm g}$ &1.266 &1.282 &1.248 &1.224 &1.189 &1.120 \\
\midrule
$z_{\rm mid}$ &1.025 &1.075 &1.125 &1.175 &1.225 &1.275 \\
$10^3n_{\rm g}$ &1.053 &0.984 &0.903 &0.842 &0.769 &0.713 \\
\midrule
$z_{\rm mid}$ &1.325 &1.375 &1.425 &1.475 &1.525 &1.575 \\
$10^3n_{\rm g}$ &0.645 &0.604 &0.542 &0.487 &0.439 &0.394 \\
\midrule
$z_{\rm mid}$ &1.625 &1.675 &1.725 &1.775 &1.825 & \\
$10^3n_{\rm g}$ &0.347 &0.309 &0.260 &0.217 &0.187 & \\
\bottomrule
\end{tabularx}
\caption{Roman: $f_{\rm sky}=0.048$ (2000 deg$^2$).}
\end{subtable}
\caption{Survey specifications used in this study. For each survey we list the galaxy number density ($n_{\rm g}$, in units of $({\rm Mpc}/h)^{-3}$) as a function of the central redshift $z_{\rm mid}$ of each redshift bin, as well as the sky coverage $f_{\rm sky}$. }
\label{tab:surveys}
\end{table}
\subsection{Bispectrum}
For the bispectrum, the Fisher matrix of a single redshift bin with volume $V$ is given by~\cite{YankelevichPorciani2019,IvanovPhilcox2022}
\begin{align}
F_{ij} & = \int_{k_{\rm min}}^{k_{\rm max}} \dd k_1 \int_{k_{1}}^{k_{\rm max}} \dd k_2 \int_{k_{2}}^{k_{\rm max}}\dd k_3 \int_{-1}^{1}\dd \mu_1 \int_{-1}^1 \dd \mu_2\nonumber \\ & \frac{\partial B }{\partial p_i}\frac{\partial B}{\partial p_j} \frac{V k_1 k_2 k_3 \gamma(\cos \theta)\Sigma (\mu_1, \mu_2, \cos \theta) }{8 \pi^4 s_{123} P(\vec k_1)P(\vec k_2)P(\vec k_3)},
\label{equ:fisher-bispectrum}
\end{align}
where $k_{\rm min}=0.01h\,{\rm Mpc}^{-1}$ and $k_{\rm max} = 0.2h\,{\rm Mpc}^{-1}$ in our fiducial setup for both bispectrum broadband and BAO constraints. Note that we use $k_1 \leq k_2 \leq k_3$ in order to count only a unique set of triangles.
Similar to the power spectrum, we only consider the Gaussian contribution to the covariance matrix (for details, see Appendix \ref{app:N_tri}).
Here $k_i = |\vec{k}_i|$ and $\mu_i = \hat{k}_i \cdot \hat{n}$ ($i=1,2,3$), where $\hat{n}$ is the line-of-sight direction. The factor $\gamma(\cos\theta)$ describes the contributions of different combinations of $(k_1, k_2, k_3)$, and the angular factor $\Sigma (\mu_1, \mu_2, \cos \theta)$ accounts for the orientation of the triangle configuration in redshift space (see Appendix \ref{app:N_tri} for explicit expressions). Finally, $s_{123}$ is the symmetry factor for the different triangle types: 1, 2 and 6 for scalene, isosceles and equilateral triangles, respectively.
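The symmetry factor and the enumeration of unique closed triangles with $k_1 \leq k_2 \leq k_3$ can be sketched as follows (the bin width and $k$-range are illustrative):

```python
import numpy as np

def s123(k1, k2, k3):
    """Symmetry factor: 1 (scalene), 2 (isosceles), 6 (equilateral)."""
    n_equal = int(k1 == k2) + int(k2 == k3) + int(k1 == k3)
    return {0: 1, 1: 2, 3: 6}[n_equal]

# Unique closed triangle bins with k1 <= k2 <= k3 <= k_max
kbins = np.arange(1, 21) * 0.01          # 0.01 to 0.20 h/Mpc in 0.01 steps
triangles = [(k1, k2, k3)
             for i, k1 in enumerate(kbins)
             for j, k2 in enumerate(kbins[i:], start=i)
             for k3 in kbins[j:]
             if k3 <= k1 + k2]           # triangle closure condition
```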
Here again, we have two types of derivatives: one that uses the total information from the broadband and the BAO wiggles, and one that extracts information solely from the wiggles. For the total, we differentiate Eq.~\eqref{equ:galaxy-bispectrum}. For the BAO wiggles, the derivatives $\partial B^{\rm w}_{\rm g}/\partial p_i$ are calculated by applying the product rule to the tree-level expression for the bispectrum in Eq.~\eqref{equ:galaxy-bispectrum}, keeping $P^{\rm nw}_{\rm m}$ and $q$ fixed at the fiducial cosmology, where $q=1$. More specifically,
\begin{align}
\frac{\partial B^{\rm w}_{\rm g}}{\partial p_i} &
\approx
\frac{2}{q^6} Z_2(\vec k_1, \vec k_2) Z_1(\vec k_1) Z_1(\vec k_2) \left[P^{\rm nw}_{\rm m}(k_1|p_i^{\rm fid}) \frac{\partial P^{\rm w}_{\rm m}(k_2)}{\partial p_i} \right. \nonumber\\
& \left. + P^{\rm nw}_{\rm m}(k_2|p_i^{\rm fid}) \frac{\partial P^{\rm w}_{\rm m}(k_1)}{\partial p_i} \right] + {\rm 2~cyc.~perm.}
\end{align}
where
\begin{align}
\frac{\partial P^{\rm w}_{\rm m}}{\partial p_i} = P_{\rm m}^{\rm nw} (k, \mu) \mathcal{D}(k, \mu)\frac{\partial O(k|\vec{p})}{\partial p_i}.
\end{align}
The parameter vector for the bispectrum is slightly different than that of the power spectrum:
\begin{align}
\vec{p}=(N_{\rm eff}, \theta_\star, \omega_b, \omega_c, A_s, n_s, \tau, Y_p;\; \Vec{b}_1, \vec{b}_2, \vec{b}_{s^2};\; {a}_n, {b}_m),\nonumber
\end{align}
where we have the usual set of cosmological parameters, but now with two additional sets of second-order bias parameters $b_2$ and $b_{s^2}$. Note that we do not model the effects of the second-order bias parameters in the power spectrum, so this is not a consistent truncation of the perturbative expansion of $\delta_g$; rather, we adhere to taking the lowest-order term for each of the power spectrum and bispectrum observables themselves.
For the polynomial parameters that marginalize over systematics, $a_n$ and $b_m$ are defined differently for the bispectrum than for the power spectrum (see Eq.~\eqref{equ:bs-poly-broadband} for details). We choose as the fiducial set of parameters $b_{m\le 1}$ for the broadband version and $a_{n\le 3}, b_{m\le 4}$ for the BAO wiggle version. The choice of polynomials and how they impact our results are discussed in Sec.~\ref{sec:forecast_dependencies}.
Note that the combined constraints from the power spectrum and the bispectrum are obtained by simply adding the corresponding Fisher matrices ($P+B$ hereafter), while their correlations are ignored in this study. In this case, the polynomial coefficients in power spectrum and the bispectrum Fisher matrices are treated as independent parameters, since we assume that they marginalize over systematic effects that affect these observables differently.
In order to evaluate the integral in Eq.~\eqref{equ:fisher-bispectrum}, we use a quasi-Monte Carlo method based on low-discrepancy sequences, specifically the Sobol sequence~\cite{sobol1967distribution}. Compared with integration on a regular grid or ordinary Monte Carlo integration, the Sobol sequence converges much faster: $\mathcal{O}(N^{-1})$ versus $\mathcal{O}(N^{-0.5})$, where $N$ is the number of sampling points. For the 5-dimensional integral in the bispectrum Fisher matrix above, we only needed a total of $10^4 - 10^5$ sampling points to converge with a relative error below $5\%$.
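A minimal sketch of such a Sobol-sequence integration, using \texttt{scipy.stats.qmc} and a toy integrand in place of the bispectrum Fisher integrand, is:

```python
import numpy as np
from scipy.stats import qmc

# 5D integral over (k1, k2, k3, mu1, mu2); the integrand below is a placeholder
k_min, k_max = 0.01, 0.2

def integrand(k1, k2, k3, mu1, mu2):
    return k1 * k2 * k3 * (1.0 + 0.5 * mu1**2 * mu2**2)

# Scrambled Sobol sequence: 2^14 low-discrepancy points in the unit hypercube
sampler = qmc.Sobol(d=5, scramble=True, seed=0)
u = sampler.random_base2(m=14)

# Map the unit hypercube to the integration domain
lo = np.array([k_min, k_min, k_min, -1.0, -1.0])
hi = np.array([k_max, k_max, k_max, 1.0, 1.0])
x = qmc.scale(u, lo, hi)
vol = np.prod(hi - lo)

# Enforce the ordering k1 <= k2 <= k3 by zeroing excluded configurations
mask = (x[:, 0] <= x[:, 1]) & (x[:, 1] <= x[:, 2])
vals = np.where(mask, integrand(*x.T), 0.0)
estimate = vol * vals.mean()
```

For this separable toy integrand the answer can be checked analytically against the quasi-Monte Carlo estimate.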
Finally, we note also that since we are taking our derivatives using the chain rule on the tree-level expression for the bispectrum, where we make use of the power spectrum wiggle extractions, we do not perform a wiggle extraction directly on the bispectrum itself for calculating our Fisher forecast.
In a real data analysis, however, one would need to extract the wiggles directly from the measured bispectrum. This is best done by going to the $(k_1, \delta, \theta)$ coordinates. We show a naive attempt at applying the power spectrum extraction algorithm directly to the bispectrum in Appendix \ref{app:bao-extraction}. Since the algorithm assumes a near-constant period in the wiggles, we see high-fidelity recovery of the BAO information for constructive configurations but worse performance for the destructive ones.
\subsection{Survey specifications}
We will forecast the constraints for a variety of galaxy redshift surveys:
BOSS\footnote{\url{https://www.sdss.org/surveys/boss/}}, DESI\footnote{\url{https://www.desi.lbl.gov/}}, Euclid\footnote{\url{https://www.cosmos.esa.int/web/euclid/euclid-survey}}, PFS\footnote{\url{https://pfs.ipmu.jp/}}, SPHEREx\footnote{\url{https://spherex.caltech.edu/}; for number density see \url{https://github.com/SPHEREx/Public-products/blob/master/galaxy_density_v28_base_cbe.txt}} and Roman Space Telescope\footnote{\url{https://roman.gsfc.nasa.gov/}}. Note that for SPHEREx, we do not use all five samples listed as in past forecasts~\cite{DoreBock2014}, but only the sample with the best redshift accuracy between $\sigma_z/(1+z) = 0$ and 0.003, amounting to negligible damping of modes along the line-of-sight due to photometric redshift errors for the scales we consider. For Roman, instead of using both the $H_{\alpha}$ and the $O_{\rm III}$ samples, we restrict only to the $H_{\alpha}$ sample which dominates at lower redshifts up to $z \approx 1.8$~\cite{2021MNRAS.507.1746E}.
For each survey, the key survey parameters include the mean galaxy number density at different redshift bins $\bar{n}_{\rm g}(z_i)$ and the sky coverage $f_{\rm sky}$, which are summarized in Table~\ref{tab:surveys}.
For comparison, we also include an idealized survey in the cosmic variance limit (CVL), setting $\bar{n}_{\rm g}= \infty$, $f_{\rm sky}=1$, and $z_{\rm max}=4$, while maintaining the BAO reconstruction rate fixed at 0.5.
All of our LSS results are combined with a CMB Fisher matrix for a mock Planck 2018 experiment with $\Lambda$CDM+$N_{\rm eff}$+$Y_{\rm p}$ following the formalism and specifications detailed in Ref.~\cite{BaumannGreen2018}. The Planck-only constraint gives $\sigma_{N_{\rm eff}}=0.32$, and serves as our baseline when comparing with CMB + LSS constraints in the next section.
\begin{table}
\begin{subtable}{\linewidth}
\begin{tabularx}{\textwidth}{XXXl}
\toprule
Planck+Survey & P & B & P+B \\
\midrule
BOSS & 0.30 & 0.28 & 0.28 \\
DESI & 0.27 & 0.21 & 0.20 \\
Euclid & 0.26 & 0.18 & 0.17 \\
PFS & 0.30 & 0.27 & 0.27 \\
SPHEREx & 0.28 & 0.24 & 0.23 \\
Roman & 0.30 & 0.26 & 0.26 \\
CVL & 0.08 & 0.08 & 0.06 \\
\bottomrule
\end{tabularx}
\caption{Our fiducial results for the BAO-only constraints from the power spectrum and bispectrum; a Planck 2018 Fisher matrix is included in all cases. }
\label{tab:constraints-bao}
\end{subtable}
\begin{subtable}{\linewidth}
\begin{tabularx}{\textwidth}{XXXl}
\toprule
Planck+Survey & P & B & P+B\\
\midrule
BOSS & 0.23 & 0.22 & 0.18 \\
DESI & 0.13 & 0.11 & 0.09 \\
Euclid & 0.12 & 0.09 & 0.08 \\
PFS & 0.22 & 0.20 & 0.17 \\
SPHEREx & 0.16 & 0.14 & 0.12 \\
Roman & 0.19 & 0.17 & 0.15 \\
CVL & 0.05 & 0.04 & 0.03 \\
\bottomrule
\end{tabularx}
\caption{Same as above, but for {total} constraints including the broadband and BAO wiggles.}
\label{tab:constraints-broadband}
\end{subtable}
\caption{Forecasted joint constraints on $N_{\rm eff}$ from the Planck 2018 CMB experiment ($\Lambda$CDM+$N_{\rm eff}$+$Y_{\rm p}$) and different LSS surveys, after marginalizing over ($b_1, b_2, b_{s^2}$) and polynomial coefficients. For reference, the forecasted Planck-only constraint is $\sigma_{N_{\rm eff}}=0.32$.
}
\label{tab:constraints}
\end{table}
\section{Results}
\label{sec:results}
We now present the constraints on $N_{\rm eff}$ from the power spectrum and the bispectrum for the various surveys using the Fisher formalism presented above.
\subsection{Fiducial results}
\label{sec:fiducial_results}
In Fig.~\ref{fig:contours}, we show our fiducial results for the constraints on $\theta_\star$ and $N_{\rm eff}$ using BAO wiggles for various LSS surveys. As in Ref.~\cite{BaumannGreen2018} for the case of the power spectrum, the LSS constraints by themselves are not as competitive as Planck alone. So here we show their joint constraints with Planck. The power spectrum (P), bispectrum (B) and joint P+B constraints with Planck are shown in red, green and blue respectively, whereas Planck alone is plotted in grey. %
To start, we compare the improvement from adding the LSS power spectrum to the Planck-only constraints. We typically see small improvements for most surveys except DESI and Euclid, consistent with Ref.~\cite{BaumannGreen2018}. For the LSS bispectrum, we also see improvement upon the Planck-only result for the different surveys, though for BOSS and PFS the improvement is not significant. For Euclid, the bispectrum yields a factor of $\sim 1.5$ improvement in $\sigma_{N_{\rm eff}}$ and $\theta_\star$ compared to Planck-only. Finally, the combination P+B offers negligible further improvement over the bispectrum alone.
More precisely, we show in Tab.~\ref{tab:constraints-bao} the 1D marginalized constraints on ${N_{\rm eff}}$ from the BAO wiggles. Compared to the Planck constraint $\sigma_{N_{\rm eff}}=0.32$, the final CMB+P+B result is better by a factor ranging from 1.15 for BOSS to 1.84 for Euclid. We note that DESI and Euclid show the best improvements because of the large volumes they probe.
Tab.~\ref{tab:constraints-broadband} shows similar results, but using both the broadband and the wiggle parts of the LSS observables. As expected, these results are better than those using the BAO wiggles alone, and would reflect reality if all systematics could be successfully controlled to yield reliable broadband measurements. Here again, the improvement of the bispectrum alone over the power spectrum alone is about a factor of 1.05 -- 1.30, whereas the joint P+B results improve upon the power spectrum alone by about 1.23 -- 1.50. Not limiting ourselves to the BAO wiggles, the best measurement from next-generation surveys would allow us to probe $N_{\rm eff}$ with $\sigma_{N_{\rm eff}}=0.08$, a factor of 4 improvement over the Planck-alone result of 0.32, and only a factor of 2 away from an actual CVL experiment.
As stated before, we also checked that the improvements from LSS power spectrum or bispectrum based on a CMB-Stage 4 experiment instead of Planck would be negligible. For a CMB-Stage 3 experiment, there is insignificant improvement from power spectrum BAO for all surveys as was also shown in Ref.~\cite{BaumannGreen2018}, but a slight improvement (by a factor of $\sim 1.1$) from the bispectrum BAO wiggles for DESI and Euclid.
In sum, we find that the LSS bispectrum signals, both the BAO wiggles and the total, help improve the Planck-only constraint on $N_{\rm eff}$. Planck+B also yields a better constraint on $N_{\rm eff}$ than Planck+P.
\subsection{Forecast Dependencies}
\label{sec:forecast_dependencies}
We now investigate forecast dependencies on our setup, by varying $k_{\rm max}$ and the number of polynomial parameters used for marginalizing over systematics.
\paragraph*{Varying $k_{\rm max}$.}
Recall that throughout this work we set an upper limit $k\leq k_{\rm max}^{P}$ for the power spectrum and $k_1, k_2, k_3 \leq k_{\rm max}^{B}$ for the bispectrum Fisher computation, with fiducial values $k_{\rm max}^{P}=0.2\,h\,{\rm Mpc}^{-1}$ for power spectrum BAO wiggles, $k_{\rm max}^{P}=0.5\,h\,{\rm Mpc}^{-1}$ for the power spectrum broadband, and $k_{\rm max}^{B}=0.2\,h\,{\rm Mpc}^{-1}$ for both cases in the bispectrum. We now explore the dependence of our forecast results for each survey in Fig.~\ref{fig:kmax_dependence} as we vary $k_{\rm max}$ from $k_{\rm min}$ to $0.25\, h\,{\rm Mpc}^{-1}$. In the left, middle and right panels, we show respectively the power spectrum, bispectrum and joint P+B constraints, for both the BAO-only (dashed lines) and the total results (solid lines).
As expected, the constraining power improves with higher $k_{\rm max}$. It does so faster for the bispectrum than for the power spectrum because the number of triangle configurations increases more rapidly with $k_{\rm max}$ than the number of $k$-modes in the power spectrum. Additionally, for the power spectrum results, the BAO constraints show a relatively sharp decrease near $0.14\,h\,{\rm Mpc}^{-1}$, after the third peak in the BAO wiggles. For the bispectrum, this sharp decline does not occur at the same place as in the power spectrum, reflecting the fact that the information on $N_{\rm eff}$ comes not from a peak in the oscillations with respect to one of the $k$'s, but rather from the interference between two $k$'s (see Fig.~\ref{fig:bispectrum-bao-extraction}). %
Finally, this feature is carried over into the P+B BAO constraints since they are dominated by the bispectrum results for $k \gtrsim 0.08\,h\,{\rm Mpc}^{-1}$.
Comparing between the surveys, Euclid gives the most optimistic result: the BAO constraints could reach $\sigma_{N_{\rm eff}}\approx 0.25$ for the PS and $\sigma_{N_{\rm eff}}\approx 0.17$ for the bispectrum (whereas the total constraints reach $\sigma_{N_{\rm eff}}\approx 0.12$ for the PS and $\sigma_{N_{\rm eff}}\approx 0.07$ for the bispectrum) at our fiducial $k_{\rm max}=0.2 h\,{\rm Mpc}^{-1}$. The results become even better at higher $k_{\rm max}$. We caution the reader, however, that past $k \sim 0.2 h\,{\rm Mpc}^{-1}$ the linear-scale modeling that we use for both the power spectrum and the bispectrum becomes less accurate. %
\paragraph*{Varying the polynomial model.}
As introduced in the previous section, the polynomial terms are included in the galaxy power spectrum and bispectrum modeling to account for uncertainties in, e.g., the measurement, the modeling, or the extraction of the BAO wiggles. However, the specific number of terms we choose to include can impact the forecasted constraints significantly: including too few parameters may yield an overly optimistic forecast, while including too many could over-penalize the analysis. Recall that we follow Ref.~\cite{BaumannGreen2018} in choosing a specific set of terms for the power spectrum and keep the same powers of $k$ for the bispectrum, namely, $b_{m\le 1}$ for the total bispectrum and $a_{n\le 3},b_{ m\le 4}$ for the BAO wiggles.
In Fig.~\ref{fig:bs-poly} we vary the fiducial set of polynomial parameters for the Euclid bispectrum forecast.
For the total in the upper panel,
we see that the impact of the polynomial terms is not as significant when using a high $k_{\rm max}$, but the $b_m$ coefficients do have a significant impact at lower $k_{\rm max}$. For $k_{\rm max}^{B} = 0.2\,h\,{\rm Mpc}^{-1}$, our fiducial choice of $b_{m\le 1}$ (red) does not deviate much from the other choices.
For the BAO wiggles in the bottom panel, the polynomial coefficients affect the result more significantly. At $k_{\rm max} = 0.2\,h\,{\rm Mpc}^{-1}$, the constraint on $N_{\rm eff}$ can vary between $\sim 0.1$ and $\sim 0.2$ depending on the choice of polynomials. Our fiducial setup of $a_{n\le 3},b_{m\le4}$ (red) is on the more conservative side.
\subsection{Note on bispectrum interference and computational costs}
\label{sec:notes_on_interference}
One advantage of using the bispectrum in spectroscopic surveys is that one can exploit the interference structure to simplify the analysis. Most of the signal is concentrated around the constructive configurations, as explained in Sec.~\ref{sec:interference}. In this section we investigate how much Fisher information is preserved when choosing to measure only a subset of configurations.
In Fig.~\ref{fig:integrand} we present the Fisher information of $N_{\rm eff}$, $\tilde{F}_{N_{\rm eff} N_{\rm eff}}$, in the $(\delta, \theta)$ plane. Here $\tilde{F}_{N_{\rm eff} N_{\rm eff}}(\delta, \theta)$ is obtained from the same integrand as in Eq.~\eqref{equ:fisher-bispectrum}, but integrated over $0.01\,h\,{\rm Mpc}^{-1}\le k_1\le0.2\,h\,{\rm Mpc}^{-1}$ and $-1\le \mu_1, \mu_2\le 1$, and then normalized over the whole plane:
\begin{align}
\tilde{F}_{N_{\rm eff} N_{\rm eff}} & \propto \int_{k_{\rm min}}^{k_{\rm max}} \dd k_1 \int_{-1}^{1}\dd \mu_1 \int_{-1}^1 \dd \mu_2 \left(\frac{\partial B }{\partial N_{\rm eff}}\right)^2 \nonumber \\ & \frac{V k_1 k_2 k_3 \gamma(\cos \theta)\Sigma (\mu_1, \mu_2, \cos \theta) }{8 \pi^4 s_{123} P(\vec k_1)P(\vec k_2)P(\vec k_3)} \nonumber\\ & \Theta(k_{\max}-k_2)\Theta(k_{\max}-k_3).
\label{eq:F_Neff_Neff}
\end{align}
Here the step function $\Theta$ ensures that $k_2, k_3 \le k_{\rm max}$. Thus $\tilde{F}_{N_{\rm eff} N_{\rm eff}}$ is the density distribution of the bispectrum BAO information on $N_{\rm eff}$ in the $(\delta, \theta)$ plane.
As expected, the information is mostly concentrated around the constructive interferences $\delta = 0$ and 2. However, there are some deviations around $\delta = 4$. We have verified that these are due to imposing the upper bound $k_3 \leq k_{\rm max}$, which excludes some triangle configurations around $\delta = 4$. More specifically, since $\delta = 1$ corresponds to $k_2 - k_1 \approx 0.03\,h\,{\rm Mpc}^{-1}$, at $\delta = 4$ and for $k_1 \gtrsim 0.08\,h\,{\rm Mpc}^{-1}$ (a large part of the $k_1$ range noted above), we have $k_2 \gtrsim k_{\rm max} = 0.2 h\,{\rm Mpc}^{-1}$, for which no triangle configurations are available.
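The quoted correspondence between $\delta$ and $k_2 - k_1$ follows from the BAO period in $k$-space: with constructive interference at $\delta = 0$ and $2$, a step of $\delta = 1$ is half a BAO wavelength. A quick arithmetic check, assuming an illustrative sound horizon of $r_s \approx 105\, {\rm Mpc}/h$:

```python
import numpy as np

r_s = 105.0                   # sound horizon in Mpc/h (illustrative value)
k_bao = 2.0 * np.pi / r_s     # BAO wavelength in k-space, roughly 0.06 h/Mpc
dk_per_delta = k_bao / 2.0    # delta = 1 corresponds to half a BAO period
# dk_per_delta is close to 0.03 h/Mpc, matching the correspondence in the text
```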
Now, to quantify how the interference structure can help reduce computational costs without much loss of information, we first select the regions where most of the information is contained (see the boxed regions in Fig.~\ref{fig:integrand}). We then calculate the number of triangle configurations as well as the Fisher information enclosed in these regions, and report their fractions relative to the total.
For a setup mimicking the first redshift bin ($0.6 \leq z \leq 0.7$) of the Euclid experiment, we find that 51\% of the Fisher information is enclosed within the boxed regions, which contain 36\% of the triangle configurations. This reduction in the number of triangles can represent a significant cut in computation time during a real data analysis: cutting the data vector dimension by a factor $f_\triangle$ means a similar cut in the time spent computing the estimator and theory prediction in an MCMC analysis, as well as a significantly larger cut ($\sim f_\triangle^2$) in the number of simulations required to generate the bispectrum covariance matrix.
Finally, we recommend doing the Fisher analysis before the real data analysis to identify the ideal boundaries for the regions with most information, as they could change depending on the survey setup. For example, we find that the peaked regions may move in the $\theta$ direction between different redshift bins. In the setup we explored, the peaked regions shift toward the right in the $\theta-\delta$ plane with higher redshift because of the redshift evolution of the galaxy bias and the linear growth rate.
\section{Discussion and Conclusion}
\label{sec:conclusions}
In this paper we forecast constraints on the effective number of neutrino species $N_{\rm eff}$ from various LSS surveys using the Fisher formalism, examining for the first time the impact of including bispectrum measurements of the BAO wiggles. We present two versions of the forecasts: one where the information comes from the BAO wiggles alone, which constitutes our fiducial result, and one using the total bispectrum, including both the broadband shape and the BAO wiggles. %
We find in both cases that, although the LSS constraints alone are not competitive with Planck, combining them with Planck provides a clear improvement over Planck alone ($\sigma_{N_{\rm eff}}=0.32$). This aligns with what the authors of Ref.~\cite{BaumannGreen2018} found for the power spectrum, which we also reproduce. Using the BAO wiggles only, we find that Planck+B clearly improves upon Planck alone, with an improvement in $\sigma_{N_{\rm eff}}$ ranging from 10\% to 40\% depending on the survey. There is also a notable improvement from Planck+P to Planck+B of about 5\% - 30\% depending on the survey. Planck+P+B does not in general provide better constraints than Planck+B, because the bispectrum constraints are already very good, except in the CVL case, where combining all data allows one to reach $\sigma_{N_{\rm eff}} = 0.06$.
When using the {total} bispectrum including both the broadband and the wiggles, we obtain better constraining power, as expected. The broadband information is valuable if the systematics in the measurement can be reliably controlled. Here the Planck+B constraint reaches $\sigma_{N_{\rm eff}}=0.09$ for Euclid, and as low as $\sigma_{N_{\rm eff}}=0.04$ for a CVL experiment up to $z_{\rm max} = 4$. However, measurements of the broadband are challenged by systematics and modeling uncertainties. The latter could be well controlled using an effective field theory of LSS~\cite{BaldaufMercolli2015}, especially when using a higher $k_{\rm max}$ than our fiducial choice of $k_{\rm max}=0.2\, {h\, {\rm Mpc}^{-1}}$.
We also utilize the template modeled in Ref.~\cite{BaumannGreen2018} to study the constraints from the BAO phase shift. Similarly, we see better performance from the bispectrum than from the power spectrum. However, the phase-shift constraint is not as competitive as the BAO-only or total constraints: for CVL, for example, the phase-shift constraint from the bispectrum with a Planck prior is $\sigma_{N_{\rm eff}}=0.27$. {This probe can nevertheless be useful for probing physical effects that mainly show up as a phase shift, such as isocurvature perturbations}.
Note that we have chosen $k_{\rm max}=0.2\, {h\, {\rm Mpc}^{-1}}$ {for the bispectrum forecasts},
which lies within the regime of validity of the tree-level bispectrum and linear theory. To push $k_{\rm max}$ higher into the weakly nonlinear regime, one may add higher-loop terms~\cite{LazanuLiguori2018}; alternatively, one may use the tree-level form with a nonlinear matter power spectrum and an effective second-order kernel $F_{2, \rm eff}$ fit from simulations in the nonlinear regime for the broadband modeling~\cite{ScoccimarroCouchman2001}.
To fully simulate the wiggles and the broadband in the nonlinear regime, one would measure the bispectrum directly from simulations, though this requires special simulations capable of capturing the neutrino-induced BAO {effects}.
However, this may not be necessary: due to the nonlinear damping of the BAO signal at smaller scales, the BAO information there will be limited, and extending $k_{\rm max}$ in the BAO modeling may not bring significant improvement. In principle, however, there would be more information to be gained from the broadband. EFTofLSS has been shown to be promising in this regard, and it may be worth applying it to the measurement of $N_{\rm eff}$~\cite{BaldaufMercolli2015}.
Additionally, we extended the concept of bispectrum interference, first explored in Ref.~\cite{ChildTakada2018} for the sound horizon measurement, by applying it to $N_{\rm eff}$ here. The bispectrum interference technique allows us to reduce computational cost by identifying the set of triangle configurations that contain the most information (those exhibiting constructive interference). We find for example that using only about a third of the triangles would give us half of the Fisher information in $N_{\rm eff}$ for the Euclid experiment's lowest redshift bin. This can dramatically reduce the computational challenges involved in measuring the bispectrum, especially when deriving covariance matrices from simulations.
The bispectrum interference coordinates $(k_1, \delta, \theta)$ also offer a natural way to extract the BAO wiggles from a bispectrum measurement. We attempted a naive application of the current wiggle-extraction algorithm, designed for the power spectrum, directly on the bispectrum in Appendix~\ref{app:bao-extraction}. We found that while the algorithm does not work well for the destructive interference configurations, these are exactly the configurations that do not contain much information on $N_{\rm eff}$ and can therefore be neglected in a real data analysis.
We also showed that improvements to the current algorithm are needed to obtain better results for the constructive configurations in the bispectrum: while the main shape of the damping envelope is well captured for the several $\delta = 0$ and $\delta = 2$ configurations tested, the amplitudes of the first two peaks are not accurately recovered. It may be that the periodicity assumption in the algorithm is a poorer approximation for the bispectrum interference than for the power spectrum. Therefore, improvements to the algorithm, or a rigorous characterization of the induced errors, are needed in order to directly measure BAO wiggles in the bispectrum. In our forecast, we have chosen to marginalize over a set of polynomials in order to capture some of the induced measurement errors.
In sum, the next-generation LSS surveys can improve on current constraints on $N_{\rm eff}$ from Planck by up to a factor of 2 using observations in the linear regime. Future work could include extending the modeling of the galaxy bispectrum into the weakly nonlinear regime, which may provide improvements upon even CMB-Stage-4-like experiments without requiring a more futuristic LSS survey. Developing wiggle-extraction algorithms specifically tailored to the bispectrum interference could also open doors to alternative measurements of the BAO wiggles in the bispectrum, which is useful for constraining physical effects that affect the BAOs, such as $N_{\rm eff}$ and isocurvature perturbations.
\begin{acknowledgments}
This work was partially supported by NASA grant 15-WFIRST15-0008 Cosmology with the High Latitude Survey Roman Science Investigation Team. Part of this work was done at Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. We also acknowledge support from the SPHEREx project under a contract from the NASA/GODDARD Space Flight Center to the California Institute of Technology.
The code and data sets used in this article are available at \url{https://github.com/yanlongastro/galaxy_survey}.
\end{acknowledgments}
\begin{appendix}
\section{Extraction of BAO wiggles from bispectrum}
\label{app:bao-extraction}
To extract the BAO wiggles from the measured bispectrum, one would need to transform the bispectrum from the $(k_1, k_2, k_3)$ coordinates to the $(k_1, \delta, \theta)$ coordinates.
{There the interference of the wiggles is made manifest, and wiggle-extraction algorithms similar to those used for the power spectrum can be developed.}
In Fig.~\ref{fig:bispectrum-bao-extraction}, we show the results of directly applying one of the standard algorithms developed for the power spectrum (that described in Appendix C of Ref.~\cite{BaumannGreen2018}) to a variety of $(\delta, \theta)$ configurations. In the top and bottom panels, we show the results for the constructive interference configurations $\delta = 0$ and 2, whereas in the middle panel we show the result for the destructive interference with $\delta = 1$, where the amplitude of the wiggles is about 10 times lower.
The solid lines, which we call the intrinsic wiggles, correspond to the ratio $B_{\rm m}^{\rm w}/B_{\rm m}^{\rm nw}$, where the wiggle and non-wiggle parts of the bispectrum are obtained by evaluating the tree-level expressions with $P_{\rm m}^{\rm w}$ and $P_{\rm m}^{\rm nw}$, i.e., after applying the wiggle-extraction algorithm to the power spectrum. The dashed lines, which we call the extracted wiggles, correspond to applying the extraction algorithm directly to the matter bispectrum with wiggles, which is more in line with what one would do in an actual measurement when the measured power spectrum is not used. These two methods end up defining different non-wiggle bispectra, hence the differences in the plotted ratios $B_{\rm m}^{\rm w}/B_{\rm m}^{\rm nw}$. The differences may be related to the fact that the extraction algorithm employed assumes a near-constant period in the wiggles; this assumption is broken in slightly different ways for the wiggles in the bispectrum and in the power spectrum.
It is clear from the plot that the difference is less significant for the constructive configurations ($\delta=0,2$) than for the destructive configurations ($\delta = 1$). Since one may choose to measure only the constructive configurations, where most of the information resides, it is possible to model the bispectrum wiggles by doing the split in power-spectrum space first and then using the tree-level expression to obtain the bispectrum wiggle predictions during an MCMC analysis, which would be faster than performing the split on every bispectrum configuration in the theory calculation. How well this works in practice will depend on the details of the algorithm chosen, and it should be tested on simulated data prior to use in an actual data analysis.
In sum, developing extraction algorithms specifically tailored to the bispectrum interference, perhaps without the stringent periodicity assumption, could be useful. Alternatively, characterizing the error induced when applying a non-ideal extraction algorithm would also be valuable. Finally, this illustration is done on the matter bispectrum; further tests with the galaxy bispectrum, including realistic redshift-space distortions, would be necessary as well.
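For illustration, a minimal stand-in for a wiggle/no-wiggle split is sketched below: a smooth polynomial broadband fit in log--log space applied to a toy spectrum. This is not the algorithm of Appendix C of Ref.~\cite{BaumannGreen2018} itself, and the toy spectrum and function names are ours:

```python
import numpy as np

def split_wiggles(k, pk, deg=4):
    """Toy wiggle/no-wiggle split: fit a smooth polynomial broadband in
    log-log space and define the wiggles as the ratio to that broadband.
    A stand-in for the power-spectrum algorithm referenced in the text,
    not that algorithm itself."""
    logk, logp = np.log(k), np.log(pk)
    coeffs = np.polyfit(logk, logp, deg)
    pk_nw = np.exp(np.polyval(coeffs, logk))  # smooth "no-wiggle" part
    return pk / pk_nw, pk_nw                   # (wiggle ratio, broadband)

# Toy spectrum: power law modulated by a damped BAO-like oscillation.
k = np.linspace(0.02, 0.3, 400)
pk = k**-1.5 * (1.0 + 0.05 * np.sin(k / 0.009) * np.exp(-(k / 0.2) ** 2))
ratio, pk_nw = split_wiggles(k, pk)
```

The ratio `pk / pk_nw` oscillates around unity, playing the role of $P_{\rm m}^{\rm w}/P_{\rm m}^{\rm nw}$; any realistic algorithm must additionally ensure the broadband fit does not absorb the oscillation itself.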
\section{Gaussian bispectrum covariance}
\label{app:N_tri}
To evaluate the Fisher matrix for the bispectrum, we start with the covariance matrix, i.e., the correlation between $B(\vec{k}_1, \vec{k}_2, \vec{k}_3)$ ($B$ hereafter) and $B(\vec{k}_1', \vec{k}_2', \vec{k}_3')$ ($B'$ hereafter). Considering the small ranges $(\vec{k}_i-\dd \vec{k}_i/2, \vec{k}_i + \dd \vec{k}_i/2)$ ($i=1, 2, 3$), the Gaussian contribution to the covariance matrix is given by~\cite{ChanBlot2017,YankelevichPorciani2019}
\begin{align}
{\rm Cov}(B, B') & = \frac{V}{N_{\rm tri}} s_{123} P_{\mathrm{obs}}\left(\vec k_{1}\right) P_{\mathrm{obs}}\left(\vec k_{2}\right) P_{\mathrm{obs}}\left(\vec k_{3}\right) \nonumber\\
& \times \delta_{\rm D} (\vec{k}_1+\vec{k}_1') \delta_{\rm D} (\vec{k}_2+\vec{k}_2')\delta_{\rm D} (\vec{k}_3+\vec{k}_3').
\end{align}
Here $N_{\rm tri}$ is the number of triangle modes, given by $N_{\rm tri} = V_{123}/k_{\rm f}^6$, where $k_{\rm f}^3=(2\pi)^3/V = V_{\rm f}$ is the fundamental volume and $V_{123}$ is the volume constrained by $[\vec k_i-\dd \vec k_i/2, \vec k_i + \dd \vec k_i/2]$ ($i=1,2,3$). Since $\vec{k}_1+\vec{k}_2+\vec{k}_3=0$, we only need $\vec k_1$ and $\vec{k}_2$.
To derive $V_{123}$, we may decompose the $\vec{k}$'s into spherical coordinates, e.g., $(k_1, \mu_1, \phi_1)$ and $(k_2, \mu_2, \phi_2)$, where $\phi_1$ and $\phi_2$ are azimuthal angles. Since in RSD there is azimuthal symmetry for the triangle configuration, we may set $\phi_1=0$ without loss of generality. By definition we have
\begin{align}
V_{123} & = \int_{[\vec{k}_{1, \pm}]} \dd^3 \vec{p} \int_{[\vec{k}_{2, \pm}]} \dd^3 \vec{q} \int_{\infty} \dd^3 \vec{r} \delta^{\rm D} (\vec{p}+\vec{q}+\vec{r}) \nonumber\\
& = 2 \pi k_1^2 \dd k_1 \dd \mu_1 \cdot k_2^2 \dd k_2 \dd \mu_2 \dd \phi_2.
\end{align}
Here $[\vec{k}_{i, \pm}]$ denotes a volume region constrained by $[\vec k_i-\dd \vec k_i/2, \vec k_i + \dd \vec k_i/2]$. To relate $\phi_2$ to $k_3$, we have
\begin{align}
k_3^2 & = k_1^2+k_2^2 + 2k_1k_2\cos \theta;\\
\cos \theta & = \sqrt{1-\mu_1^2}\sqrt{1-\mu_2^2} \cos \phi_2 + \mu_1 \mu_2; \\
\frac{\partial \phi_2}{\partial k_3} & = -\frac{2\pi k_3}{k_1 k_2} \cdot \Sigma(\mu_1, \mu_2, \cos \theta),
\end{align}
where (see also Ref.~\cite{YankelevichPorciani2019})
\begin{align}
\Sigma = \frac{1}{2 \pi \sqrt{1-\cos^2 \theta-\mu_{1}^{2}-\mu_{2}^{2}+2 \mu_{1} \mu_{2}\cos\theta}}.
\end{align}
One may check that $\int_0^1 \dd \mu_1 \int_0^1 \dd \mu_2 \Sigma =1$. Numerically, we would encounter singularities and waste many sampling points if we chose $\mu_1$ and $\mu_2$ as our angular coordinates, especially when $\abs{\cos \theta} \to 1 $. This is also illustrated in Fig. 1 of Ref.~\cite{YankelevichPorciani2019} (see the lower two panels, where there are no triangle configurations outside the ellipse). To avoid this problem, we perform a coordinate transformation to new coordinates $(\mu_s, \zeta)$ such that
\begin{align}
\mu_{1} \to \tilde{\mu}_{1} & =\frac{\sqrt{2} \left(\mu_{1}-\mu_{2}\right)}{2 \sqrt{1-\cos \theta }} = \cos \zeta \sqrt{1-\mu_s^2};\\
\mu_{2} \to \tilde{\mu}_{2} & =\frac{\sqrt{2} \left(\mu_{1}+\mu_{2}\right) }{2 \sqrt{1+\cos \theta }} = \sin \zeta \sqrt{1-\mu_s^2}; \\
\Sigma \dd \mu_1 \dd \mu_2 & \to \frac{1}{2\pi} \dd \mu_s \dd \zeta.
\end{align}
This setup dramatically improves the efficiency and accuracy of the integration.
We also note that $\theta$ and $2\pi - \theta$ (or equivalently $\phi_2 \to 2\pi - \phi_2$) give the same triangle configuration with opposite chirality, and this property brings in an additional factor of 2 in $V_{123}$. However, the chirality does not contribute twice if the three $\vec{k}$'s are all parallel to each other, i.e., $\cos \theta =\pm 1$. This explains the factor $\gamma(\cos \theta)$ introduced in the text, which has the explicit form (see Ref.~\cite{ChanBlot2017} for a different derivation)
\begin{align}
\gamma(x) = \begin{cases}
1 & \abs{x}<1;\\
1/2 & x = \pm 1;\\
0 & \mathrm{else}.
\end{cases}
\end{align}
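The geometric quantities above can be checked numerically. The sketch below (function names are ours) implements $\Sigma$, the inverse of the $(\mu_s, \zeta)$ transformation, and $\gamma$, and verifies that $\Sigma\, \dd\mu_1 \dd\mu_2 \to \dd\mu_s \dd\zeta/(2\pi)$, i.e., that $\Sigma$ times the Jacobian $\sqrt{1-\cos^2\theta}\,\abs{\mu_s}$ equals $1/(2\pi)$:

```python
import math

def sigma(mu1, mu2, cth):
    """Angular mode-counting density Sigma(mu1, mu2, cos theta) from the
    Gaussian bispectrum covariance."""
    disc = 1.0 - cth**2 - mu1**2 - mu2**2 + 2.0 * mu1 * mu2 * cth
    if disc <= 0.0:
        return 0.0  # outside the allowed ellipse of configurations
    return 1.0 / (2.0 * math.pi * math.sqrt(disc))

def mus_zeta_to_mu12(mu_s, zeta, cth):
    """Invert the (mu_s, zeta) coordinates back to (mu1, mu2)."""
    t1 = math.cos(zeta) * math.sqrt(1.0 - mu_s**2)   # tilde mu_1
    t2 = math.sin(zeta) * math.sqrt(1.0 - mu_s**2)   # tilde mu_2
    diff = t1 * math.sqrt(2.0 * (1.0 - cth))         # mu_1 - mu_2
    summ = t2 * math.sqrt(2.0 * (1.0 + cth))         # mu_1 + mu_2
    return 0.5 * (summ + diff), 0.5 * (summ - diff)

def gamma(x, eps=1e-12):
    """Chirality weight for degenerate (collinear) triangles."""
    ax = abs(x)
    if ax < 1.0 - eps:
        return 1.0
    if ax <= 1.0 + eps:
        return 0.5
    return 0.0
```

Algebraically, the discriminant under the square root reduces to $(1-\cos^2\theta)\,\mu_s^2$ in the new coordinates, which is why the product with the Jacobian is constant.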
\section{Phase-shift constraints on effective neutrino species}
\label{app:phase-shift}
\begin{table}
\centering
\begin{subtable}{\linewidth}
\begin{tabularx}{\textwidth}{XXXl}
\toprule
Survey & P & B & P+B \\
\midrule
BOSS & 11 & 7.6 & 6.0 \\
DESI & 3.0 & 2.0 & 1.6 \\
Euclid & 2.4 & 1.5 & 1.3\\
PFS & 8.8 & 5.9 & 4.8 \\
SPHEREx & 4.4 & 3.1 & 2.5 \\
Roman (H$\alpha$) & 6.8 & 4.3 & 3.5\\
CVL & 0.96 & 0.59 & 0.47 \\
\bottomrule
\end{tabularx}
\caption{Phase-shift-only constraints from the power spectrum and the bispectrum.}
\end{subtable}
\begin{subtable}{\linewidth}
\begin{tabularx}{\textwidth}{XXXl}
\toprule
$\alpha$-prior+Survey & P & B & P+B \\
\midrule
BOSS & 3.4 & 2.6 & 2.3 \\
DESI & 1.3 & 1.0 & 0.87 \\
Euclid & 1.0 & 0.76 & 0.70 \\
PFS & 2.7 & 1.9 & 1.6 \\
SPHEREx & 2.1 & 1.7 & 1.5 \\
Roman (H$\alpha$) & 2.0 & 1.4 & 1.3 \\
CVL & 0.39 & 0.27 & 0.23 \\
\bottomrule
\end{tabularx}
\caption{Same as above, but with an $\alpha$-prior from Planck Fisher matrix.}
\end{subtable}
\caption{Phase-shift only constraints on $N_{\rm eff}$ for various LSS surveys, with or without imposing a CMB prior on $\alpha$.
}
\label{tab:constraints-phase}
\end{table}
In this appendix we consider a third case for the analysis where the information is solely extracted from the phase shift in the BAO wiggles. To do so, we make use of the phase shift template described in Sec. \ref{sec:background}.
Here we only include $\alpha$ and $\beta$ as parameters and infer constraints on $N_{\rm eff}$ through $\beta(N_{\rm eff})$, to facilitate comparison with literature results (e.g. in Ref.~\cite{BaumannGreen2018}). Since $\alpha$ is redshift-dependent, there are $n_z$ values at different redshift bins, while $\beta$ is a constant over all bins. We evaluate the derivatives of $\alpha$ and $\beta$ from the analytical model shown in Eq. \eqref{equ:phase_shift}. With the reduced wavenumber $k_t$ defined as that inside $O$, we have %
\begin{align}
\frac{\partial P_{\rm m}^{\phi}}{\partial p} = P_{\rm m }^{\rm nw} \left.\frac{\partial O}{\partial k} \right\vert_{k=k_t} \frac{\partial k_t}{\partial p}.
\end{align}
The expression can also be extended to the galaxy power spectrum by adding terms accounting for RSD, damping, etc. However, in this prediction we ignore the additional parameters such as bias and polynomial terms, and keep only $\alpha$ and $\beta$.
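A quick numerical sanity check of this chain rule is sketched below, with toy stand-ins for the template $O(k)$ and the parameter dependence of the reduced wavenumber $k_t$ (the real expressions follow from Eq.~\eqref{equ:phase_shift}; everything here is illustrative):

```python
import math

P_NW = 2.0                                 # toy broadband amplitude

def O(k):
    """Toy oscillatory template; stands in for the BAO template O(k)."""
    return math.sin(k) / k

def k_t(p, k=1.3):
    """Toy dependence of the reduced wavenumber on a parameter p."""
    return k * (1.0 + 0.1 * p)

def P_phase(p):
    """Toy phase-shifted spectrum P^phi = P_nw * O(k_t(p))."""
    return P_NW * O(k_t(p))

def dP_dp_chain(p, h=1e-6):
    """Chain rule: dP/dp = P_nw * (dO/dk at k_t) * (dk_t/dp)."""
    dO_dk = (O(k_t(p) + h) - O(k_t(p) - h)) / (2.0 * h)
    dkt_dp = (k_t(p + h) - k_t(p - h)) / (2.0 * h)
    return P_NW * dO_dk * dkt_dp
```

Comparing `dP_dp_chain(p)` against a direct finite difference of `P_phase` confirms the factorized derivative used in the Fisher matrix.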
Note that we assumed a Gaussian likelihood function for $\beta$ here, which means that the inferred $N_{\rm eff}$ distribution using the relation between $\beta$ and $N_{\rm eff}$ will be non-Gaussian (Eq.~\ref{equ:epsilon_nu}).
As a result, we define the $1\sigma$ constraint on $N_{\rm eff}$ as $\sigma_{N_{\rm eff}} \equiv N_{\rm eff}(\beta^{\rm fid}+ \sigma_\beta)-N_{\rm eff}(\beta^{\rm fid})$.
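To illustrate this asymmetric error definition, the sketch below uses a mapping of the same functional form as Eq.~\eqref{equ:epsilon_nu}; the normalization of $\beta$ to unity at the fiducial $N_{\rm eff}$ is our illustrative choice, not necessarily the paper's exact template:

```python
import math

# Illustrative beta(N_eff) mapping mimicking the epsilon_nu form,
# with eps_nu = A*N / (1 + A*N) and A = (7/8)(4/11)^(4/3).
A = (7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0)
N_FID = 3.046

def eps_nu(neff):
    return A * neff / (1.0 + A * neff)

EPS_FID = eps_nu(N_FID)

def beta(neff):
    # normalized so beta = 1 at the fiducial N_eff (our choice)
    return eps_nu(neff) / EPS_FID

def neff_of_beta(b):
    """Closed-form inverse of beta(N_eff)."""
    eps = b * EPS_FID
    return eps / (A * (1.0 - eps))

def sigma_neff(sigma_beta, beta_fid=1.0):
    """Upper 1-sigma constraint on N_eff from a Gaussian beta posterior,
    as defined in the text: N_eff(beta_fid + sigma_beta) - N_eff(beta_fid)."""
    return neff_of_beta(beta_fid + sigma_beta) - neff_of_beta(beta_fid)
```

Because the mapping is convex, the upper error exceeds the lower one, which is the non-Gaussianity of the inferred $N_{\rm eff}$ distribution noted above.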
Similarly, for the galaxy bispectrum, we obtain $\partial B^{\phi}_{\rm g}/\partial p$ through the tree level expression. %
A prior on $\alpha$ from a CMB experiment, here Planck 2018, is also included to tighten the constraints on $\beta$. We choose the same definition as that of Ref.~\cite{BaumannGreen2018}. The prior matrix $C_\alpha$ is derived from $C_\alpha^{-1} =A^{\rm T} F A$, where $F$ is the Fisher matrix of $\Lambda$CDM from Planck 2018, and $A$ is the Moore–Penrose inverse of the matrix $\nabla_{\vec \theta} \vec{\alpha}$, which is obtained with \texttt{CAMB}. %
In Fig.~\ref{fig:dist-beta} we show the constraints on $\beta$ for different surveys. Unlike in the main text, we do not include Planck constraints here. For some of the surveys, like BOSS and PFS, the constraints alone without an $\alpha$-prior (dashed lines) are not stringent enough to distinguish between $N_{\rm eff} = 0$ and $\infty$; with an $\alpha$-prior from Planck 2018 (solid lines), the constraints improve considerably. These results are in agreement with Ref.~\cite{BaumannGreen2018}, to which we add the bispectrum results. For all the surveys, with or without the Planck prior, the bispectrum yields better constraints than the power spectrum alone, typically about a 30\% improvement, with negligible further improvement when using P+B. Tab.~\ref{tab:constraints-phase} tabulates the corresponding 1-$\sigma$ constraints on $N_{\rm eff}$.
In Fig.~\ref{fig:bis-ps}, we also reproduce the $\sigma_{\beta}$ results for toy surveys with varying $z_{\max}$ from Ref.~\cite{BaumannGreen2018}, showing in addition the bispectrum constraints. Using the same setup as in Ref.~\cite{BaumannGreen2018}, we take $f_{\rm sky}=0.5$ and a redshift range going from a fixed lower limit $z_{\min} = 0.1$ to an upper limit $z_{\max}$, with a bin width of $\Delta z = 0.1$. The galaxies are uniformly distributed inside the comoving volume enclosed between $z_{\min}$ and $z_{\max}$, while the total number of galaxies $N_{\rm g}$ remains fixed to $10^6$, $10^7$, $10^8$, and $\infty$ for the CVL case -- so the galaxy number density is constant with redshift at a given $z_{\rm max}$, and is lower for a higher $z_{\max}$ at a fixed total $N_{\rm g}$. %
Again the bispectrum performs better than the power spectrum, except in the low-number-density setups (low $N_{\rm g}$ and high $z_{\rm max}$) where shot noise dominates. We show the results with and without the $\alpha$-prior in solid and dashed lines, respectively.
\end{appendix}
\bibliography{lss,non-ads}%
|
Title:
The long stare at Hercules X-1 -- I. Emission lines from the outer disk, the magnetosphere boundary and the accretion curtain |
Abstract: Hercules X-1 is a nearly edge-on accreting X-ray pulsar with a warped
accretion disk, precessing with a period of about 35 days. The disk precession
allows for unique and changing sightlines towards the X-ray source. To
investigate the accretion flow at a variety of sightlines, we obtained a large
observational campaign on Her X-1 with XMM-Newton (380 ks exposure) and Chandra
(50 ks exposure) for a significant fraction of a single disk precession cycle,
resulting in one of the best datasets taken to date on a neutron star X-ray
binary. Here we present the spectral analysis of the High State high-resolution
grating and CCD datasets, including the extensive archival data available for
this famous system. The observations reveal a complex Fe K region structure,
with three emission line components of different velocity widths. Similarly,
the high-resolution soft X-ray spectra reveal a number of emission lines of
various widths. We correct for the uncertain gain of the EPIC-pn Timing mode
spectra, and track the evolution of these spectral components with Her X-1
precession phase and observed luminosity. We find evidence for three groups of
emission lines: one originates in the outer accretion disk (10^5 RG from the
neutron star). The second line group plausibly originates at the boundary
between the inner disk and the pulsar magnetosphere (10^3 RG). The last group
is too broad to arise in the magnetically-truncated disk and instead must
originate very close to the neutron star surface, likely from X-ray reflection
from the accretion curtain (~10^2 RG).
| https://export.arxiv.org/pdf/2208.08930 |
\thispagestyle{plain}
\newcommand{\btx}{\textsc{Bib}\TeX}
\newcommand{\thestyle}{\texttt{\filename}}
\begin{center}{\bfseries\Large
Reference sheet for \thestyle\ usage}\\
\large(Describing version \fileversion\ from \filedate)
\end{center}
\begin{quote}\slshape
For a more detailed description of the \thestyle\ package, \LaTeX\ the
source file \thestyle\texttt{.dtx}.
\end{quote}
\head{Overview}
The \thestyle\ package is a reimplementation of the \LaTeX\ |\cite| command,
to work with both author--year and numerical citations. It is compatible with
the standard bibliographic style files, such as \texttt{plain.bst}, as well as
with those for \texttt{harvard}, \texttt{apalike}, \texttt{chicago},
\texttt{astron}, \texttt{authordate}, and of course \thestyle.
\head{Loading}
Load with |\usepackage[|\emph{options}|]{|\thestyle|}|. See list of
\emph{options} at the end.
\head{Replacement bibliography styles}
I provide three new \texttt{.bst} files to replace the standard \LaTeX\
numerical ones:
\begin{quote}\ttfamily
plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst
\end{quote}
\head{Basic commands}
The \thestyle\ package has two basic citation commands, |\citet| and
|\citep| for \emph{textual} and \emph{parenthetical} citations, respectively.
There also exist the starred versions |\citet*| and |\citep*| that print
the full author list, and not just the abbreviated one.
All of these may take one or two optional arguments to add some text before
and after the citation.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90}| & Jones et al. (1990)\\
|\citet[chap.~2]{jon90}| & Jones et al. (1990, chap.~2)\\[0.5ex]
|\citep{jon90}| & (Jones et al., 1990)\\
|\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\
|\citep[see][]{jon90}| & (see Jones et al., 1990)\\
|\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex]
|\citet*{jon90}| & Jones, Baker, and Williams (1990)\\
|\citep*{jon90}| & (Jones, Baker, and Williams, 1990)
\end{tabular}
\end{quote}
\head{Multiple citations}
Multiple citations may be made by including more than one
citation key in the |\cite| command argument.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\
|\citep{jon90,jam91}| & (Jones et al., 1990; James et al. 1991)\\
|\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\
|\citep{jon90a,jon90b}| & (Jones et al., 1990a,b)
\end{tabular}
\end{quote}
\head{Numerical mode}
These examples are for author--year citation mode. In numerical mode, the
results are different.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90}| & Jones et al. [21]\\
|\citet[chap.~2]{jon90}| & Jones et al. [21, chap.~2]\\[0.5ex]
|\citep{jon90}| & [21]\\
|\citep[chap.~2]{jon90}| & [21, chap.~2]\\
|\citep[see][]{jon90}| & [see 21]\\
|\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex]
|\citep{jon90a,jon90b}| & [21, 32]
\end{tabular}
\end{quote}
\head{Suppressed parentheses}
As an alternative form of citation, |\citealt| is the same as |\citet| but
\emph{without parentheses}. Similarly, |\citealp| is |\citep| without
parentheses. Multiple references, notes, and the starred variants
also exist.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citealt{jon90}| & Jones et al.\ 1990\\
|\citealt*{jon90}| & Jones, Baker, and Williams 1990\\
|\citealp{jon90}| & Jones et al., 1990\\
|\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\
|\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\
|\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\
|\citetext{priv.\ comm.}| & (priv.\ comm.)
\end{tabular}
\end{quote}
The |\citetext| command
allows arbitrary text to be placed in the current citation parentheses.
This may be used in combination with |\citealp|.
\head{Partial citations}
In author--year schemes, it is sometimes desirable to be able to refer to
the authors without the year, or vice versa. This is provided with the
extra commands
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citeauthor{jon90}| & Jones et al.\\
|\citeauthor*{jon90}| & Jones, Baker, and Williams\\
|\citeyear{jon90}| & 1990\\
|\citeyearpar{jon90}| & (1990)
\end{tabular}
\end{quote}
\head{Forcing upper cased names}
If the first author's name contains a \textsl{von} part, such as ``della
Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the
beginning of a sentence. One can force the first letter to be in upper case
with the command |\Citet| instead. Other upper case commands also exist.
\begin{quote}
\begin{tabular}{rl@{\quad$\Rightarrow$\quad}l}
when & |\citet{dRob98}| & della Robbia (1998) \\
then & |\Citet{dRob98}| & Della Robbia (1998) \\
& |\Citep{dRob98}| & (Della Robbia, 1998) \\
& |\Citealt{dRob98}| & Della Robbia 1998 \\
& |\Citealp{dRob98}| & Della Robbia, 1998 \\
& |\Citeauthor{dRob98}| & Della Robbia
\end{tabular}
\end{quote}
These commands also exist in starred versions for full author names.
\head{Citation aliasing}
Sometimes one wants to refer to a reference with a special designation,
rather than by the authors, i.e. as Paper~I, Paper~II. Such aliases can be
defined and used, textual and/or parenthetical with:
\begin{quote}
\begin{tabular}{lcl}
|\defcitealias{jon90}{Paper~I}|\\
|\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\
|\citepalias{jon90}| & $\Rightarrow$ & (Paper~I)
\end{tabular}
\end{quote}
These citation commands function much like |\citet| and |\citep|: they may
take multiple keys in the argument, may contain notes, and are marked as
hyperlinks.
\head{Selecting citation style and punctuation}
Use the command |\bibpunct| with one optional and 6 mandatory arguments:
\begin{enumerate}
\item the opening bracket symbol, default = (
\item the closing bracket symbol, default = )
\item the punctuation between multiple citations, default = ;
\item the letter `n' for numerical style, or `s' for numerical superscript
style, any other letter for
author--year, default = author--year;
\item the punctuation that comes between the author names and the year
(default = ,);
\item the punctuation that comes between years or numbers when common author
lists are suppressed (default = ,);
\end{enumerate}
The optional argument is the character preceding a post-note, default is a
comma plus space. In redefining this character, one must include a space if
one is wanted.
Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of
\begin{quote}
|\citep{jon90,jon91,jam92}|
\end{quote}
into [Jones et al. 1990; 1991, James et al. 1992].
Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of
\begin{quote}
|\citep[and references therein]{jon90}|
\end{quote}
into (Jones et al. 1990; and references therein).
\head{Other formatting options}
Redefine |\bibsection| to the desired sectioning command for introducing
the list of references. This is normally |\section*| or |\chapter*|.
Define |\bibpreamble| to be any text that is to be printed after the heading but
before the actual list of references.
Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to
the list of references.
Define |\citenumfont| to be a font declaration or command like |\itshape|
or |\textit|.
Redefine |\bibnumfmt| as a command with an argument to format the numbers in
the list of references. The default definition is |[#1]|.
The indentation after the first line of each reference is given by
|\bibhang|; change this with the |\setlength| command.
The vertical spacing between references is set by |\bibsep|; change this with
the |\setlength| command.
\head{Automatic indexing of citations}
If one wishes to have the citations entered in the \texttt{.idx} indexing
file, it is only necessary to issue |\citeindextrue| at any point in the
document. All following |\cite| commands, of all variations, then insert
the corresponding entry to that file. With |\citeindexfalse|, these
entries will no longer be made.
\head{Use with \texttt{chapterbib} package}
The \thestyle\ package is compatible with the \texttt{chapterbib} package
which makes it possible to have several bibliographies in one document.
The package makes use of the |\include| command, and each |\include|d file
has its own bibliography.
The order in which the \texttt{chapterbib} and \thestyle\ packages are loaded
is unimportant.
The \texttt{chapterbib} package provides an option \texttt{sectionbib}
that puts the bibliography in a |\section*| instead of |\chapter*|,
something that makes sense if there is a bibliography in each chapter.
This option will not work when \thestyle\ is also loaded; instead, add
the option to \thestyle.
Every |\include|d file must contain its own
|\bibliography| command where the bibliography is to appear. The database
files listed as arguments to this command can be different in each file,
of course. However, what is not so obvious, is that each file must also
contain a |\bibliographystyle| command, \emph{preferably with the same
style argument}.
\head{Sorting and compressing citations}
Do not use the \texttt{cite} package with \thestyle; rather use one of the
options \texttt{sort} or \texttt{sort\&compress}.
These also work with author--year citations, making multiple citations appear
in their order in the reference list.
\head{Long author list on first citation}
Use option \texttt{longnamesfirst} to have first citation automatically give
the full list of authors.
Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|,
given before the first citation.
\head{Local configuration}
Any local recoding or definitions can be put in \thestyle\texttt{.cfg} which
is read in after the main package file.
\head{Options that can be added to \texttt{\char`\\ usepackage}}
\begin{description}
\item[\ttfamily round] (default) for round parentheses;
\item[\ttfamily square] for square brackets;
\item[\ttfamily curly] for curly braces;
\item[\ttfamily angle] for angle brackets;
\item[\ttfamily colon] (default) to separate multiple citations with
colons;
\item[\ttfamily comma] to use commas as separators;
\item[\ttfamily authoryear] (default) for author--year citations;
\item[\ttfamily numbers] for numerical citations;
\item[\ttfamily super] for superscripted numerical citations, as in
\textsl{Nature};
\item[\ttfamily sort] orders multiple citations into the sequence in
which they appear in the list of references;
\item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple
numerical citations are compressed if possible (as 3--6, 15);
\item[\ttfamily longnamesfirst] makes the first citation of any reference
the equivalent of the starred variant (full author list) and subsequent
citations normal (abbreviated list);
\item[\ttfamily sectionbib] redefines |\thebibliography| to issue
|\section*| instead of |\chapter*|; valid only for classes with a
|\chapter| command; to be used with the \texttt{chapterbib} package;
\item[\ttfamily nonamebreak] keeps all the authors' names in a citation on
one line; causes overfull hboxes but helps with some \texttt{hyperref}
problems.
\end{description} |
Title:
A report on the status of astrophotonics for interferometry and beyond |
Abstract: Long-baseline interferometry and high-resolution spectroscopy are two
examples of areas that have benefited from astrophotonics devices, but the
application range is expanding to other subareas and other wavelength ranges.
The VLTI has been one of the pioneering astronomical infrastructure to exploit
the potential of astrophotonics instrumentation for high-angular resolution
interferometric observations, whereas new opportunities will arise in the
context of the future ELTs. In this contribution, I review the current state of
the art regarding the interplay between photonic-based solutions and
astronomical instrumentation and highlight the growth of the field, as well as
its recognition in recent strategy surveys such as the Decadal. I will explain
the benefits of different technological platforms making use of
photolithography or laser-writing techniques. I will review the most recent
results in the field covering simulations, laboratory characterization and
on-sky prototyping. Astrophotonics may have a unique role to play in the
forthcoming era of new ground-based astronomical facilities, and possibly in
the field of space science.
| https://export.arxiv.org/pdf/2208.05380 |
\keywords{Astrophotonics, Integrated Optics, optical/infrared instrumentation, long-baseline interferometry}
\section{Introduction}
In the era of modern astronomy and astrophysics, imaging and spectroscopy are two major pillars of observing and studying our Universe.
To date, there is little debate on the critical importance of
technical and technological advances to enable new, unprecedented discoveries that can fundamentally
deepen our understanding of the Universe. This technological endeavour, without which observational
capabilities would stagnate, addresses important challenges relevant to instruments and facilities
such as, among many other examples, building large telescope apertures and interferometers, imagers delivering high-angular-resolution and high-contrast capabilities, high-precision and high-stability spectroscopic units, and state-of-the-art large-area, low-noise detectors.
The steady growth of multi-messenger astronomy requires collecting data across the whole electromagnetic spectrum, which would remain impossible without a continuous growth of astronomical instrumentation. \\
The last decade has seen tremendous progress in several areas of astrophysics, which has been supported by steadily improving instruments at, e.g., the VLT/VLTI or ALMA, whereas the James Webb Space Telescope, launched in December 2021, will deliver transformational science in the near future.
In this context, the field of astrophotonics has steadily grown at the interface between photonics and
astronomical instrumentation.
Initially expanding from the established field of optical telecommunication technologies tailored to astronomical applications, the astronomy-photonics synergy has led to the emergence of novel breakthrough concepts, whose maturity ranges from laboratory demonstrators to operational community-wide astronomical instruments.
\noindent Since the SPIE meeting on astronomical instrumentation in 2016 in Edinburgh, the organizers of the conference on optical and infrared interferometry have regularly included in the program a specific session on astrophotonics technologies to be introduced by a review of the field \cite{Labadie2016,Norris2018,Cvetojevic2020}\,. This illustrates the increasing scientific and technical relevance of astronomical photonics for interferometry, as seen in the success of the VLTI with GRAVITY or with new key instrumentation at CHARA, all using guided optics building blocks. Astrophotonics has already more than two decades of history through the fruitful collaboration of various communities that were motivated by the development of new astronomical instruments operating primarily in the visible and infrared range, and for which unique optical functionalities could be enabled thanks to photonic-based designs not accessible with a more classical bulk optics approach.
\noindent From a historical perspective, astrophotonics represents a tiny fraction of the multi-century expansion of instrumentation in astronomy, primarily because the elementary photonic components at the heart of the discipline are so recent. But revolutionary discoveries in astronomy and astrophysics have always been the fruit of patient technological and engineering innovation. And indeed, for many of us in the community, it is still fascinating to remember that the legacy of Edwin Hubble and his groundbreaking transformation of our understanding of the size and nature of the Universe is intimately linked to the tireless engineering effort of the group of people who managed to build, assemble and maintain the Hooker telescope and its 100-inch primary mirror on Mount Wilson, the world's largest optical telescope in the first half of the twentieth century. The power of that telescope was further extended by Albert Michelson, whose interferometric boom enabled the measurement of Betelgeuse's diameter. Astrophotonics has not had this influence yet, but its role in transforming modern optics for astronomy should not be underestimated.
\noindent In Section 2, I present the context of the astrophotonics activities and emphasize the growth of the community. In Section 3, I detail the so-called ``Astrophotonic flow'' and use it to present recent achievements in the field, focusing primarily on the topic of the conference, namely interferometry. In Section 4, I discuss a few bright perspectives for our community.
\section{A growing and active community}
While this review concentrates primarily on interferometry as the main topic of the conference, it is noticeable that the applications of astrophotonics go well beyond this community, triggering the interest of scientists who need to overcome major limitations of existing observational capabilities. In brief, the techniques of long-baseline interferometry, high-contrast imaging and high-resolution spectroscopy represent the three major areas relevant to the field of astrophotonics. At the time of writing, the majority of the groups active in the field or showing interest in its exploitation are located in Europe, with the largest concentration in France, Germany and the United Kingdom. The European groups are active in all areas of astrophotonics applications to the aforementioned observing techniques. A smaller, albeit very active and diversified, community is located in the area of Sydney, Australia. Finally, groups with a marked interest in high-contrast imaging techniques and high-resolution spectroscopy are found in the United States, alongside groups historically involved in the implementation of photonics for interferometry. Most groups identified in Fig.~\ref{fig2} have established close collaborations with highly complementary competencies in the areas of fabrication, characterization and integration of astrophotonics components. All share the common objective to exploit astrophotonics solutions either to increase the stability and reduce the overall complexity of an instrument compared to its bulk-optics counterpart, or to enable optical functions that can only be reliably implemented with the help of photonics. 
In this last category, I consider devices such as the fibre Bragg gratings for the efficient suppression of the narrow and bright hydroxyl sky emission lines in the near-infrared \cite{Bland-Hawthorn2011}\,, or the laser-written waveguide networks used to reconfigure a pupil or an image plane, as implemented for instance in the Dragonfly instrument \cite{Jovanovic2012}\,.
\noindent Astrophotonics is to be considered an active research field in which fundamental questions related to the phase control of a wavefront \cite{Ellis2021}\,, the exploitation and optimization of manufacturing processes \cite{Gatkine2021}\,, or the quantum manipulation of single photons \cite{Bland-Hawthorn2021} are at the heart of a vivid academic activity. Recently, a special feature focusing on astrophotonics has been jointly published by two journals of the Optical Society of America, with about thirty papers broadly addressing the progress and challenges of photonics in astronomy \cite{Dinkelaker2021a,Dinkelaker2021b}\,. Hence, as an emerging academic field, astrophotonics goes beyond the simple exploitation of the existing photonics market: it invents new concepts and proposes new ideas serving astronomical needs that rarely emerge from the conventional photonics community. The relevance of astrophotonics for the astronomical community at large, and its recognition as an academic field, is probably best illustrated by the recent first invited review of the field in the Astronomy \& Astrophysics Review \cite{Minardi2021}\,.
\noindent In an invited paper published in 2016, I represented the yield of the astrophotonics avenue for different observational techniques, such as interferometry and high-resolution spectroscopy, in the form of a graph locating different projects and initiatives in a ``spectral-resolution versus wavelength-coverage'' plot (see Fig.\,7 in Labadie et al. 2016 \cite{Labadie2016})\,. In particular, I differentiated the level of maturity by grouping them in the categories of lab demonstrators, on-sky experiments or prototypes, and community instruments. In Fig.~\ref{fig3}, I extend this view to take into account the six-year span since then. The comparison clearly illustrates how much significant progress has been achieved. Following the immense success of the GRAVITY instrument at the VLTI, other community instruments are now offered mainly as single-mode interferometers, as for instance the complementary pair MIRC-X/MYSTIC sharing the CHARA platform \cite{Monnier2018,Anugu2020}\,, or the commissioned visible SPICA interferometer \cite{Pannetier2020}\,
, all combining six telescopes. In the upper part of the plot, one can observe clear progress in the area of photonic-based functionalities for high-resolution spectroscopy. While still at the stage of lab experiments, the silica-on-silicon (SiO$_{\rm 2}$/Si) lithographic platform has been able to produce arrayed waveguide gratings (AWGs) demonstrating a spectral resolution of almost 30,000 in the H\,band over a bandwidth larger than 100\,nm \cite{Stoll2021}\,, which is to my knowledge about the highest spectral resolution demonstrated with AWGs. Similarly, in the field of static Fourier-transform spectroscopy (SWIFTS), Bonduelle et al. \cite{Bonduelle2021} have demonstrated experimentally the successful multiplexing of the SWIFTS concept, with no movable parts, to reconstruct a near-infrared spectrum with a resolution larger than $R$\,=\,30,000. Slightly deviating further from the applications of astrophotonics to interferometry, it is of high interest to report the on-sky demonstration of the concept of MCF-IFU (i.e. Multi-Core Fiber Integral Field Unit) at visible and near-infrared wavelengths by Anagnos et al. \cite{Anagnos2021} and Haffert et al. \cite{Haffert2020}\,, which emerges as a promising extension of the well-established principle of the fiber-fed spectrograph. I will briefly come back to this idea in Sect.~\ref{Remapp-printing}.
\\
It is striking to remark that research in astrophotonics remains strongly dominated by devices operating in the near-infrared range, and more particularly around 1550\,nm, which reminds us of the strong heritage of the telecommunication and semiconductor fields, despite active research programs to steer photonics towards more specific needs of astronomy. In fact, a significant effort has been undertaken to open the mid-infrared window to long-baseline interferometry, and the results obtained by Tepper et al. \cite{Tepper2017a,Tepper2017b} and Gretzinger et al. \cite{Gretzinger2019} are now converging towards the construction of the Hi-5 instrument \cite{Defrere2018}\,, the first integrated-optics based mid-infrared instrument at the VLTI.
\section{The astrophotonics flow}
The development of an astronomical instrument is always primarily motivated by the scientific objectives to be achieved, which are then broken down into technical requirements. In case the photonic option is considered more competitive than the bulk-optics one -- in particular with respect to recognized advantages such as compactness, instrumental stability, cost and potential maturity -- a development flow can be established as illustrated in Fig.~\ref{fig4}. Note that for the purpose of this article, only the cases of interferometry and spectroscopy are considered. The topic of detectors, together with the techniques of signal recording and processing, is not discussed here.
\subsection{Manufacturing aspects}
As a first consideration, the availability and maturity of the fabrication platform need to be assessed. The choice is motivated by the high-level requirements of the instrument, such as the spectral range of operation, the level of complexity of the optical functions to be implemented, and the expected throughput or insertion losses. The three main fabrication platforms that can currently be considered are the lithography platform (e.g., photolithography or E-beam lithography), the ultrafast laser writing (ULI) platform, and ion diffusion in glasses. It is outside the scope of this review to describe in detail the underlying processes and methods for each platform, and I refer the reader to the large existing literature on the topic or to the corresponding references in Labadie et al. \cite{Labadie2016} as a starting point. I only point out that I do not consider here the potential of these platforms for instrumentation in the UV spectral range or at wavelengths longer than $\sim$10\,$\mu$m, as little has been done with astrophotonics in these domains. A few important points can nonetheless be highlighted.
\begin{itemize}
\item For all three platforms, the propagation losses can be roughly estimated at less than 1\,dB/cm, in particular in the near-infrared range around 1550\,nm where silica-based photonics exhibits losses smaller than 0.1\,dB/cm. In general, the propagation losses increase significantly when using more exotic materials for the mid-infrared range (see Butcher et al. \cite{Butcher2018} at 7.85\,$\mu$m).
\item The level of field confinement is directly dependent on the achievable index contrast, which in turn determines the possible level of compactness of the chip owing to more or less important bending losses. Currently, the silica-based platform (e.g. SiO$_{\rm 2}$/Si) offers the best option for high index contrast, with achievable $\Delta$n as high as 0.1.
\item Generating sub-micron photonic structures is within reach with, for instance, e-beam lithography. However, it was recently demonstrated that the versatile laser-writing platform is also capable of generating $\sim$100\,nm-scale isotropic nanovoids that could be exploited for the development of low-resolution dispersing elements \cite{Lei2021}\,. This might be an interesting future perspective for astrophotonics.
\end{itemize}
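To put these loss figures in perspective, a loss quoted in dB converts to a linear throughput via $T = 10^{-\mathrm{loss_{dB}}/10}$. A minimal Python sketch, with illustrative chip and fiber lengths that are assumptions rather than values from any specific device:

```python
def throughput(loss_db_per_cm: float, length_cm: float) -> float:
    """Convert a propagation loss in dB/cm over a given length into a linear throughput."""
    return 10 ** (-loss_db_per_cm * length_cm / 10)

# Silica-based photonics near 1550 nm: 0.1 dB/cm over an assumed 5 cm chip
t_silica = throughput(0.1, 5.0)        # ~0.89, i.e. ~89% transmission
# A mid-infrared fiber at 0.5 dB/m over 1 m (0.005 dB/cm over 100 cm)
t_fluoride = throughput(0.005, 100.0)  # ~0.89, consistent with ~90% over 1 m
```

The same conversion explains the $\sim$90\% throughput over 1\,m quoted below for 0.5\,dB/m mid-infrared fibers.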
\subsection{Interface to the infrastructure}\label{interface}
A critical part of the flow concerns the interface with the infrastructure, whether a telescope or an interferometric array: how is the light collected at the focus of the telescope(s), and how is it injected into the photonic chip stage?
\subsubsection{Mode control}
For single apertures, most recent photonic instruments have emphasized the importance of operating in conjunction with an adaptive-optics or even extreme adaptive-optics system \cite{Jovanovic2016}\,. Indeed, the precise phase control of the incoming wavefront through modal filtering, or the circumventing of the spectrograph-telescope size relation, requires operating in the single-mode regime, which implies coupling a diffraction-limited spot into the input waveguide. In this sense, the increasing availability of adaptive optics on large telescopes significantly eases the use of single-mode photonic devices. \\
In case of a modest correction from the AO stage, mode conversion using a photonic lantern has been proposed\cite{LeonSaval2005,Thomson2011} and can be implemented in the few-mode regime or in the seeing-limited regime, for which the number of modes is approximated\cite{Harris2015} by the relation $M \sim (\pi \alpha D / 4\lambda)^2$, where $M$ is the number of modes, $\alpha$ the angular size of the PSF and $D$ the telescope diameter, all in SI units. With the mode converter, the idea is to always feed the photonic device in the single-mode regime. While the mode-converting photonic lantern has found applications for high-resolution spectroscopy\cite{Bland-Hawthorn2011,Harris2015}\,, its implementation for long-baseline interferometry is less straightforward. The main reason is that a lantern interfaced with a few-mode or seeing-limited point-spread function delivers single modes with a random, time-variable phase relationship between them, which is equivalent to multimode interferometry and essentially scrambles the measured contrast. This is the reason why all optical/infrared interferometers have equipped the array telescopes with adaptive optics, the latest example being the NAOMI adaptive optics system on the VLTI auxiliary telescopes\cite{Woillez2019} replacing the single tip-tilt system. Note however that recent progress on an all-photonic wavefront sensor by Norris et al.\cite{Norris2020} may change this perspective.
\subsubsection{Beam transport}
In long-baseline interferometry, optical fibers have long played an important role in interfacing the individual apertures with the beam combiner. This is generally done only at the level of the instrument, over a few meters, as at the VLTI or at CHARA, where an upstream classical optical train forms the optical relay and the delay lines for the beam transport to the interferometric combination lab. In this configuration, near-infrared fibers are now well developed, with exceptionally low losses of $\sim$0.2\,dB/km at 1550\,nm. High-quality Nufern\,1950 fibers cover the K-band as well, with polarization-maintaining capabilities. Commercial polarizing (PZ) fibers -- only transmitting one linear polarization whatever the input polarization state -- are also available in the near-infrared, as well as endlessly single-mode fibers and large-mode-area fibers. In other words, optical fibers for the H\,band -- and to a large extent for the K\,band as well -- do not represent any particularly strong challenge for astrophotonics.\\
Fiber transport over much larger distances, hence replacing the bulk-optics optical train, has been of interest to interferometrists for a long time, originally starting with the OHANA experiment\cite{Perrin2006b} in the K band combining the two Keck telescopes. To my knowledge, this is to date the only example of hectometric fibered links for direct-detection interferometry. Another notable example is the recent implementation of hectometric fiber links between the CHARA telescopes to support accurate stabilization of the optical path difference down to 4\,nm rms, using stretchers of polarization-maintaining fibers operating at 1550\,nm\cite{Lehmann2019}.\\
At wavelengths longer than 2.2\,$\mu$m, the situation appears less mature. Mid-infrared fluoride-based solid-core fibers are now commercially available on the market, but with poorer performance than their silica-based counterparts. To date, basic entry-level Thorlabs fibers exhibit ``only'' 0.5\,dB/m ($\sim$90\% throughput over 1\,m), which remains a relatively modest value, while the polarization properties remain generally insufficiently constrained. In Fig.~\ref{fig5}, I show a comparison of the polarization behavior of an unspecified Thorlabs fiber and of a second commercial fiber announced to be polarization-maintaining, with clearly different results indicating the low level of maturity in treating the polarization properties of these mid-infrared fibers. The Thorlabs fiber (blue curve) shows clear angular directions separated by 90$^{\circ}$ for which the input linear polarization remains unchanged, hence identifying the fast and slow axes usually reported for a polarization-maintaining (PM) fiber. On the contrary, the measurement on the supposed PM test fiber indicates a scrambling of any input linear polarization. Of course, these measurements do not set any sort of quantitative classification between manufacturers but simply suggest, based on systematic tests, the difficulty of having the polarization state of the field correctly specified through the fiber transmission.
\\
In the longer wavelength range corresponding to 10\,$\mu$m, the state of the art remains unfortunately even more primitive. Hollow-core, large-area, quasi-single-mode and multimode fibers are offered commercially with propagation losses on the order of 0.5\,dB/m as well, whereas chalcogenide fibers covering the 1--6\,$\mu$m range and polycrystalline fibers covering the 4--17\,$\mu$m range are produced, though to my knowledge none of these fibers is reported to have been used for astronomical applications yet.
\subsubsection{Remapping devices and microlenses}\label{Remapp-printing}
The last interface unit between the telescope and the photonic chip that I wish to address concerns the reformatting and/or sampling of the pupil or image plane. In pupil-remapping based aperture-masking experiments, photonic technologies have demonstrated their potential in breaking the baseline redundancy of a full telescope pupil, while preserving the advantage of the aperture-masking technique, capable of detecting a faint companion close to or even below the $\lambda$/D resolution limit\cite{Biller2012}\,. The FIRST instrument\cite{Huby2012,Huby2013} represents a classical example of a fiber-linked remapper, but noticeable progress has been obtained with ultrafast laser writing of three-dimensional pupil remappers in a single and compact glass substrate, which typically guarantees high mechanical and thermal stability in comparison to a fiber network. Pioneered in the DRAGONFLY instrument\cite{Jovanovic2012}\,, it is also used in the GLINT nulling instrument\cite{Norris2020b} as well as in the four-telescope Discrete Beam Combiner experiment\cite{Nayak2021} (see Sect.~\ref{interfero}). In these last three cases, the exploitation of the ULI technique has been key in obtaining these results.\\
While the pupil plane can be sampled using photonic devices, the same can be done with the image plane of a telescope. I already mentioned above the implementation of a photonic lantern in the image plane to serve as an all-photonic wavefront sensor\cite{Norris2020}\,, which shares a similar conceptual approach -- although different in its implementation -- to the micro-lens tip-tilt sensor of Hottinger et al.\cite{Hottinger2021}\,. In the area of telescope interfaces with image-plane sampling/remapping, a promising result was recently published in which a small portion of the extreme-AO corrected field of view of SCExAO on the 8-m Subaru telescope was sampled and remapped into a pseudo-slit feeding a high-resolution spectrograph. The all-photonic optical train -- except for the dispersing element -- was based on a multi-core fiber (MCF) on top of which a microlens array was 3D-printed to sample the image plane. At the output end of the MCF, the cores were rearranged from a 2D to a 1D linear arrangement using a remapping lantern. This prototype was optimized for the typical astrophotonic H-band\cite{Anagnos2021}\,, whereas a visible version was assembled for the 4.2-m William Herschel Telescope\cite{Haffert2020}\,. While the concept of image-plane sampling using fiber bundles or bulk-optics lenslet arrays is not new, these works demonstrate the feasibility of a one-block interface with a filling factor of the image plane of about 100\%. The 3D-printed lenslet approach remains however limited in its physical extension, which in turn limits the total size of the sampled field-of-view.
I illustrate in Fig.~\ref{fig6} this original interface flow between the telescope and the spectrograph. A possible extension of the 3D-printed lenslets towards the mid-infrared might be accessible in the future, albeit depending on the progress that can be achieved in the area of polymer resin transparency.
\subsection{Astrophotonics in Interferometry}\label{interfero}
Following the flow of Fig.~\ref{fig4}, once the interface with the telescope or the interferometric array is ensured (see Sect.~\ref{interface}), the next step is to address the astrophotonics design of the beam combination stage: which combination scheme is most appropriate to the high-level science requirements of the instrument? What is the targeted sensitivity ultimately justifying the use of photonic elements? How do the imaging capabilities (e.g., the number of apertures) connect with the technical complexity of the beam combiner? Is a stage of rapid phase control required? What about the need for low-temperature operation? In this context, I address in the following paragraphs the progress in astrophotonics for the field of interferometry.
\subsubsection{Fiber-based beam combiners}\label{fiberbased}
Using single-mode fibers to implement a multi-axial interferometric beam combination is the lowest level of complexity involving astrophotonics components. In several cases, non-redundant multi-axial fiber-based interferometers are actually referred to as bulk-optics concepts. To date, all interferometric instruments that have adopted this scheme are located at the CHARA interferometric facility, namely MIRC-X, MYSTIC and VEGA. It is also remarkable that this is so far the only scheme adopted for the interferometric combination of six telescopes.
\subsubsection{ABCD Integrated-optics (IO) beam combiners}\label{io-beamcombiners}
The GRAVITY experience -- and success -- is based on a silica-on-silicon beam combiner with four inputs and 24 outputs implementing the static ABCD combination scheme to encode the interferometric quantities and the telescope fluxes. This chip, well known to the community, and its principle are described in detail elsewhere\cite{Benisty2009,Perraut2018}\,. Here, I will only concentrate on the fact that the GRAVITY beam combiner is the only photonic chip that has operated in low-temperature conditions ($\sim$\,-80$^{\circ}$C) over the last five years. It is therefore of high interest to verify the survival conditions of this chip, and in particular of its glued connections to the fibers. Since it is not possible to remove the IO beam combiner from GRAVITY for testing, and since to my knowledge no ``ground model'' exists in the same operational conditions as the ``flight model'', we can analyze the evolution of the throughput of the instrument to at least exclude that the beam combiner is the worst offender in terms of throughput. The plot of Fig.~\ref{fig7} reports the throughput of the GRAVITY instrument fed with the internal calibration source from January 2017 to July 2022. Having no particular information on the properties of the calibration lamp, we can at least observe two patterns with relatively stable throughput over the years. The jump towards a higher transmission in mid-2019 is unrelated to the IO chip, but rather corresponds to an increased throughput due to the change of grism. One can conclude that the ``fibered integrated optics combiner'' system does not appear to be the weak point of the whole transmission chain. The reduced number of pixels required by the GRAVITY-like ABCD scheme, in comparison to the multi-axial scheme, has motivated the MYSTIC team to consider an additional mode of their instrument using the spare version of the GRAVITY beam combiner. The tests are currently on-going at CHARA (J. Monnier, private communication).
\\
\\
The success of the GRAVITY integrated optics beam combiner has motivated a first six-telescope version with a similar technology, manufactured by VLC-Photonics, to serve as a phase-delay fringe tracker at CHARA to co-phase, for instance, the SPICA instrument. This solution, for which a first chip has been produced, is currently under evaluation (cf. Pannetier et al.\cite{Pannetier2020} and Mourard et al., these proceedings).
\subsubsection{Ultrafast-laser-written (ULI) beam combiner: extension to the K\,band}\label{uli-beamcombiners}
The GRAVITY beam combiner -- and the PIONIER beam combiner in the H band before it -- is the prototype of a science-productive astrophotonics chip for interferometry, and probably the only one of its kind. In parallel, the simplicity and versatility of the ULI platform has motivated a team from the Heriot-Watt University, the University of Cologne and the AIP-Potsdam institutions to explore a new type of beam combiner based on a commercially available and highly transparent Infrasil-type substrate of the silica family. The objective is to cover the whole K\,band with a glass substrate better matching the band of operation.\\
This instrumental research has led to the manufacturing of a two-telescope Infrasil chip with photometric tapers, fiber-in/fiber-out. The careful manufacturing optimization and lab characterization have produced a fairly achromatic chip delivering about 92\% broadband interferometric contrast in unpolarized light\cite{Benoit2021}\,.
This first K\,band ULI combiner is being integrated and tested at CHARA with the JouFlu infrastructure and the NICMOS camera.
\subsubsection{Ultrafast-laser-written (ULI) beam combiner: the 4T-DBC experiment on-sky}
As an alternative to the multi-axial and ABCD beam combination schemes, the Discrete Beam Combiner option was proposed\cite{Minardi2016}\,, relying only on the evanescent coupling between channel waveguides and thus avoiding all forms of bending losses in the interferometric combiner. This architecture relies on a 2D ``zig-zag'' configuration\cite{Diener2017} allowing non-nearest-neighbor interaction, which is essential to the encoding of the coherence function \cite{Minardi2015}\,, and which naturally exploits the 3D writing capabilities of ULI, as opposed to planar integrated-optics solutions. After several steps of laboratory characterization\cite{Nayak2020}\,, a four-telescope DBC was manufactured in borosilicate glass and tested at the William Herschel Telescope, in conjunction with a $\sim$30--40\% Strehl correction by the CANARY adaptive optics system serving as fringe-tracker. It is to be noted that an advantage of the DBC architecture lies in the small number of encoded pixels in comparison to the ABCD or multi-axial schemes, in particular when increasing the number of apertures. For instance, for the multi-axial combination scheme, the minimum number of encoding pixels per wavelength channel is $\sim$30 in the four-telescope (4T) configuration and $\sim$140 in the 6T configuration, respectively. The pairwise ABCD combiner requires 24 pixels in the 4T and 60 pixels in the 6T configuration, while the DBC requires 23 pixels in the 4T and 41 pixels in the 6T configuration, respectively. The concept then needed to be tested on sky.
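The pairwise ABCD counts quoted above follow directly from four phase states (A, B, C, D) per baseline, with $N(N-1)/2$ baselines for $N$ telescopes. A one-line check (the multi-axial and DBC counts depend on design details and are not reproduced here):

```python
def abcd_pixels(n_tel: int) -> int:
    """Pixels per spectral channel for a pairwise ABCD combiner:
    four phase states per baseline, N(N-1)/2 baselines for N telescopes."""
    n_baselines = n_tel * (n_tel - 1) // 2
    return 4 * n_baselines

pix_4t = abcd_pixels(4)  # 6 baselines x 4 states = 24 pixels
pix_6t = abcd_pixels(6)  # 15 baselines x 4 states = 60 pixels
```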
\\
\\
Light from Vega and Altair was coupled into the 4T-DBC combiner, designed to fit in a classical aperture-masking/pupil-remapping experiment (see Fig.~\ref{fig10}). The injection into the DBC component is controlled via a segmented mirror conjugated to a bulk-optics microlens array. The retrieval of the coherence function was performed by inverting the calibration V2PM matrix. Squared visibilities could be reconstructed for the two sources and compared to the internal calibration source, or to the retrieval from a noisy region on the detector. Instrumental visibilities of less than unity were retrieved on the six baselines, whereas the closure phases appeared paradoxically highly noisy for self-calibrated quantities. It was concluded that, while the experiment could demonstrate the proof-of-principle of the on-sky DBC, the low-flux conditions coupled to unavoidably long integration times had limited the yield of the experiment\cite{Nayak2021}\,. In a next step, the use of a 3D-printed lenslet array could significantly improve the stability of the transfer function.
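The V2PM (visibility-to-pixel matrix) retrieval mentioned above amounts to a linear inversion: the detector pixels $p$ relate to the coherence vector $c$ through $p = \mathrm{V2PM}\cdot c$, and $c$ is recovered with the pseudo-inverse (the so-called P2VM). A minimal numpy sketch with a synthetic, randomly generated matrix standing in for the actual instrument calibration (the 23-pixel, 4-photometry + 12-coherence sizes mirror the 4T DBC figures quoted above, but the matrix entries are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic V2PM: 23 pixels encoding 4 photometries + 2x6 baseline coherence terms
n_pix, n_coh = 23, 4 + 2 * 6
v2pm = rng.normal(size=(n_pix, n_coh))

c_true = rng.normal(size=n_coh)   # "true" coherence vector
pixels = v2pm @ c_true            # simulated noiseless detector frame

p2vm = np.linalg.pinv(v2pm)       # pixel-to-visibility matrix (pseudo-inverse)
c_est = p2vm @ pixels             # retrieved coherences
```

In the noiseless, full-rank case the pseudo-inverse recovers the coherence vector exactly; with real detector noise, the same inversion gives the least-squares estimate.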
\subsubsection{Ultrafast-laser-written (ULI) beam combiner: revival of nulling}
In the last two years, nulling interferometry has experienced a rebirth, primarily thanks to the results of the GLINT instrument installed at Subaru (see Fig.~\ref{fig11}). GLINT relies on an extension of the original two-telescope prototype\cite{Norris2020b} to a four-input chip based on directional couplers and producing sixteen spectrally dispersed outputs corresponding to the six baselines (i.e., one null and one anti-null per coupler) and the four photometric channels. GLINT is interfaced through a remapping stage to the SCExAO output, while a MEMS (or segmented mirror) allows maximizing the coupling and controlling the phase shift required to stay on the dark fringe. In Martinod et al.\cite{Martinod2021}\,, the authors demonstrate in the lab an instrumental contrast of $\sim$10$^{-3}$ in dispersed light, while the on-sky observation of $\alpha$\,Boo in nulling mode has resulted in the detection of a stellar leakage of a few 10$^{-2}$, in agreement with the expected stellar diameter of this source. In a similar context, the potential of the ULI platform was further analyzed, through simulations only, by studying an integrated-optics tri-coupler in an equilateral configuration within the interaction region, serving simultaneously as an achromatic nuller and an in situ fringe-tracker\cite{Martinod2021b}\,. These results open new perspectives for nulling interferometry on other observational platforms.
\subsubsection{The Kernel-Nulling approach: a lab demonstration}
The Kernel-nulling approach based on self-calibrated quantities should allow the measurement of nulls that are less sensitive to phase excursions that are classically encountered in long-baseline interferometry from the ground\cite{Martinache2018}\,.
\\
An experimental validation of the technique has been demonstrated in the lab by Cvetojevic et al.\cite{Cvetojevic2022}\, making use of a three-input integrated optics multimode interferometer (MMI) design (see Fig.~\ref{fig12}). This is particularly relevant in the context of this astrophotonics review since the photonic chip is at the heart of this approach.
In brief, the three-input MMI produces one bright output and two ``classical'' nulled outputs that are subtracted from each other to form a kernel-null robust to random phase errors from, for instance, the fringe-tracker (see their publication for details). The MMI photonic architecture was produced using UV-photolithography and operated essentially in the H-band. The main message is that, while raw nulls of 2$\times$10$^{-3}$ were measured, the kernel distribution resulting from the subtraction of the two nulled outputs proved to be a self-calibrated quantity with respect to an induced residual piston of 100\,nm rms. The kernel null was consistent with zero within an error of 10$^{-4}$, which then permitted the detection of a simulated 10$^{-2}$ dimmer companion at a $\sim$2\,mas separation, assuming a VLTI configuration using the UTs and observing at the zenith. This laboratory demonstration marks an important step towards the assessment of nulling techniques from the ground.
\newpage
\subsubsection{On-sky remapping instruments and on-chip active phase control}
The FIRST instrument, dedicated to the remapping of the full telescope pupil in the visible and currently handling 2$\times$9 sub-apertures, has now evolved towards a ``version 2'' in which a passive integrated optics chip achieves the beam combination instead of the original multi-axial encoding\cite{Barjot2020}\,. Furthermore, FIRST-v2 is the only hybrid case where the insertion of a lithium niobate stage for active on-chip phase control is attempted\cite{Martin2020}\,, although the throughput still seems insufficient (E. Huby, private communication).\\
Regarding this point, the availability of on-chip phase modulation holds a major interest in astrophotonics for the rapid stabilization of the phase in interferometry. While lithium niobate technologies based on the electro-optic effect allow active phase control at MHz speeds, other avenues have recently been explored using the thermo-optic effect, which operates at slower kHz speeds. In this area, I wish to briefly report on the promising result obtained by Montesinos-Ballester et al.\cite{Montesinos2019}\,, who demonstrated on-chip Fourier-transform spectroscopy in the mid-infrared using an array of SiGe Mach-Zehnder interferometers in which the OPD in one arm was scanned using the thermo-optic effect. This is certainly part of future improvements of all-photonic interferometric beam combiner chips. \\
Finally, a recent interesting concept that pertains to the area of astrophotonic interferometry is the ``waveguide-free'' approach proposed by Doelman et al.\cite{Doelman2021} to perform aperture-masking on large telescopes while preventing baseline redundancy, using holographic aperture masks made of liquid-crystal phase patterns. The idea is comparable to the segment-tilting experiment on the Keck by Monnier et al.\cite{Monnier2009}\,, but without having to unphase the primary mirror. All the possibly redundant baselines are encoded at different spatial locations on the detector, taking advantage of an adequately dimensioned phase mask. Tested on-sky, the concept has remapped 83\% of the telescope pupil (30 segments out of 36), making it one of the most successful approaches to efficient pupil remapping.
\section{Discussion}\label{Discussion}
In this review, the recent progress in the field of astrophotonics for interferometry has been highlighted. No claim of exhaustivity is made here; rather, the rapid growth of the field is emphasized. I did not treat the strong potential of astrophotonics for high-resolution spectroscopy and high-contrast imaging, but it is important to understand the strong complementarity and interplay between all these techniques in terms of photonic functionalities and fabrication platforms. These three observational techniques share common grounds as seen from the perspective of astrophotonics.
I also did not emphasize in detail the recent progress in the field of mid-infrared photonics. However, I tried to cover these themes in the more general vision summarized in Fig.~\ref{fig3}. The future of up-conversion technologies pioneered by the group of Fran\c cois Reynaud (see for instance Lehmann et al.\cite{Lehmann2019b}) may also bring further novelty, unfortunately not addressed here. Finally, I fully concentrated here on prospects for ground-based observations, whereas it is likely that one of the strongest potentials of astrophotonics is in space, due to obvious small-scale integration capabilities. Groups outside the immediate astronomical environment have started to explore the impact of high-energy radiation on waveguide structures, as might be expected in a low Earth orbit space environment\cite{Piacentini2021}\,.\\
\\
Interferometry and spectroscopy are currently two important drivers for astrophotonics and, given the conference, emphasis has been set on the prospects for ground-based interferometers like the VLTI or CHARA. However, it is easy to predict that a new frontier for astrophotonics will be set by the emergence of the class of Extremely Large Telescopes, for which new innovation will be needed: considering that the primary mirror of the E-ELT is formed by 900 segments, each with a diameter of 1.4\,m, the task for interferometry-biased astrophotonicists may prove arduous, requiring significant creativity.\\
\\
A last comment regards the wider recognition of the field. The recent Decadal Survey ``Pathways to Discovery in Astronomy and Astrophysics for the 2020s'' has evoked for the first time the discipline of astrophotonics, highlighting the importance of a measured plan. The report quotes in the section K.4.7 Technology development -- Astrophotonics: {\it
Strengthening the coordination between the most active astrophotonics research groups in the United States would optimize resources and facilitate the passage from laboratory research to industrial partnership. This could be done through the creation of a distributed, multi-disciplinary Institute of Astrophotonics to coordinate the teams working in this field. The more coordinated approach adopted by Europe (Germany in particular) and Australia has led to success and leadership in this field. A few tens-of-millions of dollars of funding over the next decade would be needed to significantly advance this technology and reestablish U.S. leadership in astrophotonics.
}\\
The need for an Institute of Astrophotonics might be questionable, but the community is definitely present and active.
\acknowledgments
L. Labadie acknowledges discussions with many colleagues in the field and with interested persons and students.
\bibliography{report} %
\bibliographystyle{spiebib} %
|
Title:
Search for extended sources in the images from Chandra X-ray Observatory Advanced CCD Imaging Spectrometer |
Abstract: We present a convenient tool (ChaSES) which allows to search for extended
structures in Chandra X-ray Observatory Advanced CCD Imaging Spectrometer
(ACIS) images. The tool relies on DBSCAN clustering algorithm to detect regions
with overdensity of photons compared to the background. Here we describe the
design and functionality of the tool which we make publicly available on
GitHub. We also provide online extensive examples of its applications to the
real data.
| https://export.arxiv.org/pdf/2208.09923 |
\title{Search for extended sources in the images from Chandra X-ray Observatory Advanced CCD Imaging Spectrometer }
\correspondingauthor{Oleg Kargaltsev}
\email{kargaltsev@gwu.edu}
\author{Igor Volkov}
\altaffiliation{}
\affiliation{The George Washington University}
\author{Oleg Kargaltsev}
\altaffiliation{}
\affiliation{The George Washington University}
\keywords{methods: statistical; techniques: image processing; Astrophysics - Instrumentation and Methods for Astrophysics; Astrophysics - High Energy Astrophysical Phenomena}
\section{Abstract}
We present a convenient tool (ChaSES) that allows one to search for extended structures in Chandra X-ray Observatory Advanced CCD Imaging Spectrometer (ACIS) images. The tool relies on the DBSCAN clustering algorithm to detect regions with an overdensity of photons compared to the background.
Here we describe the design and functionality of the tool, which we make publicly available on GitHub. We also provide extensive online examples of its application to real data.
\section{Background}
Chandra X-ray Observatory (CXO) Advanced CCD Imaging Spectrometer (ACIS; \citealt{2003SPIE.4851...28G}) has taken thousands of images with unprecedented sub-arcsecond angular resolution and very low background. Therefore, even shallow Chandra images provide an opportunity
to look for faint extended sources of X-ray emission (such as supernova remnants, pulsar-wind and magnetar-wind nebulae, galaxy clusters, shocks driven by massive stars or star clusters, planetary nebulae, etc.). Although Chandra is well known for spectacular images of bright extended sources, there are no convenient tools to look for fainter extended structures that may be serendipitously imaged while observing other targets.
Given the large volume of data collected by CXO,
such a search would require a fast, efficient, and robust structure-finding algorithm. The standard source-detection tools\footnote{These are the wavdetect, celldetect, and vtpdetect tools described in https://cxc.harvard.edu/ciao/download/doc/detect\_manual/} available in CIAO\footnote{Software package developed by CXC to analyze Chandra data: http://cxc.harvard.edu/ciao/} are mostly geared toward point-source detection and, hence, do not provide accurate characterization of significantly extended (compared to the {\sl CXO} PSF) sources.
Therefore, after extensive comparison of various algorithms, we opted to use the well-known and well-tested DBSCAN clustering algorithm \citep{10.5555/3001460.3001507} available within the {\tt scikit-learn} Python library\footnote{https://scikit-learn.org/}.
\section{Under the Hood: DENSITY-BASED CLUSTERING with DBSCAN}
At the core of the ChaSES tool is scikit-learn's implementation of the DBSCAN clustering algorithm\footnote{https://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html}. This is a density-based algorithm capable of finding clusters of arbitrary shape and variable density in the presence of noise. A cluster is a set of core points (points having at least $min\_samples$ other points within a distance $\epsilon$), built by recursively taking a core point, finding all of its neighbors that are core points, finding all of their neighbors that are core points, and so on. It also includes a set of non-core boundary points (neighbors of a core point in the cluster that are not core points themselves). Higher $min\_samples$ or lower $\epsilon$ corresponds to a higher required density. The optimal values of $min\_samples$ and $\epsilon$ depend on the dataset and are often found by trial and error; there are no universally optimal values. However, we found values that are appropriate for most ACIS datasets and set them as defaults in the ChaSES GUI (see below). Users are encouraged to vary these parameters in the vicinity of these values.
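As an illustration of the clustering step described above (our own sketch, not the ChaSES source), the following snippet applies scikit-learn's DBSCAN to synthetic photon positions: a dense Gaussian ``source'' on top of a sparse uniform background. The parameter values are arbitrary placeholders rather than the ChaSES defaults.

```python
# Illustrative sketch only: DBSCAN on synthetic 2-D "photon" positions.
# Parameter values are placeholders, not the ChaSES defaults.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
background = rng.uniform(0.0, 100.0, size=(500, 2))              # sparse noise
source = rng.normal(loc=(50.0, 50.0), scale=2.0, size=(200, 2))  # overdensity
photons = np.vstack([background, source])

db = DBSCAN(eps=3.0, min_samples=10).fit(photons)
labels = db.labels_          # label -1 marks noise (background) photons
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters)
```

Background photons fall below the density threshold and are labeled as noise, while the injected overdensity is recovered as a cluster.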
\section{Pre-processing, Graphical User Interface (GUI), and Output}
The ChaSES tool\footnote{Available at https://github.com/ivv101/ChaSES} can be run in the user's web browser
or in a Python/Jupyter notebook. ChaSES allows users to analyze any observation existing in the CXO archive, specified by its unique observation ID (ObsID). The analysis consists of two parts: (1) creation of an energy-filtered event list with point sources removed and (2) search for extended structures by running DBSCAN, with the results shown graphically and in tabular form. The regions corresponding to clusters can be exported in SAO DS9 format.
Since the removal of point sources requires the user to have CIAO installed and is also computationally expensive, we pre-computed event lists with point sources removed\footnote{Note that the removal can be imperfect, but it does, on average, help to increase sensitivity to extended sources.} for
1,042 ACIS observations and put those files onto Hugging Face\footnote{https://huggingface.co/datasets/oyk100/Chandra-ACIS-clusters-data} (HF) website\footnote{Due to the large volume of data these files could not be placed on GitHub.}. Users can directly download those files from our ChaSES
GUI\footnote{ Implemented with Bokeh Python library: https://docs.bokeh.org/ } by using the ``pre-computed'' button and selecting any of the available ObsIDs. For each ObsID the photons in the ACIS event list are filtered to the 0.5-8 keV energy range. (The original event list with point sources is also included.)
On GitHub we provide the script\footnote{See the corresponding CIAO thread https://cxc.cfa.harvard.edu/ciao/threads/diffuse\_emission/} that can be used to remove point sources from any ACIS observation, when the corresponding ObsID is not present among the pre-processed datasets in our HF repository\footnote{Note, this step requires installing CIAO and downloading the entire set of data products associated with a particular ObsID. This can be done with the help of CIAO's {\tt download\_chandra\_obsid} script.}.
Once the ObsID is selected in the ChaSES GUI, the user needs to select the CCD from the corresponding drop-down menu (``ccd''), since the cluster search is performed per single CCD (to avoid interference with the gaps separating CCDs). The ``holes'' button allows one to switch between the original event files and the files where the point sources have been removed. When pressed, the ``n\_max'' button randomly under-samples the data by capping the number of photons at 20,000. This is helpful when the dataset is large and computations can hence take a long time. (De-pressing the button will cause ChaSES to run on the entire dataset.) In addition to the above, the user can change
\begin{itemize}
\item the number of pixels in the image: $nbins$, and
\item the DBSCAN algorithm parameters: $\epsilon$ (``eps'' in the GUI) and $min\_samples$.
\end{itemize}
The $\epsilon$ ($eps$) and $min\_samples$ are the two main parameters of the DBSCAN clustering algorithm. By default they are set to values that should be close to the optimal ones (see above). The cluster search is initiated by pressing the ``Apply'' button in the GUI. Once the clusters are found by DBSCAN, the background is determined by calculating the average number of photons per image pixel after excluding the regions associated with the clusters. Therefore, it depends on the choice of $nbins$, $eps$, and $min\_samples$ (which should be sensible). The average background value, after multiplying it by the cluster area, is used to calculate the chance occurrence probability ($P_{\rm chance}$) of the observed number of photons in the cluster (which sets the extended source significance) according to the Poisson distribution.
The output of ChaSES consists of the visualization of the detected clusters on top of the image (with cluster regions numbered and overplotted in different colors) and tables with the properties of the detected clusters (extended sources). These properties are the silhouette score (one of the metrics used to characterize the quality of a cluster), the area (as a fraction of the total image area), the number of net counts ($n-n_{\rm bkg}$), the detection significance, $S$ (``signif.'' in the GUI; in units of Gaussian standard deviation\footnote{$1-P_{\rm chance}=\left[\int_{-\infty}^{S} (2\pi)^{-1/2}e^{-x^2/2}dx\right]^{A/A_c}$, where $A$ and $A_c$ are the CCD and cluster areas, respectively.}), and the position of the cluster center of mass in physical ($x,y$) and celestial ($R.A., Decl.$) coordinates. The table can be filtered by detection significance using the ``min sigma'' slider located below the image.
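The significance calculation described above can be sketched numerically as follows. This is our own hedged reconstruction of the Poisson chance probability and of the area-corrected conversion to Gaussian sigma given in the footnote; the function name, variable names, and input values are ours, not taken from ChaSES.

```python
# Hedged sketch (not ChaSES code): Poisson chance probability of the observed
# counts given the background expectation, converted to Gaussian standard
# deviations with the area (trials) correction of the footnote formula.
from scipy.stats import norm, poisson

def significance(n_obs, bkg_per_pixel, cluster_area_pix, ccd_area_pix):
    mu = bkg_per_pixel * cluster_area_pix     # expected background counts
    p_chance = poisson.sf(n_obs - 1, mu)      # P(N >= n_obs | mu)
    # 1 - P_chance = Phi(S)^(A/A_c)  =>  S = Phi^-1((1 - P_chance)^(A_c/A))
    phi = (1.0 - p_chance) ** (cluster_area_pix / ccd_area_pix)
    return norm.ppf(phi)

# 12 photons in a 400-pixel cluster, 0.01 photons/pixel background, 1Mpix CCD:
print(significance(12, 0.01, 400.0, 1.0e6))
```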
The remaining adjustable parameters only affect the appearance of the image that is shown but do not affect how the detection is performed.
\section{Summary}
We developed a Python-based tool with a convenient GUI that streamlines the detection and characterization of extended sources in {\sl CXO} ACIS data. It can also be easily adapted for use with imaging data from other X-ray telescopes.
\section{Acknowledgement}
Support for this work was provided by NASA through Chandra X-ray Observatory Award AR8-19008X.
\newpage
\bibliography{ms}{}
\bibliographystyle{aasjournal}
|
Title:
Alouette: Yet another encapsulated TAUOLA, but revertible |
Abstract: We present an algorithm for simulating reverse Monte Carlo decays given an
existing forward Monte Carlo decay engine. This algorithm is implemented in the
Alouette library, a TAUOLA thin wrapper for simulating decays of tau-leptons.
We provide a detailed description of Alouette, as well as validation results.
| https://export.arxiv.org/pdf/2208.11914 |
\begin{frontmatter}
\title{Alouette: Yet another encapsulated TAUOLA, but revertible}
\author[lpc]{Valentin~Niess\corref{cor1}}
\ead{niess@in2p3.fr}
\cortext[cor1]{Corresponding author}
\address[lpc]{
Universit\'e Clermont Auvergne, CNRS/IN2P3, LPC, F-63000 Clermont-Ferrand,
France.}
\begin{keyword}
tau \sep
decay \sep
Monte Carlo \sep
reverse
\end{keyword}
\end{frontmatter}
{\bf PROGRAM SUMMARY}
\begin{small}
\noindent
{\em Program Title: Alouette} \\
{\em CPC Library link to program files:} (to be added by Technical Editor) \\
{\em Developer's repository link: https://github.com/niess/alouette } \\
{\em Code Ocean capsule:} (to be added by Technical Editor)\\
{\em Licensing provisions: LGPL-3.0 } \\
{\em Programming language: C, Fortran and Python} \\
{\em Nature of problem:
Perform reverse Monte Carlo decays. } \\
{\em Solution method:
Invert an existing forward Monte Carlo engine using the Jacobian backward
method. Apply the algorithm to $\tau$ decays generated by TAUOLA.
} \\
\end{small}
\section{Introduction}
TAUOLA~\cite{Jadach1991,Jezabek1992,Jadach1993,Golonka2006} is a reference
Monte Carlo engine for simulating decays of $\tau$-leptons. It is a long-standing
software package, initiated in the eighties and still being contributed to
nowadays (see e.g.~\citet{Davidson2012,Nugent2013} and \citet{Chrzaszcz2018}).
TAUOLA is used in the Monte Carlo simulations of many particle physics
experiments. In order to motivate the present discussion, let us emphasize a
particular use case. TAUOLA is also used by astroparticle experiments looking
at high energy, $E_\nu \geq 100\,$GeV, $\nu_\tau$ neutrinos, for example
neutrino telescopes like IceCube~\cite{Abbasi2012} or
KM3NeT~\cite{AdrianMartinez2016}, and cosmic-ray arrays like the Pierre Auger
Observatory~\cite{Allekotte2008}.
Let us briefly explain the $\nu_\tau$ use case. The transport of $\nu_\tau$
through the Earth is a coupled $\nu_\tau$-$\tau$ problem. High energy
$\nu_\tau$ undergo \ac{DIS} collisions with nuclei, converting to $\tau$
leptons in the \ac{CC} case. Since $\tau$-leptons are very short lived, they
mostly decay in flight, thus re-producing a secondary high energy $\nu_\tau$.
This $\nu_\tau$ regeneration scenario has been studied in detail in the past
(see e.g.~\citet{Bugaev2004} or \citet{Bigas2008}). As a result, $\nu_\tau$
neutrinos are more penetrating than other flavours. In addition, they have
specific signatures and detection methods. For example, several experiments aim
at detecting Earth skimming $\nu_\tau$, of cosmic origin, through the radio
signature of secondary $\tau$ decaying in the atmosphere. In particular, let us
refer to the GRAND collaboration~\cite{Alvarez-Muniz2020} for additional details
on this technique.
Accurate sensitivity estimates to high energy $\tau$ from $\nu_\tau$, for
astroparticle detectors, require sophisticated Monte Carlo computations. Various
software have been developed recently in order to address this problem (see e.g.
NuTauSim~\cite{Alvarez-Muniz2018,Alvarez-Muniz2019}, TauRunner~\cite{Safa2020},
NuPropEarth~\cite{Garcia2020} and Danton~\cite{Niess2018a}). Of those, the most
detailed Monte Carlo engines rely on TAUOLA in order to simulate $\tau$ decays,
while others use parametrisations, sometimes also derived from TAUOLA.
The efficiency of detailed sensitivity computations can be significantly
improved by using reverse Monte Carlo methods, as shown by \citet{Niess2018a}
with the Danton Monte Carlo engine~\cite{GitHub:Danton}. Reverse methods allow
one to sample specific final states by inverting the simulation flow. That is, by
running the Monte Carlo simulation from the $\tau$, at the detector level, back
to the primary $\nu_\tau$, at top of the atmosphere. However, reverse Monte
Carlo should not be mistaken with time reversal. It does not time rewind the
evolution of a stochastic system. Reverse Monte Carlo actually belongs to the
category of \ac{IS} methods.
Reverse Monte Carlo transport engines traditionally rely on the \ac{AMC}
method~\cite{Kalos1968,Eriksson1969}. Recently, an alternative \ac{BMC} method
has been developed, differing in approach from the \ac{AMC} one. \ac{BMC}
allows one to invert a Monte~Carlo procedure, considered as a stochastic
process, without the need to formulate transport equations or to compute adjoint
cross-sections. Instead, Monte Carlo events are re-weighted by the Jacobian of
the process, as for a change of variables in an integral. As a particular case,
the \ac{AMC} process can also be used in a \ac{BMC} formulation, but one is not
limited to that. Let us refer to \citet{Niess2018} where the \ac{BMC} method is
introduced in more detail.
Reverse propagating $\tau$ leptons can be done with the
PUMAS~\cite{Niess2022,GitHub:PUMAS} transport engine, which implements the
\ac{BMC} method. Using an approach similar to that of PUMAS, the ENT
library~\cite{GitHub:ENT} can reverse propagate high energy neutrinos. In
addition, reverting the $\nu_\tau$-$\tau$ transport problem requires simulating
reverse decays of $\tau$ leptons, i.e. ``undecaying'' a $\nu_\tau$ to a $\tau$.
This is the purpose of the Alouette library, presented herein. To our knowledge,
the problem of undecaying Monte Carlo particles has not been addressed
previously.
This paper is organized in two parts. In the first part, i.e.
section~\ref{sec:algorithms}, we present an algorithm for undecaying particles
using the \ac{BMC} method and an existing forward decay engine. For the sake of
clarity, the discussion specifically considers TAUOLA as forward engine. In the
second part, i.e. the following sections~\ref{sec:implementation}
and~\ref{sec:validation}, we present Alouette, a TAUOLA thin wrapper. Alouette
is meant to be simple to use for $\nu_\tau$-$\tau$ transport problems, yet
efficient and accurate. Alouette is available as a C library and as a Python
package. It can operate in forward or in backward Monte Carlo mode.
\section{Decay algorithms \label{sec:algorithms}}
\subsection{Forward Monte Carlo}
Before discussing the backward decay algorithm, let us briefly recall the
forward one. TAUOLA's Monte Carlo algorithm was described in detail in several
articles (see e.g. \citet{Jadach1991} and references therein). Let us highlight
some practical results relevant for the present discussion. A specificity of
TAUOLA is that it allows one to simulate spin dependent effects in the decay of
$\tau^+\tau^-$ pairs, e.g. as produced in $e^+e^-$ collisions. The spin states
of $\tau$ leptons are set by their production process, i.e. essentially \ac{DIS}
in the coupled $\nu_\tau$-$\tau$ transport problem. The $\tau$ spin state is
important because it significantly impacts the angular distribution of decay
products.
For the purpose of $\nu_\tau$-$\tau$ transport, let us consider only single
$\tau$ decays herein, and let us introduce some notations. Let $(E_0,
\vb{p}_0$) denote the 4-momentum of the mother $\tau$ particle in the Laboratory
frame, and let $(E_i, \vb{p}_i)$ be the momenta of the daughter decay products,
where $i \geq 1$. Note that natural units are used, where $c=1$. Thus, $E_i^2 =
\vb{p}^2_i + m_i^2$, where $m_i$ denotes the rest mass of particle $i$.
Let $E_i^\star$ ($\vb{p}_i^\star$) denote the energy (momentum) in the \ac{CM}
frame, i.e. the $\tau$ rest frame. Thus, $\vb{p}_0^\star = \vb{0}$ and
$E_0^\star = m_0$. For the following discussion on the backward decay, it is
relevant to explicitly recall the Lorentz transform from the \ac{CM} frame to
the Laboratory one. Let $\vb*{\beta}$ be the parameter of the Lorentz transform.
Then
\begin{linenomath*}
\begin{align}
\label{eq:lorentz_E}
E_i &= \gamma \left(E_i^\star + \vb*{\beta} \cdot \vb{p}_i^\star \right), \\
\label{eq:lorentz_p}
\vb{p}_i &= \vb{p}_i^\star + \left(\frac{\gamma^2}{\gamma + 1}
\vb*{\beta} \cdot \vb{p}_i^\star + \gamma E_i^\star \right) \vb*{\beta},
\end{align}
\end{linenomath*}
where $\gamma = 1 / \sqrt{1 - \vb*{\beta}^2}$. In the forward Monte Carlo case,
$\vb*{\beta}$ is determined from the mother particle properties, as $\vb*{\beta}
= \vb{p}_0 / E_0$ and $\gamma = E_0 / m_0$.
The mother's spin state is conveniently represented by a spin polarisation
vector, $\vb{s}^\star$, defined in the \ac{CM} frame (see e.g.
\citet{Jadach1984}). The spin dependent part of the differential decay width can
be factored as
\begin{linenomath*}
\begin{equation} \label{eq:differential-width}
d\Gamma = \sum_k{\left(1 + \vb{s}^\star \cdot \vb{h}^\star_k \right)
d\Gamma_{0,k}},
\end{equation}
\end{linenomath*}
where $d\Gamma_{0,k}$ is the spin-averaged differential decay width for the
$k^\text{th}$ mode. The decay polarimeter vectors, $\vb{h}^\star_k$, are
computed from the matrix elements of the different decay modes. Detailed results
can be found in TAUOLA articles (see
e.g.~\cite{Jadach1991,Jezabek1992,Jadach1993}). For the present purpose, it
suffices to note that the polarimeter vectors depend only on the decay product
momenta $\vb{p}^\star_i$ in the \ac{CM} frame. In particular, rotating the decay
products $\vb{p}_i^\star$ results in an identical rotation of the polarimeter
vectors $\vb{h}_k^\star$. Thus, equation~\eqref{eq:differential-width} allows
one to decouple the simulation of \ac{CM} polarized decays in two steps, as
outlined e.g. in \citet{Jadach1991}. First, an unpolarized decay is simulated in
the \ac{CM} frame, with corresponding polarimeter vector $\vb{h}_{0,k}^\star$.
By definition, this process has no preferred direction. Secondly, the actual
direction of the polarimeter vector, $\vb{h}^\star_k$, is randomised according
to the spin factor
\begin{linenomath*}
\begin{equation} \label{eq:spin-factor}
f_s = 1 + \vb{s}^\star \cdot \vb{h}^\star_k .
\end{equation}
\end{linenomath*}
This second step determines the actual direction of decay products, which are
rotated such that $\vb{h}_{0,k}^\star$ matches $\vb{h}^\star_k$.
In practice, the direction of the polarimeter vector can be randomised using the
inverse \ac{CDF} method. For this purpose, let us parametrise $\vb{h}^\star_k$
using spherical coordinates $(\theta^\star_k, \phi^\star_k)$ with polar axis
$\vb{u}_z = \vb{s}^\star / \|\vb{s}^\star\|$. Then, according to
equation~\eqref{eq:spin-factor}, the \ac{PDF} of the polar angle is
\begin{linenomath*}
\begin{equation} \label{eq:polarimeter-angle}
p(\theta^\star_k) = \frac{1}{2} \left(1 +
\alpha_k \cos(\theta^\star_k) \right),
\end{equation}
\end{linenomath*}
where $\alpha_k = \|\vb{s}^\star\| \|\vb{h}^\star_k\| \in [0,1]$, and where the
azimuthal angle $\phi_k^\star$ is uniformly distributed over $[0,2\pi]$. Thus,
using the inverse \ac{CDF} method, the angular coordinates of the polarimeter
vector are randomised as
\begin{linenomath*}
\begin{align}
\label{eq:polarimeter-theta}
\cos(\theta^\star_k) &= \frac{\sqrt{4 \alpha_k \xi_\theta +
(1 - \alpha_k)^2} - 1}{\alpha_k}, \\
\label{eq:polarimeter-phi}
\phi_k^\star &= 2 \pi \xi_\phi,
\end{align}
\end{linenomath*}
where $\xi_\theta$ and $\xi_\phi$ are independent random variates uniformly
distributed over $[0,1]$.
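The inverse-\ac{CDF} sampling of equations~\eqref{eq:polarimeter-theta} and \eqref{eq:polarimeter-phi} can be checked numerically. The sketch below (our own, with an arbitrary illustrative $\alpha_k$) draws $\cos(\theta^\star_k)$ and compares the sample mean with the analytic value $\alpha_k/3$ implied by the \ac{PDF} of equation~\eqref{eq:polarimeter-angle}.

```python
# Sketch of the inverse-CDF sampling of the polarimeter polar angle.
# For p(c) = (1 + alpha*c)/2 on [-1, 1], the analytic mean is <c> = alpha/3.
import numpy as np

def sample_cos_theta(alpha, size, rng):
    xi = rng.uniform(0.0, 1.0, size)
    return (np.sqrt(4.0 * alpha * xi + (1.0 - alpha) ** 2) - 1.0) / alpha

rng = np.random.default_rng(1)
alpha = 0.7                              # |s*||h*|, illustrative value
c = sample_cos_theta(alpha, 200_000, rng)
print(c.mean(), alpha / 3.0)             # should agree to ~1e-2
```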
The forward decay procedure is summarised below as
algorithm~\ref{al:forward-decay}. Note that the first step, the selection of
the decay mode, is optional: in practice, the user can specify a particular
decay mode if desired.
\begin{algorithm}[h]
\caption{Forward Monte Carlo \label{al:forward-decay}}
\vskip .5em
\begin{enumerate}[(i)]
{\item Select a decay mode $k$ with probability $p_k =
\Gamma_{0,k} / \Gamma_0$, where $\Gamma_0 = \sum_k{\Gamma_{0,k}}$.}
{\item Generate a \ac{CM} decay according to $d\Gamma_{0,k}$, i.e. assuming
an unpolarized mother. Let $\vb{p}_{0,i}^\star$ denote the momenta of
the decay products, and let $\vb{h}_{0,k}^\star$ be the corresponding
polarimeter vector.}
{\item Draw the direction $\vb{h}_k^\star$ of the polarimeter vector
according to the spin factor, $f_s = 1 + \vb{s}^\star \cdot
\vb{h}_k^\star$, using equations~\eqref{eq:polarimeter-theta} and
\eqref{eq:polarimeter-phi}.}
{\item Let $R$ denote the rotation matrix from $\vb{h}_{0,k}^\star$ to
$\vb{h}_k^\star$. Rotate the momenta of decay products accordingly, as
$\vb{p}_i^\star = R\,\vb{p}_{0,i}^\star$.}
{\item Lorentz-transform the rotated decay products to the Laboratory frame,
using equations \eqref{eq:lorentz_E} and \eqref{eq:lorentz_p}, where
$\vb*{\beta} = \vb{p}_0 / E_0$.}
\end{enumerate}
\end{algorithm}
\subsection{Backward Monte Carlo}
The backward decay problem consists of sampling the mother particle's momentum,
$\vb{p}_0$, given a specific daughter one, let us say $\vb{p}_j$. This requires
inverting the forward Monte Carlo decay process, i.e.
algorithm~\ref{al:forward-decay}.
\subsubsection{Unpolarised backward decays}
Let us first reduce the problem to an unambiguous case. Let us consider a
daughter particle that would be present in all decay modes, e.g. a $\nu_\tau$
neutrino in the case of a $\tau^-$ decay. Let us further assume that knowing the
daughter particle uniquely determines the mother's particle type, e.g. in the
case of a $\nu_\tau$ daughter the mother can only be a $\tau^-$, not a $\tau^+$.
Let us denote $L$ the forward Monte Carlo decay process corresponding to
algorithm~\ref{al:forward-decay}, defined as
\begin{linenomath*}
\begin{equation}
\vb{p}_j = L\left(\vb{p}_0, \vb{p}_j^\star \right) .
\end{equation}
\end{linenomath*}
That is, the daughter (final) momentum $\vb{p}_j$ is a function of the mother
(initial) momentum $\vb{p}_0$ and of the random variate $\vb{p}_j^\star$,
generated by the \ac{CM} decay process. The expression of $L$ can be derived
from equation~\eqref{eq:lorentz_p}, substituting $\vb*{\beta} = \vb{p}_0 / E_0$
and $\gamma = E_0 / m_0$. For the sake of clarity, let us write the result
below, as
\begin{linenomath*}
\begin{equation} \label{eq:forward-process}
L\left(\vb{p}_0, \vb{p}_j^\star \right) = \vb{p}_j^\star + \frac{1}{m_0}
\left( \frac{\vb{p}_0 \cdot \vb{p}_j^\star}{E_0 + m_0} +
E_j^\star \right) \vb{p}_0 .
\end{equation}
\end{linenomath*}
Let us further consider unpolarised decays. Then, the \ac{CM} process does not
depend on $\vb{p}_0$, and in particular $\vb{p}_j^\star$ does not. Thus,
inverting equation~\eqref{eq:forward-process} w.r.t. the first variable yields
the \ac{BMC} process $L^{\shortminus 1}$, where
\begin{linenomath*}
\begin{equation}
\vb{p}_0 = L^{\shortminus 1}(\vb{p}_j, \vb{p}_j^\star) .
\end{equation}
\end{linenomath*}
In order to perform this inversion, it is useful to notice that in the \ac{BMC}
case the Lorentz transform parameters can be obtained from daughter $j$ as
\begin{linenomath*}
\begin{align}
\label{eq:lorentz_gamma}
\gamma &= 1 + \frac{\left(\vb{p}_j - \vb{p}^\star_j\right)^2}
{E_j E^\star_j + \vb{p}_j \cdot \vb{p}^\star_j + m_j^2} , \\
\label{eq:lorentz_beta}
\vb*{\beta} &= \frac{\gamma + 1}{\gamma \left(E_j + E^\star_j\right)}
\left( \vb{p}_j - \vb{p}^\star_j \right) .
\end{align}
\end{linenomath*}
Indeed, in the \ac{BMC} case the momentum of daughter $j$ is known both in the
\ac{CM} and in the Laboratory frame. Since $\vb{p}_0 = \gamma m_0 \vb*{\beta}$,
one obtains
\begin{linenomath*}
\begin{equation} \label{eq:bmc-transform}
L^{\shortminus 1}(\vb{p}_j, \vb{p}_j^\star) =
\frac{m_0 \left(E_j + E_j^\star\right)}{
E_j E_j^\star + \vb{p}_j \cdot \vb{p}_j^\star + m_j^2}
\left(\vb{p}_j -\vb{p}_j^\star \right) .
\end{equation}
\end{linenomath*}
To summarise, an unpolarised \ac{BMC} process starts by generating a \ac{CM}
decay, as in steps (i) and (ii) of algorithm~\ref{al:forward-decay}. This
yields $\vb{p}_j^\star$. Since in the backward case $\vb{p}_j$ is already
known, one can determine the Lorentz transform parameter $\vb*{\beta}$ from
equations~\eqref{eq:lorentz_gamma} and \eqref{eq:lorentz_beta}. One then
obtains $\vb{p}_0$ from $\vb{p}_0^\star = \vb{0}$, yielding
equation~\eqref{eq:bmc-transform}. Thus, in practice unpolarised forward and
backward Monte Carlo decays are almost identical. They differ only by the
computation of $\vb*{\beta}$.
The present case provides a clear example of the difference between reverse
Monte Carlo and time reversal. The \ac{BMC} process lets us generate Monte
Carlo decays with a fixed momentum for a specific decay product, rather than
fixing the mother momentum as in forward decays. Time-reversing decays would
instead consist in determining the mother momentum from the momenta of all its
decay products.
\subsubsection{Polarised backward decays}
In the polarised case, the decay procedure cannot be directly inverted because
\ac{CM} decays depend on the unknown mother's momentum $\vb{p}_0$, through the
spin factor $f_s$. A workaround is to rely on a bias process, approximating
\ac{CM} decays, and then to reweight Monte Carlo events accordingly. A simple
bias process would be to consider unpolarised decays. However, this can be
rather inefficient when $f_s \rightarrow 0$, resulting in null weights.
Therefore, let us instead consider the following bias distribution for the spin
factor
\begin{linenomath*}
\begin{align} \label{eq:bias-factor}
f_b &= 1 + \vb{s}_b^\star \cdot \vb{h}_k^\star, \\
\label{eq:bias-spin}
\vb{s}_b^\star &= \epsilon b \frac{\vb{p}_j^\star}{\|\vb{p}_j^\star\|},
\end{align}
\end{linenomath*}
where $\epsilon = \pm 1$ depending on the $\tau$ charge, and where $b \in [-1,
1]$ is a configurable bias factor. Note that this is identical to the true spin
factor $f_s$, but substituting $\vb{s}^\star$ with $\vb{s}_b^\star$, which is
known in a backward decay. With this bias process, \ac{CM} decays can be
randomised in the backward case using the same procedure as in the forward
case, i.e. equations~\eqref{eq:polarimeter-theta} and \eqref{eq:polarimeter-phi}
but substituting $\vb{s}^\star$ with $\vb{s}_b^\star$. However, a Monte Carlo
weight
\begin{linenomath*}
\begin{align} \label{eq:spin-weight}
\omega_S &= f_s / f_b \nonumber, \\
&= \frac{1 + \vb{s}^\star \cdot \vb{h}_k^\star}{
1 + \vb{s}_b^\star \cdot \vb{h}_k^\star},
\end{align}
\end{linenomath*}
must be applied to the result in order to correct for the spin biasing.
Thus, equation~\eqref{eq:bmc-transform} can be used to obtain $\vb{p}_0$,
as in the unpolarised case, but with $\vb{p}_j^\star$ generated from the
biased \ac{CM} process.
The rationale for using equations~\eqref{eq:bias-factor}
and~\eqref{eq:bias-spin} as bias distribution is the following. High energy
$\tau$-leptons are expected to be essentially produced with a longitudinal
polarisation, left (right) handed for $\tau^-$ ($\tau^+$). In particular, this
is the case for \ac{DIS} (see e.g.~\citet{Graczyk2005}). In addition, in the
high energy limit, i.e. for $\gamma \gg 1$, the mother and daughter particles
have similar momentum directions in the Laboratory frame. Consequently, one
would typically set $b = 1$ for decays of polarised $\tau$-leptons\footnote{
When $b=1$, the denominator of equation~\eqref{eq:spin-weight} can approach
zero. Then, $\omega_S$ could tend to infinity resulting in non-convergent
Monte Carlo estimates. Thus, whenever $f_b$ is close to zero, we reject the
corresponding polarimeter direction, and instead we draw a new one.},
and $b = 0$ otherwise.
\subsubsection{Jacobian backward weight \label{sec:jacobian-weight}}
Inverting the Monte Carlo process is not sufficient for a backward procedure to
yield correct results. In addition, one must weight events by a Jacobian
factor corresponding to the change of ``integration variable'' from $\vb{p}_0$
to $\vb{p}_j$. We refer to section~2 of \citet{Niess2018} for a detailed
justification. For the present case, the backward Monte Carlo weight is
computed in \ref{sec:backward-weight}. Using equation~\eqref{eq:bmc-transform}
for $\vb{p}_0 = L^{\shortminus 1}(\vb{p}_j, \vb{p}_j^\star)$, one finds
\begin{linenomath*}
\begin{align}
\omega_J &= \left|
\frac{\partial \vb{p}_0}{\partial \vb{p}_j} \right| , \nonumber \\
\label{eq:jacobian-weight}
&= \frac{\left(E_0 + E_0^\star\right)^2 E_0}{
\left(E_j + E_j^\star\right)^2 E_j} ,
\end{align}
\end{linenomath*}
where $|\partial y / \partial x|$ denotes the determinant of the Jacobian matrix
corresponding to the change of variable from $x$ to $y$.
Let us emphasize an important property of \ac{BMC}, not discussed previously in
\citet{Niess2018}. The \ac{BMC} weight depends on the coordinate system used
for the Monte Carlo integration. Equation~\eqref{eq:jacobian-weight} assumes
that a Cartesian 3-momentum is used. However, this is not the case when working
with a flux, e.g. as in $\nu_\tau$-$\tau$ transport problems. Instead,
``spherical'' coordinates are used, i.e. the differential flux is given per unit
of momentum and of solid angle. The \ac{BMC} weight in spherical coordinates,
$(p, \cos(\theta), \phi)$, can be derived from the previous one in Cartesian
coordinates, $(p_x, p_y, p_z)$, using the composition law for Jacobians. Let
$\vb{c}$
($\vb{s}$) denote the momentum in Cartesian (spherical) coordinates. Then
\begin{linenomath*}
\begin{equation} \label{eq:cartesian-spherical}
\left| \frac{\partial \vb{s}_0}{\partial \vb{s}_j} \right| =
\left| \frac{\partial \vb{s}_0}{\partial \vb{c}_0} \right|
\left| \frac{\partial \vb{c}_0}{\partial \vb{c}_j} \right|
\left| \frac{\partial \vb{c}_j}{\partial \vb{s}_j} \right|,
\end{equation}
\end{linenomath*}
where the middle term in equation~\eqref{eq:cartesian-spherical} is given by
equation~\eqref{eq:jacobian-weight}. The two other terms correspond to the usual
Jacobian weight for changing from Cartesian to spherical coordinates, i.e.
$|\partial \vb{c} / \partial \vb{s}| = p^2$. Thus
\begin{linenomath*}
\begin{equation} \label{eq:spherical-weight}
\left| \frac{\partial \vb{s}_0}{\partial \vb{s}_j} \right| =
\left| \frac{\partial \vb{c}_0}{\partial \vb{c}_j} \right|
\frac{p_j^2}{p_0^2} .
\end{equation}
\end{linenomath*}
Alternatively, transport engines frequently use the kinetic energy, $T$,
instead of the momentum. Let $\vb{e} = (T, \vb{u})$ denote such
``energy-direction'' coordinates, where $\vb{u}$ is a unit vector giving the
momentum direction. Then, by a similar reasoning as before, one finds
\begin{linenomath*}
\begin{equation} \label{eq:energy-weight}
\left| \frac{\partial \vb{e}_0}{\partial \vb{e}_j} \right| =
\left| \frac{\partial \vb{c}_0}{\partial \vb{c}_j} \right|
\frac{p_j E_j}{p_0 E_0} .
\end{equation}
\end{linenomath*}
Note also that using the kinetic energy, $T$, or the total energy $E$ as
Monte Carlo variable does not modify the \ac{BMC} weight, since $T = E - m$,
thus $|\partial T / \partial E| = 1$.
\subsubsection{General backward algorithm}
An additional difficulty arises when the daughter particle $j$ can have multiple
mothers, or when it is not present in all decay modes. This is the case, for
example, for $\pi$-mesons in $\tau$ decays. In this case, the decay mode
selection procedure, i.e. step (i) in algorithm~\ref{al:forward-decay}, must be
generalised. Let $\Gamma_{kl}$ denote the partial decay width for mother $l$ and
mode $k$. Let $m_{jkl}$ be the multiplicity of particle $j$ for the
corresponding decay. In particular, $m_{jkl} = 0$ if particle $j$ is not a decay
product for the given mode and mother. Then, the probability to select decay
mode $k$ and mother $l$ is set to
\begin{linenomath*}
\begin{equation} \label{eq:selection-probability}
p_{jkl} = \frac{m_{jkl} \Gamma_{kl}}{
\sum\limits_{l}\sum\limits_{k}{m_{jkl} \Gamma_{kl}}} .
\end{equation}
\end{linenomath*}
In addition, when there are several daughter candidates for a given mode, i.e.
$m_{jkl} \geq 2$, e.g. as in $\tau^- \to \pi^- \pi^- \pi^+$, then one of them
must be selected as particle $j$. This is done randomly, with equal
probabilities $1 / m_{jkl}$. Thus, it is assumed that same-type daughter
particles cannot be distinguished.
Let us point out that this generalised procedure for selecting the decay mode
and the mother particle $l$ is again a bias procedure. Thus, as for the spin
factor, the biasing must be corrected by the ratio of the true selection
probability, $\Gamma_{kl} / \Gamma_l$, to the biased one, i.e. $p_{jkl}$. The
corresponding weight is
\begin{linenomath*}
\begin{align} \label{eq:selection-weight}
\omega_{jkl} = \frac{\sum\limits_{l}\sum\limits_{k}{m_{jkl}
\Gamma_{kl}}}{m_{jkl} \Gamma_l},
\end{align}
\end{linenomath*}
where $\Gamma_l$ is the total decay width of mother $l$. Note that alternative
bias selection procedures could be used in backward mode, e.g. with different
probabilities. The only validity requirement is that the bias procedure has a
non-null probability of selecting any possible mother and decay mode
combination.
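These definitions satisfy a simple consistency check: the product $p_{jkl}\,\omega_{jkl}$ reduces to the true branching ratio $\Gamma_{kl}/\Gamma_l$. The sketch below verifies this with a hypothetical width table (single mother, so the $l$ index is dropped); it is not Alouette source.

```c
/* Selection probability of eq. (selection-probability) and weight of
 * eq. (selection-weight), for a single mother (index l dropped).
 * Illustrative sketch with a hypothetical width table. */
static double selection_probability(const double gamma[], const int m[],
                                    int n, int k)
{
    double norm = 0.;
    for (int i = 0; i < n; i++) norm += m[i] * gamma[i];
    return m[k] * gamma[k] / norm;
}

static double selection_weight(const double gamma[], const int m[],
                               int n, int k)
{
    double norm = 0., total = 0.;
    for (int i = 0; i < n; i++) {
        norm += m[i] * gamma[i];
        total += gamma[i]; /* total width Gamma_l */
    }
    return norm / (m[k] * total);
}
```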
The general backward decay procedure is summarised below as
algorithm~\ref{al:backward-decay}. The total backward Monte Carlo weight, taking
bias factors into account, is
\begin{linenomath*}
\begin{align} \label{eq:backward-weight}
\omega_B = \omega_{jkl} \, \omega_J \, \omega_S,
\end{align}
\end{linenomath*}
where $\omega_{S}$, $\omega_{J}$ and $\omega_{jkl}$ have been given in
equations~\eqref{eq:spin-weight}, \eqref{eq:jacobian-weight} and
\eqref{eq:selection-weight}. One should also recall that the weight
$\omega_{J}$ actually depends on the coordinate system used for the Monte
Carlo variables, as discussed previously in section~\ref{sec:jacobian-weight}.
\begin{algorithm}[h]
\caption{Backward Monte Carlo \label{al:backward-decay}}
\vskip .5em
\begin{enumerate}[(i)]
{\item Select a mother $l$ and a decay mode $k$ with probability
$p_{jkl}$ given by equation~\eqref{eq:selection-probability}.}
{\item Generate a \ac{CM} decay according to $d\Gamma_{0,k}$, i.e. assuming
an unpolarized mother. Let $\vb{p}_{0,i}^\star$ denote the momenta of
the decay products, and let $\vb{h}_{0,k}^\star$ be the corresponding
polarimeter vector.}
{\item Draw the direction $\vb{h}_k^\star$ of the polarimeter vector
according to the bias factor, $f_b = 1 + \vb{s}_b^\star \cdot
\vb{h}_k^\star$, using equations~\eqref{eq:polarimeter-theta} and
\eqref{eq:polarimeter-phi}, but substituting $\vb{s}^\star$ with
$\vb{s}_b^\star = b\,\vb{p}_j^\star / \|\vb{p}_j^\star\|$.}
{\item Let $R$ denote the rotation matrix from $\vb{h}_{0,k}^\star$ to
$\vb{h}_k^\star$. Rotate the momenta of decay products accordingly, as
$\vb{p}_i^\star = R\,\vb{p}_{0,i}^\star$.}
{\item If there are multiple candidates for the decay product $j$, then pick
one randomly with equal probabilities.}
{\item Compute the Lorentz-transform parameter $\vb*{\beta}$ from the
daughter's momenta in the \ac{CM} and Laboratory frames, using equations
\eqref{eq:lorentz_gamma} and \eqref{eq:lorentz_beta}. Then, compute the
mother's momentum $\vb{p}_0$ using equation~\eqref{eq:lorentz_p}, as
well as the momenta $\vb{p}_{i \neq j}$ of companion decay products.}
{\item Request the true mother's spin polarisation, $\vb{s}^\star$, from the
user, given its momentum $\vb{p}_0$.}
{\item Compute the total backward Monte Carlo weight $\omega_B$, using
equation~\eqref{eq:backward-weight}.}
\end{enumerate}
\end{algorithm}
\section{Alouette implementation \label{sec:implementation}}
In this section, we discuss the implementation of Alouette (version $1.0$). The
corresponding source is hosted on GitHub~\cite{GitHub:Alouette}. Before going
into the details, let us point out that only a technical overview of Alouette
is provided herein. For more practical ``end-user'' documentation, we refer to
Read the Docs~\cite{RTD:Alouette}. The latter documentation contains
instructions for installing Alouette, as well as a summary of the C and Python
\acp{API}.
The Alouette library is structured in three layers, described in more detail in
the following subsections. The lowest layer is a C compliant encapsulation of
the TAUOLA Fortran library. The corresponding functions and global variables
are packaged with the \mintinline{C}{tauola_} prefix. This low level is
internal: its functions are not intended to be directly called by Alouette
end-users. Nevertheless, it exposes some TAUOLA specific parameters that might
be relevant for expert usage.
The second layer is a C library, \mintinline{C}{libalouette}, implementing the
algorithms described in previous section~\ref{sec:algorithms} on top of TAUOLA.
It contains two main functions, \mintinline{C}{alouette_decay} (forward mode)
and \mintinline{C}{alouette_undecay} (backward mode), as well as some related
configuration parameters.
The third layer is a Python package wrapping the C library. This layer is
optional. C users would only use the second layer, i.e.
\mintinline{C}{libalouette}.
\subsection{TAUOLA encapsulation \label{sec:tauola-encapsulation}}
In this subsection we describe the low level encapsulation of TAUOLA that has
been developed for Alouette. Let us first warn the reader that some parts of
this subsection are rather technical, since they refer to the very details of
the TAUOLA Fortran implementation. Understanding all these details is not
necessary for using the $2^\text{nd}$ and $3^\text{rd}$ software layers of
Alouette.
\subsubsection{TAUOLA distribution}
The TAUOLA library was initially released as a Fortran package. Since then, it
has been widely extended and customized. Various distributions exist today. In
particular, let us point out Tauola\pp{ }from~\citet{Davidson2012}, hosted by
CERN~\cite{Tauolapp:website}. Tauola\pp{ }is a C\pp{ }extension built over the
core Fortran package.
The algorithms discussed in section~\ref{sec:algorithms} require an initial
Monte Carlo engine performing \ac{CM} decays of polarized $\tau$-leptons. This
is done by the \mintinline{Fortran}{DEKAY} routine implemented in TAUOLA
Fortran (see e.g.~\citet{Jadach1991}). Since we are only concerned with single
$\tau$ decays, the C\pp{ }layer of Tauola\pp{ }is not relevant to us. However,
Tauola\pp{ }also maintains and updates the Fortran source of TAUOLA, under the
\mintinline{bash}{tauola-fortran} directory. The latter is used as starting
point for building the lower software layer of Alouette. Thus, in the following
when ``TAUOLA'' is mentioned, it refers to the Fortran core routines shipped
with Tauola\pp.
Namely, we use version $1.1.8$ of Tauola\pp{ }tagged ``for the LHC''. This
release includes updated parametrisations, ``new currents'', for $\tau$ decays
to $2$ and $3$ $\pi$-mesons, according to \citet{Fujikawa2008} and
\citet{Nugent2013}. However, the LHC release does not include the very latest
developments, e.g. from \citet{Chrzaszcz2018}.
\subsubsection{TAUOLA software design}
TAUOLA is a reference Monte Carlo engine for $\tau$ decays, producing sound
physics results. However, some software design choices are unfortunate when
using TAUOLA as a library in a C project. The points that we are concerned
with are listed below. But, before discussing these details, let us recall
that the core Fortran functionalities of TAUOLA were designed more than 30
years ago, in a different software context than today's. Let us further
mention that points (iv) and (v) are also discussed in \citet{Chrzaszcz2018},
and might be addressed by future TAUOLA developments.
\begin{enumerate}[(i)]
    {\item TAUOLA defines hundreds of global symbols, functions and structures,
        using Fortran 77 short names, i.e. without any library specific prefix.
        This complicates code readability when TAUOLA entities are used.
        Furthermore, it could lead to collisions with other libraries when
        TAUOLA is integrated in a larger framework.}
    {\item TAUOLA messages are directly written to the standard output instead
        of being forwarded to the library user. In addition, there is little to
        no associated severity information, e.g. debug, info, warning or error.
        This prevents integrating TAUOLA with another messaging system.}
{\item TAUOLA errors issue a hard \mintinline{Fortran}{STOP} statement,
exiting to the OS, instead of resuming back to the caller with an error
status.}
    {\item TAUOLA routines use a built-in \ac{PRNG},
        \mintinline{Fortran}{RANMAR}, shipped with the library. The \ac{PRNG}
        is not configurable at runtime, only partially via source
        pre-processing (see e.g~\citet{Golonka2006}). When integrating TAUOLA
        with another Monte Carlo, this enforces the use of
        \mintinline{Fortran}{RANMAR} if a single \ac{PRNG} is desired.
        Alternatively, one can use two different \acp{PRNG}, e.g. as in
        Tauola\pp. The latter solution can however be confusing for
        end-users.}
{\item TAUOLA was not written with concurrency in mind. For example, it uses
common blocks and static variables that might be written concurrently in
multi-threaded applications.}
\end{enumerate}
Solving the previous issues requires modifying the TAUOLA Fortran source.
Making the library thread safe would imply a significant re-writing of the
source, which is beyond the present scope. However, the other points can be
addressed with only a little refactoring.
\subsubsection{TAUOLA refactoring}
The TAUOLA source is more than $14\,$kLOC. Modifying it manually would be
tedious and error-prone. Instead, modifications are done procedurally with a
Python script, \mintinline{bash}{wrap-tauola.py}, distributed with the
Alouette source. This script maps common blocks and routines, and it builds a
call tree. Then, it applies the modifications discussed hereafter. Let us
emphasize that these modifications are only software refactoring. They do not
change any of TAUOLA's algorithms. The resulting ``refactored TAUOLA'' library
is packaged as a single file, \mintinline{bash}{tauola.f}, also distributed
with the Alouette source. In addition, a companion C header file is provided,
\mintinline{bash}{tauola.h}, for stand-alone usage of this software layer.
First, let us recall that only the \mintinline{Fortran}{DEKAY} routine is needed
for our purpose. However, TAUOLA also exports hundreds of other sub-routines as
global symbols. These sub-routines are called by \mintinline{Fortran}{DEKAY},
but not directly by end-users. Thus, they could conveniently reside in a
private scope of the library. A simple solution to this problem is to make
TAUOLA routines internal. This is achieved by relocating them into a
\mintinline{C}{tauola_decay} top routine, by enclosing them with a
\mintinline{Fortran}{CONTAINS} statement. During this process, orphan routines
not in the \mintinline{Fortran}{DEKAY} call tree are removed. Similarly, the
\mintinline{Fortran}{INIMAS}, \mintinline{Fortran}{INITDK} and
\mintinline{Fortran}{INIPHY} initialisation routines, from
\mintinline{bash}{tauolaFortranInterfaces/tauola_extras.f}, are also
internalised. Then, only the \mintinline{C}{tauola_decay} routine needs to be
exported. This top routine takes care of properly calling internal routines, for
initialisation or for decay.
In addition, explicit C \ac{ABI} names are given to external symbols, using the
\mintinline{Fortran}{BIND(C)} attribute introduced in Fortran~2003. Common
blocks, as well as the user supplied \mintinline{C}{filhep} callback function,
are prefixed with \mintinline{C}{tauola_}. The latter callback allows one to
retrieve decay products. Note also that with this method, no compiler specific
mangling occurs, e.g. C symbols have no trailing underscore.
The remaining issues, (ii), (iii) and (iv), are solved by substituting the
\mintinline{Fortran}{PRINT}, \mintinline{Fortran}{STOP} and
\mintinline{Fortran}{RANMAR} statements with C callback functions, i.e.
\mintinline{C}{tauola_print}, \mintinline{C}{tauola_stop} and
\mintinline{C}{tauola_random}. These callbacks are implemented in the second
software layer, i.e. the Alouette C library.
The \mintinline{C}{tauola_print} case deserves some more explanation. Directly
substituting a callback is not possible because print formats differ between C
and Fortran, and because variadic functions are not interoperable. A workaround
would be to redirect Fortran prints to a buffer string, i.e. keep using Fortran
formatting functions. Then, the resulting formatted string would be forwarded to
the C callback function. However, this creates an extra runtime dependency on
the Fortran library, e.g. \mintinline{bash}{libgfortran}, because Fortran
formatting functions are not part of system libraries on Unix systems. This is
unfortunate because, apart from this formatting issue, the compiled TAUOLA
library does not depend on the Fortran library.
An alternative solution would be to forward all \mintinline{Fortran}{PRINT}
arguments to C, and then to perform the parsing of the Fortran format string,
and the formatting, in C. This is however more complex, though there exist C
libraries performing Fortran formatting.
Given the previous issues, and since TAUOLA is intended to be used only
internally by Alouette, a simplified solution was adopted. Compound print
statements, involving several formatted variables, are suppressed. We observed
that those are always informative messages. Other statements must be
forwarded, since they might be associated with an error. However, in this case
only the body text is kept, without formatting. This is sufficient because the
second software layer takes care of properly configuring TAUOLA and of
checking input parameters to \mintinline{C}{tauola_decay}. Thus, low level
TAUOLA errors seldom occur. If a low level TAUOLA error nevertheless occurs,
the unformatted error message still provides some ``expert'' insight on what
happened.
\subsubsection{TAUOLA initialisation \label{sec:tauola-initialisation}}
The initialisation of TAUOLA deserves some additional explanations. First, let
us point out that our refactored TAUOLA is systematically started in ``new
currents'' mode~\cite{Fujikawa2008,Nugent2013}, i.e. by calling
\mintinline{Fortran}{INIRCHL(1)} before actually initialising TAUOLA. This is
needed in order to be able to use new currents at all. However, the legacy CLEO
parametrisation can be restored at any time by setting
\mintinline{C}{tauola_ipcht.iver} to $0$.
Secondly, TAUOLA relies on rejection sampling in order to generate Monte Carlo
decays. This requires determining $20$ maximum weights, $W_\text{max}$,
corresponding to the different decay modes (see e.g.~\citet{Jadach1991} for
more details). These weights are computed during TAUOLA's initialisation using
an opportunistic optimisation method, i.e. by keeping the maximum out of
several random trials, and by applying a $1.2$ multiplicative security factor
to the result. Consequently, TAUOLA's initialisation consumes random numbers
from the \ac{PRNG}. In addition, although the number of trials has been set
``high enough'' in order to ensure a large probability of success, this method
is not guaranteed to succeed.
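The method can be sketched in a self-contained way as follows. This is a toy stand-in, not TAUOLA code: the weight function and trial count are assumptions made for the sake of illustration.

```c
#include <stdlib.h>

/* Toy stand-in for a decay-mode weight (assumption, for illustration);
 * its true maximum over [0, 1] is 1. */
static double toy_weight(double x) { return x * x; }

static double uniform01(void)
{
    return (rand() + 0.5) / ((double)RAND_MAX + 1.);
}

/* Opportunistic estimate of W_max: keep the maximum of n random trials
 * and apply the 1.2 multiplicative security factor. */
static double estimate_wmax(int n)
{
    double wmax = 0.;
    for (int i = 0; i < n; i++) {
        const double w = toy_weight(uniform01());
        if (w > wmax) wmax = w;
    }
    return 1.2 * wmax;
}

/* Rejection sampling of x with density proportional to toy_weight. */
static double rejection_sample(double wmax)
{
    for (;;) {
        const double x = uniform01();
        if (uniform01() * wmax <= toy_weight(x)) return x;
    }
}
```

As in TAUOLA, the estimate is not guaranteed to bound the true maximum; the security factor only makes failure unlikely for a sufficient number of trials.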
In order to monitor the result of this initialisation step, the decay routines
\mintinline{Fortran}{DADMAA}, \mintinline{Fortran}{DADMEL},
\mintinline{Fortran}{DADMMU}, \mintinline{Fortran}{DADMKS},
\mintinline{Fortran}{DADMRO} and \mintinline{Fortran}{DADNEW} have been slightly
modified. Initially, the maximum weight values were stored in static variables
\mintinline{Fortran}{WTMAX}, internal to each decay routine. In the refactored
TAUOLA, $W_\text{max}$ values are exported by relocating them into new common
blocks, e.g. \mintinline{Fortran}{tauola_weight_dadmaa.wtmax}, for the
\mintinline{Fortran}{DADMAA} decay routine. This allows one to read back their
values after TAUOLA's initialisation without interfering with any TAUOLA
algorithm.
\subsection{C library}
The Alouette C library is implemented on top of the refactored TAUOLA library.
Since the latter is not thread safe, the C layer was likewise designed as not
thread safe, which facilitates its implementation.
\subsubsection{Library initialisation}
Alouette initialisation is automatic. It occurs on need, e.g. when calling the
\mintinline{C}{alouette_decay} function. Initialisation consists mainly in
calling the TAUOLA Fortran initialisation discussed previously in
section~\ref{sec:tauola-initialisation}. TAUOLA's initialisation is performed
with a dedicated (independent) instance of Alouette's internal \ac{PRNG}. Let
us point out that this step does not interfere in any way with the \ac{PRNG}
stream exposed to Alouette end-users, since different instances are used. In
addition, this dedicated \ac{PRNG} stream is seeded with a fixed value in order
to guarantee the success of TAUOLA's initialisation, i.e. proper $W_\text{max}$
values. The seed has been selected to yield median $W_\text{max}$ values for
all decay modes. This is a good compromise between speed and accuracy, as
further discussed in section~\ref{sec:seed-validation}.
At the end of TAUOLA's initialisation, partial decay widths $\Gamma_k$ are read
back from common blocks. These data are needed for the backward Monte Carlo
procedure (see e.g. equation~\eqref{eq:selection-weight}). In addition, the
multiplicities $m_{jk}$ of decay products are needed. This information is
hard-coded in Alouette source. As a consequence, Alouette $1.0$ is bound to a
specific physics implementation of TAUOLA. That is, if decay modes are added or
removed from TAUOLA, then the corresponding information must be mirrored
manually in Alouette source.
Alouette initialisation can also be triggered directly with the
\mintinline{C}{alouette_initialise} function. This is useful if non-standard
settings are needed. The latter function takes two optional parameters, as
\begin{minted}{C}
enum alouette_return alouette_initialise(
unsigned long * seed, double * xk0dec),
\end{minted}
where \mintinline{C}{seed} is
an alternative seed value for TAUOLA's initialisation, and where
\mintinline{C}{xk0dec} ($k_0^\text{decay}$) specifies the soft photon cut for
leptonic radiative decays (see e.g.~\citet{Jezabek1992}). Setting
$k_0^\text{decay} = 0$ disables radiative corrections for leptonic modes. Note
that providing a \mintinline{C}{NULL} pointer for \mintinline{C}{seed} or for
\mintinline{C}{xk0dec} results in Alouette's default value to be used for the
corresponding parameter.
\subsubsection{Error handling}
Alouette C functions indicate their execution status with an \mintinline{C}{enum
alouette_return} error code. If the execution is successful, then
\mintinline{C}{ALOUETTE_RETURN_SUCCESS} is returned. Otherwise, the return code
indicates the type of error that occurred, as:
\begin{itemize}
{\item \mintinline{C}{ALOUETTE_RETURN_VALUE_ERROR} e.g. for an
invalid input parameter value.}
{\item \mintinline{C}{ALOUETTE_RETURN_TAUOLA_ERROR} for a low level
TAUOLA error.}
\end{itemize}
The \mintinline{C}{alouette_message} function can be used in order to get a
more detailed description of the last error, as a character string. The
synopsis of this function is
\begin{minted}{C}
const char * alouette_message(void).
\end{minted}
Note that if no error occurred, then the \mintinline{C}{alouette_message}
function might still return an informative or warning message generated by
TAUOLA.
TAUOLA errors would normally trigger a hard \mintinline{Fortran}{STOP}, exiting
back to the OS. With the refactored TAUOLA discussed in
section~\ref{sec:tauola-encapsulation}, these stops are however redirected to a
\mintinline{C}{tauola_stop} callback function implemented in the C layer. Note
that it is not possible to simply \mintinline{C}{return} from the latter
callback, since this would continue TAUOLA's execution with undefined behaviour.
Instead, a jump back to the calling context must be done. Thus, before any call
to \mintinline{C}{tauola_decay}, a rally point is defined with
\mintinline{C}{setjmp}. Then, if an error occurs, the
\mintinline{C}{tauola_stop} function jumps back to the rally point using a
\mintinline{C}{longjmp}.
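The rally-point pattern can be illustrated with a self-contained sketch; the routine names below only mimic the actual callbacks and are not Alouette source.

```c
#include <setjmp.h>

static jmp_buf rally_point;

/* Mimics the tauola_stop callback: returning would resume the Fortran
 * code with undefined behaviour, so jump back to the rally point
 * instead. */
static void stop_callback(void)
{
    longjmp(rally_point, 1);
}

/* Mimics a low-level routine hitting a fatal error. */
static void faulty_routine(void)
{
    stop_callback();
    /* never reached */
}

/* Guarded call: returns 0 on success, or -1 if the low-level routine
 * aborted through the stop callback. */
static int guarded_call(void)
{
    if (setjmp(rally_point) != 0) return -1; /* resumed after longjmp */
    faulty_routine();
    return 0;
}
```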
\subsubsection{Random stream}
The Alouette library embeds a Mersenne Twister \ac{PRNG}
from~\citet{Matsumoto1998}. Version MT19937 is used. The generator is exposed to
users as a function pointer
\begin{minted}{C}
extern float (*alouette_random)(void),
\end{minted}
delivering a pseudo random \mintinline{C}{float} in $(0,1)$. The
\mintinline{C}{alouette_random_set} function allows one to (re)set the random
stream with a given seed. Its synopsis is
\begin{minted}{C}
void alouette_random_set(unsigned long * seed).
\end{minted}
A \mintinline{C}{NULL} pointer can be provided as \mintinline{C}{seed} argument,
in which case the seed value is drawn from the OS entropy using
\mintinline{bash}{/dev/urandom}. The current seed value can be retrieved using
\begin{minted}{C}
unsigned long alouette_random_seed(void).
\end{minted}
Let us point out that \mintinline{C}{alouette_random} is the single \ac{PRNG}
stream used both by TAUOLA and by Alouette. This is achieved by redirecting
\mintinline{Fortran}{RANMAR} calls, as explained previously in
section~\ref{sec:tauola-encapsulation}. In addition, users can provide their
own \ac{PRNG} by overriding the \mintinline{C}{alouette_random} function
pointer. Note however that in this case, Alouette's internal \ac{PRNG} is still
used for TAUOLA's initialisation.
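The override mechanism is a plain function pointer. A self-contained sketch follows; the generators below are stand-ins (a minimal LCG and a constant stream), not Alouette's embedded MT19937.

```c
/* Default stream: a minimal 64-bit LCG standing in for the embedded
 * MT19937 (illustration only). */
static unsigned long long lcg_state = 1ULL;

static float lcg_random(void)
{
    lcg_state = lcg_state * 6364136223846793005ULL
        + 1442695040888963407ULL;
    /* Map the top 23 bits to the open interval (0, 1). */
    return (float)(((lcg_state >> 41) + 0.5) / 8388608.);
}

/* Constant user stream, standing in for a user-provided PRNG. */
static float user_random(void) { return 0.5f; }

/* Library-style function pointer, as for alouette_random. */
static float (*my_random)(void) = &lcg_random;
```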
\subsubsection{Decay functions}
Forward or backward Monte Carlo decays of $\tau$-leptons are simulated by the
\mintinline{C}{alouette_decay} or \mintinline{C}{alouette_undecay} functions,
respectively. These functions implement algorithms~\ref{al:forward-decay}
and~\ref{al:backward-decay}. Step (ii), the \ac{CM} decay, is performed by the
\mintinline{Fortran}{DEKAY} function from TAUOLA's refactored interface. Other
steps are done in the C layer. In addition, the numeric output of step (ii) is
checked in the C layer, for \mintinline{C}{nan} and \mintinline{C}{inf}. Indeed,
in some rare cases the \mintinline{Fortran}{DEKAY} function might return an
invalid polarimeter vector, as discussed in section~3.2 of
\citet{Chrzaszcz2018}. Whenever this happens, the \ac{CM} decay is discarded and
a new one is simulated.
The decay function has the following synopsis
\begin{minted}{C}
enum alouette_return alouette_decay(
int mode, int pid, const double momentum[3],
const double * polarisation,
struct alouette_products * products).
\end{minted}
It takes as input the mother \ac{PID}, according to the \ac{PDG}, as well as the
mother 3-momentum. Decay products are stored in a \mintinline{C}{struct
alouette_products}. This is a dedicated storage structure using fixed size
arrays. The structure is tailored for up to 7 decay products, the maximum
possible according to TAUOLA (see e.g. table~\ref{tab:decay-modes}). It is
defined as
\begin{minted}{C}
struct alouette_products {
int size, pid[7];
double P[7][4], polarimeter[3], weight;
}.
\end{minted}
The actual number of decay products is encoded in the \mintinline{C}{size}
field. Other fields record the \ac{PID} and 4-momenta of decay products. The
\mintinline{C}{weight} field is not used in the forward Monte Carlo case. For
consistency with the backward case it is set to $1$.
Let us point out that the \mintinline{C}{alouette_products} structure is not
intended for bulk storage of generic Monte Carlo data; using a fixed size is
not optimal for that purpose. However, it is efficient as a temporary (volatile)
format, since the number of decay products is not known a priori when calling
\mintinline{C}{alouette_decay}.
\begin{table}
\caption{$\tau^-$ decay modes and sub-modes available in Alouette $1.0$.
Composite modes are labelled with a $^*$, and their sub-modes are indicated
underneath. Note that leptonic modes, indexed 1 and 2, might radiate an
additional $\gamma$.
\label{tab:decay-modes}}
\center
\begin{tabular}{ll}
\toprule
Index & Products \\
\midrule
$1$ & $\nu_\tau\; \overline{\nu}_e\; e^-\; (\gamma)$ \\
$2$ & $\nu_\tau\; \overline{\nu}_\mu\; \mu^-\; (\gamma)$ \\
$3$ & $\nu_\tau\; \pi^-$ \\
$4$ & $\nu_\tau\; \pi^-\; \pi^0$ \\
$5^*$ & $\nu_\tau\; a_1^-$ \\
$\ 501$ & $\nu_\tau\; 2 \pi^-\; \pi^+$ \\
$\ 502$ & $\nu_\tau\; \pi^-\; 2 \pi^0$ \\
$6$ & $\nu_\tau\; K^-$ \\
$7^*$ & $\nu_\tau\; K^{*-}$ \\
$\ 701$ & $\nu_\tau\; \pi^-\; K_S $ \\
$\ 702$ & $\nu_\tau\; \pi^-\; K_L $ \\
$\ 703$ & $\nu_\tau\; \pi^0\; K^- $ \\
$8$ & $\nu_\tau\; 2 \pi^-\; \pi^0\; \pi^+$ \\
$9$ & $\nu_\tau\; \pi^-\; 3 \pi^0$ \\
$10$ & $\nu_\tau\; 2 \pi^-\; 2 \pi^0\; \pi^+$ \\
$11$ & $\nu_\tau\; 3 \pi^-\; 2 \pi^+$ \\
$12$ & $\nu_\tau\; 3 \pi^-\; \pi^0\; 2 \pi^+$ \\
$13$ & $\nu_\tau\; 2 \pi^-\; 3 \pi^0\; \pi^+$ \\
\bottomrule
\end{tabular}
\quad
\begin{tabular}{ll}
\toprule
Index & Products \\
\midrule
$14$ & $\nu_\tau\; \pi^-\; K^-\; K^+$ \\
$15^*$ & $\nu_\tau\; \pi^-\; K^0\; \overline{K}^0$ \\
$\ 1501$ & $\nu_\tau\; \pi^-\; 2 K_S$ \\
$\ 1502$ & $\nu_\tau\; \pi^-\; K_S\; K_L$ \\
$\ 1503$ & $\nu_\tau\; \pi^-\; 2 K_L$ \\
$16^*$ & $\nu_\tau\; \pi^0\; K^0\; K^-$ \\
$\ 1601$ & $\nu_\tau\; \pi^0\; K_S\; K^-$ \\
$\ 1602$ & $\nu_\tau\; \pi^0\; K_L\; K^-$ \\
$17$ & $\nu_\tau\; 2 \pi^0\; K^-$ \\
$18$ & $\nu_\tau\; \pi^-\; \pi^+\; K^-$ \\
$19^*$ & $\nu_\tau\; \pi^-\; \pi^0\; \overline{K}^0$ \\
$\ 1901$ & $\nu_\tau\; \pi^-\; \pi^0\; K_S$ \\
$\ 1902$ & $\nu_\tau\; \pi^-\; \pi^0\; K_L$ \\
$20$ & $\nu_\tau\; \pi^-\; \pi^0\; \eta$ \\
$21$ & $\nu_\tau\; \pi^-\; \pi^0\; \gamma$ \\
$22^*$ & $\nu_\tau\; K^-\; K^0$ \\
$\ 2201$ & $\nu_\tau\; K^-\; K_S$ \\
$\ 2202$ & $\nu_\tau\; K^-\; K_L$ \\
\bottomrule
\end{tabular}
\end{table}
As in TAUOLA, one can also enforce a specific decay mode when calling an
Alouette decay function. The decay mode is indicated as an integer number,
where ``$0$'' stands for all modes. The TAUOLA version wrapped by Alouette $1.0$
(i.e. Tauola\pp{} $1.1.8$ for LHC) considers 22 decay modes\footnote{In
comparison, the TAUOLA version from \citet{Chrzaszcz2018} provides 196 decay
modes.}, described in table~\ref{tab:decay-modes}. Some decay modes are
composite. They proceed through resonances, e.g. $\tau^- \to a_1^- \nu_\tau$, or
they result in $K^0$ particles rendered by TAUOLA as $K_S$ or as $K_L$. In
these cases, the decay products vary randomly for a given mode, which is
problematic for the backward procedure described in
section~\ref{sec:algorithms}. Therefore, a normalisation procedure is applied,
as follows.
Whenever a decay mode can lead to different decay products, Alouette defines
sub-modes for each case. These sub-modes are indexed as $i = 100 m + s$, where
$m$ is TAUOLA's mode index and $s$ the sub-mode index. For example, for the
$5^\text{th}$ mode, $\tau^- \to a_1^- \nu_\tau $, two sub-modes are simulated by
TAUOLA, $a_1^- \to 2 \pi^-\, \pi^+$ and $a_1^- \to \pi^-\, 2 \pi^0$, which are
respectively indexed as $501$ and $502$ by Alouette. The
relative branching ratios of sub-modes are encoded in TAUOLA common blocks, e.g.
\mintinline{C}{tauola_taukle.bra1} for $a_1$. This allows one to compute the
total branching ratio of a given sub-mode. Then, in step (i) of
algorithm~\ref{al:backward-decay}, composite decay modes are replaced by their
sub-modes. In addition, one needs to enforce a specific sub-mode of decay in
step (ii) whenever it is selected. This is achieved by temporarily overriding
TAUOLA's relative branching ratios for sub-modes, e.g. setting
\mintinline{C}{tauola_taukle.bra1 = 1} enforces simulating mode $501$. Note
that this is a proper (intended) usage of TAUOLA, as described in section~6 of
\citet{Jadach1993}.
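The sub-mode indexing described above can be sketched with a pair of small
helper functions. This is an illustrative sketch only; the function names are
hypothetical and not part of Alouette's \ac{API}.

```python
def encode_submode(mode: int, sub: int) -> int:
    """Combine a TAUOLA mode index m and a sub-mode index s as i = 100 * m + s."""
    return 100 * mode + sub


def decode_submode(index: int) -> tuple:
    """Split an Alouette mode index into (mode, sub-mode).

    Plain (non-composite) modes have index < 100 and no sub-mode,
    encoded here as 0.
    """
    if index < 100:
        return index, 0
    return divmod(index, 100)


# Mode 5 (tau- -> a1- nu_tau) has sub-modes indexed 501 and 502.
assert encode_submode(5, 1) == 501
assert decode_submode(502) == (5, 2)
```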
Let us point out that due to radiative corrections, the leptonic decay modes,
indexed as 1 and 2, are still composite. The decay products might contain an
extra $\gamma$, or not. In this case, it is not possible to explicitly select
between the two sub-modes. As a result, Alouette cannot backward decay $\gamma$
particles to $\tau$-leptons.
The interface of the undecay function is similar to the decay one. Its synopsis
is
\begin{minted}{C}
enum alouette_return alouette_undecay(
int mode, int pid, const double momentum[3],
alouette_polarisation_cb * polarisation,
struct alouette_products * products),
\end{minted}
where the \mintinline{C}{pid} and \mintinline{C}{momentum} arguments correspond
to the daughter particle, not to the mother one. The \ac{BMC} weight, given by
equation~\eqref{eq:backward-weight}, is stored in the \mintinline{C}{weight}
field of the \mintinline{C}{alouette_products} structure.
In forward mode, the mother's spin polarisation is specified directly as a
3-vector. In backward mode, this would not be convenient since the polarisation
might depend on the mother's momentum, which is known only once the undecay
function returns. Thus, the spin polarisation is instead provided by a callback
function in backward mode, defined as
\begin{minted}{C}
typedef void alouette_polarisation_cb(
int pid, const double momentum[3],
double * polarisation)
\end{minted}
where \mintinline{C}{pid} and \mintinline{C}{momentum} are given for the mother
particle in this case. This method allows one to query the polarisation value
during the course of the backward simulation.
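For illustration, the body of such a callback could compute a longitudinal
polarisation from the mother's momentum. The sketch below is hypothetical: the
function name and the choice of a fully longitudinal polarisation are
assumptions, and only the (\mintinline{C}{pid}, \mintinline{C}{momentum})
$\to$ polarisation signature mirrors the C typedef above.

```python
import math


def longitudinal_polarisation(pid, momentum, b=1.0):
    """Hypothetical polarisation callback: return a polarisation 3-vector of
    magnitude b along the mother's momentum direction. The actual sign
    convention would depend on the application."""
    norm = math.sqrt(sum(c * c for c in momentum))
    if norm == 0.0:
        return [0.0, 0.0, 0.0]
    return [b * c / norm for c in momentum]


# A mother moving along z gets a unit polarisation along z.
assert longitudinal_polarisation(15, [0.0, 0.0, 2.0]) == [0.0, 0.0, 1.0]
```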
The undecay function has three additional configurable parameters, defined as
global variables. The \mintinline{C}{int alouette_undecay_mother} variable
allows one to set a specific mother particle in backward decays, by indicating
its \ac{PID} as an integer. Setting this variable to zero, which is the default
behaviour, results in both $\tau^-$ and $\tau^+$ being considered as mother
candidates.
The \mintinline{C}{double alouette_undecay_bias} variable allows one to set the
value of the bias, parameter $b$ in equation~\eqref{eq:bias-spin}. It defaults
to 1, i.e. longitudinally polarised $\tau$-leptons, which should be relevant for
high energy applications. In other use cases, setting a lower value might be
more efficient.
The \mintinline{C}{alouette_undecay_scheme} variable allows one to specify the
Monte Carlo integration variables when computing the \ac{BMC} weight, as an
\mintinline{C}{enum alouette_undecay_scheme} value. The default is to assume a
Cartesian 3-momentum, which is consistent with Alouette and TAUOLA APIs. But,
the two alternative schemes discussed in section~\ref{sec:jacobian-weight} are
available as well, i.e. spherical coordinates for the 3-momentum or an
energy-direction representation. In particular, if Alouette is chained with
PUMAS~\cite{Niess2022,GitHub:PUMAS} for a \ac{BMC} simulation, then the
energy-direction scheme must be selected in order to be consistent with PUMAS.
\subsection{Python package}
\subsubsection{Package implementation}
The alouette Python 3 package is a wrapper of the C library,
\mintinline{bash}{libalouette}, built using cffi~\cite{cffi:website} and
numpy~\cite{Harris2020}. As a result, the Python and C \acp{API} of Alouette are
very similar. The cffi package is used in API mode in order to generate Python
bindings for \mintinline{bash}{libalouette}. The buffer protocol allows one to
expose numeric C data as \mintinline{Python}{numpy.ndarray}. By combining both
cffi and numpy, the Python implementation of Alouette is straightforward, only
requiring some wrapping. In particular, the \mintinline{Python}{@property}
decorator of Python class instances is convenient for wrapping low level C /
Fortran data as attributes. Since this decorator is not available for base
Python objects, but only for class instances, we make extensive use of singleton
classes.
The \mintinline{C}{alouette_initialise} and \mintinline{C}{alouette_decay} C
functions are wrapped as \mintinline{Python}{alouette.initialise} and
\mintinline{Python}{alouette.decay} Python functions. Decay products are wrapped
in an \mintinline{Python}{alouette.Products} class. This class exposes C data,
e.g. \ac{PID}s or momenta, as read-only numpy arrays.
The \mintinline{C}{alouette_undecay} function is implemented as an
\mintinline{Python}{alouette.undecay} singleton class. This lets it operate as a
function but with managed properties. Thus, calling
\mintinline{Python}{alouette.undecay(...)} performs a backward Monte Carlo
decay. But, in addition \mintinline{Python}{undecay} has three attributes,
\mintinline{Python}{undecay.mother}, \mintinline{Python}{undecay.bias} and
\mintinline{Python}{undecay.scheme}, which allow one to manipulate the
corresponding C global variables.
Using a similar approach, Alouette's random stream is wrapped by an
\mintinline{Python}{alouette.random} singleton class. Calling
\mintinline{Python}{alouette.random()} returns the next pseudo-random number
from the stream. The \mintinline{Python}{random.seed} field exposes the current
seed as a read-only attribute. The stream can be (re)set with the
\mintinline{Python}{random.set(seed=None)} function. If no explicit seed is
provided, then a \mintinline{C}{NULL} pointer is passed to
\mintinline{C}{alouette_random_set}, i.e. the random seed is drawn from the OS
entropy using \mintinline{bash}{/dev/urandom}.
In addition, some relevant TAUOLA common blocks are exposed in the
\mintinline{Python}{alouette.tauola} submodule, using singleton classes. E.g.,
the parametrisation version for decays to 2 and 3 pions can be read and modified
as \mintinline{Python}{tauola.ipcht.iver}.
\subsubsection{Package distribution}
The \ac{PyPI} and its associated package manager, \mintinline{bash}{pip}, are
used for distributing Alouette. Binary distributions, based on the ``wheel''
format, have become prevalent on \ac{PyPI}, over source distributions. Binary
distributions are convenient for end-users, since the software is already
compiled. However, building a portable binary distribution implies additional
technical complications for developers, not discussed herein.
Binary distributions of Alouette are generated using GitHub's \ac{CI} workflow.
They are available from PyPI~\cite{PyPI:Alouette} as Python wheels for Linux and
for OSX. Alouette wheels have \ac{ABI} compatibility with system libraries down
to \mintinline{bash}{glibc} 2.5, on Linux, or down to OSX 10.9. Let us
emphasize that the wheels contain a binary of \mintinline{C}{libalouette}, that
can be used independently of the Python package. In addition, the wheels are
shipped with a small executable script, \mintinline{bash}{alouette-config},
providing C compilation flags for Alouette.
\section{Alouette validation \label{sec:validation}}
Various validation tests of Alouette have been carried out. In the following we
present some of the final results obtained with the Python alouette package,
using v1.0.1 of Alouette. Intermediate tests, not detailed below, have been
performed as well. In particular, the C library has been checked to be error free
according to valgrind~\cite{Valgrind:2007}. Floating point errors have also
been tracked down by enabling floating point exceptions, using
\mintinline{bash}{fenv.h}. In addition, the Python API is unit tested with a
$100\,\%$ coverage. This is done on each source update, using GitHub's \ac{CI}
workflow. Similar, but more informal tests have also been carried out for the C
API.
\subsection{Initialisation \label{sec:seed-validation}}
A preliminary concern is to check that TAUOLA is properly initialised by
Alouette. Indeed, let us recall that TAUOLA's initialisation, and thus its
subsequent physics results, depend on the seed value provided to Alouette's
internal \ac{PRNG}. The seed value determines a set of 20 estimates of maximum
weights, $W_\text{max}$. These maximum weights are used by TAUOLA in order to
simulate the kinematics of decays by rejection sampling, as discussed previously
in section~\ref{sec:tauola-initialisation}. In the following, let $W_{ij}$
denote the maximum weight estimate obtained for seed $i$ and mode $j$. Note
that two body decay modes, $\tau^- \to \pi^- \nu_\tau$ (3) and $\tau^- \to K^-
\nu_\tau$ (6), have no associated weight since the kinematics is fixed in these
cases.
The impact of TAUOLA's initialisation on physics results is investigated by
considering $10^6$ seed values, and by recording their corresponding $W_{ij}$
values. The seeds are drawn from a uniform distribution. As a figure of merit,
in table~\ref{tab:weights-ratio} we report the ratio of extreme $W_\text{max}$
estimates for mode $j$, defined as
\begin{linenomath*}
\begin{equation}
r_j = \frac{\max(W_{ij})}{\min(W_{ij})},
\end{equation}
\end{linenomath*}
where the $\min$ and $\max$ run over all seeds.
Let us recall that TAUOLA applies a security factor of $1.2$ to its
$W_\text{max}$ estimate. Thus, assuming that $\max(W_{ij})$ is indeed the upper
bound, $r_j \leq 1.2$ guarantees identical physics results for the $10^6$ seed
values, for mode $j$. In the present study, this condition is satisfied only
for $7$ out of $20$ decay modes. Actually, for some modes, e.g. the
$5^\text{th}$ one corresponding to $\tau^- \to a_1^- \nu_\tau$, the maximum weight
is likely not found within $10^6$ trials, since no convergence is observed.
Let us point out that this convergence issue could be observed already in the
very first TAUOLA papers; see e.g. the ``\mintinline{Fortran}{TEST RUN OUTPUT}''
appendix in \citet{Jadach1991,Jadach1993} where the \mintinline{Fortran}{DADMAA}
routine reports a significant number of ``overweighted'' events.
\begin{table}
\caption{Ratios $r_j$ of maximum to minimum $W_\text{max}$ estimates. The
ratios have been computed from $10^6$ initialisations using Alouette's
internal PRNG, with seed values drawn from a uniform distribution.
\label{tab:weights-ratio}}
\center
\begin{tabular}{ll}
\toprule
Mode ($j$) & ratio ($r_j$) \\
\midrule
$1$ & $1.14$ \\
$2$ & $1.15$ \\
$4$ & $1.21$ \\
$5$ & $36.1$ \\
$7$ & $1.04$ \\
$8$ & $3.61$ \\
$9$ & $1.45$ \\
$10$ & $2.62$ \\
$11$ & $1.00$ \\
$12$ & $1.00$ \\
\bottomrule
\end{tabular}
\quad
\begin{tabular}{ll}
\toprule
Mode ($j$) & ratio ($r_j$) \\
\midrule
$13$ & $1.00$ \\
$14$ & $2.58$ \\
$15$ & $2.47$ \\
$16$ & $3.22$ \\
$17$ & $7.13$ \\
$18$ & $2.23$ \\
$19$ & $3.65$ \\
$20$ & $1.33$ \\
$21$ & $1.37$ \\
$22$ & $1.05$ \\
\bottomrule
\end{tabular}
\end{table}
Finding a single seed that would maximise the 20 $W_\text{max}$ values
simultaneously seems nearly impossible. Therefore, we use a more pragmatic
approach in Alouette, as follows. Let $\overline{W}_j$ denote the median
value of the $W_\text{max}$ estimates for mode $j$. Among all tested seeds,
we selected the one yielding estimates closest to $\overline{W}_j$ according to
least squares, i.e. the one minimising the L2-norm $\|\vb{W} -
\overline{\vb{W}}\|$. When initialising TAUOLA, this ``median'' seed is used as
default value by Alouette for its internal PRNG.
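This selection can be sketched as follows, using toy data. The helper name and
the data layout (a mapping from seed to per-mode $W_\text{max}$ estimates) are
illustrative assumptions, not Alouette internals.

```python
import math
import statistics


def median_seed(weights_by_seed):
    """Select the seed whose W_max estimates are closest, in L2 norm, to the
    component-wise median over all tested seeds."""
    seeds = list(weights_by_seed)
    n_modes = len(weights_by_seed[seeds[0]])
    # Component-wise median over seeds, for each mode j.
    median = [statistics.median(weights_by_seed[s][j] for s in seeds)
              for j in range(n_modes)]

    def distance(seed):
        return math.sqrt(sum((w - m) ** 2
                             for w, m in zip(weights_by_seed[seed], median)))

    return min(seeds, key=distance)


# Toy example: three seeds, two modes; seed 2 sits at the median.
W = {1: [1.0, 2.0], 2: [1.1, 2.1], 3: [1.5, 2.6]}
assert median_seed(W) == 2
```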
As a cross-check, we consider \ac{CM} $\tau^-$ decays, and we compare the
distributions obtained for the resulting $\nu_\tau$ energy, $E_\nu$, using the
median seed and the respective ``min'' and ``max'' seeds for each decay mode.
We compare $E_\nu$ values since our concern with Alouette is $\nu_\tau$-$\tau$
transport. The comparison was performed with $10^8$ Monte Carlo events per decay
mode. As an example, figure~\ref{fig:validation-seed} shows the results
obtained for the $5^\text{th}$ mode, the most pathological one according to
table~\ref{tab:weights-ratio}. It can be seen that the central parts of the
$E_\nu$ distributions are nearly identical using the median or the max seed.
However, for the min seed a significant deviation is observed, with a $4\,\%$
maximum difference on the \ac{PDF}. Similar results are observed for other
modes. The bulk of the $E_\nu$ distributions agree using the median or the max
seed, within statistical uncertainties. But the min result can deviate
significantly for some modes, i.e. for $\tau^- \to a_1^- \nu_\tau$ (5), $\tau^-
\to \pi^0 K^- K^0 \nu_\tau$ (16) and $\tau^- \to \pi^- \pi^0 K^0 \nu_\tau$ (19).
Thus, for unlucky seed values, erroneous physics results are obtained. Note also
that our test does not allow one to check whether the \acp{PDF} differ in the
tails, due to an insufficient number of events in these regions.
Given the previous results, using a median seed is a satisfactory solution for
Alouette, whose main scope is $\nu_\tau$-$\tau$ transport. In addition, the
median seed is efficient CPU-wise. Indeed, the larger the estimate of
$W_\text{max}$, the larger the number of rejected samples when simulating a
tentative decay. Note also that in any case the user can set their own seed
value instead of the median one, if desired.
\subsection{Forward Monte Carlo}
Forward Monte Carlo results are validated by comparison to Tauola\pp. Of
particular interest for $\nu_\tau$-$\tau$ transport is the energy spectrum of
the daughter neutrino, $E_\nu$, as discussed previously. In order to validate the
end-to-end forward procedure implemented in Alouette, let us now consider decays
of a high energy, $1\,$TeV, $\tau^-$ lepton instead of \ac{CM} ones. A
comparison of Alouette and Tauola\pp{ }results is shown on
figure~\ref{fig:validation-tauolapp}, for $10^8$ Monte Carlo events. Three spin
polarisation cases are considered for the $\tau^-$, right handed ($P = +1$),
unpolarised ($P = 0$) and left handed ($P = -1$). Alouette and Tauola\pp{ }agree
within Monte Carlo statistical uncertainties. A similar agreement is also
obtained when decaying $\tau^+$ leptons instead of $\tau^-$.
When performing these comparisons, one should take care that Tauola\pp{ }uses a
different convention than Alouette. Indeed, the scope of Tauola\pp{ }is to decay
$\tau^- \tau^+$ pairs produced simultaneously. Thus, in Tauola\pp, a $\tau^-$
always propagates along the $\shortminus z$ axis, while a $\tau^+$ goes along $z$ (see
e.g. section~4 of \citet{Davidson2012}). As a result, a spin polarisation
$z$-component of $+1$ ($-1$) should be given to the
\mintinline{C}{Tauola::decayOne} method for a left (right) handed $\tau^-$. But
opposite values should be used in the case of a $\tau^+$ decay, which might be
confusing. In comparison, Alouette lets the user explicitly specify momentum and
spin polarisation for the mother particle as 3-vectors.
\subsection{Backward Monte Carlo}
The validation of Alouette backward Monte Carlo results deserves a more detailed
discussion, since \ac{BMC} methods are usually less familiar than forward ones.
Comparisons of forward and backward results are performed using toy experiments.
Let us first describe the model used for these experiments, and then let us
present the results of two test cases.
\subsubsection{Toy model}
The following toy model is considered. Let us assume a primary flux of
$\tau$-leptons, $\Phi_0$, with a fraction $f_0$ of $\tau^-$ and $1 - f_0$ of
$\tau^+$. Let $\phi_0 = d\Phi_0 / dp_0$ denote the differential flux w.r.t. the
$\tau$ momentum, $p_0$, and let us set $\Phi_0 = 1$. Thus, $\phi_0$ is also the
\ac{PDF} of $p_0$. Let us further restrict values of $p_0$ to an interval
$[p_\text{min}, p_\text{max}]$. The toy experiment consists of decaying this
primary flux of $\tau$-leptons, using Alouette, and then in recording the
daughter particles whose momenta $p_i$ also fall in $[p_\text{min},
p_\text{max}]$. Let $\phi_i$ denote the corresponding differential flux and
$\Phi_i$ its integral.
Let us briefly recall the forward Monte Carlo computation of $\Phi_i$ using a
bias procedure. Let $N$ denote the total number of Monte Carlo iterations and
let us index them by $k$. For each Monte Carlo iteration, first a primary $\tau$
is generated. The $\tau$ charge value is drawn randomly, with probability $f_0$
for a $\tau^-$. The $\tau$ momentum is generated using a $1
/ p$ bias distribution, as
\begin{linenomath*}
\begin{equation} \label{eq:log-sampling}
\ln p_{0,k} = \ln p_\text{min} +
\xi_k \ln\left(\frac{p_\text{max}}{p_\text{min}}\right),
\end{equation}
\end{linenomath*}
where $\xi_k$ is a random variate uniformly distributed over $[0,1]$, drawn
using Alouette \ac{PRNG}. This bias procedure amounts to a uniform sampling in
log scale, which is usually efficient for spectra spanning several orders of
magnitude. Let us recall that sampling according to
equation~\eqref{eq:log-sampling} corresponds to the
following bias \ac{PDF}
\begin{linenomath*}
\begin{equation} \label{eq:log-pdf}
b(p_0) =
\frac{1}{p_0 \ln\left(\frac{p_\text{max}}{p_\text{min}}\right)} .
\end{equation}
\end{linenomath*}
Thus, forward Monte Carlo events are weighted by $\omega_{b,F} = \phi_0(p_{0,k})
/ b(p_{0,k})$.
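The bias sampling of equation~\eqref{eq:log-sampling} and the corresponding
\ac{PDF} of equation~\eqref{eq:log-pdf} can be sketched as below. The helper
names are hypothetical, and the normalisation check is a simple midpoint-rule
integration.

```python
import math
import random


def sample_log_uniform(p_min, p_max, uniform=random.random):
    """Draw p over [p_min, p_max] uniformly in log scale, i.e.
    ln(p) = ln(p_min) + xi * ln(p_max / p_min) with xi uniform on [0, 1]."""
    xi = uniform()
    return p_min * (p_max / p_min) ** xi


def bias_pdf(p, p_min, p_max):
    """PDF of the 1/p bias distribution, b(p) = 1 / (p ln(p_max / p_min))."""
    return 1.0 / (p * math.log(p_max / p_min))


# Samples fall within bounds, and b(p) integrates to 1 over [p_min, p_max].
random.seed(1)
p_min, p_max = 1.0, 1e3
assert all(p_min <= sample_log_uniform(p_min, p_max) <= p_max
           for _ in range(1000))
n = 100_000
dp = (p_max - p_min) / n
integral = sum(bias_pdf(p_min + (k + 0.5) * dp, p_min, p_max) * dp
               for k in range(n))
assert abs(integral - 1.0) < 1e-3
```

A forward event drawn this way would then carry the weight
$\phi_0(p) / b(p)$, as stated above.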
Secondly, the $\tau$ is decayed. Let $m_{ik}$ denote the number of daughter
particles of type $i$ with final momenta in $[p_\text{min}, p_\text{max}]$.
It follows that the forward Monte Carlo estimate of $\Phi_i$ reads
\begin{linenomath*}
\begin{align}
\label{eq:montecarlo-estimate}
\overline{\Phi}_{i,F} &= \frac{1}{N} \sum_{k=1}^N{\phi_{ik,F}} , \\
\label{eq:forward-flux}
\phi_{ik,F} &= m_{ik} \frac{\phi_0(p_{0,k})}{b(p_{0,k})} .
\end{align}
\end{linenomath*}
Let us further recall that, owing to the \ac{CLT}, the ``Monte Carlo error'',
i.e. $\epsilon_i = \overline{\Phi}_{i,F} - \Phi_i$, converges to a Gaussian
distribution for large $N$, as $1 / \sqrt{N}$. An error estimate is given by the
standard deviation of Monte Carlo samples, as
\begin{linenomath*}
\begin{equation} \label{eq:montecarlo-error}
\overline{\sigma}^2_{i,F} = \frac{1}{N-1} \left( \frac{1}{N} \sum_{k=1}^N{
\phi^2_{ik,F}} - \overline{\Phi}^2_{i,F} \right) .
\end{equation}
\end{linenomath*}
Let us point out that the Monte Carlo computation needs to record only two
quantities, the sum of weights $\sum \phi_{ik,F}$ and the sum of squared weights
$\sum \phi_{ik,F}^2$. From those, the Monte Carlo estimate and its corresponding
uncertainty are derived, using equations~\eqref{eq:montecarlo-estimate} and
\eqref{eq:montecarlo-error}.
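This bookkeeping can be sketched as follows (an illustrative sketch; the helper
name is assumed):

```python
import math


def mc_estimate(weights):
    """Monte Carlo estimate of a flux and of its uncertainty from per-event
    weights, using only the sum of weights and the sum of squared weights."""
    n = len(weights)
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    mean = s1 / n
    # Guard against tiny negative values due to floating-point round-off.
    variance = max(0.0, (s2 / n - mean * mean) / (n - 1))
    return mean, math.sqrt(variance)


# Constant weights have zero Monte Carlo uncertainty.
mean, sigma = mc_estimate([0.5] * 100)
assert mean == 0.5 and sigma == 0.0
```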
The backward Monte Carlo computation of $\Phi_i$ is similar to the forward one,
but reverting the simulation flow. A bias procedure is used, applying
corollary~4 of \citet{Niess2018}. For each Monte Carlo event, first the final
momentum $p_{ik}$ of daughter $i$ is drawn from a $1 / p$ bias distribution,
using equation~\eqref{eq:log-sampling} but substituting $p_{0,k}$ with $p_{ik}$.
Secondly, the daughter particle is undecayed yielding the mother particle with
charge $C_k = \pm 1$ and momentum $p_{0,k}$. In addition, companion daughter
particles are also produced by Alouette. The Monte Carlo weight due to this bias
procedure is
\begin{linenomath*}
\begin{equation}
\omega_{b,B} = \frac{f(C_k) \phi_0(p_{0,k})}{b(p_{ik})},
\end{equation}
\end{linenomath*}
where
\begin{linenomath*}
\begin{equation}
f(C) = \begin{cases}
f_0 & \text{if } C = -1, \\
1 - f_0 & \text{otherwise} .
\end{cases}
\end{equation}
\end{linenomath*}
Thus, $f \phi_0$ corresponds to the differential flux of $\tau^-$ or
of $\tau^+$ particles, depending on the backward sampled mother particle.
As previously, let $m_{ik}$ denote the number of daughters $i$ with momenta in
$[p_\text{min}, p_\text{max}]$, considering both the initial daughter and its
decay companions. The total backward weight corresponding to this event is
\begin{linenomath*}
\begin{equation}
\label{eq:backward-flux}
\phi_{ik,B} = m_{ik} \frac{f(C_k) \phi_0(p_{0,k})}{b(p_{ik})}
\omega_{B,k},
\end{equation}
\end{linenomath*}
where $\omega_{B,k}$ is the \ac{BMC} weight returned by Alouette, computed
according to equation~\eqref{eq:backward-weight}. Note that since the toy model
flux is integrated using spherical coordinates, Alouette's undecay function must
be configured accordingly. Thus, an additional factor $p_{ik}^2 / p_{0,k}^2$ is
applied by Alouette to the Jacobian \ac{BMC} weight given by
equation~\eqref{eq:jacobian-weight}, as detailed in \ref{sec:jacobian-weight}.
The backward Monte Carlo estimate $\overline{\Phi}_{i,B}$ of the flux $\Phi_i$,
and its corresponding uncertainty $\overline{\sigma}_{i,B}$, are obtained from
equations~\eqref{eq:montecarlo-estimate} and~\eqref{eq:montecarlo-error}. We
proceed as in the forward case, but substituting the weight $\phi_{i,F}$ with
the backward one, $\phi_{i,B}$.
There is an additional subtlety in the backward computation that we need to
mention. If the multiplicity of a daughter is larger than one, then it is not
correct to generate its final momentum over $[p_\text{min}, p_\text{max}]$,
even though this interval is used as the selection criterion. Instead, one should actually use
$[0, p_\text{max}]$ as interval for the bias distribution. The reason is the
following. When several particles of the same type are produced, some of them
might lie in $[p_\text{min}, p_\text{max}]$ while others are below
$p_\text{min}$. Those are nevertheless still valid events. In order to properly generate such events,
one must consider that the backward sampled particle might have a momentum below
$p_\text{min}$, while its ``twins'' might not. However,
equation~\eqref{eq:log-sampling} does not allow a null lower bound. Thus, in
practice we set the lower bound to a fraction $\epsilon$ of $p_\text{min}$,
where $\epsilon = 10^{-2}$, i.e. the momentum $p_{ik}$ is actually log-sampled
over $[\epsilon p_\text{min}, p_\text{max}]$.
Obviously, the toy case considered herein does not illustrate the benefits of
the backward Monte Carlo procedure. \ac{BMC} appears similar, yet less
straightforward than the usual forward computation. Backward methods are
efficient for asymmetric problems. For example, \ac{BMC} methods are good at
sampling rare secondary events, in a narrow phase-space, from an extended
primary flux. However, since in such cases forward computations are
inefficient, comparisons would be difficult. Therefore, we instead consider a
symmetric toy model. This is a good stress-test for the backward procedure,
since forward and backward methods have similar Monte Carlo efficiencies.
\subsubsection{Differential flux}
As a first test, let us compare the differential fluxes obtained by backward and
forward computations for a particular case. Let us consider a $1 / p^2$ primary
flux composed of high energy left handed $\tau^-$ and right handed $\tau^+$ in
equal fractions, i.e. $f_0 = 1 / 2$. Let us set $p_\text{min} = 100\,$GeV and
$p_\text{max} = 1\,$PeV. The differential flux $\phi_i$ is estimated by Monte
Carlo using a log-uniform grid of momentum values. The procedure is similar to
the one described previously for computing the total flux $\Phi_i$, using
equations~\eqref{eq:montecarlo-estimate} and~\eqref{eq:montecarlo-error}. But,
the grid intervals are considered when computing the multiplicities $m_{ik}$
instead of the total range $[p_\text{min}, p_\text{max}]$.
Figure~\ref{fig:validation-spectrum} shows the results obtained with $N = 10^7$
Monte Carlo events. The forward and backward Monte Carlo computations agree
within statistical uncertainties. As an aside, this figure also provides a
comparison of inclusive spectra of secondary particles in high energy $\tau$
decays.
\subsubsection{Systematic tests}
In addition, we performed systematic comparisons of the backward and forward
results for the total flux $\Phi_i$. In order to test all cases separately, the
primary flux is composed only of $\tau^+$ or of $\tau^-$, i.e. $f_0 = 0$ or $1$.
Three spin polarisations are considered in each case, left, right and
unpolarised. A $1 / p$ primary spectrum is used with momenta between
$p_\text{min} = 1\,$GeV and $p_\text{max} = 1\,$TeV. This allows us to
cross-check the backward procedure for low relativistic boost ($\gamma$) values
as well. With these settings, we computed the total flux of secondary particles
for all possible decay modes and sub-modes. We simulated $N = 10^6$ Monte Carlo
events per test case, resulting in relative accuracies on $\Phi_i$ varying
between $0.1$ and $0.2\,\%$.
Let $\overline{\Phi}_{ij,F}$ ($\overline{\Phi}_{ij,B}$) denote the total flux
obtained for daughter $i$ and decay mode $j$ using the forward (backward)
Monte Carlo computation. Let $\overline{\sigma}_{ij,F}$
($\overline{\sigma}_{ij,B}$) be the corresponding error estimate. In order to
assess the agreement between forward and backward computations we form the
following test statistic
\begin{linenomath*}
\begin{equation}
t_{ij} = \frac{\overline{\Phi}_{ij,F} - \overline{\Phi}_{ij,B}}
{\sqrt{\overline{\sigma}^2_{ij,F} + \overline{\sigma}^2_{ij,B}}} .
\end{equation}
\end{linenomath*}
As null test hypothesis, $\mathcal{H}_0$, it is assumed that the forward
(backward) Monte Carlo result is distributed as a Gaussian with expectation
$\Phi_{ij,0}$ and variance $\sigma^2_{ij,F}$ ($\sigma^2_{ij,B}$). Thus, we
assume that the \ac{CLT} limit is reached. Under $\mathcal{H}_0$, $t_{ij}$
follows a normal distribution. Therefore, let us call ``significance'' the
values obtained for $t_{ij}$.
The significance values obtained for an unpolarised flux of $\tau^-$ are shown
on figure~\ref{fig:validation-spectrum}, as a test matrix. For this
configuration, the worst significance value is $2.8\,\sigma$ out of $n = 141$
test cases. In order to assess if this is indeed significant, or not, the
``look-elsewhere effect'' must be accounted for. This is done by forming the
following test statistic
\begin{linenomath*}
\begin{equation}
T = \max(t^2_{ij}) .
\end{equation}
\end{linenomath*}
Under $\mathcal{H}_0$, the \ac{CDF} of $T$ is $\left(F_{\chi_1^2}\right)^n$,
where $F_{\chi_1^2}$ is the \ac{CDF} of a $\chi^2$ distribution with 1 degree of
freedom. It follows that a worst significance $t_{ij}$ of $2.8\, \sigma$
corresponds to a look-elsewhere corrected $p$-value of $51.4\,\%$. Similar
results are obtained when considering other polarisation values and / or a flux
of $\tau^+$ particles. Considering all test cases, we obtain a global $p$-value
of $32.5\,\%$ with a worst significance of $\shortminus 3.5\,\sigma$ out of
$846$ test cases. Thus, we conclude that we observe no significant differences
between the forward and backward Monte Carlo results, with a relative accuracy
of $0.1$-$0.2\,\%$ on the total fluxes $\Phi_i$.
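The look-elsewhere correction above can be reproduced numerically. The sketch
below (with hypothetical helper names) uses the identity
$F_{\chi_1^2}(t^2) = \operatorname{erf}(|t| / \sqrt{2})$ for a standard normal
variable.

```python
import math


def chi2_1_cdf(t2):
    """CDF of a chi-squared distribution with one degree of freedom,
    P(Z^2 <= t^2) = erf(|t| / sqrt(2)) for a standard normal Z."""
    return math.erf(math.sqrt(t2) / math.sqrt(2.0))


def look_elsewhere_p_value(t_worst, n_tests):
    """Global p-value of the worst significance out of n tests, using
    P(T <= t^2) = F_{chi^2_1}(t^2) ** n under the null hypothesis."""
    return 1.0 - chi2_1_cdf(t_worst ** 2) ** n_tests


# A worst deviation of 2.8 sigma out of 141 tests is unremarkable:
# the corrected p-value is about 51.4 %.
assert abs(look_elsewhere_p_value(2.8, 141) - 0.514) < 0.005
```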
\section{Conclusion}
In the first part of this paper, section~\ref{sec:algorithms}, we have presented
a reverse Monte Carlo algorithm for simulating particle decays, i.e.
algorithm~\ref{al:backward-decay}. This algorithm allows one to invert a forward
Monte Carlo decay engine using the Jacobian \ac{BMC} method, introduced in
\citet{Niess2018}. The method only requires that the forward engine produces
\ac{CM} decays, preferentially with the possibility to specify the decay mode.
Algorithm~\ref{al:backward-decay} has been applied to $\tau$ decays with TAUOLA,
which constitutes a comprehensive use case. Thus, we consider this algorithm
general enough to be transposable to decays of particles other than $\tau$-leptons.
Section~\ref{sec:algorithms} is complementary to \citet{Niess2018}. It
illustrates the utility of the Jacobian \ac{BMC} method for reverse Monte Carlo,
since it provides a simple way to undecay Monte Carlo particles. In addition,
section~\ref{sec:algorithms} emphasises the importance of coordinate systems
when computing the Jacobian \ac{BMC} weight, which has not been discussed
previously.
In the second part of this paper, section~\ref{sec:implementation}, the
Alouette library has been presented. The library is structured in three layers.
\begin{enumerate}[(1)]
{\item
The first layer is a slightly modified version of TAUOLA Fortran (from
Tauola\pp{ }$1.1.8$ for LHC), refactored in order to comply with what we would
expect from a library. The refactoring is done procedurally using a Python
script. It does not modify any TAUOLA algorithm. It only relocates routines,
redirects some critical calls, and it unifies external library symbols under
the \mintinline{C}{tauola_} namespace using the Fortran~2003
\mintinline{Fortran}{BIND(C)} attribute. This refactored TAUOLA is not
intended for Alouette end-users. However, it could serve as a starting
point for other C developers, sharing similar design concerns, and wishing
to integrate TAUOLA Fortran in their own project.
}
{\item
The second layer is the Alouette C library. It implements the algorithms
discussed in section~\ref{sec:algorithms}. Alouette's \ac{API} is intended
to be simple when the library is used for transport problems, like
$\nu_\tau$-$\tau$. TAUOLA's initialisation is automated with robust
settings, in particular for its ``warmup'', i.e. the determination of
$W_\text{max}$ values. Thus, end-users need to call only a single library
function, \mintinline{C}{alouette_decay} or its undecay version in \ac{BMC}
mode. In addition, the same \ac{PRNG} is used in the C and Fortran layers,
and it can be modified at runtime.
}
{\item
The third layer is a Python package wrapping the Alouette C library. As a
result, the Python and C \acp{API} are almost identical. The wrapping is
done using cffi and numpy. This allows us to expose Fortran and C data as
familiar \mintinline{Python}{numpy.ndarrays}. Binary distributions of
Alouette are available from \ac{PyPI}, for Linux and OSX.
}
\end{enumerate}
In the third part of this paper, section~\ref{sec:validation}, we presented the
results of various validation tests of Alouette. In forward Monte Carlo mode,
Alouette and Tauola\pp{ }results agree within Monte Carlo uncertainties,
considering $10^8$ events. Backward and forward Monte Carlo results are also
found to be in agreement, with a relative accuracy of $0.1$-$0.2\,\%$.
Alouette has been implemented on top of the Tauola\pp{ }``LHC'' release. This
release does not include the latest developments discussed in
\citet{Chrzaszcz2018}. The LHC release will become obsolete as the TAUOLA
physics is updated with the latest developments (e.g. based on
Belle~II~\cite{Kou2019,Kou2020} results). Thus, future improvements of Alouette
should support recent TAUOLA releases as well, not only the LHC one.
\section*{Acknowledgements}
The author thanks an anonymous reviewer for their critical reading, which
contributed to improving the present paper. In addition, we gratefully acknowledge
support from the CNRS/IN2P3 Computing Center (Lyon - France) for providing
computing resources needed for this work. The analysis of Monte Carlo data has
been done with numpy~\cite{Harris2020}. Validation figures have been produced
using matplotlib~\cite{Hunter2007} and a cmasher~\cite{VanderVelden2020} colour
map.
\appendix
\section{Backward Monte Carlo weight \label{sec:backward-weight}}
The backward Monte Carlo weight is given by the determinant of the Jacobian
matrix corresponding to the change of variable between $\vb{p}_0$, the mother
momentum, and $\vb{p}_j$, the daughter momentum. This change of variable is
expressed by equation~\eqref{eq:bmc-transform}. Let us first compute the
corresponding Jacobian matrix. A point of caution should be raised here. When
deriving $\vb{p}_0$ as a function of $\vb{p}_j$, $\vb{p}_j^\star$ should be
considered as a constant, even in the polarised case where a biased \ac{CM}
decay process depending on $\vb{p}_j$ is used. That is, $L^{\shortminus 1}$
should be differentiated only w.r.t. its first variable. This follows because
$L$ and $L^{\shortminus 1}$ are reciprocal only w.r.t. their first variable. In
other words, the \ac{CM} decay process is not inverted in the \ac{BMC}
procedure. It is biased, though.
Using Cartesian coordinates, the Jacobian matrix can be expressed as
\begin{linenomath*}
\begin{equation} \label{eq:jacobian-matrix}
\frac{\partial \vb{p}_0}{\partial \vb{p}_j} = \frac{m_0}{d}
\begin{bmatrix*}[l]
a_x \Delta_x + b & a_x \Delta_y & a_x \Delta_z \\
a_y \Delta_x & a_y \Delta_y + b & a_y \Delta_z \\
a_z \Delta_x & a_z \Delta_y & a_z \Delta_z + b
\end{bmatrix*},
\end{equation}
\end{linenomath*}
where
\begin{linenomath*}
\begin{align}
a_x &= \frac{1}{E_j}\left[p_{j,x} - \left(E_j + E_j^\star \right)
\frac{p_{j,x} E_j^\star + E_j p_{j,x}^\star}{d}\right], \\
b &= E_j + E_j^\star, \\
d &= E_j E_j^\star + \vb{p}_j \cdot \vb{p}_j^\star + m_j^2, \\
\Delta_x &= p_{j,x} - p_{j,x}^\star .
\end{align}
\end{linenomath*}
The quantities $a_y$, $\Delta_y$, $a_z$ and $\Delta_z$ are obtained from $a_x$
and $\Delta_x$ by substituting $x$ with $y$ or $z$.
The determinant is conveniently computed using
equation~\eqref{eq:jacobian-matrix}. Indeed, most terms cancel out; only the
factors in $b^2$ and $b^3$ remain. Thus
\begin{linenomath*}
\begin{equation}
\left| \frac{\partial \vb{p}_0}{\partial \vb{p}_j} \right| =
\frac{m_0^3}{d^3} \left[\left(a_x \Delta_x + a_y \Delta_y +
a_z \Delta_z\right) b^2 + b^3 \right].
\end{equation}
\end{linenomath*}
Expanding the previous result, after some manipulation one finds
\begin{linenomath*}
\begin{equation}
\left| \frac{\partial \vb{p}_0}{\partial \vb{p}_j} \right| =
\frac{m_0^3}{d^3} \left(E_j + E_j^\star\right)^2 \frac{
(E_j + E_j^\star)^2 - d}{E_j} .
\end{equation}
\end{linenomath*}
Noting that $\gamma + 1 = \left(E_j + E_j^\star\right)^2 / d$, one can express
the determinant as a function of $\gamma = E_0 / m_0$, which yields
equation~\eqref{eq:jacobian-weight}.
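As a concrete check, the closed-form determinant can be compared against a direct numerical evaluation of the matrix in equation~\eqref{eq:jacobian-matrix}. The following Python sketch (not part of Alouette; the masses and momenta are arbitrary illustrative values) builds the $3\times 3$ matrix from $a$, $\Delta$, $b$ and $d$, and verifies that its determinant matches the closed form:

```python
import numpy as np

def jacobian_det(m0, mj, pj, pjs):
    """Closed-form determinant |d p0 / d pj| derived above."""
    Ej = np.sqrt(mj**2 + pj @ pj)     # lab-frame daughter energy
    Ejs = np.sqrt(mj**2 + pjs @ pjs)  # CM-frame daughter energy
    d = Ej * Ejs + pj @ pjs + mj**2
    b = Ej + Ejs
    return (m0 / d)**3 * b**2 * (b**2 - d) / Ej

def jacobian_det_direct(m0, mj, pj, pjs):
    """Determinant of the explicit 3x3 matrix (m0/d) (a Delta^T + b I)."""
    Ej = np.sqrt(mj**2 + pj @ pj)
    Ejs = np.sqrt(mj**2 + pjs @ pjs)
    d = Ej * Ejs + pj @ pjs + mj**2
    b = Ej + Ejs
    a = (pj - b * (pj * Ejs + Ej * pjs) / d) / Ej  # the vector (a_x, a_y, a_z)
    delta = pj - pjs                               # the vector (Delta_x, ...)
    return np.linalg.det((m0 / d) * (np.outer(a, delta) + b * np.eye(3)))

rng = np.random.default_rng(1)
m0, mj = 1.777, 0.106              # illustrative mother/daughter masses
pj = rng.normal(size=3)            # lab-frame daughter momentum
pjs = 0.1 * rng.normal(size=3)     # CM-frame daughter momentum (held fixed)
assert np.isclose(jacobian_det(m0, mj, pj, pjs),
                  jacobian_det_direct(m0, mj, pj, pjs))
```

The agreement holds for arbitrary momenta, since the simplification above is purely algebraic.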
\bibliography{ms}
Title:
FastQSL: A Fast Computation Method for Quasi-separatrix Layers

Abstract: Magnetic reconnection preferentially takes place at the intersection of two
separatrices or two quasi-separatrix layers, which can be quantified by the
squashing factor Q, whose calculation is computationally expensive due to the
need to trace as many field lines as possible. We developed a method (FastQSL)
optimized for obtaining Q and the twist number in a 3D data cube. FastQSL
utilizes the hardware acceleration of the graphics processing unit (GPU) and adopts
a step-size adaptive scheme for the most computationally intensive part:
tracing magnetic field lines. As a result, it achieves a computational
efficiency of 4.53 million Q values per second. FastQSL is open source, and
user-friendly for data import, export, and visualization.

https://export.arxiv.org/pdf/2208.12569
\usepackage{amsmath}
\usepackage{multirow}
\newcommand{\vdag}{(v)^\dagger}
\newcommand\aastex{AAS\TeX}
\newcommand\latex{La\TeX}
\newcommand\Rone{\textbf{\uppercase\expandafter{\romannumeral1}}\,}
\newcommand\Rtwo{\textbf{\uppercase\expandafter{\romannumeral2}}\,}
\definecolor{forestgreen}{rgb}{0.10, 0.50, 0.10}
\newcommand{\detail}[1]{\textbf{\textcolor{blue}{#1}}}
\newcommand{\peijin}[1]{#1}%
\newcommand{\jchen}[1]{#1}%
\received{}
\revised{ }
\accepted{ August 28, 2022}
\submitjournal{ApJL}
\shorttitle{FastQSL}
\shortauthors{Zhang et al.}
\begin{document}
\title{FastQSL: A Fast Computation Method for Quasi-separatrix Layers}
\author[0000-0001-6855-5799]{PeiJin Zhang}
\affiliation{Institute of Astronomy and National Astronomical Observatory,\\ Bulgarian Academy of Sciences, Sofia 1784, Bulgaria}
\affiliation{ASTRON, The Netherlands Institute for Radio Astronomy,\\
Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands}
\affiliation{Astronomy \& Astrophysics Section, Dublin Institute for Advanced Studies, Dublin 2, Ireland. }
\affiliation{CAS Key Laboratory of Geospace Environment,
School of Earth and Space Sciences, \\
University of Science and Technology of China,
Hefei, Anhui 230026, China}
\correspondingauthor{Jun Chen}
\email{el2718chenjun@nju.edu.cn}
\author[0000-0003-3060-0480]{Jun Chen}
\affiliation{School of Astronomy and Space Science, Nanjing University, Nanjing 210023, China}
\affiliation{CAS Key Laboratory of Geospace Environment,
School of Earth and Space Sciences, \\
University of Science and Technology of China,
Hefei, Anhui 230026, China}
\author[0000-0003-4618-4979]{Rui Liu}
\affiliation{CAS Key Laboratory of Geospace Environment,
School of Earth and Space Sciences, \\
University of Science and Technology of China,
Hefei, Anhui 230026, China}
\affiliation{CAS Center for the Excellence in Comparative Planetology,
\\University of Science and Technology of China, Hefei, Anhui 230026, China}
\author[0000-0001-6252-5580]{ChuanBing Wang}
\affiliation{CAS Key Laboratory of Geospace Environment,
School of Earth and Space Sciences, \\
University of Science and Technology of China,
Hefei, Anhui 230026, China}
\affiliation{CAS Center for the Excellence in Comparative Planetology,
\\University of Science and Technology of China, Hefei, Anhui 230026, China}
%
\keywords{Magnetic topology, Quasi-Separatrix Layers, GPU speedup}
\section{Introduction}
Chromospheric flare ribbons often coincide with the footprints of separatrices or quasi-separatrix layers (QSLs) \citep{priest1995three, demoulin1996three, Demoulin1997},
which embed the favorable sites for 3D magnetic reconnection, such as
null points, separators (intersections of two separatrices) and
quasi-separators (intersections of two QSLs)
of the magnetic field \citep{Priest2000,Pontin2011}, where a large gradient of magnetic connectivity is present.
The squashing factor $Q$ quantifies the connectivity change of magnetic field lines \citep{Titov2002, Titov2007}.
$Q$ is defined through a mapping from
one surface $S_1$, which a magnetic field line threads at $(x_1, y_1)$, to another surface $S_2$, which the field line threads at $(x_2, y_2)$:
\begin{align}
\underset{1\,2}\Pi: (x_1,y_1) \rightarrow (x_2,y_2).
\label{pi+-}
\end{align}
The Jacobian matrix of the differential mapping is expressed as
\begin{align}
\underset{1\,2}{D}=\left(
\begin{array}{cc}
\frac{\partial x_2}{\partial x_1} & \frac{\partial x_2}{\partial y_1} \\
\frac{\partial y_2}{\partial x_1} & \frac{\partial y_2}{\partial y_1} \\
\end{array}
\right)
\equiv
\left(
\begin{array}{cc}
a & b \\
c & d \\
\end{array}
\right).
\label{eq:D+-}
\end{align}
A full description of $Q$ that accounts for the variation of the covariant metric
tensor on $S_1$ and $S_2$ is given by Equations (11), (12) and (14) in \citet{Titov2007}.
If $S_1$ and $S_2$ are two boundaries of a cuboid described by
the same Cartesian coordinate system,
the value of $Q$ at $(x_{1},y_{1})$ is
\begin{align}
Q\left(x_1,y_1\right)=\frac{a^2+b^2+c^2+d^2}
{\left|\text{det}\, \underset{1\,2}{D}\right|}.
\label{eq:Q+-}
\end{align}
$\left|\text{det}\, \underset{1\,2}{D}\right|$ is often replaced by its equivalent $\left|B_{n,1} / B_{n,2} \right|$ to mitigate numerical error,
where
\begin{align}
B_{n,1}=\left.(\Vec{B}\cdot \Vec{n}) \right |_{S_1}
\label{eq:Bn1}
\end{align}
is the field component normal to $S_1$ at $(x_1,\, y_1)$, with $\Vec{n}$ the unit normal vector of ${S_1}$; $B_{n,2}$ has a similar form.
Since \cite{Titov2002} proved that $Q\left(x_1,y_1\right)\,=\,Q\left(x_2,y_2\right)$,
\cite{Aulanier2005} extended the definition of $Q$ into 3D space by
\begin{align}
\vec{B} \cdot \nabla Q = 0,
\label{eq:qline}
\end{align}
i.e., values of $Q$ are invariant along a field line.
Separatrices are located where $Q=\infty$,
and
QSLs are located where $Q\gg 2$, the theoretical minimum of $Q$.
Along with tracing field lines,
the twist number,
a measure of how many turns two infinitesimally close field lines wind about each other,
can be calculated without much additional effort (Equation (16) in \cite{berger2006writhe}):
\begin{align}
\mathcal{T}_w= \int_L^{}\frac{\nabla\times\vec{B}\cdot\vec{B}}{4\pi B^2}\textrm{d}l,
\label{eq:tw}
\end{align}
where the integral range $L$ is a segment of a magnetic field line.
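As an illustration of how Equation \eqref{eq:tw} can be discretized along a traced field line, the following Python sketch (not FastQSL's implementation; the linear force-free test field, the finite-difference step and the sampling are illustrative assumptions) evaluates the integrand with a central-difference curl and a trapezoidal sum. For a linear force-free field with $\nabla\times\vec{B}=\alpha\vec{B}$, the exact twist over a segment of length $L$ is $\alpha L/(4\pi)$:

```python
import numpy as np

ALPHA = 0.3  # force-free parameter of the illustrative test field

def B(r):
    """Linear force-free field: curl(B) = ALPHA * B; field lines lie in z = const planes."""
    x, y, z = r
    return np.array([np.cos(ALPHA * z), -np.sin(ALPHA * z), 0.0])

def curl_num(field, r, h=1e-5):
    """Central-difference curl, as a gridded code would evaluate it."""
    c = np.zeros(3)
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        ej, ek = np.eye(3)[j] * h, np.eye(3)[k] * h
        c[i] = ((field(r + ej)[k] - field(r - ej)[k])
                - (field(r + ek)[j] - field(r - ek)[j])) / (2 * h)
    return c

def twist(points):
    """Trapezoidal estimate of the integral in Eq. (tw) along a polyline."""
    f = np.array([curl_num(B, r) @ B(r) / (4 * np.pi * (B(r) @ B(r)))
                  for r in points])
    dl = np.linalg.norm(np.diff(points, axis=0), axis=1)
    return np.sum(0.5 * (f[1:] + f[:-1]) * dl)

# A field line of this field is a straight segment in the plane z = z0:
L, z0 = 5.0, 0.7
s = np.linspace(0.0, L, 201)
direction = np.array([np.cos(ALPHA * z0), -np.sin(ALPHA * z0), 0.0])
line = np.array([si * direction + np.array([0.0, 0.0, z0]) for si in s])
assert np.isclose(twist(line), ALPHA * L / (4 * np.pi), rtol=1e-4)
```

Here the integrand is the constant $\alpha/(4\pi)$, so the trapezoidal sum recovers the analytic answer up to the finite-difference error in the curl.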
In this work, we take advantage of GPU computing,
which is more efficient and economical than traditional CPU computing \citep{zwart2020ecological}, to obtain the 3D distribution of $Q$ and $\mathcal{T}_w$ with high efficiency. \cite{feng2013gpu} achieved an acceleration ratio of about 8 by rewriting their magnetohydrodynamics (MHD) model for the simulation of space weather in a GPU-compatible form. \cite{Caplan2018GPUAO} accelerated a solar MHD code with OpenACC; the GPU version has about 3 times the efficiency of the CPU version, at a comparable hardware cost. \cite{tassev2017qsl} implemented the computation of QSLs on a GPU with OpenCL, obtaining a representative 3D QSL map within a few hours.
The rest of this paper is arranged as follows: in Section 2, the algorithm and program structure are presented. In Section 3, we use the potential field extrapolated from a solar active region on 2010 Oct 16 19:00\,UT (AR11112) and an analytical field from \cite{titov1999TD99} (TD99 model)
to demonstrate the method. In Section 4, we present detailed benchmarks and comparisons for different algorithms and computation architectures. In Section 5, we discuss and summarize the results.
\section{Method}
FastQSL is developed from the published source code \footnote{ \url{http://staff.ustc.edu.cn/~rliu/qfactor.html} } \citep{Liu2016apj}, hereafter Code2016.
\subsection{Calculation of \texorpdfstring{$Q$}{Q}} \label{Q-Calculation}
In the computation of $Q$, the essential and most computationally expensive step is numerically deriving magnetic field lines by solving the equation:
\begin{align}
\frac{\textrm{d}\,\Vec{r}\,(s)}{\textrm{d}\,s} = \frac{\Vec{B}}{B},
\label{eq:rb}
\end{align}
where $s$ is the arc-length coordinate of a field line, and
$\vec{r}\,(s)$ gives the coordinates of the field line as a function of $s$.
To accurately map the field-line footpoints on a surface, one must solve the field-line equation with high precision.
In Code2016,
Equation \eqref{eq:rb} is integrated by the classic \texttt{RK4}.
We terminate the integration where the field line leaves the data cube, and then move one step back to get the field-line coordinates at the boundary.
$\underset{1\,2}{D}$ in Equation \eqref{eq:D+-} is derived from
the changes in the mapped footpoint coordinates with respect to those of neighboring field lines.
$Q$ on a cross section can be calculated with method 3 introduced in \cite{Pariat2012} (hereafter Method \Rone), which introduces an auxiliary cross section $S_0\,(x_0,y_0)$ to obtain
\begin{align}
\underset{1\,2}{D}=
\left(
\begin{array}{cc}
\frac{\partial x_2}{\partial x_0} & \frac{\partial x_2}{\partial y_0} \\
\frac{\partial y_2}{\partial x_0} & \frac{\partial y_2}{\partial y_0} \\
\end{array}
\right)
\left(
\begin{array}{cc}
\frac{\partial x_0}{\partial x_1} & \frac{\partial x_0}{\partial y_1} \\
\frac{\partial y_0}{\partial x_1} & \frac{\partial y_0}{\partial y_1} \\
\end{array}
\right),
\label{eq:D+-2}
\end{align}
and
\begin{align}
\left(
\begin{array}{cc}
\frac{\partial x_0}{\partial x_1} & \frac{\partial x_0}{\partial y_1} \\
\frac{\partial y_0}{\partial x_1} & \frac{\partial y_0}{\partial y_1} \\
\end{array}
\right)
=
\left(
\begin{array}{cc}
\frac{\partial x_1}{\partial x_0} & \frac{\partial x_1}{\partial y_0} \\
\frac{\partial y_1}{\partial x_0} & \frac{\partial y_1}{\partial y_0} \\
\end{array}
\right)^{-1}
=
\left.\left(
\begin{array}{rr}
\frac{\partial y_1}{\partial y_0} &-\frac{\partial x_1}{\partial y_0} \\
-\frac{\partial y_1}{\partial x_0} & \frac{\partial x_1}{\partial x_0} \\
\end{array}
\right) \right/ |B_{n,0}/B_{n,1}|,
\label{eq:D+-3}
\end{align}
where $B_{n,0}$ is the field component normal to $S_0$, which has a form similar to Equation \eqref{eq:Bn1}. Method \Rone is used in Code2016.
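The central-difference step at the heart of Method \Rone can be sketched in a few lines of Python. The snippet below (illustrative, not FastQSL's code) differences the mapping $\underset{1\,2}\Pi$ directly rather than through the auxiliary cross section $S_0$, and applies Equation \eqref{eq:Q+-}; the linear test mapping is a hypothetical example, not one of the fields used in this paper:

```python
import numpy as np

def squashing_factor(mapping, x1, y1, delta=1e-4, bn_ratio=None):
    """Q from Eq. (Q+-): central differences of the footpoint mapping give
    the Jacobian entries a, b, c, d of Eq. (D+-). If the normal-field ratio
    |B_n1 / B_n2| is supplied, it replaces |det D| as in the text."""
    a = (mapping(x1 + delta, y1)[0] - mapping(x1 - delta, y1)[0]) / (2 * delta)
    b = (mapping(x1, y1 + delta)[0] - mapping(x1, y1 - delta)[0]) / (2 * delta)
    c = (mapping(x1 + delta, y1)[1] - mapping(x1 - delta, y1)[1]) / (2 * delta)
    d = (mapping(x1, y1 + delta)[1] - mapping(x1, y1 - delta)[1]) / (2 * delta)
    det = bn_ratio if bn_ratio is not None else abs(a * d - b * c)
    return (a**2 + b**2 + c**2 + d**2) / det

# A hypothetical linear mapping that stretches x and squashes y:
# D = diag(2, 1/2), |det D| = 1, so Q = (4 + 1/4) / 1 = 4.25 everywhere.
shear = lambda x, y: (2.0 * x, 0.5 * y)
assert np.isclose(squashing_factor(shear, 0.3, -0.1), 4.25)
```

For a linear mapping the central differences are exact; for a real field the accuracy depends on the grid refinement $\delta$, as discussed below.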
\cite{tassev2017qsl} published their code \texttt{QSL Squasher}\footnote{\url{https://bitbucket.org/tassev/qsl_squasher/src/hg/}}, which identifies QSLs with high efficiency,
and \cite{Scott2017} gave a detailed analysis of the code.
Taking $\Vec{U},\Vec{V}$ as a pair of orthonormal unit vectors on $S_0$,
\cite{Scott2017}
then proposed a method of obtaining $Q$ without information on the mapping coordinates of neighboring field lines, by solving
\begin{align}
\frac{\textrm{d}\{\Vec{r},\, \Vec{U},\, \Vec{V}\}}{\textrm{d}s} =
\{\frac{\Vec{B}}{B},
\,\Vec{U} \cdot \nabla \frac{\Vec{B}}{B},
\,\Vec{V} \cdot \nabla\frac{\Vec{B}}{B}\},
\label{eq:rb2}
\end{align}
and they proved that
\begin{align}
Q = \frac{ \Tilde{\Vec{U}}_1^2\, \Tilde{ \Vec{V}}_2^2 +
\Tilde{\Vec{U}}_2^2\, \Tilde{ \Vec{V}}_1^2
-
2\,( \Tilde{\Vec{U}}_1\cdot\Tilde{ \Vec{V}}_1)\,( \Tilde{\Vec{U}}_2\cdot \Tilde{\Vec{V}}_2)}
{(B_{n,0})^2/ (B_{n,1}\, B_{n,2})}
\label{eq:q_scott}
\end{align}
is equivalent to Equation \eqref{eq:Q+-},
where
\begin{align}
\Tilde{\Vec{U}}_1= \Vec{U}-{\left.
\frac{ \Vec{U}\cdot \Vec{n} }{ \Vec{B}\cdot \Vec{n} } \Vec{B} \right|}_{S_1},
\end{align}
and the forms of $\Tilde{\Vec{U}}_2,\,\Tilde{\Vec{V}}_1,\,\Tilde{\Vec{V}}_2$ are similar.
In this paper, the method of \cite{Scott2017} based on Equation (\ref{eq:q_scott}) is referred to as Method \Rtwo.
An alternative code for calculating QSLs is published online\footnote{\url{https://github.com/Kai-E-Yang/QSL}} (hereafter CodeYang);
its first version followed Method \Rone and was first applied in \cite{Yang2015}.
Since October 2018, CodeYang has adopted Method \Rtwo.
Different selections of $S_1,\,S_2$ will result in different values of $Q$, even
for the same start point on $S_0$ (see the example in Section 4.1 of \citet{Titov2007}).
In Code2016 and FastQSL, $S_1,\,S_2$ for Method \Rone are
the boundaries where a field line terminates;
both codes therefore record these boundaries for every field line.
If one locally rotates $S_1, S_2$ to be perpendicular to the magnetic field line, Equation \eqref{eq:Q+-} and Equation \eqref{eq:q_scott} give $Q_{\perp}$ \citep{Titov2007}.
$Q_{\perp}$ removes the projection effect at the boundaries, and therefore quantifies the properties of volume QSLs more precisely than $Q$ does. The reason that $Q$ is still often used rather than $Q_\perp$ is its numerical simplicity. \texttt{QSL Squasher} and CodeYang set $S_1,\,S_2$ to always be perpendicular to $\Vec{B}$, therefore providing $Q_\perp$ only.
Method \Rone requires tracing at least 4 neighboring field lines for the central difference of footpoint coordinates,
which leads to a numerical difficulty in cases where the 4 field lines terminate on different pairs of $S_1,\,S_2$; this gives \texttt{NaN} in Code2016.
Method \Rone also has difficulty in accurately applying the $Q_\perp$ formula of \citet{Titov2007}, especially on a polarity inversion line (PIL).
Method \Rtwo can give $Q$ and $Q_\perp$ directly, without introducing the error of coordinate differencing, by tracing Equation \eqref{eq:rb2} alone,
but solving Equation \eqref{eq:rb2} is less efficient than solving
Equation \eqref{eq:rb} because $\nabla \frac{\Vec{B}}{B}$ must be calculated at every step.
In addition, since $Q$ changes sharply around a separatrix,
high-$Q$ positions can lie inside cells whose surrounding grid points still have low values of $Q$,
in which case these separatrix segments cannot be captured.
In contrast, with Method \Rone,
Code2016 traces field lines on refined grids,
and implements the central difference of the mapping coordinates
in terms of the refined grids
for Equations \eqref{eq:D+-2} and \eqref{eq:D+-3};
here the grid spacing is denoted as $\delta$ in \citet{Pariat2012}.
Consequently, a separatrix can be captured with
refined grids,
and characterized by an extremely high value of $Q$.
In brief, Method \Rone has the advantage of locating thin QSLs, and especially separatrices (except where $S_1, S_2$ are not exactly the same for the 4 neighboring field lines),
while Method \Rtwo has the advantage of giving accurate values of $Q$ and $Q_\perp$.
These characteristics are shown in Section \ref{sec:results} (Figure \ref{fig:res2D} and Figure \ref{fig:quadrapole}).
Since $Q$ is mostly used to locate the positions of QSLs and separatrices,
Method \Rone still has its advantage.
FastQSL provides the option of both methods.
\subsection{Magnetic Field at the Input}
\jchen{
For Code2016 and FastQSL, the input magnetic field is assumed to be on Cartesian grids.
Code2016 requires uniform grid spacings, while FastQSL additionally supports general stretched (but still rectilinear) grids.
CodeYang and \texttt{QSL Squasher} can run in spherical coordinates; \texttt{QSL Squasher} can also run on stretched grids.
}
\jchen{
We assume that the input magnetic field $\Vec{B}$ is known at every 3D Cartesian grid point $[x_i,\,y_j,\,z_k]$.
Then $\vec{B}$ at $\vec{r}=(x,\,y,\,z)$ in
a unit cell $[x_i,\,x_{i+1}]\times[y_j,\,y_{j+1}]\times[z_k,\,z_{k+1}]$ is interpolated by
\begin{align}
\Vec{B}_{\textrm{interp}}(x,\,y,\,z) =& \sum_{m=0}^{1} \sum_{n=0}^{1} \sum_{p=0}^{1} \omega_{x,m}\,
\omega_{y,n}\, \omega_{z,p}\, \Vec{B}(x_{i+m},\,y_{j+n},\,z_{k+p}), \label{eq:interp}\\
\omega_{x,0}=&\frac{x_{i+1}-x}{x_{i+1}-x_i},
\label{eq:w0}\\
\omega_{x,1}=&1- \omega_{x,0},
\label{eq:w1}
\end{align}
where $\omega_{y,n},\, \omega_{z,p}$ have forms similar to Equations \eqref{eq:w0} and \eqref{eq:w1}.
For uniform grids, $\{i,\,j,\,k\}$ in Equation \eqref{eq:interp} is obtained by simply flooring $\{x,\,y,\,z\}/\text{spacing}$.
For stretched grids, we instead apply a binary search to determine $i$, $j$, $k$, which is much more time-consuming than flooring; this determination reduces the final performance to 45\%-80\% (depending on the input settings).
}
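A minimal Python sketch of Equations \eqref{eq:interp}--\eqref{eq:w1} (illustrative, not FastQSL's CUDA kernel), assuming a rectilinear and possibly stretched grid and using a binary search for the cell indices, is:

```python
import numpy as np

def interp_B(Bgrid, xg, yg, zg, r):
    """Trilinear interpolation of Eq. (interp). Bgrid has shape (nx, ny, nz, 3);
    xg, yg, zg are monotonically increasing (possibly stretched) axes."""
    # np.searchsorted plays the role of the binary search for stretched grids;
    # for uniform grids it could be replaced by flooring r / spacing.
    idx = [int(np.clip(np.searchsorted(g, c) - 1, 0, len(g) - 2))
           for g, c in zip((xg, yg, zg), r)]
    i, j, k = idx
    w = []
    for g, c, m in zip((xg, yg, zg), r, idx):
        w0 = (g[m + 1] - c) / (g[m + 1] - g[m])  # Eq. (w0)
        w.append((w0, 1.0 - w0))                 # Eq. (w1)
    out = np.zeros(3)
    for mi in range(2):
        for ni in range(2):
            for pi in range(2):
                out += (w[0][mi] * w[1][ni] * w[2][pi]
                        * Bgrid[i + mi, j + ni, k + pi])
    return out

# Trilinear interpolation is exact for a field linear in x, y, z:
xg = np.linspace(0, 1, 6)
yg = np.sqrt(np.linspace(0, 1, 7))  # a stretched axis
zg = np.linspace(0, 1, 5)
X, Y, Z = np.meshgrid(xg, yg, zg, indexing="ij")
Bgrid = np.stack([X, 2 * Y, 3 * Z - X], axis=-1)
r = np.array([0.37, 0.52, 0.81])
assert np.allclose(interp_B(Bgrid, xg, yg, zg, r),
                   [0.37, 2 * 0.52, 3 * 0.81 - 0.37])
```

The exactness check on a linear field also exercises the stretched $y$ axis, where the cell index comes from the binary search rather than from flooring.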
\subsection{Tracing Scheme}
\jchen{
Code2016 and CodeYang utilize the classic \texttt{RK4} to solve Equation \eqref{eq:rb} and Equation \eqref{eq:rb2}. \texttt{QSL Squasher} was updated to version 2.0 in January 2019; the Cash-Karp tracing scheme was then removed and
only Euler integration was retained, and previous versions are currently not available online. All of these codes use a uniform, fixed step size (hereafter \texttt{step}).
}
\jchen{
FastQSL updates the tracing scheme with the 3/8-rule \texttt{RK4}
\citep{kuttaRK4}, which introduces a smaller step error than the classic \texttt{RK4},
and additionally provides \texttt{RKF45} \citep{fehlberg1969low} for further acceleration. \texttt{RKF45} calculates the difference between the \texttt{RK4} and \texttt{RK5} estimates of each step;
\texttt{tol} is the maximum tolerated difference,
in units of the original grid spacing.
If the difference is larger than \texttt{tol},
the trial step fails:
the step size is reduced and the tracing step is repeated from the same point.
If the difference is smaller than \texttt{tol},
\texttt{RKF45} accepts this tracing step and then enlarges the step size
according to the last difference and \texttt{tol}.
A smaller value of \texttt{tol} results in more precise output, but takes more computational resources.
}
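The adaptive tracing of Equation \eqref{eq:rb} can be sketched with an off-the-shelf embedded Runge--Kutta pair. The snippet below uses SciPy's \texttt{RK45} (a Dormand--Prince pair, a close relative of the \texttt{RKF45} scheme), not FastQSL's own CUDA implementation; the tolerance argument plays the role of \texttt{tol}:

```python
import numpy as np
from scipy.integrate import solve_ivp

def trace_field_line(Bfunc, r0, length, tol=1e-8):
    """Integrate d r / d s = B / |B| (Eq. rb) with an adaptive embedded
    Runge-Kutta pair; s is the arc length, so speed along the line is 1."""
    rhs = lambda s, r: Bfunc(r) / np.linalg.norm(Bfunc(r))
    return solve_ivp(rhs, (0.0, length), r0, method="RK45",
                     rtol=tol, atol=tol)

# Field lines of B = (-y, x, 0) are circles; after an arc length of
# 2*pi the unit circle closes on itself.
B = lambda r: np.array([-r[1], r[0], 0.0])
sol = trace_field_line(B, np.array([1.0, 0.0, 0.0]), 2 * np.pi)
assert np.allclose(sol.y[:, -1], [1.0, 0.0, 0.0], atol=1e-6)
```

The same structure, with the failed-trial and step-enlargement logic written out explicitly, is what the CUDA kernel implements per field line.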
\jchen{
If the grids are stretched, we adapt
\texttt{step} and \texttt{tol} to cells of different shapes for better performance. The scalings of \texttt{step} and \texttt{tol} in a cell
are:
\begin{align}
\text{scaling}&=
1 \left/\sqrt{
\left(\frac{B_x / B}{x_{i+1}-x_i}\right)^2+
\left(\frac{B_y / B}{y_{j+1}-y_j}\right)^2+
\left(\frac{B_z / B}{z_{k+1}-z_k}\right)^2}\right.,\\
\texttt{tol}_\text{cell}&=\texttt{tol}\times \text{scaling},\\
\texttt{step}_\text{cell}&=\texttt{step}\times \text{scaling}.
\end{align}
\texttt{tol} and \texttt{step} here are dimensionless; $\texttt{tol}_\text{cell}$ and $\texttt{step}_\text{cell}$, which are actually applied in the cell, have the same units as $x_i,\,y_j,\,z_k$.
}
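For a uniform cubic cell this scaling reduces to the grid spacing itself, since the bracketed sum is $1/h^2$ for any unit field direction. A small Python sketch (illustrative, not FastQSL's code):

```python
import numpy as np

def cell_scaling(Bvec, dx, dy, dz):
    """Scaling factor for step and tol in a stretched cell (the formula above):
    the unit field direction weighted by the cell's axis spacings."""
    b = Bvec / np.linalg.norm(Bvec)
    return 1.0 / np.sqrt((b[0] / dx)**2 + (b[1] / dy)**2 + (b[2] / dz)**2)

# For a uniform cubic cell of spacing h the scaling is h,
# whatever the field direction:
h = 0.05
assert np.isclose(cell_scaling(np.array([1.0, -2.0, 0.5]), h, h, h), h)
```

In a flattened cell, a field line crossing the short axis gets a proportionally smaller step, which is the intended behavior.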
\subsection{Code and Algorithm}
\jchen{Code2016 and FastQSL also provide $\mathcal{T}_w$ and the field-line length,
which are also extended to 3D by remaining constant along a field line, as in Equation \eqref{eq:qline}. Unlike the $Q$ calculation,
a low level of (or even no) refinement of $\mathcal{T}_w$ is good enough for data
analysis.
FastQSL additionally provides the coordinates of the end points of field lines, which have been used to locate flare ribbons in an MHD simulation \citep{Jiang2021},
and can help with the calculation of slip-squashing factors \citep{Titov2009} if the flows at the boundaries are known.
}
FastQSL provides two sets of code.
The first set is directly developed from Code2016
and still runs on IDL \jchen{(the reliance of Code2016 on SolarSoftWare is removed)}+{Fortran}\footnote{\url{https://fortran-lang.org/}}.
This set is optimized in many details; for example, it can be compiled by both \texttt{gfortran} and \texttt{ifort},
while Code2016 is designed only for \texttt{ifort}.
The second set is accelerated by NVIDIA GPUs.
The flowchart of the GPU program with Method \Rone is shown in Figure \ref{fig:chart};
the differences from Method \Rtwo are described in the caption of Figure \ref{fig:chart}.
The program is based on \href{http://python.org/}{Python} and \href{https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html}{CUDA/C}. The major input of this program is the data cube of the 3D magnetic field.
The data is loaded with \href{https://docs.scipy.org/doc/scipy/reference/io.html}{Scipy.io}, which is capable of reading various data file formats (e.g. .sav, .mat and unformatted). In preparation for GPU computing, the field data, the parameters and the coordinate array of the points to be calculated are then transferred to GPU memory.
The most computationally expensive part is tracing magnetic field lines, for which the \texttt{RKF45} solver is implemented in CUDA/C and compiled as a callable module, \texttt{TraceAllBline} (the green blocks in Figure \ref{fig:chart}), by {Cupy}\footnote{\url{https://cupy.dev/}}. The compiled module calculates magnetic field lines and derives the footpoint mapping. The footpoint mapping results are then transferred back to host memory for the calculation of $Q$.
Also, with the magnetic field lines computed in \texttt{TraceAllBline}, $\mathcal{T}_w$ can be simply derived with Equation \eqref{eq:tw}.
After the calculation, $Q$ and $\mathcal{T}_w$ are visualized with {matplotlib}\footnote{\url{https://matplotlib.org/}} (for 2D) and {pyvista}\footnote{\url{https://www.pyvista.org/}} (for 3D).\footnote{Since the server of \url{http://staff.ustc.edu.cn/} will stop after September 2022, the first set will be updated at \url{http://github.com/el2718/FastQSL}. For the second set, the source code, a demo example, and documentation of the method implementation are available online at \url{https://github.com/peijin94/FastQSL}.}
\section{Results} \label{sec:results}
We apply FastQSL to three common scenarios of magnetic field analysis:
(1) a potential field extrapolated from a solar active region;
\jchen{(2) an analytical quadrupole field;}
(3) a flux rope from the TD99 model.
These magnetic fields are used to demonstrate and benchmark FastQSL.
\subsection{An Extrapolated Potential Field}
The first test is done with the potential field extrapolated from an active region (NOAA AR 11112) on 2010 October 16 at 19:00 UT, which is the same field used for
analyzing the ``dome-plate QSL''
in \citet{chen2020extreme}. Strictly speaking, it should be ``dome-plate separatrices'' with singular $Q$, since there is a null point \citep{Titov2011,Scott2021}.
As shown in Figure \ref{fig:res2D}(d), $Q_\perp$ cannot show any footprints of the bald-patch
separatrix \citep{Titov&al1993,titov1999TD99}.
For example, in the region bounded by the black box in Figure \ref{fig:res2D}(b),
there is a footprint of a bald-patch separatrix on a PIL between
the green and the purple field lines (neighboring field lines on the two sides of a PIL that is the footprint of a bald-patch separatrix should depart from each other), which also satisfies
$(B_x, B_y, 0) \cdot \nabla B_z|_\text{PIL} > 0$ (the purple lines in Figure \ref{fig:res2D}(a)),
while such topology is missing in Figure \ref{fig:res2D}(d).
As discussed above, the footprints of QSLs in Figures \ref{fig:res2D}(d)(e) are much thinner than those in Figure \ref{fig:res2D}(b), and even appear discontinuous at the thinnest points.
Figures \ref{fig:res2D}(c)(f) are calculated with the original grids.
Figure \ref{fig:res2D}(c) keeps the continuity of the footprints,
while Figure \ref{fig:res2D}(f) shows many more discontinuities
than Figure \ref{fig:res2D}(e),
indicating that Method \Rtwo requires a higher level of refinement to display continuous separatrices.
With the significant improvement in efficiency, the 3D distribution of $Q$ can be obtained with ease within minutes.
Figure \ref{fig:POT3d} shows the 3D magnetic topologies above the magnetogram.
As shown in Figure \ref{fig:POT3d},
the dome structure is well represented, as in Figure 4 of \citet{chen2020extreme}.
\subsection{An Analytical Quadrupole Field}
\jchen{
In the first case, $Q$ decays exponentially away from the null point \citep{Pontin2016}, so some remnants of the high-$Q$ region around the separatrices can still be captured by Method \Rtwo.
An analytical quadrupole field clearly exposes the shortcoming of Method \Rtwo in capturing separatrices.
The field is
\begin{align}
\vec{B}_\text{quadrupole}(\vec{r})&= \sum_{i=1}^{4}\, q_i\, \frac{\vec{r}-\vec{r}_i}{|\vec{r}-\vec{r}_i|^3},
\label{eq:quadrapole}
\end{align}
where $\vec{r}_1=(-1.5,\,0,\,-0.5)$, $\vec{r}_2=(-0.5,\,0,\,-0.5)$, $\vec{r}_3=(0.5,\,0,\,-0.5)$, $\vec{r}_4=(1.5,\,0,\,-0.5)$ are the locations of the 4 magnetic charges, and $\{q_1,\,q_2,\,q_3,\,q_4\}=\{1,\,-1,\,1,\,-1\}$ are their strengths. This analytical field is uniformly discretised for FastQSL with a grid spacing of 0.02.
There are 4 sunspots on the photosphere (Figure \ref{fig:quadrapole}(a)),
and the field-line length jumps sharply at certain places (Figure \ref{fig:quadrapole}(b)(e)), which must be separatrices. Method \Rone can fully capture all these separatrices (Figure \ref{fig:quadrapole}(c)(f)),
while Figure \ref{fig:quadrapole}(g), from Method \Rtwo, is blank because all calculated values of $Q$ are below 10 in that region; the case is the same if we plot $\text{log}_{10}(Q_\perp)$.
In Figure \ref{fig:quadrapole}(c), in the area enclosed by the inner separatrices labeled ``in'', or in the area outside the outer separatrices labeled ``out'',
the symmetry of the magnetic field
implies that the field-line mapping \eqref{pi+-} is $x_2=-x_1$, $y_2=y_1$.
According to Equation \eqref{eq:Q+-}, the values of $Q$ in the areas ``in'' and ``out'' should be identically 2.
The true distribution of $Q$ around these separatrices should resemble a Dirac delta function.
As discussed in Section \ref{Q-Calculation}, since most refined grid points do not lie on the zero-thickness separatrices, Method \Rtwo cannot capture them (Figure \ref{fig:quadrapole}(d)(g)).
}
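The analytical quadrupole field above is straightforward to evaluate. The Python sketch below (illustrative, not the discretised cube fed to FastQSL) also checks two expected properties: $B_z$ vanishes on the $z$ axis by the antisymmetry of the charges under $x \to -x$, and the field is divergence-free away from the charges:

```python
import numpy as np

# Charge positions and strengths from the quadrupole field above
POS = np.array([[-1.5, 0.0, -0.5], [-0.5, 0.0, -0.5],
                [0.5, 0.0, -0.5], [1.5, 0.0, -0.5]])
CHG = np.array([1.0, -1.0, 1.0, -1.0])

def B_quadrupole(r):
    """Superposition of four magnetic charges."""
    d = r - POS  # shape (4, 3): displacement from each charge
    return (CHG[:, None] * d / np.linalg.norm(d, axis=1)[:, None]**3).sum(axis=0)

# Antisymmetry of the charges under x -> -x makes Bz vanish on the z axis:
assert abs(B_quadrupole(np.array([0.0, 0.0, 0.0]))[2]) < 1e-12

# Away from the charges the field is divergence-free, which a central
# difference confirms:
h, r0 = 1e-4, np.array([0.2, 0.3, 0.4])
div = sum((B_quadrupole(r0 + h * e)[i] - B_quadrupole(r0 - h * e)[i]) / (2 * h)
          for i, e in enumerate(np.eye(3)))
assert abs(div) < 1e-5
```

Sampling this function on a uniform grid with spacing 0.02 reproduces the discretised cube described in the text.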
\jchen{
Another case, $\vec{B}=(x,\,-y,\,0)$, has a similar distribution of $Q$.
We set the cylinder $x^2+y^2=1$ as
$S_1(\theta,\,z)$ and $S_2(\theta,\,z)$ for the $Q$ calculation (Figure \ref{fig:null2d}),
where $\theta$ is the azimuthal angle in radians.
For $0<\theta<\pi/2$ (the cases are similar for $\pi/2<\theta<2\,\pi$), the symmetry of the magnetic field gives the mappings $\theta_2=\pi/\,2-\theta_1$, $z_2=z_1$,
and all values of $Q$ are 2 by Equation \eqref{eq:Q+-}.
Separatrices appear only at $\theta=0,\,\pi/\,2,\,\pi,\,3\,\pi/\,2$.
}
\subsection{A TD99 model}
A TD99 model with the parameters
$R=110~\rm Mm$, $d=34~\rm Mm$, $L=55~\rm Mm$,
$a=49.4~\rm Mm$,
$I=4\times 10^{12}~\rm A$,
$I_0=1.66\times 10^{12}~\rm A$, $q= 10^{14}~\rm T\,m^2$ is also tested.
\jchen{
This analytical field is uniformly discretised with a grid spacing of 3.04~Mm.}
This setting produces a hyperbolic flux tube (HFT) \citep{Titov2002} topology around the flux rope;
the 3D structure of the HFT and the 3D distribution of the twist number are presented in Figure \ref{fig:TD3D}.
\section{Benchmark}
A benchmark is presented comparing the efficiency of FastQSL with the published codes
(i.e. \texttt{QSL Squasher}, CodeYang and Code2016). The quality of the resultant images and the time consumed depend on the choice of method and of \texttt{step} for \texttt{RK4} or \texttt{tol} for \texttt{RKF45}.
Images for different choices of parameters are shown in Figure \ref{fig:quality}.
There is a marginal value of \texttt{step} or $\texttt{tol}$
for the quality of the resultant image:
\jchen{for a specific row, comparing horizontally,} the image at the marginal value
should not show any recognizable difference from images with any lower value,
while above the marginal value the difference is recognizable.
\jchen{In Figure \ref{fig:quality}, the columns are labeled ``A, B, C, D'' to mark images
that are ground truth, marginal ground truth, distinguishable and unlike, respectively.}
The marginal value depends on the smoothness of the field.
For example, \jchen{compared to columns A and B, }most areas in column C are satisfactory, but some are not;
one should check the image quality to make a decision.
\jchen{
The image quality is not very sensitive to the value of \texttt{step} or \texttt{tol},
whereas the efficiency is.
For some analyses that do not require high quality, even column D can provide an acceptable result, with very fast performance. }
For Method \Rone,
if we fix the tracing parameter \texttt{step} or \texttt{tol},
we find that the angle $\theta=\arcsin(|B_{n,0} / B|)$ between the magnetic field line and the cross-section $S_0$ affects image quality, with a smaller $\theta$ giving poorer results.
To mitigate this effect,
we adopt the empirical formulas:
\begin{align}
\texttt{step}|_{S_0} &= \text{max}([\texttt{step}_{\perp} \times | B_{n,0} / B |,\, \texttt{step}_\text{min}]), \label{eq:step}\\%
\texttt{tol}|_{S_0} &= \texttt{tol}_{\perp} \times | B_{n,0} / B |^{1.5}, \label{eq:tol}
\end{align}
where $\texttt{step}_{\perp}$, $\texttt{step}_\text{min}$ and $\texttt{tol}_{\perp}$ are constants, and $\max()$ returns the larger of its arguments. %
When field lines are tangent to $S_0$, $B_{n,0} \to 0$,
which introduces spurious high-$Q$ structures.
To avoid this artifact, when $|B_{n,0} / B| < 0.05$
we locally rotate $S_0$ so that it is perpendicular to the field line,
trace 4 neighbouring field lines from the new cross-section, and then calculate $Q$.
The \texttt{step} or \texttt{tol} supplied to FastQSL therefore refers to the perpendicular case,
and is adjusted by Equation \eqref{eq:step} or \eqref{eq:tol} for every field line.
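The adjustment above can be sketched as follows (Python is used for illustration only; the constant values are placeholders for the user-supplied inputs, not FastQSL's defaults):

```python
def adjust_tracing_parameters(B_n0, B,
                              step_perp=1.0, step_min=0.1, tol_perp=1e-3):
    """Scale the perpendicular-case step/tol by the inclination
    |B_{n,0}/B| of the field line to S_0, as in Eqs. (step) and (tol)."""
    ratio = abs(B_n0 / B)
    step = max(step_perp * ratio, step_min)
    tol = tol_perp * ratio**1.5
    # Near-tangent field lines (|B_{n,0}/B| < 0.05): instead of shrinking
    # the step further, S_0 is rotated to be perpendicular to the field line
    needs_rotation = ratio < 0.05
    return step, tol, needs_rotation
```

The floor $\texttt{step}_\text{min}$ prevents the step from collapsing to zero for nearly tangent field lines before the rotation fallback takes over.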
Since Code2016 fixes the \texttt{step}, it requires a small marginal \texttt{step} of 0.4 in Figure \ref{fig:quality}.
For Method \Rtwo, FastQSL always sets $S_0$ to the perpendicular orientation, so this adjustment is not applied.
For Method \Rtwo, there are two strategies for calculating $\nabla \frac{\Vec{B}}{B}$.
The first prepares a 3D array of $\nabla \frac{\Vec{B}}{B}$ at the beginning using a second-order \jchen{finite difference; for uniform grids, for example, the formula is}
\begin{align}
\frac{\partial\, \Vec{B}/B}{\partial\, x}(x_i,\,y_j,\,z_k)
= \frac{1}{(x_{i+1}-x_{i-1})}\left[\frac{\Vec{B}(x_{i+1},\,y_j,\,z_k)} { B(x_{i+1},\,y_j,\,z_k)}-
\frac{\Vec{B}(x_{i-1},\,y_j,\,z_k)} { B(x_{i-1},\,y_j,\,z_k)}\right],
\label{eq:grad_B}
\end{align}
with similar forms for $\frac{\partial\, \Vec{B}/B}{\partial\, y}(x_i,\,y_j,\,z_k)$ and $\frac{\partial\, \Vec{B}/B}{\partial\, z}(x_i,\,y_j,\,z_k)$,
and then interpolates this array in the same manner as Equation \eqref{eq:interp}.
The second strategy interpolates $ \frac{\Vec{B}}{B}$ at the neighbouring points of $\vec{r}$, such as
$\vec{r}\pm(0.001,0,0)$, $\vec{r}\pm(0,0.001,0)$ and $\vec{r}\pm(0,0,0.001)$,
and then obtains $\nabla \frac{\Vec{B}}{B}$ by central differences.
The first strategy gives a smoother distribution than the second.
As shown by the bottom two rows of Figure \ref{fig:quality},
the marginal \texttt{tol} is $10^{-2.9}$ for the first strategy
and $10^{-4.4}$ for the second,
which means that at the same \texttt{tol} the first strategy gives better image quality.
The second strategy must calculate 6 additional values of $\frac{\Vec{B}}{B}$ per evaluation
and is therefore slower than the first.
The first strategy, however, requires additional storage for the 3D array of
$\nabla \frac{\Vec{B}}{B}$
(3 times the memory occupied by the array of $\vec{B}$), while the second does not.
CodeYang applies the second strategy.
Since the gradient of Equation \eqref{eq:interp} can be given analytically in every cell,
\texttt{QSL Squasher} applies a technique that is mathematically similar to the second strategy,
although computing one value of $\nabla \frac{\Vec{B}}{B}$ takes about as long
as with the first.
The first set of FastQSL uses the first strategy, while the second set applies the second because of the limited GPU memory.
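As an illustration (a NumPy sketch, not FastQSL's Fortran/CUDA implementation; array shapes and helper names are our own), the two strategies can be written as:

```python
import numpy as np

def unit_field_gradient_precomputed(B, dx):
    """Strategy 1: precompute the full 3D array of grad(B/|B|) once with
    second-order central differences (cf. Eq. grad_B), to be interpolated
    later along each field line.  B has shape (nx, ny, nz, 3)."""
    b = B / np.linalg.norm(B, axis=-1, keepdims=True)
    # np.gradient returns db/dx, db/dy, db/dz over the three grid axes
    return np.stack(np.gradient(b, dx, axis=(0, 1, 2)), axis=-1)

def unit_field_gradient_on_the_fly(interp_b, r, eps=1e-3):
    """Strategy 2: interpolate b = B/|B| at 6 neighbours of r and take
    central differences; no extra storage, but 6 extra interpolations
    per evaluation.  `interp_b` is any callable r -> b(r)."""
    grad = np.empty((3, 3))           # grad[i, k] = d b_i / d x_k
    for k in range(3):
        dr = np.zeros(3)
        dr[k] = eps
        grad[:, k] = (interp_b(r + dr) - interp_b(r - dr)) / (2 * eps)
    return grad
```

The memory/compute trade-off discussed above is visible here: strategy 1 stores a $(n_x, n_y, n_z, 3, 3)$ array up front, while strategy 2 performs six extra interpolations every time the gradient is needed.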
Compared with Method \Rone (the Code2016 row of Figure \ref{fig:quality}),
Method \Rtwo (the top two rows of Figure \ref{fig:quality}) allows a much larger \texttt{step}.
Providing a completely impartial benchmark for all codes and both methods is extremely difficult; we therefore report the benchmark at the marginal values \jchen{(column B of Figure \ref{fig:quality})} to give a sense of the computation time.
Using the TD99 model as input,
and calculating the same region (the 3D region of Figure \ref{fig:TD3D}) at the same resolution,
efficiency is measured as the number of $Q$ values calculated per unit time.
The original CodeYang cannot accept a \texttt{step} $> 1$, because it extends only 1 ghost layer at the boundaries; we modified the code to use 10 ghost layers for the benchmark.
\texttt{QSL Squasher} allows adaptive mesh refinement: the possible locations of QSLs are quickly identified by a Field-line Length Edge (FLEDGE) map, and $Q_\perp$ is then calculated only at these locations;
\cite{tassev2017qsl} claim an order-of-magnitude speed-up with adaptive refinement.
We first do not apply adaptive refinement,
but directly set the grid resolution for $Q_\perp$ to be the same as in the other panels of Figure \ref{fig:quality};
this yields a marginal \texttt{step} of 0.3, which we use for the benchmark (Table \ref{tb:bench}).
A larger step shrinks the resulting QSLs significantly (Figure \ref{fig:quality}),
with a slight effect even at a \texttt{step} of 0.3.
\jchen{We attribute this artifact to the relatively large step error of Euler integration, because our codes show the same shrinking if we change the tracing scheme to Euler integration}.
We also tried adaptive refinement:
with the threshold of the field-line-length jump that triggers a refinement and the maximum number of refinements chosen to give a satisfactory image comparable to column B of Figure \ref{fig:quality},
the GPU performance is 25 kQ/s, surprisingly lower than the 189 kQ/s obtained without adaptive refinement in Table \ref{tb:bench}.
We suspect that the gradient of the field-line length can still be small inside thick QSLs,
in which case adaptive refinement cannot improve the performance.
As shown by Table \ref{tb:bench},
Code2016 is faster than \texttt{QSL Squasher} and CodeYang.
\texttt{QSL Squasher} is slowed down by Euler integration, double-precision floating-point arithmetic, support for \jchen{stretched} grids, its way of calculating $\nabla \frac{\Vec{B}}{B}$, and other potential factors.
CodeYang is slowed down by double-precision floating-point arithmetic,
its way of calculating $\nabla \frac{\Vec{B}}{B}$, and other potential factors.
Compared to Code2016, FastQSL achieves a significant speed-up.
For the first set of FastQSL,
compiling with \texttt{ifort} is slightly faster than with \texttt{gfortran}.
Tracing with \texttt{RKF45} is slightly faster than with \texttt{RK4},
but this comparison may not hold for all kinds of magnetic field:
for a highly twisted field, \texttt{RKF45} may suffer many failed trial steps,
in which case \texttt{RK4} can be even faster.
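The failed-trial behaviour can be sketched as follows (a Python illustration; for simplicity the error is estimated by step doubling rather than the embedded \texttt{RKF45} pair that FastQSL actually uses, but the accept/shrink retry logic is the same in spirit):

```python
import numpy as np

def rk4_step(f, r, h):
    """One classic RK4 step for the field-line equation dr/ds = f(r)."""
    k1 = f(r)
    k2 = f(r + 0.5 * h * k1)
    k3 = f(r + 0.5 * h * k2)
    k4 = f(r + h * k3)
    return r + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def adaptive_step(f, r, h, tol):
    """One adaptive step with failed-trial retries.  The error estimate
    compares a full step against two half steps; each rejection costs
    the work already spent on the trial."""
    failed = 0
    while True:
        coarse = rk4_step(f, r, h)
        fine = rk4_step(f, rk4_step(f, r, 0.5 * h), 0.5 * h)
        if np.linalg.norm(coarse - fine) <= tol:
            return fine, h, failed   # accept, keeping the finer estimate
        h *= 0.5                     # failed trial: shrink and retry
        failed += 1

# Circular field lines b = (-y, x, 0)/sqrt(x^2 + y^2): starting with a
# step far too large for the curvature forces several failed trials.
b_hat = lambda r: np.array([-r[1], r[0], 0.0]) / np.hypot(r[0], r[1])
r1, h_used, n_failed = adaptive_step(b_hat, np.array([1.0, 0.0, 0.0]), 2.0, 1e-6)
```

In a highly twisted field nearly every accepted step can be preceded by such rejected trials, which is why fixed-step \texttt{RK4} can win despite its lack of error control.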
When tracing with \texttt{RK4},
Method \Rtwo is slightly faster than Method \Rone;
the comparison is reversed with \texttt{RKF45}.
The second set of FastQSL, optimized for GPU,
achieves its best efficiency with Method \Rone.
When $Q$ is calculated by Method \Rtwo, the second set is even slower than the first, since it uses the second strategy to calculate $\nabla \frac{\Vec{B}}{B}$.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Code & Processor & Compiler & Method & Tracing scheme & Parameter & Performance\\
\hline
\multirow{2}{*}{\texttt{QSL Squasher}}
& CPU & \multirow{2}{*}{OpenCL}
& \multirow{3}{*}{\Rtwo,\, 2} & \multirow{2}{*}{Euler integration} & \multirow{2}{*}{\texttt{step} = 0.3} & 29 kQ/s \\
\cline{2-2}\cline{7-7}
\multirow{2}{*}{}
& GPU & \multirow{2}{*}{}
& \multirow{3}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} & 189 kQ/s \\
\cline{1-3} \cline{5-7}
CodeYang
& \multirow{7}{*}{CPU} & \texttt{gfortran} & \multirow{3}{*}{} & \multirow{2}{*}{classic \texttt{RK4}} & \texttt{step} = 2.1 & 27 kQ/s \\
\cline{1-1} \cline{3-4} \cline{6-7}
Code2016
& \multirow{7}{*}{} & \texttt{ifort} & \multirow{3}{*}{\Rone} & \multirow{2}{*}{} & \texttt{step} = 0.4 & 191 kQ/s\\
\cline{1-1} \cline{3-3} \cline{5-7}
\multirow{7}{*}{FastQSL}
& \multirow{7}{*}{} & \texttt{gfortran} & \multirow{3}{*}{}
& \multirow{3}{*}{3/8-rule \texttt{RK4}}
& \multirow{2}{*}{$ \texttt{step}_\perp = 3.0 $} & 749 kQ/s\\
\cline{3-3} \cline{7-7}
\multirow{7}{*}{} & \multirow{6}{*}{} & \multirow{4}{*}{\texttt{ifort}} & \multirow{3}{*}{} & \multirow{3}{*}{} & \multirow{2}{*}{} & 854 kQ/s\\
\cline{4-4} \cline{6-7}
\multirow{7}{*}{} & \multirow{6}{*}{} & \multirow{4}{*}{} & \multirow{2}{*}{\Rtwo, 1} & \multirow{3}{*}{} & $ \texttt{step} = 4.0 $ & 907 kQ/s\\
\cline{5-7}
\multirow{7}{*}{} & \multirow{7}{*}{} & \multirow{4}{*}{} & \multirow{2}{*}{} & \multirow{4}{*}{\texttt{RKF45}} &
$\texttt{tol} = 10^{-2.9}$ & 1.11 MQ/s\\
\cline{4-4} \cline{6-7}
\multirow{7}{*}{} & \multirow{7}{*}{} & \multirow{3}{*}{} & \multirow{2}{*}{\Rone} & \multirow{4}{*}{} &
\multirow{2}{*}{$\texttt{tol}_\perp = 10^{-3.2}$} & 1.43 MQ/s \\
\cline{2-3} \cline{7-7}
\multirow{7}{*}{} & \multirow{2}{*}{GPU} & \multirow{2}{*}{CUDA/C} & \multirow{2}{*}{} &\multirow{4}{*}{} & \multirow{2}{*}{} & 4.53 MQ/s\\
\cline{4-4} \cline{6-7}
\multirow{7}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} & \Rtwo,\, 2
& \multirow{4}{*}{} & $\texttt{tol} = 10^{-4.4}$ & 1.13 MQ/s\\
\hline
\end{tabular}
\caption{The computational efficiency of different methods, measured as the number of $Q$ values calculated per second.
In this test, the CPU is an Intel Core i9 10900K and the GPU is an RTX 3070 OC.
The version of \texttt{gfortran} is 9.4.0,
and the version of \texttt{ifort} is 2021.6.0.
The columns ``Processor'', ``Compiler'' and ``Performance'' refer only to the computation of QSLs; IO, preprocessing and visualization are not included.
The numbers following \Rtwo indicate the first or the second strategy for calculating
$\nabla \frac{\Vec{B}}{B}$.
The column ``Parameter'' gives the marginal value that produces a satisfactory image, i.e. column B of Figure \ref{fig:quality}.
\texttt{QSL Squasher} is tested with a smaller data cube of the same TD99 model because of its high memory requirement.
}
\label{tb:bench}
\end{table}
\section{Discussion and Summary}
Compared with Code2016, the computational efficiency is increased by a factor of 24 in this work. The improvement comes mainly from two aspects: a new computing architecture and algorithmic refinements.
In data-inspection tasks such as QSL identification and quantification, GPU acceleration improves the efficiency of interactive inspection and helps to discover new features in the data.
In simulation tasks \citep{feng2013gpu}, massively parallel GPU computation helps explore more of the parameter space within a given time and resource budget. GPU computing is also more environmentally friendly, reducing carbon emissions \citep{stevens2020imperative,2020NatAsGPU}.
To summarize, we have developed a reliable method for calculating the squashing factor $Q$ and the twist number %
in data cubes of magnetic field on \jchen{Cartesian} grids. The method achieves unprecedented computational efficiency: maps of $Q$ and the twist number can be obtained within a few seconds for 2D input and a few minutes for 3D input. This high efficiency benefits the analysis of magnetic topology, especially for MHD simulations, which may require computing 3D $Q$ and the twist number for a time series.
\section{Acknowledgements}
J.C. thanks Guo, Yang for constructive discussions, and acknowledges the support of the China Scholarship Council (No.~201706340140).
The research was supported by
the National Natural Science Foundation of China (42188101, 41974199, 41574167, 41774150, and 11925302),
the B-type Strategic Priority Program of the Chinese Academy of Sciences (XDB41000000), and the STELLAR project (952439).
\bibliography{cite}
\clearpage
|
Title:
Galaxy clustering from the bottom up: A Streaming Model emulator I |
Abstract: In this series of papers, we present a simulation-based model for the
non-linear clustering of galaxies based on separate modelling of clustering in
real space and velocity statistics. In the first paper, we present an emulator
for the real-space correlation function of galaxies, whereas the emulator of
the real-to-redshift space mapping based on velocity statistics is presented in
the second paper. Here, we show that a neural network emulator for real-space
galaxy clustering trained on data extracted from the Dark Quest suite of N-body
simulations achieves sub-per cent accuracies on scales $1 < r < 30 $ $h^{-1}
\,\mathrm{Mpc}$, and better than $3\%$ on scales $r < 1$ $h^{-1}\mathrm{Mpc}$
in predicting the clustering of dark-matter haloes with number density
$10^{-3.5}$ $(h^{-1}\mathrm{Mpc})^{-3}$, close to that of SDSS LOWZ-like
galaxies. The halo emulator can be combined with a galaxy-halo connection model
to predict the galaxy correlation function through the halo model. We
demonstrate that we accurately recover the cosmological and galaxy-halo
connection parameters when galaxy clustering depends only on the mass of the
galaxies' host halos. Furthermore, the constraining power in $\sigma_8$
increases by about a factor of $2$ when including scales smaller than $5$
$h^{-1} \,\mathrm{Mpc}$. However, when mass is not the only property
responsible for galaxy clustering, as observed in hydrodynamical or
semi-analytic models of galaxy formation, our emulator gives biased constraints
on $\sigma_8$. This bias disappears when small scales ($r < 10$
$h^{-1}\mathrm{Mpc}$) are excluded from the analysis. This shows that a vanilla
halo model could introduce biases into the analysis of future datasets.
| https://export.arxiv.org/pdf/2208.05218 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
editorials, notices -- miscellaneous
\end{keywords}
\begingroup
\let\clearpage\relax
\tableofcontents
\endgroup
\newpage
\section{Introduction}
The journal \textit{Monthly Notices of the Royal Astronomical Society} (MNRAS) encourages authors to prepare their papers using \LaTeX.
The style file \verb'mnras.cls' can be used to approximate the final appearance of the journal, and provides numerous features to simplify the preparation of papers.
This document, \verb'mnras_guide.tex', provides guidance on using that style file and the features it enables.
This is not a general guide on how to use \LaTeX, of which many excellent examples already exist.
We particularly recommend \textit{Wikibooks \LaTeX}\footnote{\url{https://en.wikibooks.org/wiki/LaTeX}}, a collaborative online textbook which is of use to both beginners and experts.
Alternatively there are several other online resources, and most academic libraries also hold suitable beginner's guides.
For guidance on the contents of papers, journal style, and how to submit a paper, see the MNRAS Instructions to Authors\footnote{\label{foot:itas}\url{http://www.oxfordjournals.org/our_journals/mnras/for_authors/}}.
Only technical issues with the \LaTeX\ class are considered here.
\section{Obtaining and installing the MNRAS package}
Some \LaTeX\ distributions come with the MNRAS package by default.
If yours does not, you can either install it using your distribution's package manager, or download it from the Comprehensive \TeX\ Archive Network\footnote{\url{http://www.ctan.org/tex-archive/macros/latex/contrib/mnras}} (CTAN).
The files can either be installed permanently by placing them in the appropriate directory (consult the documentation for your \LaTeX\ distribution), or used temporarily by placing them in the working directory for your paper.
To use the MNRAS package, simply specify \verb'mnras' as the document class at the start of a \verb'.tex' file:
\begin{verbatim}
\documentclass{mnras}
\end{verbatim}
Then compile \LaTeX\ (and if necessary \bibtex) in the usual way.
\section{Preparing and submitting a paper}
We recommend that you start with a copy of the \texttt{mnras\_template.tex} file.
Rename the file, update the information on the title page, and then work on the text of your paper.
Guidelines for content, style etc. are given in the instructions to authors on the journal's website$^{\ref{foot:itas}}$.
Note that this document does not follow all the aspects of MNRAS journal style (e.g. it has a table of contents).
If a paper is accepted, it is professionally typeset and copyedited by the publishers.
It is therefore likely that minor changes to presentation will occur.
For this reason, we ask authors to ignore minor details such as slightly long lines, extra blank spaces, or misplaced figures, because these details will be dealt with during the production process.
Papers must be submitted electronically via the online submission system; paper submissions are not permitted.
For full guidance on how to submit a paper, see the instructions to authors.
\section{Class options}
\label{sec:options}
There are several options which can be added to the document class line like this:
\begin{verbatim}
\documentclass[option1,option2]{mnras}
\end{verbatim}
The available options are:
\begin{itemize}
\item \verb'letters' -- used for papers in the journal's Letters section.
\item \verb'onecolumn' -- single column, instead of the default two columns. This should be used {\it only} if necessary for the display of numerous very long equations.
\item \verb'doublespacing' -- text has double line spacing. Please don't submit papers in this format.
\item \verb'referee' -- \textit{(deprecated)} single column, double spaced, larger text, bigger margins. Please don't submit papers in this format.
\item \verb'galley' -- \textit{(deprecated)} no running headers, no attempt to align the bottom of columns.
\item \verb'landscape' -- \textit{(deprecated)} sets the whole document on landscape paper.
\item \verb"usenatbib" -- \textit{(all papers should use this)} this uses Patrick Daly's \verb"natbib.sty" package for citations.
\item \verb"usegraphicx" -- \textit{(most papers will need this)} includes the \verb'graphicx' package, for inclusion of figures and images.
\item \verb'useAMS' -- adds support for upright Greek characters \verb'\upi', \verb'\umu' and \verb'\upartial' ($\upi$, $\umu$ and $\upartial$). Only these three are included, if you require other symbols you will need to include the \verb'amsmath' or \verb'amsymb' packages (see section~\ref{sec:packages}).
\item \verb"usedcolumn" -- includes the package \verb"dcolumn", which includes two new types of column alignment for use in tables.
\end{itemize}
Some of these options are deprecated and retained for backwards compatibility only.
Others are used in almost all papers, but again are retained as options to ensure that papers written decades ago will continue to compile without problems.
If you want to include any other packages, see section~\ref{sec:packages}.
\section{Title page}
If you are using \texttt{mnras\_template.tex} the necessary code for generating the title page, headers and footers is already present.
Simply edit the title, author list, institutions, abstract and keywords as described below.
\subsection{Title}
There are two forms of the title: the full version used on the first page, and a short version which is used in the header of other odd-numbered pages (the `running head').
Enter them with \verb'\title[]{}' like this:
\begin{verbatim}
\title[Running head]{Full title of the paper}
\end{verbatim}
The full title can be multiple lines (use \verb'\\' to start a new line) and may be as long as necessary, although we encourage authors to use concise titles. The running head must be $\le~45$ characters on a single line.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Authors and institutions}
Like the title, there are two forms of author list: the full version which appears on the title page, and a short form which appears in the header of the even-numbered pages. Enter them using the \verb'\author[]{}' command.
If the author list is more than one line long, start a new line using \verb'\newauthor'. Use \verb'\\' to start the institution list. Affiliations for each author should be indicated with a superscript number, and correspond to the list of institutions below the author list.
For example, if I were to write a paper with two coauthors at another institution, one of whom also works at a third location:
\begin{verbatim}
\author[K. T. Smith et al.]{
Keith T. Smith,$^{1}$
A. N. Other,$^{2}$
and Third Author$^{2,3}$
\\
$^{1}$Affiliation 1\\
$^{2}$Affiliation 2\\
$^{3}$Affiliation 3}
\end{verbatim}
Affiliations should be in the format `Department, Institution, Street Address, City and Postal Code, Country'.
Email addresses can be inserted with the \verb'\thanks{}' command which adds a title page footnote.
If you want to list more than one email, put them all in the same \verb'\thanks' and use \verb'\footnotemark[]' to refer to the same footnote multiple times.
Present addresses (if different to those where the work was performed) can also be added with a \verb'\thanks' command.
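As an illustrative sketch (all names, addresses and affiliations below are placeholders), \verb'\thanks' can be combined with the author list like this:

```latex
\author[K. T. Smith et al.]{
Keith T. Smith,$^{1}$\thanks{E-mail: ktsmith@example.org}
and A. N. Other$^{2}$\thanks{Present address: Somewhere Else Institute.}
\\
$^{1}$Affiliation 1\\
$^{2}$Affiliation 2}
```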
\subsection{Abstract and keywords}
The abstract is entered in an \verb'abstract' environment:
\begin{verbatim}
\begin{abstract}
The abstract of the paper goes here.
\end{abstract}
\end{verbatim}
\noindent Note that there is a word limit on the length of abstracts.
For the current word limit, see the journal instructions to authors$^{\ref{foot:itas}}$.
Immediately following the abstract, a set of keywords is entered in a \verb'keywords' environment:
\begin{verbatim}
\begin{keywords}
keyword 1 -- keyword 2 -- keyword 3
\end{keywords}
\end{verbatim}
\noindent There is a list of permitted keywords, which is agreed between all the major astronomy journals and revised every few years.
Do \emph{not} make up new keywords!
For the current list of allowed keywords, see the journal's instructions to authors$^{\ref{foot:itas}}$.
\section{Sections and lists}
Sections and lists are generally the same as in the standard \LaTeX\ classes.
\subsection{Sections}
\label{sec:sections}
Sections are entered in the usual way, using \verb'\section{}' and its variants. It is possible to nest up to four section levels:
\begin{verbatim}
\section{Main section}
\subsection{Subsection}
\subsubsection{Subsubsection}
\paragraph{Lowest level section}
\end{verbatim}
\noindent The other \LaTeX\ sectioning commands \verb'\part', \verb'\chapter' and \verb'\subparagraph{}' are deprecated and should not be used.
Some sections are not numbered as part of journal style (e.g. the Acknowledgements).
To insert an unnumbered section use the `starred' version of the command: \verb'\section*{}'.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Lists}
Two forms of lists can be used in MNRAS -- numbered and unnumbered.
For a numbered list, use the \verb'enumerate' environment:
\begin{verbatim}
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
\end{verbatim}
\noindent which produces
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
Note that the list uses lowercase Roman numerals, rather than the \LaTeX\ default Arabic numerals.
For an unnumbered list, use the \verb'description' environment without the optional argument:
\begin{verbatim}
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
\end{verbatim}
\noindent which produces
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
Bulleted lists using the \verb'itemize' environment should not be used in MNRAS; it is retained for backwards compatibility only.
\section{Mathematics and symbols}
The MNRAS class mostly adopts standard \LaTeX\ handling of mathematics, which is briefly summarised here.
See also section~\ref{sec:packages} for packages that support more advanced mathematics.
Mathematics can be inserted into the running text using the syntax \verb'$1+1=2$', which produces $1+1=2$.
Use this only for short expressions or when referring to mathematical quantities; equations should be entered as described below.
\subsection{Equations}
Equations should be entered using the \verb'equation' environment, which automatically numbers them:
\begin{verbatim}
\begin{equation}
a^2=b^2+c^2
\end{equation}
\end{verbatim}
\noindent which produces
\begin{equation}
a^2=b^2+c^2
\end{equation}
By default, the equations are numbered sequentially throughout the whole paper. If a paper has a large number of equations, it may be better to number them by section (2.1, 2.2 etc.). To do this, add the command \verb'\numberwithin{equation}{section}' to the preamble.
It is also possible to produce un-numbered equations by using the \LaTeX\ built-in \verb'\['\textellipsis\verb'\]' and \verb'$$'\textellipsis\verb'$$' commands; however MNRAS requires that all equations are numbered, so these commands should be avoided.
\subsection{Special symbols}
\begin{table}
\caption{Additional commands for special symbols commonly used in astronomy. These can be used anywhere.}
\label{tab:anysymbols}
\begin{tabular*}{\columnwidth}{@{}l@{\hspace*{50pt}}l@{\hspace*{50pt}}l@{}}
\hline
Command & Output & Meaning\\
\hline
\verb'\sun' & \sun & Sun, solar\\[2pt] %
\verb'\earth' & \earth & Earth, terrestrial\\[2pt]
\verb'\micron' & \micron & microns\\[2pt]
\verb'\degr' & \degr & degrees\\[2pt]
\verb'\arcmin' & \arcmin & arcminutes\\[2pt]
\verb'\arcsec' & \arcsec & arcseconds\\[2pt]
\verb'\fdg' & \fdg & fraction of a degree\\[2pt]
\verb'\farcm' & \farcm & fraction of an arcminute\\[2pt]
\verb'\farcs' & \farcs & fraction of an arcsecond\\[2pt]
\verb'\fd' & \fd & fraction of a day\\[2pt]
\verb'\fh' & \fh & fraction of an hour\\[2pt]
\verb'\fm' & \fm & fraction of a minute\\[2pt]
\verb'\fs' & \fs & fraction of a second\\[2pt]
\verb'\fp' & \fp & fraction of a period\\[2pt]
\verb'\diameter' & \diameter & diameter\\[2pt]
\verb'\sq' & \sq & square, Q.E.D.\\[2pt]
\hline
\end{tabular*}
\end{table}
\begin{table}
\caption{Additional commands for mathematical symbols. These can only be used in maths mode.}
\label{tab:mathssymbols}
\begin{tabular*}{\columnwidth}{l@{\hspace*{40pt}}l@{\hspace*{40pt}}l}
\hline
Command & Output & Meaning\\
\hline
\verb'\upi' & $\upi$ & upright pi\\[2pt] %
\verb'\umu' & $\umu$ & upright mu\\[2pt]
\verb'\upartial' & $\upartial$ & upright partial derivative\\[2pt]
\verb'\lid' & $\lid$ & less than or equal to\\[2pt]
\verb'\gid' & $\gid$ & greater than or equal to\\[2pt]
\verb'\la' & $\la$ & less than of order\\[2pt]
\verb'\ga' & $\ga$ & greater than of order\\[2pt]
\verb'\loa' & $\loa$ & less than approximately\\[2pt]
\verb'\goa' & $\goa$ & greater than approximately\\[2pt]
\verb'\cor' & $\cor$ & corresponds to\\[2pt]
\verb'\sol' & $\sol$ & similar to or less than\\[2pt]
\verb'\sog' & $\sog$ & similar to or greater than\\[2pt]
\verb'\lse' & $\lse$ & less than or homotopic to \\[2pt]
\verb'\gse' & $\gse$ & greater than or homotopic to\\[2pt]
\verb'\getsto' & $\getsto$ & from over to\\[2pt]
\verb'\grole' & $\grole$ & greater over less\\[2pt]
\verb'\leogr' & $\leogr$ & less over greater\\
\hline
\end{tabular*}
\end{table}
Some additional symbols of common use in astronomy have been added in the MNRAS class. These are shown in tables~\ref{tab:anysymbols}--\ref{tab:mathssymbols}. The command names are -- as far as possible -- the same as those used in other major astronomy journals.
Many other mathematical symbols are also available, either built into \LaTeX\ or via additional packages. If you want to insert a specific symbol but don't know the \LaTeX\ command, we recommend using the Detexify website\footnote{\url{http://detexify.kirelabs.org}}.
Sometimes font or coding limitations mean a symbol may not get smaller when used in sub- or superscripts, and will therefore be displayed at the wrong size. There is no need to worry about this as it will be corrected by the typesetter during production.
To produce bold symbols in mathematics, use \verb'\bmath' for simple variables, and the \verb'bm' package for more complex symbols (see section~\ref{sec:packages}). Vectors are set in bold italic, using \verb'\mathbfit{}'.
For matrices, use \verb'\mathbfss{}' to produce a bold sans-serif font e.g. \mathbfss{H}; this works even outside maths mode, but not all symbols are available (e.g. Greek). For $\nabla$ (del, used in gradients, divergence etc.) use \verb'$\nabla$'.
\subsection{Ions}
A new \verb'\ion{}{}' command has been added to the class file, for the correct typesetting of ionisation states.
For example, to typeset singly ionised calcium use \verb'\ion{Ca}{ii}', which produces \ion{Ca}{ii}.
\section{Figures and tables}
\label{sec:fig_table}
Figures and tables (collectively called `floats') are mostly the same as built into \LaTeX.
\subsection{Basic examples}
Figures are inserted in the usual way using a \verb'figure' environment and \verb'\includegraphics'. The example Figure~\ref{fig:example} was generated using the code:
\begin{verbatim}
\begin{figure}
	\includegraphics[width=\columnwidth]{example}
	\caption{An example figure.}
	\label{fig:example}
\end{figure}
\end{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
The example Table~\ref{tab:example} was generated using the code:
\begin{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
\subsection{Captions and placement}
Captions go \emph{above} tables but \emph{below} figures, as in the examples above.
The \LaTeX\ float placement commands \verb'[htbp]' are intentionally disabled.
Layout of figures and tables will be adjusted by the publisher during the production process, so authors should not concern themselves with placement to avoid disappointment and wasted effort.
Simply place the \LaTeX\ code close to where the figure or table is first mentioned in the text and leave exact placement to the publishers.
By default a figure or table will occupy one column of the page.
To produce a wider version which covers both columns, use the \verb'figure*' or \verb'table*' environment.
If a figure or table is too long to fit on a single page it can be split into several parts.
Create an additional figure or table which uses \verb'\contcaption{}' instead of \verb'\caption{}'.
This will automatically correct the numbering and add `\emph{continued}' at the start of the caption.
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:continued} was generated using the code:
\begin{verbatim}
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
To produce a landscape figure or table, use the \verb'pdflscape' package and the \verb'landscape' environment.
The landscape Table~\ref{tab:landscape} was produced using the code:
\begin{verbatim}
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & ...\\
Unit & Unit & ...\\
\hline
Data & Data & ...\\
Data & Data & ...\\
...\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\end{verbatim}
Unfortunately this method will force a page break before the table appears.
More complicated solutions are possible, but authors shouldn't worry about this.
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & Header & Header & Header & Header & Header & Header & Header & Header\\
Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit \\
\hline
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\section{References and citations}
\subsection{Cross-referencing}
The usual \LaTeX\ commands \verb'\label{}' and \verb'\ref{}' can be used for cross-referencing within the same paper.
We recommend that you use these whenever relevant, rather than writing out the section or figure numbers explicitly.
This ensures that cross-references are updated whenever the numbering changes (e.g. during revision) and provides clickable links (if available in your compiler).
It is best to give each section, figure and table a logical label.
For example, Table~\ref{tab:mathssymbols} has the label \verb'tab:mathssymbols', whilst section~\ref{sec:packages} has the label \verb'sec:packages'.
Add the label \emph{after} the section or caption command, as in the examples in sections~\ref{sec:sections} and \ref{sec:fig_table}.
Enter the cross-reference with a non-breaking space between the type of object and the number, like this: \verb'see Figure~\ref{fig:example}'.
The \verb'\autoref{}' command can be used to automatically fill out the type of object, saving on typing.
It also causes the link to cover the whole phrase rather than just the number, but for that reason is only suitable for single cross-references rather than ranges.
For example, \verb'\autoref{tab:journal_abbr}' produces \autoref{tab:journal_abbr}.
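Putting these together, a labelled figure and its cross-references could be entered like this (echoing the examples earlier in this guide):
\begin{verbatim}
\section{Observations}
\label{sec:obs}

\begin{figure}
\includegraphics[width=\columnwidth]{example}
\caption{An example figure.}
\label{fig:example}
\end{figure}

As shown in Figure~\ref{fig:example}
(section~\ref{sec:obs})...
\end{verbatim}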
\subsection{Citations}
\label{sec:cite}
MNRAS uses the Harvard -- author (year) -- citation style, e.g. \citet{author2013}.
This is implemented in \LaTeX\ via the \verb'natbib' package, which in turn is included via the \verb'usenatbib' package option (see section~\ref{sec:options}), which should be used in all papers.
Each entry in the reference list has a `key' (see section~\ref{sec:ref_list}) which is used to generate citations.
There are two basic \verb'natbib' commands:
\begin{description}
\item \verb'\citet{key}' produces an in-text citation: \citet{author2013}
\item \verb'\citep{key}' produces a bracketed (parenthetical) citation: \citep{author2013}
\end{description}
Citations will include clickable links to the relevant entry in the reference list, if supported by your \LaTeX\ compiler.
\defcitealias{smith2014}{Paper~I}
\begin{table*}
\caption{Common citation commands, provided by the \texttt{natbib} package.}
\label{tab:natbib}
\begin{tabular}{lll}
\hline
Command & Output & Note\\
\hline
\verb'\citet{key}' & \citet{smith2014} & \\
\verb'\citep{key}' & \citep{smith2014} & \\
\verb'\citep{key,key2}' & \citep{smith2014,jones2015} & Multiple papers\\
\verb'\citet[table 4]{key}' & \citet[table 4]{smith2014} & \\
\verb'\citep[see][figure 7]{key}' & \citep[see][figure 7]{smith2014} & \\
\verb'\citealt{key}' & \citealt{smith2014} & For use with manual brackets\\
\verb'\citeauthor{key}' & \citeauthor{smith2014} & If already cited in close proximity\\
\verb'\defcitealias{key}{Paper~I}' & & Define an alias (doesn't work in floats)\\
\verb'\citetalias{key}' & \citetalias{smith2014} & \\
\verb'\citepalias{key}' & \citepalias{smith2014} & \\
\hline
\end{tabular}
\end{table*}
There are a number of other \verb'natbib' commands which can be used for more complicated citations.
The most commonly used ones are listed in Table~\ref{tab:natbib}.
For full guidance on their use, consult the \verb'natbib' documentation\footnote{\url{http://www.ctan.org/pkg/natbib}}.
If a reference has several authors, \verb'natbib' will automatically use `et al.' if there are more than two authors. However, if a paper has exactly three authors, MNRAS style is to list all three on the first citation and use `et al.' thereafter. If you are using \bibtex\ (see section~\ref{sec:ref_list}) then this is handled automatically. If not, the \verb'\citet*{}' and \verb'\citep*{}' commands can be used at the first citation to include all of the authors.
\subsection{The list of references}
\label{sec:ref_list}
It is possible to enter references manually using the usual \LaTeX\ commands, but we strongly encourage authors to use \bibtex\ instead.
\bibtex\ ensures that the reference list is updated automatically as references are added or removed from the paper, puts them in the correct format, saves on typing, and the same reference file can be used for many different papers -- saving time hunting down reference details.
An MNRAS \bibtex\ style file, \verb'mnras.bst', is distributed as part of this package.
The rest of this section will assume you are using \bibtex.
References are entered into a separate \verb'.bib' file in standard \bibtex\ formatting.
This can be done manually, or there are several software packages which make editing the \verb'.bib' file much easier.
We particularly recommend \textsc{JabRef}\footnote{\url{http://jabref.sourceforge.net/}}, which works on all major operating systems.
\bibtex\ entries can be obtained from the NASA Astrophysics Data System\footnote{\label{foot:ads}\url{http://adsabs.harvard.edu}} (ADS) by clicking on `Bibtex entry for this abstract' on any entry.
Simply copy this into your \verb'.bib' file or into the `BibTeX source' tab in \textsc{JabRef}.
Each entry in the \verb'.bib' file must specify a unique `key' to identify the paper, the format of which is up to the author.
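For example, an entry of the kind produced by ADS might look like the following (the key matches the \texttt{author2013} citations used in this guide; all field values are placeholders):
\begin{verbatim}
@ARTICLE{author2013,
   author = {{Author}, A.~N. and {Colleague}, B.},
    title = "{An example paper title}",
  journal = {\mnras},
     year = 2013,
   volume = 431,
    pages = {100-110},
}
\end{verbatim}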
Simply cite it in the usual way, as described in section~\ref{sec:cite}, using the specified key.
Compile the paper as usual, but add an extra step to run the \texttt{bibtex} command.
Consult the documentation for your compiler or latex distribution.
Correct formatting of the reference list will be handled by \bibtex\ in almost all cases, provided that the correct information was entered into the \verb'.bib' file.
Note that ADS entries are not always correct, particularly for older papers and conference proceedings, so may need to be edited.
If in doubt, or if you are producing the reference list manually, see the MNRAS instructions to authors$^{\ref{foot:itas}}$ for the current guidelines on how to format the list of references.
\section{Appendices and online material}
To start an appendix, simply place the \verb'\appendix' command before the next \verb'\section{}'.
This will automatically adjust the section headings, figures, tables, and equations to reflect the fact that they are part of an appendix.
It is only necessary to enter the \verb'\appendix' command once -- everything after that command is in an appendix.
Remember that appendices should be placed \textit{after} the list of references.
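The overall ordering can be sketched as follows (assuming a \bibtex\ bibliography held in \texttt{example.bib}):
\begin{verbatim}
\bibliographystyle{mnras}
\bibliography{example}

\appendix

\section{Further derivations}  % typeset as Appendix A
\end{verbatim}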
Unlike other astronomy class files, there are no special commands for online material.
If your paper has any online material, it should be placed in a separate file.
See our instructions to authors$^{\ref{foot:itas}}$ for guidance.
\section{Packages and custom commands}
\label{sec:packages}
\subsection{Additional packages}
Sometimes authors need to include additional \LaTeX\ packages, which provide extra features.
For example, the \verb'bm' package provides extra bold maths symbols, whilst the \verb'pdflscape' package adds support for landscape pages.
Packages can be included by adding the \verb'\usepackage{}' command to the preamble of the document (not the main body).
Please \emph{only include packages which are actually used in the paper}, and include a comment to explain what each one does.
This will assist the typesetters.
If you are using \texttt{mnras\_template.tex}, it includes a specific section for this purpose, near the start of the file with the header 'authors - place your own packages here'.
For example, to include \verb'pdflscape', use:
\begin{verbatim}
\usepackage{pdflscape} % Landscape pages
\end{verbatim}
Consult the documentation for that package for instructions on how to use the additional features.
\subsection{Custom commands}
Authors should avoid duplicating or redefining commands which are already available in \LaTeX\ or \verb'mnras.cls'.
However it may sometimes be necessary to introduce a custom command e.g. as a shortcut while writing the paper.
Please \emph{only include commands which are actually used in the paper}, and include a comment to explain what each one does.
This will assist the typesetters.
Use \verb'\newcommand', \emph{not} \verb'\def', as this will avoid accidentally overwriting existing commands.
Place custom commands in the preamble of the document (not the main body).
If you are using \texttt{mnras\_template.tex}, it includes a specific section for this purpose, near the start of the file with the header 'authors - place your own commands here'.
As an example, a shortcut for the unit \kms can be defined like this:
\begin{verbatim}
\newcommand{\kms}{\,km\,s$^{-1}$} % kilometres per second
\end{verbatim}
Velocities can then be written as e.g. \verb'2.3\kms' which produces 2.3\kms.
Similar shortcuts can be used for frequently quoted object designations.
\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
This guide replaces an earlier one originally prepared by Cambridge University Press (CUP) in 1994, and last updated in 2002 by Blackwell Publishing.
Some code segments are reproduced from, and some examples are based upon, that guide.
The authors were: A.~Woollatt, M.~Reed, R.~Mulvey, K.~Matthews, D.~Starling, Y.~Yu, A.~Richardson (all CUP), and Penny~Smith, N.~Thompson and Gregor~Hutton (all Blackwell), whose work is gratefully acknowledged.
The accompanying \bibtex\ style file was written by John Sleath, Tim Jenness and Norman Gray, without whom \bibtex\ support would not have been possible.
Some special symbols in tables~\ref{tab:anysymbols}--\ref{tab:mathssymbols} were taken from the Springer Verlag \textit{Astronomy \& Astrophysics} \LaTeX\ class, with their permission.
KTS thanks Nelson Beebe (University of Utah) for helpful advice regarding CTAN.
\section*{Data Availability}
The inclusion of a Data Availability Statement is a requirement for articles published in MNRAS. Data Availability Statements provide a standardised format for readers to understand the availability of data underlying the research results described in the article. The statement may refer to original data generated in the course of the study or to third-party data analysed in the article. The statement should describe and provide means of access, where possible, by linking to the data or providing the required accession numbers for the relevant databases or DOIs.
\appendix
\section{Journal abbreviations}
\label{sec:abbreviations}
Abbreviations for cited journals can be accessed using the commands listed in table~\ref{tab:journal_abbr}.
Although some of these may appear to be outdated or rarely cited, they have been selected to be compatible with the \bibtex\ output by the NASA Astrophysics Data System$^{\ref{foot:ads}}$, commands used by other astronomy journals, and with additional entries for journals with non-standard abbreviations in MNRAS.
For journals which are not on this list, see our instructions to authors$^{\ref{foot:itas}}$ for guidance on how to abbreviate titles.
\begin{table*}
\caption{Commands for abbreviated journal names, see appendix~\ref{sec:abbreviations}.}
\label{tab:journal_abbr}
\begin{tabular}{@{}l@{\:}l@{\:}l@{}} %
\hline
Command & Output & Journal name\\
\hline
\verb'\aap' or \verb'\astap' & \aap & Astronomy and Astrophysics$^a$\\
\verb'\aapr' & \aapr & The Astronomy and Astrophysics Review\\
\verb'\aaps' & \aaps & Astronomy and Astrophysics Supplement Series\\
\verb'\actaa' & \actaa & Acta Astronomica\\
\verb'\afz' & \afz & Astrofizika\\
\verb'\aj' & \aj & The Astronomical Journal\\
\verb'\ao' or \verb'\applopt' & \ao & Applied Optics\\
\verb'\aplett' & \aplett & Astrophysics Letters\\
\verb'\apj' & \apj & The Astrophysical Journal\\
\verb'\apjl' or \verb'\apjlett' & \apjl & The Astrophysical Journal Letters$^a$\\
\verb'\apjs' or \verb'\apjsupp' & \apjs & The Astrophysical Journal Supplement Series\\
\verb'\apss' & \apss & Astrophysics and Space Science\\
\verb'\araa' & \araa & Annual Review of Astronomy and Astrophysics\\
\verb'\arep' & \arep & Astronomy Reports$^b$\\
\verb'\aspc' & \aspc & Astronomical Society of the Pacific Conference Series\\
\verb'\azh' & \azh & Astronomicheskii Zhurnal$^c$\\
\verb'\baas' & \baas & Bulletin of the American Astronomical Society\\
\verb'\bac' & \bac & Bulletin of the Astronomical Institutes of Czechoslovakia\\
\verb'\bain' & \bain & Bull. Astron. Inst. Netherlands\\
\verb'\caa' & \caa & Chinese Astronomy and Astrophysics\\
\verb'\cjaa' & \cjaa & Chinese Journal of Astronomy and Astrophysics\\
\verb'\fcp' & \fcp & Fundamentals of Cosmic Physics\\
\verb'\gca' & \gca & Geochimica et Cosmochimica Acta\\
\verb'\grl' & \grl & Geophysical Research Letters\\
\verb'\iaucirc' & \iaucirc & International Astronomical Union Circulars\\
\verb'\icarus' & \icarus & Icarus\\
\verb'\japa' & \japa & Journal of Astrophysics and Astronomy\\
\verb'\jcap' & \jcap & Journal of Cosmology and Astroparticle Physics\\
\verb'\jcp' & \jcp & Journal of Chemical Physics\\
\verb'\jgr' & \jgr & Journal of Geophysical Research\\
\verb'\jqsrt' & \jqsrt & Journal of Quantitative Spectroscopy and Radiative Transfer\\
\verb'\jrasc' & \jrasc & Journal of the Royal Astronomical Society of Canada\\
\verb'\memras' & \memras & Memoirs of the Royal Astronomical Society\\
\verb'\memsai' & \memsai & Memorie della Societa Astronomica Italiana\\
\verb'\mnassa' & \mnassa & Monthly Notes of the Astronomical Society of Southern Africa\\
\verb'\mnras' & \mnras & Monthly Notices of the Royal Astronomical Society$^a$\\
\verb'\na' & \na & New Astronomy\\
\verb'\nar' & \nar & New Astronomy Review\\
\verb'\nat' & \nat & Nature\\
\verb'\nphysa' & \nphysa & Nuclear Physics A\\
\verb'\pra' & \pra & Physical Review A: Atomic, molecular, and optical physics\\
\verb'\prb' & \prb & Physical Review B: Condensed matter and materials physics\\
\verb'\prc' & \prc & Physical Review C: Nuclear physics\\
\verb'\prd' & \prd & Physical Review D: Particles, fields, gravitation, and cosmology\\
\verb'\pre' & \pre & Physical Review E: Statistical, nonlinear, and soft matter physics\\
\verb'\prl' & \prl & Physical Review Letters\\
\verb'\pasa' & \pasa & Publications of the Astronomical Society of Australia\\
\verb'\pasp' & \pasp & Publications of the Astronomical Society of the Pacific\\
\verb'\pasj' & \pasj & Publications of the Astronomical Society of Japan\\
\verb'\physrep' & \physrep & Physics Reports\\
\verb'\physscr' & \physscr & Physica Scripta\\
\verb'\planss' & \planss & Planetary and Space Science\\
\verb'\procspie' & \procspie & Proceedings of the Society of Photo-Optical Instrumentation Engineers\\
\verb'\rmxaa' & \rmxaa & Revista Mexicana de Astronomia y Astrofisica\\
\verb'\qjras' & \qjras & Quarterly Journal of the Royal Astronomical Society\\
\verb'\sci' & \sci & Science\\
\verb'\skytel' & \skytel & Sky and Telescope\\
\verb'\solphys' & \solphys & Solar Physics\\
\verb'\sovast' & \sovast & Soviet Astronomy$^b$\\
\verb'\ssr' & \ssr & Space Science Reviews\\
\verb'\zap' & \zap & Zeitschrift fuer Astrophysik\\
\hline
\multicolumn{3}{l}{$^a$ Letters are designated by an L at the start of the page number, not in the journal name}\\
\multicolumn{3}{l}{\footnotesize$^b$ In 1992 the English translation of this journal changed its name from Soviet Astronomy to Astronomy Reports}\\
\multicolumn{3}{l}{\footnotesize$^c$ Including the English translation Astronomy Letters}\\
\end{tabular}
\end{table*}
\clearpage %
\section{Advanced formatting examples}
\label{sec:advanced}
Sometimes formatting doesn't behave exactly as expected when used in titles or section headings, and must be modified to obtain the correct appearance.
Generally the publishers can fix these problems during the typesetting process after a paper is accepted, but authors may wish to adjust these themselves to minimise the possibility of errors and/or for the benefit of the refereeing process.
Below are some examples of output, followed by the \LaTeX\ code which produces them.
Most mathematics and text formatting works as expected, but some commands might not be the correct size, bold or italic.
If so they can be finessed by hand, as in the bold mathematics here:
\boxit{\huge\bf \textit{Herschel} observations of galaxies at $\bm{\delta > 60\degr}$}
\begin{verbatim}
\title{\textit{Herschel} observations of galaxies at
$\bm{\delta > 60\degr}$}
\end{verbatim}
Most fonts do not provide bold and italic versions of small capitals, so the \verb'\ion{}{}' command doesn't produce the expected output in headings.
The effect has to be `faked' using font size commands, remembering that the running head is a different style:
\boxit{\huge\bf Abundances in H\,{\Large \textbf{II}} regions}
\begin{verbatim}
\title
[Abundances in H\,{\normalsize \textit{II}} regions]
{Abundances in H\,{\Large \textbf{II}} regions}
\end{verbatim}
Complex mathematics can cause problems with links, so might require adding a less formatted short version of the heading:
\boxit{\bf 2\quad FINDING Mg\,{\sevensize II} ABSORBERS AT $\bm{z > 2}$}
\begin{verbatim}
\section
[Finding Mg II absorbers at z > 2]
{Finding M\lowercase{g}\,{\sevensize II} absorbers
at $\lowercase{\bm{z > 2}}$}
\end{verbatim}
Using square brackets in headings can cause additional linking problems, which are solved by wrapping them in \{\textellipsis\}:
\boxit{\bf 2.1\quad [C\,{\sevensize II}] 158$\bmath{\umu}$m emission}
\begin{verbatim}
\subsection
[{[C II] 158$\umu$m emission}]
{[C\,{\sevensize II}] 158$\bmath{\umu}$m
emission}
\end{verbatim}
Use \verb'\text{}' (not \verb'\rm') for non-variables in mathematics, which preserves the formatting of the surrounding text.
For the same reasons, use \verb'\textit{}' for italics (not \verb'\it').
\boxit{\bf 3.1\quad Measuring $\bm{T}_\text{eff}$ from \textit{Gaia} photometry}
\begin{verbatim}
\subsection{Measuring $\bm{T}_\text{eff}$ from
\textit{Gaia} photometry}
\end{verbatim}
\section{Additional commands for editors only}
The following commands are available for the use of editors and production staff only.
They should not be used (or modified in the template) by authors.
\begin{description}
\item \verb'\maketitle' inserts the title, authors and institution list in the correct formatting.
\item \verb'\nokeywords' tidies up the spacing if there are no keywords, but authors should always enter at least one.
\item \verb'\volume{}' sets the volume number (default is 000)
\item \verb'\pagerange{}' sets the page range. The standard template generates this automatically, starting from 1.
\item \verb'\bsp' adds the `This paper has been typeset\textellipsis' comment at the end of the paper.
The command name refers to Blackwell Science Publishing, who were the publishers at the time when MNRAS began accepting \LaTeX\ submissions in 1993.
\item \verb'\mniiiauth{}' used by the \bibtex\ style to handle MNRAS style for citing papers with three authors. It should not be used manually.
\item \verb'\eprint{}' used by the \bibtex\ style for citing arXiv eprints.
\item \verb'\doi{}' used by the \bibtex\ style for citing Digital Object Identifiers.
\end{description}
\bsp %
\label{lastpage}
Title:
Breaking correlation in the inflow parameters of interstellar neutral gas in direct-sampling observations |
Abstract: We analyze the reasons for the correlation between the temperature,
direction, and speed of the interstellar neutral gas inflow into the
heliosphere, obtained in analyzes of observations performed by the IBEX-Lo
instrument onboard Interstellar Boundary Explorer (IBEX). We point out that
this correlation is the combined result of the inability to measure the speed
of the atoms that enter the instrument and the restriction of the observations
to a short orbital arc around the Sun performed by the instrument during
observation. We demonstrate that without the capability to measure the speed,
but with the ability to perform observations along longer orbital arcs, or from
at least two distant locations on the orbit around the Sun, it is possible to
break the parameter correlation. This, however, requires a capability to adjust
the boresight of the instrument relative to the spacecraft rotation axis, such
as that of the planned IMAP-Lo camera onboard the Interstellar Mapping and
Acceleration Probe (IMAP).
| https://export.arxiv.org/pdf/2208.14101 |
\title{Breaking correlation in the inflow parameters of interstellar neutral gas in direct-sampling observations}
\correspondingauthor{M. Bzowski}
\email{bzowski@cbk.waw.pl}
\author[0000-0003-3957-2359]{M. Bzowski}
\affil{Space Research Centre PAS (CBK PAN), Bartycka 18a, 00-716 Warsaw, Poland}
\author[0000-0002-5204-9645]{M.A. Kubiak}
\affil{Space Research Centre PAS (CBK PAN), Bartycka 18a, 00-716 Warsaw, Poland}
\author[0000-0002-2745-6978]{E. M{\"o}bius}
\affil{University of New Hampshire, Durham, NH}
\author[0000-0002-3737-9283]{N.A. Schwadron}
\affil{University of New Hampshire, Durham, NH}
\keywords{ISM: ions -- ISM: atoms -- ISM: clouds -- ISM: magnetic fields -- local interstellar matter -- Sun: heliosphere -- ISM: kinematics and dynamics}
\section{Introduction}
\label{sec:intro}
\noindent
The Sun is traversing an interstellar cloud of a partly ionized, magnetized gas. The interaction between this gas and the solar wind is responsible for the creation of the heliosphere \citep{axford_etal:63a}. The ionized component of interstellar matter and the solar wind plasma are separated at the heliopause, and the neutral component penetrates freely into the heliosphere. The flow distribution of this component inside the heliosphere is determined by a combination of solar gravity, ionization losses due to interaction with solar wind particles and solar EUV radiation, and -- for hydrogen -- the solar resonant radiation pressure \citep{patterson_etal:63a}. The main species within the interstellar neutral (ISN) gas include hydrogen and helium, but some of the less abundant components, including oxygen, neon, and deuterium, have also been detected \citep{bochsler_etal:12a, schwadron_etal:16a, rodriguez_etal:13a}. Because of very low densities ($\ll 1$~\cc~ for H and even less for heavier species), the ISN gas throughout the heliosphere can be regarded as collisionless. Observations of ISN gas, its derivative particle components, and the solar light scattered off this gas bring information on the physical state of the matter in the local interstellar medium (LISM).
The diagnostic potential of ISN gas observations is exploited using three main measurement techniques: (1) observations of the so-called heliospheric backscatter glow, which appears due to the fluorescence of ISN atoms excited by solar emission lines, (2) observations of pickup ions, i.e., a population of ISN atoms ionized inside the heliosphere, subsequently forming a singly charged sub-population in the solar wind, and (3) direct sampling of ISN atoms. Historically first were the discovery observations of the heliospheric backscatter glow of ISN H \citep{morton_purcell:62a, bertaux_blamont:71, thomas_krassa:71}; a review of early observations of the heliospheric glow was presented by \citet{fahr:74}.
ISN H is strongly depleted inside the heliosphere due to ionization processes and radiation pressure, and its distribution function is strongly modified within the outer heliosheath, i.e., in the region of perturbed interstellar matter ahead of the heliopause \citep{baranov_etal:91}. Therefore, it is more convenient to use ISN He to infer the Sun's velocity relative to the LISM and the LISM temperature. Helium is abundant in the LISM \citep[the H/He ratio of the neutral components $\sim 12-13$,][]{slavin_frisch:07a, bzowski_etal:19a}, weakly ionized inside the heliosphere \citep[because its ionization rate at 1~au is $\sim 10^{-7}$~s$^{-1}$, compared with $> 5\times 10^{-7}$~s$^{-1}$ for H,][]{rucinski_etal:96a, bzowski_etal:13b, sokol_etal:20a}, and negligibly susceptible to solar radiation pressure. Therefore, at 1~au ISN He is more abundant than ISN H. Furthermore, it is relatively little modified ahead of the heliopause by charge-exchange and elastic collisions \citep{bzowski_etal:17a, swaczyna_etal:21a, fraternale_etal:21a}, which facilitates retrieving the physical state of the unperturbed interstellar medium.
ISN He has been used for diagnosing the LISM and its interaction with the heliosphere employing all three of the aforementioned measurement techniques. None of them, however, can provide full information on the physical state of the ISN gas -- at least one of the four relevant parameters becomes suppressed. As a result, even though careful analysis yields all parameters, strong correlations in their uncertainties appear. Consequently, the data are much more consistent with some combinations of the parameter values than with others, which is sometimes referred to as parameter degeneracy or parameter correlation. Visually, such a situation can be described as forming ``tubes'' of likely parameter values in the four-dimensional parameter space.
Direct-sampling measurements performed from 1~au by IBEX \citep{bzowski_etal:12a, mobius_etal:12a, bzowski_etal:14a, wood_etal:15a, bzowski_etal:15a, schwadron_etal:15a, swaczyna_etal:18a, swaczyna_etal:22b} provided the flow direction and speed, and the temperature of the ISN gas, with uncertainties strongly correlated with each other \citep{bzowski_etal:12a, mobius_etal:12a, bzowski_etal:15a, mobius_etal:15b, lee_etal:15a, swaczyna_etal:15a}. On the other hand, analyses of pickup ions and the helioglow provide consistent conclusions concerning the flow direction of ISN He \citep{vallerga_etal:04a, mobius_etal:15c, taut_etal:18a}, but a systematic difference exists between estimates of the inflow speed and gas temperature from direct sampling and from helioglow analysis, the latter returning a much larger temperature than the former.
Reducing the uncertainties below $\sim 1\degr${} in the flow direction and $\sim 1$~\kms~in the speed is needed to facilitate studies of ISN H and the secondary population of ISN He. This is because investigation of the secondary population in direct-sampling observations requires subtracting the contribution from the unperturbed ISN He population from the measured signal \citep{kubiak_etal:16a, galli_etal:19a, swaczyna_etal:18a}. Also, searching for signatures of hypothetical deviations of the primary ISN gas population from thermal equilibrium, such as those manifested by a kappa distribution function \citep{sokol_etal:15a} or by a temperature anisotropy \citep{wood_etal:19a}, or for the effects of elastic collisions within the outer heliosheath, recently suggested to modify the distribution function of ISN He at the heliopause \citep{swaczyna_etal:21a}, requires a very precise knowledge of the first moments of the distribution function of the ISN gas, i.e., its flow vector. Last but not least, a precise knowledge of the inflow direction of both the primary and the secondary populations of ISN He is needed to determine with high accuracy the orientation of the approximate symmetry plane of the heliosphere, defined by the flow vector of ISN He and the vector of the local interstellar magnetic field. As shown by \citet{zirnstein_etal:15a} and \citet{kubiak_etal:16a}, the direction of inflow of the secondary population lies in the plane defined by the aforementioned vectors, which implies that the direction of the so-called B-V plane can be determined by 1 au observations of the primary and the secondary populations of ISN He.
Here, we investigate the reasons for the degeneracies in the ISN He parameters inferred from observations and identify the observation capabilities required from direct-sampling instruments operating in the vicinity of the Earth's orbit to remove the parameter correlation. We start by presenting the reasons for the correlation of the direction, speed, and temperature of the ISN gas obtained from direct-sampling measurements performed from a spacecraft co-moving with the Earth, like Interstellar Boundary Explorer (IBEX). Using a simple model with the thermal spread of the ISN gas neglected, we approximately reproduce the correlation obtained from the data analysis and suggest a method to break the parameter correlation (Section \ref{sec:whyDegeneracy}). We then verify the findings using an advanced model of the gas distribution (Section \ref{sec:thermalSpread}), first illustrating the effects of the direction and speed correlation (Section \ref{sec:speedDirCorrel}) and of the speed and temperature correlation (Section \ref{sec:speedTempCorrel}) for the IBEX viewing conditions, before verifying the idea of breaking the correlation outlined in Section \ref{sec:whyDegeneracy}. We show that the essential prerequisite for breaking the correlation is the ability of the instrument to change its boresight direction, and we point out that a space mission within the reach of present measurement technology, like the planned Interstellar Mapping and Acceleration Probe (IMAP) mission \citep{mccomas_etal:18b}, should be able to remove the parameter correlation issue.
\section{Why do the inflow parameters of the ISN gas come out from observations correlated with each other?}
\label{sec:whyDegeneracy}
\noindent
The flow velocity of ISN atoms inside the heliosphere is governed by the Sun's gravity. The gravitational force bends the straight-line trajectories of the atoms into hyperbolae with the Sun in their foci. Had the ISN gas been infinitely cold \citep[i.e., monoenergetic, without thermal spread, as in the so-called cold model,][]{fahr:68, blum_fahr:70a, axford:72, johnson:72a, johnson:72b, fahr_lay:73a, holzer:77, thomas:78}, at very far distances from the Sun the atoms would move parallel to each other with identical speeds relative to the Sun. As their distance to the Sun decreases, gravity bends their trajectories and deflects them from the original unperturbed direction of motion by an amount that depends on the impact parameter of each trajectory.
A direct-sampling experiment, like IBEX or GAS/Ulysses, detects the atoms in situ while moving around the Sun with a velocity of a magnitude comparable to that of ISN atoms. These instruments have been able to determine the direction from which an atom is coming, but not its impact speed. Had the direction and the speed both been measured, with the orbital velocity of the instrument known, it would be possible to unambiguously determine the velocity vector of the incoming atom in the solar-inertial frame, and subsequently the velocity of motion of this atom far away ahead of the Sun. This velocity would be equivalent to the inflow velocity of ISN gas.
However, since the impact speed is not determined, a family of solutions exists that feature different impact speeds and identical directions of impact at a moving instrument. Such a situation is illustrated in Figure \ref{fig:degenOrbits}, where we show two families of orbits, reaching the instrument on the spacecraft traveling in Earth orbit. While in the instrument-inertial frame the directions are identical, the magnitudes of the speed differ, which results in different velocity vectors of the atoms far away from the Sun. Such families of orbits, related by the impact geometry at the spacecraft, exist for all locations along the instrument orbit around the Sun. As a result, an observer who performs the observations during a given day of the year (DOY) will find a degeneracy of the direction and speed of inflow of the ISN gas. These degenerate directions and speeds of the atoms at infinity form lines in the parameter space.
This effect can be simulated as follows. Adopting a velocity vector of the ISN gas at infinity and selecting a location of the instrument at the Earth's orbit, one can calculate the relative velocity vector of the atom. With this, one can vary the impact speed by a certain amount, transform the varied impact velocity vectors into the solar-inertial frame, and subsequently, by solving the hyperbolic Kepler equation, find the corresponding velocity vectors at infinity. Using this recipe, one can obtain projections of the correlation tubes on the longitude--speed plane for any selected DOY. We performed such calculations adopting, after \citet{bzowski_etal:15a}, an inflow speed of 25.764~\kms, a flow longitude of 75.75\degr, and a flow latitude of $-5.16\degr$. The magnitude of the speed variation at the instrument was adopted as $\pm 2$ and $\pm 4$~\kms{} in the spacecraft-inertial frame. This choice is based on insight from simulations performed using the Warsaw Test Particle Model for ISN He \citep[nWTPM, ][]{sokol_etal:15b}: the thermal speed parallel to the ISN flow in the spacecraft frame at 1 au, calculated as the second central moment of the atom speed distribution at the instrument in the spacecraft-inertial frame, is $\sim 2 \pm 0.5$~\kms, and this value is adopted as the spread of relative speeds along the impact direction at the instrument.
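The speed part of this recipe can be sketched numerically. The following minimal example (our own illustration with standard constants; it is not the nWTPM code) uses the vis-viva relation to map an atom's heliocentric velocity at 1 au to its speed at infinity, showing how varying the impact speed shifts the inferred inflow speed:

```python
import numpy as np

GM_SUN = 1.32712440018e20  # heliocentric gravitational parameter, m^3 s^-2
AU = 1.495978707e11        # astronomical unit, m

def v_infinity(r_vec, v_vec):
    """Speed at infinity along a hyperbolic heliocentric orbit (vis-viva)."""
    r = np.linalg.norm(r_vec)
    v = np.linalg.norm(v_vec)
    specific_energy = 0.5 * v**2 - GM_SUN / r
    if specific_energy <= 0.0:
        raise ValueError("orbit is bound; no asymptotic speed")
    return np.sqrt(2.0 * specific_energy)

# An atom observed at 1 au with ~50 km/s heliocentric impact speed;
# varying that speed by +/- 2 and +/- 4 km/s changes the inferred
# speed at infinity by several km/s.
r = np.array([AU, 0.0, 0.0])
for dv in (-4.0, -2.0, 0.0, 2.0, 4.0):
    v = np.array([0.0, -(50.0 + dv) * 1e3, 0.0])
    print(f"dv = {dv:+.0f} km/s -> v_inf = {v_infinity(r, v) / 1e3:.2f} km/s")
```

The full recipe additionally requires solving the hyperbolic Kepler problem for the direction of the asymptote; the sketch above captures only the speed mapping.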
Formation of a correlation tube is presented in Figure~\ref{fig:vlCorrelTubesIBEX} for the IBEX-Lo observation interval. The optical axis of the IBEX-Lo camera is perpendicular to the rotation axis of the IBEX spacecraft, which is maintained approximately pointing at the Sun \citep{fuselier_etal:09b, mobius_etal:09a}. Following \citet{bzowski_etal:15a} and \citet{swaczyna_etal:18a}, we select the time interval during the year from which IBEX-Lo observations of ISN He were taken for analysis by these authors, i.e., DOYs 22--57, and additionally the DOY of the maximum of the observed signal, i.e., DOY 37. We calculate the longitude--speed correlation lines for the start, maximum, and end days of this interval and plot the resulting correlation tube in Figure~\ref{fig:vlCorrelTubesIBEX} as the gray region. Superimposed, we plot the correlation line actually obtained from the data analysis by \citet{bzowski_etal:15a} (see their Figures 5 and 7). Clearly, the observation-based correlation line overlaps with the simulated tube, even though the simulated tube was obtained using very simplified assumptions.
This part of the analysis suggests that performing direct-sampling observations of ISN He along a relatively short arc of the Earth's orbit around the Sun (about 35\degr, as in the case of IBEX-Lo observations) will not permit full removal of the correlation of the ISN flow parameters. The parameter correlation has a ballistic origin: it arises whenever one can measure the direction of inflow of the gas at the instrument, but not its speed. A detailed analysis, like that proposed by \citet{swaczyna_etal:15a} and used by \citet{bzowski_etal:15a}, \citet{swaczyna_etal:18a}, and \citet{swaczyna_etal:22b}, enables determining the most likely solution, but with uncertainties given by a complex covariance matrix, with the largest uncertainties along the correlation lines. Therefore, increasing the counting statistics or reducing the background will not be sufficient to remove the parameter correlation. The correlation may be partly alleviated by varying the spin axis of the spacecraft around the ecliptic plane, as pointed out by \citet{schwadron_etal:22a}. However, performing observations over a long orbital arc, or during at least two time intervals along the orbit separated by several months, will produce several correlation tubes intersecting at large angles in the parameter space near the true inflow parameters, and thus will constrain the parameters more tightly.
This is illustrated in the left panel of Figure~\ref{fig:CorrelTubesIMAP}, which shows correlation tubes projected on the longitude--speed plane, simulated for DOYs uniformly distributed along the first half of the year. The correlation lines rotate around the point defined by the inflow parameters of ISN He assumed in the simulations. Direct-sampling observations performed during a few consecutive days during any portion of the year will result in parameters correlated along a correlation tube specific to the given observation time. Cartoons of such tubes are plotted as the elliptical gray shades. The lengths of the axes of these elliptical regions were adopted based on the approximation provided by \citet{swaczyna_etal:15a}. Inspection of the correlation lines and the associated parameter tube regions represented by the elliptical shades shows that combining observations from at least two time intervals of $\sim 30$ days separated by $\sim 60$ days is expected to constrain the parameters so that the correlation is mostly removed. The resulting uncertainty range of the parameters is represented by the intersections of the parameter tubes, illustrated by the dark regions. Clearly, if observations from long orbital arcs are available, then the uncertainty region becomes nearly isotropic in the parameter space, and its size is reduced by a large factor.
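The gain from intersecting tubes can be illustrated with a simple Gaussian approximation (our own sketch, not the method of \citet{swaczyna_etal:15a}): when two independent constraints are combined, their inverse covariance (information) matrices add, so two elongated uncertainty regions crossing at a large angle yield a combined uncertainty that is nearly isotropic and comparable to their short axes:

```python
import numpy as np

def combine(cov_a, cov_b):
    """Combine two Gaussian constraints: information (inverse covariance) adds."""
    return np.linalg.inv(np.linalg.inv(cov_a) + np.linalg.inv(cov_b))

def tube(sigma_long, sigma_short, angle):
    """Covariance of an elongated 'tube' constraint rotated by `angle` (rad)."""
    r = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    return r @ np.diag([sigma_long**2, sigma_short**2]) @ r.T

# Two tubes with a 10:1 axis ratio crossing at 60 degrees: both axes of the
# combined uncertainty ellipse shrink toward the short-axis scale.
c = combine(tube(1.0, 0.1, 0.0), tube(1.0, 0.1, np.pi / 3))
print(np.sqrt(np.linalg.eigvalsh(c)))
```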
This is also true for the correlation between the longitude and latitude of the inflow direction, as shown in the right panel of Figure \ref{fig:CorrelTubesIMAP}.
In the discussion above, the finite temperature of the ISN gas has been neglected. However, analysis of IBEX-Lo observations showed that the temperature is correlated with the speed and inflow direction, forming a parameter tube, and that the Mach number is practically constant along this tube. The reason for this is the thermal spread of atom velocities: the thermal spread of the ISN gas translates into the observed angular distribution of the flow, which differentiates the atom speeds at the instrument. Figure~\ref{fig:TvCorrelTubesIBEX} demonstrates this clearly. We compare the IBEX correlation line, drawn in blue, with a relation obtained in the following way: take the speeds from the longitude--speed correlation line presented in Figure~\ref{fig:vlCorrelTubesIBEX} and calculate the corresponding temperatures assuming a constant Mach number using the formula:
\begin{equation}
T = \frac{3 m_{\text{He}}}{5 k_{\text{B}}}\left(\frac{v}{M} \right)^2,
\label{eq:TfromMach}
\end{equation}
where $M$ is the Mach number, $v$ the speed of the ISN gas in the unperturbed interstellar medium, $k_\text{B}$ the Boltzmann constant, and $m_{\text{He}}$ the mass of a He atom. With this, the temperature--speed parameter tube is identical for all DOYs. Inspection of Figure~\ref{fig:TvCorrelTubesIBEX} shows that the observation and model correlation lines agree very well with each other. The flow longitude--temperature correlation exists because the temperature is correlated with the speed, and the speed with the longitude. Hence, the temperature becomes constrained once the longitude--speed correlation is removed.
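Equation~(\ref{eq:TfromMach}) is straightforward to evaluate numerically; a minimal sketch (our own, using CODATA constants) reproduces the temperature scale relevant here:

```python
import math

K_B = 1.380649e-23                # Boltzmann constant, J/K
M_HE = 4.002602 * 1.66053907e-27  # He atomic mass, kg

def temperature_from_mach(v_kms, mach):
    """Evaluate T = (3 m_He / 5 k_B) (v / M)^2 for bulk speed v in km/s."""
    v = v_kms * 1e3  # m/s
    return 3.0 * M_HE / (5.0 * K_B) * (v / mach) ** 2

# e.g., v = 25.764 km/s at M = 5.0754 gives T of about 7.44e3 K
print(temperature_from_mach(25.764, 5.0754))
```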
An interesting aspect of the parameter correlation appears for DOYs near 121 (the black line in Figure \ref{fig:CorrelTubesIMAP}). The correlation line is vertical in the figure, which implies that observations for this DOY will return a relatively large uncertainty in speed, but a small one in the inflow direction. This DOY therefore seems advantageous for a precise determination of the inflow direction.
The discussion presented so far is mostly based on simple-model arguments. However, the existence of intersecting parameter correlation tubes and the feasibility of using them to constrain the ISN flow vector and gas temperature have been verified in an analysis of ISN He observations performed by the Ulysses/GAS experiment \citep{bzowski_etal:14a}. The GAS experiment \citep{witte_etal:92a, witte_etal:93} was the first to directly sample ISN He. The data were collected along long arcs of the Ulysses orbit around the Sun, at the perihelion side of the orbit between the north and south solar polar directions \citep[see Figure 1 in][]{bzowski_etal:14a}. The spacecraft was three-axis stabilized, and the beam of ISN He was maintained within the field of view by frequent adjustments of the direction of the instrument boresight. Ulysses performed three revolutions around the Sun, and the usable data were split into several groups \citep[see Figure 14 in][]{bzowski_etal:14a}. Determination of the ISN He flow parameters performed by these authors using data from these individual arcs resulted in the discovery of intersecting parameter tubes in the parameter space (see Figures 11 and 12 in that paper). These intersections were used to constrain the flow parameters. Simultaneously, these authors pointed out that the correlation tubes obtained from similar arcs in different Ulysses orbits are very similar to each other (see Figures 8 and 9 in the aforementioned paper). This is, of course, expected based on the insight presented earlier in our paper. Similarly, \citet{wood_etal:15a} obtained well-constrained inflow parameters based on Ulysses observations collected on a long orbital arc, even though they used a parameter determination method different from that used by \citet{bzowski_etal:14a}.
\section{Effects of the thermal spread of atom velocities}
\label{sec:thermalSpread}
\noindent
In this section, we acknowledge that the ISN gas has a thermal spread. Using a state-of-the-art hot model of ISN He (WTPM), we simulate the ISN He signal for the viewing conditions of IBEX-Lo and IMAP-Lo and demonstrate that the signals obtained for parameter sets that are highly correlated according to the cold model are indeed almost indistinguishable in the relevant portions of the Earth's orbit; in other words, these parameters are effectively degenerate. However, the signals calculated for the same parameter sets but for different portions of the Earth's orbit (i.e., for different instrument locations) become clearly different. Thus, the degeneracy obtained for the IBEX viewing conditions can be removed by performing observations in a different portion of the Earth's orbit. This, however, is only feasible with the capability to shift the boresight of the instrument, which will be available on IMAP-Lo. We also demonstrate that the signals expected for parameter sets that are uncorrelated for the IBEX viewing geometry become hardly distinguishable for the IMAP-Lo viewing geometry at the locations in the Earth's orbit predicted by the theory presented in the previous section, which illustrates that the reasons for the parameter degeneracy are well understood.
Subsequently, we demonstrate that a similar behavior of the simulated signal, and thus the parameter correlation, is expected not only for the boresight orientation optimized for maximum statistics, but also for alternative viewing conditions, provided that a portion of the thermally-broadened ISN beam is in the instrument field of view. In fact, observing the flanks of the distribution instead of the peak may facilitate removing the inflow parameter correlation at the cost of a certain reduction in counting statistics.
We start from the removal of the direction--speed correlation, followed by the temperature--speed or temperature--direction correlation.
\subsection{Simulations and parameter selection}
\label{sec:paramSel}
\noindent
To investigate the parameter correlation removal, we performed a series of numerical simulations of the signal due to ISN He expected to be observed by an IMAP-Lo-like instrument orbiting the Sun at 1 au in the ecliptic plane. The virtual instrument is assumed to be mounted on a virtual spin-stabilized spacecraft with the spin axis directed precisely at the Sun for each DOY, and the elongation angle of the boresight of this instrument from the spin axis can be freely adjusted. The change of the ecliptic longitude of the spacecraft during a day was neglected.
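The viewing geometry of such a virtual instrument can be sketched as follows (our own illustration with assumed frame conventions, not the actual IMAP-Lo pointing code): the boresight lies at the elongation angle from the sunward spin axis and sweeps a cone around that axis as the spacecraft spins:

```python
import numpy as np

def boresight(spin_axis, elongation_deg, spin_angle_deg):
    """Unit boresight vector at `elongation_deg` from `spin_axis`,
    swept around it by `spin_angle_deg`. Frame conventions here are
    illustrative only."""
    z = spin_axis / np.linalg.norm(spin_axis)
    # build any unit vector perpendicular to the spin axis
    x = np.cross(z, [0.0, 0.0, 1.0])
    if np.linalg.norm(x) < 1e-12:           # spin axis along +z: pick another
        x = np.cross(z, [0.0, 1.0, 0.0])
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    eps = np.radians(elongation_deg)
    phi = np.radians(spin_angle_deg)
    return (np.cos(eps) * z
            + np.sin(eps) * (np.cos(phi) * x + np.sin(phi) * y))

# elongation 90 deg -> boresight perpendicular to the sunward spin axis
b = boresight(np.array([1.0, 0.0, 0.0]), 90.0, 0.0)
```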
\begin{deluxetable*}{llllll}[!h]
\tablecaption{\label{tab:degenParam} Flow parameters of ISN He used in the simulations}
\tablehead{case & longitude [\degr] & latitude [\degr] & speed [\kms] & temperature [K] & Mach number}
\startdata
nominal set & 75.75 & $-5.160$ & 25.784 & 7443 & 5.0754 \\
degen. 1 & 74.00 & $-5.287$ & 27.073 & 8380 & 5.0262 \\
degen. 2 & 78.00 & $-4.995$ & 24.120 & 6322 & 5.1556 \\
\hline
const. Mach 1 & 75.75 & $-5.160$ & 23.763 & 6322 & 5.0754 \\
const. Mach 2 & 75.75 & $-5.160$ & 27.359 & 8380 & 5.0754
\enddata
\end{deluxetable*}
The simulations were performed assuming inflow parameters of the ISN gas identical to those obtained from the analysis of IBEX observations, and two alternative sets of highly correlated parameters, listed in the first three rows of Table \ref{tab:degenParam}. The differences between the optimum parameters, listed as ``the nominal set'', and the two alternative sets, listed as ``degen. 1'' and ``degen. 2'', are chosen so that the parameters lie on the correlation tube in the parameter space. This tube is shown with the blue line in Figure \ref{fig:vlCorrelTubesIBEX}. The speed--longitude combinations of the latter two sets are drawn as purple dots.
Another series of simulations was performed for two sets of inflow parameters that are located outside the parameter tube. These parameter sets are correlated with each other and with the nominal parameter set (line 1 in Table \ref{tab:degenParam}) by having an identical Mach number. These two parameter sets are listed in the last two rows of Table \ref{tab:degenParam} as ``const. Mach 1'' and ``const. Mach 2''. They are plotted with the blue line in Figure~\ref{fig:TvCorrelTubesIBEX} and as thick black dots in Figure \ref{fig:vlCorrelTubesIBEX}. In this case, the inflow direction (longitude and latitude) was identical between the three parameter sets, and the speeds and temperatures were varied so that they reproduce the Mach number of the nominal parameter set.
All simulations were carried out using a time- and heliolatitude-dependent model of the ionization factors, adopted from \citet{sokol_etal:19a}. For the virtual observation year, we chose 2015 to simulate the conditions of high solar activity, resulting in relatively high ionization losses of ISN He inside the heliosphere. This choice was made for correspondence with the expected level of solar activity after launch of the IMAP mission in 2025.
The simulations were performed using the numerical version of the Warsaw Test Particle Model \citep[nWTPM; ][]{sokol_etal:15b}. They modeled the flux of ISN He filtered by an IMAP-like collimator as a function of the spacecraft spin angle. More details can be found in \citet{sokol_etal:19c}. The results shown further in the paper are organized into series for several selected DOYs. The flux shown is normalized by the maximum flux for the presented simulation series for a given parameter set and the selection of DOYs and elongation angles. Separate normalization constants are calculated for series of simulations with different flow parameters. This normalization is performed to eliminate differences in the absolute flux between the cases. It is needed because in reality, the absolute sensitivity of an instrument is known with limited accuracy, so determination of the ISN gas parameters cannot rely on the absolute calibration of the instrument. In fact, we only compare the shapes of the simulated flux as a function of the spacecraft spin angle.
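The shape-only comparison can be sketched as follows (a hypothetical helper of our own, not part of nWTPM): each parameter set's simulated flux series is scaled by its own maximum before the curves are compared.

```python
import numpy as np

def normalize_series(flux_by_case):
    """Scale each parameter set's simulated flux by that set's own maximum,
    so only the shape versus spin angle is compared and the absolute
    instrument calibration drops out."""
    return {case: flux / np.max(flux) for case, flux in flux_by_case.items()}
```

After this step, every curve peaks at unity, and only relative differences between spin-angle profiles remain.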
For the presentation of the results, we selected a relatively high cutoff of 1\% of the maximum flux, to approximately account for the expected presence of the secondary population of ISN He, the Warm Breeze, as pointed out by \citet{sokol_etal:19c}. We also used an energy sensitivity limit, adopted at 20 eV for all discussed species, following the insight from \citet{galli_etal:15a,sokol_etal:15a}.
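For reference, the 20 eV limit corresponds to a He impact speed of roughly 31~\kms; a minimal sketch of the conversion (our own, assuming the nonrelativistic kinetic energy $E = m v^2 / 2$):

```python
import math

EV = 1.602176634e-19          # J per eV
M_HE = 6.6464731e-27          # He atomic mass, kg

def speed_from_energy(e_ev, mass=M_HE):
    """Impact speed (m/s) corresponding to a kinetic-energy threshold in eV."""
    return math.sqrt(2.0 * e_ev * EV / mass)

# 20 eV threshold for He -> about 31 km/s in the spacecraft frame
print(speed_from_energy(20.0) / 1e3)
```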
\subsection{Removing the direction--speed correlation}
\label{sec:speedDirCorrel}
\subsubsection{IBEX viewing conditions: parameter degeneracy or correlation?}
\label{sec:ibexViewing}
\noindent
The Interstellar Boundary Explorer (IBEX) was the first spacecraft to sample ISN gas directly at the Earth's orbit \citep{mccomas_etal:09a}. The spin-stabilized spacecraft is in an elongated orbit around the Earth \citep{mccomas_etal:11a} with a period of several days. The IBEX-Lo instrument, used to observe the ISN gas \citep{fuselier_etal:09b, mobius_etal:09a}, has a boresight with a fixed direction, inclined at 90\degr{} to the spacecraft spin axis. The spin axis is shifted once or twice per IBEX orbit (i.e., every several days) to approximately follow the Sun. Thus, between the spin-axis shifts, the beam of the ISN gas viewed by the instrument slowly moves across the observed strip in the sky. Because of the fixed viewing direction, the beam of ISN gas can be observed only during several weeks each year. The ISN observation season is further limited by strong contributions to the signal from the secondary population of ISN He at the beginning of each year \citep{bzowski_etal:12a, kubiak_etal:14a, kubiak_etal:16a} and from ISN H later during the year \citep{saul_etal:12a, schwadron_etal:13a,katushkina_etal:15b, galli_etal:19a, rahmanifard_etal:19a}. Effectively, the observation season of the primary population of ISN He spans approximately DOYs 22--57 each year. The corresponding arc of the Earth's orbit is thus $\sim 35\degr$, short in comparison with the quarter of the orbit best suited for breaking the degeneracy (cf.\ the correlation lines for DOYs 37 and 127 in Figure \ref{fig:CorrelTubesIMAP}).
Several analyses of IBEX-Lo observations returned the inflow velocity vector and the temperature of ISN He with uncertainties forming a ``tube'' in the parameter space \citep{bzowski_etal:12a, mobius_etal:12a, bzowski_etal:15a, schwadron_etal:15a, swaczyna_etal:18a, swaczyna_etal:22b}. The uncertainties of the parameters are much larger along these correlation lines in the four-dimensional parameter space, while the parameters are much more tightly constrained across them. This has been referred to as parameter correlation or parameter degeneracy \citep{schwadron_etal:22a}.
The discussion provided in Section \ref{sec:whyDegeneracy} suggests that the parameters of the ISN gas determined from an instantaneous observation will be degenerate if the speed of the incoming atoms cannot be measured. If the measurements are performed during a number of consecutive days, then the parameter degeneracy line will slowly rotate in the parameter space. The simulations discussed in Section \ref{sec:whyDegeneracy} were performed for individual atoms and assumed an idealized situation in which the instrument is able to look precisely into the beam of the incoming atoms. However, the actual observation conditions on IBEX differ from these ideal ones, and the sampling was performed along a finite arc of the Earth's orbit. As a result, the parameters obtained from the analysis are strongly correlated with each other, but not degenerate, since it was possible to obtain a unique set of parameters by means of a chi-squared analysis \citep[e.g.,][]{swaczyna_etal:15a}.
To show that the parameter degeneracy or correlation is indeed supported by models of the ISN He flux observed at 1 au, we performed simulations of the IBEX signal for two alternative observation scenarios: (1) an idealized situation, in which the instrument has a variable elongation of its boresight from the spacecraft spin axis, with the boresight always directed so that the peak of the ISN beam falls into the instrument field of view (``follow the peak''), and (2) a more realistic situation for IBEX, in which the boresight is always perpendicular to the spin axis (``stepwise adjustment''). In both cases, the spin axis was assumed to point precisely towards the Sun for each DOY, unlike in reality on IBEX. The simulations were performed for five selected DOYs during the yearly ISN observation season, covering the time interval chosen by \citet{bzowski_etal:15a}, and subsequently by \citet{swaczyna_etal:18a} and \citet{swaczyna_etal:22b}, for analysis of the actual observations. The selection of spin-angle boundaries also follows the choices made by these authors. The evolution of the elongation of the ISN gas beam from the Sun during the year for scenario (1) was adopted from \citet{sokol_etal:19c}.
The results are presented in Figure \ref{fig:ibexGeom}. The figure compares the flux for the nominal set of inflow parameters (the black lines) and for the parameters listed in the second and third rows in Table \ref{tab:degenParam}.
The signal for scenario (1) is represented by the upper groups of lines, which were scaled by a factor of 2 for a better visual separation from the other set of lines. For viewing scenario (1), the three simulated cases are indeed degenerate in the sense that the simulations return practically identical results for the three different parameter sets. This illustrates that simulations performed taking the thermal spread of the ISN gas into account confirm the presence of a high parameter correlation obtained from observations performed along a short arc around the Sun, when the instrument boresight is adjusted to exactly follow the peak of the ISN gas beam. Note, however, that the fluxes for the three parameter sets are not perfectly equal. We verified that the differences between the three cases are within $\sim 10$\% and are the largest for the earliest and latest days in the presented sample.
In reality, the IBEX-Lo boresight is fixed at 90\degr. The simulations performed for the same DOYs and the same parameter sets are presented as the lower sets of lines in Figure \ref{fig:ibexGeom} (scenario 2). Clearly, in this case too, the three strongly correlated parameter sets return very similar, albeit not identical, fluxes. The details of the differences between the three simulated cases differ from those in scenario (1), but their magnitudes are generally similar. This illustrates that the thermal character of the gas made it possible to fit a unique solution to IBEX-Lo observations, even though the parameters of the ISN gas flow were obtained strongly correlated.
A conclusion from this portion of our study is that indeed, where the simple reasoning presented in Section \ref{sec:whyDegeneracy} suggests a parameter degeneracy, the models of the signal obtained from our realistic simulations predict almost identical behaviors of the simulated flux for the correlated parameter sets. Hence, the parameter degeneracy for these short arcs is confirmed by simulation.
\subsubsection{Breaking the parameter correlation}
\label{sec:correlBreak}
\noindent
In this section, we verify whether performing observations along longer orbital arcs of the Earth facilitates breaking the parameter correlation, and which scenarios for the boresight angle adjustments are the most advantageous to accomplish this goal. Clearly, the differences between the correlated cases should be as large as possible, but on the other hand, the absolute magnitude of the expected signals must be large enough to provide sufficient counting statistics.
To that end, we performed simulations of the signal along the Earth orbit throughout the entire year every four days for the three correlated parameter sets, assuming either that the elongation angle is adjusted daily so that the peak of the flow of the ISN gas is within the field of view, or that this angle is adjusted stepwise, with prolonged intervals of a constant setting. The most salient results are presented in Figure \ref{fig:followVsStep}, which has a format similar to that adopted for Figure \ref{fig:ibexGeom}.
It is clearly visible that an opportunity to break the parameter correlation is obtained by extending observations along the Earth's orbital arc. Within scenario (1) (``follow the peak''), very similar signal shapes are visible for the times of the IBEX observations, but for other DOYs the differences are larger. For observations at the turn of the years (represented by the panel for DOY 365) and several months later, about DOYs 157 and 173, the signals for the selected parameter sets are very different from each other. This means that the parameters that came out correlated during the IBEX observation times will come out uncorrelated from observations performed during different epochs, exactly as suggested by the analysis presented in Section \ref{sec:whyDegeneracy}.
The simulations presented in Figure \ref{fig:followVsStep} were made for parameter sets that lie on the correlation lines characteristic of early DOYs during the year (see Figures \ref{fig:vlCorrelTubesIBEX} and \ref{fig:CorrelTubesIMAP}). However, a qualitatively similar behavior is expected for all other (longitude--speed) pairs. For such pairs, the expected signal will be almost identical during some portion of the year, but very different during another, facilitating breaking the parameter correlation provided that observations are performed along a sufficiently long arc of the orbit around the Sun.
It is worth pointing out, in the context of the planned observations by IMAP-Lo, that the effect of breaking the parameter correlation can also be obtained using alternative scenarios for modification of the elongation angle. This is illustrated by the example presented as the second (lower) set of lines in the panels of Figure \ref{fig:followVsStep}. With the elongations listed in the figure, the peak of the ISN beam is not in the field of view, and the instrument is looking at the flanks of the beam except for one of the orbits. The elongation angle is maintained fixed during prolonged intervals and changed from time to time. This corresponds to the ``stepwise adjustment'' scenario (2). Clearly, in this scenario, the signal is weaker than in the former case, but the differences between the signals obtained for the three correlated cases are much larger. However, when planning the observations, one should take the expected statistics into account. Weaker signals imply a larger Poisson noise in the data, and large differences, such as those seen in the lower set of lines in the lower-right panel, might be partially obscured by the statistical scatter. Nevertheless, the potential to break the parameter correlation remains.
When planning the observations, one needs to take several aspects into account. One of them is the presence of other components of the ISN gas in the considered field of view. In the case of ISN He, it is the presence of the secondary population of this species, i.e., the Warm Breeze. This aspect was illustrated in Figure 8 in \citet{sokol_etal:19c}. In the examples presented in this paper, we verified by simulations (not shown) that the expected signal from the Warm Breeze is low relative to that from the primary population of ISN He (around the lower bound of the plots).
\subsubsection{Using the indirect beam of the ISN gas for breaking the parameter correlation}
\label{sec:ndirBeam}
\noindent
An interesting option for breaking the parameter correlation may be using the indirect beams of the ISN gas. The cold model of the ISN gas inside the heliosphere predicts that at any point in space, two co-planar orbits of ISN atoms intersect, with different impact parameters and with angular momentum vectors oriented oppositely relative to the orbital plane. One of them is referred to as the direct orbit, and the other as the indirect orbit. An atom on the latter orbit has typically already passed its perihelion and is receding from the Sun, with a positive radial velocity. An example of indirect orbits is presented in Figure \ref{fig:degenOrbits} (DOY 305). Since in reality the ISN gas has a thermal spread, we speak of the direct and indirect beams of ISN atoms.
For observations of ISN He performed close to the Earth's orbit, the indirect beam can be observed provided that the instrument boresight can be inclined to the Sun-centered spin axis of the spacecraft by at least 60\degr--75\degr. In principle, one could determine the inflow parameters of the ISN gas solely from observations of the indirect beam. However, the parameter correlation tubes are narrower for the indirect beam than for the direct beam observed by IBEX because of the relatively short observation arc. This is shown in the upper-left panel of Figure \ref{fig:NdirCorrelTubes}, where we repeat the IBEX correlation tube presented in Figure \ref{fig:vlCorrelTubesIBEX} and plot the correlation lines for the two extreme DOYs when the whole indirect beam is visible to an instrument with a boresight elongation angle of 60\degr. Clearly, the correlation lines for the $\sim 15$ day observation window are almost identical. However, they strongly differ from the correlation lines for the easily observable direct beam only two months later. The intersection of the two correlation tubes will constrain the parameter magnitudes quite tightly.
The indirect beam was observed in the past by GAS/Ulysses \citep{witte:04}, but because of a high physical background due to the Milky Way, helioglow, and stars it could not be used for precise determination of the inflow parameters. Here we argue that observing the indirect beam on a spacecraft like IMAP is feasible.
First, the expected magnitude of the flux of the indirect beam is between 12\% and 20\% of the maximum flux for the IBEX viewing conditions even for the solar maximum epoch, as shown in the upper-right panel of Figure \ref{fig:NdirCorrelTubes}. The plot shows the expected flux normalized to the maximum of the respective fluxes for the IBEX viewing conditions. Such a flux magnitude is easily within the capability of an IBEX-like ISN instrument. Second, even though the correlation tubes obtained from the cold model are very close to each other, the actual beams with their thermal spread will not be challenging to differentiate, as shown in the lower row of panels of Figure \ref{fig:NdirCorrelTubes}. In these panels, we show normalized beams for the three correlated cases defined in the first three rows of Table \ref{tab:degenParam}. The shapes of the beams differ throughout the entire indirect beam window. We have verified that even though the Warm Breeze (i.e., the secondary population of ISN He) will be visible during the indicated time window, the WB signal is expected at a much lower level than that of the ISN beam, and consequently should not preclude using the indirect beam for the parameter correlation removal. The presented indirect beams for the tightly correlated parameters strongly differ from each other and thus are expected to be distinguishable in the actual observations despite the expected poorer statistics.
\subsection{Speed--temperature correlation}
\label{sec:speedTempCorrel}
\subsubsection{IBEX viewing conditions}
\label{sec:IBEXVTCorrel}
\noindent
Another correlation found in the IBEX data analysis and pointed out in Section \ref{sec:whyDegeneracy} is the correlation of the inflow speed and the gas temperature through the Mach number (Figure \ref{fig:TvCorrelTubesIBEX}). We simulated the expected IBEX-Lo signal for parameters correlated by a constant Mach number, as listed in the last two rows of Table \ref{tab:degenParam}. Here, the speed and the temperature are varied, but the inflow direction remains unchanged. These parameters lie outside of the IBEX direction--speed correlation tube.
For the IBEX viewing conditions, i.e., with the elongation constant in time, the simulated signals are almost indistinguishable during a very short interval of time, but clearly different during the remaining portions of the IBEX observation season. This is illustrated by the lower set of lines in Figure \ref{fig:ibexGeomTh}. The differences in the signals for the three parameter sets are expected because the inflow parameters are outside of the experimentally-found parameter correlation tube. This suggests that it was possible to fit a unique set of the speed and temperature of the ISN gas based on IBEX observations owing to the fixed direction of the IBEX-Lo boresight.
However, the speed--temperature correlation is real and does have consequences. Simulations performed for the elongation angle varying to follow the peak of the ISN beam are hardly distinguishable, as demonstrated by the upper set of lines in Figure \ref{fig:ibexGeomTh} (which were scaled up by a factor of 2 to visually separate them from the other line set). In the following section, we will investigate how much this Mach number-related correlation persists in observations carried out along extended arcs around the Sun.
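The constant-Mach-number relation between speed and temperature can be made concrete with a short numerical sketch. The values below are illustrative only, not the fitted ISN parameters, and the Mach number is taken as $M = v/\sqrt{2k_\mathrm{B}T/m}$, which may differ from the exact convention used here:

```python
import math

K_B = 1.380649e-23            # Boltzmann constant [J/K]
M_HE = 4.0026 * 1.6605e-27    # helium atom mass [kg]

def mach_number(v_kms, T_K):
    """Bulk-flow Mach number M = v / v_th with v_th = sqrt(2 k_B T / m).
    Illustrative definition; the paper's exact convention may differ."""
    v_th = math.sqrt(2.0 * K_B * T_K / M_HE)
    return v_kms * 1.0e3 / v_th

# Two hypothetical (speed, temperature) pairs lying on the same
# constant-Mach-number tube: at fixed M, the temperature must scale as v^2,
# so the pairs produce beams of identical width and cannot be separated
# from a single orbital location.
v1, T1 = 25.4, 7500.0          # example values, not fitted parameters
v2 = 26.4
T2 = T1 * (v2 / v1) ** 2       # correlated temperature
assert abs(mach_number(v1, T1) - mach_number(v2, T2)) < 1e-12
```

The assertion holds by construction: rescaling $T$ as $v^2$ leaves $M$, and hence the beam width, unchanged, which is the degeneracy discussed above.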
\subsubsection{Removing the temperature--speed correlation}
\label{sec:IMAPVTCorrel}
\noindent
The Mach-number correlation was studied by simulations assuming the same scenarios for varying the boresight tilt angle as those used in Section \ref{sec:speedDirCorrel}. They are presented in Figure \ref{fig:followVsStepTh}.
Clearly, removing the temperature--speed correlation within the ``follow the peak'' scenario (1) may be challenging, albeit feasible. The signals, represented by the upper set of lines (scaled by a factor of 2), are almost identical, and differences occur mostly away from the peaks. To remove the correlation within this scenario, it is recommended to use observations from time intervals distant by about three months (see the lines for DOY 157 and DOY 73). Note also that for a time around DOY 120, the direction--speed correlation coincides with the temperature--speed correlation (for the parameter sets listed in Table \ref{tab:degenParam}).
However, removing the temperature--speed correlation is expected to be much easier when using the stepwise adjustment scenario, as illustrated by the lower set of lines in Figure \ref{fig:followVsStepTh}. Even though for the elongation angles used in this figure, the differences between the correlated cases seem smaller than those for the direction--speed correlated cases shown in Figure \ref{fig:followVsStep}, they can be increased by selecting a different elongation angle, as we have verified (simulations not shown). Nevertheless, the temperature--speed correlation is removed when one of these two parameters is determined independently, which can be done relatively easily for the speed, as discussed in Section \ref{sec:correlBreak}.
\subsection{Removing correlation of the inflow parameters for heavy species}
\label{sec:NeOxCorrel}
\noindent
Direct-sampling observations by IBEX-Lo revealed the presence of ISN O and Ne at the Earth's orbit \citep{mobius_etal:09b,bochsler_etal:12a}. While it is expected that Ne and O co-move with the ISN gas and that within the unperturbed LISM, the flow parameters of Ne and O are identical to those of ISN He, it cannot be ruled out that interactions within the outer heliosheath modify the populations of individual species in different ways \citep{schwadron_etal:16a,baliukin_etal:17a}. Therefore, it would be interesting to determine the flow parameters of the heavy interstellar species independently.
Since Ne and O are practically insensitive to solar radiation pressure, they follow purely Keplerian trajectories inside the heliosphere, and the peaks of their beams at the Earth's orbit are expected in the same locations in the spacecraft-inertial reference frame as those of ISN He. Since the atomic masses of these species are much larger than that of He, the beam widths of Ne and O are much narrower than the beams of ISN He (compare Figures \ref{fig:followVsStep} and \ref{fig:followPeakOx}). And because the absolute densities of these species are estimated at $5.5 \times 10^{-6}$ \cc{} and $4.5\times 10^{-5}$ \cc, respectively \citep{frisch_slavin:03}, in contrast to that of ISN He at $1.5\times 10^{-3}$ \cc \citep{witte:04}, their fluxes at 1 au are expected to be lower by roughly three orders of magnitude than those of ISN He \citep{sokol_etal:19c}. This makes them a challenging target for direct-sampling observations.
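The quoted LISM densities alone already account for much of the flux deficit; a minimal arithmetic check (the remaining factor comes from the stronger ionization losses of Ne and O inside the heliosphere):

```python
# LISM number densities quoted in the text [cm^-3]
n_he, n_ne, n_o = 1.5e-3, 5.5e-6, 4.5e-5

# Density ratios give factors of a few hundred (He/Ne) and a few tens
# (He/O); combined with heavier ionization losses inside the heliosphere,
# the flux deficit at 1 au reaches roughly three orders of magnitude.
print(f"He/Ne = {n_he / n_ne:.0f}")   # -> He/Ne = 273
print(f"He/O  = {n_he / n_o:.0f}")    # -> He/O  = 33
```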
Despite these challenging conditions, Ne and O were detected in the IBEX-Lo data approximately at the expected signal level \citep{park_etal:14a,park_etal:15a,park_etal:19a}. It was found that the joint flux of Ne and O was lower by three orders of magnitude than that of ISN He, so the counting statistics actually obtained were much lower than those for this latter species. This, and technical issues in IBEX-Lo after 2011, prevented further observations of these species, but the data that had been collected enabled \citet{schwadron_etal:16a} to determine the flow parameters of ISN O. The parameters obtained were similar to those found for ISN He, but with a much larger uncertainty. This latter study also reported a strong correlation between the obtained flow parameters.
This is not surprising given the insight provided so far in this paper. The mechanism responsible for inflow parameter correlation is expected to be identical for all species. Therefore, it works also for Ne and O, even though it is challenging to separate O and Ne in the data because of the mechanism of neutral atom detection \citep[sputtering of O$^-$ ions by both O and Ne atoms,][]{wurz_etal:06a}.
We performed simulations for ISN O assuming the three alternative highly-correlated parameter sets listed in the first three rows in Table \ref{tab:degenParam}, assuming two alternative scenarios of adjusting the virtual instrument boresight: following the peak of the ISN beam, identical to that used in Figure \ref{fig:followVsStep}, and a stepwise adjustment of the boresight. Results for the first scenario are presented in Figure \ref{fig:followPeakOx}. As in the preceding figures, we present the fluxes normalized to the maximum values of the presented time series, but to acknowledge the low absolute values of the expected flux, we introduced a low threshold of 10 atoms s$^{-1}$ cm$^{-2}$, approximately equal to the IBEX-Lo detection threshold \citep{park_etal:19a}.
The results illustrate that using the ``follow the peak'' scenario for boresight adjustment, one is able to collect measurements of ISN Ne and O with a low statistical scatter. Despite the small width of the beams, it will be possible to break the parameter correlation (note the different agreement levels for the simulations made with the correlated parameters for different DOYs, distributed along the first half of the year). The first of the presented panels approximately corresponds to the IBEX viewing conditions; the remaining four panels show the flux evolution and the increasing differences.
Since, as discussed in Section \ref{sec:speedDirCorrel}, it is advantageous to use a stepwise boresight adjustment scenario for ISN He, we checked if adoption of such a scenario will be advantageous also in the case of ISN Ne and O. To that end, we performed simulations presented in Figure \ref{fig:stepOx}. The flux threshold is identical to that used in Figure \ref{fig:followPeakOx}.
The simulations clearly show that removal of the parameter correlation for ISN Ne and O will be facilitated by adoption of the stepwise boresight adjustment scheme, just as in the case of ISN He. The differences between the correlated cases are larger than those expected when adopting the ``follow the peak'' scheme.
Adopting the step-like boresight modification scheme is expected to provide measurements with a larger statistical scatter than that obtained using the ``follow the peak'' scheme. While this issue is not a significant drawback for the plentiful ISN He, it potentially might be an issue for Ne and O. However, the analysis presented in Figure \ref{fig:fluxMaxPlot} demonstrates that adoption of a reasonable stepwise scheme, similar to that presented in Figure \ref{fig:stepOx}, can provide a sufficiently high signal-to-noise ratio to perform successful measurements and data analysis. The figure shows that the expected flux magnitudes are measurable, certainly not worse than those in the case of IBEX-Lo observations. The tested adjustment scheme offers two seasons of plentiful fluxes of ISN Ne and O, one around DOY 100 and another around DOY 140, which is expected to provide two correlation tubes intersecting at large angles in the parameter space, as shown in Figure \ref{fig:CorrelTubesIMAP}. Thus, with the capabilities of IMAP-Lo, it will be possible to break the parameter correlation for the direction and speed of ISN Ne and O, not only for He. Breaking the speed--temperature correlation will be naturally achieved due to a tightly constrained speed magnitude.
We note, however, that it is not likely that the option of using the indirect beams of the heavy species will be feasible, at least during the solar maximum conditions, because of a much larger attenuation of these beams by ionization than is expected for helium.
\subsection{Discussion}
\label{sec:discussion}
\noindent
The selection of the DOYs and elongation angles used in the example simulations presented in Section \ref{sec:thermalSpread} has not been optimized for the most advantageous elongation angles and fixed-elongation intervals. The ``stepwise adjustment'' scenario was adopted as an educated guess based on the insight from \citet{sokol_etal:19c}. However, it is evident from this study that breaking the parameter correlation is not difficult provided that observations are performed continuously or intermittently, but during time intervals distant by a couple of months during the year. If the adopted elongation angles allow a sufficiently large flux of ISN He into the instrument, the correlation will be removed, and a scheme of elongation adjustment can be defined to accommodate also other science targets, such as those discussed in \citet{sokol_etal:19c}.
The discussion presented so far has focused on the primary population of the ISN gas. However, identical parameter correlation mechanisms operate also for the secondary population, whose parameters, like those of the primary population, are also obtained strongly correlated \citep{kubiak_etal:16a}. The proposed ideas to break the parameter correlation are applicable in studies of the secondary population as well.
\section{Summary and conclusions}
\label{sec:conclu}
\noindent
We have investigated the reasons for the correlation of the inflow parameters of the ISN gas, obtained in analyses of direct-sampling observations performed by the IBEX-Lo experiment. The presence of the correlation is visible as ``tubes'' of the more likely values of the parameters in the parameter space.
We found that the parameter correlation appears for physical reasons. As long as there is no capability of measuring both the direction and the speed of the atoms arriving at the instrument, and only the direction can be measured, the flow direction, the speed, and the temperature of the ISN gas will be obtained correlated when observations are carried out along a short orbital arc of the instrument. An illustration of the reason is presented in Figure \ref{fig:degenOrbits}.
This conclusion is obtained from a simple reasoning, presented in Section \ref{sec:whyDegeneracy}, supported by calculations performed using the cold model of the ISN gas flow. With this model, we approximately reproduced the parameter tube obtained from IBEX observations, as shown in Figure \ref{fig:vlCorrelTubesIBEX}. This conclusion is also supported experimentally, by analysis of the ISN He flow parameters based on measurements of the Ulysses/GAS by \citet{bzowski_etal:14a}, discussed in Section \ref{sec:whyDegeneracy}.
The parameter correlation will not appear in the analysis if the observations are performed along a sufficiently long orbital arc around the Sun, or at least over shorter orbital arcs of the instrument, separated in space. The orientation of a parameter tube in the parameter space varies with the location along the orbit. When the instrument moves with the spacecraft along the orbit, the parameter tube rotates in the parameter space, and the daily tubes intersect at the position of the actual inflow parameters. This is illustrated in Figure \ref{fig:CorrelTubesIMAP}. By using this feature, it is easy to break the correlation between the direction and the speed of the inflow of ISN gas.
Another option to break the parameter correlation, complementary to the aforementioned one, is using observations of both the direct and indirect beams of ISN He, as discussed in Section \ref{sec:ndirBeam}. This requires using elongation angles of the instrument boresight about 60\degr--75\degr{} and is feasible only for He, because of large ionization losses for O and Ne.
The correlation between the speed and the temperature appears because combinations of the speed and the temperature corresponding to identical Mach numbers of the flow result in identical widths of the ISN gas beam. The speed--temperature tube is shown in Figure \ref{fig:TvCorrelTubesIBEX}. In the simple terms of the cold model, this correlation persists along the orbit around the Sun.
By simulations performed using the hot model of the ISN gas flow in the heliosphere, implemented in the WTPM model, we showed that the simulated beams of the ISN gas observed by an IBEX-like instrument, computed for alternative parameter sets correlated by speed and direction for a given location in the orbit, as predicted by the cold model, are indeed almost indistinguishable, as shown in Figure \ref{fig:ibexGeom}. We showed that the same parameter sets are outside the parameter tube for different locations along the Earth's orbit, which results in strongly differing simulated beams, presented in Figure \ref{fig:followVsStep}. Such beams would be easy to differentiate in actual observations. This supports the idea that performing the observations along sufficiently long orbital arcs enables breaking the parameter correlation.
Breaking the speed-temperature correlation will also be possible, since the similar beams obtained for the correlated parameters in one location will become different for a different location, as shown in Figure \ref{fig:followVsStepTh}.
The ability to observe the ISN gas from different locations along the Earth's orbit requires a capability to adjust the boresight of the instrument, i.e., the elongation angle between the boresight and the spacecraft spin axis, such as that of the planned IMAP-Lo experiment. In Figures \ref{fig:followVsStep} and \ref{fig:followVsStepTh} we show that successful breaking of the parameter correlation is achievable for different scenarios of adjusting the elongation angle as the measurement location moves along the Earth's orbit. This is true not only for the case when the boresight is adjusted to follow the peak of the ISN gas beam. In fact, adopting a scenario with the boresight elongation maintained constant during prolonged intervals may be more advantageous.
We verified this conclusion not only for the planned observations of the plentiful ISN He, but also for those of ISN Ne and O, which are less abundant and hence more challenging to observe and interpret. In Section \ref{sec:NeOxCorrel} we show simulations for these species and point out that the statistics collected by an IMAP-like instrument are expected to be sufficient when adopting either the peak-following or the stepwise boresight adjustment scenario.
Summarizing: the parameter correlation is due to physical reasons, and merely increasing the statistics while maintaining the location of observations in the Earth's orbit will not remove it. Breaking the ISN gas parameter correlation requires performing observations from separate locations in the orbit relatively far apart, and satisfying this prerequisite requires a capability of adjusting the instrument boresight relative to the spacecraft rotation axis, such as that of the planned IMAP-Lo camera.
\begin{acknowledgments}
The work at CBK PAN was supported by the Polish National Science Centre grant 2019/35/B/ST9/01241.
\end{acknowledgments}
\bibliographystyle{aasjournal}
\bibliography{breakDege_v5}
|
Title:
Cascades of high-energy SM particles in the primordial thermal plasma |
Abstract: High-energy standard model (SM) particles in the early Universe are generated
by the decay of heavy long-lived particles. The subsequent thermalization
occurs through the splitting of high-energy primary particles into lower-energy
daughters in primordial thermal plasma. The principal example of such processes
is reheating after inflation caused by the decay of inflatons into SM
particles. Understanding of the thermalization at reheating is extremely
important as it reveals the origin of the hot Universe, and could open up new
mechanisms for generating dark matter and/or baryon asymmetry. In this paper,
we investigate the thermalization of high-energy SM particles in thermal
plasma, taking into account the Landau--Pomeranchuk--Migdal effect in the
leading-log approximation. The whole SM particle content and all the relevant
SM interactions are included for the first time, i.e., the full gauge
interactions of SU(3)$_c\times$SU(2)$_L\times$U(1)$_Y$ and the top Yukawa
interaction. The distribution function of each SM species is computed both
numerically and analytically. We have analytically obtained the distribution
function of each SM species after the first few splittings. Furthermore, we
demonstrate that, after a sufficient number of splittings, the particle
distributions are asymptotic to certain values at low momentum, independent of
the high-energy particles injected by inflaton decay. The results are useful to
calculate the DM abundance produced during the pre-thermal phase. An example is
provided to illustrate a way to calculate the DM abundance from the scattering
between the thermal plasma and high-energy particles in the cascade.
| https://export.arxiv.org/pdf/2208.11708 |
\hypersetup{pageanchor=false}
\begin{titlepage}
\begin{center}
\hfill KEK-TH-2443\\
\hfill TU-1165\\
\vskip 0.5in
{\Huge \bfseries
Cascades of high-energy SM particles\\
in the primordial thermal plasma\\
}
\vskip .8in
{\Large Kyohei Mukaida$^{a,b}$, Masaki Yamada$^{c,d}$}
\vskip .3in
\begin{tabular}{ll}
$^a$& \!\!\!\!\!\emph{Theory Center, IPNS, KEK, 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan}\\
$^b$& \!\!\!\!\!\emph{Graduate University for Advanced Studies (Sokendai), }\\[-.3em]
& \!\!\!\!\!\emph{1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan}\\
$^c$& \!\!\!\!\!\emph{FRIS, Tohoku University, Sendai, Miyagi 980-8578, Japan}\\
$^d$& \!\!\!\!\!\emph{Department of Physics, Tohoku University, Sendai, Miyagi 980-8578, Japan}\\
\end{tabular}
\end{center}
\vskip .6in
\end{titlepage}
\tableofcontents
\thispagestyle{empty}
\renewcommand{\thepage}{\arabic{page}}
\renewcommand{\thefootnote}{$\natural$\arabic{footnote}}
\setcounter{footnote}{0}
\newpage
\hypersetup{pageanchor=true}
\section{Introduction}
\label{sec:intro}
Understanding of the thermal history of the Universe has dramatically changed our perception of cosmology.
Observation of the cosmic microwave background~\cite{Penzias:1965wn,Mather:1993ij} revealed that our Universe was filled with hot thermal plasma up to temperatures of $\mathcal{O} (1) \eV$.
Combining this observation with Big Bang nucleosynthesis (BBN), our understanding has been further enhanced to a higher temperature~\cite{Schramm:1997vs}.
In BBN, the abundance of primordial light elements is predicted by solving the Boltzmann equations for nucleons with complicated reaction chains.
The consistency with observed light-element abundances has confirmed the thermal history of the Universe up to a temperature as high as $\mathcal{O}(1)\MeV$~\cite{Kawasaki:1999na,Kawasaki:2000en,Giudice:2000ex,
Hannestad:2004px,Ichikawa:2005vw,deSalas:2015glj,Hasegawa:2019jsa},
providing one of the most stringent constraints on cosmological scenarios beyond the standard model (SM).
To resolve the initial-condition problems of the thermal Universe, such as the flatness and horizon problems, an exponentially expanding era, called inflation, must be present in an earlier stage of the Universe~\cite{Guth:1980zm} (see also Refs.~\cite{Starobinsky:1980te,Sato:1980yn}).
When inflation terminates, its energy should be released into the thermal plasma to form the hot Universe.
This process is called reheating and is realized by the perturbative or nonperturbative decay of inflatons into radiation, including SM particles.
If the decay rate of the inflaton is not significantly high,
the last stage of reheating is described by its perturbative decay after a period of inflaton-oscillation domination.%
\footnote{
In the earlier stage of reheating, parametric resonance may take place~\cite{Traschen:1990sw,Kofman:1994rk,Shtanov:1994ce,Kofman:1997yn}, which is known as preheating.
In this paper, we focus on the case with a small inflaton-decay rate, where the preheating is generically shut off by the cosmic expansion and rescatterings.
The final state of reheating is then dominated by the perturbative inflaton decay.
}
The process of thermalization, even in this case, is quite non-trivial, contrary to the naive expectation.
This is because the primary particles injected by the inflaton decay can have much higher energy than the temperature of the ambient plasma, which is as low as $\mathcal{O} (1) \MeV$.\footnote{
For a higher decay rate, although the reheating is described by the perturbative decay, the temperature of the ambient plasma becomes larger than the inflaton mass.
In this case, the decay rate is modified by the thermal effect~\cite{Mukaida:2012qn,Mukaida:2012bz}.
}
Moreover, such injection of high-energy SM particles is expected in many models beyond the SM.
If the model involves some heavy long-lived particles, such as dilaton or moduli fields~\cite{Coughlan:1983ci,deCarlos:1993wie,Banks:1993en}, its prolonged decay generates high-energy particles.
Of course, it is not limited to the decay of heavy particles.
The decay/evaporation of extended objects, such as I-balls/oscillons~\cite{Hertzberg:2010yz,Kawasaki:2013awa,Saffin:2016kof,Hong:2017ooe}
and primordial black holes~\cite{Hawking:1974rv,Hawking:1975vcx,Page:1976df,Das:2021wei}, also lead to such primary generation of high-energy particles.
To have the thermal Universe of $\mathcal{O} (1) \MeV$ confirmed by the success of BBN, one has to guarantee that the primary high-energy particles are thermalized by then.
Therefore, understanding of the thermalization process after the injection of high-energy SM particles is indispensable not only to cosmology but also to particle physics.
High-energy SM particles are continuously injected into low-temperature plasma before the completion of, for instance, inflaton decay.
In order for the high-energy particles to become thermalized, they must lose their energy while increasing in number via their splittings into lower momentum modes.
The splittings of parent particles with much higher energy than the temperature of the ambient plasma occur almost collinearly, and hence, the rate is strongly suppressed by the interference between the parent and daughter particles.
This effect is known as the Landau--Pomeranchuk--Migdal (LPM) effect~\cite{Landau:1953um, Migdal:1956tc, Gyulassy:1993hr,
Arnold:2001ba, Arnold:2001ms, Arnold:2002ja, Besak:2010fb}.
The resulting thermalization proceeds via the bottom-up process, where lower-momentum modes are thermalized earlier,
and the injected high-energy particles are thermalized only after the splittings.
Thermalization was investigated in detail
in Refs.~\cite{Arnold:2002zm,Jeon:2003gi, Arnold:2008zu, Kurkela:2011ti,AbraaoYork:2014hbk,Kurkela:2014tea,Kurkela:2014tla,Kurkela:2018oqw,Kurkela:2018xxd,Du:2020zqg,Du:2020dvp,Fu:2021jhl} in the context of ultra-relativistic heavy-ion collisions, and these results were later applied to the cosmological context in
Refs.~\cite{Harigaya:2013vwa,Harigaya:2014waa,Mukaida:2015ria,Drees:2021lbm,Passaglia:2021upk,Drees:2022vvn} (see also Refs.~\cite{Davidson:2000er,Allahverdi:2002pu,Jaikumar:2002iq} for earlier works).
To calculate the amount of non-thermally produced dark matter (DM) during thermalization and reheating, understanding of detailed thermalization history is crucial.
In particular, during splitting, the number of high-energy particles exponentially increases (though their energy becomes smaller).
Subsequently, weakly interacting dark matter can be efficiently produced from the collisions of high-energy particles before they are completely thermalized~\cite{Harigaya:2014waa}.%
\footnote{
Non-thermal production of DM during reheating and thermalization was discussed in Refs.~\cite{Garcia:2017tuj, Dudas:2017kfz, Drees:2017iod, Allahverdi:2018aux, Kaneta:2019zgw,Bernal:2019mhf,Allahverdi:2019jsc}.
However, the following effects were not taken into account: finiteness of the thermalization timescale and DM production from the thermal cascade of high-energy particles.
See also Refs.~\cite{Chung:1998rq, Giudice:2000ex, Allahverdi:2002nb, Allahverdi:2002pu, Kane:2009if, Hooper:2011aj,Kurata:2012nf, Fan:2013faa, Kane:2015jia, Co:2015pka, Dhuria:2015xua} for earlier works on non-thermal production of DM with and without the instantaneous thermalization approximation.
See also Refs.~\cite{Kurata:2012nf,Mambrini:2022uol} for indirect non-thermal production of DM from the inflaton decay in vacuum.
}
The following related works included a non-renormalizable coupling for DM production~\cite{Garcia:2018wtq,Harigaya:2019tzu},%
\footnote{
See Ref.~\cite{Garcia:2020eof,Garcia:2020wiy} for the case with a non-quadratic inflaton potential.
}
non-thermal leptogenesis during thermalization~\cite{Hamada:2015xva, Hamada:2018epb, Asaka:2019ocw},
and sphalerons after the electroweak crossover~\cite{Asaka:2003vt,Jaeckel:2022osh}.
In Ref.~\cite{Drees:2022vvn}, the Boltzmann equations were numerically solved for fermions and gauge fields in the SM without the Higgs field, with the results agreeing with qualitative discussion~\cite{Harigaya:2014waa} and numerical results for a pure gluon theory~\cite{Drees:2021lbm} up to some numerical factors.
In this paper, we extend the analysis and calculation of Refs.~\cite{Harigaya:2014waa,Drees:2021lbm,Drees:2022vvn} by providing and solving complete Boltzmann equations for the SM particles in the leading-log approximation for the thermalization of high-energy SM particles. We include the quarks, leptons, Higgs, and gauge bosons of SU(3)$_c\times$SU(2)$_L\times$U(1)$_Y$ gauge theory. The top Yukawa interaction is considered, while the other Yukawa interactions are negligible.
The complete Boltzmann equations are numerically solved in the stationary regime
where the source term of high-energy particles is balanced by the dissipation into the thermal plasma.
This corresponds to the thermalization at the last stage of reheating, where the LPM suppressed splitting rate is much larger than the Hubble expansion rate~\cite{Mukaida:2015ria}.
The relative values of the distribution functions of the SM fields are observed to approach certain fixed ratios at low momenta, independent of the initially injected high-energy particles.
Therefore, all SM particles are produced during the splitting process, and their distributions reach the scaling solution in the limit of a large number of splitting processes.
We also determine the energy scale below which the scaling solution is approximately realized; it depends on the initially injected particle species.
The results can be used to consider the non-thermal production of DM during thermalization.
This paper is organized as follows:
In Sec.~\ref{sec:reheating}, we first specify the system to which our calculations apply, and summarize the properties of and Boltzmann equations for the SM particles that govern the thermalization after inflation.
In Sec.~\ref{sec:analytic}, we provide some analytic results for the asymptotic behavior of the distribution functions of the SM particles at small and large momenta.
The former provides a scaling solution of the Boltzmann equation that is useful for calculating the DM production process during thermalization.
The latter, the scaling behavior at large momentum, provides an appropriate boundary condition for the numerical calculation of the Boltzmann equation.
Equipped with this boundary condition, the Boltzmann equation is numerically solved in Sec.~\ref{sec:numerical}.
The numerical result confirms the analytic asymptotic behavior of the distribution functions.
In Sec.~\ref{sec:application}, the results are applied to the non-thermal production of DM.
A toy model is considered to illustrate the calculation of the DM abundance from scattering between the thermal plasma and a cascading high-energy particle.
Section~\ref{sec:conclusion} presents the discussion and conclusion.
\section{Setup for thermalization process of SM particles}
\label{sec:reheating}
\subsection{Source term for primary particles}
We are interested in the thermalization process of high-energy particles injected into a low-temperature ambient plasma.
This is realized, \textit{e.g.}, by the decay of inflaton into SM particles during the reheating epoch.
Our calculation and formalism can be applied to a more general setup, which we specify below.
The system is as follows:
We begin with a thermal plasma with a temperature $T$. High-energy SM particles with energy $p_0$ ($\gg T$) are injected into the thermal plasma via, \textit{e.g.}, the decay of a heavy particle.
We refer to these high-energy particles as primary particles.
These primary particles are expected to lose their energy via the splitting process into lower-energy daughter particles, as explained later in this paper.
This process can be described by the Boltzmann equations if we appropriately take into account the splitting processes, as we discuss in Sec.~\ref{sec:splitting}:
\begin{equation}
\qty( \frac{\partial }{\partial t} - H p \frac{\partial }{\partial p} ) f_s(p,t)= \text{(Source \ term)} + \text{(Splitting \ terms)},
\label{eq:Boltzmann0}
\end{equation}
where $H$ is the Hubble parameter and $f_s$ represents the distribution function of particle species $s$. The source term is present for the primary particles and is given by a delta-function at $p = p_0$.
If we consider a case in which the primary particles originate from the two-body decay of a heavy particle with number density $n_I (t)$, mass $m_I$, and decay rate $\Gamma_I$, the source term is expressed as
\begin{align}
&\text{(Source \ term)} = \Br \frac{\dd \Gamma_I}{\dd p} \frac{2\pi^2}{p^2} n_I(t),
\\
&\frac{\dd \Gamma_I}{\dd p}= 2 \Gamma_I \delta \qty( p - p_0 ), \qquad
p_0 = m_I / 2.
\label{eq:source}
\end{align}
Here, $\Br$ is the branching ratio into a particle species $s$, which is defined shortly [see Eq.~\eqref{eq:Br}].%
\footnote{
\label{footnote1}
Strictly speaking,
the source term is a distribution with a finite width broadening in the domain of $p \le p_0$
because of, \textit{e.g.}, the redshift via the expansion of the Universe.
This particularly implies that
the integral of the delta function over $p$ from $p \ll p_0$ to $p_0$ gives a factor of $1$ rather than $1/2$.
In other words, the delta-function gives a source only for $p < p_0$ (rather than both $p< p_0$ and $p > p_0$), so that it gives a factor of $1$ after the integral over a line segment of $[0,p_0]$.
}
If the heavy particle denoted by the subscript $I$ behaves as a pressureless matter, we have
\begin{equation}
n_I(t) = n_I (t_0) \qty[ \frac{a(t_0)}{a(t)} ]^3 e^{-\Gamma_I t} ,
\label{eq:nI}
\end{equation}
where $a(t)$ is the scale factor, and $t_0$ is a reference time.
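As a minimal sketch with hypothetical numbers, the dilution-plus-decay law above also shows why $n_I$ may be treated as constant over the much shorter thermalization timescale:

```python
import math

def n_heavy(t, n0, a0_over_a, Gamma):
    """Number density of a pressureless, decaying species:
    comoving dilution (a0/a)^3 times exponential decay exp(-Gamma * t)."""
    return n0 * a0_over_a**3 * math.exp(-Gamma * t)

# For t << 1/Gamma the decay factor is ~1 and n_I simply redshifts as a^-3;
# at fixed scale factor it is constant to high accuracy over short times.
n0, Gamma = 1.0, 1.0e-3          # hypothetical units with 1/Gamma = 1000
assert abs(n_heavy(0.1, n0, 1.0, Gamma) - n0) / n0 < 1e-3
assert n_heavy(2.0 / Gamma, n0, 1.0, Gamma) < n0 * math.exp(-1)
```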
The splitting terms will be specified in the next Sec.~\ref{sec:splitting}.
In this paper, we consider a
regime in which the thermalization rate is significantly faster than the Hubble expansion rate,
and hence the temperature of the ambient plasma can be approximated to be constant.
Then, we can neglect the second term on the left-hand side of Eq.~\eqref{eq:Boltzmann0}.
We also consider the case in which $n_I(t)$ does not change over the thermalization timescale.
For notational convenience, we define
\begin{align}
&\tilde{\Gamma} \equiv 2 \Gamma_I \frac{2\pi^2 n_I}{p_0^2} \frac{1}{p_0^{1/2} T^{3/2}},
\end{align}
so that
the source term of the Boltzmann equation can be expressed as
\begin{align}
\text{(Source \ term)} = p_0^{1/2} T^{3/2} \Br \tilde{\Gamma} \delta( p -p_0).
\end{align}
The factor of $p_0^{1/2} T^{3/2}$ is included for later convenience.
Since the thermalization timescale is shorter than all other timescales and primary particles are continuously injected,
the particle distributions are expected to reach a stationary solution.
This greatly simplifies our analysis, as we only need to calculate a stationary solution to the Boltzmann equations.
One may obtain such a stationary solution by requiring that the collision terms of the Boltzmann equations balance the given source term.
There are several interesting applications for our calculations.
One simple example is the decay of a heavy particle into SM particles in the early Universe.
A particularly important one is reheating via the perturbative decay of the inflaton, which is expected at the last stage of reheating
in many inflaton models with a small decay rate, such as a Planck-suppressed decay.
Such a small coupling is theoretically well motivated: the inflaton must have a flat potential, which would be spoiled by loop corrections if the inflaton had sizable interactions.
In this case, the above assumptions are justified when one considers the thermalization of inflaton-decay products at the final stage of the reheating process.%
\footnote{
Strictly speaking, the thermalization process (more specifically, the LPM splitting process) occurs faster than the expansion rate of the Universe after the Universe reaches its maximal temperature, as discussed in Ref.~\cite{Mukaida:2015ria}. One can apply our calculation to this regime.
}
Furthermore, one can approximate $n_I(t)$ as constant because inflaton decay is significantly slower than thermalization.
Let us illustrate a typical time-evolution after inflation for the sake of completeness.
After inflation, the inflaton oscillation dominates the Universe, and the inflaton perturbatively decays into the SM particles to reheat the Universe.
When the Hubble parameter $H$ becomes comparable to the decay rate $\Gamma_I$, reheating is completed, and the Universe is dominated by radiation.
The important point here is that the inflaton continuously decays into SM particles, \textit{even before reheating is completed} (\textit{i.e.}, during the inflaton-oscillation dominated era).
This is the source of primary particles during the reheating epoch.
Throughout this paper, we assume that thermalization proceeds in a homogeneous ambient plasma with constant temperature $T$, and that the thermalization of each high-energy particle can be treated as an isolated event.
This is justified because the number density of the decaying heavy particle is small enough in the cases of interest
and the backreaction of the cascading process on the ambient plasma is negligible.
This is confirmed by the following order-of-magnitude estimate.
In each thermalization event,
a small region is heated via the splitting process that we explain shortly.
These heated regions
dissipate into the ambient plasma over a diffusion length of order $d_{\rm dif} \sim (t \, t_{\rm el})^{1/2} \sim t^{1/2} / (\alpha T^{1/2})$ within a timescale $t$, where $t_{\rm el} \sim 1/(\alpha^2 T)$ is the timescale of elastic scatterings.
To see the effect of the diffused region of a single jet on subsequent jets,
the relevant timescale should be determined self-consistently such that another jet is produced within the volume $d_{\rm dif}^3$.
Here, the probability that a heavy particle decays within the volume $d_{\rm dif}^3$ is given by $d_{\rm dif}^3 n_I \Gamma_I t$.
Setting this equal to unity and solving for $t$,
we obtain
$t \gtrsim \left( \alpha^3 p_0 \Mpl \right)^{2/5} / T^{9/5}$,
where we used $n_I \lesssim T^4 / p_0$ and $\Gamma_I \lesssim H \sim T^2 / \Mpl$.
The volume of the dissipated region from each high-energy particle is about $d_{\rm dif}^3$, and the energy injected into this region is about $p_0$.
Comparing this with the energy of the thermal plasma in the same region, we can estimate how much the thermalization process overheats it:
\begin{align}
\frac{p_0}{d_{\rm dif}^3} \frac{1}{T^4}
\lesssim \left( \frac{\alpha^6 p_0^2 T}{M_{\rm pl}^{3} } \right)^{1/5}.
\end{align}
This is much smaller than unity, so we can neglect the backreaction on the ambient plasma during thermalization
and approximate $T$ as constant in space.
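The power counting in this estimate can be verified mechanically. The sketch below is an illustrative cross-check under the same order-of-magnitude assumptions (all $\mathcal{O}(1)$ factors dropped); it tracks the exponents of $(\alpha, p_0, \Mpl, T)$ for each quantity:

```python
from fractions import Fraction as F

# A quantity is represented by its list of exponents in (alpha, p0, Mpl, T).
def mul(*qs):
    return [sum(col) for col in zip(*qs)]

def powq(q, e):
    return [x * F(e) for x in q]

alpha = [F(1), F(0), F(0), F(0)]
p0    = [F(0), F(1), F(0), F(0)]
Mpl   = [F(0), F(0), F(1), F(0)]
T     = [F(0), F(0), F(0), F(1)]

# t ~ (alpha^3 p0 Mpl)^{2/5} / T^{9/5}
t = mul(powq(mul(powq(alpha, 3), p0, Mpl), F(2, 5)), powq(T, F(-9, 5)))
# d_dif ~ t^{1/2} / (alpha T^{1/2})
d_dif = mul(powq(t, F(1, 2)), powq(alpha, -1), powq(T, F(-1, 2)))
# n_I ~ T^4 / p0 and Gamma_I ~ H ~ T^2 / Mpl
n_I = mul(powq(T, 4), powq(p0, -1))
Gamma_I = mul(powq(T, 2), powq(Mpl, -1))

# self-consistency: one decay per diffused volume, d_dif^3 n_I Gamma_I t ~ 1
assert mul(powq(d_dif, 3), n_I, Gamma_I, t) == [0, 0, 0, 0]

# overheating: p0 / (d_dif^3 T^4) ~ (alpha^6 p0^2 T / Mpl^3)^{1/5}
overheat = mul(p0, powq(d_dif, -3), powq(T, -4))
assert overheat == powq(mul(powq(alpha, 6), powq(p0, 2), T, powq(Mpl, -3)), F(1, 5))
```

All four exponents cancel in the self-consistency condition, and the overheating estimate comes out as $(\alpha^6 p_0^2 T/\Mpl^3)^{1/5}$, in agreement with the text.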
\subsection{Splitting process}
\label{sec:splitting}
Our entire analysis is based on Boltzmann equations, which are valid under some conditions.
One can show that the (quasi-)particle excitations with momenta larger than the screening scale, $p^2 \gg m_D^2 \sim \alpha \int_{\bm p} f(p) / |p|$, are described by Boltzmann equations
if the following conditions are met:
(i) the occupancy is perturbative, $f(p) \ll 1 / \alpha$,
(ii) the size of quasi-particles is smaller than the mean free path,
and (iii) the duration of each interaction is shorter than the mean free time.
Here, $\alpha$ collectively denotes the fine-structure constants of the relevant interactions.
The last condition ensures the quantum decoherence of individual scatterings, which is required to treat each interaction as a separate collision term in the Boltzmann equations.
We are interested in the energy loss of high-energy particles injected into low-temperature ambient plasma, which is dominated by their nearly collinear splittings.
For such collinear emissions, the last condition (iii) must be examined carefully, because the emitted daughter stays close to the parent and thereby interferes with subsequent scatterings. This is known as the LPM effect~\cite{Landau:1953um, Migdal:1956tc, Gyulassy:1993hr,
Arnold:2001ba, Arnold:2001ms, Arnold:2002ja, Besak:2010fb}; see Fig.~\ref{fig:LPM} for an illustration.
Coherence is maintained until the overlap between the parent and daughter is lost.
Following Ref.~\cite{Kurkela:2011ti}, let us estimate the decoherence time $t_\text{form}$, before which destructive interference suppresses the emission.
Suppose that a parent particle with momentum $p$ almost collinearly emits a daughter particle with momentum $k$ that is charged under the SM gauge group.
In this case, the decoherence time is dominated by the daughter particle.
Since the transverse size of the wave is $1/k_\perp$ with $k_\perp$ being its transverse momentum, the overlap is resolved for $t \gtrsim k / k_\perp^2$.
A high-energy charged particle acquires transverse momentum through frequent elastic scatterings with particles in the medium mediated by the $t$-channel gauge boson exchange.
In our case of interest, such elastic scatterings can be regarded as random processes because we expect $t_\text{form} \gg t_\text{el}$.
The squared transverse momentum then grows diffusively as
\begin{equation}
k_\perp^2 \sim \hat q_\text{el} t,
\qquad
\hat q_\text{el} \sim \int_{\bm{q}_\perp} \frac{\alpha^2 (q_\perp)}{q_\perp^2 + m_D^2} \int_{\bm{p}'} f(\bm{p}'),
\qquad
m_D^2 \sim \alpha \int_{\bm{p}'} \frac{f (\bm{p}')}{p'},
\label{eq:transverse_diff}
\end{equation}
with $\alpha$ being the fine structure constant of the mediated gauge boson.
This leads to the following condition for the decoherence:
\begin{equation}
t \gtrsim \sqrt{k / \hat q_\text{el}} \equiv t_\text{form}.
\end{equation}
If the daughter particle is neutral under a gauge group of the system in consideration, the decoherence time is dominated by the transverse diffusion of the parent particle, \textit{i.e.}, $p_\perp^2 \sim \hat q_\text{el} t$.
In this case, we instead have
\begin{equation}
t \gtrsim \frac{1}{k_\perp} \frac{p}{p_\perp} \simeq \frac{1}{k} \frac{p^2}{p_\perp^2}
~~\longrightarrow~~
t \gtrsim \sqrt{\frac{p^2}{k \hat q_\text{el}}} \equiv t_\text{form}.
\end{equation}
An important example of this case is the emission of a U(1)$_Y$ gauge boson.
The above consideration shows that the splitting can only happen for $t > t_\text{form}$.
Once the condition $t > t_\text{form}$ is fulfilled, the subsequent splitting can be treated incoherently, which allows us to use the Boltzmann equations.
Taking into account the coupling to the daughter particle $\alpha_d$, the LPM-suppressed splitting rate is estimated as
\begin{equation}
\Gamma_\text{LPM} \sim \frac{\alpha_d}{t_\text{form}}
= \alpha_d \,
\begin{cases}
\sqrt{\frac{\hat q_\text{el}}{k}} &\text{charged daughter}, \\
\sqrt{\frac{k \hat q_\text{el}}{p^2}} & \text{neutral daughter}.
\end{cases}
\end{equation}
The splitting rate is suppressed in proportion to $1/ \sqrt{k}$ or $\sqrt{k/p^2}$, reproducing the characteristic suppression factor of the LPM effect.
The corresponding splitting function that appears in the Boltzmann equations is given by
\begin{equation}
\gamma \sim k \times \Gamma_\text{LPM} \sim \alpha_d \left. k_\perp^2 \right|_{t \sim t_\text{form}}, \qquad
\left. k_\perp^2 \right|_{t \sim t_\text{form}} \sim
\begin{cases}
\hat q_\text{el} t_\text{form} = \sqrt{k \hat q_\text{el}} &\text{charged daughter},\\
\frac{k^2}{p^2} \hat q_\text{el} t_\text{form} = \frac{k}{p} \sqrt{k \hat q_\text{el}} & \text{neutral daughter}.
\end{cases}
\label{eq:split_func_rough}
\end{equation}
As we see shortly, this simple argument correctly reproduces the qualitative behavior of the splitting functions.
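The scalings in Eq.~\eqref{eq:split_func_rough} can be cross-checked mechanically as well. The sketch below tracks powers of $(\hat q_\text{el}, k, p)$ and verifies that $k\, \Gamma_\text{LPM}$ reproduces $k_\perp^2 |_{t \sim t_\text{form}}$ (up to the factor of $\alpha_d$) in both the charged- and neutral-daughter cases:

```python
from fractions import Fraction as F

# A quantity is represented by its list of exponents in (qhat, k, p).
def mul(*qs):
    return [sum(col) for col in zip(*qs)]

def powq(q, e):
    return [x * F(e) for x in q]

qhat = [F(1), F(0), F(0)]
k    = [F(0), F(1), F(0)]
p    = [F(0), F(0), F(1)]

# formation times: t_form = sqrt(k/qhat)        (charged daughter)
#                  t_form = sqrt(p^2/(k qhat))  (neutral daughter)
t_ch = powq(mul(k, powq(qhat, -1)), F(1, 2))
t_ne = powq(mul(powq(p, 2), powq(k, -1), powq(qhat, -1)), F(1, 2))

# gamma ~ k * Gamma_LPM ~ k / t_form (stripping the overall factor alpha_d)
g_ch = mul(k, powq(t_ch, -1))
g_ne = mul(k, powq(t_ne, -1))

# these reproduce k_perp^2 at t ~ t_form as quoted in the text
assert g_ch == powq(mul(k, qhat), F(1, 2))                       # sqrt(k qhat)
assert g_ne == mul(k, powq(p, -1), powq(mul(k, qhat), F(1, 2)))  # (k/p) sqrt(k qhat)
```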
The exact splitting function with the LPM effect is obtained by resumming the corresponding diagrams.
The resummation can be performed by solving the recursion equation self-consistently, which is derived from first principles in thermal field theory~\cite{Arnold:2001ba, Arnold:2001ms, Arnold:2002ja, Besak:2010fb}.
Instead of providing a direct derivation here, we just quote the result known in the literature and explain the physical meaning of each term.
Suppose that the parent particle of species $s$ with momentum $p$ emits daughters of species $s'$ and $s''$ with momenta of $k$ and $p - k$ almost collinearly.
The splitting functions can be written collectively as
\begin{equation}
\gamma_{s \leftrightarrow s' s''} \qty( p; xp, (1 - x)p ) = \frac{1}{2} \frac{\alpha_{ss's''}}{\qty( 2 \pi )^4 \sqrt{2}} \times
\frac{P_{s \leftrightarrow s' s''}^\text{(vac)} \qty(x)}{x \qty( 1 - x)} \times \mu^2_\perp \qty( 1, x, 1-x; s, s', s''),
\end{equation}
with $x = k / p$.
The first factor, $\alpha_{ss's''}$, collectively represents the coupling of the three-point vertex $s s' s''$, including possible group factors.
For instance, the triple gauge-boson vertex of SU$(N)$ gives $\alpha_{g g g} = \qty(N^2 - 1) N \alpha$, with $\alpha$ the fine-structure constant of SU$(N)$.
The factor in the numerator of the second term is the well-known splitting function of the DGLAP equations.
Hence, the first two factors correspond to the splitting processes in a vacuum.
The last factor comes from the diffused transverse momentum at the decoherence time and is a more general expression of $k_\perp^2 |_{t\sim t_\text{form}}$ in Eq.~\eqref{eq:split_func_rough}.
The explicit form of $\mu_\perp$ for $p \gg T / (\alpha^2 \ln 1/ \alpha^2)$ in the leading-log approximation is given by~\cite{Arnold:2008zu}%
\footnote{
For a relatively small momentum,
$T \ll p_0 \ll T / (\alpha^2 \log 1/\alpha^{1/2})$, one may
replace
\begin {equation}
\frac{ \alpha_a \qty(m_{D,a}) - \alpha_a \qty(Q_{\perp, a}) }{ - b_a / \qty( 64 \pi^3 )}\, \mathcal{N}_a
\quad \to \quad
4\pi \alpha_a m_{D,a}^2 T \ln(Q_{\perp, a}^2/m_{D,a}^2)
\label {eq:logsub}
\end {equation}
in \eqref{eq:mu_perp}.
However, the difference is of the order of $\mathcal{O}(10)\%$ at most, even for such a small momentum; therefore, we use \eqref{eq:mu_perp} throughout our numerical calculations.
}%
\footnote{
Because we are interested in the case with a large hierarchy between $p_0$ and $T$, logarithmic corrections of order $\alpha_a \log (Q_{\perp,a}/T)$ to the leading-log approximation could be important.
This kind of correction is known as a doubly logarithmically enhanced order-$\alpha$ correction~\cite{Iancu:2015vea}.
We thank an anonymous referee for pointing this out and leave the effect of these corrections for future work.
In contrast, $\sqrt{\alpha}$ corrections can be neglected because they are smaller than the leading-log term by a factor of order $\sqrt{\alpha}/\log (Q_{\perp,a}/T)$~\cite{Caron-Huot:2008zna}.
}
\begin{align}
\mu^4_\perp \qty( x_1, x_2, x_3; s_1, s_2, s_3) =
\frac{2}{\pi} \,
x_1 x_2 x_3 \,
p\,
\sum_{a} \frac{ \alpha_a \qty(m_{D,a}) - \alpha_a \qty(Q_{\perp, a}) }{ - b_a / \qty( 64 \pi^3 )}\, \mathcal{N}_a
\sum_{\sigma \in A_3} \tfrac12 \qty[ C_{R_{s_{\sigma(2)}}}^{(a)}+C_{R_{s_{\sigma(3)}}}^{(a)}-C_{R_{s_{\sigma(1)}}}^{(a)} ] x_{\sigma(1)}^2,
\label{eq:mu_perp}
\end{align}
where
\begin{equation}
\mathcal{N}_a
\equiv
\sum_s \frac{\nu_s}{d_{{\rm R}_s}^{(a)}} t_{{\rm R}_s}^{(a)}
\int \frac{\dd^3\ell}{(2\pi)^3} \> f_s(\ell),
\label{eq:N_a}
\end{equation}
and
\begin{equation}
\qty( \frac{Q_{\perp,a}}{m_{D,a}} )^2
\sim \left(\frac{p}{T}\right)^{1/2} \ln^{1/2}\left(\frac{p}{T}\right) \,,
\qquad
m_{D,a}^2 =
8\pi \alpha_a
\sum_s \frac{\nu_s}{d_{\mathrm{R}_s}^{(a)}} t_{\mathrm{R}_s}^{(a)} \int\frac{\dd^3p}{(2\pi)^3} \frac{f_s(p)}{p}
\,.
\label{eq:mD}
\end{equation}
Here, we consider a gauge group of $G = G_1 \times G_2 \times \cdots \times G_N$ with $G_a$ being a Lie group and the summation over $a$ runs through all the Lie groups involved, \textit{i.e.}, $a = 1 ,2,\cdots,N$.
We use $s$ to represent a particle species; its number of degrees of freedom is denoted by $\nu_s$, and its representation under $G$ is $\mathrm{R}_s$.
For non-Abelian $G_a$, the dimension of a representation $\mathrm{R}$ is $d_\text{R}^{(a)}$, its generators are normalized by $\mathrm{Tr} [T_{{\rm R}}^{i} T_{{\rm R}}^j] = t_{{\rm R}}^{(a)} \delta_{ij}$, and its quadratic Casimir is denoted as $C^{(a)}_{{\rm R}} \cdot \mathbb{1} \equiv T_{{\rm R}}^{i} T_{{\rm R}}^i$,
where $i$ and $j$ are indices for generators.
For notational brevity, we use the same characters $d_\mathrm{R}^{(a)}, t_\mathrm{R}^{(a)}, C^{(a)}_{{\rm R}}$ for Abelian $G_a$, which are defined by $d_\mathrm{R}^{(a)} = 1$ and $t_{{\rm R}}^{(a)} = C^{(a)}_{{\rm R}} = q_{a}^2$ with $q_a$ being the charge under $G_a$.
The fine-structure constant of each $G_a$ is denoted as $\alpha_a$ and its beta function is represented by $b_a$.
Note here that all the gauge couplings are assumed to be comparable, $\alpha \sim \alpha_a$, and hence the resummation is performed for all the gauge fields on an equal footing, as done, for instance, in \cite{Anisimov:2010gy,Besak:2012qm,Bodeker:2019ajh}.
The summation is taken over the alternating group of degree $3$ denoted as $A_3$, \textit{i.e.}, $(\sigma(1), \sigma(2), \sigma(3)) = (1,2,3)$, $(2,3,1)$, or $(3,1,2)$.
As an example, let us again consider the case of SU$(N)$ gauge fields.
One finds $\mu_\perp^2 (1, x, (1-x); g g g) \sim \sqrt{xp \hat q_\text{el}}$ for $x \ll 1$.
On the other hand, the emission of a U$(1)$ gauge field from $\psi$ gives
$\mu_\perp^2 (1, x, (1-x); \psi g \psi) \sim x \sqrt{x p \hat q_\text{el}}$.
They coincide with $k_\perp^2 |_{t \sim t_\text{form}}$ in Eq.~\eqref{eq:split_func_rough}.
\subsection{SM particles}
Now we shall consider the SM, where $G = G_1 \times G_2 \times G_3$ with $G_1 =$U(1)$_Y$, $G_2 =$SU(2)$_L$, and $G_3 = $SU(3)$_c$.
We consider the thermalization process of hard primaries via SM interactions
and thus assume that hard primaries are one of the (or some combinations of) SM particles.
We denote the primary particle as $\si$.
We first summarize the properties of SM particles for the reader’s convenience.
A list of SM particles and their charge assignments under SU(3)$_c\times$SU(2)$_L\times$U(1)$_Y$ is shown below.
\begin{center}
\begin{tabular}{c|cccccc|ccc}
& $e_f$ & $L_f$ & $u_f$ & $d_f$ & $Q_f$ & $\phi$ & $g$ & $W$ & $B$ \\[.2em]
\hline
SU(3)$_c$ & & & F & F & F &
& A & &
\\
SU(2)$_L$ & & F & & & F & F
& & A &
\\
U(1)$_Y$ & -1 & -1/2 & 2/3 & -1/3 & 1/6 & 1/2
& & &
\\
\hline
$\nu_s$ & 2 & 4 & 6 & 6 & 12 & 4 & 16 & 6 & 2
\end{tabular}
\label{tab:toolkit}
\end{center}
where $f= 1,2,3$ represents the family
and $\nu_s$ represents the number of degrees of freedom of the particle species $s$.
We neglect any asymmetry in the system; that is, the number density of each antiparticle equals that of the corresponding particle, and its degrees of freedom are included in $\nu_s$.
We use $s$ to represent particle species, such as
\begin{equation}
s = \qty( e_f, L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3, \phi, g, W, B ),
\end{equation}
where $f=1,2,3$, and $f'=1,2$.
In our notation, when we take a summation over the species index $s$, we implicitly include the summation over $f$.
We treat the third family of quarks separately because we consider the top Yukawa interaction, as we will see shortly.
We collectively denote the gauge interactions as $a = 1, 2$, and $3$
for U(1)$_Y$, SU(2)$_L$, and SU(3)$_c$, respectively.
As noted above, we denote the primary particle injected from a heavy particle decay as $\si$.
Here, we summarize group factors for later convenience.
We use F and A to denote fundamental and adjoint representations.
In general, we have the following equalities, $t_{\rm R} = d_{\rm R} C_{\rm R}/ d_{\rm A}$, and $\ta = \ca$.
For SU($N$),
$C_{\rm F}^{(N)} = (N^2-1)/(2N)$, $C_{\rm A}^{(N)}= N$,
$d_{\rm F}^{(N)} = N$, $d_{\rm A}^{(N)} = N^2-1$, $t_{\rm F}^{(N)} = 1/2$, and $t_{\rm A}^{(N)} = N$.
For U(1)$_Y$,
$d_{\rm F}^{(1)} = d_{\rm A}^{(1)} = 1$, and
$C_{{\rm R}_s}^{(1)} = t_{{\rm R}_s}^{(1)} = q_{Y,s}^2$ for a particle $s$,
and $C_{\rm A}^{(1)} = 0$ for gauge bosons.
Explicitly,
\begin{align}
C_{{\rm F}}^{(3)} = \tfrac43 \,,
\qquad
C_{{\rm A}}^{(3)} = 3 ,
\qquad
\df^{(3)} = 3 ,
\qquad
d_{{\rm A}}^{(3)} = 8 ,
\qquad
\tf^{(3)} = \tfrac12 ,
\qquad
t_{{\rm A}}^{(3)} = 3 ,
\\
C_{{\rm F}}^{(2)} = \tfrac34 \,,
\qquad
C_{{\rm A}}^{(2)} = 2 ,
\qquad
d_{\rm F}^{(2)} = 2 ,
\qquad
d_{{\rm A}}^{(2)} = 3 ,
\qquad
\tf^{(2)} = \tfrac12 ,
\qquad
\ta^{(2)} = 2 ,
\\
C_{{\rm F}_s}^{(1)} = q_{Y,s}^2 \,,
\quad
C_{{\rm A}}^{(1)} = 0 ,
\qquad
\df^{(1)} = 1 ,
\qquad
d_{{\rm A}}^{(1)} = 1 ,
\qquad
t_{{\rm F}_s}^{(1)} = q_{Y,s}^2,
\quad
t_{{\rm A}}^{(1)} = 0,
\end{align}
for charged particles.
If a particle $s$ is not charged under $G_a$,
we define $C_{{\rm R}_s}^{(a)} = 0$, $d_{{\rm R}_s}^{(a)} = 1$, and $t_{{\rm R}_s}^{(a)} = 0$.
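The identities and explicit values quoted above can be checked directly; a minimal sketch for SU(2) and SU(3):

```python
from fractions import Fraction as F

def su_factors(N):
    """Group factors (C_F, C_A, d_F, d_A, t_F, t_A) for SU(N)."""
    return (F(N * N - 1, 2 * N), F(N), F(N), F(N * N - 1), F(1, 2), F(N))

for N in (2, 3):
    C_F, C_A, d_F, d_A, t_F, t_A = su_factors(N)
    assert t_F == d_F * C_F / d_A  # t_R = d_R C_R / d_A for the fundamental
    assert t_A == C_A              # and t_A = C_A for the adjoint

# reproduces the explicit values listed above
assert su_factors(3) == (F(4, 3), 3, 3, 8, F(1, 2), 3)
assert su_factors(2) == (F(3, 4), 2, 2, 3, F(1, 2), 2)
```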
We define $\alpha_t \equiv y_t^2 / (4\pi)$,
where $y_t$ represents the top Yukawa coupling.
We neglect Yukawa interactions other than the top Yukawa interaction due to its smallness.
In our numerical calculation, we use
\begin{align}
&\alpha_1 (m_Z) = (1-\sin^2\theta_W)^{-1} \alpha(m_Z) \simeq 0.0102
\qquad
\alpha_2 (m_Z) = \sin^{-2} \theta_W \alpha(m_Z) \simeq 0.0338
\\
&\alpha_3 (m_Z) \simeq 0.118
\qquad \qquad \qquad \qquad \qquad \qquad
\alpha_t (m_t) \simeq 0.0786
\end{align}
where $\theta_W$ ($\sin^2 \theta_W \simeq 0.231$) is the Weinberg angle, $\alpha (m_Z)$ ($\simeq 1/128$) is the fine-structure constant, and
$m_Z$ ($\simeq 91.2 \GeV$) and $m_t$ ($\simeq 173 \GeV$) are the $Z$-boson and top masses, respectively~\cite{Workman:2022ynf}.
The gauge couplings run according to
\begin{align}
\alpha_a^{-1} (\mu) - \alpha_a^{-1} (\mu_0)
&= - \frac{b_a}{4 \pi} \ln \frac{\mu^2}{\mu_0^2},
\\
b_a &= \frac{4}{3} \sum_s \frac{t_{{\rm R}_s}^{(a)} \nu_s}{4 d_{{\rm R}_s}^{(a)}} B_s - \frac{11}{3} C_{\rm A}^{(a)}
\\
&=\left\{
\begin{array}{lll}
-7 \quad \text{for \ SU(3)}
\\
-\frac{19}{6} \quad \text{for \ SU(2)}
\\
\frac{41}{6} \quad \text{for \ U(1)}_Y
\end{array}
\right. ,
\end{align}
where $B_s = 1$ for a fermion and $1/2$ for a boson.
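As a rough numerical illustration (one-loop running only, ignoring mass thresholds and higher-loop effects), the coefficients $b_a$ can be rebuilt from the charge table, and running the couplings up to a typical primary energy shows that they indeed become comparable:

```python
import math
from fractions import Fraction as F

# Charge data per species: (nu_s, t_R/d_R under SU(3), SU(2), U(1)_Y, fermion?, #families).
# For fundamentals, t_F/d_F = 1/6 [SU(3)] and 1/4 [SU(2)]; for U(1)_Y it is q_Y^2.
species = [
    ("e",   2,  F(0),    F(0),    F(1),     True,  3),
    ("L",   4,  F(0),    F(1, 4), F(1, 4),  True,  3),
    ("u",   6,  F(1, 6), F(0),    F(4, 9),  True,  3),
    ("d",   6,  F(1, 6), F(0),    F(1, 9),  True,  3),
    ("Q",   12, F(1, 6), F(1, 4), F(1, 36), True,  3),
    ("phi", 4,  F(0),    F(1, 4), F(1, 4),  False, 1),
]
C_A = [F(3), F(2), F(0)]  # adjoint Casimir of SU(3), SU(2), U(1)_Y

def beta_coeff(a):
    # b_a = (4/3) sum_s [t nu / (4 d)] B_s - (11/3) C_A, with B_s = 1 (fermion), 1/2 (boson)
    s = sum(fams * nu * tds[a] * (F(1) if ferm else F(1, 2))
            for (_, nu, *tds, ferm, fams) in species)
    return F(4, 3) * s / 4 - F(11, 3) * C_A[a]

b = [beta_coeff(a) for a in range(3)]
assert b == [F(-7), F(-19, 6), F(41, 6)]  # SU(3), SU(2), U(1)_Y

def alpha_run(alpha0, b_a, mu, mu0):
    # alpha^-1(mu) - alpha^-1(mu0) = -(b_a / 4 pi) ln(mu^2 / mu0^2)
    inv = 1.0 / alpha0 - (float(b_a) / (4 * math.pi)) * math.log((mu / mu0) ** 2)
    return 1.0 / inv

m_Z, mu = 91.2, 1e15  # GeV; mu is a typical primary energy considered in the text
a3 = alpha_run(0.118,  b[0], mu, m_Z)  # ~ 0.024
a2 = alpha_run(0.0338, b[1], mu, m_Z)  # ~ 0.022
a1 = alpha_run(0.0102, b[2], mu, m_Z)  # ~ 0.015
```

The three couplings differ by less than a factor of two at $\mu \sim 10^{15} \, \GeV$, which supports the equal-footing resummation over all gauge groups used below.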
We also use
\begin{align}
&\alpha_t^{-1} (\mu) - \alpha_t^{-1} (\mu_0)
= - \frac{b_t}{4 \pi} \ln \frac{\mu^2}{\mu_0^2},
\\
&b_t
= \alpha_t^{-1}
\qty( \frac{9}{2} \alpha_t
-8 \alpha_3
- \frac{9}{4} \alpha_2
- \frac{17}{12} \alpha_1
)
\end{align}
for the running of the top Yukawa coupling.
We are mainly interested in thermalization of SM particles with an energy scale much larger than the electroweak scale.
Because of the renormalization group running, the gauge coupling constants are of the same order with each other at such a high energy scale. We thus perform a resummation for all gauge interactions on an equal footing to calculate $\mu_\perp^2$ [see Eq.~\eqref{eq:mu_perp}]~\cite{Anisimov:2010gy,Besak:2012qm}.
We define the distribution function $f_s$ for each species.
It is normalized such that, in thermal equilibrium, $f_s(p) = 1/(e^{p/T} \mp 1)$ for bosons and fermions, respectively, without the factor of $\nu_s$ degrees of freedom.
We are interested in the under-occupied regime, where $f_s(p) \ll 1$ for hard particles.
We also consider the case with $p_0 \gg T$, which allows us to evaluate the splitting functions
in the next-to-leading-logarithm approximation.
In this case, the Boltzmann equation is reduced to
\begin{align}
\frac{\partial }{\partial t} f_s (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_s} \sum_{s',s''}
\int_0^p \dd k \,
\gamma_{s \leftrightarrow s's''} \bigl(p; k, p-k \bigr) \,
f_s(p)
+
\frac{(2\pi)^3}{p^2 \nu_s} \sum_{s',s''}
\int_0^\infty \dd k \,
\gamma_{s' \leftrightarrow s s''} \bigl(p+k; p, k \bigr) \,
f_{s'}(p+k)
\nn
&\qquad + ({\rm source \ term}),
\label {eq:boltzmann}
\end{align}
where the final line represents the source term.
The summation over $s'$ and $s''$ runs over all particle species, counted in units of Weyl fermions and complex scalar fields and multiplied by the number of flavors.
The explicit forms of the splitting rate $\gamma_{s \leftrightarrow s' s''}$ and Boltzmann equations are written in the next section and Appendix~\ref{sec:appendixA}.
\subsection{Splitting rate for the SM}
\label{sec:SM}
The splitting functions $\gamma_{s \leftrightarrow s' s''}\bigl(P; xP, (1-x)P\bigr)$ include the summation over the spin degrees of freedom of a chiral fermion and a complex scalar field with a single flavor (or one-half of a Dirac field) with respect to the relevant gauge group.
The next-to-leading-logarithm result can be summarized in the following
form~\cite{Arnold:2001ba,Arnold:2002ja,Arnold:2002zm,Anisimov:2010gy,Bodeker:2019ajh}:
\begin {subequations}
\label{eq:gammas}
\begin{align}
\gamma_{g_a\leftrightarrow g_ag_a}(P; xP, (1-x)P)
&= \frac{1}{2} \frac{d_{{\rm A}}^{(a)} C_{{\rm A}}^{(a)} \alpha_a}{(2\pi)^4 \sqrt2}
\,
\frac{1^4+x^4+(1-x)^4}{1^2 \cdot x^2(1-x)^2}\, \mu_{\perp,a}^2(1,x,1{-}x;\, g_a,\,g_a,\, g_a) \, ,
\label {eq:gamma_ggg}
\\
\gamma_{s \leftrightarrow g_a s}(P; xP, (1-x)P)
&= \frac12 \frac{d_{{\rm F}}^{(a)} C_{{\rm F}_s}^{(a)} \alpha_a}{(2\pi)^4 \sqrt2}
\,
\frac{1^2+(1-x)^2}{1 \cdot x^2(1-x)}\, \mu_{\perp}^2(1,x,1{-}x;\, s,g_a,s)
\quad {\rm for} \ s = {\rm (fermion)}\,,
\label {eq:gamma_qgq}
\\
\gamma_{g_a \leftrightarrow s\bar s}(P; xP, (1-x)P)
&= \frac12 \frac{d_{{\rm F}}^{(a)} C_{{\rm F}_s}^{(a)} \alpha_a}{(2\pi)^4 \sqrt2}
\,
\frac{x^2+(1-x)^2}{1^2 \cdot x(1-x)} \, \mu_{\perp}^2(1,x,1{-}x;\, g_a,s,s)
\quad {\rm for} \ s = {\rm (fermion)}\,,
\label {eq:gamma_gqq}
\\
\gamma_{\phi \leftrightarrow g_a \phi}(P; xP, (1-x)P)
&= \frac12 \frac{d_{{\rm F}}^{(a)} C_{{\rm F}_\phi}^{(a)} \alpha_a}{(2\pi)^4 \sqrt2}
\,
\frac{2}{x^2}\, \mu_{\perp}^2(1,x,1{-}x;\, \phi,g_a,\phi) \,,
\label {eq:gamma_pgp}
\\
\gamma_{g_a \leftrightarrow \phi \phi^*}(P; xP, (1-x)P)
&= \frac12 \frac{d_{{\rm F}}^{(a)} C_{{\rm F}_\phi}^{(a)} \alpha_a}{(2\pi)^4 \sqrt2}
\,
\frac{2}{1^2} \, \mu_{\perp}^2(1,x,1{-}x;\, g_a,\phi,\phi) \,,
\label {eq:gamma_gpp}
\\
\gamma_{u_3 \leftrightarrow \phi Q_3}(P; xP, (1-x)P)
&= \frac12 \frac{\alpha_{\rm t}}{(2\pi)^4 \sqrt2}
\,
\frac{1}{1 \cdot (1-x)}\, \mu_{\perp}^2(1,x,1{-}x;\, {u_3},\phi,{Q_3}) \,,
\label {eq:gamma_qpq}
\\
\gamma_{\phi \leftrightarrow u_3 \bar{Q}_3}(P; xP, (1-x)P)
&= \frac12 \frac{\alpha_{\rm t}}{(2\pi)^4 \sqrt2}
\,
\frac{1}{x(1-x)} \, \mu_{\perp}^2(1,x,1{-}x;\, \phi,{u_3},{Q_3}) \,,
\label{eq:gamma_pqq}
\end{align}
\end {subequations}
where $g_a$ collectively represent the gauge bosons of gauge group $G_a$.%
\footnote{
We add a factor of $1/2$ to \eqref{eq:gamma_ggg} as a symmetry factor to avoid double counting,
since we integrate over $x$ from $0$ to $1$ rather than from $0$ to $1/2$ in the Boltzmann equation.
}
Note that we add a factor of $1/2$ to Eqs.~(\ref{eq:gamma_qgq}), (\ref{eq:gamma_gqq}), (\ref{eq:gamma_qpq}), and (\ref{eq:gamma_pqq}) because we use chiral fermions rather than a Dirac fermion.
Contributions from the conjugate processes are included in the splitting functions.
For example,
the rates of processes such as $\bar{q} \leftrightarrow g_a \bar{q}$ are included in
$\gamma_{q \leftrightarrow g_a q}$, where $\bar{q}$ represents the antiparticle of $q$.
If we want to treat
$u_3 \leftrightarrow \phi Q_3$
and
$Q_3 \leftrightarrow \phi^* u_3$
separately,
we should multiply $\gamma_{u_3 \leftrightarrow \phi Q_3}$ ($= \gamma_{Q_3 \leftrightarrow \phi^* u_3}$) by a factor of $1/2$ for each splitting rate [see Eqs.~(\ref{eq:f_u3}), (\ref{eq:f_Q3}), and (\ref{eq:f_phi})].
The functions of $x$ in the above equations come from the DGLAP splitting functions.
The function $\mu_\perp^2$ is given by \eqref{eq:mu_perp}
with $G=$SU(3)$_c\times$SU(2)$_L\times$U(1)$_Y$,
where
\begin{align}
{\cal N}_a
&=\left\{
\begin{array}{lll}
15 \frac{\zeta(3)}{\pi^2} T^3 \quad \text{for \ SU(3)}
\\
14 \frac{\zeta(3)}{\pi^2} T^3 \quad \text{for \ SU(2)}
\\
16 \frac{\zeta(3)}{\pi^2} T^3 \quad \text{for \ U(1)}_Y
\end{array}
\right.
\\
m_{D,a}^2
&=\left\{
\begin{array}{lll}
8 \pi \alpha_3 T^2 \quad \text{for \ SU(3)}
\\
\frac{22 \pi }{3} \alpha_2 T^2 \quad \text{for \ SU(2)}
\\
\frac{22 \pi}{3} \alpha_1 T^2 \quad \text{for \ U(1)}_Y
\end{array}
\right.
\end{align}
from Eqs.~(\ref{eq:N_a}) and (\ref{eq:mD}).
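These coefficients can be cross-checked against the charge table using the equilibrium integrals $\int \frac{\dd^3 p}{(2\pi)^3}\, f = \frac{\zeta(3) T^3}{\pi^2}$ (boson) or $\frac{3}{4} \frac{\zeta(3) T^3}{\pi^2}$ (fermion) and $\int \frac{\dd^3 p}{(2\pi)^3}\, \frac{f}{p} = \frac{T^2}{12}$ (boson) or $\frac{T^2}{24}$ (fermion); a short sketch for the Debye masses and the non-Abelian $\mathcal{N}_a$:

```python
from fractions import Fraction as F

# (nu_s, t_R/d_R under SU(3), SU(2), U(1)_Y, fermion?, #families) per species
content = [
    ("e",   2,  F(0),    F(0),    F(1),     True,  3),
    ("L",   4,  F(0),    F(1, 4), F(1, 4),  True,  3),
    ("u",   6,  F(1, 6), F(0),    F(4, 9),  True,  3),
    ("d",   6,  F(1, 6), F(0),    F(1, 9),  True,  3),
    ("Q",   12, F(1, 6), F(1, 4), F(1, 36), True,  3),
    ("phi", 4,  F(0),    F(1, 4), F(1, 4),  False, 1),
    ("g",   16, F(3, 8), F(0),    F(0),     False, 1),  # t_A/d_A = 3/8
    ("W",   6,  F(0),    F(2, 3), F(0),     False, 1),  # t_A/d_A = 2/3
]

def debye(a):
    # m_{D,a}^2 = 8 pi alpha_a sum_s nu_s (t/d) * [T^2/12 boson, T^2/24 fermion]
    s = sum(fams * nu * tds[a] * (F(1, 24) if ferm else F(1, 12))
            for (_, nu, *tds, ferm, fams) in content)
    return 8 * s  # m_D^2 in units of pi alpha_a T^2

assert debye(0) == 8          # SU(3):   8 pi alpha_3 T^2
assert debye(1) == F(22, 3)   # SU(2):  (22/3) pi alpha_2 T^2
assert debye(2) == F(22, 3)   # U(1)_Y: (22/3) pi alpha_1 T^2

def N_coeff(a):
    # N_a = sum_s nu_s (t/d) * [1 boson, 3/4 fermion] in units of zeta(3) T^3 / pi^2
    return sum(fams * nu * tds[a] * (F(3, 4) if ferm else F(1))
               for (_, nu, *tds, ferm, fams) in content)

assert N_coeff(0) == 15  # SU(3)
assert N_coeff(1) == 14  # SU(2)
```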
The Boltzmann equations for the SM are as follows; the explicit form for each species is given in Appendix~\ref{sec:appendixA}.
For a gauge boson $g_a$,
\begin{align}
\text{(splitting terms)}
&= - \frac{(2\pi)^3}{p^2 \nu_{g_a}}
\int_0^p \dd k \,
\qty[ \gamma_{g_a \leftrightarrow g_a g_a} \bigl(p; k, p-k \bigr) + \sum_{s'} \frac{d_{{\rm R}_{s'}}^{(2)}d_{{\rm R}_{s'}}^{(3)}}{d_{{\rm R}_{s'}}^{(a)}} \gamma_{g_a \leftrightarrow s' \bar{s}'} \bigl(p; k, p-k \bigr)
]
f_{g_a}(p)
\nn
&+
\frac{(2\pi)^3}{p^2 \nu_{g_a}}
\int_0^\infty \dd k \,
\qty[
2 \gamma_{g_a \leftrightarrow g_ag_a} \bigl(p+k; p, k \bigr) f_{g_a}(p+k)
+ \sum_{s'} \frac{d_{{\rm R}_{s'}}^{(2)}d_{{\rm R}_{s'}}^{(3)}}{d_{{\rm R}_{s'}}^{(a)}} \gamma_{s' \leftrightarrow g_a s'} \bigl(p+k; p, k \bigr) f_{s'}(p+k)
]
,
\end{align}
where $s'$ represents the fermions and Higgs.
The summation over $s'$ should include the contributions from all flavors.
Note that the self-splitting process is absent for the Abelian gauge boson.
For a fermion or scalar $s$,
\begin{align}
\text{(splitting terms)}
&= - \frac{(2\pi)^3}{p^2 \nu_{s}}
\int_0^p \dd k \,
\sum_a \frac{ d_{{\rm R}_s}^{(2)}d_{{\rm R}_s}^{(3)}}{d_{{\rm R}_s}^{(a)}} \gamma_{s \leftrightarrow g_a s} \bigl(p; k, p-k \bigr)
f_{s} (p)
\nn
&+
\frac{(2\pi)^3}{p^2 \nu_{s}}
\int_0^\infty \dd k \,
\qty[ \sum_a \frac{ d_{{\rm R}_s}^{(2)}d_{{\rm R}_s}^{(3)}}{d_{{\rm R}_s}^{(a)}}
\qty(
2 \gamma_{g_a \leftrightarrow s \bar{s}} \bigl(p+k; p, k \bigr) f_{g_a}(p+k)
+ \gamma_{s \leftrightarrow s g_a} \bigl(p+k; p, k \bigr) f_{s}(p+k) )
]
,
\label{eq:boltzmannparticle}
\end{align}
where we omit Yukawa interactions.
The contribution from the top Yukawa interaction is given by
\begin{align}
\text{(splitting terms)} \ni
&- \frac{(2\pi)^3}{p^2 \nu_{s}}
\int_0^p \dd k \,
d_{{\rm R}_s}^{(3)} \gamma_{s \leftrightarrow \phi s'} \bigl(p; k, p-k \bigr)
f_{s} (p)
\nn
&+
\frac{(2\pi)^3}{p^2 \nu_{s}}
\int_0^\infty \dd k \,
d_{{\rm R}_s}^{(3)}
\qty[
2 \gamma_{\phi \leftrightarrow s s'} \bigl(p+k; p, k \bigr) f_\phi (p+k)
+ \gamma_{s' \leftrightarrow s \phi} \bigl(p+k; p, k \bigr) f_{s'}(p+k)
]
,
\end{align}
for the top quark $s = u_3$ and $Q_3$ with $s' = Q_3$ and $u_3$, respectively,
and
\begin{align}
\text{(splitting terms)} \ni
&- \frac{(2\pi)^3}{p^2 \nu_{\phi}}
\int_0^p \dd k \,
2 d_{{\rm R}_s}^{(3)} \gamma_{\phi \leftrightarrow s s'} \bigl(p; k, p-k \bigr)
f_{\phi} (p)
\nn
&+
\frac{(2\pi)^3}{p^2 \nu_{\phi}}
\int_0^\infty \dd k \,
d_{{\rm R}_s}^{(3)}
\qty[
\gamma_{s \leftrightarrow \phi s'} \bigl(p+k; p, k \bigr) f_s (p+k)
+ \gamma_{s' \leftrightarrow \phi s} \bigl(p+k; p, k \bigr) f_{s'}(p+k)
] ,
\end{align}
for the Higgs $\phi$.
We note that the Boltzmann equations are linear in terms of the distributions so that the normalization of distributions can be arbitrary. This in turn implies that we can always rescale $\tilde{\Gamma}$ without loss of generality.
We thus take $\tilde{\Gamma} = 1$ in our numerical calculations for any $\si$ in Sec.~\ref{sec:numerical}.
The linearity of the equations also implies that we can solve for each choice of $\si$ independently.
Moreover, we can take a linear combination of these solutions to obtain the solution for an arbitrary primary source term.
Our results for $\si = \qty( e_f, L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3, \phi, g, W, B )$ therefore cover all possible initial conditions for the source term.
When the injected particles consist solely of the species $\si$ with a particular flavor,
the branching ratio $\Br$ is given by
\begin{align}
\Br = \frac{1}{\nu_s} \delta_{s\, \si},
\label{eq:Br}
\end{align}
where $\delta_{s\, \si}$ is the Kronecker delta.
If we equally treat all flavors (except for the top quarks)
and assume that high-energy particles are injected into all $f$ or $f'$,
the branching ratio should be multiplied by $1/3$ or $1/2$, respectively.
\section{Analytic calculations}
\label{sec:analytic}
Before we solve the Boltzmann equations numerically, in this section, we provide analytic results at $p \approx p_0$ and $p \ll p_0$.
The asymptotic behavior at $p \approx p_0$ provides an appropriate boundary condition for the distribution function at $p = p_0$ from the delta-function source term.
This is useful for the numerical calculations.
The asymptotic behavior at $p \ll p_0$ is phenomenologically important for discussing non-thermal DM production during the thermalization process of SM particles.
We will see that these analytic results are consistent with our numerical results in Sec.~\ref{sec:numerical}.
\subsection{Boundary condition and asymptotic behavior at $p \approx p_0$}
Here, we explain the asymptotic behaviors of distributions at $p \approx p_0$, which are useful for imposing boundary conditions on distributions in numerical calculations.
The asymptotic behavior is different for the primary particle and other particles.
The splitting process at $p\approx p_0$ is schematically represented in Fig.~\ref{fig:flow1},
where the leftmost part represents the primary particle $\si$.
The function of $y \equiv (p_0 - p)/p_0$ at the top of the figure represents the behavior of the distribution of the corresponding species at $p \approx p_0$.
The arrows represent the cascades of particles from the primary particles. In this section, we present summary results for the distributions of the primary particles only. Their derivation, as well as the secondary and subsequent distributions, is given in Appendix~\ref{sec:appendixB} and schematically summarized in Fig.~\ref{fig:flow1}.
First, let us consider a case in which gluons are injected at $p = p_0$ from a delta-function source term.
The stationary equation for the gluon distribution can be expressed as
\begin{align}
- 2 \int_0^{p/2} \dd k \,
\gamma_{g \leftrightarrow g g} \bigl(p; k, p-k \bigr) \,
f_g(p)
+
\int_0^\infty \dd k \,
2 \gamma_{g \leftrightarrow g g} \bigl(p+k; p, k \bigr) \,
f_g(p+k)
+ \frac{p^2}{(2\pi)^3} p_0^{1/2} T^{3/2} \tilde{\Gamma} \delta (p - p_0) = 0,
\end{align}
where we include the source term ($= p_0^{1/2} T^{3/2} \Br \tilde{\Gamma} \delta (p - p_0)$) and use $\Br = 1/\nu_g$ [see Eq.~\eqref{eq:Br}].
Here, we neglect terms associated with quarks because the secondary particles are subdominant at $p \approx p_0$ and there is no IR divergence in the pair production of quarks.
In Appendix~\ref{sec:appendixB},
we show that the following distribution function satisfies the above equation:
\begin{equation}
f_g(p) \approx \frac{\tilde{\Gamma}}{ (2\pi)^4 \tilde{\gamma}_{g \leftrightarrow g g}} \qty( \frac{p_0 - p}{p_0} )^{-1/2}
\end{equation}
for $p \approx p_0$, where $\tilde{\gamma}_{g \leftrightarrow g g}$ is defined below \eqref{eq:gluonasym}.
This represents the asymptotic behavior of the gluon when the primary particles are gluons, $\si = g$.
Similar results hold for $\si = L_f, u_f, d_f, Q_f, \phi$, and $W$ once we replace $\tilde{\gamma}_{g \leftrightarrow g g}$ as appropriate.
We summarize the results below:
\begin{equation}
f_{s_1} (p) \approx
\frac{\tilde{\Gamma}}{(2\pi)^4 \sum_{a'} \tilde{\gamma}_{s_{1} \to g_{a'} s_{1}}
} \qty( \frac{p_0 - p}{p_0} )^{-1/2}
\end{equation}
for $s_1 = L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3, \phi, g$, and $W$,
where $\tilde{\gamma}$ are defined in Eqs.~(\ref{eq:tildegtogg}), (\ref{eq:tildeqtogq}), and (\ref{eq:tildephitogphi}).
As a reference, we obtain
\begin{align}
(2\pi)^4 \sum_{a'} \tilde{\gamma}_{s_{1} \to g_{a'} s_{1}}
&\simeq
(
0.016, \
0.083, \
0.083, \
0.083, \
0.21, \
0.21, \
0.016, \
1.8, \
0.13
)
\nonumber\\
&{\rm for} \ s_1 = (L_f, \ u_{f'}, \ u_3, \ d_f, \ Q_{f'}, \ Q_3, \ \phi, \ g, \ W),
\end{align}
respectively.
Here, we assume $p = p_0 = 10^{12} T = 10^{15} \GeV$, though the dependence on $p$ is only logarithmic through the running of coupling constants.
These $\tilde{\gamma}$ represent the splitting rates of the primary particles at $p \approx p_0$.
The cases with $\si = e_f$ and $B$ have qualitatively different distributions because these particles experience only Abelian gauge interactions, which lead to a different scaling of the splitting rate [see Eq.~\eqref{eq:split_func_rough}].
If the primary particle is an Abelian gauge boson $B$, it splits only into other particles and does not undergo soft-dominated splitting processes.
In this case, we obtain
the delta-function distribution for $f_B(p)$ as
\begin{equation}
f_B(p) = C_B' \delta (p - p_0) + C_B \qty( \frac{p_0 - p}{p_0} )^{0},
\end{equation}
where we include
a subdominant component, which is determined in Appendix~\ref{sec:appendixB}.
Here, $C_B'$ is given by
\begin{equation}
C_B'
= \frac{p_0}{(\pi/4) (2\pi)^3} \frac{\tilde{\Gamma}}{3 \sum_{s_f} \tilde{\gamma}_{B \to s_f s_f} + \tilde{\gamma}_{B \to \phi \phi} },
\end{equation}
where $s_f$ represents fermions.
As a reference, we obtain
\begin{align}
(\pi/4)(2\pi)^3 \qty[ \sum_{s_f} \tilde{\gamma}_{B \to s_f s_f} + \tilde{\gamma}_{B \to \phi \phi} ]
&\simeq
0.011,
\end{align}
where we assume $p = p_0 = 10^{12} T = 10^{15} \GeV$, though the dependence on $p$ is only logarithmic.
If a right-handed lepton is produced from heavy particle decay, that is, if $\si = e_f$,
the soft-photon emission $e_f \to B e_f$ is more strongly suppressed by the LPM effect than in the non-Abelian case and again cannot soften the delta-function distribution.
As a result, the distribution remains a delta function plus a subdominant component:
\begin{equation}
f_{e_f} (p) = C_{e_f}' \delta (p - p_0) + C_{e_f} \qty( \frac{p_0 - p}{p_0} )^{-1/2}.
\end{equation}
Here,
$C_{e_f}'$ is given by
\begin{align}
C_{e_f}'
\approx \frac{p_0 }{(2\pi)^3 }
\frac{8}{11 \pi} \frac{ \tilde{\Gamma}}{\tilde{\gamma}_{e_f \to B e_f}}.
\end{align}
As a reference, we obtain
\begin{align}
\frac{11 \pi}{8} (2\pi)^3 \tilde{\gamma}_{e_f \to B e_f}
&\simeq
3.4 \times 10^{-4},
\end{align}
where we assume $p = p_0 = 10^{12} T = 10^{15} \GeV$, though the dependence on $p$ is only logarithmic.
Other particles are produced from the splitting of primary particles.
Secondary and subsequent particles have suppressed distributions at $p \approx p_0$ with some power of $(p_0 - p)/p_0$. The power of this behavior can be determined analytically, as discussed in Appendix~\ref{sec:appendixB}. The result is summarized in Fig.~\ref{fig:flow1}.
The leftmost part represents the primary particle $\si$.
The arrows represent the cascades from the primary particle, namely, they demonstrate how secondary and subsequent particles are produced from the primary particle.
The behavior of the distributions of corresponding particles is represented by the uppermost line, where $y \equiv (p_0 - p)/p_0$.
If the primary particle is $e_f$ or $B$,
they have the delta-function distribution $\delta (y)$, as discussed above.
The other primary particles have a distribution proportional to $y^{-1/2}$.
For example,
if the primary particle is a gluon ($\si = g$),
its distribution is $\propto y^{-1/2}$.
The secondary particles produced by the splitting from the gluon are quarks,
which have the distribution $\propto y^{1/2}$.
The U(1)$_Y$ gauge boson $B$ as well as the Higgs $\phi$ and W bosons are produced from the splitting of quarks, where the distribution is $\propto y^1$ for $B$ and $y^{3/2}$ for $\phi$ and $W$. Right- and left-handed leptons are produced from the splitting of $B$, and the distribution is $\propto y^{3/2}$ and $y^2$, respectively.
As we will explain shortly,
we confirm the behaviors described in Fig.~\ref{fig:flow1} using numerical calculations.
\subsection{Asymptotic behavior at $p \ll p_0$}
\label{sec:analyticsmall}
It is known that the splitting process results in
$f \propto p^{-7/2}$ for $p \ll p_0$ (see, e.g., Refs.~\cite{Kurkela:2011ti,Kurkela:2014tea}).
As we will see in Sec.~\ref{sec:numerical}, this behavior is confirmed by our numerical results, even for a case with entirely SM particles.
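The origin of this power law can be illustrated with a toy cascade. Under the illustrative assumptions that the splitting rate carries only the LPM scaling, $\Gamma_{\rm split} \propto p^{-1/2}$ [cf.\ Eq.~\eqref{eq:split_func_rough}], and that splittings are democratic, $p \to p/2 + p/2$, stationarity of the number flow through each octave fixes the occupancy per octave to $N(p) \propto p^{-1/2}$ and hence $f \sim (\dd n/\dd p)/p^2 \propto p^{-7/2}$. A minimal numerical sketch of this argument (a toy model, not the full Boltzmann kernel):

```python
import numpy as np

# Toy democratic cascade p -> p/2 + p/2 with an LPM-suppressed splitting
# rate Gamma(p) ~ p**(-1/2) and steady injection at p0 (illustrative
# assumptions, not the full kernel of the text).
def steady_state_spectrum(p0=1.0, n_octaves=30):
    p = p0 * 0.5 ** np.arange(n_octaves)   # octave momenta p_k = p0 / 2^k
    gamma = p ** -0.5                      # splitting rate per particle
    N = np.empty(n_octaves)                # particles per octave
    N[0] = 1.0 / gamma[0]                  # unit injection rate at p0
    for k in range(n_octaves - 1):
        # steady-state number balance: Gamma_{k+1} N_{k+1} = 2 Gamma_k N_k
        N[k + 1] = 2.0 * gamma[k] * N[k] / gamma[k + 1]
    # phase-space density: f ~ (dn/dp) / p^2 with dn/dp ~ N(p)/p
    f = N / p ** 3
    slope = np.polyfit(np.log(p), np.log(f), 1)[0]
    return slope

print(steady_state_spectrum())   # ~ -3.5
```

The fitted log-log slope reproduces the $-7/2$ exponent of the scaling regime.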
In this section, we first derive the ratio of the distributions for different species at $p \ll p_0$.
For this purpose, we define $R_s(p)$ as the fraction of species $s$ in the total number density at momentum $p$:
\begin{equation}
R_s(p) \equiv
\frac{\nu_s f_s(p)}{\sum_{s'} \nu_{s'} f_{s'}(p)},
\label{eq:Rs}
\end{equation}
where the summation in the denominator runs over all families and species.
We expect that they reach the constant values $R_s^{\rm (asym)}$ at $p \ll p_0$.
We can derive the constraint equations that determine $R_s^{\rm (asym)}$ by substituting $f_s (p) = \tilde{f}_s p^{-7/2}$ into the Boltzmann equations, where $\tilde{f}_s$ are constants, and performing integrations.
The constraint equations are given by
\begin{align}
&- 0.0924 \tilde{f}_B + 0.0693 \tilde{f}_{e_f} = 0,
\label{eq:constrainte}
\\
&- 0.0967 \tilde{f}_B +
2.72 \tilde{f}_{L_f} - 0.905 \tilde{f}_W = 0,
\label{eq:constraintL} \\
&- 0.447 \tilde{f}_B - 13.0 \tilde{f}_g +
53.7 \tilde{f}_{u_{f'}} = 0,
\\
&- 0.447 \tilde{f}_B - 13.0 \tilde{f}_g - 19.6 \tilde{f}_\phi -
1.34 \tilde{f}_{Q_3} + 61.7 \tilde{f}_{u_3} = 0,
\\
&- 0.112 \tilde{f}_B + 53.5 \tilde{f}_{d_f} -
13.0 \tilde{f}_g = 0,
\\
&-0.0299 \tilde{f}_B - 15.1 \tilde{f}_g + 57.8 \tilde{f}_{Q_{f'}} -
2.94 \tilde{f}_W = 0,
\\
& -0.0299 \tilde{f}_B - 15.1 \tilde{f}_g - 9.45 \tilde{f}_\phi +
61.4 \tilde{f}_{Q_3} - 0.762 \tilde{f}_{u_3} - 2.94 \tilde{f}_W = 0,
\\
&-
0.0322 \tilde{f}_B + 59.8 \tilde{f}_\phi - 8.90 \tilde{f}_{Q_3} - 9.71 \tilde{f}_{u_3} -
0.253 \tilde{f}_W = 0,
\\
&- 60.1 \tilde{f}_B + 63.2 \tilde{f}_g - 81.0 \tilde{f}_{Q_{f'}} -
40.5 \tilde{f}_{Q_3} - 20.0 \tilde{f}_{u_3} - 40.1 \tilde{f}_{u_{f'}} = 0,
\\
&- 5.29 \tilde{f}_{L_f} -
1.38 \tilde{f}_\phi - 15.0 \tilde{f}_{Q_{f'}} - 7.50 \tilde{f}_{Q_3} + 19.6 \tilde{f}_W = 0,
\\
&6.49 \tilde{f}_B - 0.753 \tilde{f}_{d_f} - 0.208 \tilde{f}_{e_f} - 0.435 \tilde{f}_{L_f} -
0.0645 \tilde{f}_\phi - 0.269 \tilde{f}_{Q_{f'}} - 0.134 \tilde{f}_{Q_3} -
1.01 \tilde{f}_{u_3} - 2.01 \tilde{f}_{u_{f'}} = 0
\label{eq:constraintgamma}
\end{align}
for $s = e_f, L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3, \phi, g, W$, and $B$, respectively,
where we multiply all terms by $10^4$.
These constraint equations cannot be solved exactly because
the Boltzmann equation includes running gauge coupling, which depends logarithmically on $p$.
However, we can still determine an approximate value of $\tilde{f}_s$.
For example, we can solve Eqs.~(\ref{eq:constraintL}-\ref{eq:constraintgamma})
without imposing \eqref{eq:constrainte}
and check how accurately \eqref{eq:constrainte} is satisfied.
We find that the error on \eqref{eq:constrainte} is of the order of $3 \times 10^{-6}$, which means that all constraint equations are satisfied with errors of $\mathcal{O}(0.01\%)$ at most.
We can calculate $R_s^{\rm (asym)}$ from
the constraint equation results
such as
\begin{equation}
R_s^{\rm (asym)} =
\frac{\nu_s \tilde{f}_s}{\sum_{s'} \nu_{s'} \tilde{f}_{s'}},
\label{eq:Rs2}
\end{equation}
where the factor of $p^{-7/2}$ is cancelled between the numerator and denominator.
The result is shown in Tab.~\ref{tab:1}.
We find that the gluon dominates in terms of the number and energy of SM particles in the scaling regime.
These asymptotic behaviors are verified using our numerical calculations, as shown in Sec.~\ref{sec:numerical},
where
we also determine the energy scale $p = p_{\rm asym}^{(s_{\rm inj})}$ below which $R_s(p) \simeq R_s^{\rm (asym)}$ from numerical calculations.
We also define the asymptotic value of the total distribution as
\begin{align}
f_{\rm tot}^{\rm (asym)} (p)
&\equiv
\left. \sum_s \nu_s f_s(p) \right\vert_{p < p_{\rm asym}^{(s_{\rm inj})}},
\label{eq:totaldist}
\\
&\equiv
\tilde{\Gamma} \tilde{f}_{\rm tot}^{\rm (asym)}
\qty( \frac{p}{p_0} )^{-7/2}.
\label{eq:tildetotal}
\end{align}
This is proportional to $(p/p_0)^{-7/2}$ and $\tilde{\Gamma}$, which we explicitly show in Eq.~(\ref{eq:tildetotal}).
We expect that $\tilde{f}_{\rm tot}^{\rm (asym)}$ is independent of the injected particle $\si$ for the following reason.
We first note that
the fraction of distributions, $R_s^{(\rm asym)}$,
is independent of $\si$, as shown above.
We then note that the source term continuously provides a constant energy per unit time. The injected energy is transferred to a smaller momentum $p$ via the splitting. Because of the conservation of energy per unit time, we expect that the energy transfer in the distribution from a large momentum to a small momentum depends only on the injected energy and does not depend on the injected particle species $\si$.
This implies that the asymptotic values of distributions are determined by how fast the thermalization process occurs at a small $p$.
Because the fraction of distributions is also independent of $\si$,
the amplitude of the total distribution function is then expected to be independent of the injected primary particles.
This is actually confirmed by our numerical calculations for all $\si$ as we will see in Sec.~\ref{sec:numerical}.
Here we provide a rough estimation~\cite{Harigaya:2014waa,Harigaya:2019tzu}.
Focusing on a distribution around a momentum $p$ ($\ll p_0$),
the time scale for the energy flow out of that momentum scale is given by $\left. \Gamma_{\rm LPM}^{-1}\right\vert_{k \sim p}$, which implies the energy flow out of that momentum scale is estimated as
\begin{align}
p^4 \left. \Gamma_{\rm LPM}\right\vert_{k \sim p} f_{\rm tot}^{(\rm asym)}(p)
\sim \alpha \alpha_d T^{3/2} p^{7/2} \tilde{\Gamma} \tilde{f}_{\rm tot}^{\rm (asym)} \qty( \frac{p}{p_0} )^{-7/2}
\end{align}
where we use Eq.~\eqref{eq:split_func_rough}.
This should be compared with the energy injection per unit time:
\begin{align}
\int 4 \pi p^3 \dd p \, 2 \Gamma_I \delta ( p - p_0) \frac{2\pi^2}{p^2} n_I =
4 \pi p_0^2 \tilde{\Gamma} p_0^{3/2} T^{3/2}.
\end{align}
We then obtain $\tilde{f}_{\rm tot}^{\rm (asym)} \sim 1/(\alpha \alpha_d)$, where we omit numerical factors for simplicity.
Because the asymptotic behavior is determined by the thermalization or splitting process of the dominant component, gluons,
we finally obtain
\begin{align}
\tilde{f}_{\rm tot}^{\rm (asym)} \sim \alpha_s^{-2}.
\label{eq:tildeftotest}
\end{align}
Combining Eqs.~\eqref{eq:Rs2} and \eqref{eq:tildetotal}, we obtain
\begin{equation}
\left. \nu_s f_s(p) \right\vert_{p < p_{\rm asym}^{(s_{\rm inj})}} = \tilde{\Gamma} R_s^{\rm (asym)}
\tilde{f}_{\rm tot}^{\rm (asym)} \qty( \frac{p}{p_0} )^{-7/2}.
\end{equation}
These results indicate that any particle species can be produced from any primary particle through the cascades.
Moreover, even the fractions of the produced particles are the same for any primary particle.
This is important if one considers DM production through the collision of these secondary particles.
This particularly implies that DM production during the thermalization of high-energy SM particles is inevitable if the DM is coupled to the SM sector.
All particles, including the DM,
must be produced through the cascades.
The DM production rate during the thermalization of SM particles can be estimated using the distribution of SM particles determined in this paper,
and the DM production cross section from the collision of SM particles.
This is demonstrated in Sec.~\ref{sec:application} by using a toy model.
\begin{table*}
\begin{center}
\begin{tabular}{c|ccccccccccc}
& $e_f$ & $L_f$ & $u_{f'}$ & $u_3$ & $d_f$ & $Q_{f'}$ & $Q_3$ & $\phi$ & $g$ & $W$ & $B$ \\[.2em]
\hline
$100 \, R_s^{\rm (asym)}$ & 1.16 & 1.25 & 3.61 & 3.62 & 3.59 & 8.26 & 8.24
& 0.816 & 39.4 & 5.33 & 0.867
\end{tabular}
\end{center}
\caption{Asymptotic value of the fraction of distributions at a small momentum $R_s^{(\rm asym)}$.
}
\label{tab:1}
\end{table*}
\section{Numerical results}
\label{sec:numerical}
In this section, we first explain how to numerically determine the stationary solution to the Boltzmann equation. We then show our numerical results, which are consistent with the analytic result discussed in Sec.~\ref{sec:analytic}.
\subsection{Numerical methods}
We determine the stationary solution to the Boltzmann equation (\textit{i.e.}, $\partial f_s / \partial t = 0$) with a given source term for $s = \si$.
We again note that the Boltzmann equations are linear in terms of the distributions.
This allows us to take $\tilde{\Gamma} = 1$ without loss of generality by rescaling the normalization of distributions.
Moreover, we can take $\si$ independently for all species and
our results for $\si = \qty( e_f, L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3, \phi, g, W, B )$ cover all possible initial conditions for the source term.
We treat all generations equally except for the top quark
and assume that high-energy particles are injected into all $f$ ($=1,2,3$) or $f'$ ($=1,2$) for each case of $\si$.
When we show our result for, \textit{e.g.}, $e_f$, we show the result for one of the generations of $e_f$.
The stationary Boltzmann equation has two terms other than the source term: the terms proportional to $f_s(p)$ and those for the integral with a weight of $f_s (p')$ with $p' >p$.
This means that
the distribution at $p$ can be determined from the distribution for $p' > p$.
Therefore, we can determine a smaller momentum iteratively, starting from an appropriate boundary condition at the maximal momentum $p = p_0$.
The source term determines the boundary condition of the distributions at $p = p_0$, which we analytically determine in Sec.~\ref{sec:analytic} and Appendix~\ref{sec:appendixB}.
We discretize the momentum
to approximate the integrals using Simpson's rule.
As we will see,
the particle distribution has a power-law dependence on $p$ for $p \ll p_0$ and on $p_0-p$ for $p \approx p_0$.
Therefore, it is convenient to discretize the momentum
in a logarithmic scale on $p$ for $p \ll p_0$ and on $p_0 - p$ for $p \approx p_0$.
Specifically, we use
\begin {align}
&p_n = p_{\rm min} \exp \qty[
\frac{\log (p_0/p_{\rm min})}{N_p} \sum_{i=1}^{i=n}
\qty( 1-\tanh \qty( \frac{i-1/2 - N_p/2}{\sigma} ) )
],
\\
&\sigma = \frac{N_p}{\log \qty[ 2 \qty( p_0/T ) \log (p_0 / p_{\rm min}) /N_p ]},
\end {align}
for $n = 1, 2, \dots, N_p$, where $N_p$ is the number of grid points.
One can check that $p_1 \simeq p_{\rm min} = 2T$ and $p_{N_p} \simeq p_0$ with exponentially small errors.
Here, $\sigma$ is chosen
such that the resolution of the momentum grid near $p \simeq p_0$ is approximately $T$; for example, $p_{N_p} - p_{N_p -1} \simeq T$.
As we will see shortly, there is no infrared divergence; hence, we do not require a small interval, particularly in the intermediate scale of $p$.
In our numerical calculation, we take $N_p = 10^4$ and $2p_0/p_{\rm min} = p_0 / T = 10^{12}$ unless otherwise stated.
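The grid above can be constructed directly; a minimal sketch with the parameters of the text ($N_p = 10^4$, $T = 10^3\,{\rm GeV}$, $p_0 = 10^{12}\,T$, $p_{\rm min} = 2T$):

```python
import numpy as np

# Momentum grid: logarithmic in p for p << p0 and logarithmic in (p0 - p)
# near p0, with the tanh switching profile defined in the text.
def momentum_grid(p0=1.0e15, T=1.0e3, N_p=10_000):
    p_min = 2.0 * T
    L = np.log(p0 / p_min)
    sigma = N_p / np.log(2.0 * (p0 / T) * L / N_p)
    i = np.arange(1, N_p + 1)
    weights = 1.0 - np.tanh((i - 0.5 - N_p / 2) / sigma)
    # p_n = p_min * exp[(L / N_p) * sum_{i<=n} weights_i]
    return p_min * np.exp((L / N_p) * np.cumsum(weights))

p = momentum_grid()
```

One can verify numerically that $p_1 \simeq p_{\rm min}$, $p_{N_p} \simeq p_0$, and that the spacing near the endpoint, $p_{N_p} - p_{N_p-1}$, is of order $T$.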
The source term in \eqref{eq:boltzmann} is the delta function [see Eq.~\eqref{eq:source}]; therefore, we require an appropriate boundary condition or regularization in numerical calculations.
As discussed in Sec.~\ref{sec:analytic},
the stationary solution to the Boltzmann equation with the delta-function source term can be analytically calculated for $p \approx p_0$.
The distribution for a primary particle $\si$ is proportional to $\sqrt{p_0/(p_0 - p)}$ for $\si = L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3, \phi, g, W$.
In our numerical calculation,
we use this analytic form of primary particle distribution for $p_n \in (p_{\rm ini}, p_0)$ with $p_{\rm ini}$ determined via $(p_0 -p_{\rm ini})/p_0 \simeq 10^{-3}$.
One can also take analytic distributions for other (secondary) particles, which are derived in Appendix~\ref{sec:appendixB}. However, these are subdominant at $p \approx p_0$ and are mainly produced from the primary particle; hence, they can be omitted for $p > p_{\rm ini}$.
We find that the resulting distributions for $p \lesssim p_{\rm ini}$ do not change for a different choice in $p_{\rm ini}$ within a numerical uncertainty if $(p_0 -p_{\rm ini})/p_0 \ll 1$ is satisfied.
For the case of $\si = e_f$ and $B$,
the distribution of a primary particle $\si$ is proportional to a delta function. In this case,
we simply substitute the delta function into the Boltzmann equation.
Specifically, for the case of $\si = e_f$, we include
$(2\pi)^3/(p^2 \nu_B) \, 3 \gamma_{e_f \leftrightarrow B e_f} (p_0;p,p_0-p) C_{e_f}'$ in \eqref{eq:boltzmann-photon}
instead of imposing boundary conditions for $\si = e_f$.
Including these terms, we solve the Boltzmann equation from $p \approx p_0$ to a smaller $p$.
The case with $\si = B$ is similar to the case with $\si = e_f$. In this case, we should include
$(2\pi)^3/(p^2 \nu_s) \, 2 d_s^{(2)}d_s^{(3)} \gamma_{B \leftrightarrow s \bar{s}} (p_0;p,p_0-p) C_{B}'$ in \eqref{eq:boltzmannparticle} (or correspondingly, Eqs.~(\ref{eq:boltzmannuf}), (\ref{eq:boltzmannQf}), (\ref{eq:boltzmannef}), (\ref{eq:boltzmannLf}), and (\ref{eq:f_phi})).
For a consistency check,
we utilize a different algorithm that simply uses a constant initial condition (fixed boundary condition) at $p = p_0$ for a particle species $\si$.
This method is used in Refs.~\cite{Drees:2021lbm,Drees:2022vvn}.
We find that
the results of this method are consistent with those determined by the above methods within a numerical uncertainty.
The splitting rate $\gamma_{s\leftrightarrow s' s''}(p;k, p-k)$ is proportional to $k^{-3/2}$ for soft gauge boson emission with a small momentum $k$. This implies an infrared divergence for the integral over $k$.
However, this is automatically regulated when we include all terms in the Boltzmann equation
because the emission of soft gauge bosons with energy $\epsilon p$ from energy $p$ is cancelled by the contribution from
the emission of soft gauge bosons with energy $p+\epsilon p$ in the limit of small $\epsilon$. Because we expect that the distribution function is continuous,
these contributions cancel each other out,
and the splitting terms in the Boltzmann equation do not exhibit infrared divergence.
Therefore, we do not (in principle) need to include an infrared cutoff to perform the numerical calculations.
Still, we must take particular care with the integral in a small $k \ll p$.
Let us consider the gluon self-splitting process as an example. The (apparent) IR divergent parts are given by
\begin{align}
\int_0^{\epsilon p} \dd k \,
\gamma_{g \leftrightarrow gg} \bigl(p; k, p-k \bigr) \,
f_g(p)
-
\int_0^{\epsilon p} \dd k \,
\gamma_{g \leftrightarrow gg} \bigl(p+k; p, k \bigr) \,
f_g(p+k)
\end{align}
where we focus on $k \in (0, \epsilon p)$ with $\epsilon \ll 1$.
Because $\epsilon \ll 1$,
we can approximate $f_g(p+k) \simeq f_g(p) + k f_g'(p)$.
Then, we obtain
\begin{align}
f_g (p) \int_0^{\epsilon p} \dd k \,
\qty[ \gamma_{g \leftrightarrow gg} \bigl(p; k, p-k \bigr) - \gamma_{g \leftrightarrow gg} \bigl(p+k; p, k \bigr) ]
-
f_g'(p) \int_0^{\epsilon p} \dd k \, k \,
\gamma_{g \leftrightarrow gg} \bigl(p+k; p, k \bigr).
\end{align}
These integrals are regular and can be performed analytically or numerically.
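The finiteness of this combination can be checked with a toy kernel that keeps only the soft scaling, say the hypothetical form $\gamma(P; k, P-k) = \sqrt{P}\, k^{-3/2}$ for $k \ll P$ (an illustrative normalization, not the full LPM rate). Each integral alone diverges as $k \to 0$, but after the Taylor expansion above the substitution $k = u^2$ makes both integrands manifestly regular:

```python
import numpy as np
from scipy.integrate import quad

# Toy check that the apparent IR divergence cancels, using the hypothetical
# soft kernel gamma(P; k, P-k) = sqrt(P) * k**(-1.5) for k << P.
p, eps = 1.0, 0.01

def difference_term(u):
    # [gamma(p; k, p-k) - gamma(p+k; p, k)] * dk/du  with  k = u**2
    k = u * u
    return 2.0 * (np.sqrt(p) - np.sqrt(p + k)) / (u * u)

def derivative_term(u):
    # k * gamma(p+k; p, k) * dk/du  with  k = u**2
    k = u * u
    return 2.0 * np.sqrt(p + k)

I1, _ = quad(difference_term, 0.0, np.sqrt(eps * p))   # multiplies f_g(p)
I2, _ = quad(derivative_term, 0.0, np.sqrt(eps * p))   # multiplies f_g'(p)
```

Both integrals are finite (for the toy kernel they agree with the small-$\epsilon$ expansions $I_1 \simeq -\epsilon^{1/2}$ and $I_2 \simeq 2\epsilon^{1/2}$ at $p = 1$), confirming that no explicit IR cutoff is needed.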
In our discrete momentum method,
we must determine $f_g'(p_i)$ from the information of $f_g(p_j)$ with $j > i$.
One may simply calculate the derivative from $f_g(p_{j})$ with $j > i$, and interpolate it to $p = p_i$.
However, this may result in artificial instability for the numerical calculation.
This is because an error on the derivative at $p_j = p_{i+1}$ leads to an error on $f_g(p_i)$, which results in a larger error on the derivative at $p_i$.
This enhances the error on $f_g(p)$ at a small $p$ when we iteratively determine $f_g(p)$ from a large $p$.
To avoid this artificial instability,
we calculate the derivatives for $f_g(p_{j})$ with $j = i+1, i+2, \dots, i+100$ and take their averaged value to approximate $f_g'(p_i)$.
In other words, we discretize the derivatives of the distributions by $N_p/100 = 100$ grids rather than $N_p$ grids. This is still accurate for our purpose because the derivatives of the distributions do not change drastically within $p \in (p_i, p_{i+100})$.
We emphasize that this procedure for the (apparent) IR divergence
allows us to take a larger momentum grid than the physical IR cutoff of $\mathcal{O}(T)$.
This in turn allows us to start from a significantly larger momentum $p_0$ than $T$.
In fact, we can take $p_0 / T = 10^{12}$ or larger,
even for a relatively small number of grids, $N_p = 10^4$.
\subsection{Numerical results and scaling solution}
\label{sec:numericalscaling}
The numerical results for the particle distributions weighted by $(p/p_0)^{7/2}$ are shown in Fig.~\ref{fig:f-1} for $\si = e_f$ (top left), $L_f$ (top right), $u_{f'}$ (middle left), $u_3$ (middle right), and $d_f$ (bottom left)
and in Fig.~\ref{fig:f-2} for $\si = Q_{f'}$ (top left), $Q_3$ (top right),
$\phi$ (middle left), $g$ (middle right), $W$ (bottom left), and $B$ (bottom right).
As stated, we use $T = 10^3 \GeV$, $p_0/ T = 10^{12}$, and $N_p = 10^4$,
where the dependence of our results on $T$ (or equivalently, $p_0$) is only logarithmic through the running of the gauge couplings.
We take $\tilde{\Gamma} = 1$ without loss of generality, where our results for $f_s (p)$ should be rescaled by $\tilde{\Gamma}$ for other values of $\tilde{\Gamma}$.
In all cases except $\si = B$,
the injected particle is dominant at $p \approx p_0$,
and other secondary particles are produced with the suppression factors discussed in Appendix~\ref{sec:appendixB} (see Fig.~\ref{fig:flow1}).
Our numerical results confirm that
all particle distributions scale as $f \propto p^{-7/2}$ for $p \ll p_0$ for any injected particle.
For a smaller $p$, the particle distributions reach a scaling solution that is independent of the initially injected particle.
To determine the dominant particle species during the splitting process, we plot $R_s(p)$ given by \eqref{eq:Rs} as a function of $p$ in Figs.~\ref{fig:R-1} and \ref{fig:R-2} for each $\si$.
We can see that $R_s$ is asymptotic to a certain value $R_s^{\rm (asym)}$ for a sufficiently small $p/p_0$ for any $\si$.
The asymptotic values are consistent with the analytic result derived in Sec.~\ref{sec:analytic},
as shown in Tab.~\ref{tab:1}.
In particular, the gluon dominates the number and energy of SM particles in the scaling regime for any $\si$.
The energy threshold for the scaling solution is represented by $p_{\rm asym}^{(\si)}$,
where
$R_s(p) \simeq R_s^{\rm (asym)}$ for all $s$ for $p < p_{\rm asym}^{(\si)}$.
Namely, $p_{\rm asym}^{(\si)}$ represents
the energy below which
$R_s(p)$ is equal to the scaling solution within $10\%$ for any $s$
for the case in which a species $\si$ is injected at $p = p_0$.
The particles charged under SU(2) tend to require a relatively small $p_{\rm asym}^{(\si)}$.
In particular, the cases with $\si = L_f$ and $W$ do not reach the scaling solution for $p_0 / T = 10^{12}$.
For these two scenarios, we also calculate the stationary solution to the Boltzmann equation with $p_0 / T = 10^{14}$.
The case with $\si = L_f$ does not reach the scaling solution, even in this scenario, whereas the case with $\si = W$ reaches the scaling solution at $p_{\rm asym}^{(W)} = 6 \times 10^{-14} p_0$.
We also define $p_{\rm asym'}^{(\si)}$ such
that
$R_s(p) \simeq R_s^{\rm (asym)}$ for $p < p_{\rm asym'}^{(\si)}$ for all $s$ except $s = L_f, W$.
This represents the scale below which the dominant components, such as gluons, reach the scaling regime,
while particles that interact only through the slower SU(2) interactions may not.
The results are summarized in Tabs.~\ref{tab:3} and \ref{tab:4}.
In Figs.~\ref{fig:f-1}, \ref{fig:f-2}, \ref{fig:R-1}, and \ref{fig:R-2},
the dark and light shaded regions represent $p < p_{\rm asym}^{(s)}$ and $p < p_{\rm asym'}^{(s)}$, respectively.
The difference between $p_{\rm asym}^{(\si)}$ and $p_{\rm asym'}^{(\si)}$
represents how slowly $s = L_f$ and $W$ reach the scaling solution, even after the other species achieve this.
\begin{table*}
\begin{center}
\begin{tabular}{c|ccccccc}
& $e_f$ & $L_f$ & $u_{f'}$ & $u_3$ & $d_f$ & $Q_{f'}$ & $Q_3$
\\[.2em]
\hline
$\tilde{f}_{\rm tot, \si}^{\rm (asym)}$ & $20.7$ & $21.0$ & $23.1$ & $22.9$ & $23.1$ & $22.2$ & $22.1$
\\
$p_{\rm asym}^{(\si)}/p_0$ & $3 \times 10^{-10}$ & $< 10^{-14}$ & $3 \times 10^{-6}$ & $8 \times 10^{-6}$ & $1 \times 10^{-6}$ & $2 \times 10^{-11}$ & $2 \times 10^{-11}$
\\
$p_{\rm asym'}^{(\si)}/p_0$ & $5 \times 10^{-6}$ & $2 \times 10^{-10}$ & $1 \times 10^{-4}$ & $1 \times 10^{-4}$ & $6 \times 10^{-4}$ & $5 \times 10^{-7}$ & $5 \times 10^{-7}$
\end{tabular}
\end{center}
\caption{Asymptotic value of total distributions and the typical energy scales of the scaling solution $p_{\rm asym}^{(\si)}$ and $p_{\rm asym'}^{(\si)}$ for $\si = (e_f, L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3)$.
}
\label{tab:3}
\end{table*}
\begin{table*}
\begin{center}
\begin{tabular}{c|cccc}
& $\phi$ & $g$ & $W$ & $B$ \\[.2em]
\hline
$\tilde{f}_{\rm tot, \si}^{\rm (asym)}$ & $23.4$ & $23.4$ & $22.6$ & $22.1$
\\
$p_{\rm asym}^{(\si)}/p_0$ & $1 \times 10^{-11}$ & $3 \times 10^{-5}$ & $6 \times 10^{-14}$ & $5 \times 10^{-10}$
\\
$p_{\rm asym'}^{(\si)}/p_0$ & $4 \times 10^{-7}$ & $2 \times 10^{-2}$ & $3 \times 10^{-9}$ & $3 \times 10^{-5}$%
\end{tabular}
\end{center}
\caption{Same as Tab.~\ref{tab:3} but for $\si =( \phi, g, W, B)$.}
\label{tab:4}
\end{table*}
In Sec.~\ref{sec:analyticsmall}, we discuss that the asymptotic value of the total distribution $\tilde{f}_{\rm tot}^{\rm (asym)}$ is independent of $\si$.
To confirm this,
we define
the asymptotic value of the total distribution for each $\si$ as
\begin{align}
f_{\rm tot, \si}^{\rm (asym)} (p)
&\equiv
\left. \sum_s \nu_s f_s(p)
\right\vert_{p < p_{\rm asym}^{(s_{\rm inj})}},
\label{eq:totaldist2}
\\
&\equiv
\tilde{\Gamma} \tilde{f}_{\rm tot, \si}^{\rm (asym)}
\qty( \frac{p}{p_0} )^{-7/2}.
\label{eq:tildetotal2}
\end{align}
The proportionality constant is determined from numerical calculations for each $\si$ and is shown in Tabs.~\ref{tab:3} and \ref{tab:4} for $\tilde{\Gamma} = 1$.
We also show the result of $\si = L_f$
even though it does not reach the scaling solution.
For all cases, the values of $\tilde{f}_{\rm tot, \si}^{\rm (asym)}$ agree with one another within errors of order $10 \%$.
The averaged value is about $22.4$, which is consistent with the rough estimation of \eqref{eq:tildeftotest}.%
\footnote{
For the case with only gluons, namely for the pure SU(3) gauge theory, we obtain $\tilde{f}_{\rm tot}^{\rm (asym)} \simeq 11.3$ from our numerical calculation. This is smaller than that for the full SM case by a factor of about two. The difference comes from the fact that some fraction of energy is transferred into the other SM particles, which have slower thermalization processes. The thermalization rate of the whole sector is therefore smaller in the full SM case and the resulting asymptotic value $\tilde{f}_{\rm tot}^{\rm (asym)}$ becomes larger than the case only with gluons.
}
The deviation among different $\si$ may come from numerical errors and from the fact that the scaling solution is not exact for a finite $p/p_0$.
In particular,
the relatively large deviation for $\si = L_f$ comes from the fact that the system does not reach the scaling solution even for $p = 10^{-14} p_0$.
For example, the asymptotic values $\tilde{f}_{\rm tot, \si}^{\rm (asym)}$ for $\si = (L_f, W)$ change from ($19.2, \, 20.9$) for $p/p_0 = 10^{-12}$
to ($21.0, \, 22.6$) for $p/p_0 = 10^{-14}$.
Although the asymptotic behavior for $p \ll p_0$ is similar in all cases,
the initial cascading process at $p \approx p_0$ is different.
These features are, however, consistent with the analytic calculations derived in Sec.~\ref{sec:analytic} and Appendix~\ref{sec:appendixB}.
We show a couple of examples of distributions at $p \approx p_0$ in Fig.~\ref{fig:p0},
where $\si = e_f$ (left panel) and $B$ (right panel).
In the figure, we do not show the delta-function distribution for $e_f$ and $B$.
The distributions of all particles have a momentum dependence consistent with Fig.~\ref{fig:flow1}.
The cascading process at $p \approx p_0$ can also be seen in Figs.~\ref{fig:f-1} and \ref{fig:f-2}, which are consistent with Fig.~\ref{fig:flow1}.
\section{Application to non-thermal DM production}
\label{sec:application}
In this section, we briefly explain how our results can be used to calculate the DM abundance from non-thermal production during the thermalization process.
We can improve the qualitative estimation of Ref.~\cite{Harigaya:2014waa} to a quantitative level.
Suppose that DM $\chi$ is produced from
the reaction of $s_1 s_2 \to \chi \chi$.
The Boltzmann equation is given by
\begin{align}
\frac{\dd}{\dd t} n_\chi + 3 H n_\chi
&=
2 \int \dd {\rm Lips}
\abs{\mathcal{M}}^2 \nu_{s_1} f_{s_1} (p_1)
\nu_{s_2} f_{s_2} (p_2),
\\
&= 2 \int \frac{\dd^3 p_1}{(2 \pi)^3} \frac{\dd^3 p_2}{(2\pi)^3}
\nu_{s_1} f_{s_1} (p_1)
\nu_{s_2} f_{s_2} (p_2)
\sigma v,
\end{align}
where $\mathcal{M}$ is the amplitude of the scattering process $s_1 s_2 \to \chi \chi$ and
\begin{align}
&\dd {\rm Lips} \equiv
\dd\Pi_1 \dd \Pi_2 \dd \Pi_3 \dd \Pi_4 (2 \pi)^4 \delta^{(4)}(p_1 + p_2 - p_3 - p_4),
\\
&\dd \Pi_i \equiv \frac{\dd^3 p_i}{(2\pi)^3 2 E_i}.
\end{align}
The distribution $f_s$ is given by a combination of
the thermal and non-thermal parts, such as
\begin{align}
&\nu_s f_s(p) = \nu_s f_s^{\rm (th)}(p) + \nu_s f_s^{\rm (Non-th)}(p),
\\
&f_s^{\rm (th)} (p) = \frac{1}{e^{p/T} \pm 1}.
\end{align}
Here, the non-thermal part is given by
\begin{align}
\nu_s f_s^{\rm (Non-th)} (p) &= R_s (p) f_{\rm tot} (p)
\\
&\simeq
\frac{ 2 \pi^2 \rho_I(t) \Gamma_I }{p_0^{7/2} T^{3/2}(t)} R_s^{\rm (asym)} \tilde{f}_{\rm tot}^{\rm (asym)} \qty( \frac{p}{p_0} )^{-7/2},
\end{align}
for $p \gtrsim T$,
where we use Eq.~(\ref{eq:tildetotal}) and $\tilde{\Gamma} = 2 \pi^2 \rho_I \Gamma_I / (p_0^{7/2} T^{3/2})$
and assume $p < p_{\rm asym}^{(\si)}$
in the final line such that
the production process is dominated by scattering with energy $p$ in the scaling regime.
In this form, the time dependence originates from $\rho_I$ and $T$.
Note that $\tilde{f}_{\rm tot}^{\rm (asym)}$ only depends on $T(t)$ logarithmically; hence, we can neglect its dependence on the time variable.
We can solve the Boltzmann equation for $\chi$ to obtain its abundance by
substituting $R_s^{\rm (asym)}$ and $\tilde{f}_{\rm tot}^{\rm (asym)}$
($\simeq 21\,\text{-}\,23$) from Tabs.~\ref{tab:3} and \ref{tab:4}.
At the end of reheating, we have $T = T_{\rm RH} \simeq (90/(g_* \pi^2))^{1/4} \sqrt{\Gamma_I \Mpl}$, where $g_*$ is the effective number of relativistic degrees of freedom.
If the primary particles are produced from inflaton decay, we have $\rho_I = 3 H^2 \Mpl^2 \simeq 3 \Gamma_I^2 \Mpl^2$.
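As a quick numerical illustration of the reheating temperature formula above (with the illustrative choices $\Gamma_I = 1 \GeV$, the SM value $g_* = 106.75$, and the reduced Planck mass $\Mpl \simeq 2.4 \times 10^{18} \GeV$):

```python
import numpy as np

# T_RH = (90 / (g_* pi^2))^(1/4) * sqrt(Gamma_I * Mpl); the default
# arguments are example values, not fixed by the text.
def T_RH(Gamma_I, g_star=106.75, Mpl=2.4e18):
    return (90.0 / (g_star * np.pi ** 2)) ** 0.25 * np.sqrt(Gamma_I * Mpl)

print(f"{T_RH(1.0):.2e} GeV")   # ~ 8e8 GeV for Gamma_I = 1 GeV
```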
Now, we shall calculate the DM abundance in a toy model.
We assume that the cross section for DM production is given by $\sigma = \alpha_{\chi}^2 /s$ for $s > 4 m_\chi^2$, where $\alpha_{\chi}$ is a coupling constant, and $s$ is the center-of-mass energy.
When $m_\chi^2 / T < p_0$,%
\footnote{
When $m_\chi^2 / T > p_0$,
DM can be produced mainly by scattering among the high-energy particles from the cascade~\cite{Harigaya:2014waa}.
}
DM is produced mainly by scattering between the thermal plasma and the high-energy particles from the cascade:
\begin{align}
\frac{\dd}{\dd t} n_\chi + 3 H n_\chi
&=
\frac{ 2 \pi^2 \rho_I(t) \Gamma_I }{p_0^{7/2} T^{3/2}(t)}
\qty( \nu_{s_1} R_{s_2}^{\rm (asym)} +
\nu_{s_2} R_{s_1}^{\rm (asym)} )
\tilde{f}_{\rm tot}^{\rm (asym)}
\nonumber\\
&\qquad \qquad \qquad \times
\frac{1 }{4\pi^4}
\int_0^\infty p_1^2 \dd p_1
\int_T^{p_0} p_2^2 \dd p_2 \int_{-1}^1 \dd \cos \theta \,
e^{-p_1/T} \qty( \frac{p_2}{p_0} )^{-7/2}
\sigma v,
\\
&\simeq
\frac{ 2 \pi^2 \rho_I(t) \Gamma_I }{p_0^{7/2} T^{3/2}(t)}
\qty( \nu_{s_1} R_{s_2}^{\rm (asym)} +
\nu_{s_2} R_{s_1}^{\rm (asym)} )
\tilde{f}_{\rm tot}^{\rm (asym)}
\frac{1}{4\pi^4}
\alpha_{\chi}^2
\frac{15 \sqrt{\pi}}{36} \frac{(T p_0)^{7/2} }{m_\chi^3},
\\
&= \frac{15 \alpha_{\chi}^2}{72 \pi^{3/2}}
\frac{ \rho_I(t) \Gamma_I T^2(t)}{m_\chi^3} \qty( \nu_{s_1} R_{s_2}^{\rm (asym)} +
\nu_{s_2} R_{s_1}^{\rm (asym)} ) \tilde{f}_{\rm tot}^{\rm (asym)},
\label{eq:boltzmannchi}
\end{align}
where we assume $p_0 T \gg m_\chi^2$ for simplicity.
To solve \eqref{eq:boltzmannchi}, we need the time-dependence of $T(t)$, which obeys
\begin{align}
&\frac{\dd}{\dd t} \rho_r + 4 H \rho_r = \Gamma_I \rho_I,
\\
&H(t) = \sqrt{\frac{\rho_I + \rho_r}{3 \Mpl^2}},
\\
&\rho_r = \frac{g_* \pi^2}{30} T^4.
\end{align}
The time-dependence of $\rho_I$ is given by \eqref{eq:nI} with $\rho_I = m_I n_I$.
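The reheating system above can also be integrated numerically. A minimal sketch (our own illustration, in units with $\Mpl = 1$ and an arbitrary decay rate, evolving $\rho_I$ via $\dot{\rho}_I = -(3H + \Gamma_I)\rho_I$) verifies the standard behavior $\rho_r \propto a^{-3/2}$, i.e.\ $T \propto a^{-3/8}$, while the inflaton still dominates:

```python
# Integrate d(rho_I)/dt = -(3H + Gamma) rho_I,
#           d(rho_r)/dt = -4 H rho_r + Gamma rho_I,
# with H = sqrt((rho_I + rho_r)/(3 Mpl^2)), and check that the
# radiation bath dilutes as rho_r ~ a^{-3/2} during inflaton domination.
import numpy as np
from scipy.integrate import solve_ivp

Mpl, Gamma = 1.0, 1e-4            # illustrative units with Mpl = 1
rho_I0 = 3 * 0.1**2 * Mpl**2      # start with H = 0.1 >> Gamma

def rhs(t, y):
    rho_I, rho_r, a = y
    H = np.sqrt((rho_I + rho_r) / (3 * Mpl**2))
    return [-(3 * H + Gamma) * rho_I,
            -4 * H * rho_r + Gamma * rho_I,
            a * H]

sol = solve_ivp(rhs, (1e-3, 300.0), [rho_I0, 0.0, 1.0],
                t_eval=[100.0, 300.0], rtol=1e-8, atol=1e-14)
(r1, r2), (a1, a2) = sol.y[1], sol.y[2]
slope = np.log(r2 / r1) / np.log(a2 / a1)
print(slope)   # close to -3/2 while the inflaton dominates
```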
Integrating \eqref{eq:boltzmannchi} over $t$ in these equations,
we obtain
\begin{align}
\frac{\rho_\chi}{s}
&\simeq 4.4 \times
\frac{45}{2 \pi^2 g_{*s}}
\frac{15 \alpha_{\chi}^2}{72 \pi^{3/2}}
\frac{ \Gamma_I^{3/2}\Mpl^{3/2}}{m_\chi^2}
\qty( \nu_{s_1} R_{s_2}^{\rm (asym)} +
\nu_{s_2} R_{s_1}^{\rm (asym)} )
\tilde{f}_{\rm tot}^{\rm (asym)},
\\
&\simeq 2.2 \times 10^{-2} \,
\alpha_\chi^2 \frac{T_{\rm RH}^3}{m_\chi^2}
\qty( \nu_{s_1} R_{s_2}^{\rm (asym)} +
\nu_{s_2} R_{s_1}^{\rm (asym)} )
\tilde{f}_{\rm tot}^{\rm (asym)},
\end{align}
where $\rho_\chi = m_\chi n_\chi$, $s$ is the entropy density, and $g_{*s}$ is the effective number of degrees of freedom of the entropy density.
In the final line, we assume $g_* = g_{*s} = 106.75$.
This agrees with the order-of-magnitude estimate of Ref.~\cite{Harigaya:2014waa}
up to a numerical factor, as expected.
In particular, the result is independent of $p_0$.
Note that the SM gauge coupling dependence is implicitly included in $\tilde{f}_{\rm tot}^{\rm (asym)}$ ($\sim \alpha_s^{-2}$) [see Eq.~\eqref{eq:tildeftotest}].
Let us substitute some numbers.
The factor $\tilde{f}_{\rm tot}^{\rm (asym)}$ takes similar values for all $\si$, agreeing to within about $10\%$, so one may use the average value $\tilde{f}_{\rm tot, \si}^{\rm (asym)} \simeq 22$.
For example,
if $\chi$ are produced from the scattering of weak bosons,
we substitute $\nu_W = 6$, $R_W^{\rm (asym)} \simeq 0.0533$ and obtain
\begin{align}
\frac{\rho_\chi}{s}
&\simeq 0.33 \, \alpha_\chi^2 \frac{T_{\rm RH}^3}{m_\chi^2}.
\end{align}
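As an illustration, one can invert the relation above for the coupling required to reproduce the observed dark matter yield $\rho_{\rm DM}/s \simeq 4.4 \times 10^{-10} \, {\rm GeV}$; the parameter values below are our own illustrative choices, not taken from the text:

```python
# Invert rho_chi/s = 0.33 alpha^2 T_RH^3 / m_chi^2 (weak-boson case)
# for the coupling alpha_chi matching the observed DM yield.
import math

rho_dm_over_s = 4.4e-10   # GeV, observed dark matter yield
T_RH = 1.0e6              # GeV, reheating temperature (illustrative)
m_chi = 1.0e8             # GeV, DM mass (m_chi > T_RH is allowed here,
                          # since production proceeds via the cascade)

alpha = math.sqrt(rho_dm_over_s * m_chi**2 / (0.33 * T_RH**3))
print(f"alpha_chi = {alpha:.1e}")   # a few times 1e-6
```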
Finally, we note that the result does not depend on the details of the inflaton decay.
First, it is independent of the inflaton mass and the decay rate, as discussed in Ref.~\cite{Harigaya:2014waa}.
Moreover,
as we discuss in Sec.~\ref{sec:analytic} and show in Tabs.~\ref{tab:1}, \ref{tab:3}, and \ref{tab:4},
the asymptotic value of distributions in the scaling regime $R_s^{\rm (asym)} \tilde{f}_{\rm tot}^{\rm (asym)}$ is independent of the injected particle $\si$.
Our result is therefore almost independent of the high-energy physics, even though high-energy particles are injected by the heavy-particle decay.
\section{Discussion and conclusions}
\label{sec:conclusion}
We have investigated
the thermalization of high-energy SM particles in an ambient thermal plasma. This situation is realized, \textit{e.g.}, during reheating after inflation, where the inflaton decays into SM particles with energy of the order of the inflaton mass.
All the relevant terms for splitting in the Boltzmann equations for all SM particles were taken into account, including the Abelian and non-Abelian gauge interactions and the top Yukawa interaction with the Higgs field.
The Boltzmann equations were numerically solved with a constant delta-function source term at $p = p_0$ with $p_0 \gg T$.
The distribution of particles scales as $p^{-7/2}$ for $p \ll p_0$, as expected from the scaling property of the splitting rate.
We have analytically calculated the asymptotic behavior of distributions at $p \approx p_0$ and $p \ll p_0$.
The former describes how the primary particles lose their energy and how the secondary and subsequent particles are produced during the initial stage of thermalization.
The asymptotic behavior of distribution for primary particles at $p \approx p_0$ is proportional to $(p_0 - p)^{-1/2}$ for all SM particles other than the right-handed electron $e_f$ and hypercharge gauge boson $B$.
If the primary particle is $e_f$ or $B$, special care is required: the distribution is proportional to a delta function. This is because these particles have only the Abelian gauge interaction, for which the self-splitting rate is strongly suppressed.
Therefore, the delta-function source term is not smeared out into a smooth distribution.
Furthermore, we have shown that the asymptotic behavior of distributions at $p \ll p_0$ is independent of the injected primary particles.
In particular, the number density is dominated by gluons after a sufficiently large number of splittings occurs from any primary particle.
This is a strong prediction for the thermalization of high-energy SM particles.
Our results are useful when considering non-thermal production of DM during thermalization.
Since the distribution of SM particles is proportional to $p^{-7/2}$ at small $p$, a large amount of DM can be produced from scattering between the cascading particles and the thermal plasma.
The abundance of DM can be calculated using the result of distributions for a given energy $p$.
In particular, if the energy scale of the DM production process is much lower than the initial energy of the primary particles,
the scaling solution for the distributions can be used, and the result is independent of the way the primary particles are produced.
Furthermore, this result provides a definite prediction of DM abundance even though the process is highly non-thermal.
Moreover, there are several proposals for the non-thermal production of lepton asymmetry during the reheating era~\cite{Hamada:2015xva, Hamada:2018epb, Asaka:2019ocw}, for which the detailed thermalization process investigated in this paper may be important for calculating the produced lepton asymmetry.
We note that our results have broad applications.
Our assumptions are as follows:
i) Some SM particles are produced with a large energy $p_0$ and a small distribution $f_{\si} (p_0) \ll 1$,
ii) An ambient thermal plasma is present with a temperature $T$ ($\ll p_0$),
and
iii) The splitting rate is significantly greater than the Hubble expansion rate.
In several situations, these conditions are satisfied.
For example, if reheating proceeds via the perturbative decay of the inflaton, these conditions are satisfied after the temperature of the Universe reaches its maximal value~\cite{Mukaida:2015ria}.
The conditions are also satisfied at the last stage of reheating, in which case the temperature is equal to the reheating temperature.
For the decay of a long-lived heavy particle or extended object, our analysis is applicable to both cases where it dominates and does not dominate the Universe.
\section*{Acknowledgments}
K.\,M.\, was supported by MEXT Leading Initiative for Excellent Young Researchers Grant No.\ JPMXS0320200430,
and by JSPS KAKENHI Grant No.\ JP22K14044.
M.\,Y.\ was supported by the Leading Initiative for Excellent Young Researchers,
MEXT, Japan, and by JSPS KAKENHI Grant No.\ JP20H05851 and JP21K13910.
\appendix
\section{Boltzmann equations for the SM}
\label{sec:appendixA}
In this Appendix, we write the Boltzmann equations for the SM particles. Except for the top Yukawa, the Yukawa interactions can be neglected.
For notational simplicity, the arguments of $\gamma_i$ and $f_s$ are omitted; they can be read off from the general form of \eqref{eq:boltzmann} or from the following:
\begin{align}
\frac{\partial }{\partial t} f_s (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_s} \sum_{s',s''}
\int_0^p \dd k \,
\gamma_{s \leftrightarrow s's''} \bigl(p; k, p-k \bigr) \,
f_s(p)
+
\frac{(2\pi)^3}{p^2 \nu_s} \sum_{s',s''}
\int_0^\infty \dd k \,
\gamma_{s' \leftrightarrow s s''} \bigl(p+k; p, k \bigr) \,
f_{s'}(p+k)
\nn
&\qquad + ({\rm source \ term}).
\end{align}
For the gluons,
\begin{align}
\frac{\partial }{\partial t} f_g (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_g}
\int_0^p \dd k \,
\qty[ \gamma_{g \leftrightarrow gg} + \sum_f \qty( \gamma_{g \leftrightarrow u_f \bar{u}_f} + \gamma_{g \leftrightarrow d_f \bar{d}_f} + 2 \gamma_{g \leftrightarrow Q_f \bar{Q}_f} ) ]
f_g
\nn
&+
\frac{(2\pi)^3}{p^2 \nu_g}
\int_0^\infty \dd k \,
\qty[
2 \gamma_{g \leftrightarrow gg} f_g
+ \sum_f \qty( \gamma_{u_f \leftrightarrow g u_f} f_{u_f}
+ \gamma_{d_f \leftrightarrow g d_f} f_{d_f}
+ 2 \gamma_{Q_f \leftrightarrow gQ_f} f_{Q_f}
) ]
.
\label{eq:boltzman_g}
\end{align}
Similarly,
\begin{align}
\frac{\partial }{\partial t} f_W (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_W}
\int_0^p \dd k \,
\qty[ \gamma_{W \leftrightarrow WW} + \sum_f \qty( 3 \gamma_{W \leftrightarrow Q_f \bar{Q}_f}
+ \gamma_{W \leftrightarrow L_f \bar{L}_f} )
+ \gamma_{W \leftrightarrow \phi \phi^*} ]
f_W
\nn
&+
\frac{(2\pi)^3}{p^2 \nu_W}
\int_0^\infty \dd k \,
\qty[
2 \gamma_{W \leftrightarrow WW} f_W
+ \sum_f \qty( 3 \gamma_{Q_f \leftrightarrow W Q_f} f_{Q_f}
+ \gamma_{L_f \leftrightarrow W L_f} f_{L_f} )
+ \gamma_{\phi \leftrightarrow W \phi} f_{\phi}
]
,
\\
\frac{\partial }{\partial t} f_B (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_B}
\int_0^p \dd k \,
\qty[ \sum_f \qty(
3 \gamma_{B \leftrightarrow u_f \bar{u}_f}
+ 3 \gamma_{B \leftrightarrow d_f \bar{d}_f}
+ 6 \gamma_{B \leftrightarrow Q_f \bar{Q}_f}
+ \gamma_{B \leftrightarrow e_f \bar{e}_f}
+ 2 \gamma_{B \leftrightarrow L_f \bar{L}_f}
)
+ 2 \gamma_{B \leftrightarrow \phi \phi^*}
]
f_B
\nn
&+
\frac{(2\pi)^3}{p^2 \nu_B}
\int_0^\infty \dd k \,
\qty[
\sum_f
\qty(
3 \gamma_{u_f \leftrightarrow B u_f} f_{u_f}
+ 3 \gamma_{d_f \leftrightarrow B d_f} f_{d_f}
+ 6 \gamma_{Q_f \leftrightarrow B Q_f} f_{Q_f}
+ \gamma_{e_f \leftrightarrow B e_f} f_{e_f}
+ 2 \gamma_{L_f \leftrightarrow B L_f} f_{L_f}
)
+ 2 \gamma_{\phi \leftrightarrow B \phi} f_{\phi}
]
.
\label{eq:boltzmann-photon}
\end{align}
For quarks, we have
\begin{align}
\frac{\partial }{\partial t} f_{u_f} (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_{u_f}}
\int_0^p \dd k \,
\qty[ \gamma_{u_f \leftrightarrow g u_f}
+ 3 \gamma_{u_f \leftrightarrow B u_f} ]
f_{u_f}
\nn
&+
\frac{(2\pi)^3}{p^2 \nu_{u_f}}
\int_0^\infty \dd k \,
\qty[
2 \gamma_{g \leftrightarrow u_f \bar{u}_f} f_g
+ 6 \gamma_{B \leftrightarrow u_f \bar{u}_f} f_B
+ \qty( \gamma_{u_f \leftrightarrow u_f g}
+ 3 \gamma_{u_f \leftrightarrow u_f B} ) f_{u_f}
] \,,
\label{eq:boltzmannuf}
\end{align}
for $f= 1,2$
and similarly for $d_f$ with $f=1,2,3$. Here a factor of $2$ is included for pair creation because $\nu_s$ counts particles and antiparticles together.
For the left-handed quarks, we have
\begin{align}
\frac{\partial }{\partial t} f_{Q_f} (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_{Q_f}}
\int_0^p \dd k \,
\qty[ \qty( 2 \gamma_{Q_f \leftrightarrow g Q_f}
+ 3 \gamma_{Q_f \leftrightarrow W Q_f}
+ 6 \gamma_{Q_f \leftrightarrow B Q_f} ) ]
f_{Q_f}
\nn
&+
\frac{(2\pi)^3}{p^2 \nu_{Q_f}}
\int_0^\infty \dd k \,
\qty[
4 \gamma_{g \leftrightarrow Q_f \bar{Q}_f} f_g
+ 6 \gamma_{W \leftrightarrow Q_f \bar{Q}_f} f_W
+ 12 \gamma_{B \leftrightarrow Q_f \bar{Q}_f} f_B
+ \qty( 2 \gamma_{Q_f \leftrightarrow Q_f g}
+ 3 \gamma_{Q_f \leftrightarrow Q_f W}
+ 6 \gamma_{Q_f \leftrightarrow Q_f B} ) f_{Q_f}
]
\label{eq:boltzmannQf}
\end{align}
for $f= 1,2$.
For the top quark,
we should add the contributions from the Yukawa interaction:
\begin{align}
\frac{\partial }{\partial t} f_{u_3} (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_{u_3}}
\int_0^p \dd k \,
\qty[ \dots + 3 \gamma_{u_3 \leftrightarrow \phi Q_3} ]
f_{u_3}
+
\frac{(2\pi)^3}{p^2 \nu_{u_3}}
\int_0^\infty \dd k \,
\qty[
\dots
+ 6 \gamma_{\phi \leftrightarrow u_3 \bar{Q}_3} f_{\phi}
+ 3 \gamma_{Q_3 \leftrightarrow u_3 \phi^* } f_{Q_3}
],
\label{eq:f_u3}
\\
\frac{\partial }{\partial t} f_{Q_3} (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_{Q_3}}
\int_0^p \dd k \,
\qty[ \dots + 3 \gamma_{Q_3 \leftrightarrow \phi^* u_3} ]
f_{Q_3}
+
\frac{(2\pi)^3}{p^2 \nu_{Q_3}}
\int_0^\infty \dd k \,
\qty[
\dots
+ 6 \gamma_{\phi \leftrightarrow \bar{Q}_3 u_3 } f_{\phi}
+ 3 \gamma_{u_3 \leftrightarrow Q_3 \phi} f_{u_3}
]
.
\label{eq:f_Q3}
\end{align}
For the leptons, we have
\begin{align}
\frac{\partial }{\partial t} f_{e_f} (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_{e_f}}
\int_0^p \dd k \,
\qty[
\gamma_{e_f \leftrightarrow B e_f} ]
f_{e_f}
+
\frac{(2\pi)^3}{p^2 \nu_{e_f}}
\int_0^\infty \dd k \,
\qty[
2 \gamma_{B \leftrightarrow e_f \bar{e}_f} f_B
+ \gamma_{e_f \leftrightarrow e_f B} f_{e_f}
]
,
\label{eq:boltzmannef}
\\
\frac{\partial }{\partial t} f_{L_f} (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_{L_f}}
\int_0^p \dd k \,
\qty[
\gamma_{L_f \leftrightarrow W L_f}
+ 2 \gamma_{L_f \leftrightarrow B L_f} ]
f_{L_f}
\nn
&+
\frac{(2\pi)^3}{p^2 \nu_{L_f}}
\int_0^\infty \dd k \,
\qty[
2 \gamma_{W \leftrightarrow L_f \bar{L}_f} f_W
+ 4 \gamma_{B \leftrightarrow L_f \bar{L}_f} f_B
+ \qty( \gamma_{L_f \leftrightarrow L_f W}
+ 2 \gamma_{L_f \leftrightarrow L_f B} ) f_{L_f}
] \,,
\label{eq:boltzmannLf}
\end{align}
for $f= 1,2,3$.
The Higgs field obeys
\begin{align}
\frac{\partial }{\partial t} f_{\phi} (p,t)
&= - \frac{(2\pi)^3}{p^2 \nu_{\phi}}
\int_0^p \dd k \,
\qty[
\gamma_{\phi \leftrightarrow W \phi}
+ 2 \gamma_{\phi \leftrightarrow B \phi}
+ 6 \gamma_{\phi \leftrightarrow u_3 \bar{Q}_3} ]
f_{\phi}
\nn
&+
\frac{(2\pi)^3}{p^2 \nu_{\phi}}
\int_0^\infty \dd k \,
\qty[
2 \gamma_{W \leftrightarrow \phi \phi^*} f_W
+ 4 \gamma_{B \leftrightarrow \phi \phi^*} f_B
+ \qty( \gamma_{\phi \leftrightarrow \phi W}
+ 2 \gamma_{\phi \leftrightarrow \phi B} ) f_\phi
+ 3 \gamma_{u_3 \leftrightarrow \phi Q_3} f_{u_3}
+ 3 \gamma_{Q_3 \leftrightarrow \phi^* u_3} f_{Q_3}
]
.
\label{eq:f_phi}
\end{align}
\section{Boundary condition and asymptotic behavior at $p \approx p_0$}
\label{sec:appendixB}
In this Appendix, the asymptotic behaviors of distributions at $p \approx p_0$ are calculated, which are useful to impose boundary conditions for distributions in numerical calculations.
The asymptotic behavior is different for the primary and other particles.
\subsection{Primary particles}
\subsubsection{Case with primary gluon}
First, we consider the case in which gluons are injected at $p = p_0$ with a delta-function source term.
The stationary equation for the gluon distribution then takes the approximate form:
\begin{align}
- 2 \int_0^{p/2} \dd k \,
\gamma_{g \leftrightarrow g g} \bigl(p; k, p-k \bigr) \,
f_g(p)
+
\int_0^\infty \dd k \,
2 \gamma_{g \leftrightarrow g g} \bigl(p+k; p, k \bigr) \,
f_g(p+k)
+ \frac{p^2 }{(2\pi)^3} p_0^{1/2} T^{3/2} \tilde{\Gamma} \delta (p - p_0) = 0,
\end{align}
where we include the source term ($= p_0^{1/2} T^{3/2} \Br \tilde{\Gamma} \delta (p - p_0)$)
and use $\Br = 1/\nu_g$.
Here, we neglected terms associated with quarks because the secondary particles are subdominant at $p \approx p_0$ and the pair production of quarks has no IR divergence.
Defining $x \equiv k / p$ and $x_{\rm max} \equiv p_0 / p$, we can rewrite it as
\begin{align}
- \tilde{\gamma}_{g \leftrightarrow g g} \qty[ \int_0^{1/2} \frac{\dd x}{x^{3/2}} \,
f_g(p)
-
\int_0^{x_{\rm max}-1} \frac{\dd x}{x^{3/2}} \,
f_g(p(1+x))
] + \frac{p}{(2\pi)^3} \tilde{\Gamma} \delta (x_{\rm max}-1) \approx 0,
\label {eq:gluonasym}
\end{align}
where we approximate
$\gamma_{g \leftrightarrow g g} \bigl(p; k, p-k \bigr) \simeq
\gamma_{g \leftrightarrow g g} \bigl(p+k; p, k \bigr) \simeq p^{1/2} T^{3/2} \tilde{\gamma}_{g \leftrightarrow g g} x^{-3/2}/2$ with a dimensionless constant $\tilde{\gamma}_{g \leftrightarrow g g}$ for $x \ll 1$.
Although this approximation is not valid over the entire integration domain of the first term, the contribution from $x \sim 1$ is subdominant and does not affect our discussion.
Now we make an ansatz $f_g(p) = C_g ((p_0 - p)/p_0)^{-1/2 + \epsilon}$
with $C_g$ and $\epsilon$ being constants.
Then we can perform the integral and obtain
\begin{align}
- \frac{2 \pi \epsilon}{\sqrt{x_{\rm max} - 1}} \tilde{\gamma}_{g \leftrightarrow g g} \, C_g ((p_0 - p)/p_0)^{-1/2 + \epsilon}
+ \frac{p}{(2\pi)^3} \tilde{\Gamma} \delta (x_{\rm max}-1) \approx 0,
\label {eq:bc1}
\end{align}
at leading order in $x_{\rm max} - 1 \ll 1$ and for $\epsilon \to 0$.
Here we used
\begin{align}
- \int_0^{1/2} \frac{\dd x}{x^{3/2}} \,
+
\int_0^{x_{\rm max}-1} \frac{\dd x}{x^{3/2}} \, \qty( \frac{x_{\rm max} -1 - x}{x_{\rm max} - 1} )^{-1/2 + \epsilon}
&\simeq - \frac{1}{\sqrt{x_{\rm max} - 1}}
\frac{2 \sqrt{\pi}\, \Gamma(1/2 + \epsilon)}{\Gamma(\epsilon)}
\\
&\simeq - \frac{2 \pi \epsilon}{\sqrt{x_{\rm max} - 1}}
\quad \text{for} \ \epsilon \to 0,
\end{align}
where we took the limit $x_{\rm max} - 1 \ll 1$ in the first line.
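The $\epsilon \to 0$ limit of the gamma-function combination used here is easily checked numerically:

```python
# Check that 2 sqrt(pi) Gamma(1/2 + eps) / Gamma(eps) -> 2 pi eps
# as eps -> 0, as used in the regularized integral above.
import math

eps = 1e-4
exact = 2 * math.sqrt(math.pi) * math.gamma(0.5 + eps) / math.gamma(eps)
approx = 2 * math.pi * eps
print(exact, approx)   # agree to ~0.01% at eps = 1e-4
```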
We can check that \eqref{eq:bc1} is consistent by integrating over $p$ from $p_0(1-\delta)$ to $p_0$ with a small $\delta$ and taking
the limit $\epsilon \to 0$, since the first term gives
\begin{align}
&- 2 \pi \epsilon \tilde{\gamma}_{g \leftrightarrow g g} \, C_g
\int_{p_0(1-\delta)}^{p_0} \dd p \sqrt{\frac{p}{p_0 - p}} \qty( \frac{p_0 - p}{p_0} )^{-1/2 + \epsilon}
\label{eq:deltafunct}
\\
&= - 2 \pi \tilde{\gamma}_{g \leftrightarrow g g} \, C_g p_0
\qty( \delta^\epsilon - 0 )
\\
&\to - 2 \pi \tilde{\gamma}_{g \leftrightarrow g g} \, C_g p_0
\quad \text{for} \ \epsilon \to 0.
\end{align}
Thus, $C_g$ is determined by requiring \eqref{eq:bc1} to hold:%
\footnote{
One might think that
the delta function contributes a factor of $1/2$ to the integral over $p$ from $p_0(1-\delta)$ to $p_0$.
However,
as discussed in footnote~\ref{footnote1},
it gives a factor of unity for the integral over the line segment $p \le p_0$ in our definition of the delta function or source term.
}
\begin{equation}
C_g = \frac{\tilde{\Gamma}}{(2\pi)^4 \tilde{\gamma}_{g \leftrightarrow g g}}.
\end{equation}
In summary, the asymptotic behavior of $f_g$ is
\begin{equation}
f_g(p) \approx \frac{\tilde{\Gamma}}{ (2\pi)^4 \tilde{\gamma}_{g \leftrightarrow g g}} \qty( \frac{p_0 - p}{p_0} )^{-1/2}
\end{equation}
for $p \approx p_0$.
\subsubsection{Case with primary $L_f, u_f, d_f, Q_f, \phi, W$}
A result similar to the gluon case holds for primary particles with IR-divergent splitting rates, \textit{i.e.}, $\si = L_f$, $u_{f'}$, $u_3$, $d_f$, $Q_{f'}$, $Q_3$, $\phi$, and $W$, once we replace $\tilde{\gamma}_{g \leftrightarrow g g}$ appropriately.
Here we summarize the results:
\begin{equation}
f_{s_1} (p) \approx
\frac{\tilde{\Gamma}}{(2\pi)^4 \sum_{a'} \tilde{\gamma}_{s_{1} \to g_{a'} s_{1}}
} \qty( \frac{p_0 - p}{p_0} )^{-1/2}
\end{equation}
for $s_1 = L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3, \phi, g, W$.
Here we explicitly write $\tilde{\gamma}_{s_{1} \to g_a s_{1}}$ as:
\begin{align}
&\sum_{a'} \tilde{\gamma}_{g_a \to g_{a'} g_a} = \frac{x^{3/2}}{p^{1/2} T^{3/2}} \qty( \frac{\nu_{g_a}}{ d_{A_a}^{(a)}} \gamma_{g_a \leftrightarrow g_a g_a} )
\quad {\rm for} \ g_a = (g, W)
\label{eq:tildegtogg}
\\
&\sum_a \tilde{\gamma}_{s_f \to g_a s_f} = \frac{x^{3/2}}{p^{1/2} T^{3/2}} \qty( \sum_a
\frac{\nu_{s_f}}{2 d_{F_s}^{(a)}} \gamma_{s_f \leftrightarrow a s_f} )
\quad {\rm for} \ s_f = (L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3),
\label{eq:tildeqtogq}
\\
&\sum_a \tilde{\gamma}_{\phi \to g_a \phi} = \frac{x^{3/2}}{p^{1/2} T^{3/2}} \qty( \sum_a
\frac{\nu_\phi}{2 d_\phi^{(a)}} \gamma_{\phi \leftrightarrow g_a \phi} )
\label{eq:tildephitogphi}
\end{align}
for $x \ll 1$, where the rates are evaluated at $p=p_0$.
Here, $a$ runs from 1 to 3 in the summation; however, the contribution from $a = 1$ vanishes in the limit $x \to 0$, because soft $B$-boson emission is suppressed by an additional factor of $x$.
This behavior implies that special attention must be paid to the right-handed lepton, which has a gauge interaction only with $B$, as explained later.
As a reference, we obtain
\begin{align}
(2\pi)^4 \sum_{a'} \tilde{\gamma}_{s_{1} \to g_{a'} s_{1}}
&\simeq
(
0.016, \
0.083, \
0.083, \
0.083, \
0.21, \
0.21, \
0.016, \
1.8, \
0.13
)
\nonumber\\
&{\rm for} \ s_1 = (L_f, \ u_{f'}, \ u_3, \ d_f, \ Q_{f'}, \ Q_3, \ \phi, \ g, \ W),
\end{align}
respectively.
Here we assumed $p = p_0 = 10^{12} T = 10^{15} \GeV$, though the dependence on $p$ is only logarithmic through the running of coupling constants.
\subsubsection{Case with primary $B$}
The cases with $\si = e_f$ and $B$ have qualitatively different distributions at $p \approx p_0$. This is because they experience only Abelian gauge interactions that lead to a different scaling for the splitting rate [see Eq.~\eqref{eq:split_func_rough}].
If the primary particle is an Abelian gauge boson $B$, it splits into other particles and does not undergo soft-dominated splitting processes.
The relevant part of its Boltzmann equation can be written as
\begin{align}
- \sum_{s_f} \tilde{\gamma}_{B \to s_f \bar{s}_f}
\int_0^{1} \dd x \frac{x^2 + (1-x)^2}{x^{1/2}(1-x)^{1/2}} \,
f_B(p)
- \tilde{\gamma}_{B \to \phi \phi^*}
\int_0^{1} \dd x \frac{2 \sqrt{x(1-x)}}{1} \,
f_B(p)
+ \frac{p_0}{(2\pi)^3} \tilde{\Gamma} \delta(p-p_0)
\approx 0,
\label{eq:boltzmanphoton}
\end{align}
where
\begin{align}
&\sum_{s_f} \tilde{\gamma}_{B \to s_f \bar{s}_f}
\simeq \frac{1}{p^{1/2} T^{3/2}}\qty[ \frac{x^2 + (1-x)^2}{x^{1/2}(1-x)^{1/2}} ]^{-1}
\qty( \sum_{s_f} \frac{\nu_{s_f}}{2 d_{F_s}^{(1)}} \gamma_{B \leftrightarrow s_f \bar{s}_f} )
\label{tildeBtoqq}
\\
&\tilde{\gamma}_{B \to \phi \phi^*}
= \frac{1}{p^{1/2} T^{3/2}}
\qty( \frac{2\sqrt{x(1-x)}}{1} )^{-1} \qty( \frac{\nu_{\phi}}{2 d_{\phi}^{(1)}} \gamma_{B \leftrightarrow \phi \phi^*} )
\label{tildeBtophiphi}
\end{align}
for $x \ll 1$.
Here, the summation over $s_f$ ($= e_f, L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3$) includes all flavors.
The integrals have no IR divergence and
can be performed analytically. The distribution
$f_B(p)$ then contains a delta-function piece:
\begin{equation}
f_B(p) = C_B' \delta (p - p_0) + C_B \qty( \frac{p_0 - p}{p_0} )^{m'},
\end{equation}
where we include a subdominant component that is determined later.
Here, $C_B'$ is determined to satisfy \eqref{eq:boltzmanphoton} such that
\begin{equation}
C_B'
= \frac{p_0}{(\pi/4) (2\pi)^3} \frac{\tilde{\Gamma}}{3 \sum_{s_f} \tilde{\gamma}_{B \to s_f \bar{s}_f} + \tilde{\gamma}_{B \to \phi \phi^*} }.
\end{equation}
As a reference, we obtain
\begin{align}
(\pi/4) (2\pi)^3 \qty[ 3 \sum_{s_f} \tilde{\gamma}_{B \to s_f \bar{s}_f} + \tilde{\gamma}_{B \to \phi \phi^*} ]
&\simeq
0.011,
\end{align}
where we assume $p = p_0 = 10^{12} T = 10^{15} \GeV$ though the dependence on $p$ is only logarithmic.
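Both angular integrals in Eq.~\eqref{eq:boltzmanphoton} can be checked with the substitution $x = \sin^2\theta$, which removes the endpoint singularities: the fermion-pair integral equals $3\pi/4$ and the Higgs-pair integral equals $\pi/4$, consistent with the overall factor $(\pi/4)$ and relative weight $3$ above:

```python
# Verify int_0^1 [x^2+(1-x)^2]/sqrt(x(1-x)) dx = 3*pi/4 and
#        int_0^1 2*sqrt(x(1-x)) dx = pi/4, using x = sin(theta)^2.
import math
from scipy.integrate import quad

ferm, _ = quad(lambda th: 2.0 * (math.sin(th)**4 + math.cos(th)**4),
               0.0, math.pi / 2)
higgs, _ = quad(lambda th: math.sin(2.0 * th)**2, 0.0, math.pi / 2)
print(ferm, higgs)   # ~ 2.356 (= 3*pi/4) and ~ 0.785 (= pi/4)
```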
From the delta-function distribution of $B$, other particles are produced via pair-production processes, as discussed below.
The subdominant part of the $B$ distribution is determined by emission from $e_f$ [see Eq.~\eqref{eq:formula3}].
\subsubsection{Case with primary $e_f$}
Like $B$, the right-handed lepton $e_f$ has only the Abelian gauge interaction.
In contrast to the non-Abelian case,
the soft-photon emission rate is further suppressed by the LPM effect by a factor of $k/p$, which removes the IR divergence.
As a result, the delta-function source term cannot be softened, and
$f_{e_f}$ consists of a delta function plus a subdominant component:
\begin{equation}
f_{e_f} (p) = C_{e_f}' \delta (p - p_0) + C_{e_f} \qty( \frac{p_0 - p}{p_0} )^{m'}.
\end{equation}
Here,
$C_{e_f}'$ satisfies
\begin{equation}
C_{e_f}'
\int_0^p \dd k \,
\gamma_{e_f \leftrightarrow B e_f} \bigl(p; k, p-k \bigr)
= \frac{p_0^2}{ (2\pi)^3} p_0^{1/2} T^{3/2}\tilde{\Gamma}.
\end{equation}
This equation can be approximated by
\begin{align}
C_{e_f}' \tilde{\gamma}_{e_f \to B e_f}
\int_0^{1} \dd x \frac{1 + (1-x)^2}{x^{1/2}(1-x)^{1/2}} \,
\approx \frac{p_0}{ (2\pi)^3} \tilde{\Gamma},
\end{align}
where
\begin{align}
\tilde{\gamma}_{e_f \to B e_f} =
\frac{1}{p^{1/2} T^{3/2}} \qty[ \frac{1 + (1-x)^2}{x^{1/2}(1-x)^{1/2}} ]^{-1}
\gamma_{e_f \leftrightarrow B e_f}
\label{eq:tildeetoBe}
\end{align}
for $x \ll 1$.
This provides
\begin{align}
C_{e_f}'
\approx \frac{p_0}{ (2\pi)^3}
\frac{8}{11 \pi} \frac{ \tilde{\Gamma}}{\tilde{\gamma}_{e_f \to B e_f}}.
\end{align}
As a reference, we obtain
\begin{align}
(2\pi)^3 \frac{11 \pi}{8} \tilde{\gamma}_{e_f \to B e_f}
&\simeq
3.4 \times 10^{-4},
\end{align}
where we assume $p = p_0 = 10^{12} T = 10^{15} \GeV$ though the dependence on $p$ is only logarithmic.
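The numerical factor $11\pi/8$ is the splitting-function integral above; it can be verified with the substitution $x = \sin^2\theta$, which removes the endpoint singularities:

```python
# Verify int_0^1 [1+(1-x)^2] / sqrt(x(1-x)) dx = 11*pi/8,
# using x = sin(theta)^2 so the integrand becomes 2*(1 + cos(theta)^4).
import math
from scipy.integrate import quad

val, err = quad(lambda th: 2.0 * (1.0 + math.cos(th)**4), 0.0, math.pi / 2)
print(val, 11 * math.pi / 8)   # both ~ 4.3197
```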
The subdominant part is determined from
\begin{align}
&- \int_0^p \dd k \,
\gamma_{e_f \leftrightarrow B e_f} \bigl(p; k, p-k \bigr) \,
f_{e_f} (p)
+
\int_0^\infty \dd k \,
\gamma_{e_f \leftrightarrow e_f B} \bigl(p+k; p, k \bigr) \, f_{e_f} (p+k)
= 0.
\end{align}
Substituting the distribution into this equation, we obtain
\begin{align}
&C_{e_f} =
\frac{8}{11\pi p_0}
C_{e_f}'
\\
&m' = -1/2.
\end{align}
\subsection{Secondary particles}
Next, we calculate the asymptotic behavior at $p \approx p_0$ of the secondary (and subsequent) particles produced from the primary particle.
First, we consider a toy model to represent the basic phenomenon.
Suppose a fermion $s_{n+1}$ is produced from a gauge boson $s_{n}$ with a known distribution $f_{s_n}$; it obeys the following stationary Boltzmann equation:
\begin{align}
&-
\int_0^p \dd k \,
\gamma_{s_{n+1} \leftrightarrow s' s_{n+1}} \bigl(p; k, p-k \bigr) \,
f_{s_{n+1}} (p)
\nn
&\qquad +
\int_0^{p_0-p} \dd k \,
\qty[
2 \gamma_{s_{n} \leftrightarrow s_{n+1} s_{n+1}} \bigl(p+k; p, k \bigr) \,
f_{s_{n}} (p+k)
+ \gamma_{s_{n+1} \leftrightarrow s_{n+1} s' }
\bigl(p+k; p, k \bigr) \,
f_{s_{n+1}} (p+k)
]
= 0
,
\label{eq:toy}
\end{align}
where $s'$ is a non-Abelian gauge boson or a Higgs boson, which may or may not be $s_n$.
We define $x \equiv k / p$ and $x_{\rm max} \equiv p_0 / p$ and rewrite the equation as
\begin{align}
- \tilde{\gamma}_{s_{n+1} \leftrightarrow s' s_{n+1}} \qty[ \int_0^{1} \frac{\dd x}{x^{3/2}} \,
f_{s_{n+1}}(p)
-
\int_0^{x_{\rm max}-1} \frac{\dd x}{x^{3/2}} \,
f_{s_{n+1}}(p(1+x))
]
+
2 \tilde{\gamma}_{s_{n} \leftrightarrow s_{n+1} s_{n+1}} \int_0^{x_{\rm max}-1} \frac{\dd x}{x^{1/2}} \,
f_{s_{n}}(p(1+x))
\approx 0,
\end{align}
where
$\gamma_{s_{n+1} \leftrightarrow s' s_{n+1}} \bigl(p; k, p-k \bigr) \simeq
\gamma_{s_{n+1} \leftrightarrow s_{n+1} s' } \bigl(p+k; p, k \bigr) \simeq p^{1/2} T^{3/2} \tilde{\gamma}_{s_{n+1} \leftrightarrow s' s_{n+1}}x^{-3/2}$
and
$\gamma_{s_{n} \leftrightarrow s_{n+1} s_{n+1}} \bigl(p+k; p, k \bigr) \simeq p^{1/2} T^{3/2} \tilde{\gamma}_{s_{n} \leftrightarrow s_{n+1} s_{n+1}} x^{-1/2}$
with a constant $\tilde{\gamma}_{s_{n+1} \leftrightarrow s' s_{n+1}}$
and $\tilde{\gamma}_{s_{n} \leftrightarrow s_{n+1} s_{n+1}}$
for $x \ll 1$.
Now,
suppose that $f_{s_n}$ has the form $f_{s_n} = C_{s_n} \qty( (p_0 - p) / p_0 )^m$ with constants $C_{s_n}$ and $m$.
Using an ansatz $f_{s_{n+1}} = C_{s_{n+1}} \qty( (p_0 - p) / p_0 )^{m'}$, we can perform the above integral and obtain
\begin{align}
- 2 \sqrt{\pi} \frac{\Gamma(1 + m')}{ \Gamma(1/2+m')} C_{s_{n+1}} \tilde{\gamma}_{s_{n+1} \leftrightarrow s' s_{n+1}} \qty( (p_0 - p) / p_0 )^{m'-1/2}
+
2 \sqrt{\pi} \frac{\Gamma(1 + m)}{\Gamma(3/2+m)} C_{s_{n}} \tilde{\gamma}_{s_{n} \leftrightarrow s_{n+1} s_{n+1}} \qty( (p_0 - p) / p_0 )^{m+1/2}
\approx 0
\end{align}
at leading order in $x_{\rm max} - 1 \ll 1$.
Therefore, we should take
\begin{align}
&C_{s_{n+1}} = \frac{1}{1+m} \frac{\tilde{\gamma}_{s_{n} \leftrightarrow s_{n+1} s_{n+1}}}{\tilde{\gamma}_{s_{n+1} \leftrightarrow s' s_{n+1}}} C_{s_{n}}
\\
&m' = m + 1
\end{align}
for the distribution of particle species $s_{n+1}$.
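The gamma-function factors $\Gamma(1+m')/\Gamma(1/2+m')$ arise from an analytically continued beta integral; with the IR divergence subtracted, the equivalent regularized identity $\int_0^1 \dd u \, u^{-3/2} \qty[ (1-u)^{m} - 1 ] = \Gamma(-1/2)\Gamma(1+m)/\Gamma(1/2+m) + 2$ can be checked numerically (here for $m = 1/2$):

```python
# Check the regularized identity
#   int_0^1 u^{-3/2} [(1-u)^m - 1] du
#     = Gamma(-1/2) Gamma(1+m) / Gamma(1/2+m) + 2,
# which underlies the Gamma(1+m')/Gamma(1/2+m') factors in the text.
import math
from scipy.integrate import quad

m = 0.5
lhs, _ = quad(lambda u: u**-1.5 * ((1.0 - u)**m - 1.0), 0.0, 1.0)
rhs = math.gamma(-0.5) * math.gamma(1.0 + m) / math.gamma(0.5 + m) + 2.0
print(lhs, rhs)   # both ~ -1.1416 (= 2 - pi)
```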
The form of \eqref{eq:toy} corresponds to the splitting process from non-Abelian gauge bosons to fermions.
A similar discussion and conclusion hold for the emission of non-Abelian gauge bosons from fermions, upon replacing
$\tilde{\gamma}_{s_{n} \leftrightarrow s_{n+1} s_{n+1}}$
and $\tilde{\gamma}_{s_{n+1} \leftrightarrow s' s_{n+1}}$ appropriately.
However, the case involving the Abelian gauge boson requires special attention because of the absence of an IR divergence, as we will see below.
\subsubsection{Case with $L_f, u_f, d_f, Q_f, \phi, g$, and $W$ production}
From the discussion above, the distribution of $s_{n+1}$ is determined by the following procedure: the second term of \eqref{eq:toy} is computed from a known distribution $f_{s_n}$ and balanced against the first and third terms computed from the ansatz. In the limit $x_{\rm max} - 1 \ll 1$, we write the stationary Boltzmann equation generically as
\begin{align}
- \sum_a \tilde{\gamma}_{s_{n+1} \to g_a s_{n+1}}
\qty[ \int_0^{1} \frac{\dd x}{x^{3/2}} \,
f_{s_{n+1}}(p)
-
\int_0^{x_{\rm max}-1} \frac{\dd x}{x^{3/2}} \,
f_{s_{n+1}}(p(1+x))
]
+
\tilde{\gamma}_{s_{n} \to s_{n+1}} \int_0^{x_{\rm max}-1} \frac{\dd x}{x^{b}} \,
f_{s_{n}}(p(1+x))
\approx 0
\end{align}
for $s_{n+1} = L_f, u_f, d_f, Q_f, \phi, g$, and $W$,
where $\tilde{\gamma}_{s_{n+1} \to g_a s_{n+1}}$ is given by
Eqs.~(\ref{eq:tildegtogg}), (\ref{eq:tildeqtogq}), and (\ref{eq:tildephitogphi}).
Here, $b = -1/2, 1/2$, or $3/2$ depending on the splitting,
and the explicit form of $\tilde{\gamma}_{s_{n} \to s_{n+1}}$ is given below.
Taking
$f_{s_{n+1}} = C_{s_{n+1}} \qty( (p_0 - p) / p_0 )^{m'}$, we obtain
\begin{align}
&C_{s_{n+1}} = \frac{\Gamma(1/2 + m')}{2 \sqrt{\pi} \Gamma(1+m')} \frac{\Gamma(1-b) \Gamma (1+m)}{\Gamma(2-b+m)} \frac{\tilde{\gamma}_{s_{n} \to s_{n+1}}}{\sum_a \tilde{\gamma}_{s_{n+1} \to g_a s_{n+1}}} C_{s_{n}}
\\
&m' = m + 3/2 - b
\end{align}
for $f_{s_n} = C_{s_n} \qty( (p_0 - p) / p_0 )^m$
or
\begin{align}
&C_{s_{n+1}} = \frac{\Gamma(1/2 + m')}{2 \sqrt{\pi} \Gamma(1+m')} \frac{\tilde{\gamma}_{s_{n} \to s_{n+1}}}{\sum_a \tilde{\gamma}_{s_{n+1} \to g_a s_{n+1}}} C_{s_{n}}'
\\
&m' = 1/2 - b
\end{align}
for $f_{s_n} = C_{s_n}' \delta (p - p_0)$.
Here we provide $\tilde{\gamma}_{s_{n} \to s_{n+1}}$ for each process:
\begin{align}
&\tilde{\gamma}_{g_a \to s_f} = \frac{x^{1/2}}{p^{1/2} T^{3/2}} \qty(
2 \frac{\nu_{s_f}}{2 d_{F_s}^{(a)}} \gamma_{g_s \leftrightarrow s_f \bar{s}_f} )
\quad {\rm with} \ b = 1/2
\quad {\rm for} \ s_f = (e_f, L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3), \ \ g_a = (g, W, B)
\label{eq:tildegatoqs}
\\
&\tilde{\gamma}_{s_f \to g_{a'}} = \frac{x^{1/2}}{p^{1/2} T^{3/2}} \qty(
\frac{\nu_{s_f}}{2 d_{F_s}^{(a')}} \gamma_{s_f \leftrightarrow g_{a'} s_f} )
\quad {\rm with} \ b = 1/2
\quad {\rm for} \ s_f = (L_f, u_{f'}, u_3, d_f, Q_{f'}, Q_3), \ \ g_a = (g, W, B),
\label{eq:tildeqstoga}
\\
&\tilde{\gamma}_{g_a \to \phi} = \frac{x^{-1/2}}{p^{1/2} T^{3/2}} \qty(
2 \frac{\nu_{\phi}}{2 d_{\phi}^{(a)}} \gamma_{g_s \leftrightarrow \phi \phi^*} )
\quad {\rm with} \ b = -1/2
\quad {\rm for} \ g_a = (g, W, B)
\\
&\tilde{\gamma}_{\phi \to g_{a'}} = \frac{x^{-1/2}}{p^{1/2} T^{3/2}} \qty(
\frac{\nu_{\phi}}{2 d_{\phi}^{(a')}} \gamma_{\phi \leftrightarrow g_{a'} \phi} )
\quad {\rm with} \ b = -1/2
\quad {\rm for} \ g_a = (g, W, B),
\label{eq:tildephitoga}
\\
&\tilde{\gamma}_{s_f \to \phi} = \frac{x^{1/2}}{p^{1/2} T^{3/2}} \qty(
3 \gamma_{s_f \leftrightarrow \phi s_{f'}} )
\quad {\rm with} \ b = 1/2
\quad {\rm for} \ (s_f, s_{f'}) = (u_3, Q_3) \ {\rm or} \ (Q_3, u_3)
\\
&\tilde{\gamma}_{\phi \to s_f} = \frac{x^{1/2}}{p^{1/2} T^{3/2}} \qty(
6 \gamma_{\phi \leftrightarrow s_f s_{f'}} )
\quad {\rm with} \ b = 1/2
\quad {\rm for} \ (s_f, s_{f'}) = (u_3, Q_3) \ {\rm or} \ (Q_3, u_3)
\\
&\tilde{\gamma}_{s_f \to s_{f'}} = \frac{x^{-1/2}}{p^{1/2} T^{3/2}} \qty(
3 \gamma_{s_f \leftrightarrow \phi s_{f'}} )
\quad {\rm with} \ b = -1/2
\quad {\rm for} \ (s_f, s_{f'}) = (u_3, Q_3) \ {\rm or} \ (Q_3, u_3)
\end{align}
for $x \ll 1$.
Generically, $s_f$ represents a single generation $f$. Summing over $f$
in \eqref{eq:tildegatoqs} yields the total splitting rate
[see, \textit{e.g.}, Eq.~\eqref{eq:boltzman_g}].
The above results imply that the production of the Higgs boson is suppressed by an additional power of $(p_0 - p)/p_0$ for $p \approx p_0$.
A similar suppression is obtained for the emission of $W$ from $\phi$
and for the production of a right/left-handed top quark from a left/right-handed quark via the Yukawa interaction.
\subsubsection{Case with $B$ production}
The emission of photons behaves qualitatively differently from the emission of non-Abelian gauge bosons.
For $s_{n+1} = B$, we generically write
\begin{align}
- \sum_{s_f} \tilde{\gamma}_{B \to s_f \bar{s}_f}
\int_0^{1} \dd x \frac{x^2 + (1-x)^2}{x^{1/2}(1-x)^{1/2}} \,
f_{B}(p)
- \tilde{\gamma}_{B \to \phi \phi^*}
\int_0^{1} \dd x \, 2 \sqrt{x(1-x)} \,
f_{B}(p)
+
\tilde{\gamma}_{s_{n} \to B} \int_0^{x_{\rm max}-1} \frac{\dd x}{x^{b}} \,
f_{s_{n}}(p(1+x))
\approx 0\,,
\end{align}
where $s_f$ represents the fermions.
Here,
$\tilde{\gamma}_{s_{n} \to B}$ is given by Eqs.~(\ref{eq:tildeqstoga}) and (\ref{eq:tildephitoga}), whereas $\sum_{s_f} \tilde{\gamma}_{B \to s_f \bar{s}_f}$ and $\tilde{\gamma}_{B \to \phi \phi^*}$ are given by Eqs.~(\ref{tildeBtoqq}) and (\ref{tildeBtophiphi}).
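The $x$-integrals in the loss terms above are Beta functions: the fermionic kernel integrates to $3\pi/4$ and the scalar kernel to $\pi/4$, which is the origin of the relative factor of 3 between the two channels in the result below. A quick numerical check, using the substitution $x = \sin^2 t$ to tame the endpoint singularity:

```python
from math import sin, cos, pi

def angular_integral(kernel, n=100000):
    """Integrate kernel(x) over x in (0,1) via x = sin^2(t),
    dx = 2 sin(t) cos(t) dt, with a midpoint rule on t in (0, pi/2);
    the Jacobian cancels the 1/sqrt(x(1-x)) endpoint singularity."""
    h = (pi / 2) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x = sin(t) ** 2
        total += kernel(x) * 2.0 * sin(t) * cos(t) * h
    return total

# Fermion-pair kernel (x^2 + (1-x)^2)/sqrt(x(1-x)): expect 3*pi/4
I_f = angular_integral(lambda x: (x**2 + (1 - x)**2) / (x * (1 - x)) ** 0.5)
# Scalar-pair kernel 2*sqrt(x(1-x)): expect pi/4, i.e. a relative factor of 3
I_s = angular_integral(lambda x: 2.0 * (x * (1 - x)) ** 0.5)
```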
Taking
$f_{B} = C_{B} \qty( (p_0 - p) / p_0 )^{m'}$, we obtain
\begin{align}
&C_{B} =
\frac{4}{\pi} \frac{\Gamma(1-b) \Gamma (1+m)}{\Gamma(2-b+m)} \frac{\tilde{\gamma}_{s_{n} \to B}}{3 \sum_{s_f} \tilde{\gamma}_{B \to s_f \bar{s}_f} + \tilde{\gamma}_{B \to \phi \phi^*} } C_{s_{n}}
\\
&m' = m + 1 - b
\end{align}
for $f_{s_n} = C_{s_n} \qty( (p_0 - p) / p_0 )^m$
or
\begin{align}
&C_{B} =
\frac{4}{\pi} \frac{\tilde{\gamma}_{s_{n} \to B}}{3 \sum_{s_f} \tilde{\gamma}_{B \to s_f \bar{s}_f} + \tilde{\gamma}_{B \to \phi \phi^*} }
C_{s_{n}}'
\label{eq:formula3}
\\
&m' = - b
\end{align}
for $f_{s_n} = C_{s_n}' \delta (p - p_0)$.
The photon emission from the Higgs is suppressed by an additional power of $(p_0- p)/p_0$.
This process is subdominant, as can be observed from Fig.~\ref{fig:flow1}.
\subsubsection{Case with $e_f$ production}
For $s_{n+1} = e_f$, we generally write
\begin{align}
&- \tilde{\gamma}_{e_f \to B e_f}
\qty[ \int_0^{1} \dd x \frac{1 + (1-x)^2}{x^{1/2}(1-x)^{1/2}} \,
f_{e_f}(p)
-
\int_0^{x_{\rm max}-1} \dd x \frac{(1+x)^2 + x^2}{(1+x)^{1/2}x^{1/2}} \,
f_{e_f}(p(1+x))
]
\nonumber\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +
\tilde{\gamma}_{B \to e_f} \int_0^{x_{\rm max}-1} \frac{\dd x}{x^{b}} \,
f_{B}(p(1+x))
\approx 0,
\end{align}
where $\tilde{\gamma}_{B \to e_f}$ and $\tilde{\gamma}_{e_f \to B e_f}$ are given by \eqref{eq:tildegatoqs} and \eqref{eq:tildeetoBe}, respectively.
Here we have used the fact that only the case $s_n = B$ is present.
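The loss-term kernel here integrates to $\int_0^1 \dd x\, [1+(1-x)^2]/\sqrt{x(1-x)} = 11\pi/8$, which fixes the $8/(11\pi)$ prefactor below; a quick numerical check under the same $x = \sin^2 t$ substitution:

```python
from math import sin, pi

n = 100000
h = (pi / 2) / n
total = 0.0
for i in range(n):
    t = (i + 0.5) * h
    x = sin(t) ** 2
    # The Jacobian 2 sin(t) cos(t) cancels 1/sqrt(x(1-x)),
    # leaving the smooth integrand 2*(1 + (1-x)^2)
    total += (1.0 + (1.0 - x) ** 2) * 2.0 * h
```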
Taking
$f_{e_f} = C_{e_f} \qty( (p_0 - p) / p_0 )^{m'}$, we obtain
\begin{align}
&C_{e_f} =
\frac{8}{11\pi}
\frac{\Gamma(1-b) \Gamma (1+m)}{\Gamma(2-b+m)} \frac{\tilde{\gamma}_{B \to e_f}}{\tilde{\gamma}_{e_f \to B e_f}} C_B
\\
&m' = m + 1 - b
\end{align}
for $f_{B} = C_{B} \qty( (p_0 - p) / p_0 )^m$
or
\begin{align}
&C_{e_f} =
\frac{8}{11\pi}
\frac{\tilde{\gamma}_{B \to e_f}}{\tilde{\gamma}_{e_f \to B e_f}} C_{B}'
\\
&m' = - b
\end{align}
for $f_B = C_{B}' \delta (p - p_0)$. Note that $b=1/2$ in this case.
\small
\bibliographystyle{utphys}
\bibliography{draft}
|
Title:
Principal-Component Interferometric Modeling (PRIMO), an Algorithm for EHT Data I: Reconstructing Images from Simulated EHT Observations |
Abstract: The sparse interferometric coverage of the Event Horizon Telescope (EHT)
poses a significant challenge for both reconstruction and model fitting of
black-hole images. PRIMO is a new principal components analysis-based algorithm
for image reconstruction that uses the results of high-fidelity general
relativistic, magnetohydrodynamic simulations of low-luminosity accretion flows
as a training set. This allows the reconstruction of images that are both
consistent with the interferometric data and that live in the space of images
that is spanned by the simulations. PRIMO uses Markov Chain Monte Carlo sampling to
fit a linear combination of principal components derived from an ensemble of
simulated images to interferometric data. We show that PRIMO can efficiently
and accurately reconstruct synthetic EHT data sets for several simulated
images, even when the simulation parameters are significantly different from
those of the image ensemble that was used to generate the principal components.
The resulting reconstructions achieve resolution that is consistent with the
performance of the array and do not introduce significant biases in image
features such as the diameter of the ring of emission.
| https://export.arxiv.org/pdf/2208.01667 |
\title{Principal-Component Interferometric Modeling (\texttt{PRIMO}), an Algorithm for EHT Data I: Reconstructing Images from Simulated EHT Observations}
\author{Lia Medeiros}
\altaffiliation{NSF Astronomy and Astrophysics Postdoctoral Fellow}
\affiliation{School of Natural Sciences, Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540}
\author{Dimitrios Psaltis}
\affiliation{Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721}
\author{Tod R. Lauer}
\affiliation{NSF's National Optical Infrared Astronomy Research Laboratory, Tucson, AZ 85726}
\author{Feryal {{\"O}zel}}
\affiliation{Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721}
\keywords{accretion, accretion disks --- black hole physics --- Galaxy: center --- techniques: image processing}
\section{Introduction}\label{sec:intro}
The Event Horizon Telescope (EHT) collaboration recently imaged the supermassive black hole in the nearby giant elliptical galaxy M87 for the first time using sub-mm VLBI observations \citep{2019ApJ...875L...1E, 2019ApJ...875L...2E, 2019ApJ...875L...3E, 2019ApJ...875L...4E, 2019ApJ...875L...5E, 2019ApJ...875L...6E}. The first polarized images of the black hole in M87 were published a short time later and indicated a strong and ordered magnetic field in the vicinity of the black hole \citep{2021ApJ...910L..12E,2021ApJ...910L..13E}.
Reconstructing images of the M87 supermassive black hole was challenging. The 2017 observations included only five telescope locations, resulting in markedly sparse interferometric ($uv$-plane) coverage. This challenge was extensively addressed in the EHT papers and particularly in \citet{2019ApJ...875L...4E}, which is mainly concerned with a detailed discussion of the image reconstruction techniques used. In brief, a variety of algorithms was employed and all were extensively tested with simulations and inter-compared on the images recovered from the actual observations. Of necessity, each algorithm incorporated a variety of assumptions to address the incomplete $uv$-plane coverage, which in turn imply associated uncertainties in the images recovered. The aim of this diverse approach was to be conservative with the reconstructions and ensure that the major quantities of astrophysical interest that were recovered from the images were robust.
We begin with a discussion of the general image reconstruction techniques used so far, followed by the motivation for the \texttt{PRIMO} methodology that we introduce here.
\smallskip
\noindent \textit{General purpose imaging algorithms:} These include the traditional CLEAN algorithm \citep{1974A&AS...15..417H}, as well as new maximum likelihood methods (see e.g., \citealt{2019ApJ...875L...4E, 2016ApJ...829...11C,2017AJ....153..159A}). The challenge for general-purpose image reconstruction algorithms is to generate an image among an infinite set of formally allowable solutions that are compatible with the data. In order to reduce the range of possible solutions, regularizers and secondary constraints (such as image global entropy, smoothness, local curvature, etc.) are levied to recover an image that matches expectations of realistic structure. These methods are agnostic to theoretical predictions on image morphology and can therefore be used to determine basic image features such as the presence of a ring or brightness depression. However, introducing constraints on the plausibility of the image components is unavoidable and can lead to artifacts as shown, e.g., in Figure 10 of \citet{2019ApJ...875L...4E}. Moreover, even though the regularizing conditions are reasonable for some astronomical images, they may not be well motivated for black hole images since simulations predict steep gradients in parts of the image~\citep{2015ApJ...814..115P}.
\smallskip
\noindent \textit{Geometric fits:} These are posterior sampling algorithms that fit semi-analytic or geometric crescent- and ring-like models directly to interferometric data \citep{2013MNRAS.434..765K,2019ApJ...875L...6E}. The models invoke a much smaller number of free parameters and, therefore, do not require additional regularizers the way that the general purpose imaging algorithms do, as described above. However, in some cases these simple models may not be able to reproduce the complex image morphology predicted for black hole images. Indeed, simulations predict that the turbulent flows generate complex and stochastic structures as a consequence of the presence of bright, magnetically dominated flux tubes that are lensed by the black hole (see e.g., \citealt{2015ApJ...812..103C,2019ApJ...875L...5E}). Since the expected level of complexity is not included in the geometric model fits, the posteriors of the model parameters are affected by the most influential data points and may be biased \citep{2022ApJ...928...55P}.
\smallskip
\noindent \textit{Comparisons to numerical simulations:} These methods compare simulated images from general relativistic magnetohydrodynamic (GRMHD) simulations to interferometric data allowing for a rotation and scaling of the image relative to the data (see e.g., \citealt{2019ApJ...875L...5E}). This comparison leads to constraints on physically meaningful parameters about the accretion flow. However, a single EHT observation corresponds to a particular realization of the turbulent structure of the accretion that may be consistent with simulations only in a statistical sense. As a consequence, these methods benefit from prior characterization of the statistics of the various image structures and of the corresponding interferometric observables~\citep{2016ApJ...832..156K,2019ApJ...875L...6E}.
We present a novel principal-component interferometric modeling (\texttt{PRIMO}) algorithm that combines the desirable characteristics of the methods listed above while attempting to reduce their limitations. \texttt{PRIMO} uses a large library of GRMHD simulations as a ``training set'' for image reconstruction and model fitting. Instead of employing images that are smooth (as in the case of the maximum likelihood imaging methods) or consist of a limited number of broadened point sources (as in the case of CLEAN), it utilizes images that are broadly consistent with the space of possibilities spanned by the simulations. Because it involves a relatively small number of parameters, i.e., the coefficients of the principal components, it does not require imposing regularizers as is done in maximum likelihood methods. Furthermore, it is not limited to simple geometric shapes, such as crescents and rings, and can accurately reconstruct the stochastic features expected in black hole images. At the same time, it does not compare specific realizations of the turbulent images with the data but rather uses a principal-component decomposition to derive a basis for the space of possible images that are consistent with theoretical expectations. Finally, the PCA algorithm provides not only the best-fit image but also a complete posterior over all image structures that are consistent with the data.
In addition to PCA, several other decompositions have been developed and applied to a multitude of problems. Using bases (called dictionaries), derived from PCA or other decompositions, to sparsely represent a training set falls under the umbrella of dictionary learning (see e.g. \citealt{Shao2014} for a review of dictionary learning applied to image de-noising). Within astronomy, dictionary learning is frequently used to de-noise images and spectra, or for image classification. Convolutional neural networks (CNN) are also becoming ubiquitous in astronomy and have recently been applied to the output of the CLEAN algorithm to de-noise the results of image reconstructions \citep{2022MNRAS.509..990G}. Our goal, however, is not de-noising in the image domain: \texttt{PRIMO} reconstructs images directly from the Fourier-domain visibilities. PCA is well-suited for our application since it enables remarkably powerful dimensionality reduction, allowing us to fit only 20 PCA components to the visibilities. Nonnegative matrix factorization (NMF), for example, is also commonly used in astronomical applications (see e.g. \citealt{2016arXiv161206037Z}). However, requiring that the basis functions be positive definite can result in biases if the basis is truncated, especially near steep gradients like those expected near the boundary of the black hole shadow.
The PCA approach is very general but employs its own restrictions on the subset of allowable images by only requiring that the solution is likely to fall within the span of image morphologies produced by the training set of simulations. However, as is well known~\citep[see e.g., ][]{10.1162/jocn.1991.3.1.71} and as we will also demonstrate later, the PCA-based algorithm can reconstruct images even if the particular image structures are different in their details from the individual simulation snapshots that were used for the training set. Therefore, the method can be applied to reconstruct a black hole image even if the GRMHD outputs do not precisely represent all of its characteristics.
In \citet{2018ApJ...864....7M}, we showed that PCA could be used to efficiently represent the ``space'' of image morphologies seen in GRMHD simulations of an accreting black hole. The full range of structures seen in a simulation can then be encoded as a linear combination of a compact set of orthogonal ``eigenimages,'' with each eigenimage describing a portion of the structure seen in the simulation. Critically, PCA minimizes the number of components needed to describe the full variance of the simulation and the components can be ordered by the decreasing fraction of the variance that they describe.
A particular benefit of the PCA approach is that the orthogonal compact basis derived in image space transforms identically to the same basis that would be derived directly by representing the simulations in visibility (Fourier) space (see \citealt{2018ApJ...864....7M} for a mathematical proof). In short, the basis can be built in the image domain, where we have the best {\it a priori} knowledge of the likely image morphology, but is fitted in the complementary visibility space in which the observations are presented.
Another benefit of \texttt{PRIMO} is that it not only provides excellent recovery of structure up to the formal resolution limit of the observations, but can provide ``super-resolution'' at yet finer scales. Rich knowledge of the intrinsic source structure allows for quantitative measures of features that could not be recovered without strong priors. The principal-component basis encodes the intrinsic correlations of the source structure over a range of angular scales. Interferometric observations of structure within the resolution limit can implicitly constrain the structure at finer angular scales somewhat beyond it.
Given a set of interferometric data and a compact set of eigenimages, the problem of image reconstruction and model fitting reduces to finding the relative weights of the eigenimages that are necessary for their weighted linear combination to be consistent with the data. It is important to emphasize, however, that while the image space of simulated images is completely sampled by the PCA basis, the EHT coverage provides only sparse, incomplete sampling of the visibility space. As such, the basis functions in that space (i.e., the visibility maps of the eigenimages) are no longer orthogonal when sampled only at the discrete EHT baselines. As a result, their coefficients must be fitted to the data with a procedure that respects the resultant covariances that now appear when the PCA components are fitted to the visibilities.
The goal of this paper is to progress from the initial presentation of the PCA image reconstruction methodology introduced by \citet{2018ApJ...864....7M} to a complete description of how to apply it to analyzing the EHT observations of accreting supermassive black holes. In Section \ref{sec:PCAsims}, we describe the GRMHD simulations that we used to construct the PCA basis, the preprocessing of the simulated images, and finally the PCA basis that we derived from them. In Section \ref{sec:MCMC}, we describe the MCMC algorithm we use to fit interferometric data in order to obtain posteriors over the relative weights of the PCA components. We present results of applying \texttt{PRIMO} to simulated interferometric data in Section \ref{sec:results} and summarize our work in Section \ref{sec:discussion}.
\section{Building a PCA Basis From GRMHD simulations}\label{sec:PCAsims}
As outlined in \citet{2018ApJ...864....7M}, we perform PCA on images generated from GRMHD simulations to describe the image space in which EHT images of real accreting black holes are likely to reside. In this section, we detail the methodology used to derive the linear combination of PCA components needed to fit a given data set.
\subsection{The GRMHD Simulations}\label{sec:sims}
The GRMHD simulation images employed to generate the PCA basis were created using the massively parallel GPU-based code {\tt GRay} \citep{2013ApJ...777...13C}. As input to the radiative transfer and ray-tracing simulations, we use two high-resolution GRMHD simulations with long time spans that were created using the 3D {\tt HARM} code \citep{2003ApJ...589..444G, 2012MNRAS.426.3241N, 2013MNRAS.436.3856S}.
The configuration of a GRMHD simulation is specified by a set of physical parameters. For the purposes of validating our algorithm, we generated a set of 30 simulation runs, with parameters covering a wide range of possible emission models of the inner accretion flow around the black hole in M87, as follows:
\begin{itemize}
\item GRMHD simulations only evolve the energy density of the plasma and, therefore, primarily the temperature of the ions and not of the electrons. In the accretion flow, the ion-to-electron temperature ratio is expected to be determined primarily by the plasma $\beta\equiv P_{\mathrm{gas}}/P_{\mathrm{mag}}$ parameter, which is the ratio of the local gas to magnetic pressures~\citep{2015ApJ...799....1C}. In the polar funnel, which is magnetically dominated, the two temperatures are expected to be nearly equal due to magnetic conduction~\citep{2015MNRAS.454.1848R}. In order to capture this behaviour, we used a prescription for the electrons that sets the ion-to-electron temperature ratio $T_{\rm i}/T_{\rm e}$ to \citep{2016A&A...586A..38M, 2019ApJ...875L...4E}
\begin{equation}
\frac{T_i}{T_e}=R_{\mathrm{high}}\frac{\beta^2}{1+\beta^2} +\frac{1}{1+\beta^2}.
\end{equation}
We explore three values for $R_{\mathrm{high}} = 1,\, 20,\, 80$, but note that the $R_{\mathrm{high}}=1$ simulations effectively result in an electron temperature that is equal to the ion temperature throughout the plasma, which is inconsistent with the assumption of a radiatively inefficient flow. We choose to include the $R_{\mathrm{high}}=1$ simulations in our library only for consistency with previous EHT publications and in order to explore a broad, albeit somewhat unphysical, range of image structures.
\item The electron density scale provides an overall normalization that sets the total accretion rate in the simulation. We explored values for the electron density scale of $n_e = 10^5,\, 2.5\times 10^5,\, 5\times 10^5,\, 7.5\times 10^5,\, 10^6 \,\mathrm{cm}^{-3}$. We note that the higher values of electron number density are unlikely for M87, given the measured 1.3~mm flux and polarization signatures~\citep{2019ApJ...875L...5E,2021ApJ...910L..13E}, but we include them in our simulation data set for completeness.
\item In half of the simulations, we used initial conditions that resulted in strong, ordered magnetic fields and a magnetically arrested disk (MAD, see e.g., \citealt{2012MNRAS.426.3241N}); in the other half, we used initial conditions that resulted in a less-ordered, weaker, magnetic field, commonly referred to as standard and normal evolution (SANE, see e.g., \citealt{2003ApJ...592.1042I}).
\item We set the inclination angle of the black hole spin axis relative to the observer's line of sight to $i=17^{\circ}$. This parameter only enters the radiative transfer calculation and determines the relative asymmetry of the image (see, e.g., \citealt{2022ApJ...924...46M}). We made this choice under the assumption that the spin axis of the black hole is parallel to the large scale jet that has been observed at radio wavelengths \citep{2018ApJ...855..128W}. In the PCA model described below, we will allow for the possibility that the spin axis is either aligned or anti-aligned with the large scale jet as well as for an arbitrary position angle of the spin axis in the plane of the sky. Even though the last two considerations affect the orientation of the black-hole image in the sky, they are trivial geometric transformations and do not enter the GRMHD simulations.
\item We set the black hole mass to $M = 6.5\times10^9 M_{\odot}$ for the initial preparation of the simulations, which is a value consistent with the one obtained by stellar dynamics~\citep{2011ApJ...729..119G} and by the first EHT imaging results~\citep{2019ApJ...875L...6E}. Changing this value has two effects on the resulting simulations. First, it rescales the linear size of each image by a factor proportional to the mass. Second, it affects the outcome of the radiative transfer calculations by altering the synchrotron emission/absorption coefficients and by rescaling the photon path lengths. For the former effect, which is a trivial geometric transformation, we explore different mass values by rescaling the angular size of the PCA basis. For the latter effect, we note that, in the relevant range of parameters, the black-hole mass is nearly degenerate with the electron number density scale $n_e$, with the image brightness at each pixel scaling as $\sim n_{e}^2 M$ (see Appendix A of \citealt{2022ApJ...925...13S}; see also \citealt{2015ApJ...799....1C}). By exploring a broad range of values for the electron density scale and allowing for a rescaling of the images, we effectively probe a broad range of black-hole masses.
\item We assumed a single black hole spin parameter of $a=0.9$ for simplicity since image morphology is only weakly dependent on spin~\citep{2019ApJ...875L...5E}. Indeed, as we show in later sections, the same PCA basis can also be used to reconstruct images of black holes with other spins.
\end{itemize}
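The $R_{\mathrm{high}}$ prescription above interpolates between the magnetically dominated funnel ($\beta \ll 1$, where $T_i/T_e \to 1$) and the gas-pressure-dominated disk ($\beta \gg 1$, where $T_i/T_e \to R_{\mathrm{high}}$); a minimal sketch:

```python
def temperature_ratio(beta, r_high):
    """Ion-to-electron temperature ratio of the R_high prescription:
    T_i/T_e = R_high * beta^2/(1 + beta^2) + 1/(1 + beta^2)."""
    b2 = beta * beta
    return r_high * b2 / (1.0 + b2) + 1.0 / (1.0 + b2)

# Funnel limit (beta -> 0): equal temperatures, T_i/T_e -> 1
# Disk limit (beta -> infinity): T_i/T_e -> R_high
# Equipartition (beta = 1): the average (R_high + 1)/2
```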
Figure \ref{fig:ne_rhigh} shows the effect of changing the electron number density scale, $n_e$, and the ion-to-electron temperature ratio $R_{\mathrm{high}}$ on a single snapshot from a MAD simulation. The electron number density scale affects primarily the width of the bright ring with the latter increasing significantly with increasing $n_e$~\citep{2022ApJ...925...13S}. In contrast, the temperature ratio $R_{\mathrm{high}}$ affects the relative brightness of different parts of the flow, altering the relative brightness between the funnel region and that of the accretion flow.
The set of parameters we discussed reflects a decision as to which sources of image variance to include in the PCA analysis and which parameters to treat externally. The position angle ($\phi$) of the image on the sky, for example, can be included in our model trivially by an overall rotation of the PCA components and need not be included in the derivation of the components themselves. Whether the spin axis is pointing towards us at $17^{\circ}$ or away from us at the complementary angle can also be incorporated in a similar manner, as it describes (statistically) a simple reflection. The effect of the black hole mass on image morphology is mostly degenerate with the electron density except for a change in the overall size of the image, which can be included trivially in the PCA model as a scaling of angular distances applied to all components. The overall source position is also not included in the PCA basis since the current set of EHT data only involve visibility amplitudes and closure phases, which are independent of the image location.
For each set of parameters, we generated 1024 image snapshots with a time resolution of $10\,GM/c^3$. For the mass of M87, the time resolution corresponds to approximately 3 days and 17 hours and each simulation covers a total time span of over ten years. Each snapshot has a field of view of $64 \,GM/c^2$ and a resolution of $1/8\, GM/c^2$ per pixel (approximately $0.5\,\mu\mathrm{as}$ resolution). Critically, the field of view is substantially larger than the $\sim 10\,GM/c^2$ measured size of the image and the resolution scale is sufficiently fine to avoid deleterious aliasing effects \citep{2020arXiv200406210P}.
The set of 30 simulations provides a total of 30,720 images covering a broad range of image morphologies. Figure~\ref{fig:snapshots} shows several snapshots from a single simulation. Here we emphasize that although the parameters of the radiative transfer simulations can significantly affect gross image properties, such as the width of the ring of emission (see Figure \ref{fig:ne_rhigh}), there is significant variance in image morphology even within a single simulation because of the stochastic nature of the MHD turbulence in the accretion flow (see also \citealt{2017ApJ...844...35M,2018ApJ...856..163M,2018ApJ...864....7M}).
\subsection{Preparing the Simulated Images for PCA}
The simulated images have significant structure at small scales, which the EHT cannot probe. Because we want the PCA basis to only reflect image variance on the physical scales observed by the EHT, we first need to eliminate the high spatial-frequency structure in each simulated image.
To achieve this, we use a Butterworth filter \citep{butterworth1930}, which is effectively a low-pass filter, having a flat response for low Fourier frequencies and declining to zero smoothly at high-frequencies. The Butterworth filter is defined as
\begin{equation}
F_{\mathrm{BW}}(b) = \left[ 1+\left(\frac{b}{r}\right)^{2n} \right]^{-1/2},
\end{equation}
where $r$ is the scale of the filter and $n$ is a power-law index. We discuss in detail the motivation for using a Butterworth filter as well as the choice of parameters for EHT data analysis in \citet{2020arXiv200406210P}. The bottom row of Figure \ref{fig:snapshots} shows the snapshots of the top row filtered by a Butterworth filter with $n=2$ and $r=15G\lambda$. This choice of filter parameters allows us to retain most of the power at baseline lengths probed by the EHT array, while filtering out most of the power at larger lengths.
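As an illustration of this filtering step, here is a minimal 1-D sketch with a pure-Python DFT standing in for the 2-D FFT applied to the actual images. The signal and the cutoff are illustrative: the text's $r = 15\,G\lambda$ is a baseline length, whereas the toy filter below operates on an integer frequency index.

```python
import cmath
from math import pi, cos

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def butterworth(b, r, n=2):
    """Low-pass Butterworth response: flat for b << r, -> 0 for b >> r."""
    return (1.0 + (b / r) ** (2 * n)) ** -0.5

# Illustrative 1-D brightness profile: broad structure plus fine-scale ripple
N = 64
signal = [1.0 + 0.5 * cos(2 * pi * 2 * i / N) + 0.5 * cos(2 * pi * 20 * i / N)
          for i in range(N)]

X = dft(signal)
# Indices k and N - k carry the same spatial frequency, so the filter is
# applied symmetrically; r = 5 cycles is an illustrative cutoff
filtered = idft([X[k] * butterworth(min(k, N - k), 5.0) for k in range(N)])
```

The broad ($k = 2$) mode passes nearly unattenuated while the fine-scale ($k = 20$) ripple is strongly suppressed, mirroring how the filter retains power at EHT baseline lengths while removing the smaller-scale structure.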
As a second step, we normalize each filtered image to have the same total flux. Because images with higher electron density scale $n_e$ have significantly higher total flux, not normalizing would have biased the PCA basis towards images with higher $n_e$ values. We explored the effects of standardizing the images by their variance and found that this has a negligible effect on the PCA basis other than on the overall normalizations. We, therefore, do not standardize the images by their variance. We also do not mean subtract the images before performing PCA, i.e., similar to what was done in \citet{2018ApJ...864....7M}, since the properties of the mean image are critical in fitting the observed data. If, instead, we had mean subtracted the images before performing PCA, we would have needed to add back the mean image to the linear combination of PCA components, resulting in the same number of free parameters in the model.
Since all of the images correspond to the same black hole spin $a=0.9$ and the same inclination angle $i=17^{\circ}$, all of the black hole shadows are concentric and aligned with each other. For the case of M87, this is justified because of the known inclination of the large-scale jet as well as the weak dependence of the simulated images on black-hole spin. If that were not the case, we would have also needed to recenter and align the images before performing PCA, along the line of the approach in~\citet{2020ApJ...896....7M}.
\subsection{Building the PCA Basis}\label{sec:PCA}
Given the complete set of filtered simulated images, we generated the PCA basis following the procedures established in \citet{2018ApJ...864....7M}.
Figure \ref{fig:comps_all} shows the first 20 PCA components. The first PCA component is similar to the average image and contains a positive flux. The higher order PCA components contain both positive and negative fluxes, since these components re-distribute the flux present in the first component to approximate each individual snapshot.
The normalized eigenvalues corresponding to each PCA component are shown in the top left corner of each panel. Each eigenvalue measures the variance in pixel brightness of each PCA component, normalized such that the sum of all eigenvalues is equal to unity. Figure~\ref{fig:vals_all} shows the eigenvalue spectrum for this PCA decomposition. The first few PCA components account for the majority of brightness variance in the image and only 20 components are needed to account for 99\% of the variance found in the full set of simulations. The slope of the eigenvalue spectrum for higher components is set by the power spectrum of the structures in the images \citep{2018ApJ...864....7M}.
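The variance bookkeeping can be illustrated with a toy decomposition; the following is a minimal pure-Python sketch, with power iteration standing in for the full eigen-decomposition and a synthetic six-pixel "snapshot" ensemble as illustrative data:

```python
import random

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def leading_eig(M, iters=200):
    """Power iteration for the leading eigenpair of a symmetric matrix
    (standing in for the SVD used on the actual image ensemble)."""
    random.seed(1)
    v = normalize([random.random() for _ in M])
    for _ in range(iters):
        v = normalize(matvec(M, v))
    lam = sum(w * x for w, x in zip(matvec(M, v), v))
    return lam, v

# Illustrative "snapshots": a fixed mean pattern plus a weaker independent
# fluctuation in one pixel, mimicking turbulent variance about the mean
pattern = [1.0, 2.0, 0.5, 0.0, -1.0, 0.3]
snapshots = [[p + (0.2 * s if i == 3 else 0.0) for i, p in enumerate(pattern)]
             for s in (-2, -1, 0, 1, 2)]

# Second-moment matrix without mean subtraction, as in the text
D = len(pattern)
S = [[sum(snap[i] * snap[j] for snap in snapshots) / len(snapshots)
      for j in range(D)] for i in range(D)]

lam1, u1 = leading_eig(S)
frac = lam1 / sum(S[i][i] for i in range(D))  # variance in component 1
```

Here the first component captures almost all of the variance, the toy analogue of the first few eigenimages dominating the spectrum.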
Figures~\ref{fig:comps_VA} and \ref{fig:comps_VP} show the corresponding visibility amplitude and phase maps of the first 20 PCA components. It is a linear combination of these components in visibility space that we will fit directly to the data. As expected, the first few components contain primarily structures with low spatial frequencies (i.e., small baseline lengths) and describe primarily the broad-brush structure of the image. The remaining components contain significant power at high spatial frequencies (i.e., large baseline lengths) and describe the smaller structures in the image.
It is interesting that, although this was not explicitly imposed when performing the principal component decomposition, components of increasing PCA order correspond to higher order ($m$-fold) azimuthal symmetry. This is important when comparing the angular structure of the PCA components to the locations of the EHT baselines for the 2017 M87 observations~\citep{2019ApJ...875L...3E}, as also shown in Figures~\ref{fig:comps_VA} and \ref{fig:comps_VP}. Note that we have rotated the baseline tracks such that the black-hole spin axis, which points upwards in all these panels, is at $288^{\circ}$ East of North. Clearly, the first 20 PCA components already incorporate a substantial degree of azimuthal structure, which is finer than the angular separation of the dominant locations in visibility space probed by the EHT array. Lastly, note that each component comprises detail over a broad range of spatial frequencies. Within a given component, structural information on fine angular scales is correlated with that on broader scales. This allows visibilities within the EHT band limit to lead to inferences on the structure somewhat beyond it, producing reconstructions with a degree of ``super-resolution.''
\smallskip
\section{MCMC algorithm}\label{sec:MCMC}
In order to fit EHT data, we implement the linear PCA model (\texttt{PRIMO}) into the MCMC algorithm MARkov Chains for Horizons (\texttt{MARCH}, \citealt{2022ApJ...928...55P}). For the purposes of this initial exploration, we fit this model to synthetic EHT data calculated for the baseline tracks of the array during the 2017 April 5th observations of M87.
\subsection{The PCA Image Model}
The PCA decomposition described in Section 2 allows us to construct a black-hole image model as a linear combination of $N$ PCA components, with an appropriate rescaling to account for a different black-hole mass and an appropriate rotation to allow for different orientations in the sky.
We will be fitting data in the visibility domain and, therefore, define the linear combination of the first $N$ PCA components in that domain as
\begin{equation}
\tilde{I}(u,v) = \sum_{n=1}^{N} a_n {\bf \tilde{u}}_n(u,v),
\end{equation}
where ${\bf \tilde{u}}_n$ are the PCA components in the Fourier ($u,v$) domain, $\tilde{I}$ is the Fourier domain visibility of the reconstructed image, and $a_n$ is the amplitude of the $n-$th PCA component. Without loss of generality and in order to facilitate comparison with other astrophysical measurements of the sources, we set $a_1=1$ and instead fit for the total zero baseline visibility amplitude, which is also equal to the image flux
\begin{equation}
F=\sum_{n=1}^N a_n {\bf \tilde{u}}_n(0,0)\;.
\end{equation}
By construction, this same linear combination of the PCA components in the image domain also generates the ``best-fit'' image, i.e.,
\begin{equation}
I({\rm X,Y}) = \sum_{n=1}^{N} a_n {\bf {u}}_n({\rm X,Y}),
\end{equation}
where now $I$ is the reconstructed image and ${\bf u}_n$ are the PCA components, both in the image domain $(X,Y)$.
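Because both the model and the Fourier transform are linear, the image-domain and visibility-domain reconstructions share one and the same coefficient vector. A minimal sketch (the component maps and amplitudes are random placeholders; \texttt{npix} and \texttt{N} are illustrative):

```python
import numpy as np

npix, N = 32, 5
rng = np.random.default_rng(1)
u_n = rng.normal(size=(N, npix, npix))   # image-domain PCA components u_n(X, Y)
a_n = rng.normal(size=N)                 # component amplitudes
a_n[0] = 1.0                             # convention a_1 = 1

# Image-domain reconstruction: I = sum_n a_n u_n
image = np.tensordot(a_n, u_n, axes=1)

# Visibility-domain model: the same linear combination of the components'
# Fourier transforms (fft2 acts on the last two axes).
vis = np.tensordot(a_n, np.fft.fft2(u_n), axes=1)

# The zero-baseline visibility equals the total image flux F.
F = vis[0, 0].real
```

Linearity guarantees that `vis` is exactly the Fourier transform of `image`, and that `F` equals the sum of all image pixels.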
In addition to the $N-1$ PCA amplitudes and the flux normalization $F$, the model also includes three parameters that are implemented as a scaling, a rotation, and an up-down flip of the image.
In particular, we introduce
\begin{itemize}
\item A scaling parameter $\theta_g={GM}/(Dc^2)$ that is applied to all PCA components in the sky domain (or equivalently $\theta_g^{-1}$ that is applied in the visibility domain). This scaling parameter quantifies the mass-to-distance ratio of the particular black hole we are modeling and allows us to convert the length scales in our images, which are in gravitational units, to angular sizes in the sky. This parameter can also be informed by the strong priors obtained by modeling the dynamics of stars around the black hole~\citep{2019ApJ...875L...6E}.
\item A position angle $\phi$, measured in degrees East of North, applied to all PCA components, that quantifies the orientation of the black-hole spin on the plane of the sky.
\item A flip parameter $j=-1,1$ that accounts for the possibility that the spin axis is pointing away from the observer and therefore that the accretion flow is rotating in a sense that is opposite (i.e., clockwise) to that of the simulation. In other words, if $j=-1$, we mirror all PCA components along the x-axis such that the rotation patterns will be oriented in the clockwise direction.
\end{itemize}
We note that, for computational efficiency, we do not use the three parameters $\theta_g$, $\phi$, and $j$ to scale, rotate, and flip each of the PCA components. Instead, we use them to scale, rotate, and flip appropriately the small number of discrete $u-v$ locations of the EHT baselines. We then calculate the linear sum of the PCA components in these locations using the interpolation technique we discuss below.
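A sketch of that shortcut, transforming the handful of sampled $(u,v)$ points instead of the component maps. The sign and axis conventions below (rotation direction, which coordinate the $j=-1$ flip negates) are illustrative assumptions, not taken from the text:

```python
import numpy as np

def transform_uv(u, v, theta_g, phi, j):
    """Map a sampled (u, v) point into the frame of the fiducial PCA maps,
    so the maps themselves never need to be scaled, rotated, or flipped.
    Sign and axis conventions here are illustrative assumptions."""
    # rotate the baseline track by phi instead of rotating the image
    c, s = np.cos(phi), np.sin(phi)
    u_r = c * u - s * v
    v_r = s * u + c * v
    # an image flip about the x-axis maps v -> -v in the Fourier domain
    v_r = j * v_r
    # stretching the image by theta_g shrinks its visibility pattern by
    # theta_g, so the look-up coordinates are scaled by theta_g
    return theta_g * u_r, theta_g * v_r
```

Only as many coordinate pairs as there are data points need transforming, after which the components are evaluated there by interpolation.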
In total, the PCA model has $N+3$ free parameters, where $N$ is the number of PCA components used. Finally, it is worth emphasizing that, even though the PCA model is linear in most of its parameters, the visibility amplitudes and closure phases that we fit it to involve non-linear operations.
\subsection{Two-Dimensional Interpolation}
At each step of the MCMC chain, the algorithm calculates the model prediction at the $(u,v)$ location of each data point and compares it to the data. Since the PCA image model is numerical and sampled on a regular array of pixels, we evaluate its prediction at any desired location using a 2D sinc interpolation, which has been demonstrated to cause no degradation of resolution of the 2D maps~\citep{1986ftia.book.....B}. In 1-D, a sinc function is defined as
\begin{equation}
\mathrm{sinc}(u) = \frac{\sin(\pi u)}{\pi u},
\end{equation}
where $u$ is the pixel coordinate in the Fourier domain. Interpolation in 2D is done with separable sinc kernels in $u$ and $v$ that are multiplied to form a 2-D kernel.
Along each orientation, the value of the visibility at $u'$ is given by
\begin{equation}
f(u') = \sum_{n} \frac{\sin[\pi (n-\Delta u)]}{\pi (n-\Delta u)}f(n),
\end{equation}
where $f(n)$ is the image value at the integer $n$ locations, and $\Delta u = u'-u$ is the (fractional) offset of the evaluation point from the reference grid point $u$. In practice, we limit the kernel to a finite domain of $\pm u_0,$ and taper it smoothly with a Gaussian to produce a well-behaved cutoff in the Fourier domain,
\begin{equation}
f(u') = \frac{1}{C_{\mathrm{sinc}}}\sum^{u_0/\Delta u}_{n=-u_0/\Delta u}e^{-(n-\Delta u)^2/2\sigma^2}~ \frac{\sin[\pi (n-\Delta u)]}{\pi(n-\Delta u)}f(n),
\end{equation}
where $\sigma$ is chosen such that 2-3 cycles of the sinc function are included. The normalization constant $C_{\mathrm{sinc}}$ ensures that the interpolation kernel has an integral of unity, given the tapering and finite domain:
\begin{equation}
C_{\mathrm{sinc}} = \sum^{u_0/\Delta u}_{n=-u_0/\Delta u}e^{-(n-\Delta u)^2/2\sigma^2}~\frac{\sin[\pi (n-\Delta u)]}{\pi(n-\Delta u)}.
\end{equation}
However, since $\sin[\pi(n-\Delta u)] = -(-1)^n \sin(\pi\Delta u)$, its magnitude is the same for every $n$ and can be absorbed into the normalization, leaving only the alternating sign $(-1)^n$. In practice, evaluating a trigonometric function is therefore not required, since we can write
\begin{equation}
f(u') = \frac{1}{C_{\mathrm{sinc}}'} \sum^{u_0/\Delta u}_{n=-u_0/\Delta u} \frac{e^{-(n-\Delta u)^2/2\sigma^2} (-1)^n}{(n-\Delta u)}\, f(n),
\end{equation}
where
\begin{equation}
C_{\mathrm{sinc}}' = \sum^{u_0/\Delta u}_{n=-u_0/\Delta u}\frac{e^{-(n-\Delta u)^2/2\sigma^2}(-1)^n}{(n-\Delta u)}.
\end{equation}
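In one dimension, the normalized tapered-sinc kernel can be sketched as follows. Here \texttt{numpy.sinc} evaluates $\sin(\pi x)/(\pi x)$ directly (handling $x=0$ exactly), and dividing by the kernel sum plays the role of $C_{\mathrm{sinc}}$; the window half-width and taper width are illustrative choices, and the sketch assumes the evaluation point is away from the array edges:

```python
import numpy as np

def tapered_sinc_interp(f, u_prime, half_width=3, sigma=1.5):
    """Gaussian-tapered, windowed sinc interpolation of samples f on an
    integer grid, evaluated at the (float) coordinate u_prime.
    half_width and sigma (in pixels) are illustrative choices."""
    i0 = int(np.floor(u_prime))
    du = u_prime - i0                       # fractional offset from grid point i0
    n = np.arange(-half_width, half_width + 1)
    x = n - du
    # np.sinc(x) = sin(pi x)/(pi x); the Gaussian tapers the finite window
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.sinc(x)
    kernel /= kernel.sum()                  # the C_sinc normalization
    return np.dot(kernel, f[i0 + n])
```

At an exact grid point the kernel collapses to a delta, so the interpolant reproduces the samples; off-grid it recovers smooth, well-sampled signals to high accuracy.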
\subsection{The Posterior Distribution}\label{sec:posterior}
Having defined a visibility-domain PCA model that depends on $N+3$ model parameters, which we collectively denote by the vector $\vec{\theta}$, we use Bayes' theorem to write the posterior over these parameters as
\begin{equation}
P(\vec{\theta}\vert{\rm data})=C\; P_{\rm pri}(\vec{\theta})\; {\cal L}({\rm data}|\vec{\theta})\;.
\label{eq:bayes}
\end{equation}
Here, $P_{\rm pri}(\vec{\theta})$ is the prior distribution over the model parameters, ${\cal L}({\rm data}|\vec{\theta})$ is the likelihood that the set of observations can be obtained from the model, and $C$ is an appropriately defined normalization constant.
The set of data obtained by the EHT is a series of visibility amplitudes at the various baseline lengths between the different pairs of stations as well as a series of closure phases along all possible baseline triangles~\citep{2019ApJ...875L...3E}. We calculate the likelihood function by multiplying the likelihoods of the individual visibility amplitude and closure phase data (see, however,~\citealt{2019ApJ...882...23B}), assuming that all likelihoods are independent of each other
\begin{equation}
{\cal L}({\rm data}|\vec{\theta})= \prod_i {\cal L}_i({\rm data}|\vec{\theta})\;.
\label{eq:like}
\end{equation}
The precise definition of the various likelihoods is provided in detail in \citet{2022ApJ...928...55P}. Because they depend only on the data products, they are the same for all models. The priors over the model parameters, however, are specific to each model, as we discuss in detail in the following subsection.
\subsection{Priors}\label{sec:Priors}
To ensure that our PCA model is probing physically relevant areas of the parameter space, we include a combination of informative and non-informative priors on the various model parameters.
Because the EHT is an interferometer, the total flux $F$ of the compact image cannot be directly measured without perfect knowledge of the prior calibration of the various telescope gains. However, it can often be independently constrained using other single-dish observations. Due to extended emission, the zero-baseline flux of the M87 EHT data was significantly higher than what was reasonably expected for the compact source (see the discussion in appendix B of \citealt{2019ApJ...875L...4E}). Because of this, most EHT M87 analyses constrained the zero-baseline flux to a well-motivated value. To mirror those analyses, we fix the zero-baseline flux at 0.6 Jy, the value used to generate the synthetic data.
For the scaling parameter $\theta_g$, there often exist prior measurements based on gas and/or stellar dynamics. For the M87 black hole, the two measurements are not statistically consistent with each other (see \citealt{2011ApJ...729..119G, 2013ApJ...770...86W}). The envelope of the credible intervals for these two measurements is contained within the conservative range $1~\mu$as$~\le \theta_{\rm g}\le~6~\mu$as. For this reason, we simply use an uninformative prior
\begin{equation}
P({\theta_g})=\left\{\begin{array}{ll} \theta_g^{-1}\,\,\,\,\, \mathrm{if}\,1\,\mu\mathrm{as}\le\theta_g\le 6\,\mu\mathrm{as}\\ 0\,\,\,\,\, \mathrm{otherwise.}\\ \end{array} \right.
\end{equation}
For the orientation parameter $\phi$, we employ a highly informative prior based on the assumption that the black-hole spin is either aligned or anti-aligned with the large-scale jet observed at longer wavelengths, i.e., that
\begin{equation}
P(\phi) = \frac{1}{2\sqrt{2\pi\sigma_{\phi}^2}}\left[ e^{-(\phi-\phi_0)^2/2\sigma_{\phi}^2} +e^{-(\phi-\phi_0+\pi)^2/2\sigma_{\phi}^2} \right].
\end{equation}
Here $\phi_0=288^\circ$ is the orientation of the large scale jet \citep{2018ApJ...855..128W}. We set the widths of the two Gaussians to a nominal value of $\sigma_{\phi}={\pi}/{8}$. We allow the flip parameter $j$ to be equal to either 1 or -1, with the same prior.
Finally, we employ informative priors on the amplitudes of the PCA components. Our aim is to give higher priors to images for which the amplitudes of the PCA components are not very dissimilar from the amplitudes that correspond to the simulated images used to calculate the PCA decomposition. However, we also do not wish to limit the fit to images that have precisely the same range of amplitudes as the training set. To achieve this, we first calculate the distribution of amplitudes for each PCA component found in the ensemble of training images and then broaden this distribution by a factor of two.
Figure~\ref{fig:amps_grid} shows the distribution of normalized amplitudes, $a_n/a_1$, for the PCA components 2 through 21 that we calculated above; note that, by definition, we have set $a_1=1$. Each panel also shows a Gaussian (in orange) with the same mean and standard deviation as the numerical distribution. These Gaussians provide good descriptions of the distributions for almost all of the components shown in the figure, with components two and five being notable exceptions. Both of these components contain structure that controls the width of the ring in the image, which is strongly dependent on the simulation parameters (e.g., $n_e$). Therefore, the distributions of amplitudes for these components are not expected to follow a Gaussian distribution but rather will depend on the particular set of parameters used for the simulation library.
Gaussian distributions with the same mean but twice the standard deviation are also shown in each panel (green dashed lines) and comfortably include the full range of amplitudes found in the training image set. In practice, for computational efficiency, we use these broadened Gaussians as priors on the amplitudes of each PCA component. In other words, we write the prior for the normalized amplitude of the $n-$th PCA component $a_n/a_1$ as
\begin{equation}
P(a_n/a_1) = \frac{1}{2\sigma_n\sqrt{2\pi}}\, e^{-\frac{1}{2}\left( \frac{a_n/a_1-\overline{a_n/a_1}}{2\sigma_n}\right)^2}\;,
\end{equation}
where $\overline{a_n/a_1}$ and $\sigma_n$ are, respectively, the mean and standard deviation of the distribution of normalized amplitudes in the training set.
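The priors of this subsection can be collected into a single log-prior. The sketch below assumes $\theta_g$ in $\mu$as and $\phi$ in radians, passes the training-set amplitude means and (unbroadened) standard deviations in as arrays, and leaves the $\theta_g^{-1}$ prior unnormalized, as in the text:

```python
import numpy as np

def log_prior(theta_g, phi, amps, amp_mean, amp_sigma,
              phi0=np.deg2rad(288.0), sigma_phi=np.pi / 8):
    """Unnormalized log-prior. theta_g in micro-arcseconds, phi in radians;
    amps are the normalized amplitudes a_n/a_1 (n >= 2), with amp_mean and
    amp_sigma the training-set means and standard deviations."""
    # uninformative P(theta_g) proportional to 1/theta_g on [1, 6] uas
    if not (1.0 <= theta_g <= 6.0):
        return -np.inf
    lp = -np.log(theta_g)
    # bimodal Gaussian in phi: spin aligned or anti-aligned with the jet
    lp += np.log(0.5 / np.sqrt(2 * np.pi * sigma_phi**2)
                 * (np.exp(-(phi - phi0)**2 / (2 * sigma_phi**2))
                    + np.exp(-(phi - phi0 + np.pi)**2 / (2 * sigma_phi**2))))
    # broadened Gaussian priors on the PCA amplitudes (width 2 sigma_n)
    lp += np.sum(-0.5 * ((amps - amp_mean) / (2 * amp_sigma))**2
                 - np.log(np.sqrt(2 * np.pi) * 2 * amp_sigma))
    return lp
```

By construction the prior is symmetric under $\phi \to \phi - \pi$, reflecting the aligned/anti-aligned jet assumption.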
\subsection{Theoretical uncertainty}\label{sec:theory_err}
In most applications of PCA, one can reconstruct an image by simply projecting the image onto the PCA components to find the relative amplitude of each component that will result in the best possible reconstruction. Using a higher number of components will invariably result in a higher-fidelity reconstruction. A loss-less reconstruction can always be achieved using all of the PCA components, if the image is part of the original set that was used to calculate the PCA decomposition. In the present application, however, we do not have a full image onto which we can project the components; we instead have sparse $u-v$ coverage. Attempting to fit a large number of components to sparse interferometric data can result in overfitting since there may be several possible linear combinations of components that fit the data. Therefore, there exists an optimal number of PCA components for which the highest-fidelity reconstruction can be achieved by fitting the sparse interferometric data while respecting the resolution of the array.
In order to determine this optimal number and assess the error introduced by the truncation, we quantify the error between a reconstruction with $N$ components and the original, unfiltered image in the Fourier domain as
\begin{equation}
\epsilon_{\mathrm{complex}}= \frac{\sqrt{|(V_{\mathrm{0}}-V_{N})(V^*_{0}-V^*_{N})|}}{F}\;,
\end{equation}
where $F$ is the total flux of the image, $V_0$ and $V_N$ are the complex visibilities of the original image and of the reconstruction with $N$ components, respectively, vertical bars indicate magnitude, and the asterisk denotes complex conjugation. We define the fractional error in visibility amplitude as
\begin{equation}
\epsilon_{\mathrm{VA}} =\left| \frac{|V_{\mathrm{orig}}| - |V_{\mathrm{recon}}| }{F}\right|
\end{equation}
where $|V_{\mathrm{orig}}|$, and $|V_{\mathrm{recon}}|$ denote the amplitude of the complex visibilities for the original and reconstructed images respectively. The error in visibility phase is defined as
\begin{equation}
\epsilon_{\mathrm{VP}} =|\arg(V_{\mathrm{orig}}) - \arg(V_{\mathrm{recon}})|
\end{equation}
if this quantity is $<180^{\circ}$ and
\begin{equation}
\epsilon_{\mathrm{VP}} =360^{\circ} - |\arg(V_{\mathrm{orig}}) - \arg(V_{\mathrm{recon}})|
\end{equation}
otherwise. We calculate these errors for each baseline length by averaging along different azimuthal orientations and over the complete set of images in the training set.
In both equations above, $\arg(V)$ denotes the argument or phase of the complex visibilities of the images. When taking the average of the error in visibility phase, we follow \citet{mardia2009directional} and define the average of a directional quantity as
\begin{equation}
\bar{\theta} =
\begin{cases}
\tan^{-1}(\bar{S}/\bar{C}), & \mathrm{if\,} \bar{C}\geq 0\\
\tan^{-1}(\bar{S}/\bar{C})+\pi, & \mathrm{if\,} \bar{C}< 0,
\end{cases}
\end{equation}
where
\begin{align}
\bar{S} &= \frac{1}{n}\sum^{n}_{j=1} \sin{\theta_j}\\
\bar{C} &= \frac{1}{n}\sum^{n}_{j=1} \cos{\theta_j}.
\end{align}
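These error definitions and the directional mean translate directly into a few lines; in the sketch below, \texttt{arctan2} reproduces both branches of the piecewise mean at once (up to an irrelevant multiple of $2\pi$):

```python
import numpy as np

def eps_complex(v0, vN, F):
    """Fractional complex visibility error, |V_0 - V_N| / F."""
    return np.abs(v0 - vN) / F

def eps_phase_deg(v_orig, v_recon):
    """Visibility-phase error in degrees, wrapped into [0, 180]."""
    d = np.abs(np.angle(v_orig, deg=True) - np.angle(v_recon, deg=True))
    return np.where(d < 180.0, d, 360.0 - d)

def circular_mean(theta):
    """Directional mean of angles theta (radians), after Mardia & Jupp;
    arctan2 covers both sign cases of the piecewise definition."""
    return np.arctan2(np.mean(np.sin(theta)), np.mean(np.cos(theta)))
```

The wrap in `eps_phase_deg` matters near $\pm 180^{\circ}$, where a naive difference would report a spuriously large error.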
Figure~\ref{fig:theory_err} shows the errors $\epsilon_{\mathrm{complex}}$, $\epsilon_{\mathrm{VA}}$, and $\epsilon_{\mathrm{VP}}$ as a function of baseline length, for all 30,720 snapshots and for different values of the number $N$ of PCA components. The Figure also compares these errors to those introduced to the original images by the application of the Butterworth filter. In all three error quantities, there are significant broad peaks at around $1-4\,G\lambda$, which are introduced by the dips, or nulls, that exist in the training set around these baseline lengths (see~\citealt{2017ApJ...844...35M} for a discussion of the origin of these uncertainties).
The longest baselines included in the 2017 EHT array are about $8\,G\lambda$. Reconstructions with 20 components achieve fractional complex errors less than $\sim 3\%$ at all baselines less than $8\,G\lambda$, even at baseline lengths that frequently have a significant dip in visibility amplitude. The same reconstructions achieve a fractional error in visibility amplitude of less than $\sim 2\%$ and an error in visibility phase less than $\sim 15^{\circ}$ at all baselines less than $8\,G\lambda$. At baselines that do not coincide with the visibility-amplitude minima, the errors are significantly smaller; for reconstructions with just 20 PCA components, the fractional complex error is $\sim 2\%$ in the regions between the minima.
Since the reconstructions with only 20 PCA components achieve errors which are comparable to the errors in the EHT 2017 data for M87, in this work we settle on fitting 20 PCA components to synthetic data as a proof of concept. However, a slightly higher or lower number of components may achieve comparable, or even better results. We use the results presented in Figure~\ref{fig:theory_err} to add a ``theoretical error'' to our model, which is implemented as an additional uncertainty, as a function of baseline length. In order to account for the fact that the peaks in the theoretical uncertainties shown in Figure~\ref{fig:theory_err} correspond to the locations of the visibility minima, which themselves scale inversely with $\theta_{\rm g}$, we scale the baseline lengths of the theoretical error curves in a similar way. Moreover, because the errors shown in this figure are fractional, we multiply them by the total flux $F$ in the image.
\subsection{Preparing Simulated Data}\label{sec:sim_data}
The EHT observations are simulated as follows. For each data point in the M87 EHT data, we use sinc interpolation between pixels in $u-v$ space to approximate the visibility at that $u-v$ location. In order to mimic thermal noise, we dither each data point with errors drawn from a Gaussian distribution with a standard deviation set by the error in the EHT data at each $u-v$ location for the 2017 EHT observations of M87. We do not include gain errors in our synthetic data at this time, nor do we include gains as free parameters in our model.
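The dithering step amounts to adding independent Gaussian noise to the real and imaginary parts of each interpolated model visibility; a sketch, with placeholder values standing in for the interpolated visibilities and for the per-point 2017 error bars:

```python
import numpy as np

rng = np.random.default_rng(0)
v_model = np.array([0.5 + 0.1j, 0.3 - 0.2j])   # interpolated model visibilities
sigma_th = np.array([0.01, 0.02])              # per-point thermal errors (placeholder)

# dither real and imaginary parts independently with the per-point errors
v_synth = v_model + sigma_th * (rng.normal(size=2) + 1j * rng.normal(size=2))
```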
\section{Results from synthetic data}\label{sec:results}
In order to demonstrate the performance of \texttt{PRIMO} with EHT data, we apply it to a number of synthetic data sets created from simulated snapshots. We start with two snapshots from a single GRMHD+radiative transfer MAD simulation with electron number density scale $n_e = 10^5\,\mathrm{cm}^{-3}$, electron temperature parameter $R_{\mathrm{high}}=20$, black hole spin $a_{\mathrm{BH}}=0.9$ and mass $M=6.5\times 10^9\,\mathrm{M}_{\odot}$. This set of parameters is relevant to the black hole in M87 and is consistent with the recent EHT results that showed that the polarization structure of M87 shows preference to MAD models over SANE models \citep{2021ApJ...910L..12E,2021ApJ...910L..13E}. These two snapshots were also considered in \citet{2022ApJ...928...55P} but for different values of the $R_{\rm high}$ parameter.
We begin by applying our algorithm to a simulated image snapshot that resembles a crescent shape but has some extended structure. This snapshot was not easily fit by a simple geometric crescent model \citep[][see Figures~16 and 17]{2022ApJ...928...55P}. The top row of Figure~\ref{fig:M47} shows the simulated image and the highest likelihood reconstruction from \texttt{PRIMO} after 10,000,000 MCMC steps.
Unlike the geometric crescent model, \texttt{PRIMO} can easily reproduce the morphology of this image, arriving at the correct ring size and width, and the correct position angle for the peak of emission along the ring.
The bottom row of the Figure shows the original image blurred by a Gaussian filter with a width of $15\,\mu\mathrm{as}$ and the original image after it was filtered with an $n=2$, $r=15\,\mathrm{G}\lambda$ Butterworth filter. The Gaussian broadened image approximates previously published EHT images, since most of the EHT reconstructed images published to date have been broadened by Gaussians. The width of the Gaussian kernel was chosen such that the median FWHM of the image, along 128 equispaced radial cross sections emanating from the center of the black-hole shadow, is equal to 20$\,\mu$as, i.e., similar to the M87 images reconstructed with other algorithms. (We note that we simply broadened the original simulated image and did not simulate CLEAN or RML imaging of it; still, the Gaussian broadened GRMHD image provides a simplified comparison to the resolution of previously published EHT images.) \texttt{PRIMO} achieves much higher image fidelity than the Gaussian blurred image and approaches the fidelity of the GRMHD input image simply blurred by the Butterworth filter.
Figure~\ref{fig:M47amp_clos} compares the visibility amplitudes and the closure phases of the synthetic data created from the simulated image as described in Section~\ref{sec:sim_data} to those of the reconstructed image with the highest likelihood. The model shows very good agreement with the synthetic data and no structure is present in the residuals. As expected, because of the very large signal-to-noise ratio of most of the EHT measurements, the residuals are dominated by the theoretical errors introduced by the truncation in the number of PCA components used. Nevertheless, this truncation does not introduce any substantial biases in the image structure or its properties.
Figure~\ref{fig:M47cross} compares the vertical (N-S) and horizontal (E-W) cross sections of the original image, the Butterworth filtered snapshot, the Gaussian filtered snapshot, and the most likely reconstruction with \texttt{PRIMO}. There is remarkable agreement between the properties of the reconstructed image and those of the original one. In particular, the \texttt{PRIMO} fit is a much more accurate representation of the original snapshot than the snapshot convolved with a 20$\,\mu$as beam. The main features of the cross sections, i.e., the location and amplitude of the peaks, the width of the peaks, the size and depth of the central flux depression, and the relative amplitude difference between the two peaks, are all well approximated by the reconstruction.
Figure~\ref{fig:M47corner} shows a corner plot for numerous key parameters for the MCMC run discussed above. The corner plot shows a few correlations between parameters, such as between the scaling parameter $\theta_g$ and the amplitude of the second PCA component, as well as with several other components but to a lesser extent. Although the PCA components are orthogonal when considered across the entire image (or $u-v$ space), they are no longer orthogonal when we consider only the discrete locations of the EHT baselines. Because of this, some correlations between different PCA components are also visible, such as between the second and fourth components. The correlation between the overall scale ($\theta_g$) and the second component is not surprising; the second component affects the width of the ring, which is highly correlated with the diameter of the ring.
The widths of the posteriors of most of the low-order PCA components are significantly smaller than those of the priors, demonstrating that the broad-brush properties of the reconstructed image are driven by the data and not by the priors. This is increasingly less the case for the higher-order PCA components, justifying the level at which we truncated the series of components. The Figure also compares the ground-truth values (shown in green) to the highest likelihood values from the reconstruction (shown in red)\footnote{For the amplitudes of the PCA components, we treat the amplitudes derived by projecting the original image onto the first $N$ PCA components as the ground-truth values.}. In all cases, there is a remarkable agreement between the two.
As a second example, we apply \texttt{PRIMO} to synthetic data generated from a second snapshot that is dominated by an extended flux tube. The geometric crescent fit to this image failed to reconstruct a reasonable ring size even when a Gaussian component was added to the model (see Figures~18 and 19 in \citealt{2022ApJ...928...55P}). However, as can be seen in Figures~\ref{fig:M191}-\ref{fig:M191corner}, \texttt{PRIMO} can accurately reconstruct the location of the peak of emission along the ring, the width of the peak, the shape and depth of the central flux depression, and the extended flux tube towards the top left of the image. The visibility amplitudes and closure phases from the reconstructed image show good agreement with the synthetic data and very little structure is visible in the residuals.
Finally, we consider an image that is not included in the training ensemble used to generate the PCA components. While all images in the training set have a black hole spin of $a_{\mathrm{BH}}=0.9$, for the final synthetic data set we use an image from a simulation with a black hole spin of $a_{\mathrm{BH}}=0.7$. This image has a SANE magnetic field geometry and a plasma parameter of $R_{\mathrm{high}}=20$. Figures~\ref{fig:S7314}, \ref{fig:S7314amp_clos}, and \ref{fig:S7314cross} show the results of reconstructing this image with \texttt{PRIMO}. Even though this image was not included in the ensemble used to generate the PCA components, the algorithm was still able to accurately reconstruct the salient image features, such as the depth and shape of the brightness depression, the size and width of the peak, the orientation of the peak brightness asymmetry in the ring feature, and the extended structure towards the top left of the image.
\section{Summary}\label{sec:discussion}
We have presented a novel PCA-based image reconstruction algorithm, \texttt{PRIMO}, for reconstruction of black hole images from EHT data. Our algorithm is unique in that it combines prior information from physically motivated simulations to reconstruct images that lie in the same general space of images spanned by the simulations. Each simulation can create countless images with different morphologies due to the turbulent nature of the accretion flow, making it unlikely that the particular realization of the turbulent flow of the source that the EHT observes would be well fit by any one of the thousands of simulation images included in our library. However, the PCA-based algorithm allows us to reconstruct images regardless of whether or not they are contained within the library of images from which the PCA basis was created. Compared to the results of previous work, \texttt{PRIMO} is not severely affected by the biases identified in \citet{2022ApJ...928...55P}, where simulated images were fit with analytic crescent models.
Throughout this work we have used the EHT baseline coverage from the 2017 observations. Since then, the EHT has observed several more times with additional telescopes. We expect that, with additional baselines, we will be able to incorporate a higher number of PCA components to generate images from the data and achieve even better angular resolution. The EHT is also planning to observe at 345\,GHz in the coming years, which will allow us to probe even higher spatial frequencies. \texttt{PRIMO} can easily be adapted to exploit these new observations.
\begin{acknowledgements}
We thank C. K. Chan, P. Hallur, and B. Zackay for useful discussions. L.\;M.\ gratefully acknowledges support from an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award no. AST-1903847. D.\;P.\, and F.\;O.\, gratefully acknowledge support from NSF PIRE grant 1743747 for this work. All ray tracing calculations were performed with the \texttt{El~Gato} GPU cluster at the University of Arizona that is funded by NSF award 1228509.
\end{acknowledgements}
\bibliography{main}
Title:
Current and future gamma-ray searches for dark-matter annihilation beyond the unitarity limit
Abstract: For decades, searches for electroweak-scale dark matter (DM) have been
performed without a definitive detection. This lack of success may hint that DM
searches have focused on the wrong mass range. A proposed candidate beyond the
canonical parameter space is ultra-heavy DM (UHDM). In this work, we consider
indirect UHDM annihilation searches for masses between 30 TeV and 30 PeV,
extending well beyond the unitarity limit at $\sim$100 TeV, and discuss the
basic requirements for DM models in this regime. We explore the feasibility of
detecting the annihilation signature, and the expected reach for UHDM with
current and future Very-High-Energy (VHE; $>$ 100 GeV) $\gamma$-ray
observatories. Specifically, we focus on three reference instruments: two
Imaging Atmospheric Cherenkov Telescope arrays, modeled on VERITAS and
CTA-North, and one Extended Air Shower array, motivated by HAWC. With
reasonable assumptions on the instrument response functions and background
rate, we find a set of UHDM parameters (mass and cross section) for which a
$\gamma$-ray signature can be detected by the aforementioned observatories. We
further compute the expected upper limits for each experiment. With realistic
exposure times, the three instruments can probe DM across a wide mass range. At
the lower end, it can still have a point-like cross section, while at higher
masses the DM could have a geometric cross section, indicative of
compositeness.
https://export.arxiv.org/pdf/2208.11740
\reportnum{\footnotesize CERN-TH-2022-139}
\reportnum{\footnotesize DESY-22-138}
\title{Current and future $\gamma$-ray searches for dark-matter annihilation beyond the unitarity limit}
\correspondingauthor{Donggeun Tak (\href{mailto:donggeun.tak@gmail.com}{donggeun.tak@gmail.com})}
\author{Donggeun Tak}
\affiliation{Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738, Zeuthen, Germany}
\author{Matthew Baumgart}
\affiliation{Department of Physics, Arizona State University, Tempe, AZ 85287, USA}
\author{Nicholas L. Rodd}
\affiliation{Theoretical Physics Department, CERN, 1 Esplanade des Particules, CH-1211 Geneva 23, Switzerland}
\author{Elisa Pueschel}
\affiliation{Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738, Zeuthen, Germany}
\keywords{Dark Matter, Ultra-heavy Dark Matter}
\section{Introduction} \label{sec:intro}
Dark matter (DM) is an as-yet-unidentified component of the matter in the Universe whose existence is widely supported by a broad set of observations \citep{Bertone2018a}.
For decades, many theoretical candidates have been considered for particle DM, of which two representative examples are ultralight axions ($M_{\chi}$ $\ll$ 1\,eV) and weakly interacting massive particles (WIMPs; $M_{\chi} \sim {\cal O}({\rm GeV}-{\rm TeV})$).
Both candidates have been hunted for with state-of-the-art experiments and observatories, and although these searches will continue to achieve important milestones---for example the long sought-after Higgsino may soon be within reach~\citep{Rinchiuso:2020skh,Dessert:2022evk}---so far the program has been unsuccessful \citep[for the latest reviews, see e.g.,][]{Gaskins2016, Boveia2018, Tao2020}.
The longstanding lack of a DM signal detection has driven theorists to look for DM candidates beyond the conventional parameter space.
One such candidate is ultra-heavy DM (UHDM; 10\,TeV $\lesssim M_{\chi} \lesssim m_{\rm pl} \approx 10^{19}$\,GeV).
Depending on the cosmological scenario and the beyond-the-Standard-Model (SM) theory that predicts it, the UHDM abundance and properties can vary \citep[for a broad outline, see][]{snowmass2022}; examples include WIMPzilla \citep{Kolb1999} and Gluequark DM \citep{Contino2019}.
In addition to unexplored UHDM candidates, there are models that extend the WIMP mass range beyond $\sim$10\,TeV \citep[e.g.,][]{Harling2014, Baldes2017, Cirelli2019, Bhatia2021}.
Yet there exists a general upper limit on the WIMP mass, known as the unitarity limit, which requires $M_{\chi} \lesssim 194$\,TeV \citep{Griest:1989wd,Smirnov:2019ngs}.
This bound arises as the standard WIMP paradigm is associated with a thermal relic cosmology.
In this scenario, in the early Universe, the DM and SM particles are in thermal equilibrium.
As the Universe expands and cools, the DM departs from equilibrium and its abundance is rapidly depleted by annihilations, until the expansion eventually shuts this process off and the relic abundance freezes out.
The key parameter in this scenario is the DM annihilation cross section, which for point-like particles going to Standard Model (SM) states must scale as $M_{\chi}^{-2}$ by dimensional analysis.
As the mass increases, the cross section generally decreases. If it becomes too small, then the DM will be insufficiently depleted by the time it freezes out, and too much DM will remain to be consistent with the observed cosmological density.
Ultimately, as unitarity dictates that the cross section cannot be made arbitrarily large, this constraint translates into the stated upper bound on the DM mass.
While there is an attractive simplicity to the thermal-relic cosmology so described, as soon as we allow even minimal departures from it, the unitarity bound can be violated, allowing for the possibility that DM with even higher masses could be annihilating in the present day Universe.
For example, instead of annihilating directly to SM states, the DM could produce a metastable dark state which itself decays to the SM.
As shown by \cite{Berlin2016}, if this dark state lives long enough to dominate the energy density of the Universe, its decays to the SM will then dilute the DM density, avoiding the overproduction otherwise associated with heavy thermal DM, and allowing masses up to 100\,PeV to be obtained.
PeV-scale thermal DM can also be achieved if the DM is a composite state, rather than a point-like particle.
Exactly such a scenario was considered by \cite{Harigaya:2016nlg}, where DM with a large radius arose from a model of a strongly coupled confining theory in the dark sector. The lightest baryon in the theory plays the role of DM, which annihilates through a portal coupling to eventually produce SM states.
Such a scenario can evade the unitarity bound as the annihilation cross section is no longer guaranteed to scale as $M_{\chi}^{-2}$; it can instead now be determined by the geometric size of the composite DM.
Indeed, we will see that such composite DM scenarios are broadly the models that can be probed using the observational strategies considered in this work.
The self-annihilations which play a role in setting DM abundance in the early Universe can also be active today, producing an observable flux of stable SM particles such as $e^{\pm}$, $\nu_{e, \mu, \tau}$, and $\gamma$-rays, as well as unstable quarks, leptons, and bosons whose interaction processes can produce secondary $\gamma$-rays.
The full energy spectrum at production can be estimated with Monte Carlo (MC) simulations of the underlying particle physics.
For this purpose, {\tt PYTHIA} is the most widely used program, providing an accurate prompt DM spectrum up to ${\cal O}(10)$ TeV \citep{pythia}, and is a central ingredient in the widely used PPPC4DMID~\citep{Cirelli:2010xx,Ciafaloni:2010ti}.
However, {\tt PYTHIA} is not appropriate for studying UHDM in general, as it omits many of the interactions in the full unbroken SM that become important as the UHDM mass becomes much larger than the electroweak scale.
An alternative approach was introduced in \cite{HDM}, which computed the prompt DM spectrum from 1 TeV up to the Planck scale, the so-called {\tt HDMSpectrum}.\footnote{The results are publicly available at \url{https://github.com/nickrodd/HDMSpectra}.}
To do so, the authors of that work mapped the calculation of the DM spectrum to the computation of fragmentation functions, which can then be computed with DGLAP evolution in a manner that includes all relevant SM interactions, providing a better characterization of the prompt UHDM spectrum \citep[see][for a discussion of earlier approaches to compute DM spectra]{HDM}.
When $\gamma$-rays are produced from DM annihilation throughout the Universe, they can propagate to the Earth and be detected.
After considering the propagation effects,\footnote{For DM searches with galaxies in the Local Group, any galactic absorption by the starlight, infrared photons, and/or cosmic microwave background can be ignored due to its relatively small contribution \citep[$<$20\% at $\mathcal{O}$(100) TeV;][]{Esmaili2015}.
We note that while the UHDM mass range considered extends to 30 PeV, detected photons with energies above 100 TeV are not considered.}
the $\gamma$-ray flux at the Earth from DM annihilation can be described as
\begin{equation}\label{eq:dm_flux}
\frac{dF(E, \hat{n})}{dE d\Omega} = \frac{\langle\sigma v\rangle}{8\pi M_{\chi}^2}\frac{dN_{\gamma}(E)}{dE}\int_{\rm los}dl\, \rho^2(l\hat{n}),
\end{equation}
where $\langle\sigma v\rangle$ is the velocity-averaged annihilation cross section.
The prompt energy spectrum, $dN_{\gamma}(E)/dE$, depends on the DM annihilation channel and is determined from the HDM spectrum; $\rho(l\hat{n})$ is the DM density along the line of sight (los).
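As a concrete illustration, Eq.~\eqref{eq:dm_flux} can be evaluated numerically once $dN_{\gamma}/dE$ is integrated over the observable band. The sketch below is a minimal example: the photon yield per annihilation is a toy stand-in for the channel-dependent prompt spectrum, and the benchmark numbers are purely illustrative.

```python
import math

def annihilation_flux(sigma_v, m_chi, j_factor, n_gamma):
    """Band-integrated photon flux from the annihilation-flux formula,
    in photons/cm^2/s.

    sigma_v  : velocity-averaged cross section [cm^3/s]
    m_chi    : DM particle mass [GeV]
    j_factor : solid-angle-integrated J-factor [GeV^2 cm^-5 sr]
    n_gamma  : photons per annihilation in the observable band, i.e.
               the band integral of dN_gamma/dE (toy stand-in for the
               channel-dependent prompt spectrum)
    """
    return sigma_v * j_factor * n_gamma / (8.0 * math.pi * m_chi**2)

# Illustrative benchmark: <sigma v> = 1e-23 cm^3/s, J = 1e18 GeV^2/cm^5 sr,
# M_chi = 1 PeV, and an assumed 10 photons per annihilation in band.
flux = annihilation_flux(1e-23, 1e6, 1e18, 10.0)
```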
Even though DM annihilation can occur anywhere that DM is present, the signature from DM-rich regions will be brighter.
For instance, dwarf spheroidal galaxies (dSphs) in the Local Group are among the best targets for DM study because of their high mass-to-light ratios (implying high DM density; e.g., $M/L \sim 3400 M_{\sun}/L_{\sun}$ for Segue 1; \citealp{Simon2011}), close proximity, and absence of bright nearby background sources.
The $\gamma$-rays that could be arriving at Earth from DM annihilations would be detectable with $\gamma$-ray space telescopes and ground-based observatories, enabling indirect searches for DM.
The self-annihilation of UHDM can produce $\gamma$-rays from around a TeV to above a PeV, containing the energy band in which the ground-based $\gamma$-ray observatories have better sensitivity than space-based instruments.
There are two classes of ground-based Very-High-Energy (VHE; $>$100 GeV) $\gamma$-ray observatories: Imaging Atmospheric Cherenkov Telescope arrays (IACTs) and Extended Air Shower arrays (EAS).
IACTs use reflecting dishes and fast cameras (generally using photomultiplier tubes, or PMTs) to reconstruct the Cherenkov light stimulated by the air showers triggered by TeV $\gamma$-rays as they interact with Earth's atmosphere.
Current-generation EAS arrays are made of water tanks, in which optical detectors (generally PMTs) directly detect the Cherenkov radiation from charged air shower particles. Both types of instrument can reconstruct TeV $\gamma$-rays~\citep{doi:10.1146/annurev-nucl-102014-022036}.
Both have been used for indirect DM searches, with a particular focus on searches for electroweak-scale WIMPs \citep[e.g.,][]{dm_magic, dm_veritas, dm_hess, dm_hawc, dm_magic2, hawc_dm_halo}.
In addition to those $\gamma$-ray observatories, neutrino observatories have also searched for an indirect DM signal \citep[e.g.,][]{icecube_dm, Albert2022}.
In this paper, we explore the feasibility of detecting a UHDM annihilation signature from dSphs with current and future ground-based VHE $\gamma$-ray observatories.
To this end, we use only publicly available resources.
We also compute expected upper limits (ULs) for a UHDM particle with a mass from 30 TeV to 30 PeV, assuming that no UHDM signal is detected.
We take Segue 1, one of the local classical dSphs, as our benchmark target, because it has been widely used for indirect DM searches, making it possible to place our results in the context of the existing limits at lower masses \citep[e.g.,][]{dm_veritas, dm_magic}. Furthermore, it has good visibility (in terms of zenith angle of observation) for all of the instruments discussed in this work.
We consider three instruments: the Very Energetic Radiation Imaging Telescope Array System (VERITAS; IACT), the Cherenkov Telescope Array (CTA; IACT), and the High-Altitude Water Cherenkov Observatory (HAWC; EAS array).
For VERITAS and HAWC, we do not access the official instrument response functions (IRFs)\footnote{The IRFs describe the mapping between the true and detected flux, primarily consisting of the effective area, point spread function, and energy dispersion matrix, each of which will differ between experiments.} and/or observed background spectra, but rather make reasonable assumptions based on publicly available information, and introduce a VERITAS-like and a HAWC-like instrument.
The remaining discussion is organized as follows.
In Section~\ref{sec:theory}, we present the theoretical motivations for UHDM searches, with a particular focus on the experimentally accessible parameter space.
The data acquisition and processing for each instrument is detailed in Sec.~\ref{sec:data}, with the methods used to calculate the projected sensitivity and ULs for each instrument outlined in Sec.~\ref{sec:method}.
We present our results in Sec.~\ref{sec:result}, and the studies on the systematic and statistical uncertainties are discussed in Sec.~\ref{sec:discussion}.
Our conclusions are reserved for Sec.~\ref{sec:summary}.
\vspace{0.3in}
\section{Theoretical Motivation}\label{sec:theory}
Theoretical arguments for DM have often downplayed the ultraheavy mass regime.
The prejudice against heavier masses arises from the so-called unitarity limit of \cite{Griest:1989wd}, which is based on the following ``bottom-up'' argument.
The naive expectation is that DM annihilation rates for point-like particles will scale as $\langle \sigma v \rangle \sim C/M_{\chi}^2$, where $M_{\chi}$ is the particle mass, and $C$ is a dimensionless parameter.
For a thermal relic, this cross section is what depletes the DM abundance away from its equilibrium value once the temperature of the Universe drops below $M_{\chi}$, and so we expect $\Omega_{\chi} \propto 1/\langle \sigma v \rangle$.
Accordingly, for too-large $M_{\chi}$, DM cannot destroy itself with enough vigor, and the Universe overcloses.
One can boost the size of $C$, but only up to an amount allowed by unitarity.
DM as a {\it simple} self-annihilating thermal relic is only possible for masses up to $\sim$194 TeV \citep{Smirnov:2019ngs}.
We show this upper limit in Fig.~\ref{fig:lim}; 194 TeV is an updated value of the conservative bound from \cite{Griest:1989wd} (those authors used $\Omega_\chi h^2 = 1$, as opposed to the current measurement of $\Omega_\chi h^2 = 0.12$ given by \citealp{Planck:2018vyg}).
To derive $M_{\chi} \lesssim 194~{\rm TeV}$, one assumes that the annihilation rate saturates the unitarity limit ($\langle \sigma v \rangle \propto 1/v$; cf. Eq.~\ref{eq:unitlim} with $J=0$) for the entire relevant history of the DM.
A rate that scales inversely with velocity is typically found only at low velocities and in the presence of a long-range force, as in the celebrated case of Sommerfeld enhancement.
As discussed below, it is difficult to model-build a scenario where the cross section is maximally large, but where the DM continues to behave as a simple elementary particle.
Typically, bound state and compositeness effects will enter in this limit.
For such reasons, in \cite{Griest:1989wd}, the authors felt the above cross-section scaling was overly conservative.
Instead, they assumed that the cross section was dominantly $S$-wave ($\langle \sigma v \rangle \propto v^0$) but with a maximum value still set by unitarity (as given in Eq.~\ref{eq:unitlim}).
Using this, and assuming $\Omega_{\chi} h^2 = 1$, they derived the well-known upper limit of 340 TeV.
Repeating their calculation for $\Omega_{\chi} h^2 = 0.12$, the bound is reduced to $M_{\chi} \lesssim 116~{\rm TeV}$.
Nevertheless, we will adopt the more conservative value of 194 TeV in our results.
It involves the fewest assumptions about the early Universe, but amounts to assuming that DM finds a way to annihilate at the limiting cross-section value throughout the era that set its relic abundance.
The presence of additional structure in either the DM particles themselves or the final states they capture into can weaken even this conservative limit, though.
For example, if capture into bound states is possible, then selection rules can open up annihilation channels into higher partial waves.
The total relic abundance of DM is necessarily set by the sum over all channels, but each partial wave respects the limit from unitarity,
\begin{equation}
\sigma_J \leq \frac{4\pi\, (2J+1)}{M_\chi^2 v_{\rm rel}^2}.
\label{eq:unitlim}
\end{equation}
As discussed in \cite{Bottaro:2021snn}, even for the straightforward scenario of thermal relics that are just multiplets of the electroweak group SU(2)$_L$, this allows DM consistent with unitarity up to $\sim$325 TeV.
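For orientation, the bound in Eq.~\eqref{eq:unitlim} can be cast in physical units after multiplying by $v_{\rm rel}$ to form $\langle\sigma v\rangle$. A minimal sketch, using standard natural-unit conversion factors and the dwarf-galaxy velocity $v_{\rm rel} \sim 2\times10^{-5}$ adopted below:

```python
import math

HBARC_GEV_CM = 1.97327e-14    # hbar * c in GeV cm
C_CM_PER_S = 2.99792458e10    # speed of light in cm/s
# converts a cross section in GeV^-2 times v (in units of c) to cm^3/s
NATURAL_TO_CM3_S = HBARC_GEV_CM**2 * C_CM_PER_S

def sigma_v_unitarity_bound(m_chi_gev, v_rel, j=0):
    """Partial-wave unitarity bound on sigma_J, multiplied by v_rel
    to give the maximum <sigma v>, returned in cm^3/s."""
    return (4.0 * math.pi * (2 * j + 1)
            / (m_chi_gev**2 * v_rel) * NATURAL_TO_CM3_S)

# S-wave bound at the 194 TeV mass limit and dwarf-galaxy velocities
bound = sigma_v_unitarity_bound(1.94e5, 2e-5)
```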
It would seem uncontroversial to analyze the full regime that allows this simple scenario.
The bound can be relaxed further: as mentioned above, the unitarity limit of roughly one hundred TeV assumes a point-like particle, a caveat explicitly recognized in the classic 1990 reference on the matter \citep{Griest:1989wd}.
If, however, DM is a composite particle, then the relevant dimensionful scale that sets the annihilation rate can be its geometric size, $R$, which may be much larger than its Compton wavelength $\sim 1/M_{\chi}$.
It is thus possible to realize a thermal-relic scenario for masses $\gg$ 100 TeV ({\it e.g.,} the example of \citealp{Harigaya:2016nlg} discussed above).\footnote{Alternatively, to get to very high masses, one can decouple the DM abundance from its annihilation rate. In this approach, one forfeits the WIMP-miracle in favor of an alternate cosmological history.
As an example, some other particle could populate the Universe, which ultimately decays to the correct quantity of DM ({\it cf.}~\citealp{Carney:2022gse} for discussion and references).
If DM is non-thermal, then additional structure is needed for detection. One straightforward possibility is to construct DM that is cosmologically stable, but decays with an observable rate ({\it e.g.}~\citealp{Kolb1999}).}
For pointing telescopes like VERITAS, HESS, or CTA to have a discovery advantage, one needs a scenario, like compositeness, with non-negligible DM annihilation, since the resulting flux will scale like $\rho^2$.
Bound-state particles with a heavy constituent, whether obtained as thermal relics or by a more complicated cosmology, provide a means to get annihilation rates $\langle \sigma v \rangle \, \gg \, C_{\rm unitary}/M_{\chi}^2$, where $C_{\rm unitary}$ is the largest factor consistent with quantum mechanics in a single partial wave.
One may therefore consider this as a generalization of the ``sum over partial waves'' loophole we first mentioned in the bound-state capture scenario.
As we see in Fig.~\ref{fig:lim}, there is a large region of parameter space beyond the point-like unitarity limit.
Furthermore, we project that the limits from CTA will surpass those from HAWC out to several PeV, and that these instruments are well primed for testing such models.
The generic possibility of a geometric cross section for composite particles can be seen with atomic (anti)hydrogen, as pointed out in \cite{Geller:2018biy}, whose arguments we briefly recap.
In a hydrogen-antihydrogen collision, an interaction with a geometric cross section is the ``rearrangement'' reaction, which produces a protonium ($p \bar p$) $+$ positronium ($e^+ e^-$) final state.
Partial-wave by partial-wave, unitarity is naturally respected.
However, summing over all allowed angular momenta gives
\begin{equation}
\sigma \, \sim \, \sum_{J = 0}^{J_{\rm max}} \sigma_J \, \sim \, \frac{4\pi}{k_i^2} \sum_{J = 0}^{J_{\rm max}} (2J + 1) \,\sim\, \frac{4\pi}{k_i^2} J_{\rm max}^2 \,\sim\, 4\pi R^2,
\label{eq:unitsc}
\end{equation}
where $k_i$ is the initial momentum, $R$ is the size of the particle, and $J_{\rm max}$ is set by angular momentum conservation and the classical value $(k_i \, R)$.\footnote{For this parametric estimate, we are taking $J_{\rm max} \sim L_{\rm max} \sim k_i \, R$. Strictly, $k_i \, R$ is bounding the orbital angular momentum in the collision.
Also, Eq.~\eqref{eq:unitsc} assumes a kinetic energy, $E_i = k_i^2/2M_{\chi}$ comparable or larger than the incoming particle's binding energy, $E_b$.
If $E_i \ll E_b$, then only the $S$-wave will contribute and the cross section $\sigma \sim R/k_i$.
Since this involves just a single partial wave, we therefore cannot use a sum with many terms to exceed the point-particle unitarity limit.}
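The counting behind Eq.~\eqref{eq:unitsc} is easy to verify numerically; the value of $k_i R$ used below is purely illustrative:

```python
import math

def geometric_cross_section(k, radius):
    """Sum of saturated partial-wave cross sections up to J_max ~ k*R."""
    j_max = int(k * radius)
    partial_sum = sum(2 * j + 1 for j in range(j_max + 1))  # = (J_max + 1)^2
    return 4.0 * math.pi / k**2 * partial_sum

# For k*R >> 1, the summed cross section approaches the geometric 4*pi*R^2
radius = 1.0
sigma = geometric_cross_section(100.0, radius)
ratio = sigma / (4.0 * math.pi * radius**2)  # (101/100)^2 = 1.0201
```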
Importantly, a parametric enhancement in the cross section has been achieved by saturating each partial-wave bound up to $J_{\rm max}$.
Whichever partial wave the protonium is captured into, it will ultimately decay down the spectroscopic ladder until reaching the lowest-allowed-energy state, at which point it annihilates.
For a generic scenario with the dark sector charged under the SM, the entire process of capture, decay, and annihilation is prompt on observational timescales.
An ultraheavy dark-hydrogen thus provides a proof of concept for a ``detection-through-annihilation'' scenario.
The argument for geometric scaling generalizes, though, to include states bound by strong dynamics \citep{Kang:2006yd,Jacoby:2007nw}.
Thus, DM may be more like an ultraheavy $B$-meson \cite[as studied by][]{Geller:2018biy}, or a gluequark \citep[adjoint fermion with color neutralized by cloud of dark gluons;][]{Contino2019}, heavy-light baryon \citep{Harigaya:2016nlg}, {\it etc.}
For a complete scenario, one would necessarily need an explanation for why these heavy-constituent composites came to be the DM with the right abundance.
Nonetheless, the physics behind their ability to annihilate with an effective rate far above the point-particle unitarity limit is straightforward.
Therefore, models with dynamics not-too-different from the SM can realize annihilating particle DM all the way to the Planck scale, and should be tested.
With the above in mind, in Fig.~\ref{fig:lim}, we outline basic theoretical aspects of the parameter space we will consider \citep[\textit{cf.}][]{ANTARES:2022aoa}.
Firstly, we see that the majority of the mass range probed is above the conventional unitarity limit.
Next, the curve we label ``Partial-Wave Unitarity'' represents the largest present-day annihilation cross section consistent with the same point-particle unitarity constraints that, when applied in the early Universe, yield $M_{\chi} \lesssim 194$ TeV.
In particular, we require $\langle \sigma v \rangle \leq 4\pi/(M_{\chi}^2 v_{\rm rel})$, where we take $v_{\rm rel} \sim 2 \times 10^{-5}$ as an approximate value for the average velocity between DM particles in nearby dwarf galaxies \citep{Martinez:2010xn,McGaugh:2021tyj}.\footnote{We note that the location of the Partial-Wave Unitarity bound strongly depends on the system observed.
A search for DM annihilation within the Milky Way, for instance, would depend on a higher relative velocity, $v_{\rm rel} \sim 10^{-3}$, given the larger mass of our galaxy as compared to its satellites.
This would lower the Partial-Wave Unitarity curve shown in Fig.~\ref{fig:lim} by roughly two orders of magnitude.}
Composite states can readily evade this bound, although as shown by \cite{Griest:1989wd}, even these systems eventually hit a ``Composite Unitarity'' bound, $\langle \sigma v \rangle \leq 4\pi(1+M_{\chi} v_{\rm rel} R)^2/(M_{\chi}^2 v_{\rm rel})$, which for large masses reduces to the result in Eq.~\eqref{eq:unitsc}.
We show this result for different values of $R$ in Fig.~\ref{fig:lim}, and note that for $M_{\chi} \ll R^{-1}$, these results reduce to the point-like unitarity limit.
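A quick numerical check of the two limiting regimes of the Composite Unitarity bound, with illustrative sizes in natural units:

```python
import math

def sigma_v_composite(m, v, r):
    """Composite unitarity bound on sigma*v (natural units, GeV^-2 x c)."""
    return 4.0 * math.pi * (1.0 + m * v * r) ** 2 / (m**2 * v)

def sigma_v_point(m, v):
    """Point-like S-wave unitarity bound on sigma*v."""
    return 4.0 * math.pi / (m**2 * v)

m, v = 1e6, 2e-5               # 1 PeV mass, dwarf-galaxy velocity
r_small, r_large = 1e-9, 1e3   # illustrative sizes in GeV^-1

# M v R << 1: the composite bound reduces to the point-like limit
small_ratio = sigma_v_composite(m, v, r_small) / sigma_v_point(m, v)
# M v R >> 1: the composite bound approaches the geometric 4*pi*R^2 times v
large_ratio = sigma_v_composite(m, v, r_large) / (4.0 * math.pi * r_large**2 * v)
```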
\vspace{0.3in}
\section{Data reduction} \label{sec:data}
\subsection{VERITAS-like instrument}\label{sec:veritas}
VERITAS is an array of four imaging atmospheric Cherenkov telescopes located in Arizona, USA \citep{VERITAS}. One of the VERITAS scientific programs is to search for indirect DM signals from astrophysical objects such as dSphs and the Milky Way Galactic Center \citep{Zitzer2017}. Since it has a similar sensitivity to other IACT observatories like MAGIC and HESS \citep{Park2015, Aleksic2016, Aharonian2006}, we adopt VERITAS as representative of current-generation IACTs.
For our analysis, we take the published IRFs and observed ON and OFF region\footnote{The ON region is defined as the area centered on a target. The OFF region is one or more areas containing no known $\gamma$-ray sources, used for estimating the isotropic-diffuse background rate.} counts from \cite{dm_veritas}. The size of the ON region was 0.03 deg$^2$, and the OFF region was defined by the crescent background method \citep{Zitzer2013}. The relative exposure time between ON and OFF regions ($\alpha$) was 0.131. From 92.0 hrs of Segue 1 observations, the number of observed events from the ON ($N_{\rm on}$) and OFF ($N_{\rm off}$) regions was 15895 and 120826, respectively. We introduce a reference instrument, denoted ``VERITAS-like,'' whose observables are limited to the total $N_{\rm on}$, total $N_{\rm off}$, and $\alpha$ (see App.~\ref{sec:check} for a comparison between VERITAS and VERITAS-like constraints on the DM annihilation cross section). In addition, we scale down the $N_{\rm on}$ and $N_{\rm off}$ values to a nominal observation time of 50 hours.
\vspace{0.1in}
\subsection{CTA}\label{sec:cta}
CTA is the next-generation ground-based IACT array, which is expected to have about 10 times better point-source sensitivity when compared with the current IACT observatories, in addition to a broader sensitive energy range, stretching from 20 GeV to 300 TeV, and two to five times better energy and angular resolution \citep{Bernlohr2013}.
The observatory will be made up of two arrays, providing full-sky coverage: one in the northern hemisphere (CTA-North; La Palma in Spain) and the other in the southern hemisphere (CTA-South; Atacama Desert in Chile). CTA will be equipped with tens of telescopes. In this study, we consider the CTA-North array, from which our target, Segue 1, can be observed. CTA will broaden our understanding of the extreme Universe, including the nature of DM \citep{CTA}, and will be able to probe long-predicted, but so-far untested candidates like Higgsino DM~\citep{Rinchiuso:2020skh}.
The CTA IRFs and background distributions as a function of energy, as well as official analysis tools,\footnote{{\tt Gammapy}, \url{https://gammapy.org/}} are publicly available \citep{CTA_IRFs, Deil2017}.
We assume the alpha configuration (\textit{prod5 v0.1}). In the alpha configuration, the CTA-North array consists of 4 Large-Sized Telescopes (LSTs) and 9 Medium-Sized Telescopes (MSTs).\footnote{\url{https://www.cta-observatory.org/science/ctao-performance/}} To compare with the VERITAS-like instrument, we use the same observation conditions; the size of the ON-region is set to 0.03 deg$^2$ with $\alpha$ of 0.131.
\vspace{0.1in}
\subsection{HAWC-like instrument}\label{sec:hawc}
HAWC, located at Sierra Negra, Mexico, is a $\gamma$-ray and cosmic-ray observatory. The instrument consists of 300 water tanks, each containing about $1.9\times10^5$ L of water and four PMTs. After applying $\gamma$/hadron separation cuts, observed $\gamma$-ray events are divided into analysis bins ($\mathcal{B}_{\rm hit}$) based on the fraction of the number of PMT hits. HAWC observes two-thirds of the sky on a daily basis and has found many previously undetected VHE sources \citep{Albert2020}. In addition, the collaboration has studied 15 dSphs within the HAWC field of view to search for DM annihilation and decay signatures \citep{dm_hawc,HAWC:2017udy}.
The IRFs and observed background spectrum for Segue 1 are not publicly available, so we introduce a ``HAWC-like'' reference instrument based on reasonable assumptions. A dataset including 507 days of observations of the Crab Nebula is publicly available \citep{Abeysekara2017},\footnote{\url{https://data.hawc-observatory.org/datasets/crab_data/index.php}} and the declination angle (Dec.) of the Crab Nebula is not significantly different from that of Segue 1 ($\Delta$Dec.~$\approx$ 6 degrees). Since Dec.~is expected to be one of the key factors determining the shape of the IRFs and background rate, we assume that the background rate and IRFs should be similar for observations of Segue 1 and the Crab Nebula (see App.~\ref{sec:check} for the comparison between HAWC and HAWC-like constraints on the DM annihilation cross section). With the help of the Multi-Mission Maximum Likelihood framework \citep[{\tt 3ML}; ][]{Vianello2015}, we acquire the IRFs and background rate for each $\mathcal{B}_{\rm hit}$ (total of 9 bins) as used in \cite{Abeysekara2017}. We set the radius of an ON region to 0.2 degrees, and the background is calculated from a circular region with a 3-degree radius around the Crab Nebula, providing $\alpha$ of 0.04/9 ($\sim$ 0.004).
\vspace{0.3in}
\section{Analysis methods}\label{sec:method}
\subsection{Ingredients for estimating UHDM signal}
To compute the $\gamma$-ray annihilation flux at the Earth, given in Eq.~\eqref{eq:dm_flux}, we need two ingredients: the photon spectrum for each DM annihilation channel and the DM density profile of the selected target, Segue 1.
As stated, we use the {\tt HDMSpectrum} \citep{HDM} to calculate the expected DM signal because it provides an accurate spectrum for the full mass range we consider.
The annihilation of UHDM produces $\gamma$-rays of energies equal to or less than $M_{\chi}$.
We compute the fraction of the produced energy flux ($F \propto \int E \frac{dN}{dE} dE$) that is observable and the number of expected $\gamma$-ray events ($N \propto \int \frac{dN}{dE} dE$); i.e., the energy flux and $\gamma$-ray counts distribution within the energy band of the current and future VHE $\gamma$-ray observatories ($E \leq$ 100 TeV).
In this work, we consider nine annihilation channels: three charged leptons ($e^{+}e^{-}$, $\mu^{+}\mu^{-}$, and $\tau^{+}\tau^{-}$), two heavy quarks ($t\bar{t}$ and $b\bar{b}$), three gauge bosons ($W^{+}W^{-}$, $ZZ$, and $\gamma\gamma$), and one neutrino ($\nu_e \bar{\nu}_e$).
For the DM density profile, we take a generalized version of the Navarro–Frenk–White (NFW) profile, which is a function of five parameters \citep{Hernquist1990, Zhao1996, GS2015},
\begin{equation} \label{eq:dm_profile}
\rho(r) = \frac{\rho_{s}}{(r/r_s)^{\gamma}[1+(r/r_s)^{\alpha}]^{(\beta-\gamma)/\alpha}},
\end{equation}
where the choice of ($\alpha$, $\beta$, $\gamma$) = (1, 3, 1) recovers the original NFW profile \citep{NFW1997}, and $r_s$ is the scale radius of the DM halo.
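A minimal sketch of Eq.~\eqref{eq:dm_profile}; as a check, the NFW choice $(\alpha, \beta, \gamma) = (1, 3, 1)$ gives $\rho(r_s) = \rho_s/4$:

```python
def rho_gnfw(r, rho_s, r_s, alpha=1.0, beta=3.0, gamma=1.0):
    """Generalized NFW density; defaults recover the original NFW profile."""
    x = r / r_s
    return rho_s / (x**gamma * (1.0 + x**alpha) ** ((beta - gamma) / alpha))

# NFW check: rho(r_s) = rho_s / (1 * 2^2) = rho_s / 4
val = rho_gnfw(1.0, rho_s=1.0, r_s=1.0)
```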
The so-called $J$-factor is defined as the integral of the squared DM density along the los within a region of interest (roi),
\begin{equation}
J = \int_{\rm roi}d\Omega \int_{\rm los}dl\, \rho^2(l\hat{n}).
\end{equation}
The set of five NFW parameters ($\alpha$, $\beta$, $\gamma$, $\rho_s$, and $r_s$) is obtained by fitting the observed kinematic data of the dSphs. Limited data produces large uncertainties in estimates of the $J$-factor, which propagate as a systematic uncertainty when estimating the DM cross section (see Sec.~\ref{sec:discussion}). In a thorough study, \cite{GS2015} obtained a number of parameter sets that adequately describe the data. Among more than 6000 sets for Segue 1, we take one that approximates the median of the $J$-factor (see Table~\ref{tab:nfw}).
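The line-of-sight part of the $J$-factor can be approximated with a simple midpoint rule. The sketch below parametrizes each sightline by its impact parameter with respect to the halo center and truncates the integral at a finite path length, both simplifying assumptions; it is validated against a constant-density sphere, for which the integral is analytic:

```python
import math

def los_squared_integral(rho, b, l_max, n=10_000):
    """Midpoint-rule integral of rho(r)^2 along one line of sight.

    b     : impact parameter of the sightline w.r.t. the halo center
    l_max : half-length of the integration path (set, e.g., by a
            truncation radius)
    """
    h = 2.0 * l_max / n
    total = 0.0
    for i in range(n):
        l = -l_max + (i + 0.5) * h   # position along the sightline
        r = math.hypot(b, l)         # radius from the halo center
        total += rho(r) ** 2 * h
    return total

# Validation: constant-density sphere, central sightline -> rho0^2 * 2 r_max
rho0, r_max = 2.0, 1.0
j_los = los_squared_integral(lambda r: rho0 if r < r_max else 0.0,
                             b=0.0, l_max=r_max)
```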
Fig.~\ref{fig:ratio} shows the expected number of $\gamma$-ray photons under the conditions stated below (left panel) and the ratio of observable energy flux to total energy flux (right panel) for the nine annihilation channels. For the expected counts distribution, we assume that the effective area is $10^{10}$ cm$^2$, the exposure time is 50 hours, the $J$-factor is 10$^{18}$ GeV$^2$/cm$^5 \,\cdot\,$sr, and the DM cross section is 10$^{-23}$ cm$^3$/s.
This result implies that the current and future observatories, whose sensitive energy ranges extend to 100 TeV, can observe a large portion of the produced $\gamma$-rays and/or energy flux from the UHDM annihilation, up to $M_{\chi}$ of a few PeV.
For the $\gamma \gamma$ channel, the majority of the energy remains in the sharp spectral feature at $E_\gamma \sim M_{\chi}$, and so the energy flux ratio sharply drops once the mass is above 100 TeV and the continuum component becomes dominant. This sharp decrease is not clearly visible in the expected count level because the emission at $E_\gamma \sim M_{\chi}$ produces only about 10\% of the total counts in the high-mass regime.
\begin{table}[h!]
\centering
\begin{tabular}{c c c c c c c}
\hline\hline
$\rho_s$ & $r_s$ & $\alpha$ & $\beta$ & $\gamma$ & $\theta_{\rm max}$ & $J(\theta_{\rm max})$ \\
$[$ \(M_\odot\)$/{\rm pc}^3$ ] & [ pc ] & & & & [ deg ] & [ GeV$^2$/cm$^5 \,\cdot\,$sr ] \\ \hline
$5.1\times10^{-3}$& $2.2\times10^4$ & 1.48 & 8.04 & 0.83 & 0.35 & $2.5\times10^{19}$\\
\hline\hline
\end{tabular}
\caption{The selected parameter set of the generalized NFW profile for Segue 1. The maximum angular distance, $\theta_{\rm max}$, is given by the location of the furthest member star, which provides an estimate of the size of Segue 1.}\label{tab:nfw}
\end{table}
\vspace{0.1in}
\subsection{Projected sensitivity curves}\label{sec:excess}
To explore the feasibility of detection, we compare expected $\gamma$-ray counts from UHDM self-annihilation to background counts. The number of expected signal counts ($N_s$) is obtained by forward-folding Eq.~\eqref{eq:dm_flux} with IRFs,
\begin{equation}\label{eq:dm_signal}
N_s = \int d\Omega\, dE'\, dE\, \frac{dF(E', \hat{n})}{dE'\, d\Omega} R(E, \Omega|E', \Omega'),
\end{equation}
where unprimed and primed quantities represent observed (strictly speaking, reconstructed) and true quantities, respectively. The function $R(E, \Omega|E', \Omega')$ refers to an IRF consisting of three sub-functions: effective area, energy bias, and point spread function. Assuming that the number of ON region events is $N_{\rm on} = N_s+\alpha N_{\rm off}$, we calculate the significance of the UHDM signal by using the so-called Li \& Ma significance \citep[$\mathcal{S}$;][]{Li1983},
\begin{equation}
\mathcal{S} = \sqrt{2} \left\{ N_{\rm on} \ln \left[ \frac{1+\alpha}{\alpha} \left( \frac{N_{\rm on}}{N_{\rm on}+N_{\rm off}} \right) \right] + N_{\rm off} \ln \left[ (1+\alpha) \left( \frac{N_{\rm off}}{N_{\rm on}+N_{\rm off}} \right) \right] \right\}^{1/2}.
\end{equation}
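A direct transcription of the Li \& Ma formula; the negative sign attached to deficits ($N_{\rm on} < \alpha N_{\rm off}$) is an added convenience not present in the expression above:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983) significance of the ON-region excess over background."""
    total = n_on + n_off
    term_on = n_on * math.log((1.0 + alpha) / alpha * n_on / total)
    term_off = n_off * math.log((1.0 + alpha) * n_off / total)
    # Guard against tiny negative rounding, and sign deficits negatively.
    s = math.sqrt(2.0 * max(term_on + term_off, 0.0))
    return math.copysign(s, n_on - alpha * n_off)

s_null = li_ma_significance(100, 1000, 0.1)    # no excess: S = 0
s_excess = li_ma_significance(200, 1000, 0.1)  # clear excess
```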
Finally, for each annihilation channel, we find a set of values of $M_{\chi}$ and $\langle\sigma v\rangle$ for which $\mathcal{S}$ = 5\,$\sigma$.
\subsection{Expected upper limit curves}\label{sec:uls}
To estimate a UL on the UHDM annihilation cross section for a given $M_\chi$, we perform a maximum likelihood estimation (MLE). Since we cannot access the energy distribution of background events for the VERITAS-like instrument, we use a simple likelihood, $\mathcal{L}(\langle\sigma v\rangle; b|D)$, based on the total $N_{\rm on}$ and $N_{\rm off}$ counts and constructed from two Poisson distributions,
\begin{equation}
\begin{aligned}
\mathcal{L} &= \mathcal{P}_{\rm pois} \left( N_s + \alpha b; N_{\rm on} \right) \times \mathcal{P}_{\rm pois}(b; N_{\rm off})\\
&= \frac{ \left( N_s + \alpha b \right) ^{N_{\rm on} } e^{-(N_s+\alpha b)}}{N_{\rm on}!}\frac{b^{N_{\rm off}}e^{-b}}{N_{\rm off}!},
\end{aligned}
\end{equation}
where the nuisance parameter $b$ represents the expected background rate. This likelihood function is expected to be less sensitive compared to a full likelihood function incorporating event-wise energy information, especially at high masses, as it does not utilize any features present in the DM spectrum; see \cite{Aleksic2012} for full discussion of this hindrance. For CTA and the HAWC-like instrument, we perform a binned likelihood analysis,
\begin{equation}
\mathcal{L} = \prod_i \frac{ \left( N_{s, i} + \alpha b_i \right) ^{N_{{\rm on}, i} } e^{-(N_{s, i}+\alpha b_i)}}{N_{{\rm on}, i}!}\frac{b_i^{N_{{\rm off}, i}}e^{-b_i}}{N_{{\rm off}, i}!},
\end{equation}
where $b_i$ is the expected background rate in energy bin $i$.
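In log space the bin-by-bin product becomes a sum of per-bin terms. A minimal sketch of how such a binned Poisson likelihood can be evaluated, assuming independent energy bins with per-bin backgrounds $b_i$ (the function and variable names are ours, not the paper's):

```python
import math

def pois_logpmf(k, mu):
    """Log of the Poisson pmf, valid for mu > 0."""
    return k * math.log(mu) - mu - math.lgamma(k + 1)

def neg_log_like(n_s, b, n_on, n_off, alpha):
    """-ln L for one energy bin: Pois(N_on; n_s + alpha*b) x Pois(N_off; b)."""
    return -(pois_logpmf(n_on, n_s + alpha * b) + pois_logpmf(n_off, b))

def binned_nll(n_s_bins, b_bins, n_on_bins, n_off_bins, alpha):
    """Sum of per-bin -ln L, i.e. the log of the product over bins."""
    return sum(neg_log_like(s, b, on, off, alpha)
               for s, b, on, off in zip(n_s_bins, b_bins, n_on_bins, n_off_bins))
```

Minimizing `binned_nll` over the signal normalization and the nuisance backgrounds yields the MLE used in the following subsection.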
We calculate an expected UL under the assumption that the ON region contains no signal from UHDM self-annihilation but only a Poisson fluctuation around $\alpha \times N_{\rm off}$; i.e., we randomly sample $N_{\rm on}$ from a Poisson distribution with mean $\alpha N_{\rm off}$. For the binned likelihood analysis, we apply the Poisson fluctuation to each background bin to obtain the binned ON-region data. With the synthesized ON-region data, we perform the MLE analysis and calculate an UL on the DM cross section for a given $M_{\chi}$. Throughout this paper, UL refers to the one-sided 95\% confidence interval, which is obtained from the profile likelihood ($\Delta \ln\mathcal{L} = 1.35$). We repeat this procedure to obtain the median and the containment bands for the 95\% UL.
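For the single-bin (VERITAS-like) likelihood, the expected-UL procedure can be sketched end to end: synthesize $N_{\rm on}$, profile the likelihood over the nuisance background $b$ (which here has a closed form, the positive root of a quadratic in $b$), and step the signal up until $\Delta\ln\mathcal{L}=1.35$. The counts, step size, and number of realizations below are illustrative, not the paper's:

```python
import math
import numpy as np

def nll(n_s, b, n_on, n_off, alpha):
    """-ln L for Pois(N_on; n_s + alpha*b) x Pois(N_off; b), constants dropped."""
    mu = n_s + alpha * b
    return -(n_on * math.log(mu) - mu + n_off * math.log(b) - b)

def b_hat(n_s, n_on, n_off, alpha):
    """Conditional MLE of b at fixed signal n_s (root of d lnL / db = 0)."""
    if n_s <= 0:
        return (n_on + n_off) / (1 + alpha)
    A = alpha * (1 + alpha)
    B = (1 + alpha) * n_s - alpha * (n_on + n_off)
    C = -n_off * n_s
    return (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)

def profile_nll(n_s, n_on, n_off, alpha):
    return nll(n_s, b_hat(n_s, n_on, n_off, alpha), n_on, n_off, alpha)

def signal_ul(n_on, n_off, alpha, delta=1.35, step=0.1):
    """One-sided 95% UL on the signal counts from the profile likelihood."""
    s_hat = max(0.0, n_on - alpha * n_off)          # MLE of the signal
    nll_min = profile_nll(s_hat, n_on, n_off, alpha)
    s = s_hat
    while profile_nll(s, n_on, n_off, alpha) - nll_min < delta:
        s += step
    return s

# Expected UL: synthesize N_on as a Poisson fluctuation around alpha*N_off
rng = np.random.default_rng(0)
n_off, alpha = 1000, 0.1
uls = [signal_ul(int(rng.poisson(alpha * n_off)), n_off, alpha)
       for _ in range(200)]
print(np.median(uls))
```

The UL on counts is then converted to an UL on $\langle\sigma v\rangle$ through the predicted signal rate for the given $M_\chi$ and channel.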
\vspace{0.3in}
\section{Results} \label{sec:result}
Here, we present two sets of analysis results: sensitivity curves and expected ULs, as functions of the UHDM particle mass. Since above a few tens of PeV the energy flux ratio for all annihilation channels is less than 10\% (Fig.~\ref{fig:ratio}), we perform the analyses for UHDM masses from 30 TeV up to 30 PeV. Note that all of the following results are based on assumed exposure times of 50 hours for the VERITAS-like instrument and CTA-North, and 507 days for the HAWC-like instrument.
Figure~\ref{fig:sensitivity} shows the sensitivity curves for nine UHDM annihilation channels ($e^{+}e^{-}$, $\mu^{+}\mu^{-}$, $\tau^{+}\tau^{-}$, $t\bar{t}$, $b\bar{b}$, $W^{+}W^{-}$, $ZZ$, $\gamma\gamma$\footnote{Note that for the $\gamma\gamma$ channel, we use a different mass binning so that the lower bound of the sensitivity and upper limit curves is different from those from the other channels. This choice is based on the fact that the delta component in the $\gamma\gamma$ annihilation can be fully addressed only when the mass binning matches the binning of the energy bias matrix ($M_\chi = E_\gamma$).}, and $\nu_e \bar{\nu}_e$) with VERITAS-like (50 hrs; left panel), CTA-North (50 hrs; middle panel), and HAWC-like (507 days; right panel) instruments. Considering the annihilation of an UHDM particle with $M_{\chi}$ of 1 PeV via the $\tau^{+}\tau^{-}$ channel, a HAWC-like instrument is likely to reach $\mathcal{S}$ of 5\,$\sigma$ with the smallest cross section; specifically, a VERITAS-like instrument is expected to detect UHDM for a cross section of $\sim 5\times10^{-19}~{\rm cm}^3/{\rm s}$, CTA-North for $\sim 4\times10^{-19}~{\rm cm}^3/{\rm s}$, and a HAWC-like instrument for $\sim 1\times10^{-19}~{\rm cm}^3/{\rm s}$. However, this sensitivity depends on the annihilation channel and the UHDM mass, not to mention the exposure time. For example, for $M_{\chi}$ of 100 TeV, CTA-North shows, in general, a better sensitivity compared to the other instruments. For the $\gamma \gamma$ channel, a discontinuity in the sensitivity lines can be seen because, as explained earlier, the line-like contribution ($E_\gamma \sim M_\chi$) falls outside the sensitive energy range.
Next, we estimate the ULs on the UHDM annihilation cross section as a function of UHDM particle mass for the same annihilation channels for the three instruments (Fig.~\ref{fig:uls}). The curves represent the median value from 100 realizations generated at each mass. With the assumed observation conditions (e.g., livetime), CTA-North shows the most constraining ULs at lower masses ($M_{\chi} < 1$ PeV), whereas a HAWC-like instrument provides more stringent ULs at higher masses. Note that the UL on the DM cross section is expected to decrease as we increase the exposure time, $\langle\sigma v\rangle_{\rm UL} \propto 1/\sqrt{t}$. As expected from the relative sensitivity between VERITAS and CTA-North, the UL curves from CTA-North are about 10 times lower than those from a VERITAS-like instrument.
In the case of the $\gamma\gamma$ annihilation channel, a discontinuity in the UL curve is again observed at 100 TeV, most strongly for CTA-North. In contrast to the VERITAS-like instrument, it is possible for CTA-North to perform the full binned likelihood analysis by comparing the signal and background energy distributions, which lowers the UL curve (see App.~\ref{sec:check}). Note that in the case of the $\gamma\gamma$ annihilation channel, the two distributions differ more markedly than for the other channels. In the case of the HAWC-like instrument, the energy dispersion matrix for the highest energy bin is relatively broad, which smooths out the discontinuity.
\vspace{0.3in}
\section{Discussion of statistical and systematic uncertainties} \label{sec:discussion}
Here we briefly discuss the impact of statistical and systematic uncertainties on the presented UL curves. For these studies, we consider a single annihilation channel ($t\bar{t}$) for simplicity, although the results are representative of what we expect for the additional channels.
Due to the Poisson fluctuation in the observed counts, statistical uncertainty is inevitable. For this study, we compute the 68\% containment band of expected UL curves for a large number of MC realizations (10,000), using the method described in Sec.~\ref{sec:uls}.
Figure~\ref{fig:statsys_err} shows the statistical uncertainty band for 68\% (shaded region) and 95\% (dashed lines) containment. This figure implies that the Poisson fluctuation can result in 45--55\% statistical uncertainty (at the 1$\sigma$ level) across all masses for the three instruments: VERITAS-like ($\sim$45\%), CTA-North ($\sim$53\%), and HAWC-like ($\sim$54\%).
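The containment bands reduce to percentiles over the ensemble of UL realizations. A toy sketch of this step, using a stand-in lognormal scatter chosen only to illustrate the calculation (it is not fit to the paper's results):

```python
import numpy as np

# Stand-in for 10,000 expected-UL realizations at one UHDM mass
rng = np.random.default_rng(1)
uls = rng.lognormal(mean=0.0, sigma=0.45, size=10_000)

median = np.median(uls)
lo68, hi68 = np.percentile(uls, [16, 84])      # 68% containment band
lo95, hi95 = np.percentile(uls, [2.5, 97.5])   # 95% containment band

# Fractional (1-sigma) statistical uncertainty on the UL
frac = (hi68 - lo68) / (2 * median)
print(frac)
```

Repeating this at every mass point traces out the shaded band and dashed lines of Fig.~\ref{fig:statsys_err}.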
A major systematic uncertainty, beyond that inherent in the IRFs, is the present uncertainty in the DM density profile assumed for Segue 1.
A DM density profile estimated from limited and possibly inaccurate kinematic observations inevitably carries a large uncertainty.
Assumptions and approximations made in the modeling, such as an NFW profile with exact spherical symmetry, can also introduce systematic uncertainties.
In addition, the stellar sample selected when fitting the DM density profile affects the $J$-factor significantly, such that any ambiguity in the sample selection, for example due to contamination from foreground stars or stellar streams, can lead to an overestimated $J$-factor.
The magnitudes of these systematic uncertainties differ from dSph to dSph and depend on the definition of the DM density profile. For further discussion of this uncertainty, see \cite{Bonnivard2015a, Bonnivard2015b}.
As mentioned earlier, \cite{GS2015} provide more than 6000 viable parameter sets for Segue 1, and we compute $10^{4}$ expected UL curves by randomly sampling from these parameter sets. In this work, we use the parameter sets to estimate the systematic uncertainty on an expected UL curve due to uncertainty on the $J$-profile. Note that in this study, we do not include the Poisson fluctuation of the simulated ON region counts; i.e., $N_{{\rm on}, i}$ is equal to $\alpha N_{{\rm off}, i}$. Finally, we take ULs corresponding to the 68\% and 95\% containment for each mass (Fig.~\ref{fig:statsys_err}). This figure implies that, for Segue 1, the uncertainty on the $J$-factor can increase or decrease an UL curve by a factor of 2 (1\,$\sigma$ level) across all masses, regardless of instrumental properties, at a level comparable to the statistical uncertainties seen in Fig.~\ref{fig:statsys_err}. Note that \cite{Bonnivard2016} claimed that the $J$-factor may be overestimated by about two orders of magnitude due to the stellar sample selection bias. However, the accurate prediction of the Segue 1 $J$-profile is beyond the scope of this paper.
\vspace{0.3in}
\section{Summary and Outlook}\label{sec:summary}
In this work, we have explored the potential of current and future $\gamma$-ray observatories to extend the search for DM beyond the unitarity bound.
Our results allow one to determine whether discovery of an UHDM candidate of a given mass and annihilation cross section is within reach. Furthermore, we provide an estimate of the constraints that can be derived on the UHDM annihilation cross section by current and future $\gamma$-ray observatories, assuming a non-detection.
Returning to Fig.~\ref{fig:lim}, we can place our obtained limits in the context of theoretical constraints on the allowed annihilation cross section of UHDM. All instruments considered can probe realistic cross sections for composite UHDM particles whose annihilation respects partial-wave unitarity. For the given exposure times (50 hours for CTA-North and a VERITAS-like instrument, and 507 days for a HAWC-like instrument), CTA-North is projected to provide the most constraining limits, probing scales down to $R = (10~{\rm GeV})^{-1}$ for UHDM with a mass around 300~TeV. At higher masses, above 1~PeV, HAWC-like limits become the most constraining, reaching scales around $R = (1~{\rm GeV})^{-1}$ at 10~PeV. The VERITAS-like limits, while less constraining, remain within an order of magnitude of those of CTA-North and a HAWC-like instrument over the entire mass range (with a slight advantage over the HAWC-like instrument at masses below 100~TeV).
This work draws attention to the exploration of DM beyond the conventional parameter range. The results we have derived are indicative, using reasonable assumptions about the data and IRFs for current-generation instruments, as well as realistic exposure times for current and future instruments. We hope that this work illustrates the interest and feasibility of searches for UHDM with the current-generation $\gamma$-ray instruments, and the value of considering such searches for future observatories. The phase space that can be probed, in terms of DM particle mass and annihilation cross section, is a relevant one for models predicting composite UHDM. This parameter space is currently unconstrained, but could be probed with archival datasets from current-generation $\gamma$-ray instruments, including HAWC, VERITAS, and other IACTs.
\vspace{0.1in}
\begin{acknowledgments}
{\it Acknowledgments.}
Our work benefited from discussions with Michael Geller, Diego Redigolo, and Juri Smirnov.
We would like to thank Alex Geringer-Sameth, Savvas M. Koushiappas, and Matthew Walker for providing the parameter sets for the $J$-factors.
This research has made use of the CTA instrument response functions provided by the CTA Consortium and Observatory; see https://www.cta-observatory.org/science/cta-performance/ (version prod5 v0.1; [citation]) for more details.
D. Tak and E. Pueschel acknowledge the Young Investigators Program of the Helmholtz Association, and additionally acknowledge support from DESY, a member of the Helmholtz Association HGF. M. Baumgart is supported by the DOE (HEP) Award DE-SC0019470.
\end{acknowledgments}
\bibliographystyle{aasjournal}
\bibliography{references}
\appendix
\section{Comparing reference and real instruments}\label{sec:check}
In this appendix, we perform a consistency check showing that our two reference instruments (VERITAS-like and HAWC-like) can qualitatively reproduce the published results from VERITAS \citep[92.0 hrs for Segue 1;][]{dm_veritas} and HAWC \citep[507 days for Segue 1;][]{dm_hawc}. In particular, we compute an expected UL band and then compare it with the corresponding published UL curves. At the outset, we emphasize that because the ingredients for estimating the DM annihilation signal (e.g., the DM density profile) as well as the method for computing ULs differ from those of the publications, we do not expect complete consistency.
As described in Sec.~\ref{sec:uls}, each expected UL curve can be obtained from a single MC simulation, and an UL band is based on 300 realizations.
The consistency check is performed for the $b\bar{b}$ and $\tau^{+}\tau^{-}$ annihilation channels because both \cite{dm_veritas} and \cite{dm_hawc} provide those UL curves for Segue 1.
In Fig.~\ref{fig:sanityCheck} we show the comparison of the expected UL bands and the published UL curves for the two instruments, VERITAS-like and HAWC-like.
In the case of the VERITAS-like instrument, the published UL curve and the expected UL band are consistent at lower masses ($M_{\chi} \sim 1-10~{\rm TeV}$). However, as the DM mass increases, the two UL curves deviate. As mentioned earlier and discussed in \cite{Aleksic2012}, the deviation is readily explained by the fact that the likelihood function used for the VERITAS-like instrument lacks sensitivity at high masses. This could be resolved by performing the full binned/unbinned likelihood analysis with the actual dataset; however, moving to the full likelihood is beyond the scope of this paper.
Meanwhile, the result for the HAWC-like instrument shows greater consistency with the published result in the high-mass regime ($M_{\chi}\gtrsim 10$ TeV). The discrepancy at lower masses arises mostly from the assumptions we adopted on the HAWC-like instrument, described in Sec.~\ref{sec:hawc}; in particular, we have assumed the HAWC observation of Segue 1 exactly matches that of the Crab Nebula, for which we used the available observed background values and IRFs.
Title:
Schrodinger's Galaxy Candidate: Puzzlingly Luminous at $z\approx17$, or Dusty/Quenched at $z\approx5$?

Abstract: $JWST$'s first glimpse of the $z>10$ Universe has yielded a surprising abundance of luminous galaxy candidates. Here we present the most extreme of these systems: CEERS-1749. Based on $0.6-5\mu$m photometry, this strikingly luminous ($\approx$26 mag) galaxy appears to lie at $z\approx17$. This would make it an $M_{\rm{UV}}\approx-22$, $M_{\rm{\star}}\approx5\times10^{9}M_{\rm{\odot}}$ system that formed a mere $\sim220$ Myrs after the Big Bang. The implied number density of this galaxy and its analogues challenges virtually every early galaxy evolution model that assumes $\Lambda$CDM cosmology. However, there is strong environmental evidence supporting a secondary redshift solution of $z\approx5$: all three of the galaxy's nearest neighbors at $<2.5$" have photometric redshifts of $z\approx5$. Further, we show that CEERS-1749 may lie in a $z\approx5$ protocluster that is $\gtrsim5\times$ overdense compared to the field. Intense line emission at $z\approx5$ from a quiescent galaxy harboring ionized gas, or from a dusty starburst, may provide satisfactory explanations for CEERS-1749's photometry. The emission lines at $z\approx5$ conspire to boost the $>2\mu$m photometry, producing an apparent blue slope as well as a strong break in the SED. Such a perfectly disguised contaminant is possible only in a narrow redshift window ($\Delta z\lesssim0.1$), implying that the permitted volume for such interlopers may not be a major concern for $z>10$ searches, particularly when medium-bands are deployed. If CEERS-1749 is confirmed to lie at $z\approx5$, it will be the highest-redshift quiescent galaxy, or one of the lowest mass dusty galaxies of the early Universe detected to-date. Both redshift solutions of this intriguing galaxy hold the potential to challenge existing models of early galaxy evolution, making spectroscopic follow-up of this source critical.

https://export.arxiv.org/pdf/2208.02794
\begin{CJK*}{UTF8}{gbsn}
\title{Schrodinger's Galaxy Candidate: Puzzlingly Luminous at $z\approx17$, or Dusty/Quenched at $z\approx5$?
}
\correspondingauthor{Rohan P. Naidu, Pascal A. Oesch}
\email{rohan.naidu@cfa.harvard.edu, pascal.oesch@unige.ch}
\author[0000-0003-3997-5705]{Rohan P. Naidu}
\affiliation{Center for Astrophysics $|$ Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA}
\author[0000-0001-5851-6649]{Pascal A. Oesch}
\affiliation{Department of Astronomy, University of Geneva, Chemin Pegasi 51, 1290 Versoix, Switzerland}
\affiliation{Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, Jagtvej 128, K\o benhavn N, DK-2200, Denmark}
\author[0000-0003-4075-7393]{David J. Setton}
\affiliation{Department of Physics and Astronomy and PITT PACC, University of Pittsburgh, Pittsburgh, PA 15260, USA}
\author[0000-0003-2871-127X]{Jorryt Matthee}
\affiliation{Department of Physics, ETH Z\"urich, Wolfgang-Pauli-Strasse 27, 8093 Z\"urich, Switzerland}
\author[0000-0002-1590-8551]{Charlie Conroy}
\affiliation{Center for Astrophysics $|$ Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA}
\author[0000-0002-9280-7594]{Benjamin D. Johnson}
\affiliation{Center for Astrophysics $|$ Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA}
\author[0000-0003-1614-196X]{John.~R.~Weaver}
\affiliation{Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA}
\author[0000-0002-4989-2471]{Rychard J. Bouwens}
\affiliation{Leiden Observatory, Leiden University, NL-2300 RA Leiden, Netherlands}
\author[0000-0003-2680-005X]{Gabriel B. Brammer}
\affiliation{Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, Jagtvej 128, K\o benhavn N, DK-2200, Denmark}
\author[0000-0001-8460-1564]{Pratika Dayal}
\affiliation{Kapteyn Astronomical Institute, University of Groningen, 9700 AV Groningen, The Netherlands}
\author[0000-0002-8096-2837]{Garth Illingworth}
\affiliation{Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064, USA}
\author[0000-0003-1641-6185]{Laia Barrufet}
\affiliation{Department of Astronomy, University of Geneva, Chemin Pegasi 51, 1290 Versoix, Switzerland}
\author[0000-0002-5615-6018]{Sirio Belli}
\affiliation{Dipartimento di Fisica e Astronomia, Università di Bologna, via Gobetti 93/2, 40122 Bologna, Italy}
\author[0000-0001-5063-8254]{Rachel Bezanson}
\affiliation{Department of Physics and Astronomy and PITT PACC, University of Pittsburgh, Pittsburgh, PA 15260, USA}
\author[0000-0002-0974-5266]{Sownak Bose}
\affiliation{Institute for Computational Cosmology, Department of Physics, Durham University, Durham, DH1 3LE, UK}
\author[0000-0002-9389-7413]{Kasper E. Heintz}
\affiliation{Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, Jagtvej 128, K\o benhavn N, DK-2200, Denmark}
\author[0000-0001-6755-1315]{Joel Leja}
\affiliation{Department of Astronomy \& Astrophysics, The Pennsylvania
State University, University Park, PA 16802, USA}
\affiliation{Institute for Computational \& Data Sciences, The Pennsylvania State University, University Park, PA, USA}
\affiliation{Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA}
\author[0000-0002-5757-4334]{Ecaterina Leonova}
\affiliation{GRAPPA, Anton Pannekoek Institute for Astronomy and Institute of High-Energy Physics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands}
\author[0000-0001-8442-1846]{Rui Marques-Chaves}
\affiliation{Department of Astronomy, University of Geneva, Chemin Pegasi 51, 1290 Versoix, Switzerland}
\author[0000-0001-7768-5309]{Mauro Stefanon}
\affiliation{Departament d'Astronomia i Astrof\`isica, Universitat de Val\`encia, C. Dr. Moliner 50, E-46100 Burjassot, Val\`encia, Spain}
\affiliation{Unidad Asociada CSIC "Grupo de Astrof\'isica Extragal\'actica y Cosmolog\'ia" (Instituto de F\'isica de Cantabria - Universitat de Val\`encia)}
\author[0000-0003-3631-7176]{Sune Toft}
\affiliation{Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, Jagtvej 128, K\o benhavn N, DK-2200, Denmark}
\affiliation{Niels Bohr Institute, University of Copenhagen, Jagtvej 128, K\o benhavn N, DK-2200, Denmark}
\author[0000-0002-5027-0135]{Arjen van der Wel}
\affil{Astronomical Observatory, Ghent University, Krijgslaan 281, Ghent, Belgium}
\author[0000-0002-8282-9888]{Pieter van Dokkum}
\affiliation{Astronomy Department, Yale University, 52 Hillhouse Ave,
New Haven, CT 06511, USA}
\author[0000-0001-8928-4465]{Andrea Weibel}
\affiliation{Department of Astronomy, University of Geneva, Chemin Pegasi 51, 1290 Versoix, Switzerland}
\author[0000-0001-7160-3632]{Katherine E. Whitaker}
\affil{Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA}
\affiliation{Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, Jagtvej 128, K\o benhavn N, DK-2200, Denmark}
\keywords{High-redshift galaxies (734), Galaxy formation (595), Galaxy evolution (594), Early universe (435) }
\section{Introduction}
\label{sec:introduction}
The first weeks of \textit{JWST} data have already produced an overwhelming number of science publications. At the highest redshifts, \textit{JWST} has finally pushed our observational frontier beyond $z\sim11$, into the last unknown epoch of our cosmic timeline. Several teams have reported a surprisingly large number of particularly luminous sources very early in the Universe's history \citep[e.g.,][]{Naidu22,Castellano22,Donnan22, Atek22,Yan22,Finkelstein22, Harikane22b}.
Some of these early $z>10$ galaxy candidates have proven quite surprising given that the current area surveyed with \textit{JWST}/NIRCam used for these first studies is still rather limited ($<$60 arcmin$^2$). Standard extrapolations of the UV luminosity function's (UV LF) evolution and galaxy simulations predict a much larger survey area is required to yield such a bounty of luminous candidates. Even though the overall evolution of the UV LF at $z>10$ had been debated from the limited information available from \textit{HST} data \citep[e.g.,][]{Oesch18,McLeod16}, evidence had been growing for a differential evolution of the galaxy population: a high number density of the most UV-luminous sources seems to be in place rather early \citep[e.g.,][]{Stefanon19,Bowler20,Morishita20,Harikane22,Bagley22}, while the number density of the fainter galaxy population continues to decline at $z>8$ \citep[e.g.,][]{Oesch18,Ishigaki18,Bouwens22b}.
Theoretical models based on forming galaxies embedded in dark matter halos have not yet successfully reproduced this differential evolution, should it be confirmed. Most models thus underpredict the number density of the most UV-luminous observed galaxies \citep[e.g.,][]{Bowler20,Leethochawalit22GLASS,Finkelstein22}. Already with \textit{HST}, the discovery of GN-z11 in a very small search volume was very puzzling \citep{Oesch16,Waters16,Mutch16}. Now, the first results from \textit{JWST} imaging are further challenging models.
However, before changing the theoretical models, it is important to first confirm the high-redshift nature of these galaxies. Since \textit{JWST} provides us with the first-ever deep, high-resolution view of the Universe at $>2$ \micron, it is conceivable that we are discovering other, hitherto unknown galaxy populations. In particular, the most distant Lyman break galaxies (LBGs) are typically selected based on a single spectral feature: the complete break at the redshifted Ly$\alpha$ line due to absorption by the neutral intergalactic medium. This break can be confused with a Balmer break at lower redshift \citep[e.g.,][]{Vulcani17}, or with extremely strong emission line sources \citep[e.g.,][]{Atek11,vanderWel11}, especially when limited filter coverage is available longward of the break \citep[e.g.,][]{Brammer13}. Additionally, dust-obscured sources can cause very red colors, rendering galaxies undetected at shorter wavelengths, even in \textit{JWST} data \citep[e.g.,][]{Barrufet22,Labbe22,Nelson22,Fudamoto22}. However, such sources are usually guarded against in LBG selections by requiring blue SEDs longward of the break. Nevertheless, when selecting high-redshift galaxies in a novel regime, one has to be cautious.
Here, we present a detailed analysis of an extremely luminous $z\sim17$ galaxy candidate identified in the Early Release Science (ERS) program CEERS \citep[][]{Finkelstein22}. The source was first mentioned in \citet{Naidu22} and presented as a likely $z=16.6$ candidate in \citet{Donnan22} as well as \citet{Harikane22b}. Here, we present a detailed analysis of this source, including arguments for a possible lower redshift solution of this galaxy at $z\sim5$ \citep[see also][]{Zavala22}.
This paper is organized as follows. \S\ref{sec:data} describes the dataset and sample selection. In \S\ref{sec:results} we present our results, followed by a discussion of the implications in \S\ref{sec:discussion} and a summary in \S\ref{sec:summary}.
Magnitudes are in the AB system \citep[e.g.,][]{Oke83}. For summary statistics we report medians along with 16$^{\rm{th}}$ and 84$^{\rm{th}}$ percentiles. We adopt a \citet{Planck2015} cosmology.
\section{Data \& Methods}
\label{sec:data}
\begin{deluxetable}{lr}
\label{table:photometry}
\tabletypesize{\footnotesize}
\tablecaption{Photometry in units of nJy -- \textit{JWST}/NIRCam followed by \textit{HST}/WFC3.}
\tablehead{ \colhead{Band} & \colhead{\hspace{2cm} CEERS-1749} }
\startdata
\vspace{-0.2cm} \\
$F$115$W$ & -1$\pm$5\\
$F$150$W$ & 3$\pm$6\\
$F$200$W$ & 26$\pm$3\\
$F$277$W$ & 127$\pm$5\\
$F$356$W$ & 110$\pm$5\\
$F$410$M$ & 107$\pm$9\\
$F$444$W$ & 91$\pm$5\\
\hline
$F$606$W$ & 5$\pm$6\\
$F$814$W$ & 5$\pm$8\\
$F$125$W$ & 5$\pm$11\\
$F$140$W$ & 9$\pm$24\\
$F$160$W$ & -4$\pm$10\\
\enddata
\tablecomments{We set an error floor of $20\%$ on the fluxes for all \texttt{EAZY} and \texttt{Prospector} fits to conservatively account for systematic uncertainty that is not reflected in the errors here.}
\end{deluxetable}
\subsection{Imaging Data and Catalogs}
The data and analysis methods used here are the same as in \citet[][]{Naidu22}. We refer the reader to that paper for details. Briefly, this work is based on some of the first \textit{JWST}/NIRCam imaging datasets from the Early Release Science (ERS) programs CEERS \citep{Finkelstein22} and GLASS \citep{Treu22}. In particular, the sources analyzed in detail here were identified in the CEERS NIRCam images that span $\sim$40 arcmin$^2$.
The stage 2, calibrated images were obtained from the MAST archive and processed with the \texttt{grizli} pipeline, which performs WCS alignment and image mosaicking. Additionally, \texttt{grizli} masks `snowballs' and mitigates the 1/f noise, both of which are most prominent in the short-wavelength data \citep{Rigby22}. The \textit{JWST} data used here include the six wide filters F115W, F150W, F200W, F277W, F356W, and F444W and the medium-band filter F410M. The 5$\sigma$ depth as measured in empty circular sky apertures of 0\farcs32 diameter ranges from 28.5 to 28.9 mag in the wide filters.
Additionally, we include ancillary \textit{HST} data available in the AEGIS field, most notably from the CANDELS survey \citep{Koekemoer11,Grogin11}. Specifically, we use a re-reduction of the ACS F606W and F814W images, in addition to F125W, F140W, and F160W taken with WFC3/IR. All images were drizzled at 40mas and aligned to the GAIA DR3 catalog.
We use \texttt{SExtractor} to detect sources in the F444W filter and measure multi-wavelength photometry in small circular apertures of 0\farcs32 diameter. These fluxes are then corrected to total using the AUTO flux measurement in our detection band, in addition to applying small corrections for remaining flux lost based on the point-spread functions.
\subsection{NIRCam Zeropoint Uncertainties} %
Given possible uncertainties with the NIRCam zeropoints \citep[see, e.g.,][]{Rigby22}, we perform several tests on the current photometry. In particular, we use the available spectroscopic redshifts for galaxies in the CANDELS/EGS field to derive iterative zeropoint corrections with \texttt{EAZY}. The derived corrections depend on the exact choices of parameters; however, they typically remain smaller than $20\%$. To allow for this systematic uncertainty, we set an error floor of 20\% on all photometric bands, while retaining the original pipeline-provided zeropoints. As discussed later, this choice results in the appearance of lower redshift solutions for some de-facto secure very high-redshift candidates.
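In practice such a floor only inflates the reported uncertainties, leaving the fluxes (and hence the zeropoints) untouched. A minimal sketch (the function name is ours), applied to the F277W entry from Table~\ref{table:photometry}:

```python
import numpy as np

def apply_error_floor(flux, err, floor=0.20):
    """Inflate flux uncertainties to at least `floor` x |flux|,
    leaving the flux values themselves unchanged."""
    return flux, np.maximum(err, floor * np.abs(flux))

# F277W and F200W measurements of CEERS-1749 from Table 1 (nJy)
flux, err = apply_error_floor(np.array([127.0, 26.0]), np.array([5.0, 3.0]))
print(err)   # the 5 nJy error on F277W grows to ~25 nJy
```

For bright bands this floor dominates the nominal statistical errors, which is why it opens up lower-redshift solutions for otherwise secure candidates.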
\subsection{Selection of Luminous High-Redshift Galaxy Candidates}
This paper follows an earlier analysis of \citet{Naidu22}, who identified the most luminous galaxy candidates in the existing ERS imaging data, based on optical non-detection criteria and photometric redshift measurements using the \texttt{EAZY} code \citep{Brammer08}. As first noted in that paper, the CEERS field revealed a seemingly reliable, but extremely luminous galaxy candidate with photometric redshift at $z=16.6\pm0.1$ \citep[see also][]{Donnan22, Harikane22b}. Given its extraordinary luminosity, if confirmed to lie at $z\sim17$, this source deserved special attention and analysis. We will discuss it in detail below.
\section{Results}
\label{sec:results}
\subsection{Evidence for a $z\approx17$ solution}
\label{sec:candidates}
We briefly discussed CEERS-1749 as a $z\approx17$ candidate in our search for luminous $z>10$ galaxies presented in \citet[][]{Naidu22}. CEERS-1749 is an extremely luminous (26.3 mag in $F$277$W$) galaxy that is well-detected in all bands at $\gtrsim2\mu$m (Figure \ref{fig:summaryCEERSz17}). It shows a sharp drop-off in flux between F277W and F200W (1.7 mag), is undetected at lower wavelengths (F115W, F150W), and appears to display a blue continuum slope characteristic of early Universe galaxies at longer wavelengths. Both \texttt{EAZY} and the \texttt{Prospector} modeling framework \citep[][]{Johnson21} interpret the break in the SED as being due to total absorption of $<1215$\AA\, rest-frame photons at $z\approx17$ by the neutral IGM. The redshift probability distribution is almost entirely contained at $z>10$ -- $p(z>10)>99.9\%$ -- unless a conservative error floor of $20\%$ on the photometry is adopted (see center and bottom panels of Figure \ref{fig:summaryCEERSz17}, discussed below).
While this candidate satisfied all our quality cuts and search criteria in \citet[][]{Naidu22}, we were concerned that the $F$150$W$ and $F$115$W$ photometry for this source, which are crucial to its candidacy, are based on slightly lower SNR areas of the mosaic, with uneven depth apparent around the object (see e.g., the F115W stamp in Figure \ref{fig:summaryCEERSz17}). This, combined with current NIRCam calibration uncertainties (\citealt{Rigby22}) made judging a stringent flux upper limit difficult. Stringent non-detections in these bands are critical to the source's candidacy because the break in the SED by itself is unable to settle the case. Unlike in the cases of GLASS-z11 and GLASS-z13 \citep[][]{Naidu22,Castellano22}, the drop in flux across immediately adjacent filters is not as dramatic ($\approx5\times$ vs. $\approx10\times$), leaving more room for other possibilities (as we discuss below).
We have since performed a battery of tests to check the robustness of the $z\approx17$ solution. The most crucial of these is the incorporation of \textit{HST} data, in particular, deep imaging in the F160W band (whose zero-point is well-known) where the source is undetected with a stringent $<10$ nJy $1\sigma$ upper-limit, supporting the NIRCam/F150W non-detection that we report here. In fact, the \texttt{EAZY} redshift derived by replacing the bluest \textit{JWST} bands (F115W and F150W) with \textit{HST}/F160W yields $z=16.5$ as the most likely solution. Other notable tests include recovering this source at $z\approx17$ in an entirely independent analysis of the CEERS images (Bouwens et al., in prep.), and setting a conservative error-floor of $\approx20\%$ on \textit{all} fluxes to account for systematic uncertainty in e.g., zeropoints across this analysis.
If confirmed at $z\approx17$, CEERS-1749 would be one of the most luminous galaxy candidates at $z>10$ ($M_{\rm{UV}}\approx-22$), second only to HD1 ($M_{\rm{UV}}\approx-23$, \citealt{Harikane22,Pacucci22}) and comparable to the spectroscopically confirmed GNz11 \citep[][]{Oesch16,Jiang21}. A source of such luminosity simply does not exist at such a potentially early time in a wide swath of empirical and theoretical models of the $z>10$ Universe (discussed further in \S\ref{sec:discussionz17}).
\subsection{Evidence for a $z\approx5$ solution}
\label{sec:lowz}
While the $z\approx17$ solution is formally favored, there is non-zero probability that the source lies at $z\approx5$. We note that our $p(z)$ estimate does not include a prior based on the luminosity function (which is poorly constrained at these epochs and luminosities). So, while the low redshift solution is formally disfavored, it is not ruled out, and the relative probability of the two solutions depends sensitively on our prior belief of the source residing at $z\approx5$ compared to $z\approx17$. In this section we consider the evidence for the lower redshift solution.
\subsubsection{Environmental evidence: a $z\approx5$ protocluster?}
The first argument we consider for the $z\approx5$ solution is based on the local environment and large-scale structure near the source. The galaxy's three nearest neighbors \textit{all} lie at precisely the redshift required by the quiescent galaxy solution, i.e., $z\approx5$ (Figure \ref{fig:z5solution}, top-left). The most massive of these galaxies is the nearest neighbor only $1.7\arcsec$ away at $z_{\rm{EAZY}}=4.9$ whose mass we fit to be a substantial $M_{\rm{\star}}\approx10^{11} M_{\rm{\odot}}$ (via the fiducial \texttt{Prospector} setup described in \citealt[][]{Naidu22}). If CEERS-1749 lies at the same redshift, and therefore a physical separation of $<15$ kpc, it may be an associated satellite galaxy of its massive neighbor analogous to the Magellanic Clouds ($M_{\rm{\star}}\approx5\times10^{9} M_{\rm{\odot}}$, \citealt[][]{vandermarel09}) and the present-day Milky Way ($M_{\rm{\star}}\approx10^{11} M_{\rm{\odot}}$, \citealt[][]{Bland-Hawthorn16}).
CEERS-1749 may also be part of a larger $z\approx5$ protocluster. We find that the $30\arcsec$ region around CEERS-1749 is overdense in galaxies with best-fit photometric redshifts of $z\approx5$ compared to the overall CEERS field (Figure \ref{fig:z5solution}). Across the full 40 arcmin$^{2}$ analyzed we find $\approx300$ galaxies at $4.5<z<5.5$ with SNR$>10$ in all LW bands (F444W, F356W, F277W). Strikingly, in $30\arcsec$ around CEERS-1749 we find $\approx20$ such sources, which translates to a $\approx4\times$ higher density than the field average. Even taking a model-independent point of view, selecting sources akin to CEERS-1749 with a red F200W-F277W color ($>0.75$) that are detected at $<2\sigma$ in F606W (the dropout filter at $z\approx5$) shows an overdensity of $\approx9\times$ in the $30\arcsec$ around the source. Intriguingly, the median redshift of these potential protocluster galaxies ($z_{\rm{cluster}}=4.9$) is an excellent match to the predicted range for a lower-redshift solution.
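The quoted enhancement follows directly from the counts above; a back-of-the-envelope sketch (treating the $30\arcsec$ region as a circular aperture of $30\arcsec$ radius, an interpretation on our part):

```python
import numpy as np

# Counts quoted in the text: ~300 galaxies at 4.5 < z < 5.5 with
# SNR > 10 over the full 40 arcmin^2 field, vs. ~20 within 30"
# of CEERS-1749 (interpreted here as a 30"-radius aperture).
n_field, area_field = 300, 40.0              # counts, arcmin^2
n_local = 20
area_local = np.pi * 0.5**2                  # 30" = 0.5 arcmin radius

sigma_field = n_field / area_field           # surface density, arcmin^-2
sigma_local = n_local / area_local
print(f"overdensity ~ {sigma_local / sigma_field:.1f}x the field average")
```

This recovers the quoted factor-of-a-few ($\approx4\times$) enhancement to within rounding and the aperture definition.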
In models of hierarchical structure formation, protoclusters are expected to be among the first sites of star-formation in the Universe, as they form from the first overdensities that collapse into stars. Intriguingly, in the $z\approx5$ sample proximal to CEERS-1749 we do find galaxies with strong Balmer breaks characteristic of old stellar populations (see bottom row of Figure \ref{fig:z5solution}). A handful of these galaxies have an F200W-F277W color almost as red as CEERS-1749 within errors, and a handful even show a second mode in their redshift probability distributions at $z\approx16$. The existence of such proximal ancient galaxies at the right redshift makes it more plausible that CEERS-1749 may also belong to this protocluster.
\subsubsection{Plausible SEDs at $z\approx5$}
\label{sec:whatisz5}
\begin{deluxetable}{lrrr}
\label{table:miri}
\tabletypesize{\footnotesize}
\tablecaption{Predicted \textit{JWST}/MIRI photometry in units of nJy shows significant differences at longer wavelengths across the scenarios discussed in \S\ref{sec:whatisz5}.}
\tablehead{
\colhead{Band} & \colhead{$z\approx17$} & \colhead{$z\approx5$} & \colhead{$z\approx5$}\\
\colhead{} & \colhead{} & \colhead{Quiescent} & \colhead{Starburst}}
\startdata
\vspace{-0.2cm} \\
F560W & 99$^{+11}_{-10}$ & 133$^{+19}_{-21}$ & 109$^{+22}_{-7}$\\
F770W & 109$^{+55}_{-20}$ & 136$^{+26}_{-24}$ & 83$^{+28}_{-5}$\\
F1000W & 87$^{+68}_{-29}$ & 136$^{+37}_{-32}$ & 84$^{+37}_{-5}$\\
F1280W & 106$^{+93}_{-39}$ & 118$^{+37}_{-27}$ & 169$^{+21}_{-11}$\\
F1500W & 71$^{+102}_{-27}$ & 86$^{+36}_{-21}$ & 91$^{+22}_{-7}$\\
F1800W & 78$^{+107}_{-26}$ & 82$^{+82}_{-23}$ & 199$^{+49}_{-16}$\\
F2100W & 77$^{+96}_{-37}$ & 69$^{+81}_{-21}$ & 228$^{+49}_{-17}$
\enddata
\end{deluxetable}
Taking into account that CEERS-1749 may be a part of a $z\approx4.9$ protocluster, we can refine our models for the source by fixing the redshift to that of the overdensity (a Gaussian with $z=4.9\pm0.1$), which significantly shrinks the parameter space search volume.
Under our fiducial $\approx20\%$ error floor assumption, the galaxies we find are all quiescent systems (see top-row of Figure \ref{fig:altSEDs}). The only data point that this class of solutions struggles to explain is the F277W flux, where the prediction falls short by $\approx50\%$ -- this is why these solutions are disfavored compared to the $z\approx17$ scenario in e.g., Figure \ref{fig:summaryCEERSz17}. Perhaps there is a calibration or zero-point issue in our F277W photometry (though \citealt{Donnan22} report a similar F200W-F277W color as in this work). Or perhaps, the models we have employed here are missing some physics (e.g., line emission from AGN or shocks) that manifests around rest-frame 4300-5300 \AA.
Inspired by this, we add an AGN emission line template (following line ratios from \citealt[][]{Richardson14}, their Table 3) to the quiescent galaxy solution\footnote{https://github.com/bd-j/prospector/blob/agnlines/}. We are also motivated by the finding that ionized gas, likely due to AGN, is routinely observed in quiescent and post-starburst systems \citep[e.g.,][]{Belli19}. The resulting emission lines boost the F277W flux, providing a better fit (center panel, Figure \ref{fig:altSEDs}). However, the required AGN luminosity -- $(L_{\rm{H\beta}}/L_{\rm{\odot}})/(M_{\rm{\star}}/M_{\rm{\odot}}) \approx10^{-2.5}$ -- is $\gtrsim100\times$ higher than expected for the galaxy's stellar mass based on $z\lesssim2$ scaling relations and observations \citep[e.g.,][]{Heckman14}. Nonetheless, the key takeaway from this exercise is that ionized gas, regardless of its origin (e.g., shock heating in the overdense environment), can produce emission-line luminosities and ratios that are atypical for star-forming galaxies and therefore missed by standard models, and such emission may help explain the SED.
Finally, we perform a search by adopting a $10\%$ error-floor on all photometry. This yields a starburst solution (bottom panel, Figure \ref{fig:altSEDs}) in a relatively low-mass ($\approx5\times10^{8} M_{\rm{\odot}}$) system. The SED is dominated by young stars, and shows a Balmer \textit{jump} instead of a Balmer break. The $>2\mu$m photometry is remarkably well-explained by nebular emission -- this is a challenging constraint to match, given the required blue slope and the sensitivity of the F410M medium-band. In bluer bands the predicted flux lies barely under the $\approx1\sigma$ detection limits due to significant dust attenuation ($A_{\rm{5500}}\approx1.2$ mag). It is perplexing why this solution, with a $\chi^{2}$ lower than that of the quiescent solutions, was missed in our broad $z=0-20$ search despite using conservative \texttt{dynesty} sampling settings. A clue might be the extremely tight redshift posterior of $z=4.87^{+0.00}_{-0.02}$. This is consistent with a highly ``spiky" likelihood space, where there is a very precise combination of redshift and nebular parameters that produce the perfect conspiracy of emission lines to mimic the $z\approx17$ system. Constraining the redshift to the overdensity mean, thereby greatly reducing the prior volume, likely helped reveal this solution, but this is a critical issue for future study.
\subsection{Physical Properties}
\label{sec:physical}
We derive physical properties for the source using \texttt{Prospector} \citep[][]{Leja17,Johnson21}. The priors and parameter choices are as described in \citet[][]{Naidu22} (their \S4.1; closely following \citealt{Tacchella22}). Briefly, we fit a non-parametric star-formation history assuming a prior on the redshift that follows the \texttt{EAZY} photometric redshift constraint. For our fiducial run we assume a ``continuity" prior on the star-formation history and a formation redshift of $z=20$ that produces smooth histories disfavoring abrupt jumps from bin to bin \citep[][]{Leja19}. The resulting properties are summarized in Table \ref{table:properties}. In what follows we split the discussion assuming the $z\approx17$ and $z\approx5$ solutions.
\subsubsection{CEERS-1749 at $z\sim17$}
Broadly, CEERS-1749 at $z\sim17$ has properties characteristic of galaxies found at $z\gtrsim8$ \citep[e.g.,][]{Stefanon21,Tacchella22,Leethochawalit22GLASS}, with a star-formation rate expected of its stellar mass and a blue $\beta_{\rm{UV}}\approx-2.3$ \citep[see also \texttt{Bagpipes} fits in][]{Donnan22}. As noted earlier, its $M_{\rm{UV}}\approx-22$ would place it among the most UV-luminous sources at $z\gtrsim6.5$. Perhaps the most striking property from the SED fitting is the stellar mass -- we infer $\log({M_{\rm{\star}}/M_{\rm{\odot}}})\approx9.6$ in stars to have formed within a mere $\sim220$ Myrs of the Big Bang. We caution that estimates of the stellar mass from rest-UV photometry alone come with significant systematic uncertainties since the light is dominated by young stars.
To understand the range of allowed stellar masses we consider the following changes to our fiducial fitting setup -- we assume a ``bursty" prior for the star-formation history that allows for rapid fluctuations \citep[][]{Tacchella22}; we consider three different initialization points for the star-formation history ($z=18, 20, 30$); we consider a Kroupa IMF in addition to the fiducial \citet[][]{Chabrier03} IMF. Across all these various parameter choices, we recover stellar masses of $\approx10^{9.4}-10^{9.8} M_{\rm{\odot}}$ with $\lesssim$0.3 dex uncertainties on each individual fit. As a limiting case we also model the galaxy as a single stellar population set to an age $<200$ Myr (the age of the Universe at $z=16.6$ is 230 Myrs), which yields a stellar mass of $\log(M_{\rm{\star}}/M_{\rm{\odot}})\approx10.0^{+0.2}_{-0.2}$ and an age of $70^{+30}_{-10}$ Myrs. Based on these experiments, we conservatively adopt a mass of $\log(M_{\rm{\star}}/M_{\rm{\odot}})\approx9.6^{+0.5}_{-0.5}$ or $\approx 5\times10^{9} M_{\rm{\odot}}$ for this source for the discussion in the rest of the paper.
CEERS-1749 was also recently reported by \citet[][see also \citealt{Harikane22b}]{Donnan22}, in their search for $z>10$ candidates across the \textit{JWST} ERS data, as lying at $z=16.6$. They infer a slightly lower mass of $\log(M_{\rm{\star}}/M_{\rm{\odot}})=9.0\pm0.4$. On top of differences between inference frameworks -- \texttt{Bagpipes} vs. \texttt{Prospector}, and importantly their underlying stellar isochrones, \texttt{Parsec} vs. \texttt{MIST}, see e.g., \citealt{Whitler22} -- their reported fluxes are significantly lower than those reported here (by $\approx2\times$), which can likely be traced to aperture corrections. CEERS-1749 is an extended source, and the point-source correction applied for its flux in \citet[][]{Donnan22} may have been insufficient.
\subsubsection{CEERS-1749 at $z\sim5$}
We run fits for CEERS-1749 by fixing the redshift to the protocluster redshift ($z=4.9\pm0.1$) as described in the previous section. We first discuss the quiescent case followed by the low-mass starburst scenario.
The quiescent galaxy solution displays very modest ongoing star-formation -- averaged over the last 50 Myrs, its SFR is $0.1^{+3.6}_{-0.1} M_{\rm{\odot}}$/yr, and over the last 10 Myrs an even more humble $0.02^{+4.08}_{-0.02}\,M_{\rm{\odot}}$/yr. The constraint against recent star-formation is a direct result of having to produce a strong Balmer break to match the red F200W-F277W color. We estimate its stellar mass to be $\approx5\times10^{9} M_{\rm{\odot}}$. No quiescent galaxy with such a low stellar mass has yet been spectroscopically confirmed at $z\gtrsim2$, largely due to the impracticality of continuum spectroscopy at such luminosities. The low-mass quenched population certainly exists, as evidenced by \citet[][]{Marchesini22}'s recent confirmation of a $\approx8\times10^{9}M_{\rm{\odot}}$ quiescent system at $z\approx2.4$ via \textit{JWST}/NIRISS spectroscopy. Other candidate quiescent systems at $z>2$ of even lower mass have been observed as part of the same program, and may be confirmed as NIRISS calibrations improve.
The dusty star-forming galaxy scenario features a system that has formed $\approx15\%$ of its total stellar mass in only the last $<10$ Myrs. Its young stars are producing intense nebular emission that strongly boosts the broadband photometry and comfortably accounts for the F200W-F277W color. Such extreme emission has been inferred to be routine \citep[e.g.,][]{debarros19}, and is now beginning to be directly observed during these epochs \citep[e.g.,][]{Schaerer22}. The high attenuation ($A_{\rm{5500}}\approx1.2$ mag), on the other hand, may seem unexpected for a galaxy of such low stellar mass based on $<2\mu$m-selected samples \citep[e.g.,][]{Cullen18}. However, as fainter members of the ``\textit{HST}-dark" population (galaxies detected purely at $>2\mu$m) come into view \citep[e.g.,][]{Barrufet22,Nelson22,Fudamoto22}, galaxies like CEERS-1749 may be found to be common. In fact, the colors of the overdensity suggest it may be the perfect place to prospect for the low-mass end of this population. The dust-curve we infer is slightly steeper than that of the Small Magellanic Cloud ($A_{\rm1500}/A_{\rm{5500}}\approx6$) -- such steep curves are often observed in the local Universe \citep[][]{Salim20}, but typically in lower $A_{\rm{5500}}$ systems \citep[][]{Chevallard13}.
We end this section by observing that broadly, the properties of CEERS-1749 are physically plausible, but fall firmly in observational regimes that are only beginning to come into view with \textit{JWST} (e.g., low-mass quiescent, low-mass \textit{HST}-dark). Their novelty represents an exciting opportunity for surprises, as well as a potential challenge for $z>10$ searches.
\begin{deluxetable}{lrrr}
\label{table:properties}
\tabletypesize{\footnotesize}
\tablecaption{Summary of properties for the high- and lower-redshift solutions.}
\tablehead{
\colhead{CEERS-1749\tablenotemark{$\dagger$} at} & \colhead{$z\sim17$} & \colhead{$z\sim5$} & \colhead{$z\sim5$}\\
\colhead{} & \colhead{} & \colhead{(quiescent)} & \colhead{(starburst)}
}
\startdata
\vspace{-0.2cm} \\
R.A. & \multicolumn{3}{c}{14:19:39.48} \\
Dec. & \multicolumn{3}{c}{+52:56:34.95} \\
Best-fit Redshift $z_{\rm{Prospector}}$ & $16.0$ & $4.8$ & $4.9$\\
Redshift $z_{\rm{Prospector}}$ & $16.0^{+0.6}_{-0.6}$ & $4.8^{+0.1}_{-0.1}$ & $4.87^{+0.00}_{-0.02}$\\
Best-fit Redshift $z_{\rm{EAZY}}$ & $16.6$ & $4.6$ & --\\
Redshift $z_{\rm{EAZY}}$ & $16.3^{+0.4}_{-1.1}$ & $4.4^{+0.6}_{-1.0}$ & --\\
UV Luminosity ($M_{\rm{UV}}$) & $-22.0^{+0.1}_{-0.1}$ & $-15.6^{+1.1}_{-0.7}$ & $-14.3^{+0.2}_{-1.0}$ \\
UV Slope ($\beta$; $f_{\rm{\lambda}}\propto \lambda^{\beta}$) & $-2.3^{+0.1}_{-0.1}$ & $1.1^{+1.1}_{-1.2}$ & $3.0^{+0.3}_{-0.5}$\\
Stellar Mass $\log$($M_{\rm{\star}}/M_{\rm{\odot}}$) & $9.6^{+0.2}_{-0.2}$ & $9.6^{+0.2}_{-0.5}$ & $8.7^{+0.1}_{-0.1}$\\
Age ($t_{\rm{50}}$/Myr) & $54^{+27}_{-27}$ & $564^{+105}_{-492}$ & $580^{+17}_{-236}$\\
SFR$_{\rm{50\ Myr}}$ ($M_{\rm{\odot}}$/yr) & $34^{+17}_{-12}$ & $0.1^{+3.6}_{-0.1}$ & $1.7^{+1.8}_{-0.2}$\\
$r_{\mathrm{eff}}$, $F$444$W$ [kpc] & 0.4 & \multicolumn{2}{c}{0.9}\\
$r_{\mathrm{eff}}$, $F$200$W$ [kpc] & 0.2, 0.3 & \multicolumn{2}{c}{0.5, 0.7}\\
\enddata
\tablenotetext{\dagger}{This is the same source as CEERS-93316 in \citet{Donnan22} and CR2-z17-1 in \citet[][]{Harikane22b}.}
\tablecomments{Quantities derived via SED fitting assume a continuity prior (\citealt{Leja19}) on the star-formation history and a \citet[][]{Chabrier03} IMF. Variations on this fiducial assumption, with a focus on the recovered stellar mass, are tested in detail in \S\ref{sec:physical}. The two effective radii quoted in $F$200$W$ are for the two component fit in Fig. \ref{fig:sizefits}.}
\end{deluxetable}
\subsection{Clues from Morphology}
\label{sec:sizefits}
In order to characterize the morphology of the galaxy while accounting for the effect of the PSF, we fit two-dimensional \sersic profiles to the candidate in the F200W, F277W, and F444W imaging using \texttt{GALFIT} \citep{Peng10Galfit}. We create 150-pixel cutouts around the galaxy, then use \texttt{photutils} and \texttt{astropy} to create a segmentation map to identify nearby galaxies using the F444W image. We mask all nearby galaxies using the resultant segmentation maps and fit only CEERS-1749. In all our fits, we measure a scalar sky background estimated from the median of the sky pixels identified in the segmentation map and fix the flux of the sky to this value in \texttt{GALFIT}. We use a theoretical PSF model generated from WebbPSF at our 0\farcs04 pixel scale; we oversample the PSF by a factor of 9 in order to minimize artifacts as we rotate the PSF to the CEERS observation angle calculated from the APT file, then convolve with a $9\times9$ pixel square kernel and downsample to the mosaic resolution. In all fits, we constrain \sersic index $n$ to be between 0.01 and 8, the magnitude to be between 0 and 45 mag, and the half-light radius $r_e$ to be between 0.3 and 200 pixels (0\farcs012--8\farcs0).
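The PSF preparation steps above (oversample, rotate, convolve with a square pixel kernel, downsample) can be sketched as follows; the array sizes and the pixel-center sampling convention are our assumptions for illustration, not the exact pipeline code:

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import convolve2d

def rotate_and_bin_psf(psf_oversampled, angle_deg, factor=9):
    """Rotate an oversampled PSF to the observation angle, convolve
    with a factor x factor square pixel kernel, and sample down to
    the detector pixel scale (mirroring the procedure in the text)."""
    # Rotate about the array center; oversampling suppresses
    # interpolation artifacts from the rotation.
    rotated = rotate(psf_oversampled, angle_deg, reshape=False, order=3)
    # Square top-hat kernel: the footprint of one detector pixel.
    kernel = np.ones((factor, factor)) / factor**2
    smoothed = convolve2d(rotated, kernel, mode="same")
    # Sample every `factor`-th pixel, starting at a pixel center.
    binned = smoothed[factor // 2::factor, factor // 2::factor]
    return binned / binned.sum()  # renormalize to unit total flux
```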
The results of these fits are shown in Figure \ref{fig:sizefits}. In all 3 bands, the galaxy is well described by a disky ($n\sim1$) \sersic profile. However, in the F200W image and resultant residuals, there is a clear indication of non-\sersic flux to the southeast of the main galaxy, illustrating that the galaxy may be a clumpy disk or a merging pair. This same residual structure is marginally visible in the F277W image, but in F444W, the galaxy profile is consistent with being completely smooth.
Assuming that CEERS-1749 is at $z\approx17$, its morphology is consistent with bright galaxies catalogued across the Epoch of Reionization, which often break up into clumpy substructure in their rest-UV \citep[e.g.,][]{Bowler17, Matthee19}. The size in $F$444$W$ (0.4 kpc) as well as that of the individual components in $F$277$W$ (0.2 and 0.3 kpc) is quite compact, similar to that measured recently in the CEERS field for the most massive star-forming galaxies at $z\approx7-10$ \citep[][]{Labbe22}, hinting at an evolutionary link between the most luminous $z>10$ systems and the most massive galaxies at lower redshifts.
For the case of CEERS-1749 at $z\approx5$, the nearby clump in the images appears across a wide-range of rest-optical wavelengths ($\approx3000-7500$\AA). Clumpy star-formation occurring in the same disk is a possibility. However, these clumps are typically evident in the UV, with optical profiles behaving more smoothly.
We might perhaps be witnessing a merger. If CEERS-1749 lives in a dense protocluster environment at $z\approx5$, mergers are quite likely. Mergers are a key channel for setting off the kind of vigorous starbursts required for the dusty starburst scenario \citep[e.g.,][]{Mihos96}. Further, several quenching mechanisms, particularly at $z\gtrsim2$, invoke mergers \citep[e.g.,][]{Man18} -- e.g., an AGN is triggered by fresh inflows of gas to the center of the galaxy, and the subsequent feedback from the freshly fed AGN quenches star-formation. It is interesting to note that the merging clump in the images appears only in the bluer bands, and is not apparent in F444W. In the $z\approx17$ scenario, strong wavelength-dependent morphology across such a narrow rest-$1600$\AA\, to rest-$2150$\AA\, range is surprising, but is perhaps more easily explained by a blue galaxy merging with a quiescent/dusty one at $z\approx5$.
Another possibility is that CEERS-1749 may be in the process of tidal disruption by its massive neighbor if they lie at the same redshift. The galaxy lies well within the expected virial radius of its neighboring $M_{\rm{\star}}\approx10^{11} M_{\rm{\odot}}$ system that is only $<15$ kpc away. The faint substructure in the images may be stripped tidal debris trailing the galaxy. Starbursts often occur in the disrupting galaxy \citep[e.g.,][]{dicintio21}, following which it loses all its gas and is quenched \citep[e.g.,][]{Teppergarcia18}. A circular polar orbit that decays gradually may allow for a significant period between infall and tidal dissolution that may allow us to observe the galaxy in this transient state (see e.g., the gallery of mergers in \citealt{Naidu21}).
\section{Discussion}
\label{sec:discussion}
\subsection{Implications of the $z\approx17$ scenario}
\label{sec:discussionz17}
\label{sec:lcdm}
If CEERS-1749 is confirmed to lie at $z\approx17$, it would force a major revision of early galaxy evolution models, and potentially even our underlying cosmological framework \citep[see, e.g.,][]{Steinhardt16,Mason22,MBK22}. It is very challenging to produce such extraordinarily luminous and massive galaxies only $\sim$200 Myrs after the Big Bang under standard assumptions in the framework of $\Lambda$CDM cosmology (see also the candidates reported in \citealt{Atek22,Yan22}).
This situation is demonstrated in Figure \ref{fig:UVLF_lambdacdm}. First, we consider the number density implied by an $M_{\rm{UV}}\approx-22$ source at $z\approx16.5$ found in a search area of a mere 40 arcmin$^{2}$ and $\Delta z=1$. Every theoretical UVLF and empirical extrapolation falls short of this implied number density by one to several orders of magnitude \citep[][]{Behroozi19,Bowler20,Naidu22,Dayal22}. Even more strikingly, the only way to match such a high number density is by coupling dark matter halo mass functions to a $100\%$ instantaneous star-formation efficiency \citep[see also][]{Mason22}. That is, every baryon allocated to a dark matter halo in accordance with the cosmic baryon fraction is converted into stars immediately, assuming a \citet[][]{Salpeter55} IMF between $0.1-100\,M_{\rm{\odot}}$. For context, the typical efficiency inferred from a variety of arguments at $z\approx6-10$ is $<10\%$ \citep[e.g.,][]{Tacchella18,Stefanon21mass}.
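For concreteness, the comoving volume underlying this implied number density can be estimated with a short calculation; this is a sketch assuming a flat $\Lambda$CDM cosmology with illustrative parameters ($H_0=70$ km/s/Mpc, $\Omega_m=0.3$), not the exact values used for the figure:

```python
import numpy as np
from scipy.integrate import quad

# Flat LambdaCDM; parameter values are illustrative assumptions.
H0 = 70.0        # km/s/Mpc
Om = 0.3
c = 299792.458   # km/s

def E(z):
    return np.sqrt(Om * (1 + z)**3 + (1 - Om))

def comoving_distance(z):
    """Line-of-sight comoving distance in Mpc."""
    return (c / H0) * quad(lambda zp: 1.0 / E(zp), 0, z)[0]

def survey_volume(z1, z2, area_arcmin2):
    """Comoving volume (Mpc^3) of a pencil-beam survey of given area."""
    omega = area_arcmin2 * (np.pi / (180.0 * 60.0))**2   # steradians
    d1, d2 = comoving_distance(z1), comoving_distance(z2)
    return omega / 3.0 * (d2**3 - d1**3)

# One source at z ~ 16.5 +/- 0.5 in 40 arcmin^2:
V = survey_volume(16.0, 17.0, 40.0)
print(f"V = {V:.3e} Mpc^3 -> n ~ {1.0/V:.1e} Mpc^-3")
```

The resulting volume is of order a few $\times10^{4}$ Mpc$^{3}$, so a single source implies a number density of order $10^{-5}$ Mpc$^{-3}$.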
The right panel of Figure \ref{fig:UVLF_lambdacdm} illustrates this extraordinary situation in terms of a stellar mass threshold implied by halo mass functions under a similar assumption of a $100\%$ star-formation efficiency calculated in \citet[][]{Behroozi18}. CEERS-1749 (and its implied number density) places it in a region of the diagram that is in tension with $\Lambda$CDM cosmology.
Numerous caveats underlie these comparisons, and there are several possible solutions to this apparent tension (CEERS-1749 at $z\sim5$ being the most likely one). Here we detail several other possibilities.
Lensing is expected to have a substantial effect on the bright end of the UVLF at high redshifts given the increasing optical depth. This is particularly pertinent for CEERS-1749 given that it sits $<2\arcsec$ from a $z\approx5$ $M_{\rm{\star}}\approx10^{11} M_{\rm{\odot}}$ galaxy and that the sightline includes a protocluster. However, lensing corrections even in the most overdense regions tend to be $<2\times$ \citep[e.g.,][]{Mason15lensing}, which does little to alleviate the situation in Figure \ref{fig:UVLF_lambdacdm}.
Another relevant class of ideas revises the relationship between light and mass. For instance, modifying the IMF to be extremely top-heavy produces much higher UV luminosities for a given stellar mass (up to $\approx10\times$ higher compared to our assumptions of a ``normal" IMF, e.g., \citealt{Fardal07}). Pop III stars and binary stars occurring at low metallicities similarly produce different translations between light and mass. And finally, a possibility that cannot be ignored is that some fraction of the luminosity of CEERS-1749 may not be of stellar origin at all, but could arise from accretion onto early black holes \citep[e.g.,][]{Pacucci22}.
\subsection{Implications of the $z\approx5$ scenario}
\label{sec:discussionz5}
We emphasize that the redshift solution for CEERS-1749 across multiple studies, which use diverse data reduction choices and $z>10$ selection techniques, seems unambiguous: $z\approx17$, with $p(z>10)>99.9\%$, and little room permitted for any other possibility \citep[]{Naidu22, Donnan22, Harikane22b}. There is no hint of a $z\approx5$ solution that may be upweighted into relevance by e.g., a luminosity prior. If not for the conservative error floor on the photometry adopted here, and the fortuitous environmental evidence, there would be little reason to place this source at $z\approx5$ \citep[but see also][]{Zavala22}.
The difficulty of identifying the $z\approx5$ solution for CEERS-1749 could be construed to imply that some fraction of the seemingly secure $z>10$ candidates may be interlopers of the kind discussed in this work. Galaxies with relatively weaker breaks in their SED are the most vulnerable -- e.g., the dusty starburst scenario could account for both their break and the slope of their longer wavelength photometry. Such interlopers may help resolve the tension described in the prior section. At slightly lower redshifts ($z\approx6-10$) the occurrence of the strongest rest-optical lines as well as the presence of both Balmer and Lyman breaks in the NIRCam coverage provide additional safeguards \citep[e.g.,][]{Labbe22}. Further, MIRI photometry (e.g., see how the dusty galaxy stands out in Table \ref{table:miri}), an additional medium band (for example, in the JADES GTO program filter-set, \citealt{Rieke20JADES}), or any spectroscopy would comfortably protect against such interlopers.
We also emphasize that we are dealing with an extraordinary situation given the foreground protocluster. The redshift range in which strong emission lines in a dusty system perfectly conspire to mimic a Lyman break \textit{as well as} the blue UV-slope of a $z>10$ galaxy is very narrow. For instance, in our dusty starburst scenario the full redshift posterior collapses to an extremely tight $z=4.87^{+0.00}_{-0.02}$. The comoving volume available to such foreground contaminants in so narrow a redshift range is small, so the situation is far less dire for $z>10$ searches than it may seem at first glance, particularly when a medium-band is included in the filter-set. We illustrate this situation in Figure \ref{fig:conspiracy}.
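The narrowness of the conspiracy window can be seen with a quick check of where the strong rest-optical lines land at the posterior redshift; the NIRCam filter ranges below are approximate half-power wavelengths and should be treated as illustrative assumptions:

```python
# Rest wavelengths (Angstrom) of the strong rest-optical lines and
# approximate NIRCam filter half-power ranges (microns); both are
# standard values, quoted here approximately for illustration.
lines = {"Hbeta": 4861, "OIII5007": 5007, "Halpha": 6563}
filters = {"F277W": (2.4, 3.1), "F356W": (3.1, 4.0),
           "F410M": (3.9, 4.3), "F444W": (3.9, 5.0)}

z = 4.87
for name, lam_rest in lines.items():
    lam_obs = lam_rest * (1 + z) / 1e4   # observed wavelength, microns
    hosts = [f for f, (lo, hi) in filters.items() if lo <= lam_obs <= hi]
    print(f"{name}: {lam_obs:.2f} um -> {', '.join(hosts) or 'no band'}")
```

Shifting $z$ by even a few percent moves these lines across filter edges, breaking the conspiracy.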
If this source turns out to be a quiescent galaxy, it would be the highest redshift quiescent system known, with the current most distant confirmed at $z\approx4$ \citep[][]{Valentino20}. Finding quenched galaxies at such early times places stringent constraints on galaxy evolution models and the physics of feedback. Since the galaxy has a relatively low stellar mass, it is expected to be undergoing ``environmental quenching" \citep[e.g.,][]{Peng10}, which is consistent with its location in an overdensity. However, quenching at such high redshift, only $\approx1$ Gyr after the Big Bang, challenges many theoretical scenarios that have been developed at lower redshifts, including the general result that satellites experience ``delayed-then-rapid" quenching \citep[e.g.,][]{Wetzel13,Fillingham19,Naidu22MZR}, which starts several Gyrs after infall.
On the other hand, confirmation of the dusty starburst scenario would extend the \textit{HST}-dark population to lower masses, raising intriguing questions about how such low-mass galaxies got so dusty so fast in only the first billion years of the Universe \citep[e.g.,][]{Ferrara16,Popping17,Lesniewska19, Dayal22}. Further, protoclusters at high redshift are expected to be among the first sites of star-formation and reionization in the Universe -- the dozens of potentially associated neighbors around CEERS-1749 are therefore exciting targets that make multi-object spectroscopic follow-up an even more compelling proposition.
\section{Summary \& Outlook}
\label{sec:summary}
Within the first few weeks of \textit{JWST}'s initial data release, the facility has already delivered a major expansion of our cosmic frontier, with dozens of $z>10$ candidates being reported. Several of these sources are unexpectedly luminous ($M_{\rm{UV}}\lesssim-21$), and are far more common than state-of-the-art projections predict for the \textit{full} Cycle 1 yield across all scheduled programs. In this paper we present potentially the most extreme of these systems: CEERS-1749. We discuss two redshift solutions for the source, each of which has wide-ranging implications.
\begin{itemize}
\item Across a variety of SED-fitting choices, we find $z\approx17$ to be the most likely redshift with no lower-$z$ solutions found. Other independent analyses and state-of-the-art techniques find a similarly confident, unambiguous solution \citep[e.g.,][]{Donnan22,Harikane22b}. Only a conservative $\approx20\%$ error-floor on all photometry to account for systematic uncertainties shows hints of a $z\approx5$ solution. [Figure \ref{fig:summaryCEERSz17}]
\item If the galaxy is at $z\approx17$, then it has physical properties (e.g., $\beta_{\rm{UV}}$, SFR) and a morphology (clumpy in the rest-UV) expected of $z>10$ galaxies. However, most strikingly, its stellar mass ($\approx5\times10^{9} M_{\rm{\odot}}$) and UV luminosity ($M_{\rm{UV}}\approx-22$) are unexpected for a system lying a mere $\sim220$ Myrs from the Big Bang. [Table \ref{table:properties}, Figure \ref{fig:UVLF_lambdacdm}]
\item We show that the SED of the galaxy can be explained by a $\approx10^{9}-10^{10} M_{\rm{\odot}}$ quiescent galaxy with line emission arising from ionized gas, or a $\approx5\times10^{8} M_{\rm{\odot}}$ dusty starburst whose nebular emission lines boost the photometry and conspire to produce an apparently blue slope in the $>2\mu$m photometry. The morphology of the source, which shows hints of a merger and/or tidal disturbance, supports both these scenarios. [\S\ref{sec:whatisz5}, Figure \ref{fig:altSEDs}, \ref{fig:sizefits}]
\item The $z\approx5$ scenario has strong environmental support. The three nearest neighbors of the galaxy lie at precisely the redshift required by the low-$z$ solution, i.e., $z\approx5$. Further, there is a hint of a substantial $z\approx5$ overdensity in the CEERS field -- $\approx25$ sources $<5'$ from the candidate have photometric redshifts of $z\approx5$, some of them displaying mature stellar populations expected of an early-forming protocluster. [\S\ref{sec:lowz}, Figure \ref{fig:z5solution}]
\item These results suggest that at certain specific redshifts, $z>10$ candidates from \textit{JWST} may contain a class of lower-$z$ interlopers. If not for the various lines of circumstantial evidence, there would have been little reason to doubt the $z\approx17$ solution. However, we stress that the perfect storm of parameters required to mimic both the break and continuum of a $z>10$ candidate is possible only in a very narrow redshift range, especially when medium-bands are employed, implying that such interlopers may not be a major concern for $z>10$ searches. [\S\ref{sec:discussionz5}, Fig. \ref{fig:conspiracy}]
\end{itemize}
Spectroscopic follow-up of this remarkable galaxy is of critical urgency to \textit{JWST}'s mission of expanding the cosmic frontier. A $z\approx5$ solution might provide new insights into the physics of quenching and dust production, as well as an important class of interloper galaxies to strengthen $z>10$ searches. On the other hand, if this source does lie at $z\approx17$, we may embark on the grand enterprise of revising the physics of galaxy evolution at the earliest epochs.
\facilities{\textit{JWST}, \textit{HST}}
\software{
\package{IPython} \citep{ipython},
\package{matplotlib} \citep{matplotlib},
\package{numpy} \citep{numpy},
\package{scipy} \citep{scipy},
\package{jupyter} \citep{jupyter},
\package{Astropy}
\citep{astropy1, astropy2},
\package{grizli}
\citep{grizli}
}
\acknowledgments{
We are grateful to the CEERS team for planning these early release observations and speedily making resources available to the community that have made this work possible.
RPN acknowledges funding from \textit{JWST} programs GO-1933 and GO-2279. We acknowledge support from: the Swiss National Science Foundation through project grant 200020\_207349 (PAO, LB, AW).
The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No.\ 140.
RJB and MS acknowledge support from NWO grant TOP1.16.057. S. Bose is supported by the UK Research and Innovation (UKRI) Future Leaders Fellowship [grant number MR/V023381/1]. MS acknowledges support from the CIDEGENT/2021/059 grant, and from project PID2019-109592GB-I00/AEI/10.13039/501100011033 from the Spanish Ministerio de Ciencia e Innovaci\'on - Agencia Estatal de Investigaci\'on. S. Belli is supported by the Italian Ministry for Universities and Research through the \emph{Rita Levi Montalcini} program. K.E.H. acknowledges support from the Carlsberg Foundation Reintegration Fellowship Grant CF21-0103. PD acknowledges support from the NWO grant 016.VIDI.189.162 (``ODIN") and from the European Commission's and University of Groningen's CO-FUND Rosalind Franklin program.
Cloud-based data processing and file storage for this work is provided by the AWS Cloud Credits for Research program.
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for \textit{JWST}. These observations are associated with programs \# 1324 and \# 1345.
}
\bibliography{MasterBiblio}
\bibliographystyle{apj}
\end{CJK*} |
Title:
Understanding the secular evolution of NGC 628 using UVIT |
Abstract: Secular and environmental effects play a significant role in regulating the
star formation rate and hence the evolution of the galaxies. Since UV flux is a
direct tracer of the star formation in galaxies, the UltraViolet Imaging
Telescope (UVIT) onboard ASTROSAT enables us to characterize the star forming
regions in a galaxy with its remarkable spatial resolution. In this study, we
focus on the secular evolution of NGC 628, a spiral galaxy in the local
universe. We exploit the resolution of UVIT to resolve up to $\sim$ 63 pc in
NGC 628 for identification and characterization of the star forming regions. We
identify 300 star forming regions in the UVIT FUV image of NGC 628 using
ProFound and the identified regions are characterized using Starburst99 models.
The age and mass distribution of the star forming regions across the galaxy
supports the inside-out growth of the disk. We find that there is no
significant difference in the star formation properties between the two arms of
NGC 628. We also quantify the azimuthal offset of the star forming regions of
different ages. Since we do not find an age gradient, we suggest that
spiral density waves may not be the formation scenario of the spiral
arms of NGC 628. The headlight cloud present in the disk of the galaxy is found
to have the highest star formation rate density ($0.23 M_{\odot} yr^{-1}
kpc^{-2}$) compared to other star forming regions on the spiral arms and in the rest
of the galaxy.
| https://export.arxiv.org/pdf/2208.05999 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
galaxies: spiral -- star formation -- evolution
\end{keywords}
\section{Introduction} %
\label{sect:intro}
Star formation in a galaxy is one of the best probes of its evolution. In the nearby Universe, the evolution of galaxies is dominated by secular processes, whereas violent processes are less common \citep{Kormendy2004}. Secular evolution results from the slow rearrangement of energy and mass. The presence of bars, oval disks, and spiral structures, as well as interactions occurring within the galaxy, contributes to secular evolution \citep{Kormendy2013}. Instabilities arising from the presence of spiral arms play an important role in the secular evolution of a galaxy. Heating and radial migration of stars are two secular processes attributed to spiral arms \citep{Lynden-Bell1972, Sellwood2011, Bautista2021}. Recent studies also suggest that the arms can drive gas inflow, adding to the secular evolution of the galaxy \citep{Kim2014, Baba2016}. It is worth noting that spiral galaxies make up most of the highly star-forming galaxies in the blue cloud region of an optical color-magnitude diagram \citep{Baldry2004}. Since the star formation properties of galaxies can be affected by the spiral structure, studying them will improve our understanding of galaxy evolution.
The formation of spiral structures in disk galaxies remains an open question. Spiral density wave theory and swing amplification are the most widely accepted scenarios for spiral arm formation. According to spiral density wave theory, static density waves constitute long-lived, stationary spiral arms \citep{Lin&shu1964}. On the other hand, swing amplification theory suggests that local amplification in a differentially rotating disk produces the spiral structure of a galaxy \citep{Goldreich1965, Julian1966, Elmegreen2011, Donghia2013}; it considers spiral arms to be transient features.
Recent studies also explore whether tidal interactions play a role in inducing spiral features by generating localised disturbances that are then enhanced by swing amplification \citep{Kormendy1979, Bottema2003, Pettitt2016}. Bar-induced spiral structure \citep{Contopoulos1980} and manifold-driven spiral features \citep{Contopoulos1980, Athanassoula1992} have also been considered as formation scenarios. The longevity of the spiral structure is the observational parameter that allows us to discriminate among these formation scenarios.
In the spiral density wave picture, the angular speed of stars and gas equals the pattern speed of the spiral features at the corotation radius; inside the corotation radius the material rotates faster than the pattern, and outside it the material rotates slower. Gas entering the dense regions of the spiral arms may experience a shock that triggers star formation \citep{Roberts1969}. As they age, stars move away from the spiral arms, producing an age gradient across the arms. If we assume a constant angular speed for the spiral arms, comparatively younger star clusters will be located near the arm, whereas older star clusters will be found farther from it. Hence the distribution of single stellar population equivalent ages helps us to constrain the possible formation scenario of the spiral arms.
NGC 628 is an Sc-type galaxy at redshift $z \approx 0.00219$, at an estimated distance of 9.6 Mpc \citep{Kreckel2018} and with an inclination of $9^{\circ}$. It is the most prominent member of a small group of galaxies centered on NGC 628 and the peculiar spiral NGC 660. Two well-defined spiral arms observed in optical and UV images make NGC 628 a typical example of a grand design spiral. The galaxy properties are listed in Table \ref{tab:Table1}. NGC 628 has not undergone any interactions in the past 1 Gyr \citep{Kamphuis1992}, which makes it a good candidate for studying the effects of secular evolution. The richness of ancillary data (the Spitzer Infrared Nearby Galaxies Survey (SINGS); \citet{Kennicutt2003}, the GALEX Nearby Galaxy Survey (NGS); \citet{Gildepaz2007}, the Legacy Extragalactic UV Survey (LEGUS); \citet{Calzetti2015}, Physics at High Angular Resolution in Nearby Galaxies (PHANGS); \citet{Leroy2021}, etc.) makes NGC 628 a useful target for a wide range of research topics, including galaxy evolution, stellar populations \citep[e.g.,][]{Cornett1994, shabani2018}, and star formation \citep[e.g.,][]{Elmegreen1983, Gusev2014}.
In this study, we identify and characterize the properties of the star-forming regions in NGC 628 using FUV and NUV broadband observations carried out with UVIT. Our primary goal is to understand the recent star formation activity in this grand design spiral, to better understand star formation in the spiral arms, and hence to connect it with the possible spiral arm formation scenario. \citet{Gusev2013} studied the photometric properties of the spiral arms and star clusters of NGC 628 using GALEX images. They classified the arms of NGC 628 into a longer and a shorter arm based on their extent, and considered the shorter arm to be distorted. One of the significant conclusions of their study was that the longer arm of NGC 628 hosts a regular chain of star forming complexes, whereas the shorter arm does not. In a follow-up study, \citet{Gusev2014} suggested that the longer arm hosts slightly younger star forming regions than the shorter arm. \citet{shabani2018} studied the properties of the spiral arms of NGC 628 using the stellar cluster catalog from the Legacy Extragalactic UV Survey (LEGUS) program and suggested that no age gradient exists across the arms. The advantage of our study is the better resolution and larger field of view provided by the UVIT images: \citet{Gusev2013} used GALEX images, and the resolution of UVIT allows us to revisit the UV properties of the galaxy in greater detail, while, due to field of view limitations, \citet{shabani2018} could not consider the spiral arms in their entirety. \citet{Jyoti2021} studied the star forming complexes in three nearby galaxies, including NGC 628, using UVIT images, focusing mainly on their properties inside and outside the optical radius ($R_{25}$).
In the present study, emphasis is placed on the star formation properties of the spiral arms of NGC 628, in order to understand the mechanism behind spiral arm formation.
UVIT opens a window of opportunity to better understand the star formation properties of NGC 628 with its $28^{\prime}$ field of view, which covers the galaxy well beyond the optical radius at a remarkable resolution of $1.4^{\prime\prime}$ \citep{GeorgeJelly2018,Mondal2018, Mondal2021}. The age and mass of the star forming regions are estimated using simple stellar population (SSP) models. We also explore the UV features of the headlight cloud in NGC 628 using UVIT data. The data and analysis are explained in Section \ref{sect:Data&analysis}, followed by the theoretical models in Section \ref{sect:models} and the results in Section \ref{sect:results}. A discussion and a summary are presented in Sections \ref{sect:Discussion} and \ref{sect:summary}, respectively. A flat Universe cosmology is adopted throughout this paper with $H_{0} = 71~\mathrm{km\,s^{-1}\,Mpc^{-1}}$ and $\Omega_{M} = 0.27$ \citep{Komatsu2011}. In the galaxy rest-frame, $1^{\prime\prime}$ corresponds to a distance of 44.7 pc.
\begin{table}
\centering
\caption{Basic parameters of NGC 628}
\label{tab:Table1}
\begin{tabular}{lll} %
\hline
Parameter & Value & Reference \\
\hline
Type & SA(s)c & 1 \\
RA (hh mm ss) & 01 36 41.747 & 2 \\
Dec (hh mm ss) & +15 47 01.18& 2\\
Distance & 9.6 Mpc & 3 \\
Inclination (i) & $9^{\circ} $ & 4\\
Position angle (PA) & $25^{\circ}$ & 5\\
\hline
\end{tabular}
\footnotesize{$^{1}$\citet{devac1991},$^{2}$\citet{Evans2010},$^{3}$\citet{Kreckel2017},$^{4}$\citet{Blanc2013},$^{5}$\citet{sakhibov2004}}
\end{table}
\section{Data and Analysis}
\label{sect:Data&analysis}
To understand the mechanisms affecting star formation in NGC 628, and hence its secular evolution, we use UVIT data obtained from the ASTROSAT ISSDC archive. UVIT, with a $28^{\prime}$ field of view, observes simultaneously in the FUV (130-180 nm), NUV (200-300 nm), and VISible (320-550 nm) bands. The FUV and NUV channels are used for science observations, whereas the primary objective of the VIS channel is to aid drift correction. Compared to its predecessor GALEX, which has a spatial resolution of $\sim 5^{\prime\prime}$, UVIT provides a better resolution of $\sim 1.2^{\prime\prime}$ and $1.4^{\prime\prime}$ in the NUV and FUV filters, respectively. Even though the UVIT plate scale at the detector plane is $3.33^{\prime\prime}$/pixel, an onboard algorithm localizes the position of each photon to $1/8$ of a pixel element. We also made use of the B-band image obtained with the 0.9 m telescope at the Kitt Peak National Observatory (KPNO) for this study (observers: van Zee, Dowell).
UVIT observed NGC 628 (PI: ASK Pati, observation id: G06\_151, date of observation: 29-Nov-2016) in the FUV and NUV channels. Each channel of UVIT consists of narrowband as well as broadband filters. In this study, we made use of the FUV F148W filter, with a peak wavelength of 1481 $\AA$, and the NUV N263M filter, with a peak wavelength of 2632 $\AA$ \citep{Tandon2020}, with exposure times of 1488.7 s and 2086.5 s, respectively. We employed the software package CCDLab \citep{Postma2017} to reduce the Level 1 data. Drift correction was performed using the NUV images, since drift correction using the VIS image resulted in fewer frames than the NUV-based correction. Making use of the calibration files provided by \citet{Girish2017} and \citet{Postma2011}, each image is flat-fielded, followed by distortion and fixed-pattern-noise corrections. Final images are produced by combining the corrected images in CCDLab, and the astrometric solutions are also obtained with the same software. For the FUV and NUV filters, the adopted zero point magnitudes are 18.097 and 18.146, respectively \citep[Table 3,][]{Tandon2020}. Figure \ref{fig:composite} shows the UVIT color composite image of NGC 628 generated from the UV emission in the F148W and N263M filters.
\section{Theoretical models}
\label{sect:models}
Understanding the properties of star forming regions in a galaxy, such as their age and mass, is key to obtaining a clear picture of the star formation process in the galaxy. In this study, we made use of the Starburst99 SSP model \citep{Leithere1999}
to characterize the young star forming regions of NGC 628. Starburst99 is a spectro-photometric SSP model that provides the spectra of young star clusters for a chosen set of parameters. The output spectra can be used to analyze the evolutionary stages of the star forming regions.
\citet{Mondal2018, Mondal2021} used these models extensively to estimate the masses of several compact star forming regions identified using UVIT images.
\begin{table}
\caption{Starburst99 model parameters}
\begin{tabular}{|c|c|}
\hline
Parameter & Value \\
\hline
Star Formation & Instantaneous \\
Stellar IMF & Kroupa (1.35, 2.35) \\
Stellar mass limit & 0.1, 0.5, 120 $M_{\odot}$ \\
Cluster mass range & $10^{3}M_{\odot}-10^{7}M_{\odot}$ \\
Stellar evolution track & Geneva (High mass loss) \\
Metallicity & Z= 0.02 \\
Age range & 1-900 Myr \\
\hline
\end{tabular}
\label{tab:criteria}
\end{table}
For this study, an instantaneous star formation law is used along with the Kroupa stellar initial mass function ($\alpha = 1.35$ and $2.35$) within a stellar mass range of 0.1-120 $M_{\odot}$. Assuming solar metallicity, spectra in the age range 1-900 Myr are generated. The parameters and input values we used are listed in Table \ref{tab:criteria}. The expected magnitudes of the resulting spectra in the F148W and N263M filters are estimated by convolving them with the corresponding UVIT filter effective area curves. This procedure is performed on the spectra obtained for five cluster masses ranging from $10^{3}\,M_{\odot}$ to $10^{7}\,M_{\odot}$. To estimate the ages and masses of the star forming regions in NGC 628, we used the diagram in Figure \ref{fig:sb99_mass}, generated from the Starburst99 models: the mass is estimated from the color-magnitude diagram in Figure \ref{fig:sb99_mass}, and the age from the color value.
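The synthetic-photometry step can be sketched as follows. This is a minimal Python illustration, not the actual Starburst99 pipeline: the top-hat band, the wavelength grid, and the flux values are invented; only the F148W zero point comes from the text.

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal integral (avoids NumPy version differences)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def synthetic_mag(wav, flux, filt_wav, filt_area, zero_point):
    """Effective-area-weighted mean flux through a filter, converted to a
    magnitude. Weighting convention and zero point are illustrative only."""
    area = np.interp(wav, filt_wav, filt_area, left=0.0, right=0.0)
    mean_flux = _trapz(flux * area * wav, wav) / _trapz(area * wav, wav)
    return zero_point - 2.5 * np.log10(mean_flux)

# A flat spectrum through a hypothetical top-hat "F148W-like" band:
# the weighted mean flux equals the constant flux level.
wav = np.linspace(1300.0, 1700.0, 401)    # wavelength grid, Angstrom
flux = np.full_like(wav, 2.0)             # arbitrary flux units
band_wav = np.array([1400.0, 1560.0])     # hypothetical band edges
band_area = np.array([1.0, 1.0])
mag = synthetic_mag(wav, flux, band_wav, band_area, zero_point=18.097)
```

In the actual analysis, the Starburst99 model spectra and the tabulated UVIT effective area curves \citep{Tandon2020} would replace these synthetic inputs.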
Extinction plays a significant role when studying UV images of galaxies, since the extinction coefficient is higher in the UV. To account for Galactic extinction, we used the \citet{Cardelli1989} extinction law with $R_{V} = A_{V}/E(B-V) = 3.1$. For the UVIT bands, the ratio $A(\lambda)/E(B-V)$ is 8.1 and 6.5 for F148W and N263M, respectively. To obtain the foreground Milky Way extinction toward NGC 628 in the UVIT filters, we used the $A_{V}$ value of 0.192 from \citet{Schlafly2011}. From this, $A_{F148W}$ and $A_{N263M}$ are found to be 0.5 mag and 0.4 mag, respectively. These extinction values are used to correct the UV magnitudes in both passbands.
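The quoted foreground extinction values follow directly from these numbers; a minimal check in Python, using only the coefficients given in the text:

```python
# Foreground Milky Way extinction in the UVIT bands, from the numbers
# quoted in the text (Cardelli et al. 1989 law; Schlafly & Finkbeiner 2011).
A_V = 0.192                          # toward NGC 628
R_V = 3.1
k = {"F148W": 8.1, "N263M": 6.5}     # A(lambda)/E(B-V) per filter

EBV = A_V / R_V                      # color excess E(B-V) ~ 0.062
A_F148W = k["F148W"] * EBV           # ~ 0.5 mag
A_N263M = k["N263M"] * EBV           # ~ 0.4 mag
```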
\section{Results}
\label{sect:results}
\subsection{Star forming regions in NGC 628}
\label{subsect:regions}
The star forming regions in a galaxy enable us to achieve a better understanding of its evolution. The properties of these regions, such as size, star formation rate (SFR), radial distance, and distribution along the morphological features, help us to disentangle the mechanisms driving the evolution of the galaxy.
Since massive young OBA stars emit predominantly in the FUV, it traces the young massive star forming regions in a galaxy. We made use of the ProFound package to identify the brightest regions in the UVIT FUV image. ProFound is an astronomical data processing tool available in the R programming language. It identifies the peak flux locations in the image and defines source segments by means of watershed deblending. The detected segments are then iteratively grown (dilated) to estimate the total photometry \citep{Robotham2018}. We used the subroutine of the same name in ProFound to identify the star forming regions in the F148W image. We required that an identified star forming region span a minimum of 6 pixels. This criterion accounts for the resolution of the FUV filter: 6 pixels is the minimum number needed to cover a circle whose diameter equals the FUV resolution. A {\it skycut} of 3 was applied in ProFound for the identification of star forming regions. Figure \ref{fig:profound} shows the star forming regions identified by ProFound.
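The detection cuts described above (a skycut of 3 and a 6-pixel minimum area) can be sketched as follows. This is a rough Python analogue for illustration only; the actual analysis uses ProFound's watershed deblending and dilation in R, which are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def detect_regions(image, skycut=3.0, min_pix=6):
    """Keep connected pixel groups above sky + skycut*sigma that span at
    least min_pix pixels (rough analogue of the cuts in the text)."""
    sky, sigma = np.median(image), np.std(image)
    mask = image > sky + skycut * sigma
    labels, n = ndimage.label(mask)                      # connected components
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pix]
    return labels, keep

# Two synthetic "sources" on a flat sky: only the blob spanning
# >= 6 pixels survives the minimum-area cut.
img = np.zeros((20, 20))
img[2:5, 2:5] = 10.0      # 9-pixel blob: kept
img[15, 15] = 10.0        # 1-pixel spike: rejected
labels, keep = detect_regions(img)
```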
In Figure \ref{fig:profound}, the contours of the regions are color-coded according to the flux contained in each region. ProFound identified 469 bright regions in the FUV image of NGC 628, and basic information for each identified region, such as position, magnitude, and extent, was obtained. After dilating to obtain a total flux measurement, ProFound reports the number of pixels contained in each identified segment; we used the total number of pixels to estimate the area of each star forming region. The obtained magnitudes were corrected for line-of-sight Milky Way extinction. Only star forming regions with FUV mag $<$ 21 were selected for further analysis, to exclude regions with larger photometric errors \citep{Mondal2021}. We obtained a final sample of 300 star forming regions in NGC 628.
Internal extinction due to the interstellar medium of a galaxy affects derived parameters such as the star formation rate density (SFRD), age, and mass of the star forming regions. \citet{Sanchez2011}, based on PPAK wide-field integral field spectroscopy, studied the stellar populations and line emission in NGC 628. They derived the dust extinction using the $H_{\alpha}/H_{\beta}$ line ratio, and reported no specific extinction trend along the spiral arms or with radius. Since our study is based on the UV regime, which is strongly affected by extinction, the variable internal extinction needs to be accounted for. The difference in field of view between the UVIT images and the IFS data used by \citet{Sanchez2011} prevents a homogeneous estimate of the internal extinction in NGC 628 from their data. Hence we use the Spitzer MIPS $24\,\mu$m image of NGC 628, obtained from the SINGS data archive \citep{Kennicutt2003}, to account for the internal extinction. Note that the resolution of the UVIT images is $\sim$4 times better than that of the MIPS $24\,\mu$m image; despite this mismatch, the MIPS $24\,\mu$m image is the best available data set for this purpose. The segmentation maps obtained for the FUV image are overlaid on the MIPS $24\,\mu$m image, and the internal-extinction-corrected magnitudes are estimated using the relations from \citet[][Table 2]{Kennicutt2012} given below:
\begin{equation}
L(FUV_{corr}) = L(FUV_{obs}) + 3.89 \times L(25 ~ \mu m)
\end{equation}
\begin{equation}
L(NUV_{corr}) = L(NUV_{obs}) + 2.26 \times L(25 ~ \mu m)
\end{equation}
Hereafter, we use the extinction corrected magnitudes for further analysis. The star formation rate in each region is estimated using the relation obtained from \citet{Karachentsev2013} and is given below.
\begin{equation}
log(SFR_{FUV}(M_{\odot}yr^{-1})) = 2.78-0.4mag_{FUV}+2log(D)
\end{equation}
where $mag_{FUV}$ denotes the background- and extinction-corrected magnitude, and $D$ is the distance to the galaxy in Mpc.
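Equation (3) can be applied directly; a minimal Python sketch, in which the magnitude value is hypothetical and the distance is that of NGC 628:

```python
import math

def sfr_fuv(mag_fuv, distance_mpc):
    """Star formation rate from the corrected FUV magnitude, following
    the Karachentsev & Kaisina calibration quoted in the text."""
    log_sfr = 2.78 - 0.4 * mag_fuv + 2.0 * math.log10(distance_mpc)
    return 10.0 ** log_sfr  # in M_sun / yr

# A hypothetical region with corrected FUV magnitude 17.0 at D = 9.6 Mpc
sfr = sfr_fuv(17.0, 9.6)
```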
Figure \ref{fig:SFRD} shows the distribution of the SFRD of the star forming regions with respect to radial distance from the center of the galaxy. It is evident that the regions with SFRD $> 0.05\,M_{\odot}\,yr^{-1}\,kpc^{-2}$ are situated at galactocentric distances of 3-10 kpc (the outer part of the galaxy). The headlight cloud, reported to be an extremely bright HII region in the context of the NGC 628 galactic environment \citep{Herrera2020}, exhibits the highest SFRD in NGC 628, $0.23\,M_{\odot}\,yr^{-1}\,kpc^{-2}$. A brief discussion of the headlight cloud is given in Section \ref{sec:Headlight}. To understand the effects of different galactic properties on the SFR and the propagation of star formation across the galaxy, we classified the regions according to their position in the galaxy. The propagation of star formation along the spiral arms is discussed in Sections \ref{subsect:Arms} and \ref{subsect:Azimuth}.
\subsection{Age distribution}
\label{subsect:age dist}
The age of the star forming regions can be estimated from the observed UVIT color using the SSP models generated in Section \ref{sect:models}. The regions identified using ProFound consist of resolved star forming regions, stellar associations, and regions that cannot be further resolved due to the resolution constraints of UVIT. Each region identified by ProFound is treated as a single-age stellar population for the age estimation. Using the segmentation maps generated for the FUV image, the corresponding NUV magnitudes were also obtained with ProFound. From the output, we estimated the extinction-corrected magnitudes in the F148W and N263M filters, and hence the color of each region. Figure \ref{fig:mass_est} shows the UV color-magnitude diagram of the star forming regions, over-plotted with the Starburst99 model curves.
The age of each region is estimated by linear interpolation along the color axis.
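The interpolation step can be sketched as below. The model grid values here are invented for illustration and are not the actual Starburst99 outputs; only the monotonic reddening of color with age is assumed.

```python
import numpy as np

# Hypothetical Starburst99-like grid: FUV-NUV color vs. age (values are
# illustrative only; color reddens monotonically as the population ages).
model_age = np.array([1.0, 10.0, 50.0, 100.0, 300.0, 900.0])   # Myr
model_color = np.array([-0.9, -0.6, -0.2, 0.1, 0.6, 1.2])      # F148W - N263M

def age_from_color(color):
    """Linear interpolation along the color axis of the model grid."""
    return float(np.interp(color, model_color, model_age))

age = age_from_color(-0.4)   # a region with observed color of -0.4 mag
```

The mass follows analogously, by interpolating along the F148W magnitude axis of the model curve matching the observed color.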
A better understanding of the propagation of star formation across the galaxy can be obtained from the spatial distribution of the ages of the star forming regions. Since the evolution of a galaxy can be driven by both secular and environmental effects, the spatial age distribution provides insight into the factors that affect star formation across the galaxy. The relative distribution of younger and older star forming regions can be further correlated with factors affecting star formation, such as the spiral arms and possible interactions with other galaxies \citep{Gusev2013, shabani2018}.
To generate the age map and understand the spatial distribution of the ages of the star forming regions, we selected seven age groups in the range 1-300 Myr: 1-20 Myr, 20-40 Myr, 40-70 Myr, 70-100 Myr, 100-150 Myr, 150-200 Myr, and 200-300 Myr. The bin selection is based on the histogram distribution of the ages, with the interval varied according to the number of star forming regions in each bin. The selected bin size is also larger than the mean error associated with the estimated age in each bin. The left panel of Figure \ref{fig:age_map} depicts the age map of the star forming regions identified in this study. It suggests that most of the population identified from the FUV image is young: 91\% of the star forming regions are younger than 100 Myr, and $\sim$54\% are younger than 20 Myr, which points to recent star formation in the galaxy. From Figure \ref{fig:age_map} it is evident that the outer regions of the galaxy host younger star forming regions than the inner part. This could be due to the inside-out growth of the disk \citep{white1991, Brook2006, Munoz2007}.
\subsection{Mass distribution}
\label{subsect:Mass_dist}
The mass of a star forming region depends on the mass of its parent molecular cloud. The magnitudes and colors of the star forming regions in the F148W and N263M filters are used to estimate their masses. By linear interpolation along the F148W magnitude axis of Figure \ref{fig:mass_est} at the observed color of each star forming region, the corresponding mass is estimated. From Figure \ref{fig:mass_est} it is evident that the masses of the identified star forming regions cover a wide range, from $10^{3}$ to $10^{7}\,M_{\odot}$, and peak around $10^{5}\,M_{\odot}$. A large number of star forming regions fall in the $10^{5}\,M_{\odot}$ to $10^{6}\,M_{\odot}$ mass range.
The distribution of the masses of the star forming regions across the galaxy helps us to understand its star formation properties. The right panel of Figure \ref{fig:age_map} shows the mass map of the star forming regions identified in NGC 628. The selected mass bins are $\log(M/M_{\odot}) < 4.5$, $4.5 < \log(M/M_{\odot}) < 5$, $5 < \log(M/M_{\odot}) < 5.5$, $5.5 < \log(M/M_{\odot}) < 6$, $6 < \log(M/M_{\odot}) < 6.5$, and $\log(M/M_{\odot}) > 6.5$. The bin selection is performed as discussed in Section \ref{subsect:age dist}. Most of the less massive star forming regions ($\log(M/M_{\odot}) < 4.5$) are located in the outer parts of the galaxy, while the most massive ones are situated in the inner part. Figure \ref{fig:age_map} suggests that the recently formed star forming regions are less massive than the older ones.
\subsection{Propagation of star formation along the spiral arms of NGC 628}
\label{subsect:Arms}
The results of \citet{Gusev2013,Gusev2014} and \citet{shabani2018} suggest two different trends in the star formation properties of the spiral arms of NGC 628. The difference between the two studies lies in the adopted extent of the spiral arms and the selected star forming regions. \citet{Gusev2013} consider the shorter arm to be distorted, and hence use only the star forming regions located before the distortion starts. \citet{shabani2018} consider the total extent of both spiral arms, but their analysis is incomplete because of the limited data footprint.
In this context, we study the star forming regions detected in the UVIT FUV image of the spiral arms of NGC 628. This helps us to understand differences in properties such as the SFR, extent, age, and mass of the star forming regions. We visually inspected the star forming regions identified using ProFound and assigned them to the spiral arms based on proximity. The arms are denoted Arm A (longer arm/Arm 1) and Arm B (shorter arm/Arm 2). We consider two scenarios, based on the studies of \citet{Gusev2013} and \citet{shabani2018}. In the first case, in Arm B we consider only the star forming regions before the distorted portion of the arm (as suggested by \citet{Gusev2013}). In the second case, we consider all the regions identified in Arm B (including the distorted portion). In both cases, the regions selected for Arm A remain the same. Figure \ref{fig:spiral arm} shows the star forming regions identified in the spiral arms.
Figure \ref{fig:combined_age_hist} shows kernel density estimate (KDE) plots of the properties of the star forming regions, namely SFR density, extent, age, and mass, for both arms. The left panel (a) represents the first scenario discussed above, in which Arm B is restricted to the shorter extent; the right panel (b) represents the second scenario, in which the full extent of Arm B is considered. From Figure \ref{fig:combined_age_hist}, a significant difference in the age and mass of the star forming regions between the two arms is evident when the shorter extent of Arm B is considered.
We performed a two-sample Kolmogorov-Smirnov (KS) test on the properties of the star forming regions to check whether the Arm A and Arm B samples are drawn from a single distribution. As with the KDE analysis, we performed the KS test separately for the first and second scenarios. The KS test on the SFRD of the star forming regions gives a probability of 34\% that both populations are drawn from a single distribution when the shorter extent of Arm B is considered, and 22\% for the full extent. Based on the SFRD estimates, neither case supports a significant difference in the star formation rates of the two spiral arms of NGC 628. When the KS test is performed on the age estimates, the probability for the shorter arm scenario (0.001\%) strongly supports the result of \citet{Gusev2014} that there is a significant difference in the age distributions of the star forming regions in the longer and shorter arms of NGC 628. On the contrary, when we use the Arm B sample with the total extent, the probability becomes 59\%; this high value suggests that the populations in the two arms do not differ much in age. For the masses of the star forming regions, the KS test gives 0.7\% and 43\% for the shorter-arm and total-extent considerations of Arm B, the same pattern as for the ages. This could arise because the older and more massive star forming regions are situated in the inner part of the disk, as discussed in Sections \ref{subsect:age dist} and \ref{subsect:Mass_dist}. In both scenarios, the KS test based on the extent of the star forming regions suggests that the two samples belong to the same population.
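The two-sample KS test used here is available in SciPy; a minimal sketch with synthetic data (not the measured arm properties) illustrates the two limiting cases:

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-in for an arm property (values are illustrative only).
arm_a = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])

same = ks_2samp(arm_a, arm_a)            # identical samples: p-value = 1
shifted = ks_2samp(arm_a, arm_a + 10.0)  # disjoint samples: D = 1, tiny p
```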
\subsection{Azimuthal distance distribution of star forming regions}
\label{subsect:Azimuth}
\citet{Gusev2014} attribute the asymmetric star formation observed in the spiral arms in their study to spiral density waves. \citet{Henry2003} studied the asymmetry of the spiral arms of M51 and suggested that the presence of more than one spiral density wave could cause variable star formation. The presence of a one-armed wave alongside the dominant two-armed one in NGC 628 has been proposed by \citet{sakhibov2004}.
An age gradient across the spiral arms can be used to confirm the presence of spiral density waves \citep{shabani2018}. To estimate the position of each star forming region relative to the spiral arms, the arms must first be defined. According to spiral density wave theory, the inner edge of a spiral arm forms dark dust lanes due to the compression of gas as it flows through the potential minima of the density wave \citep{Roberts1969}. Since the dust lanes are narrow and well defined in optical images, we use them to define the spiral arms of NGC 628. For this purpose we used the B-band image obtained with the Kitt Peak National Observatory (KPNO) 0.9 m telescope (observers: van Zee, Dowell). We defined the spiral arm ridge lines manually in the smoothed B-band image of NGC 628. Figure \ref{fig:ridge} shows the B-band image overplotted with the ridge lines selected for Arms A and B. We adopted a radius of 2 kpc for the bulge-dominated part \citep{shabani2018} and a corotation radius of 7 kpc \citep{sakhibov2004} for NGC 628. For this analysis we assigned every star forming region inside the corotation radius to either Arm A or Arm B based on its location: the distance to each ridgeline is estimated, and the region is assigned to the arm whose ridgeline is closest.
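The nearest-ridgeline assignment can be sketched as follows: each region is compared against points sampled along the two manually defined ridge lines and attached to the closer arm. All coordinates below are hypothetical deprojected positions in kpc, purely for illustration:

```python
import math

def min_dist(point, ridgeline):
    """Minimum distance from a point to a ridgeline sampled as (x, y) points."""
    return min(math.hypot(point[0] - rx, point[1] - ry) for rx, ry in ridgeline)

def assign_arm(region, arm_a_ridge, arm_b_ridge):
    """Assign a region to the arm whose ridgeline is closest."""
    da = min_dist(region, arm_a_ridge)
    db = min_dist(region, arm_b_ridge)
    return ("A", da) if da <= db else ("B", db)

# hypothetical ridgeline samples and region positions (kpc)
arm_a = [(2.0, 0.0), (2.5, 1.0), (3.0, 2.0)]
arm_b = [(-2.0, 0.0), (-2.5, -1.0), (-3.0, -2.0)]
arm, dist = assign_arm((2.6, 1.2), arm_a, arm_b)
```

In practice the ridge lines would be sampled densely enough that the point-to-sample distance approximates the point-to-curve distance.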
From the position of each star forming region and the ridgeline, the azimuthal distance of the region from the ridgeline is estimated. Based on the histogram of the ages of the star forming regions, we selected three age bins for this analysis: 1-30 Myr, 30-60 Myr, and greater than 60 Myr. Figure \ref{fig:azimuth} shows the KDE plots of the azimuthal distances of the star forming regions in these age bins. Note that an azimuthal distance of zero degrees means the region lies on the ridgeline. The median azimuthal distance from the ridgeline is -5.7, -6.3, and -5.5 degrees for the young, intermediate, and old populations, respectively; the medians are marked in Figure \ref{fig:azimuth} with vertical lines of the corresponding colors. The deviation from the ridgeline is shown in the inset of Figure \ref{fig:azimuth}. If an azimuthal age gradient were present, we would expect a monotonic trend in the length of the lines representing the deviation from the ridgeline. From Figure \ref{fig:azimuth} it is evident that there is no significant difference in the azimuthal distribution of star forming regions as a function of age. No age gradient is present across the spiral arms of NGC 628, which in turn calls into question the idea that density waves are responsible for the formation of the spiral structure. This result is consistent with \citet{shabani2018}, who suggested swing amplification as a possible formation scenario for the spiral arms of NGC 628 based on the azimuthal distribution of star forming regions of different ages.
\subsection{Headlight region in UV}
\label{sec:Headlight} %
NGC 628 hosts a giant molecular cloud in one of its outer spiral arms, known as the headlight cloud. Studies by \citet{Kreckel2016} and \citet{Kreckel2018} using MUSE $H\alpha$ data found a bright HII region at this location, with a luminosity two orders of magnitude higher than the mean $H\alpha$ luminosity of the HII regions. \citet{Herrera2020} located the cloud at an offset of ($47''$, $51''$), at a radial distance of 3.2 kpc from the center of the galaxy. The cloud exhibits bright infrared emission in Spitzer and Herschel images. The mass distribution function study of \citet{Sun2018} found that features with such intense flux are usually associated with galactic centers or stellar bars. The position of the headlight cloud, together with the absence of a stellar bar in NGC 628, therefore makes it an ideal target for understanding molecular cloud properties in a galactic disk.
Figure \ref{fig:hl} shows the UVIT view of the headlight cloud in NGC 628. \citet{Herrera2020} consider the headlight cloud to be the brightest molecular cloud in NGC 628; in our study as well, it is the brightest star forming region in the galaxy. Note that the 24$\mu$m emission is also high in the headlight cloud region. We estimate the age of the headlight cloud to be 16 Myr, whereas \citet{Herrera2020}, using the $H\alpha$ equivalent width (EW) from the MUSE spectrum together with Starburst99 models, estimated an age of 2-4 Myr. The difference between the two estimates could be due to the different stellar populations sampled, given the larger extent of the headlight cloud region ($\sim$ 280 pc) obtained in this study.
\section{Discussion}
\label{sect:Discussion}
In this study, we primarily focus on identifying the star forming regions in NGC 628 and characterizing their properties in order to better understand the secular evolution of the galaxy. To this end we used the available UVIT data in the F148W and N263M filters. By comparing the flux values measured in the FUV and NUV bands with theoretical models, we estimate the age and mass of the star forming regions. It must be noted that stochastic sampling of the IMF could affect the results obtained from the Starburst99 analysis. However, since most of the star forming regions identified in this study have masses greater than $10^{4} M_{\odot}$, the IMF can be considered fully sampled, and stochastic sampling is unlikely to affect our results \citep{daSilva2012}.
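The age assignment from the two UVIT bands amounts to matching the observed FUV$-$NUV color against a model grid. A schematic stdlib-only version, where the grid values are placeholders invented for illustration and not actual Starburst99 outputs:

```python
def estimate_age(fuv_minus_nuv, model_grid):
    """Pick the model age whose predicted FUV-NUV color is closest to observed.

    model_grid: list of (age_Myr, fuv_minus_nuv_mag) pairs."""
    return min(model_grid, key=lambda m: abs(m[1] - fuv_minus_nuv))[0]

# placeholder grid: UV color reddens monotonically with age (illustrative only)
grid = [(1, -0.30), (5, -0.20), (10, -0.05), (20, 0.15),
        (50, 0.55), (100, 1.00), (200, 1.60)]
age = estimate_age(0.12, grid)   # nearest model color is 0.15 -> 20 Myr
```

In a real analysis one would interpolate in the model grid, correct the fluxes for extinction first, and scale the model normalization to the observed flux to obtain the mass.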
Based on FUV and NUV images from UIT, \citet{Cornett1994} found that the central region of NGC 628 does not host a significant population of OB stars. Based on the Effelsberg maps, \citet{Mulcahy2017} found that the northern spiral arm of NGC 628 contains many HII regions, which are also bright in HI \citep{Walter2008} and infrared \citep{Kennicutt2011} images. Combining their analysis with the results of \citet{Marcum2001}, \citet{Mulcahy2017} suggested that the entire disk of NGC 628 has undergone active star formation within the past 500 Myr. They also concluded that the inner regions have experienced a more steeply declining star formation history than the outer regions of the galaxy. These results are consistent with those obtained in our study (Sections \ref{subsect:regions} \& \ref{subsect:age dist}). Our UVIT analysis shows that the inner part of the galaxy hosts an older population, whereas a comparatively younger population dominates the outer part, suggesting an inside-out growth of the galaxy.
The reported asymmetry in the star formation properties of the arms has been attributed to the presence of more than one spiral density wave \citep{Gusev2014}. From the analysis described in Section \ref{subsect:Arms}, we find no significant difference between the SFRD estimates of the star forming regions in Arm A and Arm B. In Section \ref{subsect:Azimuth} we analyzed the azimuthal age gradient across the spiral arms and found no significant gradient in the azimuthal distribution of star forming regions. This suggests that density wave theory may not fully account for the formation of the spiral arms of NGC 628.
It is also worth noting that five of the nine supernova remnants (SNRs) in NGC 628 \citep[listed in][]{Sonba2010} are located in spiral Arm A. Three recent supernovae, SN 2003gd, 2013ej, and 2019krl, are also located in Arm A, whereas Arm B hosts no supernovae or SNRs. \citet{Michalowski2020} studied the supernovae 2002ap, 2003gd, 2013ej, and 2019krl in NGC 628. They found that SN 2002ap lies at the end of an off-centre, asymmetric 55 kpc-long HI extension containing 7.5\% of the total atomic gas of NGC 628. Based on this result, they suggested that the birth of the progenitor of SN 2002ap can be attributed to the accretion of atomic gas from the intergalactic medium; they also raised the possibility that tidally disrupted companions of NGC 628 produced the HI extension. They could not explain the formation of the progenitors of the other three supernovae located along spiral Arm A.
The results obtained in this study suggest that the density wave scenario might not be fully responsible for the formation of the spiral arms of NGC 628. As \citet{shabani2018} suggested, swing amplification is a possible formation scenario, and a combination of density waves and swing amplification also remains viable. A detailed multiwavelength study of the star formation properties of NGC 628 could provide a clearer picture.
\section{Summary}
\label{sect:summary}
A summary of the main results obtained from our study is given below.
\begin{itemize}
\item In this study, we used the UVIT FUV and NUV observations of NGC 628 to identify and characterize the star forming regions in the galaxy.
\item We identified 300 star forming regions in the UVIT FUV image of NGC 628 using the ProFound package.
\item Around 91$\%$ of the star forming regions are found to be younger than 100 Myr, and 54$\%$ are younger than 20 Myr.
\item The youngest clumps ($<$ 10 Myr) are found mainly in the outer parts of the galaxy, whereas the central region hosts most of the older stellar population.
\item The masses of the identified star forming regions range from $10^{3}$ to $10^{7} M_{\odot}$.
\item Our study suggests that there is no difference between the star formation properties of the two spiral arms of NGC 628, contradicting the findings of \citet{Gusev2013}.
\item The results of this study do not support the spiral density wave theory for the formation of the spiral arms of NGC 628; the absence of an age gradient is consistent with the results of \citet{Foyle2011} and \citet{shabani2018}.
\end{itemize}
\section*{Acknowledgements}
We thank the anonymous referee for the valuable comments that improved the scientific content of the paper. We thank Joseph Postma for his consistent help during the process of reducing UVIT L1 images and Aaron Robotham for his support while performing the source identification using ProFound. UK thanks Chayan Mondal, Prajwel Joseph, Akhil Krishna, Sudheesh and Arun Roy for their valuable suggestions throughout the course of the work. UK acknowledges the Department of Science and Technology (DST) for the INSPIRE FELLOWSHIP (IF180855). SSK, RT, and UK acknowledge the financial support from Indian Space Research Organisation (ISRO) under the AstroSat archival data utilization program (No. DS-2B-13013(2)/6/2019). SS acknowledges support from the Science and Engineering Research Board, India through a Ramanujan Fellowship. This publication uses the data from the UVIT, which is part of the AstroSat mission of the ISRO, archived at the Indian Space Science Data Centre (ISSDC). We gratefully thank all the individuals involved in the various teams for providing their support to the project from the early stages of the design to launch and observations with it in the orbit. We thank the Center for Research, CHRIST (Deemed to be University) for all their support during the course of this work.
\section*{Data availability}
The UVIT data used in this article will be shared on reasonable request to the corresponding author. All the data is already available at \url{https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp}.
\bibliographystyle{mnras}
\bibliography{bibtex} %
|
Title:
The Low Temperature Corona in ESO 511$-$G030 Revealed by NuSTAR and XMM-Newton |
Abstract: We present the results from a coordinated XMM-Newton $+$ NuSTAR observation
of the Seyfert 1 Galaxy ESO 511$-$G030. With this joint monitoring programme,
we conduct a detailed variability and spectral analysis. The source remained in
a low flux and very stable state throughout the observation period, although
there are slight fluctuations of flux over long timescales. The broadband
(0.3-78~keV) spectrum shows the presence of a power-law continuum with a soft
excess below 2~keV, a relatively narrow iron K$\alpha$ emission
($\sim$6.4~keV), and an obvious cutoff at high energies. We find that the soft
excess can be modeled by two different possible scenarios: a warm ($kT_{\rm e}
\sim$ 0.19~keV) and optically thick ($\tau = 18\sim25$) Comptonizing corona or
a relativistic reflection from a high-density ($\log [n_{\rm e}/{\rm
cm}^{-3}]=17.1 \sim 18.5$) inner disc. All models require a low temperature
($kT_{\rm e} \sim$ 13~keV) for the hot corona.
| https://export.arxiv.org/pdf/2208.01452 |
\title{The Low Temperature Corona in \src\ Revealed by \nustar\ and \xmm}
\correspondingauthor{Cosimo Bambi}
\email{bambi@fudan.edu.cn}
\author{Zuobin Zhang}
\affiliation{Center for Field Theory and Particle Physics and Department of Physics, Fudan University, 2005 Songhu Road, 200438 Shanghai, China}
\author[0000-0002-9639-4352]{Jiachen Jiang}
\affiliation{Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK}
\author{Honghui Liu}
\affiliation{Center for Field Theory and Particle Physics and Department of Physics, Fudan University, 2005 Songhu Road, 200438 Shanghai, China}
\author[0000-0002-3180-9502]{Cosimo Bambi}
\affiliation{Center for Field Theory and Particle Physics and Department of Physics, Fudan University, 2005 Songhu Road, 200438 Shanghai, China}
\author{Christopher S. Reynolds}
\affiliation{Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK}
\author{Andrew C. Fabian}
\affiliation{Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK}
\author{Thomas Dauser}
\affiliation{Remeis-Observatory \& ECAP, FAU Erlangen-N\"urnberg, Sternwartstr. 7, 96049 Bamberg, Germany}
\author{Kristin Madsen}
\affiliation{Cahill Center for Astronomy and Astrophysics, California Institute of Technology, Pasadena, CA 91125, USA}
\affiliation{CRESST and X-ray Astrophysics Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD 20771 USA}
\author{Andrew Young}
\affiliation{School of Physics, Tyndall Avenue, University of Bristol, Bristol BS8 1TH, UK}
\author{Luigi Gallo}
\affiliation{Department of Astronomy and Physics, Saint Mary’s University, 923 Robie Street, Halifax, NS B3H 3C3, Canada}
\author{Zhibo Yu}
\affiliation{Center for Field Theory and Particle Physics and Department of Physics, Fudan University, 2005 Songhu Road, 200438 Shanghai, China}
\author{John Tomsick}
\affiliation{Space Sciences Laboratory, University of California, 7 Gauss Way, Berkeley, CA 94720-7450, USA}
\keywords{accretion, accretion discs – black hole physics – galaxies: Seyfert – X-rays: galaxies.}
\section{Introduction} \label{sec:intro}
Active galactic nuclei (AGNs) are among the most luminous sources in the Universe, emitting over a broad energy range from radio to gamma rays. This emission results from the accretion of matter onto a supermassive black hole (SMBH) at the center of the host galaxy \citep{Lynden-Bell1969,Rees1984}. By studying the X-ray emission, we can directly probe the innermost regions of AGNs. The AGN X-ray emitting region is small \citep{Reis2013} and located close to the central SMBH and accretion disc, as suggested by studies of the black hole mass dependence of AGN X-ray variability (e.g., \citealt{Axelsson2013}; \citealt{McHardy2013}; \citealt{Ludlam2015}), reverberation of X-ray radiation reprocessed by the accretion disc (e.g., \citealt{DeMarco2013}; \citealt{Uttley2014}; \citealt{Kara2016}), and quasar microlensing (e.g., \citealt{Mosquera2013}; \citealt{Chartas2016}; \citealt{Guerras2017}).
The typical broadband X-ray spectrum of a Seyfert~1 AGN consists of a power-law continuum, fluorescent emission lines, a Compton hump, and a soft excess below $\sim$2 keV. In the simplest scenario, thermal emission from the disc mostly emerges in the ultraviolet (UV) band. The seed UV/optical photons are Compton up-scattered to the hard X-ray band in a region filled with a hot and optically thin plasma near the central black hole, often referred to as the corona (e.g., \citealt{Vaiana1978}; \citealt{Haardt1991}; \citealt{Merloni2003}). A fraction of the hard X-ray photons illuminate the surface of the accretion disc and are reflected to produce the reflection component (e.g., \citealt{Ross2005}; \citealt{Garcia2010}), which is smeared by relativistic effects (e.g., \citealt{2021SSRv..217...65B, Fabian1989}).
Broadband X-ray spectroscopy of the X-ray emission produced in the Comptonizing plasma can provide important insights into the principal properties of the corona, such as its temperature ($kT_{\rm e}$), optical depth ($\tau_{\rm e}$), and ultimately its geometry. The continuum originates in the Comptonization of low-energy disc photons scattered by hot electrons (e.g., \citealt{Balokovic2020}), and the high-energy turnover is generally interpreted as a measure of the coronal temperature (or a value close to it). Recently, \cite{Fabian2015} gathered \textit{NuSTAR} results to map out source locations on the compactness-temperature ($\ell - \Theta$) diagram, finding that many sources lie (marginally) above the electron-electron coupling line and all lie above the electron-proton line. \cite{Fabian2017} then re-examined the case of hybrid coronae \citep{Zdziarski1993}, in which the plasma contains both thermal and non-thermal particles, and found that the objects with the lowest measured coronal temperatures require the largest non-thermal fractions.
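For reference, the axes of the compactness-temperature diagram are the standard dimensionless combinations
\begin{equation}
  \ell \equiv \frac{L \, \sigma_{\rm T}}{R \, m_{\rm e} c^{3}} ,
  \qquad
  \Theta \equiv \frac{k T_{\rm e}}{m_{\rm e} c^{2}} ,
\end{equation}
where $L$ is the coronal luminosity, $R$ its characteristic size, and $\sigma_{\rm T}$ the Thomson cross section; coronae observed to lie near the pair-production boundary on this plane motivate the hybrid thermal/non-thermal interpretation.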
\src\ is a bare Seyfert 1 AGN at redshift $z = 0.0224$ (\citealt{Tombesi2010}, \citealt{Winter2012}, and \citealt{Laha2014}). It is one of the brightest bare Seyfert AGNs featured in the Swift 58-month BAT catalog \citep{Winter2012}. \cite{Ghosh2021} presented a broadband (optical-UV to hard X-ray) spectral study of \src\ using multi-epoch \textit{Suzaku} and \xmm\ \citep{Jansen2001} data from 2007 and 2012. They investigated the spectral features of the source with a physically motivated set of models and reported a rapidly spinning black hole ($a_* > 0.78$) and a compact corona, indicating a relativistic origin of the broad Fe emission line. \cite{Ghosh2021} also found an inner disc temperature of $2\sim3$~eV, which characterizes the UV bump, and that the SMBH accretes at a sub-Eddington rate ($\lambda_{\rm Edd}=0.004-0.008$).
Previous studies of \src\ were based on the \xmm\ observation data in 2007 and the \textit{Suzaku} observation in 2012 (e.g., \citealt{Ghosh2021}, \citealt{Tombesi2010}, \citealt{Winter2012}, and \citealt{Laha2014}). \nustar\ \citep{Harrison2013} and \xmm\ conducted a joint observing campaign on this source in 2019, consisting of five joint observations over twenty days. In this paper, we analyze the observational data from this \nustar\ and \xmm\ joint observation campaign, investigating the nature of the corona continuum, reflection, and soft excess.
The paper is organized as follows. In Sec.~\ref{observations}, we present the observational data reduction and the light curves. The spectral analysis with two different possible scenarios is reported in Sec.~\ref{analysis}. We discuss the results and report our conclusions in Sec.~\ref{discussion} and Sec.~\ref{conclusion}, respectively.
\section{Observations and data reduction}
\label{observations}
Between 2019 July 20 and 2019 August 9, \nustar\ and \xmm\ performed five joint observations, detailed in Tab.~\ref{t-obs}. The data were taken with the two focal plane modules (FPMA and FPMB) on board the \nustar\ satellite and with the EPIC-pn CCD camera on board \xmm. A total unfiltered exposure time of $\sim 168$ ks was obtained across the five observations.
\begin{table*}
\centering
\caption{\rm Summary of the observations analyzed in the present work. \label{t-obs}}
\renewcommand\arraystretch{1.5}
\begin{tabular}{m{1.5cm}m{2.5cm}<{\centering}m{2.5cm}<{\centering}m{2.5cm}<{\centering}m{2.5cm}<{\centering}m{2.5cm}<{\centering}}
\hline\hline
 & Mission & Obs.~ID & Instrument & Start date & Exposure (ks) \\ \hline
Epoch~1 & \nustar\ & 60502035002 & FPMA/B & 2019-07-20 & 32.1 \\
&\xmm\ & 0852010101 & EPIC-pn & 2019-07-20 & 37.0 \\ \hline
Epoch~2 & \nustar\ & 60502035004 & FPMA/B & 2019-07-25 & 34.1 \\
&\xmm\ & 0852010201 & EPIC-pn & 2019-07-25 & 36.0 \\ \hline
Epoch~3 & \nustar\ & 60502035006 & FPMA/B & 2019-07-29 & 31.2 \\
&\xmm\ & 0852010301 & EPIC-pn & 2019-07-29 & 33.0 \\ \hline
Epoch~4 & \nustar\ & 60502035008 & FPMA/B & 2019-08-02 & 41.8 \\
&\xmm\ & 0852010401 & EPIC-pn & 2019-08-02 & 38.3 \\ \hline
Epoch~5 & \nustar\ & 60502035010 & FPMA/B & 2019-08-09 & 29.2 \\
&\xmm\ & 0852010501 & EPIC-pn & 2019-08-09 & 36.2 \\
\hline\hline
\end{tabular}
\vspace{0.3cm}
\end{table*}
\subsection{\xmm\ data reduction}
The \xmm\ EPIC cameras can perform extremely sensitive imaging observations over a $30'$ field of view in the 0.2-12 keV energy range, with moderate spectral ($E/\Delta E \sim 20-50$) and angular ($\sim 6''$ FWHM; $\sim 15''$ HEW) resolution. Because EPIC-pn has a higher effective area than EPIC-MOS and the two detectors give consistent results, we consider only EPIC-pn data, in the 0.3-10.0 keV band, in our X-ray spectral analysis.
We reduce the data and extract products from the Observation Data Files (ODFs) following the standard procedures of the XMM-Newton Science Analysis System (SAS 18.0.0) with the latest calibration files. The EPIC-pn data are processed with \texttt{epproc} and the standard filtering criteria. We then remove periods of high background by creating a Good Time Interval (GTI) file with the task \texttt{tabgtigen}. The source products are extracted from a circular region of radius $30''$ centered on the source, and the background from a circular region of radius $65''$ offset from the source. The \texttt{evselect} task is used to select single and double events (PATTERN $\leq$ 4, FLAG $==$ 0) for the EPIC-pn source event lists. The Redistribution Matrix File (RMF) and Ancillary Response File (ARF) are created with the SAS tasks \texttt{rmfgen} and \texttt{arfgen}, respectively. We check for pile-up with the SAS task \texttt{epatplot} and find its influence to be negligible during the observations.
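The high-background screening amounts to keeping only time intervals where the background rate stays below a threshold and then filtering the event list on those intervals. A simplified stdlib-only illustration of that logic (not the SAS implementation; all numbers are hypothetical):

```python
def make_gti(times, rates, threshold):
    """Build good-time intervals [t_start, t_stop) where rate < threshold.

    times: bin start times (uniform spacing assumed); rates: counts/s per bin."""
    dt = times[1] - times[0]
    gtis, start = [], None
    for t, r in zip(times, rates):
        if r < threshold and start is None:
            start = t                       # open a good interval
        elif r >= threshold and start is not None:
            gtis.append((start, t))         # flare: close the interval
            start = None
    if start is not None:
        gtis.append((start, times[-1] + dt))
    return gtis

def filter_events(event_times, gtis):
    """Keep only events falling inside a good-time interval."""
    return [t for t in event_times if any(a <= t < b for a, b in gtis)]

# hypothetical 100 s bins with a background flare at 200-300 s
times = [0, 100, 200, 300, 400]
rates = [0.4, 0.5, 2.1, 0.6, 0.3]           # counts/s
gtis = make_gti(times, rates, threshold=1.0)
clean = filter_events([50, 250, 450], gtis)
```

The real pipeline additionally handles non-uniform binning, frame times, and exposure bookkeeping, but the screening principle is the same.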
\subsection{\nustar\ data reduction}
The \nustar\ \citep{Harrison2013} data were reduced following the standard procedure with the NuSTAR Data Analysis Software (NUSTARDAS v.2.1.1) and updated calibration files from \nustar\ CALDB v20220301. We produce calibrated and filtered event files with \texttt{nupipeline}. Passages through the South Atlantic Anomaly (SAA) are excluded using the settings SAAMODE=STRICT, SAACALC=2, and TENTACLE=YES. For a relatively faint source such as this AGN, the effect of SAA filtering is non-negligible, so we adopt these strict filtering criteria to reduce the background rates and obtain more reliable spectral products.
We use the task \texttt{nuproducts} to extract the source products and the associated instrumental response files from a circular region of radius $65''$ centered on the source. The \nustar\ background varies across the field of view and between the four CdZnTe (CZT) detectors on each focal plane. We therefore extract the background from the largest available nearby polygonal region free of source contamination, restricted to the same CZT chip as the source, as shown in Fig.~\ref{image}. These choices minimize the instrumental systematics and yield a more reliable background spectrum. Products are obtained from FPMA and FPMB separately.
\subsection{Lightcurves and variability}
Fig.~\ref{lcurve-1} presents the light curves of the \nustar\ and simultaneous \xmm\ observations of Epochs 1-5. The \xmm\ data are binned in 150~s intervals and the \nustar\ FPMA and FPMB data in 200~s intervals. The \xmm\ $0.3 - 10$~keV count rate of \src\ remains within $1.8 - 3.0$~ct~s$^{-1}$ during the first four epochs, and the \nustar\ $3.0 - 78.0$~keV count rate within $0.1 - 0.35$~ct~s$^{-1}$. In Epoch 5, the brightest epoch, both the \nustar\ and \xmm\ count rates increase by $\sim60$\% relative to the average of Epochs 1-4, reaching $\sim 3.5$~ct~s$^{-1}$ in \xmm\ and $\sim 0.35$~ct~s$^{-1}$ in \nustar. In Epoch 2, the faintest epoch, the \xmm\ and \nustar\ count rates are $\sim 2.0$ and $\sim 0.2$~ct~s$^{-1}$, respectively. Note that the rate fluctuations between observations occur simultaneously in the different instruments, indicating that the variability spans the entire broad energy band.
To investigate this variability, we extract the \xmm\ light curves in the 0.3-2.0 keV and 2-10 keV bands, shown in the upper panel of each plot in Fig.~\ref{lcurve-2}; the soft and hard bands vary simultaneously. We also plot the \xmm\ hardness ratio (2-10 keV/0.3-2.0 keV) in the lower panel of each plot. The hardness is stable both within and between observations. We further extract spectra from the individual epochs and find only a small discrepancy in normalization between them, confirming the result of the light curve analysis.
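The hardness-ratio diagnostic can be computed directly from the matched band-limited light curves; a minimal sketch (the counts below are hypothetical, with the last bin brighter but equally hard, mimicking the behavior described above):

```python
def hardness_ratio(soft_counts, hard_counts):
    """Per-bin hardness ratio H/S from matched soft- and hard-band light curves."""
    return [h / s if s > 0 else float("nan")
            for s, h in zip(soft_counts, hard_counts)]

# hypothetical 150 s bins: 0.3-2 keV (soft) and 2-10 keV (hard) counts
soft = [300, 320, 310, 450]
hard = [60, 64, 62, 90]
hr = hardness_ratio(soft, hard)   # flux rises in the last bin but HR stays ~0.2
```

A flat hardness ratio while the total rate varies is the signature of variability that is purely a change in normalization, which is what justifies merging the epochs.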
As mentioned above, because the variability occurs simultaneously across the entire broad energy band and the spectral variations amount only to a change in normalization between observations, we merge the spectra of Epochs 1-5 into a single spectrum. For \xmm, we produce a combined multi-observation EPIC-pn spectrum with the SAS tool \texttt{epicspeccombine}, and we use the \xmm\ data over the 0.3-10.0 keV band in the following analysis. For \nustar, we produce combined multi-observation FPMA and FPMB spectra with the HEASARC ftool \texttt{addspec}, and we use the \nustar\ data over the 3.0-78.0 keV band. All spectra are rebinned to a minimum of 20 counts per energy bin, oversampling the spectral resolution by a factor of 3.
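The minimum-counts grouping applied before $\chi^2$ fitting can be sketched as follows (a simplified, grppha-style grouping; channel counts are hypothetical):

```python
def rebin_min_counts(counts, min_counts=20):
    """Group consecutive channels until each new bin holds >= min_counts.

    Returns a list of (first_channel, last_channel, summed_counts) groups;
    a trailing underfilled group is merged into the previous one."""
    groups, start, acc = [], 0, 0
    for ch, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            groups.append((start, ch, acc))
            start, acc = ch + 1, 0
    if acc > 0 and groups:                 # merge leftover channels into last group
        s, e, a = groups.pop()
        groups.append((s, len(counts) - 1, a + acc))
    return groups

channels = [12, 5, 9, 30, 4, 4, 4, 4, 7]   # hypothetical per-channel counts
groups = rebin_min_counts(channels, 20)
```

Grouping to at least 20 counts per bin keeps the Gaussian approximation underlying the $\chi^2$ statistic reasonable; real tools also enforce the resolution-oversampling condition mentioned in the text.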
\section{Spectral Analysis}
\label{analysis}
In this section, we present an analysis of the time-averaged \nustar\ and \xmm\ spectra using the XSPEC (v12.12.1) package \citep{Arnaud1996}. To account for differences between the detector responses of FPMA/B and EPIC-pn, we include a cross-calibration factor, fixed to unity for the EPIC-pn spectra and left free for FPMA and FPMB \citep{Madsen2015a}. The $\chi^{2}$ statistic is employed, and all parameter uncertainties are estimated at the 90\% confidence level, corresponding to $\Delta \chi^{2}=2.71$. We include the absorption model \texttt{tbnew} to describe the Galactic absorption, using the recommended photoelectric cross sections of \citet{Verner1996}. In addition, the multiplicative model \texttt{zmshift} accounts for the redshift of the source, which is fixed at $z=0.0224$ during the spectral fitting.
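The 90\% confidence intervals quoted throughout correspond to the parameter range where $\chi^2$ rises by 2.71 above its minimum. For a one-parameter scan the idea can be sketched as follows (the parabolic $\chi^2$ curve below is purely illustrative, not a fit to these data):

```python
def conf_interval(param_grid, chi2_grid, delta=2.71):
    """Parameter range where chi^2 <= chi^2_min + delta (90% CL for 1 parameter)."""
    chi2_min = min(chi2_grid)
    inside = [p for p, c in zip(param_grid, chi2_grid) if c <= chi2_min + delta]
    return min(inside), max(inside)

# illustrative parabolic chi^2 around a best-fit kTe = 13 keV with sigma = 1 keV
grid = [10 + 0.1 * i for i in range(61)]              # 10-16 keV scan
chi2 = [900.0 + (p - 13.0) ** 2 for p in grid]
lo, hi = conf_interval(grid, chi2)                    # ~ 13 -/+ sqrt(2.71) keV
```

In XSPEC this scan is what the `steppar`/`error` commands perform, profiling over the remaining free parameters at each step rather than holding them fixed.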
We start the fitting with an absorbed power-law model, i.e., \texttt{tbnew $\times$ powerlaw} in XSPEC notation, where \texttt{powerlaw} accounts for the power-law component from the corona. In this fit, we ignore the data below 3 keV, above 15 keV, and in the 5-7 keV band, i.e., the possible soft excess, iron emission line, and Compton hump. Fig.~\ref{soft_excess_iron} shows the broadband spectra of \src\ (upper panel) and the data-to-model ratios of this fit extrapolated over the full 0.3-78.0 keV dataset. The typical AGN spectral features are evident: a soft excess below 2 keV, Fe K$\alpha$ emission at $\sim$6.4 keV, a weak Compton hump peaking at $\sim$20 keV, and a cutoff at high energies.
Fig.~\ref{iron_line} presents a zoomed-in view of the residuals in the 6 keV region. Both the \xmm\ and \nustar\ spectra reveal a consistent shape for the iron K-shell emission line. To investigate the excess emission at 6$\sim$7~keV, we first add a \texttt{gaussian} model to the absorbed power-law model. The source-frame line energy is consistent with 6.4 keV and the line width is $\sigma=0.04_{-0.03}^{+0.05}$~keV, indicating a relatively narrow Fe emission line in \src. These features can be partially accounted for by reprocessing of X-ray photons in neutral, distant material, free from relativistic effects, possibly in the broad-line region (e.g., \citealt{Costantini2016}; \citealt{Nardini2016}) or the torus (e.g., \citealt{Yaqoob2007}; \citealt{Marinucci2018}). To probe the high-energy cutoff, we replace \texttt{powerlaw} with \texttt{nthcomp} (\citealt{Zdziarski1996}; \citealt{Zycki1999}). This model reveals a low-temperature corona, $kT_{\rm e}=16_{-4}^{+5}$~keV. In our subsequent analysis, we investigate it with more physical models.
To fit the soft excess, we separately try a phenomenological single temperature blackbody model, a warm corona model, and a high-density relativistic reflection model. The fits with these models are presented and discussed in the following subsections. The models are summarized in Tab.~\ref{t-mod}. The data-to-model ratios of the fits are depicted in the left column of Fig.~\ref{ratio_chi} and the best-fit models are shown in Fig.~\ref{eemod}. The best-fit results are summarized in Tab.~\ref{best-fit-1}, where $\nu$ is the number of degrees of freedom (dof) and $\chi_{\rm red}^{2}=\chi^{2}/\nu$ is the reduced $\chi^{2}$. Instead of the normalization of every component, in Tab.~\ref{best-fit-1} we report the flux of every component calculated by \texttt{cflux} over the energy range 0.3-80.0 keV.
\begin{table}
\centering
\caption{\rm Summary of the models used in our analysis: blackbody, warm corona, and relativistic reflection.
\label{t-mod}}
\renewcommand\arraystretch{1.5}
{\scriptsize
\begin{tabular}{lc}
\hline\hline
\makebox[0.024\textwidth]{Model} & Component \\
\hline
1 & \texttt{tbnew}$\times$\texttt{zmshift}$\times$(\texttt{bbody}+\texttt{nthcomp}+\texttt{xillverCp}) \\
2 & \texttt{tbnew}$\times$\texttt{zmshift}$\times$(\texttt{nthcomp}+\texttt{nthcomp}+\texttt{xillverCp}) \\
3 & \texttt{tbnew}$\times$\texttt{zmshift}$\times$(\texttt{relconv}$\times$\texttt{xillverDCp}+\texttt{nthcomp}+\texttt{xillverCp}) \\
\hline\hline
\end{tabular}
}
\vspace{0.5cm}
\end{table}
\subsection{Model 1: phenomenological blackbody model}
In this fit, we use the single-temperature blackbody model \texttt{bbody} to describe the soft excess. The coronal emission is described by \texttt{nthcomp} with seed photons originating from the disc. In addition, we introduce \texttt{xillverCp} \citep{Garcia2010, Garcia2011,Garcia2013} to account for the reprocessed emission from the disc, with the reflection fraction $F_{\rm ref}$ fixed to $-1$. In XSPEC notation, the phenomenological model reads \texttt{tbnew} $\times$ \texttt{zmshift} $\times$ (\texttt{bbody}+\texttt{nthcomp}+\texttt{xillverCp}). The fit is insensitive to the inclination angle, so we fix it to its best-fit value $i=73^{\circ}$; fixing the inclination at other, lower values yields consistent results for the flux of \texttt{xillverCp}. In the \texttt{nthcomp} model, we fix $kT_{\rm bb} = 10$~eV, a typical temperature for AGN accretion discs to which the X-ray fit is insensitive \citep{Done2012}. The electron temperature $kT_{\rm e}$ and spectral slope $\Gamma$ are linked to the corresponding parameters of \texttt{xillverCp}. The best-fit results are shown in the third column of Tab.~\ref{best-fit-1}, and the uppermost panel of Fig.~\ref{ratio_chi} shows the corresponding data-to-model ratio with a zoomed-in view of the residuals in the 6~keV region.
Model~1 shows that the phenomenological blackbody model can fit the spectra well from a statistical point of view, with a good fit statistic $\chi_{\rm red}^{2} \sim 1.106$. The fit requires a Galactic column density of $N_{\rm H} = 0.041 \times 10^{22}$~cm$^{-2}$, consistent with the results of \citet{Dickey1990} and \citet{Willingale2013}. We leave the ionization parameter free and obtain $\log\xi < 0.5$, with the best fit hitting the lower limit $\log \xi=0.0$. From the upper right panel of Fig.~\ref{ratio_chi}, we conclude that model~1 describes the iron emission region well with a neutral distant reflection model, although some systematic residuals remain around the Fe K energies, i.e. around 8~keV, corresponding to the blue wing of a potential broad iron line (confirmed with model~3). The best-fit model gives a blackbody temperature $kT_{\rm bb}=0.143$~keV, which lies within the characteristic range found over a wide span of AGN luminosities and black hole masses (e.g., \citealt{Walter1993}; \citealt{Gierlinski2004}; \citealt{Bianchi2009}; \citealt{Crummy2006}), favoring an origin through atomic processes instead of pure continuum emission.
The most peculiar aspect of this model is the relatively low temperature of the hot corona ($kT_{\rm e} = 13.2_{-1.7}^{+2.5}$~keV), which is uncommon for AGNs (e.g., \citealt{Nandra1994}; \citealt{Ricci2017}; \citealt{Balokovic2020}). To check the potential degeneracy between the coronal temperature and the strength of reflection, we examine the joint constraints on the coronal temperature (${kT}_{\rm e}$) and the reflection fraction, presented in the upper panel of Fig.~\ref{contour}. Note that the reflection fraction here is the so-called observer's reflection fraction \citep{Ingram2019}, defined as the observed reflected flux divided by the observed hot-corona flux in the 0.3$-$80~keV band. From the contour plot, we find both parameters are tightly constrained, with no degeneracy between them. We now implement more physically motivated models to study the soft excess and the excess around 6$-$7~keV.
\subsection{Model 2: warm corona model}
A warm ($T_{\rm e} \sim 10^{5-6}$~K) and optically thick ($\tau \sim 10$--$40$) corona has been proposed to explain the observed soft excess in AGNs (e.g., \citealt{Magdziarz1998}; \citealt{Petrucci2018}; \citealt{Porquet2018}; \citealt{Middei2020}). In this scenario, the soft excess is the high-energy tail of the spectrum emerging from the warm corona. This corona may be an extended, slab-like plasma in the upper layers of the disc, cooler than the hot ($T_{\rm e} \sim 10^{8-9}$~K), centrally located, and more compact corona responsible for the non-thermal power-law continuum.
Based on model~1, in model~2 we replace \texttt{bbody} with the physically motivated model \texttt{nthcomp} to represent the warm corona. In this case, the model reads \texttt{tbnew} $\times$ \texttt{zmshift} $\times$ (\texttt{nthcomp1}+\texttt{nthcomp2}+\texttt{xillverCp}) in XSPEC language, where \texttt{nthcomp1} models the soft excess and \texttt{nthcomp2} models the hot corona. To model the reflection spectrum, we still use \texttt{xillverCp} with $F_{\rm ref}$ fixed to $-1$. \texttt{nthcomp} is characterized by the continuum slope, $\Gamma$, the temperature of the Comptonizing electron gas, $kT_{\rm e}$, and the seed photon temperature, $kT_{\rm bb}$. We fix $kT_{\rm bb} = 10$~eV for both \texttt{nthcomp1} and \texttt{nthcomp2}. The electron temperature $kT_{\rm e}$ and spectral slope $\Gamma$ of \texttt{nthcomp1} are left free, while those of \texttt{nthcomp2} are linked to the corresponding parameters in \texttt{xillverCp}.
The warm corona model results in a fit statistic $\chi_{\rm red}^{2} \sim 1.113$ (see the fourth column of Tab.~\ref{best-fit-1}) and models the residuals in Fig.~\ref{soft_excess_iron} well (the middle panel of Fig.~\ref{ratio_chi}). The spectral slope found for the warm corona is $\Gamma = 2.6_{-0.4}^{+0.4}$ and its temperature is $kT_{\rm e}=0.1908_{-0.04}^{+0.0009}$~keV, consistent with the range ($0.1 \sim 1$~keV) reported in \citet{Petrucci2018}. By comparison, the hot corona is characterized by a harder spectral slope ($\Gamma=1.716_{-0.009}^{+0.009}$) and a higher temperature (${kT}_{\rm e}=13.23_{-0.9}^{+0.21}$~keV), almost identical to model~1. The middle panel of Fig.~\ref{contour} shows the constraints on the temperature of the hot corona (${kT}_{\rm e}$) and the reflection fraction for model~2.
\begin{table*}
\centering
\caption{Best-fit values for model~1, model~2 and model~3.} \label{best-fit-1}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{6mm}
\begin{tabular}{lcccc}
\hline\hline
& & Model~1 & Model~2 & Model~3 \\
\hline
Model & Parameter & & & \\
\texttt{tbnew} & $N_{\rm H}$ (10$^{22}$~cm$^{-2}$) & $0.041_{-0.007}^{+0.009}$ & $0.061_{-0.013}^{+0.003}$ & $0.034_{-0.004}^{+0.005}$ \\
\hline
\texttt{bbody} & $kT$ (keV) & $0.143_{-0.011}^{+0.017}$ & - & - \\
\texttt{nthcomp} & $\Gamma$ & - & $2.6_{-0.4}^{+0.4}$ & - \\
& $kT_{\rm e}$ (keV) & - & $0.1908_{-0.04}^{+0.0009}$ & - \\
\texttt{relconv} & $a^*$ & - & - & $0.998^*$ \\
\texttt{xillverDCp} & $\log n_{\rm e}$ & - & - & $18.1_{-1.0}^{+0.4}$ \\
& $\log \xi$ & - & - & $1.0_{-0.7}^{+0.6}$ \\
\hline
\texttt{nthcomp} & $\Gamma$ & $1.716_{-0.009}^{+0.01}$ & $1.716_{-0.009}^{+0.009}$ & $1.753_{-0.02}^{+0.014}$ \\
& $kT_{\rm e}$ (keV) & $13.2_{-1.7}^{+2.5}$ & $13.23_{-0.9}^{+0.21}$ & $13.8_{-1.9}^{+2.5}$\\
\hline
\texttt{xillverCp} & $A_{\rm Fe}$ & $10.0_{-1.5}^{+P}$ & $10.0_{-0.9}^{+P}$ & $8.1_{-3}^{+P}$ \\
& $i$ (deg) & $73^*$ & $73_{-15}^{+7}$ & $72.6_{-5}^{+4}$ \\
& $\log \xi$ & $0.0_{-P}^{+0.5}$ & $0.0_{-P}^{+0.4}$ & $0^*$ \\
\hline
& $C_{\rm FPMA}$ & $1.247_{-0.022}^{+0.023}$ & $1.247_{-0.016}^{+0.016}$ & $1.248_{-0.022}^{+0.022}$ \\
& $C_{\rm FPMB}$ & $1.226_{-0.022}^{+0.023}$ & $1.226_{-0.016}^{+0.016}$ & $1.228_{-0.022}^{+0.022}$ \\
\hline
& $F_{\rm bbody}$ ($\times$ 10$^{-13}$) & $2.6_{-0.9}^{+1.0}$ & - & - \\
& $F_{\rm WC}$ ($\times$ 10$^{-13}$) & - & $4.7_{-1.8}^{+1.3}$ & - \\
& $F_{\rm xillverCp}$ ($\times$ 10$^{-13}$) & $5.9_{-0.8}^{+1.7}$ & $5.9_{-0.6}^{+0.6}$ & $6.2_{-1.5}^{+2.5}$ \\
& $F_{\rm HC}$ ($\times$ 10$^{-11}$) & $1.55_{-0.03}^{+0.06}$ & $1.51_{-0.04}^{+0.05}$ & $1.41_{-0.06}^{+0.06}$ \\
& $F_{\rm RR}$ ($\times$ 10$^{-12}$) & - & - & $1.2_{-0.3}^{+0.3}$ \\
\hline
$\chi^2$/d.o.f & & 646.39/585 & 649.51/584 & 648.11/584 \\
\hline\hline
\end{tabular} \\
\vspace{0.2cm}
\textit{Note.} The fluxes (0.3--80~keV) of each component are presented in units of erg~s$^{-1}$~cm$^{-2}$. $F_{\rm WC}$, $F_{\rm HC}$ and $F_{\rm RR}$ represent the fluxes of the warm corona, hot corona and relativistic reflection components, respectively. $\xi$ is in units of erg~cm~s$^{-1}$; $n_{\rm e}$ is in units of cm$^{-3}$.
\end{table*}
\subsection{Model 3: relativistic reflection model}
The other popular explanation of the soft excess is the relativistic disc reflection model (e.g., \citealt{Crummy2006}; \citealt{Fabian2009}; \citealt{Walton2013}; \citealt{Jiang2018}). In the strong gravitational field of supermassive black holes, the fluorescent features radiated on the inner region of the accretion disc are blurred \citep{Fabian2005}. Moreover, it has recently been shown that the existence of the enhanced inner-disc density, above the commonly assumed value of $n_{\rm e} = 10^{15}$~cm$^{-3}$ (e.g., \citealt{Ross1993}, \citealt{Ross2005}; \citealt{Garcia2011}), results in an increased emission at soft energies ($<$2 keV). This occurs because at high densities free–free heating (bremsstrahlung) in the disc atmosphere becomes dominant and results in an increased gas temperature (\citealt{Garcia2016}; \citealt{Jiang2019c}). As a consequence, relativistic reflection from a highly dense disc can lead to increased low-energy emission, which may account for the soft excess.
To test this explanation, we implement the reflection model \texttt{xillverDCp}\footnote{https://sites.srl.caltech.edu/~javier/xillver/index.html}, a version of \texttt{xillver} that allows for a variable disc density, convolved with \texttt{relconv} \citep{Dauser2013} to model the relativistic reflection component. The bare \texttt{xillverCp}, in which the electron density is fixed at $n_{\rm e} = 10^{15}$~cm$^{-3}$, is reserved for the distant non-relativistic reflection component. For self-consistency, the parameter $F_{\rm ref}$ of both reflection models is fixed to $-1$ to return only the reflected component, and a non-thermal power-law continuum model \texttt{nthcomp} is included to account for the power-law component from the hot corona. In XSPEC language, the model combination is \texttt{tbnew} $\times$ \texttt{zmshift} $\times$ (\texttt{relconv} $\times$ \texttt{xillverDCp} + \texttt{nthcomp} + \texttt{xillverCp}). As in model~2, we fix $kT_{\rm bb} = 10$~eV for \texttt{nthcomp}. The spectral slope $\Gamma$ and electron temperature ${kT}_{\rm e}$ of the hot corona, relativistic disc reflection, and distant reflection components are tied together, and the inclination angles of \texttt{xillverCp} and \texttt{xillverDCp} are linked to the same parameter in \texttt{relconv}. The ionization parameter is set to its minimum ($\xi = 0$~erg~cm~s$^{-1}$) in \texttt{xillverCp} and left free in \texttt{xillverDCp}; likewise, the electron density is set to its minimum ($n_{\rm e} = 10^{15}$~cm$^{-3}$) in \texttt{xillverCp} and left free in \texttt{xillverDCp}. The inner and outer radii of the accretion disc are fixed at their default values, i.e., $R_{\rm in}=R_{\rm ISCO}$ and $R_{\rm out}=400R_{\rm g}$ ($R_{\rm g}=GM/c^2$, the gravitational radius).
We assume that the emissivity profile follows a $q=3$ power law and the spin parameter is fixed at the maximum value $a_*=0.998$ in \texttt{relconv} because of its insensitivity to the fit, and the disc inclination angle varies freely. The iron abundance of \texttt{xillverCp} is linked to that in \texttt{xillverDCp}.
The best-fit values are listed in the fifth column of Tab.~\ref{best-fit-1} and the residuals are shown in the lower panel of Fig.~\ref{ratio_chi}. The relativistic reflection picture provides a slightly better fit than the warm corona model, with $\chi_{\rm red}^{2} \sim 1.110$ and $\Delta \chi^2 = 1.95$. As shown in the fifth column of Tab.~\ref{best-fit-1}, this model gives essentially similar results to those reported for models~1 and~2. With the relativistic reflection model, the iron complex region is modeled better, as shown in the lower right panel of Fig.~\ref{ratio_chi}. As in the previous two models, this model also requires a low temperature for the hot corona. The constraints on the temperature of the hot corona (${kT}_{\rm e}$) and the reflection fraction are presented in the lower panel of Fig.~\ref{contour}. Here, the reflection fraction is the flux ratio between the relativistic reflection component (which dominates in the 0.3$-$80~keV band) and the hot corona component.
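As a quick numerical check, this flux-ratio definition of the reflection fraction can be evaluated directly from the best-fit fluxes in Tab.~\ref{best-fit-1} (a sketch; the result carries the precision of the quoted fluxes):

```python
# Observer's reflection fraction for model 3: observed flux of the
# relativistic reflection component divided by that of the hot corona,
# both in the 0.3-80 keV band (best-fit values from the table).
F_RR = 1.2e-12    # erg s^-1 cm^-2, relativistic reflection component
F_HC = 1.41e-11   # erg s^-1 cm^-2, hot corona component

refl_frac = F_RR / F_HC
print(f"observer's reflection fraction ~ {refl_frac:.3f}")  # ~ 0.085
```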
In the fit of model~3, we model the emissivity profile of the disc with a power law with $q_{\rm in}=q_{\rm out}=3$. If we free $q_{\rm in}$ and $q_{\rm out}$, we find the fit is insensitive to these two parameters, so we do not explore this possibility further.
\section{Discussion}
\label{discussion}
In the previous sections, we presented the spectral properties of the AGN \src, and found that the variability occurs across the entire broadband spectrum while the hardness ratio curve remains constant. With a simple absorbed power-law model, the typical AGN spectral features---a soft excess below $\sim$2~keV, Fe K$\alpha$ emission at $\sim$6.4~keV, and a cutoff at high energy---are revealed. The soft excess can be modeled by a single-temperature blackbody model \texttt{bbody} with a typical temperature $kT_{\rm bb} = 0.143$~keV, and the iron emission can be modeled by a simple Gaussian with centroid energy $E = 6.39$~keV in the rest frame of the object and line width $\sigma = 0.04_{-0.03}^{+0.05}$~keV, based on which we identify it as the Fe K$\alpha$ emission line originating from a distant reflector.
We carry out spectral analyses based on two different hypotheses for the soft excess: the warm corona and the relativistic reflection scenario. In the warm corona fit, the introduction of a soft ($\Gamma = 2.6$), cool ($kT_{\rm e} = 0.1908$~keV) \texttt{nthcomp} component yields good results. In the relativistic reflection fit, the xillver-based model provides satisfactory fits to the spectra. In this section, we discuss the physical implications of these fits and directions for further study.
\subsection{Eddington ratio estimation}
We calculate the Eddington ratio $\lambda_{\rm Edd}$ of \src\ by applying an average bolometric correction factor $\kappa = 20$ \citep{Vasudevan2007} to the unabsorbed 2--10~keV luminosity of $5.56 \times 10^{42}$~erg~s$^{-1}$, adopting a black hole mass of $\sim 4.57 \times 10^8 M_{\odot}$ \citep{Ponti2012}. We obtain $\lambda_{\rm Edd} = \kappa L_{\rm X}/L_{\rm Edd} = 0.002$, consistent with the results reported in \citet{Ghosh2021}, who analyzed the broadband spectra observed by \xmm\ and \textit{Suzaku} and found that the source was accreting at a sub-Eddington rate ($\lambda_{\rm Edd}$ varying within 0.002--0.008) between 2007 and 2012. They also reported a power-law continuum with a photon index varying between $\Gamma=1.7-2.0$, and the presence of a broad Fe emission line at $\sim$6.4~keV in the source spectra with $\sigma=0.08-0.14$~keV. The profile of the Fe emission line thus appears to be related to the flux state of the source. We will conduct a detailed investigation of the variability and spectral properties of different flux states in a forthcoming paper.
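The arithmetic behind this estimate can be reproduced directly; the Eddington-luminosity prefactor $1.26\times10^{38}$~erg~s$^{-1}$ per solar mass is the standard value and is an assumption of this sketch:

```python
# Eddington ratio: lambda_Edd = kappa * L_X / L_Edd,
# with L_Edd ~ 1.26e38 * (M / M_sun) erg/s (standard prefactor).
kappa = 20.0    # average bolometric correction (Vasudevan & Fabian 2007)
L_X = 5.56e42   # unabsorbed 2-10 keV luminosity [erg/s]
M_BH = 4.57e8   # black hole mass [M_sun] (Ponti et al. 2012)

L_Edd = 1.26e38 * M_BH          # Eddington luminosity [erg/s]
lam_Edd = kappa * L_X / L_Edd
print(f"lambda_Edd ~ {lam_Edd:.4f}")  # ~ 0.002
```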
\subsection{Model the background spectra}
As shown in Fig.~\ref{soft_excess_iron}, the hard X-ray spectral shape of \src\ depends on the accuracy of the \nustar\ background modelling. To minimize the influence of background uncertainty, we carefully choose the background region and use the strictest filtering criteria when reducing the data. Moreover, we conduct a detailed analysis of the background spectra and their uncertainty using the toolkit \texttt{nuskybgd} \citep{Wik2014}, which can construct the background spectrum for any region of interest.
As is typical, the background has both intrinsic and extrinsic components. The \texttt{nuskybgd} model consists of four components, which combine to fully describe the background: $$B_{\rm d}(E,x,y) = A_{\rm d}(E,x,y) + f_{\rm d}(E,x,y) + S_{\rm d}(E) + I_{\rm d}(E)$$
$A_{\rm d}(E,x,y)$ describes the stray-light cosmic X-ray background (CXB) through the aperture, labelled ``aCXB''; $f_{\rm d}(E,x,y)$ describes the focused CXB, labelled ``fCXB''; $S_{\rm d}(E)$ describes instrumental line emission and reflected solar X-rays, labelled ``Inst''; $I_{\rm d}(E)$ describes the instrumental Compton-scattered continuum, labelled ``Intn''. Using the ftool included in \texttt{nuskybgd}, we fit the background spectrum with the model above. Based on the best-fit parameters, the background image and the background spectrum for an arbitrary region in the FOV can be produced.
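Schematically, the decomposition is just a sum of component terms, two position-dependent and two energy-only; the component shapes below are placeholders for illustration, not the actual \texttt{nuskybgd} instrument model:

```python
# Schematic of the nuskybgd decomposition:
# B_d(E, x, y) = A_d(E, x, y) + f_d(E, x, y) + S_d(E) + I_d(E).
def total_background(E, x, y, aCXB, fCXB, inst, intn):
    """Sum the four background components at energy E and position (x, y)."""
    return aCXB(E, x, y) + fCXB(E, x, y) + inst(E) + intn(E)

# Toy component shapes (placeholders): note that the instrumental terms
# Inst and Intn depend only on energy E, not on detector position.
B = total_background(
    30.0, 0.0, 0.0,
    aCXB=lambda E, x, y: 1.0 / E,
    fCXB=lambda E, x, y: 0.5 / E,
    inst=lambda E: 0.1,
    intn=lambda E: 0.02 * E,
)
print(f"toy B_d(30 keV) = {B:.2f}")
```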
Our background estimation for \src\ with \nustar\ is as follows. We extract the FPMA and FPMB background spectra for Epochs~1--5 separately, according to the strategy in Sec.~\ref{observations}, and then fit them with the standard \texttt{nuskybgd} model. The spectra of Epoch~1 are shown with all the components of the background model in Fig.~\ref{back}; the background spectra are well fitted by our model. To construct the background spectrum for the region of interest, we use the ftool \texttt{nuskybgd-spec} for the aCXB, fCXB, Inst, and Intn components, estimating the background from the best-fit parameters. We then simulate FPMA and FPMB background spectra for every epoch, with 10 times the exposure time of the original observation; because the background model is complex, the uncertainties would be large for short exposures. The simulated background spectra are merged into an averaged spectrum for FPMA and FPMB separately. To estimate the uncertainty contributed by the background, we replace the background spectra in model~1 with the simulated ones. Fig.~\ref{back_contour} compares the measurements using the two methods: the simulated background yields an almost identical constraint.
As shown in Fig.~\ref{back}, at higher energies ($E > 30$~keV) the internal $I_{\rm d}(E)$ term strongly dominates the background spectra. The remainder of the internal background consists of various activation and fluorescence lines, which are mostly resolved and only dominate the background between 22--32~keV. Above these energies, weaker lines are still present, but the continuum dominates. The internal background depends only on energy $E$, not on pixel location $(x, y)$, so its spatial distribution across a given detector is uniform; its systematic uncertainty could in theory be arbitrarily close to 0\% \citep{Wik2014,Tsuji2019}, or at least is the smallest among the background components. In summary, the \nustar\ observations supply reliable broadband spectra over 3.0--78.0~keV, although the background plays an important role in the high energy band.
\subsection{Physical properties of the warm corona model}
In the fit of model~2, the warm corona model with a hot corona and a neutral distant reflection component describes the observational data well. The corresponding optical depth of the warm corona is $\tau \sim 18-25$, calculated with Eq.~(13) of \citet{Beloborodov1999}, $\Gamma \simeq \frac{9}{4} y^{-\frac{2}{9}}$, where $y =4[kT_{e}/m_{e}c^2+(kT_{e}/m_{e}c^2)^2]\tau(\tau + 1)$ is the so-called Compton parameter. \citet{Petrucci2018} tested the warm corona model on a statistically significant sample of unabsorbed, radio-quiet AGNs with \xmm\ archival data and found the temperature of the warm corona to be uniformly distributed in the 0.1--1~keV range, with the optical depth in the range $\sim$10--40. The observational characteristics of our warm corona (a photon index of $\sim$2.5 and a temperature in the 0.1--2~keV range) are within the predictions of \citet{Petrucci2018}, and agree with an extended, mainly non-dissipative warm corona covering the disc (\citealt{Petrucci2018}, later corrected in \citealt{Petrucci2020}).
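As a sketch of this estimate, the \citet{Beloborodov1999} relation $\Gamma \simeq \tfrac{9}{4}\, y^{-2/9}$ can be inverted for $\tau$ given the best-fit warm-corona parameters; note the prefactor $9/4$ is the one that reproduces $\tau\sim18$ for $\Gamma=2.6$:

```python
# Invert Gamma ~ (9/4) * y^(-2/9) for the Compton y-parameter, then
# solve y = 4*(theta + theta^2)*tau*(tau + 1) for the optical depth tau.
gamma = 2.6      # warm-corona photon index (model 2 best fit)
kTe = 0.1908     # warm-corona electron temperature [keV]
me_c2 = 511.0    # electron rest energy [keV]

theta = kTe / me_c2
y = (gamma / 2.25) ** (-9.0 / 2.0)          # Compton y-parameter
c = y / (4.0 * (theta + theta**2))          # = tau * (tau + 1)
tau = 0.5 * (-1.0 + (1.0 + 4.0 * c) ** 0.5) # positive quadratic root
print(f"y ~ {y:.2f}, tau ~ {tau:.1f}")      # tau ~ 18
```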
With the warm corona scenario, we obtain a slightly harder continuum slope, $\Gamma= 1.716$, compared with model~3. Similar results were also found in \citet{Garcia2018} and \citet{Xu2021}. This means that the warm corona scenario requires a harder continuum in the absence of the high-energy photons contributed by inner-disc reflection. In our fits, the difference in photon index between the warm corona and relativistic reflection models is $\Delta \Gamma \sim 0.04$. For a given AGN X-ray spectrum with a soft excess, the lack of any disc reflection component in the pure warm corona model is likely to yield a harder continuum than the relativistic reflection explanation.
However, \citet{Gronkiewicz2020} computed the transition from disc to corona using a vertical model of a disc supported and heated by the magnetic field, together with radiative transfer in hydrostatic and radiative equilibrium. They concluded that the radial extent of the warm corona is limited by local thermal instability, and that such a corona is stronger for higher accretion rates and greater magnetic field strengths. Thermal instability should thus prevent a warm corona from forming in low accretion rate systems, which are unable to provide enough energy to sustain it \citep{Ballantyne2020}. It is therefore unclear whether a strong warm corona can be sustained at the low accretion rates relevant here ($\dot{m}\sim0.002$). This may imply that even if a warm corona is present, a contribution from disc reflection would be necessary to produce the observed strong soft excess.
\subsection{Physical properties of the relativistic disc reflection}
The high-density disc reflection model proposed in \citet{Garcia2016} is based on an extended model of the standard accretion disc. \citet{Garcia2016} demonstrated that if the disc density is higher than the typically fixed value $n_{\rm e} = 10^{15}$~cm$^{-3}$, the main effect is an enhancement of the reflected continuum at low energies, boosting the soft excess. We note that the relativistic reflection model produces a similar statistical result to the warm corona model for \src, with consistent key parameters.
The relativistic reflection explanation requires a dense accretion disc with $\log [n_{\rm e}/$cm$^{-3}]=18.1_{-1.0}^{+0.4}$. This result is consistent with previous findings that a gas density larger than the previously adopted value of $\log [n_{\rm e}/$cm$^{-3}]=15$ is usually required for SMBHs with $\log [m_{\rm BH}/M_{\odot}] \le 8$, such as Ark~564 \citep{Jiang2019c} and ESO~362$-$G18 \citep{Xu2021}. Another factor that affects the expected disc density is the accretion rate. \citet{Svensson1994} derived a relationship between the density of a radiation-pressure-dominated disc and the accretion rate, based on the standard thin disc model \citep{Shakura1973}. From the correlation $\log[n_{\rm e}] \propto -2\log[\dot{m}]$, they concluded that a lower accretion rate leads to a higher gas density. Similar conclusions were found in disc reflection modelling of black hole (BH) X-ray binaries \citep{Jiang2019b} and other AGNs with high BH masses \citep[e.g. Mrk~509,][]{Garcia2018}.
Another characteristic is the low ionization parameter of the relativistic reflection component, which indicates that the degree of ionization of the accretion disc is relatively low. \citet{Ballantyne2011} reported a positive statistical correlation between $\xi$ and the AGN Eddington ratio $\dot{m}$ based on simple $\alpha$-disc theory. Plugging the Eddington ratio $\dot{m}=0.002$ into Formula~(1) of \citet{Ballantyne2011}, we obtain an estimate of the ionization parameter of $\log\xi \sim 0.5$. This value is smaller than our fitting results, but consistent within the errors. The physical interpretation is that the accretion rate affects the fraction of the accretion energy dissipated in the corona (e.g., \citealt{Svensson1994}; \citealt{Merloni2002}; \citealt{Blackman2009}), which emits X-ray photons that photoionize the inner disc surface. All models show a low reflection fraction, which recalls the case of an outflowing corona (\citealt{Beloborodov1999b}; \citealt{Malzac2001}); this can be tested by future missions.
We also explore the possibility of measuring the black hole spin with this model, but the spin parameter cannot be constrained in \texttt{relconv}: with one more free parameter, the fit improves only slightly ($\Delta \chi^2= 0.95$) and we obtain only a lower limit ($a_*>-0.58$).
\subsection{Low-temperature corona in sub-Eddington accretors}
The most striking discovery is the relatively low temperature of the hot corona, which is confirmed by all broadband models, as shown in Fig.~\ref{contour}. To search for any long-timescale variability of the coronal temperature, we fit the \textit{Swift} 70-month BAT spectrum \citep{Baumgartner2013} together with the \xmm\ and \nustar\ spectra. The ratio plot, fitted with model~1, is shown in Fig.~\ref{swift_ratio}. The \textit{Swift} BAT spectrum shows a shape consistent with the \nustar\ FPM spectra above 30~keV, and we obtain almost identical fitting results. The cross-calibration constant for BAT is 3.64.
\citet{Fabian2015} compiled a sample of all the high-energy cutoffs observed with \nustar\ and placed these sources on the compactness--temperature ($\ell$--$\Theta$) plane, where $\Theta = kT_{\rm e}/m_{\rm e}c^{2}$ is the coronal electron temperature normalized by the electron rest energy and $\ell = (L/R)(\sigma_{\rm T}/m_{\rm e}c^{3})$ is the dimensionless compactness parameter \citep{Guilbert1983}. \citet{Fabian2015} defined $L$ as the luminosity of the power-law component from 0.1--200~keV and $R$ as the radius of the corona (assumed spherical).
The allowed parameter space in the $\ell$--$\Theta$ plane is limited by theoretical constraints, such as the pair balance line estimated by \citet{Svensson1984}. As more power is fed into the corona, the electron temperature $\Theta = kT_{\rm e}/m_{\rm e}c^{2}$ rises, and Compton scattering of the soft photons produces a power-law radiation spectrum extending to a Wien tail at energies around $2\Theta$. When the tail extends above $\sim2m_{\rm e}c^{2}$, photon--photon collisions create electron--positron pairs; any additional energy input then increases the number of pairs rather than the source temperature. \citet{Fabian2015} analyzed all sources observed up to that point by \nustar\ and found that nearly all lay just below the pair-production limit for thermal Comptonization, suggesting that these coronae are pair-dominated plasmas.
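The quantities entering this argument are straightforward to evaluate; for the model~3 best-fit temperature, the Wien tail (at $\sim 2\Theta$ in units of $m_{\rm e}c^{2}$, i.e. $\sim 2kT_{\rm e}$) sits far below the $2m_{\rm e}c^{2}$ pair-production threshold:

```python
# Dimensionless coronal temperature (Fabian et al. 2015 convention)
# and the corresponding Wien-tail energy ~ 2 * Theta * me_c2 = 2 * kTe.
me_c2 = 511.0   # electron rest energy [keV]
kTe = 13.8      # best-fit hot-corona temperature, model 3 [keV]

Theta = kTe / me_c2
wien_tail = 2.0 * Theta * me_c2   # [keV]; pair production needs ~ 2 * me_c2
print(f"Theta ~ {Theta:.3f}, Wien tail ~ {wien_tail:.1f} keV")
```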
To compare our results for \src\ with those of \citet{Fabian2015}, we conduct the same analysis with the results of model~3, which gives an electron temperature of 13.8~keV ($\Theta=0.027$). Following \citet{Fabian2015}, we measure the 0.1--200~keV Comptonization component to be $1.95\times10^{43}$~erg~s$^{-1}$. For simplicity, we assume $R = 10R_{\rm g}$, a conservative assumption given the measurements. This leads to a compactness of $\ell=4.6$. \src\ lies well below the thermal pair-production limit and above the $e^{-}$--$p$ coupling line (Fig.~2 of \citealt{Fabian2015}); thus, assuming a thermal Comptonization model, we find that the corona in this source is a pair-dominated plasma. However, \citet{Fabian2017} re-examined the case of hybrid coronae \citep{Zdziarski1993}, in which the plasma contains both thermal and non-thermal particles, as might be expected for a highly magnetized corona powered by the dissipation of magnetic energy. Locating \src\ in Fig.~6 of \citet{Fabian2017}, we find its corona is consistent with a hybrid plasma with a large fraction of electrons following a non-thermal distribution. Deeper hard X-ray observations are required to distinguish these two scenarios.
A low coronal temperature is seen in only a handful of AGNs, such as 1H0419$-$577 (${kT}_{\rm e}=30^{+22}_{-7}$~keV; \citealt{Jiang2019a}), Ark~564 (${kT}_{\rm e}=15\pm2$~keV; \citealt{Kara2017}), GRS~1734$-$292 (${kT}_{\rm e}=11.9^{+1.2}_{-0.9}$~keV; \citealt{Tortosa2017}), IRAS~13197$-$1627 (${kT}_{\rm e}<42$~keV; \citealt{Walton2018}), 4C~50.55 (${kT}_{\rm e}\approx30$~keV; \citealt{Tazaki2010}), IRAS~04416$+$1215 (${kT}_{\rm e}=3 \sim 20$~keV; \citealt{Tortosa2022}) and 3C~273 (${kT}_{\rm e}=12\pm1$~keV; \citealt{Madsen2015b}). Note that except for the Seyfert 1 galaxy GRS~1734$-$292 ($\lambda_{\rm Edd}\sim0.03$) and the Seyfert 1.8 galaxy IRAS~13197$-$1627 ($\lambda_{\rm Edd}\sim0.05$$-$$0.1$), all the other sources mentioned above are accreting at a significant fraction of Eddington. In this work, we find another source with a low-temperature corona in a significantly sub-Eddington regime.
\section{Conclusions}
\label{conclusion}
We investigated the variability and spectral properties of the Seyfert 1 galaxy \src\ using the joint \xmm\ and \nustar\ observing campaign. Lightcurve and spectral analyses of the simultaneous \xmm\ and \nustar\ observations show that:
\begin{enumerate}
\item The source remained in a relatively constant flux state throughout the observation period.
\item The broadband (0.3–78 keV) spectrum shows the presence of a power-law continuum with a soft excess below 2 keV, a relatively narrow iron K$\alpha$ emission ($\sim$6.4 keV), and an obvious cutoff at high energies.
\item The soft excess can be modeled by two different possible scenarios: a warm corona, or a relativistically blurred reflection. Based on recent simulations of warm coronae, it is not clear whether such a warm corona structure can really exist at the low accretion rates relevant for \src. This may therefore argue in favor of a scenario in which the soft excess is instead dominated by the relativistic reflection.
\item A low temperature ($kT_{\rm e} \sim$ 13 keV) of the hot corona is required by all models.
\end{enumerate}
\section*{Acknowledgements}
This work was supported by the National Natural Science Foundation of China (NSFC), Grant No. 11973019, the Natural Science Foundation of Shanghai, Grant No. 22ZR1403400, the Shanghai Municipal Education Commission, Grant No. 2019-01-07-00-07-E00035, and Fudan University, Grant No. JIH1512604. J.J. acknowledges the support from the Leverhulme Trust, the Isaac Newton Trust and St Edmund's College, University of Cambridge.
\bibliography{ESO511}{}
\bibliographystyle{aasjournal}
Title:
Revised Temperatures For Two Benchmark M-dwarfs -- Outliers No More

Abstract: Well-characterised M-dwarfs are rare, particularly with respect to effective
temperature. In this letter we re-analyse two benchmark M-dwarfs in eclipsing
binaries from Kepler/K2: KIC 1571511AB and HD 24465AB. Both have temperatures
reported to be hotter or colder by approximately 1000 K in comparison with both
models and the majority of the literature. By modelling the secondary eclipses
with both the original data and new data from TESS we derive significantly
different temperatures which are not outliers. Removing this discrepancy allows
these M-dwarfs to be truly benchmarks. Our work also provides relief to stellar
modellers. We encourage more measurements of M-dwarf effective temperatures
with robust methods.
| https://export.arxiv.org/pdf/2208.10510 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
stars: binaries-eclipsing, low-mass, fundamental parameters; techniques: photometric, spectroscopic
\end{keywords}
\vspace{0.3cm}
\section{Introduction}\label{sec:introduction}
There is a lack of precisely-characterised M-dwarfs in the literature. This inhibits our ability to constrain models of stellar structure for low-mass stars. Exoplanet studies are also hampered, because our knowledge of the planets is limited by our knowledge of the host star. In the era of TESS and JWST, where M-dwarfs are popular targets, this is particularly problematic. In addition to poor statistics, there exist discrepancies between observations and theory. The most thoroughly studied is the so-called ``radius inflation'' problem, where M-dwarfs have often been observed with radii a few per cent larger at a given mass than expected from theoretical models \citep{Chabrier2000,Torres2014}. In this paper we tackle a different yet just as fundamental property: effective temperature. As with the mass--radius relationship, we expect M-dwarfs to follow a mass--temperature relationship, with more massive stars expected to be hotter. So far there is broad consistency between observations and theory, but any outliers must be rigorously studied.
Eclipsing binaries remain the most robust avenue for precise M-dwarf characterisation (e.g. \citealt{Triaud2017,vonBoetticher2019}). We can measure M-dwarf temperatures if we can observe the occultation of the M-dwarf by the companion star. In our study the M-dwarf is the smaller and cooler star in the binary, so its occultation is referred to as the secondary eclipse. An M-dwarf in the G + M eclipsing binary EBLM J0113+31 was found to have an effective temperature of $3922\pm42$ K \citep{MaqueoChew2014}, roughly 600 K hotter than expected for a $0.186M_\odot$ star. This irregularity was later shown to be erroneous by \citet{Swayne2020}, who calculated $T_{\rm eff,B}=3208\pm 40$ K, in line with expectations. The difference between the two studies was that \citet{Swayne2020} used TESS space-based photometry, whereas \citet{MaqueoChew2014} only had ground-based photometry. It was suggested that systematic errors in the J-band photometry caused the discrepancy. A third analysis, \citet{Maxted2022}, added CHEOPS photometry and near-infrared SPIRou radial velocities. They derived a slightly higher temperature than \citet{Swayne2020} of $3375\pm40$ K. However, this is consistent with their heavier mass measurement of $0.197M_\odot$.
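The sensitivity of the secondary eclipse to the M-dwarf temperature can be sketched with a monochromatic blackbody surface-brightness ratio; real analyses integrate over the instrument bandpass, and all numbers below (wavelength, primary temperature, radius ratio) are illustrative assumptions, not fitted values:

```python
import math

# Sketch: secondary-eclipse depth ~ (R_B/R_A)^2 * B_lambda(T_B)/B_lambda(T_A),
# evaluated at a single wavelength. A ~900 K change in T_B changes the
# depth by a factor of a few, which is why eclipse photometry constrains
# the M-dwarf temperature so strongly.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(lam, T):
    """Blackbody spectral radiance B_lambda(T)."""
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

lam = 800e-9      # roughly the TESS band centre [m] (illustrative)
T_A = 5700.0      # illustrative primary temperature [K]
R_ratio = 0.17    # illustrative radius ratio R_B/R_A

depths = {}
for T_B in (3200.0, 4100.0):
    depths[T_B] = R_ratio**2 * planck(lam, T_B) / planck(lam, T_A)
    print(f"T_B = {T_B:.0f} K -> eclipse depth ~ {depths[T_B]*1e6:.0f} ppm")
```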
In this paper we study two other benchmark M-dwarfs with outlier temperatures: KIC 1571511B \citep{Ofir2012} and HD 24465B \citep{Chaturvedi2018}. The outlier nature of these targets is demonstrated in Fig.~\ref{fig:literature}. KIC 1571511B has $M_{\rm B}=0.14136M_\odot$ and $T_{\rm eff,B}=4030 - 4150$ K, which is $\sim1000$ K {\it hotter} than expected from models and the bulk of the literature. Conversely, HD 24465B has $M_{\rm B}=0.233M_\odot$ and $T_{\rm eff,B}=2335.6\pm8.6$ K, which makes it $\sim 800$ K {\it colder} than expected. This temperature is also suspiciously precise compared with the rest of the literature. Both KIC 1571511AB and HD 24465AB were first analysed using space-based photometry (Kepler and K2, respectively). They both have clearly visible secondary eclipses. We re-analyse both the existing data and new TESS data, using methods applied in several existing studies \citep{Gill2019,Swayne2020,Swayne2021}. We demonstrate that, as was the case with EBLM J0113+31, the originally published temperatures are erroneous. We derive M-dwarf temperatures in line with theoretical models and the rest of the literature.
\section{Targets and Observations}\label{sec:targets_observations}
Observational and stellar properties are cataloged in Table~\ref{tab:target_table}. The photometric and spectroscopic data are shown in Fig.~\ref{fig:data}.
\subsection{KIC 1571511AB}
This is a 14.0-day eclipsing binary containing $1.265M_\odot$ and $0.141M_\odot$ stars, discovered using data from the original Kepler mission \citep{Ofir2012}. KIC 1571511B is considered a ``benchmark'' M-dwarf, which we define as having mass and radius errors of less than 5\%. For KIC 1571511B: $\delta M_{\rm B}/M_{\rm B}=3.18\%$ and $\delta R_{\rm B}/R_{\rm B}=0.78\%$. The secondary star mass comes from six RV measurements from the FIbre-fed Echelle Spectrograph (FIES) on the Nordic Optical Telescope (NOT). \citet{Ofir2012} derive a temperature of $T_{\rm eff,B}=4030 - 4150$ K, which is roughly 1000 K hotter than expected. KIC 1571511AB has since been observed by the TESS space telescope. Unfortunately, the faintness of the target (Tmag = 12.95) means that we can see primary but not secondary eclipses, so we do not use these data.
\subsection{HD 24465AB}
This target comes from the \citet{Chaturvedi2018} study of four eclipsing binaries containing M-dwarfs. It is the only one which can be truly considered a benchmark M-dwarf, owing to precise K2 photometry. HD 24465AB is a 7.20-day binary consisting of $1.337M_\odot$ and $0.233M_\odot$ stars, where the M-dwarf is constrained to a precision of $\delta M_{\rm B}/M_{\rm B}=0.86\%$ and $\delta R_{\rm B}/R_{\rm B}=0.4\%$, although we suspect that these errors do not properly account for the modelling uncertainties in the primary star's parameters (Duck et al. under rev.). The mass is derived from 14 radial velocities taken with the PARAS (PRL Advanced Radial-velocity Abu-sky Search) spectrograph on the 1.2-m telescope at Gurushikhar, Mount Abu, India. HD 24465AB was observed by TESS in sectors 42 and 43, both in short cadence (120s). Unlike for KIC 1571511AB, these data are sensitive to the secondary eclipse because this is a much brighter target (Tmag = 8.50). This provides an opportunity to measure the secondary eclipse depth in two different passbands, since TESS has a significantly redder sensitivity than Kepler (Fig.~\ref{fig:bandpass}).
\renewcommand{\arraystretch}{1.3}
\begin{table}
\caption{Target information. Primary star parameters are taken from the original papers.} %
\label{tab:target_table} %
\centering %
\begin{tabular}{lll }
\hline\hline %
Name
& KIC 1571511AB
& HD 24465AB \\
\hline
TIC
& 122680701
& 242937935 \\
$\alpha$
& $19^{\rm h}23^{'}59.256^{"}$
& $03^{\rm h}54^{'}03.371^{"}$ \\
& $290.9969^{\circ}$
& $58.5140^{\circ}$ \\
$\delta$
& $+37^{\circ}11' 57.18^{"}$
& $+15^{\circ}08' 30.19^{"}$\\
& $+37.1992^{\circ}$
& $+15.1417^{\circ}$ \\
Original Paper
& \citet{Ofir2012}
& \citet{Chaturvedi2018} \\
$M_{\rm A}$ ($M_\odot$)
& $1.265^{+0.036}_{-0.030}$
& $1.337\pm0.008$\\
$R_{\rm A}$ ($R_\odot$)
& $1.343^{+0.012}_{-0.010}$
& $1.444\pm0.004$\\
$T_{\rm eff, A}$ (K)
& $6195\pm50$
& $6250\pm100$ \\
${\rm [Fe/H]}$
& $0.37\pm0.08$
& $0.30\pm0.15$ \\
\hline
\end{tabular}
\end{table}
\section{Methods}\label{sec:methods}
\subsection{Lightcurve Processing}\label{subsec:method_lightcurve}
For the Kepler, K2 and TESS data we use the \textsc{lightkurve} software package \citep{lightkurve2018} to download the data. For KIC 1571511AB we use the Kepler PDCSAP flux. For HD 24465AB we use the EVEREST flux \citep{Luger2016} for K2 and the PDCSAP flux for TESS. We flatten all three lightcurves using the \textsc{Wotan} detrending software \citep{Hippke2019}. We apply Tukey's biweight filter with a 1 day window length, such that the eclipse depths are not affected. For HD 24465AB we manually removed the first few days of K2 data ($T-2,455,000 < 2065$). The original light curves and the fitted trends are shown in Fig~\ref{fig:data}.
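The detrending step can be sketched with a minimal re-implementation of the sliding-window biweight filter that \textsc{Wotan} provides; the synthetic trend, eclipse depth and cadence below are illustrative values only, not fitted quantities from this paper:

```python
import numpy as np

def biweight_location(x, c=6.0):
    # Tukey's biweight: a robust location estimate that downweights outliers
    med = np.median(x)
    mad = np.median(np.abs(x - med)) + 1e-30
    u = (x - med) / (c * mad)
    w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)
    s = np.sum(w)
    return med if s == 0 else med + np.sum(w * (x - med)) / s

def flatten(time, flux, window_length=1.0):
    # Divide out a sliding-window biweight trend (window in days)
    trend = np.array([
        biweight_location(flux[np.abs(time - t) < window_length / 2])
        for t in time
    ])
    return flux / trend, trend

# Hypothetical light curve: slow instrumental trend plus a 0.1 d, 2000 ppm eclipse
t = np.arange(0.0, 10.0, 0.01)
trend = 1.0 + 0.01 * np.sin(2 * np.pi * t / 8.0)
flux = trend * np.where(np.abs(t - 5.0) < 0.05, 0.998, 1.0)
flat, fitted_trend = flatten(t, flux)
```

Because the biweight downweights points far from the local median, a window much longer than the eclipse removes the trend while leaving the eclipse depth essentially intact.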
\subsection{Exoplanet Fit}\label{subsec:methods_exoplanet}
We use the \textsc{exoplanet} software \citep{foremanmackey2021} to create joint photometry and radial-velocity fits. The fitted light curve, with primary and secondary eclipses, is calculated using \textsc{starry} \citep{Luger2019}, with quadratic limb darkening parameters calculated using \citet{Kipping2013}. After first estimating the maximum a posteriori parameters, we derive a posterior distribution and $1\sigma$ errorbars using PyMC3.
To convert from direct observables (e.g. the radial velocity semi-amplitude $K$ and the eclipse depths) to physical parameters ($M_{\rm B}$ and $R_{\rm B}$) we use the primary star mass and radius from the discovery papers (Table~\ref{tab:target_table}). These are implemented as fixed values in all of the \textsc{exoplanet} fits. The errors in the primary star parameters are propagated to the errors in the secondary star mass and radius. We do not perform a complete re-fit of all of the stellar and orbital parameters because we want to fix as many parameters as possible; this makes it easier to identify the source of the surprisingly hot/cold temperatures previously published for the two targets.
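For illustration, fixing $M_{\rm A}$ lets one recover $M_{\rm B}$ from the spectroscopic mass function alone. The sketch below assumes an edge-on orbit ($\sin i \approx 1$, a good approximation for an eclipsing system) and our KIC 1571511AB values of $P$, $K$ and $e$; the actual fits solve the full orbital geometry jointly with the photometry, so the numbers differ slightly:

```python
import numpy as np
from scipy.optimize import brentq

# Spectroscopic mass function, assuming sin(i) ~ 1:
#   f = P K^3 (1 - e^2)^(3/2) / (2 pi G) = M_B^3 / (M_A + M_B)^2
G, M_sun = 6.674e-11, 1.989e30

P = 14.0226 * 86400.0   # orbital period [s]
K = 10.515e3            # primary RV semi-amplitude [m/s]
e = 0.3661              # eccentricity
M_A = 1.265 * M_sun     # fixed primary mass [kg]

f = P * K**3 * (1 - e**2) ** 1.5 / (2 * np.pi * G)   # mass function [kg]
M_B = brentq(lambda m: m**3 / (M_A + m) ** 2 - f, 0.01 * M_sun, M_sun)
print(f"M_B = {M_B / M_sun:.3f} Msun")
```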
For HD 24465AB we do separate fits for the K2 and TESS data because the secondary eclipse depth will change in different passbands (Fig.~\ref{fig:bandpass}) and we seek two individual temperature measurements.
We note one issue with the radial velocity fits to HD 24465AB. We were unable to exactly replicate the fit of \citet{Chaturvedi2018} with their published data. In particular, our values for $K$ differ by $\approx 700$ m/s. In their Fig. 2 there are essentially no residuals to the RV fit, whereas our best \textsc{exoplanet} fit leaves residuals of hundreds of m/s. We attempted an RV-only fit with the genetic algorithm \textsc{yorbit} \citep{Segransan2011}, but obtained the same fit as with \textsc{exoplanet}. Ultimately, our derived value for $M_{\rm B}$ is consistent with theirs, so for the purposes of this paper, which explores the $T_{\rm eff}$ vs $M$ relationship, our fit is sufficient. We had no such issues with the radial velocity fits of KIC 1571511AB.
\subsection{M-dwarf Effective Temperature Derivation}\label{subsec:methods_temperature}
The secondary eclipse depth is related to the brightness ratio of the two stars by
\begin{align}
\label{eq:sec_eclipse_depth}
D_{\rm sec} &= k^2S + A_{\rm g} \left(\frac{R_{\rm B}}{a}\right)^2,
\end{align}
where $k$ is the radius ratio, $S$ is the surface brightness ratio and $A_{\rm g}$ is the geometric albedo \citep{Charbonneau2005,Canas2022}. The $k^2S$ term is the contribution from the intrinsic brightness of the M-dwarf. The $A_{\rm g}(R_{\rm B}/a)^2$ term is light from the primary star reflected off the M-dwarf. Owing to the relatively wide separation of the binaries and a typical albedo of $A_{\rm g}=0.1$ \citep{Marley1999,Canas2022}, the reflection effect can be considered negligible. For example, for KIC 1571511 the reflection effect is $\approx4$ ppm, relative to a $\approx200$ ppm secondary eclipse.
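This estimate is easily checked: Kepler's third law gives the semi-major axis from the period and masses, and the reflection term follows. A rough check in SI units, with the assumed $A_{\rm g}=0.1$ and the KIC 1571511AB parameters quoted in this paper:

```python
import numpy as np

G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8

P = 14.0226 * 86400.0            # orbital period [s]
M_tot = (1.265 + 0.144) * M_sun  # total system mass [kg]
a = (G * M_tot * P**2 / (4 * np.pi**2)) ** (1.0 / 3.0)  # Kepler's third law

R_B = 0.177 * R_sun              # M-dwarf radius [m]
reflection_ppm = 0.1 * (R_B / a) ** 2 * 1e6
print(f"a = {a / 1.496e11:.3f} au, reflection term = {reflection_ppm:.1f} ppm")
```

This lands at a few ppm, roughly fifty times smaller than the measured secondary eclipse, justifying neglect of the reflected-light term.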
To derive the secondary star's effective temperature we first calculate $S$ from Eq.~\ref{eq:sec_eclipse_depth} and then solve for $T_{\rm eff,B}$ in
\begin{align}
\label{eq:integral}
S= \frac{\int \tau(\lambda)F_{\rm B,\nu}(\lambda,T_{\rm eff,B},\log{g_{\rm B}})\lambda d\lambda}{\int \tau(\lambda)F_{\rm A,\nu}(\lambda,T_{\rm eff,A},\log{g_{\rm A}})\lambda d\lambda},
\end{align}
where $\tau$ is the instrumental transmission function for Kepler/K2/TESS\footnote{Kepler and K2 are different missions but the same telescope, and hence the same transmission function. Both Kepler and TESS transmission functions can be downloaded here: \url{http://svo2.cab.inta-csic.es/svo/theory/fps3/index.php?}} and $F$ is the flux of each star as a function of wavelength $\lambda$, effective temperature and surface gravity.
The factor of $\lambda$ inside each integral is the same correction as made in Duck et al. (under rev.), based on \citet{Bessell2012}. The transmission functions are set up for photon-counting CCDs, so we include a factor of $\lambda/(hc)$ to calculate the instrumental flux rather than the photon count. The constant $hc$ appears in both integrals and hence cancels.
We calculate $F$ using an interpolated grid of PHOENIX stellar spectra models \citep{husser2013}. In Eq.~\ref{eq:integral} we convolve these theoretical spectra with the instrument's bandpass to predict the star's observed brightness. This is done for both stars. For the primary star we take the published value of $T_{\rm eff,A}$ and $\log{g_{\rm A}}$. For the secondary star we take our fitted value of $\log{g_{\rm B}}$, use the literature value for [Fe/H] and test a grid of $T_{\rm eff,B}$ between 2500 and 4000 K. We then solve Eq.~\ref{eq:integral} for $T_{\rm eff}$. The error bar on $T_{\rm eff,B}$ comes from applying Eq.~\ref{eq:integral} with the $1\sigma$ errors on $S$, [Fe/H], $T_{\rm eff,A}$, $\log{g_{\rm A}}$ and $\log{g_{\rm B}}$.
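As an illustration of this inversion, the sketch below replaces the PHOENIX grid with blackbody spectra and a flat 420--900 nm toy bandpass (two simplifications we examine later when comparing with \citet{Ofir2012}), and solves Eq.~\ref{eq:integral} for the $T_{\rm eff,B}$ that reproduces the measured $S$ of KIC 1571511B:

```python
import numpy as np
from scipy.optimize import brentq

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23

def trapz(y, x):
    # simple trapezoidal rule (avoids numpy version differences)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def planck(lam, T):
    # Planck spectral radiance B_lambda(T)
    return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k_B * T))

lam = np.linspace(420e-9, 900e-9, 2000)  # toy uniform bandpass [m]
tau = np.ones_like(lam)

def S_model(T_B, T_A=6195.0):
    # band-integrated flux ratio; each integrand carries the extra
    # photon-counting factor of lambda, as in Eq. (2)
    return trapz(tau * planck(lam, T_B) * lam, lam) / \
           trapz(tau * planck(lam, T_A) * lam, lam)

S_obs = 0.01176                           # measured S for KIC 1571511B
T_B = brentq(lambda T: S_model(T) - S_obs, 2000.0, 5000.0)
print(f"T_eff,B ~ {T_B:.0f} K")
```

Even this crude model lands near 2700 K, far from the 4030--4150 K of \citet{Ofir2012}; the PHOENIX spectra and the real bandpass shift the answer by of order 100 K, not 1000 K.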
In Fig.~\ref{fig:bandpass} (right) we demonstrate how the secondary eclipse depth changes as a function of $T_{\rm eff,B}$, for different bandpasses (Kepler and TESS) and different host star masses ($1.0M_\odot$ and $1.2M_\odot$). We see that detecting secondary eclipses for late M-dwarfs (<3000 K) becomes very challenging.
\section{Results and Discussion}\label{section:results_discussion}
We derive effective temperatures that are significantly different to the \citet{Ofir2012,Chaturvedi2018} results, but in line with expectations from both models and the rest of the literature. This difference is highlighted in Fig.~\ref{fig:literature}. All of the results are provided in Table~\ref{tab:params}. In Fig.~\ref{fig:fits} we show zoomed fits to the primary and secondary eclipses for both targets.
For HD 24465AB our TESS and K2 temperatures are slightly discrepant at the $\approx2\sigma$ level. This may be an artefact of our handling of dilution or light curve detrending. Our fractional uncertainty on $D_{\rm sec}$ for HD 24465AB is 0.4\% for K2, compared with 1.7\% for TESS. The higher precision of K2 more than compensates for the deeper secondary eclipse in TESS's redder bandpass (Fig.~\ref{fig:bandpass}).
The difference between our K2 and TESS values of $T_{\rm eff,B}$ is very small relative to the difference from the \citet{Chaturvedi2018} value. Our fitted parameters for the M-dwarf mass and radius, as well as the binary orbital parameters, largely match the discovery papers. This suggests consistency in the fitting of the radial velocities and the primary eclipse, at least.
Why was the \citet{Ofir2012} temperature for KIC 1571511B roughly $1000$ K {\it too hot}, and the \citet{Chaturvedi2018} result for HD 24465B roughly $800$ K {\it too cold}?
\begin{table*}
\caption{Fitted parameters from both this paper and the literature.}
\begin{tabular}{l|ll|lll}
\hline
Target & \multicolumn{2}{c|}{KIC 1571511AB} & \multicolumn{3}{c}{HD 24465AB} \\\hline\hline
Author & This Paper & \citet{Ofir2012} & This Paper & This Paper & \citet{Chaturvedi2018} \\
Instrument & Kepler & Kepler & TESS & K2 & K2 \\
\hline
$M_{\rm B}$ ($M_\odot$) & $0.1441\pm0.0025$ & $0.1414^{+0.0051}_{-0.0042}$ & $0.23077\pm0.00092$ & $0.23020\pm0.00092$ & $0.233\pm0.002$ \\
$R_{\rm B}$ ($R_\odot$) & $0.1770\pm0.0014$ & $0.1783^{+0.0013}_{-0.0016}$ & $0.2475\pm0.00069$ & $0.24235\pm0.00069$ & $0.244\pm0.001$ \\
$P$ (days) & $14.022640\pm0.000000052$ & $14.02248^{+0.000023}_{-0.000021}$ & $7.196365\pm0.000002$ & $7.19644\pm0.000002$ & $7.19635\pm0.00002$ \\
$e$ & $0.3661\pm0.0015$ & $0.3269 \pm 0.0027$ & $0.20792\pm0.00016$ & $0.20948\pm0.00010$ & $0.208\pm0.002$ \\
$K$ (km/s) & $10.515\pm0.037$ & $10.521\pm0.024$ & $19.227\pm0.006$ & $19.307\pm0.006$ & $18.629\pm0.053$ \\
$b_{\rm pri}$ & $0.3737\pm0.0052$ & $0.383^{+0.040}_{-0.049}$ & $0.8584\pm0.0042$ & $0.84161\pm0.00067$ & *$0.83926$ \\
$k$ & $0.13180\pm0.00014$ & $0.13277^{+0.00038}_{-0.00046}$ & $0.1714\pm0.0015$ & $0.16783\pm0.00015$ & *$0.169$ \\
$D_{\rm sec}$ (normalised flux) & $0.0002043\pm0.0000059$ & $0.000275\pm0.000019$ & $0.001340\pm0.000023$ & $0.0005812\pm0.0000023$ & 0.000018 \\
$D_{\rm sec}$ (ppm) & $204.3\pm5.9$ & $275\pm19$ & $1340\pm23$ & $581.2\pm2.3$ & 18 \\
$S$ & $0.01176\pm0.00034$ & *0.01560 & $0.04476\pm0.00098$ & $0.020633\pm0.000073$ & *0.00063 \\
$T_{\rm eff, B}$ Observed (K) & $2970\pm17$ & $4030 - 4150$ & $3142\pm36$ & $3200\pm38$ & $2335.60\pm8.56$ \\
\hline
$T_{\rm eff, B}$ MIST Model (K) & \multicolumn{2}{c|}{2863} & \multicolumn{3}{c}{3020} \\
\hline
\end{tabular}
\flushleft\footnotesize{Parameter descriptions in order: $M_{\rm B}$ - M-dwarf mass; $R_{\rm B}$ - M-dwarf radius; $P$ - binary period; $e$ - binary eccentricity; $K$ - radial velocity semi-amplitude; $b_{\rm pri}$ - primary eclipse impact parameter; $k=R_{\rm B}/R_{\rm A}$ - radius ratio; $D_{\rm sec}$ - secondary eclipse depth in normalised flux units; $S$ - surface brightness ratio; $T_{\rm eff,B}$ Observed - M-dwarf effective temperature; $T_{\rm eff,B}$ MIST Model - theoretically predicted temperature from MIST stellar models \citep{mist}. Parameters noted with * were not explicitly provided in the earlier papers and are instead calculated by us. We suspect that $D_{\rm sec}=$ 18 ppm ``observed'' by \citet{Chaturvedi2018} was actually a predicted value from their \textsc{PHOENIX} fit of the primary eclipse.}
\label{tab:params}
\end{table*}
\subsection{KIC 1571511B}
\citet{Ofir2012} derive an M-dwarf temperature of 4030 - 4150 K, which is more than 1000 K hotter than our value of $2970\pm 17$ K. There are three differences between our studies. First, their secondary eclipse depth ($D_{\rm sec}=0.000275\pm0.000019$) is roughly $3\sigma$ deeper than ours ($D_{\rm sec}=0.0002043\pm0.0000059$), despite both values being derived from Kepler data. It is possible that different light curve processing led to this discrepancy. We re-do our analysis with the \citet{Ofir2012} $D_{\rm sec}$ and derive only a slightly hotter temperature of $3075\pm18$ K, which would still be within theoretical expectations.
A second difference is that \citet{Ofir2012} derive $T_{\rm eff,B}$ assuming blackbodies, as opposed to our \textsc{PHOENIX} model spectra (comparison in Fig.~\ref{fig:bandpass}). We re-do our analysis with this assumption and actually obtain a colder temperature of $2695\pm 15$ K. This would be an outlier, but in the opposite direction of the \citet{Ofir2012} result. The third difference is that \citet{Ofir2012} assume a uniform Kepler passband between 420 and 900 nm. By applying this assumption we obtain $2884\pm15$ K. Again, this simplifying assumption actually acts to make the M-dwarf {\it cooler}. This can be seen in Fig.~\ref{fig:bandpass}, where the assumption of a uniform bandpass would imply a higher Kepler sensitivity at redder wavelengths. Overall, we cannot explain why the \citet{Ofir2012} result is so hot.
\subsection{HD 24465B}
\citet{Chaturvedi2018} derive $T_{\rm eff,B}=2335.60\pm8.56$ K, which is 865 K cooler than our value from K2. We suspect that this value does not come from fitting the secondary eclipse, since \citet{Chaturvedi2018} note ``The secondary eclipse depth for all the sources were either undetectable or small'', yet they provide measurements for $T_{\rm eff,B}$ in all four cases. Figure~\ref{fig:fits} shows that the secondary eclipse is in fact clear in our K2 data. However, \citet{Chaturvedi2018} do not appear to be using the EVEREST pipeline, and hence the secondary eclipse may have been hidden from them by telescope systematics.
\citet{Chaturvedi2018} use the \textsc{PHOEBE} \citep{Prsa2011Phoebe} package to model the photometry and radial velocities, and state that $T_{\rm eff,B}$ is ``kept free for fitting''. However, since the fit was seemingly only of the {\it primary} eclipse and the radial velocities, there is essentially no information in the light curve concerning $T_{\rm eff,B}$. \textsc{PHOEBE} is arguably the most detailed software available for fitting eclipsing binaries, and we doubt it would produce such an outlier $T_{\rm eff,B}$ if both eclipses were being fit.
Finally, \citet{Chaturvedi2018} do note for HD 24465AB a secondary eclipse depth of 0.000018 in normalised flux units. This value is 32 times smaller than ours. We suspect this is not an observed depth but a predicted depth based on the $2335$ K effective temperature.
\section{Conclusion}\label{section:conclusion}
We have studied two benchmark M-dwarfs: KIC 1571511B \citep{Ofir2012} and HD 24465B \citep{Chaturvedi2018}. The former had a reported temperature about 1000 K hotter than expected. The latter was about 800 K colder than expected. Such discoveries would have posed significant challenges to stellar models. We re-analyse the original Kepler/K2 data to derive the M-dwarf effective temperature based on the secondary eclipse depth. Our results differ significantly from the original studies, and instead match the temperatures expected from both models and the majority of other literature M-dwarfs. With these new precise and reliable M-dwarf temperatures, these two targets can be truly considered benchmarks.
\section*{Data Availability Statement}
All radial velocities and light curves will be made available online.
\section*{Acknowledgements}
Support for DVM was provided by NASA through the NASA Hubble Fellowship grant HF2-51464 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. This research is also supported by funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n$^{\circ}$803193/BEBOP). Partial support for AD, RRM, and BSG was provided by the Thomas Jefferson Chair Endowment for Discovery and Space Exploration.
MIS acknowledges support from STFC grant number ST/T506175/1.
\bibliographystyle{mnras}
\bibliography{outlier_temps} %
\bsp %
\label{lastpage}
\title{A numerical study of the interplay between Fermi acceleration mechanisms in radio lobes of FR-II radio galaxies}
\titlerunning{Fermi acceleration mechanisms in radio lobes of FR-II radio galaxies}
\author{Sayan Kundu
\inst{1}\fnmsep\thanks{sayan.astronomy@gmail.com}
\and
Bhargav Vaidya
\inst{1}
\and
Andrea Mignone
\inst{2}
\and
Martin J. Hardcastle
\inst{3}
}
\institute{Discipline of Astronomy, Astrophysics and Space Engineering, Indian
Institute of Technology, Indore, Madhya Pradesh, India - 452020
\and
Dipartimento di Fisica Generale, Università degli Studi di Torino, Via Pietro Giuria 1, 10125 Torino, Italy
\and
Centre for Astrophysics Research, Department of Physics, Astronomy and Mathematics, University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK
}
\date{}
\authorrunning{Kundu, Vaidya, Mignone, Hardcastle}
\abstract
{Radio-loud AGNs are thought to possess various sites of particle acceleration, which gives rise to the observed non-thermal spectra.
Stochastic turbulent acceleration (STA) and diffusive shock acceleration (DSA) are commonly cited as potential sources of high-energy particles in weakly magnetized environments.
Together, these acceleration processes and various radiative losses determine the emission characteristics of these extra-galactic radio sources.}
{The purpose of this research is to investigate the dynamical interplay between the STA and DSA in the radio lobes of FR-II radio galaxies, as well as the manner in which these acceleration mechanisms, along with a variety of radiative losses, collectively shape the emission features seen in these extra-galactic sources.}
{A phenomenologically motivated model of STA is considered and subsequently employed on a magneto-hydrodynamically simulated radio lobe through a novel hybrid Eulerian-Lagrangian framework.}
{STA gives rise to a curved particle spectrum that is morphologically different from the usual shock-accelerated spectrum.
As a consequence of this structural difference in the underlying particle energy spectrum, various multi-wavelength features arise in the spectral energy distribution of the radio lobe.
Additionally, we observe enhanced diffuse X-ray emission from radio lobes for cases where STA is taken into account in addition to DSA.}
{}
\keywords{Magnetohydrodynamics (MHD)
-- Methods: numerical
-- Acceleration of particles
-- Radio continuum: galaxies
-- X-rays: galaxies
-- Turbulence}
\section{Introduction}
{Radio galaxies} are among the most energetic systems in the universe.
These extra-galactic objects are observed to possess a huge reservoir of relativistic non-thermal particles, which collectively shape their emission features \citep{blandford_2019}.
Further, due to the abundance of highly energetic particles, these galaxies are generally considered a favourable site to study various high-energy phenomena \citep{meisenheimer_2003}.
In recent years, with the advent of multi-messenger astronomy, different observations are uncovering various features and helping us understand different micro-physical processes happening in these systems \citep{marcowith_2020}.
Low frequency radio observations of these radio galaxies provide insights about their morphological structures \citep[see][for more details]{hardcastle_2020}, their magnetic field strength \citep{Croston_2005b} and their age \citep{alexander_1987,carilli_1991,Mahatma_2019}.
Based on the brightness of these sources at $178$\,MHz, they are classified as Fanaroff-Riley (FR) class I (low-power) or II (high-power) \citep{fanaroff_1974}.
These two classes of radio galaxies are observed to manifest different morphological structures.
While FR-II sources exhibit a one-sided smooth spine-like structure with a bright termination point, FR-I sources show a two-sided plume-like structure.
Additionally, FR-II sources show prominent signs of turbulent cocoons that have an extent of a few hundred kiloparsecs and are often partly visible as lobes \citep[][]{mullin_2008,hardcastle_2020}.
These lobes are believed to be highly magnetized cavities of rarefied plasma where most of the jet kinetic power is deposited.
Radio lobes also have a hotspot region near the jet termination region, responsible for accelerating particles to high energies via diffusive shock acceleration (DSA) \citep{brunetti_2001,prieto_2002,araudo_2018}.
These freshly shock-accelerated particles are further mixed with the older plasma particles already residing in the lobe, making the lobe a turbulent playground where various plasma waves interact with the particles and eventually accelerate them via stochastic turbulent acceleration (STA).
Such a mechanism has also been invoked to explain particle acceleration in various astrophysical systems, such as solar flares \citep{Petrosian_2012}, coronae above the accretion disks of compact objects \citep{Dermer_1996,Liu_2004,Belmont_2008,Vurm_2009}, supernova remnants \citep{Bykov_1992,Kirk_1996,Macrowith_2010,Ferrand_2010}, gamma-ray bursts \citep{Schlickeiser_2000}, blazar emission \citep[see][and references therein]{Asano_2018,tavecchio_2022}, the Fermi bubbles \citep{Mertsch_2019} and galaxy clusters \citep{brunetti_2007,donnert_2014,vazza_2021}.
STA has been invoked as a possible mechanism for producing Ultra High Energy Cosmic Rays (UHECRs) from the radio lobe of Pictor A \citep{fan_2008} and Cen A \citep[][]{hardcastle_2009,sullivan_2009}.
Also, recently, it has been invoked as a plausible candidate in explaining the spectral curvature usually observed in FR-II radio lobes \citep[][]{harris_2019}.
In addition to the radio observations, X-ray observations of these radio loud AGNs have become popular due to the minimal contamination of the X-ray radiation by non-AGN sources.
Several components of these sources, such as radio lobes, hotspots, and collimated radio jet spine, are observed to radiate in the X-ray band \citep{vries_2018,Massaro_2018}.
Additionally, these lobes are often observed to give rise to diffuse X-ray emission from the region between the host galaxy and the radio hot spot, which is usually ascribed to the inverse Compton emission off the cosmic microwave background radiation (IC-CMB) \citep[][]{Hardcastle_2002,Croston_2005,blundell_2006}.
Recent observations reveal that the non-thermal X-ray emission from the radio lobe increases with red-shift, further supporting the IC-CMB origin \citep{gill_2021}.
Diffuse X-ray emission has also been reported in the jets of the FR-I class of radio galaxies and has been ascribed to a distributed particle acceleration mechanism \citep{hardcastle_2007b,worral_2008,worrall_2009}.
An IC-CMB model is also sometimes invoked to explain X-ray emission from the jets of FR-II radio galaxies and quasars; however, such models require the jet to be highly relativistic and well aligned with the line of sight, and consequently tend to imply very large physical jet lengths, sometimes in excess of several megaparsecs \citep{tavecchio_2000,celotti_2001,ghisellini_2005}.
Further, recent polarimetric studies and high-energy gamma-ray constraints provide evidence supporting the synchrotron emission model as the origin of diffuse X-ray emission from AGN jets \citep[see][for a recent review]{perlman_2020}.
This consequently requires particles with very high energies to be present in the jet and also favours a distributed particle acceleration mechanism due to the short synchrotron lifetime of the radiating particles.
The present work explores, for the first time, the interplay of vital particle acceleration mechanisms in a weakly magnetised plasma environment such as the radio lobes of FR-II radio galaxies and studies their effect on the emission properties of these systems.
Due to the complicated evolution of the dynamical quantities as a result of non-linear plasma flow pattern inside these lobes, we adopt a numerical approach for this work.
In particular, we have employed MHD simulations to produce radio lobes and analyse the emission features caused by particle energization in the presence of shocks and underlying turbulence.
We have adopted our recently developed second-order accurate STA framework \citep{Kundu_2021} for this purpose.
Owing to the increased computational complexity of the developed framework, this paper will focus on a 2D axisymmetric MHD jet model only while leaving the more computationally expensive 3D case to forthcoming works.
The paper is organised in the following way:
we describe our numerical setup for simulating a 2D axisymmetric AGN jet in section~\ref{sec:Dyn}.
Section~\ref{sec:emiss_setup} describes the numerical model to compute the emission properties.
In section~\ref{sec:results}, we present the results of the simulations.
In section~\ref{sec:summary} we summarise our findings and discuss the limitations of our model.
\section{Numerical Setup}\label{sec:setup}
In this section, we describe the numerical setup adopted for the present work.
The radio lobes are typically associated with the termination point of the AGN jet, where the velocity of the jet material reduces considerably such that relativistic effects become negligible \citep{espinosa_2011}.
Further, as shown by \cite{hardcastle_2013}, numerical simulations of realistic radio lobe require high Mach number flows as well as very high-resolution meshes in order to have radio lobes in pressure equilibrium with the surrounding medium and to resolve the transverse radial equilibrium.
Therefore, to investigate the emission profile of the radio lobes, we focus on a non-relativistic scenario and perform a two-dimensional axisymmetric ideal MHD simulation using the PLUTO code \citep{mignone_2007}. In particular, we solve the following set of conservation equations,
\begin{equation}\label{eq:continuity}
\frac{\partial\rho}{\partial t} + \nabla \cdot (\rho \vec v) = 0\,,
\end{equation}
\begin{equation}
\frac{\partial\vec v}{\partial t} + (\vec v \cdot \nabla)\vec v = - \frac{1}{\rho}\nabla P
+ \frac{1}{\rho}(\nabla \times \vec B)
\times \vec B
\,,
\end{equation}
\begin{equation}
\frac{\partial P}{\partial t} + \vec v \cdot \nabla P
+ \Gamma P \nabla \cdot \vec v = 0 \,,
\end{equation}
\begin{equation}\label{eq:induction}
\frac{\partial\vec B}{\partial t} = \nabla \times (\vec v \times \vec B)
\,,
\end{equation}
where $\rho$, $P$, $\vec v$ and $\vec B$ represent density, pressure, velocity and magnetic field, respectively.
The magnetic field $\vec B$ further satisfies the constraint $\nabla \cdot \vec B=0$.
$\Gamma$ represents the ratio of specific heats and its value is taken to be $5/3$, which is typically considered for supersonic non-relativistic jets \citep[][]{massaglia_2016}.
Eqs.~(\ref{eq:continuity})-(\ref{eq:induction}) are solved with the HLLC Riemann solver using piece-wise linear reconstruction, the van Leer flux limiter \citep{vanleer_1977} and second-order Runge-Kutta time-stepping.
Additionally, we consider divergence cleaning \citep{dedner_2002} to satisfy the solenoidal constraint of magnetic field.
\subsection{Dynamical Setup} \label{sec:Dyn}
The two-dimensional axisymmetric simulations are carried out in a cylindrical geometry $\{r,z\}$ such that the radial and vertical extents range from $\{0,0\}$ to $\{65 L_0,195 L_0\}$ with a resolution of $780\times 2340$.
The physical quantities defined in our simulations are appropriately scaled by defining length, velocity and density scales.
For the length, we define the jet radius $r_{j} = L_{0} = 2$\,kpc as the scale length.
The core density is adopted as the density scale, $\rho_{0} = 5\times10^{-26}$\,g\,cm$^{-3}$.
Finally, for an ambient temperature $T_a = 2$\,keV, we define the sound speed $c_{a}=v_{0}=730$\,km/s as the scale velocity.
The ambient medium density is initialized with an isothermal King profile \citep[][]{king_1972},
\begin{dmath}\label{eq:king}
\rho_{a}=\frac{\rho_0}{\left(1+\left(\frac{R}{R_c}\right)^2\right)^{\frac{3\beta}{2}}},
\end{dmath}
where $\rho_{a}$ is the ambient density, $R_c=40 L_0$ is the core radius, and $R/L_0 = \sqrt{r^2+z^2}$ is the spherical radius. The power-law index is fixed at $\beta=0.35$.
Initially, the ambient medium is set in a hydrostatic equilibrium using a gravitational potential ($\Phi_{\rm k}$) \citep{krause_2005},
\begin{dmath}
\Phi_{\rm k}=\frac{3\beta k_{B}T_{a}}{2\mu m_{H}}\log\left(1+\left(\frac{R}{R_c}\right)^{2}\right),
\end{dmath}
where $k_{B}$, $\mu$ and $m_{H}$ are the Boltzmann constant, mean molecular weight and mass of hydrogen atom respectively.
The ambient pressure ($P_{a}$) is computed as follows,
\begin{dmath}\label{eq:prs}
P_{a} = \frac{\rho_{a}k_{B}T_{a}}{\mu m_{H}}.
\end{dmath}
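The hydrostatic balance between Eqs.~(\ref{eq:king})--(\ref{eq:prs}) can be verified numerically. The following Python sketch works in illustrative code units ($\rho_0 = 1$, $L_0 = 1$, $k_B T_a/\mu m_H = 1$; not PLUTO's internal normalization) and checks $dP_a/dR = -\rho_a\, d\Phi_{\rm k}/dR$.

```python
import numpy as np

# Dimensionless check that the King atmosphere is hydrostatic:
#   dP_a/dR = -rho_a * dPhi_k/dR.
# Illustrative code units: rho0 = 1, L0 = 1, k_B T_a / (mu m_H) = 1.
beta, Rc = 0.35, 40.0

def rho_a(R):
    """Isothermal King density profile."""
    return (1.0 + (R / Rc) ** 2) ** (-1.5 * beta)

def P_a(R):
    """Isothermal pressure: P = rho * (k_B T_a / mu m_H) = rho here."""
    return rho_a(R)

def Phi_k(R):
    """Gravitational potential that holds the atmosphere static."""
    return 1.5 * beta * np.log(1.0 + (R / Rc) ** 2)

R = np.linspace(1.0, 150.0, 2000)
dR = R[1] - R[0]
residual = np.max(np.abs(np.gradient(P_a(R), dR)
                         + rho_a(R) * np.gradient(Phi_k(R), dR)))
```

The residual vanishes to finite-difference accuracy, confirming that the adopted potential exactly balances the King atmosphere.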
The ambient medium is set to be non-magnetized initially with the expectation that the magnetic field in the environment will have minimal impact on the non-thermal particle transport and the subsequent emission features within the lobe.
An under-dense beam of density $\rho_{j}=\eta\rho_{0}$, with $\eta =0.1$ the density contrast, is continuously injected into the medium with velocity $v_{j}$ from a circular nozzle of radius $r_{j}$, along the vertical direction ($\hat{z}$), starting at $t=0$.
The nozzle is placed within the numerical domain at a height of $0.5 L_0$. The adopted resolution samples the jet nozzle radius with 12 computational cells.
The injection velocity ($v_{j}$) is obtained by choosing the sonic Mach number $M$ such that,
\begin{equation}
v_{j}=M c_{a},
\end{equation}
with $M = 25.0$.
The injected beam includes a toroidal magnetic field ($B_{j}$) with the following radial profile \citep{lind_1989}
\begin{dmath}\label{eq:b_phi}
B_{ j,\phi} = \left\{\begin{array}{cl}
B_{m}\frac{r}{r_{m}} & \mbox{$\text{for}\,\, r\leq r_{m}$} \\
B_{m}\frac{r_{m}}{r} &\mbox{$\text{for}\,\, r_{m}\leq r\leq r_{j}$} \\
0 &\mbox{$\text{otherwise}$},
\end{array} \right.
\end{dmath}
where the value of $B_{m}$ is governed by the plasma-beta parameter and $r_{m}$ is the magnetization radius.
Such a magnetic field profile corresponds to a uniform current density within the radius $r_{m}$, zero current density between $r_{m}$ and $r_{j}$, and a return current at $r_{j}$. This configuration also respects the symmetry condition on the $z$ axis ($B_{j,\phi}=0$ at $r=0$) \citep{komissarov_2007}.
Additionally, a suitable gas pressure is provided inside the jet to ensure radial balance between the hoop stress and pressure gradient force,
\begin{dmath}\label{eq:prs_jet}
P_{j} = \left\{\begin{array}{cl}
\left(\delta + \frac{2}{\kappa}\left(1-\frac{r^2}{r_m^2}\right)\right)P_{e} & \mbox{$\text{for}\,\, r< r_{m}$} \\
\delta P_{e} &\mbox{$\text{for}\,\, r_{m}\leq r< r_{j}$} \\
P_{e} &\mbox{$\text{at}\,\, r=r_{j}$},
\end{array} \right.
\end{dmath}
where $\delta = 1-\frac{r_{m}^2}{\kappa r_{j}^2}$, $\kappa = \frac{2P_e}{B_m^2}$, and $P_{e}$ is the ambient pressure (in units of $\rho_0 v_0^2$) at the nozzle radius, i.e., $P_{e}=P_{a}$ at $r=r_{j}$.
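The radial balance between the hoop stress and the pressure-gradient force implied by Eqs.~(\ref{eq:b_phi}) and (\ref{eq:prs_jet}) can be checked numerically. The sketch below assumes code units in which the magnetic pressure is $B^2/2$ (consistent with $\kappa = 2P_e/B_m^2$); the values of $B_m$, $r_m$ and $P_e$ are illustrative, not those of the simulation.

```python
import numpy as np

# Radial force balance inside the jet nozzle:
#   d/dr (P_j + B_phi^2 / 2) + B_phi^2 / r = 0,
# in code units where the magnetic pressure is B^2/2.
r_j, r_m, B_m, P_e = 1.0, 0.6, 0.3, 1.0   # illustrative values
kappa = 2.0 * P_e / B_m**2
delta = 1.0 - r_m**2 / (kappa * r_j**2)

def B_phi(r):
    return np.where(r <= r_m, B_m * r / r_m,
                    np.where(r <= r_j, B_m * r_m / r, 0.0))

def P_jet(r):
    inner = (delta + (2.0 / kappa) * (1.0 - r**2 / r_m**2)) * P_e
    return np.where(r < r_m, inner, delta * P_e)

def residual(r):
    # centered finite difference of the total pressure plus hoop stress
    h = 1e-6
    dtot = (P_jet(r + h) + 0.5 * B_phi(r + h) ** 2
            - P_jet(r - h) - 0.5 * B_phi(r - h) ** 2) / (2.0 * h)
    return dtot + B_phi(r) ** 2 / r

r_in = np.linspace(0.05, r_m - 0.01, 100)         # uniform-current core
r_out = np.linspace(r_m + 0.01, r_j - 0.01, 100)  # current-free annulus
res = max(np.max(np.abs(residual(r_in))), np.max(np.abs(residual(r_out))))
```

The residual vanishes on both sides of $r_m$, confirming that the prescribed gas pressure exactly offsets the magnetic tension of the toroidal field.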
Owing to the constraint imposed by the 2D axisymmetric geometry, the induction equation (Eq.~\ref{eq:induction}) does not allow conversion of the toroidal magnetic field ($B_{\phi}$) into a poloidal one.
As a result, we consider a minimal value of $B_m\sim 100$\,$\mu$G to avoid significant amplification of $B_{\phi}$ due to its continuous injection into the computational domain over time.
Further, the initial kinetic power of the jet is calculated from the quantities defined at the jet nozzle \citep{massaglia_2016},
\begin{dmath}
W = \frac{\pi}{2}\left(\frac{\Gamma k_{B}N_{A}}{\mu}\right)^{\frac{3}{2}}\eta \rho_{0} r_{j}^{2}M^{3}T_{a}^{\frac{3}{2}},
\end{dmath}
where $N_{A}$ is Avogadro's number. For the choices adopted in the present work, we obtain $W\simeq10^{45}$\,erg/s corresponding to the FR-II class of radio galaxies \citep{fanaroff_1974}.
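As a cross-check, the jet power can be evaluated directly from the adopted parameters. The Python sketch below works in cgs units and assumes a mean molecular weight $\mu = 0.6$ (a standard value, not stated explicitly above); it also recovers the quoted ambient sound speed of $\sim 730$\,km/s.

```python
import numpy as np

# Order-of-magnitude evaluation of the jet kinetic power (cgs units).
kB = 1.380649e-16          # Boltzmann constant, erg/K
NA = 6.02214076e23         # Avogadro's number, 1/mol
Gamma, mu = 5.0 / 3.0, 0.6 # ratio of specific heats; mu = 0.6 assumed
eta, M = 0.1, 25.0         # density contrast and Mach number
rho0 = 5.0e-26             # core density, g/cm^3
r_j = 2.0 * 3.0857e21      # jet radius: 2 kpc in cm
Ta = 2.0 * 1.602e-9 / kB   # 2 keV converted to Kelvin

# ambient sound speed c_a = sqrt(Gamma k_B T_a / (mu m_H)), using
# m_H ~ 1/NA g so that kB*NA/mu is the specific gas constant
ca = np.sqrt(Gamma * kB * NA * Ta / mu)   # cm/s, ~7.3e7 (= 730 km/s)

# W = (pi/2) (Gamma kB NA Ta / mu)^{3/2} eta rho0 r_j^2 M^3
#   = (pi/2) eta rho0 r_j^2 M^3 ca^3
W = 0.5 * np.pi * eta * rho0 * r_j**2 * M**3 * ca**3   # erg/s
```

With these numbers $W \approx 2\times10^{45}$\,erg/s, consistent with the quoted $W\simeq10^{45}$\,erg/s at the order-of-magnitude level.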
For the boundaries, we employ an axisymmetric boundary condition at the inner radial boundary and free-flow conditions at all other boundaries of the computational domain.
\subsection{Numerical setup to compute emission}\label{sec:emiss_setup}
The non-thermal emission from the radio lobe is modelled using the Eulerian-Lagrangian hybrid framework of the PLUTO code \citep{Vaidya_2018, mukherjee_2021}.
It employs passive Lagrangian (or macro-) particles whose dynamics are governed by the underlying fluid motion.
Physically, these macro-particles represent an ensemble of non-thermal particles (typically leptons) residing very closely together in physical space with a finite energy distribution.
The energy distribution of these macro-particles is evolved by solving the following transport equation,
\begin{equation}\label{eq:main}
\DS \pd{\chi_p}{\tau}
+ \pd{}{\gamma}\left[(S+D_{A})\chi_{p}\right] =
\pd{}{\gamma}\left(D\pd{\chi_p}{\gamma}\right)
\end{equation}
where $\tau$ is the proper time and $\gamma\approx p/m_0 c$ is the Lorentz factor of the electrons, with $m_0$ the electron rest mass and $c$ the speed of light in vacuum. The dimensionless quantity is $\chi_{p}=N/n$, where $N(p,\tau)$ is the number density of non-thermal particles with momentum between $p$ and $p+dp$ and $n$ is the number density of the fluid at the position of the macro-particle. The quantity $S$ represents the various radiative and adiabatic losses, and the acceleration due to the second-order Fermi mechanism is given by $D_{A} = 2D/\gamma$, with $D$ the momentum diffusion coefficient. For simplicity, we neglect the source and sink terms in the transport equation.
Eq.~(\ref{eq:main}) is solved using a second-order accurate finite-volume conservative implicit-explicit (IMEX) scheme \citep{Kundu_2021}.
The radiative losses considered include synchrotron, IC-CMB and adiabatic expansion to model the cooling processes of relativistic electrons.
Additionally, as the particle spectra in the high-energy region fall off rapidly due to the various cooling processes, we follow \cite{winner_2019} and set $\chi_{p} = 0$ wherever it falls below a threshold $\chi_{\rm cut}=10^{-21}$.
Note that Eq.~(\ref{eq:main}) does not include shock acceleration; instead, a separate sub-grid prescription is employed to account for DSA \citep{Vaidya_2018,mukherjee_2021}.
The micro-physics of turbulent acceleration is encapsulated in the diffusion coefficient $D$. Typically, the empirical form of $D$ is given as an input in numerical simulations \citep{donnert_2014,vazza_2021}, as its quantification from first principles is complex particularly when applied to study large scale astrophysical environments.
In this work we opt for a phenomenologically motivated ansatz of exponentially decaying hard-sphere turbulence as a model of STA inside the radio lobe.
We consider the acceleration timescale ($t_A$) as follows \citep{kundu_2022_conf},
\begin{equation}\label{eq:acc_time}
t_{A} = \tau_{A} \exp\{(t-\tau_{t})/\tau_d\}
\end{equation}
where $\tau_{d}$ is the turbulence decay timescale, $\tau_{A}$ represents the acceleration timescale when turbulence decay is absent (or $\tau_{d}\to \infty$) and $t$ is the simulation time.
$\tau_{t}$ is the injection time of the macro-particle into a turbulent region: for a macro-particle that encounters a shock, it is set to the time at which the last shock was encountered, while for macro-particles that never undergo a shock, it is set to the initial injection time into the computational domain.
This acceleration timescale mimics the decay of turbulence generally observed in various astrophysical sources; the decay is a consequence of the finite lifetime of the turbulence and prevents particles from being accelerated indefinitely.
For this work, we model $\tau_{A}$ and $\tau_{d}$ as follows,
\begin{align}
\label{eq:tau_a}
\begin{split}
\tau_{A} &= \frac{\tau_{c}(\gamma_{\rm max} \rightarrow \gamma_{\rm min})}{\alpha},
\\
\tau_{d} &= \tau_{A},
\end{split}
\end{align}
where $\tau_{c}(\gamma_{\rm max} \rightarrow \gamma_{\rm min})$ is the radiative loss time for a particle to cool from $\gamma_{\rm max}$ to $\gamma_{\rm min}$, and $\alpha$ is the ratio of the synchrotron cooling time to the acceleration time.
The parameter $\alpha$ thus controls the efficiency of STA: a larger $\alpha$ corresponds to a shorter STA timescale $\tau_{A}$ and turbulence damping timescale $\tau_{d}$, i.e., faster stochastic acceleration and faster damping, while for smaller values of $\alpha$ the effect of STA asymptotically diminishes.
This parametric representation models the turbulence present in realistic radio lobes of FR-II radio galaxies, which is unresolved in our simulation.
In this work, we vary its value and study how this affects the emission signatures.
The diffusion coefficient can subsequently be written as,
\begin{equation}
D=\frac{\gamma^{2}\exp\{-(t-\tau_{t})/\tau_d\}}{\tau_{A}}.
\end{equation}
The $\gamma^{2}$ dependency of the diffusion coefficient is a characteristic of the hard-sphere turbulence.
Alternative diffusion models could also be explored; for example, adopting Bohm-type diffusion ($D \propto \gamma$) could influence the results, but such a study of the varying dependence of the diffusion coefficient on $\gamma$ is beyond the scope of this paper.
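By construction, the diffusion coefficient above is consistent with the acceleration timescale of Eq.~(\ref{eq:acc_time}): $\gamma^2/D = t_A$, so the hard-sphere ($\gamma^2$) scaling cancels and the effective acceleration time grows as the turbulence decays. A minimal Python sketch with illustrative parameter values:

```python
import numpy as np

# Hard-sphere diffusion coefficient with exponentially decaying
# turbulence, and the consistency check gamma^2 / D = t_A.
def t_A(t, tau_A, tau_t, tau_d):
    """Acceleration timescale, Eq. (acc_time)."""
    return tau_A * np.exp((t - tau_t) / tau_d)

def D(gamma, t, tau_A, tau_t, tau_d):
    """Hard-sphere momentum diffusion coefficient."""
    return gamma**2 * np.exp(-(t - tau_t) / tau_d) / tau_A

tau_A = tau_d = 1.0        # tau_d = tau_A, as adopted in this work
tau_t = 0.2                # illustrative injection time
gamma = np.logspace(0, 5, 6)
t = 2.5
acc_time = gamma**2 / D(gamma, t, tau_A, tau_t, tau_d)
```

The computed `acc_time` is independent of $\gamma$ and equal to $t_A(t)$, as expected for hard-sphere turbulence.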
To explore the ramifications of STA with varying efficiency on the emission of the simulated radio lobe structure, we use two alternative values for $\alpha=10^4$ and $10^5$ in this study.
Further, to sample the jet cocoon uniformly, we inject a sufficient number ($\sim 20$) of macro-particles at every time step into the computational domain.
Initially, the normalized particle spectrum for each macro-particle is assumed to be a power-law, defined as $\chi_{p}(\gamma)=\chi_{0}\gamma^{-9}$, ranging from $\gamma_{\rm min}=1$ to $\gamma_{\rm max}=10^5$.
The value of $\chi_{0}$ is set by prescribing the energy density of the macro-particles to be a fraction ($\approx 10^{-4}$) of the initial magnetic energy density.
Note that the initial spectral index has a negligible effect on the emission of the system at later times as long as we consider a steep power-law.
To compute the emissivity, we convolve the instantaneous energy spectrum of each macro-particle with the corresponding single particle radiative power and extrapolate it to the nearest grid cells.
In particular we solve the following integral to compute the emissivity,
\begin{equation}\label{eq:emiss}
j(\nu',n',\tau)=\int_{1}^{\infty}\mathcal{P}(\nu',\gamma',\psi')N'(\gamma',\tau)d\gamma' d\Omega',
\end{equation}
where $\mathcal{P}(\nu',\gamma',\psi')$ is the power emitted by a non-thermal particle per unit frequency ($\nu'$) and unit solid angle ($\Omega'$) with Lorentz factor $\gamma'$, whose velocity makes an angle $\psi'$ with the direction $n'$.
$N'(\gamma',\tau)$ is the number of micro-particles with Lorentz factor between $\gamma'$ and $\gamma'+d\gamma'$ at time $\tau$.
In the case of an axisymmetric simulation, the magnetic field becomes independent of the polar angle and therefore to consider the line of sight (LOS) effect in the synchrotron emissivity, appropriate co-ordinate transformation is required \citep{meyer_2021}. We transform the magnetic field from cylindrical to Cartesian coordinates and compute the LOS effect by rotating the simulated structure explicitly.
The entire rotation (of $360^{\circ}$) is performed with an interval of $5^{\circ}$.
Subsequently, the intensity maps of the structure are computed by doing a LOS integration of the calculated emissivity.
Note that all the emissivity calculation is performed by considering a viewing angle of $\theta = 90^{\circ}$ (i.e., along the $z=0$ plane in Cartesian co-ordinates).
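At $\theta = 90^{\circ}$ the LOS integration for an axisymmetric emissivity reduces to $I(x,z) = \int j\!\left(\sqrt{x^2+y^2},\, z\right)\, dy$. The following Python sketch illustrates only this integration step (omitting the pitch-angle dependence of the synchrotron power and the field rotation) and is validated against the analytic chord-length profile of a uniformly emitting cylinder.

```python
import numpy as np

# Line-of-sight integration of an axisymmetric emissivity j(r, z) at a
# 90-degree viewing angle: I(x, z) = integral of j(sqrt(x^2+y^2), z) dy.
def los_intensity(j_of_rz, x, z, y_max=5.0, n_y=4001):
    y = np.linspace(-y_max, y_max, n_y)
    vals = j_of_rz(np.sqrt(x**2 + y**2), z)
    # trapezoidal rule along the line of sight
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(y))

# Check: uniform emissivity inside a cylinder of radius R gives the
# chord-length profile I(x) = 2 sqrt(R^2 - x^2).
R = 1.0
uniform = lambda r, z: np.where(r < R, 1.0, 0.0)
x = 0.5
I = los_intensity(uniform, x, 0.0)
chord = 2.0 * np.sqrt(R**2 - x**2)
```

The numerical intensity matches the analytic chord length to the resolution of the $y$ grid, which is the essential correctness check before convolving with the single-particle radiative power.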
\section{Results}\label{sec:results}
We categorize the major results from our simulations in two parts.
The first part gives an overview of dynamical aspects of radio lobes and the second part provides a detailed analysis of multi-wavelength emission signatures and particle acceleration processes within these lobes.
\subsection{Dynamics}
We have carried out axisymmetric MHD simulations following the initial conditions described in Section~\ref{sec:setup} using the relevant jet and ambient medium parameters. The simulation is carried out up to a physical time of $\sim 120$\,Myr.
In Fig.~\ref{fig:density}, we show the density evolution of the injected jets at different times, viz., $t = 37$, $64$, $91$ and $117$\,Myr.
The density structure at every time snapshot shows an expanding bi-directional under-dense region which at a later time ($t = 117$\,Myr) can be identified as lobes \citep{english_2016}.
Similar to \cite{hardcastle_2013}, we notice the formation of a long, thin lobe initially and a transverse expansion afterward.
This subsequent expansion in the transverse direction is attributed to the thermalization of the jet material by the shocks present in the lobe.
Further, we observe the formation of vortices at the lobe boundary, which are typically attributed to Kelvin-Helmholtz instabilities originated from the velocity shear between the lobe material and shocked ambient material.
Moreover, the entire structure is encapsulated within a forward-moving shock that can be seen to propagate through the ambient medium. This shock remains in the computational domain throughout the simulation time, preventing any mass, energy, and momentum from escaping the domain.
In Fig.~\ref{fig:temp_prs}, we show the temperature (left panel), thermal pressure (second panel), absolute velocity $|\vec{v}|$ (third panel) and plasma-beta (right panel) maps of the bi-directional jet at time $t = 117$\,Myr.
The temperature of the lobe (average value of $\sim70$\,keV) is relatively higher than the ambient medium ($T_a = 2$\,keV).
This is expected given the presence of a strong shock at the jet termination region, which is responsible for heating the jet material in the cocoon.
The existence of the strong shock can further be seen from the pressure map as shown in the second panel of the figure.
The pressure map also provides evidence of multiple re-collimation shocks along the jet axis.
Such shocks are expected to be favourable sites for accelerating particles via shock acceleration and are known to be a source of localised high-energy emissions.
Further, we observe that the jet velocity remains within the non-relativistic limit, with an average value of $\sim 0.02c$.
The plasma-beta map, as depicted in the right panel of the figure, shows that the lobes are thermally dominated with an average lobe plasma-beta value of $\sim 32$.
The under-dense lobes observed in 2D simulations resemble radio galaxies more closely at later times \citep{hardcastle_2013}, in particular once the expansion renders the length of the under-dense region comparable to the core radius of the galaxy.
Therefore, in this work, for the emission studies, we adopt the dynamical results at time $t=117$\,Myr.
\subsection{Emission}
We now turn our attention to the emission signatures of our model.
The discussion will be based on the comparison of synthetic emission signatures from different runs considered in our study.
The parametric study focuses mainly on the properties of the stochastic turbulent acceleration mechanism.
The details of these simulation runs are listed in Table~\ref{tab:sim_cases}, which considers various acceleration scenarios corresponding to different turbulent acceleration time scales $t_A$, while the background thermal fluid evolution remains exactly the same.
\begin{table*}
\centering
\begin{tabular}{| c c c c c p{0.5\linewidth} |}
\hline\\
\textbf{Run ID} & \textbf{DSA} & \textbf{STA} & \textbf{Turbulent Decay} & \textbf{$\alpha$} & \textbf{{Remarks}} \\
\hline
\hline\\
Case (a) & YES & NO & NO & 0 & Energy spectrum exhibits power law with exponential cut-off; PDF of $\gamma_{\rm avg}$ shows power law; SED shows transient peaks.\\ \hline
Case (b) & YES & YES & YES & $10^4$ & Individual macro-particle energy spectrum exhibits curvature; $\gamma_{\rm max}$ PDF indicates accumulation of particles around $10^4$; $\gamma_{\rm avg}$ PDF exhibits low-energy cut-off. Synthetic SED peaks at $10^{10}$\,Hz through synchrotron and $10^{19}$\,Hz via IC-CMB.\\ \hline
Case (c) & YES & YES & YES & $10^5$ & Individual macro-particle energy spectrum exhibits curvature. $\gamma_{\max}$ PDF shows particle accumulation around $10^{5}$; $\gamma_{\rm avg}$ PDF provides evidence of low-energy cut-off. Synthetic SED peaks at $10^{13}$\,Hz through synchrotron and $10^{21}$\,Hz via IC-CMB.\\ \hline
Case (d) & YES & YES & NO & $10^4$ & Individual macro-particle energy spectrum exhibits steady ultra-relativistic Maxwellian structure peaking at $\gamma\approx10^4$.\\ \hline
Case (e) & YES & YES & NO & $10^5$ & Individual macro-particle energy spectrum exhibits steady ultra-relativistic Maxwellian structure peaking at $\gamma\approx10^5$.\\
\hline
\end{tabular}
\vskip2ex
\caption{Properties of the different cases considered in the present study for calculating emission from the radio lobe. The first column contains labels for the cases for further reference. The second, third, and fourth columns represent the presence or absence of DSA, STA, and turbulent decay effects in the emission runs. The fifth column depicts the value of the free parameter $\alpha$ (Eq.~\ref{eq:tau_a}) chosen for different runs and the last column describes the key results for each of the cases.}
\label{tab:sim_cases}
\end{table*}
The results obtained from cases (a) and (b) will be useful to comprehend the impact of STA and its interplay with DSA.
Cases (b) and (c) will highlight the implications of having different turbulent decay timescales (see Eqs.~\ref{eq:acc_time}, ~\ref{eq:tau_a}).
For cases (d) and (e), the turbulent decay is turned off by setting $\tau_{d}\rightarrow\infty$ in Eq.~(\ref{eq:acc_time}).
Comparing results from these cases will demonstrate the effect of the turbulent decay process in our simulations.
In realistic astrophysical environments, we expect the turbulence to decay on some time scale that is governed by the micro-physical properties of the wave-particle interaction in that system.
As the current work incorporates turbulence via a sub-grid model, we have explored the implications of different parameters through these five cases.
All the results presented in this section are for a dynamical time of $117$\,Myr, unless specified otherwise.
Further, logarithmic binning has been adopted for all the histograms.
\subsubsection{Effect of Turbulent acceleration on individual macro-particle energy spectrum}
\label{sec:spectrum}
In Fig.~\ref{fig:spectrum}, we show the evolution of the energy spectra for all the cases listed in Table~\ref{tab:sim_cases} for a randomly chosen macro-particle which encountered its final shock at a dynamical time $t=25$\,Myr.
In the simulations presented in this work, the majority of the Lagrangian macro-particles are observed to encounter more than one shock.
Among them, we selected this particular particle, which had experienced multiple shocks only at earlier times, as a representative candidate to demonstrate the effects of turbulent acceleration on the particle energy distribution in the downstream of the shock for all the case scenarios.
The effect of multiple shocks on the energy spectrum of a Lagrangian macro-particle without STA has already been investigated in the context of AGN jet simulation \citep[see][for example]{mukherjee_2021,giri_2022}.
The spectral evolution of the macro-particle of case (a) is shown in the top left panel.
The spectrum exhibits a power-law with a high-energy cut-off which gradually shifts to lower energy with time, owing to various energy losses.
Additionally, a small hump can be seen in the low-energy part of the spectrum, which is due to an excess of low-energy electrons fed by the radiative cooling of their higher-energy counterparts.
The shape of the spectrum changes considerably when STA is considered in addition to DSA.
For cases (d) and (e) (shown in the right plot of the middle panel and left plot of the bottom panel respectively), the spectrum exhibits an ultra-relativistic Maxwellian distribution at later times.
This is a consequence of a steady competition between stochastic acceleration and radiative losses resulting in the acceleration of low-energy electrons towards higher energies \citep{Kundu_2021}.
Moreover, the peak of the distribution corresponds to the value of $\gamma$ at which acceleration and loss time scales match, i.e., $\tau_{c}=t_{A}$.
We find that the peak corresponds to $\gamma \approx \alpha$ and it depends on the choice of the turbulent acceleration timescale (see Eq.~\ref{eq:acc_time}).
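The location of this peak can be estimated by balancing the systematic Fermi-II gain, $D_A = 2D/\gamma = 2\gamma/\tau_A$, against the synchrotron loss rate $C_0 B^2 \gamma^2$. The sketch below assumes pure synchrotron losses, no turbulence decay, and an illustrative lobe-averaged field strength; it recovers the scaling $\gamma \approx \alpha$ up to an $\mathcal{O}(1)$ factor.

```python
import numpy as np

# Steady-state spectral peak from the balance between the systematic
# Fermi-II gain, D_A = 2 gamma / tau_A, and synchrotron losses,
# |S| = C0 B^2 gamma^2 (IC-CMB and adiabatic terms neglected).
C0 = 1.28e-9               # synchrotron constant for electrons (cgs)
B = 19.7e-6                # illustrative lobe-averaged field, Gauss
alpha = 1.0e4
g_min, g_max = 1.0, 1.0e5

# cooling time from g_max down to g_min, and the derived STA timescale
tau_c = (1.0 / g_min - 1.0 / g_max) / (C0 * B**2)
tau_A = tau_c / alpha

# balance 2 gamma / tau_A = C0 B^2 gamma^2  =>  gamma_eq = 2 alpha
gamma_eq = 2.0 / (tau_A * C0 * B**2)
```

For $\gamma_{\rm min}=1$ the total cooling time is $\tau_c \approx 1/(C_0 B^2)$, so $\gamma_{\rm eq} \approx 2\alpha$, consistent with the peak tracking $\alpha$ in the simulations.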
When turbulent decay is included (cases b and c), the spectrum flattens in the low-energy regime, as compared to the power-law behavior observed in case (a), and develops a high-energy cut-off.
The flattening of the low-energy component of the spectrum reflects the fact that STA continuously accelerates all the micro-particles towards higher energies, depopulating the low-energy regime.
Further, we point out that for macro-particles that have encountered a shock, STA acts in the downstream region and modifies the energy spectra on a timescale that depends on $t_A$ (Eq.~\ref{eq:acc_time}), which in turn is regulated by the turbulent decay timescale $\tau_d$; consequently, a cut-off develops that moves towards lower energies.
In summary, the spectral evolution of a macro-particle, presented in Fig.~\ref{fig:spectrum} for the different cases, clearly indicates that the presence of turbulent acceleration significantly affects the spectral energy distribution and its evolution.
Our results indicate that, in the absence of turbulent decay, the spectral evolution eventually relaxes towards a steady-state configuration in which energy losses are balanced by turbulent acceleration. When the decay of turbulence is accounted for, the energy spectrum exhibits non-stationary behaviour in time, and the cut-off is governed by the radiative loss timescale after the turbulence has decayed.
Further, the spectrum flattens in the low-energy regime owing to the energization of low-energy micro-particles by STA.
\subsubsection{Effect of Turbulent acceleration on particle population}
\label{sec: collective}
This section focuses on the effects of turbulent acceleration on the entire macro-particle population in the lobe.
In particular, we compute the effect of the STA with turbulence decay on the cut-off energy ($\gamma_{\rm max}$) for the macro-particle population.
To compute the cut-off energy of a macro-particle we consider a generic form of its energy spectrum,
\begin{equation}\label{eq:fit}
\gamma^{-m}\exp\left(-\frac{\gamma}{\gamma_{\rm max}}\right),
\end{equation}
where $m$ can be positive or negative depending on the macro-particle and $\gamma_{\max}$ is the cut-off energy.
The exponential decay term takes care of the effects on the spectrum due to various radiative losses (see section~{\ref{sec:spectrum}}).
The value of $\gamma_{\max}$ is calculated by multiplying Eq.~(\ref{eq:fit}) by a power-law profile, $\gamma^{10}$, and calculating the maximum point of the resultant curve.
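This extraction procedure can be illustrated numerically: the maximum of $\gamma^{10-m}\exp(-\gamma/\gamma_{\rm max})$ sits at $(10-m)\,\gamma_{\rm max}$, so the procedure recovers the cut-off up to a known prefactor. The parameter values in the sketch below are illustrative.

```python
import numpy as np

# Cut-off extraction: multiply the fitted spectral form
# gamma^{-m} exp(-gamma / gamma_max) by gamma^10 and locate the peak.
# Analytically, d/dgamma [gamma^{10-m} exp(-gamma/gamma_max)] = 0
# at gamma = (10 - m) * gamma_max.
m, gamma_max = 2.0, 1.0e4              # illustrative values
gamma = np.logspace(0, 7, 200_000)
spectrum = gamma ** (-m) * np.exp(-gamma / gamma_max)
boosted = spectrum * gamma ** 10
gamma_peak = gamma[np.argmax(boosted)]  # ~ (10 - m) * gamma_max
```

The recovered peak lies at $(10-m)\,\gamma_{\rm max}$ to the resolution of the logarithmic grid, confirming that the boosted-maximum procedure tracks the exponential cut-off.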
In Fig.~\ref{fig:histogram}, we show the probability distribution function (PDF) of the maximum (or cut-off) energy ($\gamma_{\max}$) attained by individual macro-particles for cases (a) (left panel), (b) (middle panel), and (c) (right panel).
For case (a), the distribution peaks around $\gamma_{\max}\approx 10^2$, followed by a broken power-law like tail beyond that.
The origin of this peak can be attributed to the presence of various radiative losses in the system.
The peak is also observed to gradually move towards lower values of $\gamma_{\max}$ with time.
To support the above argument, we undertake the following exercise:
for a particle undergoing synchrotron cooling only, an initial Lorentz factor $\gamma^{'}$ evolves after a time $t^{'}$ to
\begin{equation}\label{eq:fin_gam}
\gamma^{*}=\frac{1}{C_{0}B^2 t^{'}+\frac{1}{\gamma^{'}}}
\end{equation}
where $C_{0}=1.28\times 10^{-9}$ (in cgs units) is the synchrotron constant for electrons and $B$ is the magnetic field.
For our case, considering an averaged magnetic field of $B=19.70\,\mu$G and $t^{'}=117$\,Myr, we obtain $\gamma^{*}\approx 5.4\times 10^{2}$ for a wide range of $\gamma^{'}$ values, which coincides with the position of the peak.
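Evaluating Eq.~(\ref{eq:fin_gam}) with these numbers confirms the quoted estimate; the sketch below works in cgs units and scans a range of initial Lorentz factors.

```python
import numpy as np

# Evaluate gamma* = 1 / (C0 B^2 t' + 1/gamma') for the quoted
# lobe-averaged field and simulation time (cgs units).
C0 = 1.28e-9                 # synchrotron constant for electrons
B = 19.70e-6                 # averaged magnetic field, Gauss
t = 117.0 * 3.156e13         # 117 Myr in seconds

gamma_i = np.array([1.0e4, 1.0e6, 1.0e9])   # initial Lorentz factors
gamma_star = 1.0 / (C0 * B**2 * t + 1.0 / gamma_i)
```

For all initial $\gamma'$ well above the asymptote, $\gamma^{*}$ clusters near $5.4\times10^{2}$, independent of $\gamma'$, as stated.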
The break in the power-law around $\gamma_{\max}\sim 10^{5}$ is attributed to the continuous injection of the macro-particles in the computational domain with $\gamma_{\max}=10^{5}$ (see section \ref{sec:emiss_setup}).
The presence of an additional smaller peak around $\gamma_{\max}\sim10^{9}$ can also be observed.
This smaller peak is a transient feature, which arises from recently shocked macro-particles and is a manifestation of the continuous injection of jet material along with the Lagrangian macro-particles inside the computational domain.
The presence of this transient peak has been reported in earlier works as well \citep[see for example][]{borse_2021}.
Further, the power-law trend of the tail of the PDF is typically ascribed to the interplay between the continuous injection of macro-particles in the computational domain and the shock acceleration of these freshly injected particles. Such a power-law like behaviour of the distribution in an AGN jet cocoon has also been reported in \cite{mukherjee_2021}.
The PDFs for cases (b) and (c) show some additional peaks, as compared to case (a).
The origin of the peak at $\gamma_{\max}\sim 10^{2}$ is similar to case (a), while the high-energy one
($\gamma_{\max}\sim 10^{9}$) is again due to recently shocked macro-particles.
In addition, one can observe humps at $\gamma_{\max} \sim 10^{4}$ (for case b) and at $\gamma_{\max} \sim 10^{5}$ (for case c).
Their presence is caused by particles undergoing turbulent acceleration downstream of the shock: the competition between STA and radiative losses temporarily freezes the evolution of the cut-off at $\gamma_{\max}\approx\alpha$, and afterwards, as the turbulence decays, the cut-off resumes its decrease towards lower energies as dictated by the loss processes.
To understand the distribution of electron energy within macro-particles, we also estimate the average value of $\gamma$ (at the final simulation time, $t = 117$ Myr) denoted by $\gamma_{\rm avg}$ as:
\begin{equation}\label{eq:gamma_avg}
\gamma_{\rm avg}(t) = \frac{\int_{\gamma_{min}}^{\gamma_{\rm max}} \gamma N(\gamma,t)d\gamma}{\int_{\gamma_{min}}^{\gamma_{\rm max}} N(\gamma,t)d\gamma}\,.
\end{equation}
Here $\gamma_{\rm max}$ and $\gamma_{\rm min}$ are as given in section~\ref{sec:emiss_setup}.
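For a pure power-law $N(\gamma)\propto\gamma^{-p}$ this average has a closed form; the Python sketch below, with an illustrative index $p=2$, compares the numerical evaluation of Eq.~(\ref{eq:gamma_avg}) with the analytic value.

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule on a (possibly nonuniform) grid."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# gamma_avg for a power-law N(gamma) = gamma^{-p}; p = 2 illustrative.
p, g_min, g_max = 2.0, 1.0, 1.0e5
gamma = np.logspace(np.log10(g_min), np.log10(g_max), 200_001)
N = gamma ** (-p)

gamma_avg = trap(gamma * N, gamma) / trap(N, gamma)

# closed form for p = 2: ln(g_max/g_min) / (1/g_min - 1/g_max)
analytic = np.log(g_max / g_min) / (1.0 / g_min - 1.0 / g_max)
```

For $p=2$ the average is only $\sim 11.5$ despite the five-decade extent of the spectrum, illustrating why steep spectra concentrate $\gamma_{\rm avg}$ far below $\gamma_{\rm max}$.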
In Fig.~\ref{fig:aver_gamma}, we plot the PDF of $\gamma_{\rm avg}$ for the entire macro-particle population.
In the left panel of the figure, we show the PDF for $\gamma_{\rm avg}$ for case (a).
The distribution exhibits a power-law tail ($\propto \gamma_{\rm avg}^{-q}$, with $q\approx2.54$) beyond $\gamma_{\rm avg}\sim 10^{2}$.
For cases (b) and (c) (depicted in the middle and right panel of the figure), the PDFs exhibit a power-law distribution starting from $\gamma_{\rm avg}\sim 10^{3}$ with a small hump and an exponential cut-off.
The hump feature arises due to competition between STA and radiative losses (see above).
It is interesting to note that the slope of the power-law for both cases (b) and (c) ($q=0.29$ and $0.38$, respectively) is flatter than for case (a).
This is a consequence of the fact that STA continuously supplies energy to the macro-particles by accelerating low-energy micro-particles to higher energies, thus compensating for the radiative losses, as opposed to the case with only DSA.
Finally, in presence of both DSA and STA, the $\gamma_{\rm avg}$ PDFs exhibit a low-energy break around $\gamma_{\rm avg}\sim10^{3}$ owing to the fact that STA boosts low-energy particles to higher energies.
This process is absent when only DSA operates, since shock acceleration (which involves advecting the entire upstream spectrum of each macro-particle to the downstream \citep{mukherjee_2021}) has no selective mechanism to accelerate only the low-energy particles, and hence the $\gamma_{\rm avg}$ PDF cannot form a low-energy break.
Further, in Fig.~\ref{fig:integrated}, we present the integrated particle spectrum considering the whole macro-particle population for each of the three case scenarios.
The integrated particle spectrum is calculated as follows:
\begin{equation}
F(\gamma)=\sum_{i}\frac{\chi^{i}_{p}(\gamma)}{\mathcal{N}_{i}(\gamma)\int\chi^{i}_{p}(\gamma')d\gamma'}\,,
\end{equation}
where $i$ corresponds to individual macro-particles inside the computational domain, $\chi^{i}_{p}(\gamma)$ is the distribution function of the $i^{\rm th}$ macro-particle and $\mathcal{N}_{i}(\gamma)$ represents the number of macro-particles with Lorentz factor $\gamma$.
The DSA spectrum (case (a)) is in the form of a broken power-law with the break at $\gamma \approx 5 \times 10^2$ (region highlighted with orange color in the figure).
Such a behaviour is expected when computing a resultant distribution comprising all the macro-particles, where the spectral evolution is mediated by shock acceleration and radiative losses \citep{heavens_1987}.
The position of the break has a direct correspondence with the peak in the $\gamma_{\max}$ PDF for case (a) and can be explained by the same reasoning (see Eq.~\ref{eq:fin_gam}).
When STA is taken into account (cases (b) and (c)), the spectrum exhibits an inverse power-law behaviour for $\gamma \lesssim 4\times 10^2$, followed by a low energy break and a power-law trend with a high energy cut-off highlighted in blue and green for cases (b) and (c), respectively in the figure.
The spectral behaviour in the region $\gamma\lesssim 4\times10^2$ is a manifestation of the low-energy flattening in the individual macro-particle spectrum (see section~\ref{sec:spectrum}) due to turbulent acceleration.
The origin of the low energy break bears a similar explanation as the case (a). However, for cases where STA is taken into account, the cut-off is accompanied by piled up micro-particles (see case (c) of Fig.~\ref{fig:spectrum}) as opposed to case (a), which is why the break appears more prominent in cases (b) and (c).
The high energy cut-off in the integrated particle spectrum (at $\gamma\approx10^4$ for case (b) and $\approx10^5$ for case (c)) is governed by the formation of the quasi-stationary cut-off in the individual macro-particle spectrum due to the interplay of DSA and STA.
As a result, the position of these high energy cut-offs has an exact correspondence with the peaks observed in Fig.~\ref{fig:histogram} for the cases where STA is taken into account.
The power-law trend beyond $\gamma\gtrsim 10^6$ for all the case scenarios is a consequence of the continuous macro-particle injection in the computational domain and a fraction of them subsequently getting shock accelerated.
In summary, turbulent acceleration with exponential decay modifies the distribution of the macro-particles' maximum energy ($\gamma_{\rm max}$) by introducing additional humps in the PDFs.
The location of the humps is closely connected to the $\gamma$ of individual macro-particles where $\tau_{c}=t_{A}$.
The PDF of $\gamma_{\rm avg}$ for cases (b) and (c) exhibits a power-law trend with an exponential cut-off and a low energy break.
The integrated spectrum with only DSA exhibits a low energy break, whereas with STA, an additional cut-off at high energy is also seen.
\subsubsection{Turbulent acceleration as a sustained acceleration process}
In this section we examine how STA helps the macro-particles sustain their energy against extreme radiative losses.
To properly characterize this behaviour we consider an equivalent magnetic field for each macro-particle and compare it with the dynamical magnetic field at the position of the macro-particle.
This is computed from the instantaneous single macro-particle energy distribution as follows,
\begin{equation}\label{eq:mag_equip}
\frac{B_{\rm eq}^{2}}{8\pi}=m_{0}c^2\int_{\gamma_{\rm min}}^{\gamma_{\rm max}}\gamma N(\gamma,t)d\gamma,
\end{equation}
where $B_{\rm eq}$ is the corresponding equivalent magnetic field.
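The equipartition relation above can be sketched numerically. The power-law spectrum below is a hypothetical stand-in (not taken from the simulations), used only to illustrate how $B_{\rm eq}$ follows from the instantaneous energy distribution:

```python
import numpy as np

# Illustrative evaluation of the equipartition definition: B_eq^2 / 8*pi equals
# the macro-particle energy density m0 c^2 * Integral[ gamma N(gamma) dgamma ].
# The spectrum N(gamma) is an assumed power law, for illustration only.
M0_C2 = 8.187e-7                      # electron rest energy [erg]
gamma = np.logspace(2, 9, 512)
N = 1e-10 * gamma**-2.2               # assumed density per unit gamma [cm^-3]

energy_density = M0_C2 * np.trapz(gamma * N, gamma)   # erg cm^-3
B_eq = np.sqrt(8.0 * np.pi * energy_density)          # gauss

B_dyn = 19.7e-6   # lobe-averaged dynamical field quoted in the text [gauss]
print(f"B_eq = {B_eq:.3e} G, B_eq/B_dyn = {B_eq / B_dyn:.2e}")
```

In the simulations the same integral is evaluated per macro-particle from its evolving $N(\gamma,t)$ rather than from a fixed power law.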
Following Eq.~(\ref{eq:mag_equip}), we compute $B_{\rm eq}$ for cases (a), (b), and (c) and compare it with the corresponding dynamical magnetic field computed at the local macro-particle position at each instant, $B_{\rm dyn}$.
We plot the time evolution of the histogram of the quantity, $B_{\rm eq}/B_{\rm dyn}$, on a logarithmic scale, for all three cases in the top panel of Fig.~\ref{fig:equipartition}, where orange, blue, green, and black curves in each panel depict the histogram at times $5$\,Myr, $29$\,Myr, $58$\, Myr, and $117$\, Myr, respectively.
All the histograms are normalized so that the maximum peak value is unity.
As shown in the top left panel, for case (a), the histogram gradually shifts towards a state with $B_{\rm dyn} \sim B_{\rm eq}$ as time progresses.
The other two cases (b) and (c) exhibit a similar pattern, where one observes a broadening of the histogram as well.
For case (a), the shape of the PDF evolves to a negatively skewed distribution on a logarithmic scale.
To analyze the reason behind this kind of evolution, we show (bottom left panel) a 2D histogram depicting the value of $\tau_{t}$ with respect to the magnetic field ratio, indicating that the macro-particle population with a larger magnetic field ratio has recently been shocked.
This should not be surprising since the shock acceleration energizes particles thereby increasing $B_{\rm eq}$.
The 2D histogram also shows that a relatively small
fraction of macro-particles has magnetic field ratios larger than unity, due to the absence of any further acceleration process.
As a result these particles undergo strong cooling and quickly lose their energy, hence featuring an exponential fall in the histogram beyond $B_{\rm eq}\sim B_{\rm dyn}$ (top left plot of Fig.~\ref{fig:equipartition}).
On the contrary, for cases (b) and (c) (top middle and right panels), the 1D histogram evolves to a more extended structure, which closely resembles the log-normal shape.
This extended form of the histograms is ascribed to the presence of STA which provides a continuous acceleration to the macro-particles and helps them maintain their energy even in the presence of radiative cooling.
This is further confirmed by observing the corresponding 2D histograms in the bottom panels (middle and right, respectively).
In contrast to case (a), both figures show more macro-particles in the region $B_{\rm eq}/B_{\rm dyn}\gtrsim 1$.
We can also infer that even macro-particles that were shocked earlier (smaller $\tau_{t}$) feature a higher value of $B_{\rm eq}/B_{\rm dyn}$, because with STA macro-particles can sustain their energy for a longer time.
In summary, for all the cases, one observes the distribution gradually evolve towards a state where $B_{\rm eq}\sim B_{\rm dyn}$.
Further, due to the presence of STA, as compared to only DSA, the histogram manifests a more extended, evenly spread structure, owing to the macro-particles which were shocked at earlier times but could sustain their energy against radiative losses because of STA.
\subsubsection{Synthetic Spectral Energy Distribution of Radio lobe}
In Fig.~\ref{fig:sed}, we present the spectral energy distribution (SED) for cases (a) (green line), (b) (red line) and (c) (blue line).
The SED is calculated by integrating the emissivity (Eq.~\ref{eq:emiss}) along the line of sight \citep{Vaidya_2018} with two different radiation mechanisms: synchrotron (solid lines) and IC-CMB (dashed lines).
The synchrotron SED shows, for case (a), enhanced emission in the X-ray band with multiple peaks at $\nu \sim 10^{18}$ and $\nu \sim 10^{21}$ Hz.
These peaks originate from freshly shocked macro-particles \citep{borse_2021, mukherjee_2021}.
This can be further verified analytically using the relation between the critical (or cut-off) frequency ($\nu_{c}$) of synchrotron radiation and the corresponding $\gamma$ \citep[see for example Eqs.~(5.80) from][]{condon_2016},
\begin{equation}
\nu_{c}\approx\frac{\gamma^2 eB}{2\pi m_{e}c}.
\end{equation}
For instance, with an averaged magnetic field of the lobe $B= 19.70$\,$\mu$G and $\nu_{c} \sim 10^{21}$\,Hz, one obtains a corresponding value for $\gamma \sim 10^9$, which is consistent with the peak in the PDF of $\gamma_{\max}$ as seen in Fig.~\ref{fig:histogram}.
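This inversion can be checked in a few lines; the field value below is the lobe-averaged $B$ quoted in the text, and the constants are in CGS:

```python
import numpy as np

# Invert the synchrotron critical-frequency relation nu_c ~ gamma^2 e B / (2 pi m_e c)
# for the Lorentz factor gamma, given a cut-off frequency and field strength.
E_CGS = 4.803e-10      # electron charge [esu]
M_E = 9.109e-28        # electron mass [g]
C = 2.998e10           # speed of light [cm/s]

def gamma_from_nu_c(nu_c_hz, b_gauss):
    """Lorentz factor implied by a synchrotron cut-off frequency nu_c in field B."""
    return np.sqrt(2.0 * np.pi * M_E * C * nu_c_hz / (E_CGS * b_gauss))

gamma = gamma_from_nu_c(1e21, 19.70e-6)
print(f"gamma ~ {gamma:.1e}")   # of order 1e9, matching the gamma_max PDF peak
```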
For case (b), in addition to similar shock-induced transient signatures, the synchrotron emission shows a distinct peak in the low energy GHz radio band ($\nu\sim 10^{10}$ Hz).
Such a low energy peak is direct evidence of turbulent acceleration and corresponds to the hump in the PDF at $\gamma_{\max} \sim 10^4$ (see middle panel of Fig.~\ref{fig:histogram}).
Likewise, the synchrotron peak can also be observed for case (c) at a slightly higher energy, $\nu\sim10^{13}$\,Hz.
The macro-particles that are accelerated via STA and give rise to the peak in PDF around $\gamma_{\rm max}\sim 10^{5}$ (right panel of Fig.~\ref{fig:histogram}) are mainly contributing to the emission at this frequency band.
The macro-particle population that is stochastically accelerated in cases (b) and (c) is not only responsible for synchrotron emission but also contributes to the distinct peaks in the IC-CMB spectral energy distribution ($\nu\sim 10^{19}$\,Hz for case (b), $\nu\sim 10^{21}$\,Hz for case (c)).
We have verified that these correspond to the frequency of the photons scattered off a population of electrons with energy $\gamma_{\max}\sim10^{4}$ and $\gamma_{\max}\sim10^{5}$ for cases (b) and (c), respectively.
In fact, the post-scattering frequency of the photons $\nu_s$ is related to the electron energy as follows:
\begin{equation}\label{eq:iccmb}
\nu_{s}\approx \gamma_{\max}^{2}\nu_{0}\,,
\end{equation}
where $\nu_{0}$ is the frequency at which the cosmic microwave background (CMB) radiates.
Using $\nu_{0}=160$\,GHz in Eq. ({\ref{eq:iccmb}}), we find that an electron population at $\gamma_{\max}\sim 10^4$ would scatter the CMB photons at a frequency of $\sim10^{19}$\,Hz.
A similar inference can be drawn for the origin of the IC-CMB peak around $\sim 10^{21}$\,Hz for case (c).
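The order-of-magnitude estimate for both cases follows directly from the scattering relation above:

```python
# Order-of-magnitude check of the IC-CMB relation nu_s ~ gamma_max^2 * nu_0:
# CMB photons at nu_0 = 160 GHz scattered by electrons of Lorentz factor
# gamma_max emerge at X-ray / gamma-ray frequencies.
NU_0 = 160e9  # CMB peak frequency [Hz]

for gamma_max, case in [(1e4, "b"), (1e5, "c")]:
    nu_s = gamma_max**2 * NU_0
    print(f"case ({case}): nu_s ~ {nu_s:.1e} Hz")
```

For case (b) this gives $\sim 1.6\times10^{19}$\,Hz and for case (c) $\sim 1.6\times10^{21}$\,Hz, consistent with the SED peaks discussed above.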
Additionally, the peaks in the $\gamma$-ray band ($\nu \sim 10^{27}$\,Hz) for all cases correspond to particles with $\gamma_{\max}\sim 10^9$.
After observing the SED and identifying the particle populations responsible for the various peaks, we proceed to show the spatial distributions of these particle populations in order to understand the resulting emission structure.
In Fig.~\ref{fig:gam_spatial} we show the spatial distribution of the particle populations responsible for the aforementioned peaks.
The top panels depict the particle distributions with $\gamma_{\max}\sim10^4$ for case (a) (left plot), case (b) (middle plot) and case (c) (right plot).
These particles are correlated to the peak in the SED caused by IC-CMB at $\nu\sim10^{19}$\,Hz, as explained earlier in this section.
The macro-particles in case (a) can be seen to be more confined around the shocks in the beam and, to a lesser extent, to the cocoon region.
The reason for this is that, after the shock acceleration, the macro-particles' energy evolution is governed by loss mechanisms only; as a result, they lose a considerable amount of energy over a short distance.
On the contrary, when turbulent acceleration is included, the particle distribution corresponding to $\gamma_{\max} \sim 10^4$ stretches over a wider area (see the upper middle and right plot), since macro-particles can be re-accelerated via turbulence, sustaining high-energy for a longer distance before losing a substantial portion of their energy.
In comparison to case (a), this extended spatial distribution implies a more diffuse structure of X-ray radiation attributable to IC-CMB.
In the lower panel of Fig.~\ref{fig:gam_spatial} we show the spatial distribution of the macro-particles with $\gamma_{\rm max}\sim 10^5$, responsible for the peak in the IC-CMB SED at $\sim10^{21}$\,Hz.
Similar to the former scenario, the particle distribution shows an extended morphology for case (c) as compared to other two cases, for the same reasons discussed before.
Interestingly, the spatial distributions for case (a) (left panel) and case (b) (middle panel) have a very similar structure.
The reason for this can be further investigated by comparing the $\gamma_{\max}$ histograms for case (a) and (b) (left and middle panels in Fig.~\ref{fig:histogram}), showing a similar behavior (after the peak at $\gamma_{\max}\sim 10^4$ for the latter).
In summary, we showed that, in the presence of stochastic acceleration, the emission from the radio lobe changes significantly compared to the case where STA is neglected.
With the inclusion of STA, the spatial distribution of the X-ray emitting particles through IC-CMB exhibits a wider extent (see Fig.~\ref{fig:gam_spatial}) as compared to the DSA-only case, indicating a more diffuse emission structure.
\subsubsection{Spectral Index distribution}
In this section, we focus on the effect of STA on the radio frequency regime ($\leq 15$\,GHz).
With the advent of several high-resolution low-frequency telescope arrays, it is possible to quantify the distribution of the spectral index in extended lobes \citep{alexander_1987,harwood_2013}.
In this regime, the emission from astrophysical systems is dominated by synchrotron radiation, which follows a power-law relation with frequency, $I_{\nu}\propto \nu^{-\delta}$, with $\delta$ being the spectral index.
In our simulation we compute the intensity from the macro-particles energy distribution (see section~\ref{sec:emiss_setup}) and further calculate the spectral index $\delta$ using the equation below:
\begin{equation}
\delta=\frac{\log(I_{\nu_2})-\log(I_{\nu_1})}{\log(\nu_1)-\log(\nu_2)}
\end{equation}
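The two-frequency estimator above can be sketched and sanity-checked on a pure power law (the test spectrum is hypothetical, not simulation output):

```python
import numpy as np

def spectral_index(i_nu1, i_nu2, nu1, nu2):
    """Spectral index delta for I_nu proportional to nu**(-delta),
    estimated from intensities at two frequencies."""
    return (np.log10(i_nu2) - np.log10(i_nu1)) / (np.log10(nu1) - np.log10(nu2))

# Sanity check with an assumed power law I_nu = nu**-0.8 between the two
# observing frequencies used in the text: the estimator must return 0.8.
nu1, nu2 = 1.5e9, 15e9
delta = spectral_index(nu1**-0.8, nu2**-0.8, nu1, nu2)
print(round(delta, 3))   # -> 0.8
```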
In the top panel of Fig.~\ref{fig:spectral} we show the Gaussian-filtered spectral index maps of the radio lobe considering two frequencies, $\nu_{1}=1.5$\,GHz and $\nu_{2}=15$\,GHz, for cases (a), (b), and (c).
All the spectral maps show, from top to bottom, signs of spectral steepening from the outer regions of the lobe (near the bow-shock) towards the inner part.
Such a spatial distribution can be further analyzed by observing the bottom panel of the figure, where we plot the vertical distribution of the spectral index value on the path depicted by the black dashed line shown in the corresponding top panel, from the inner region of the lobe to the outer region.
The spectral index distribution behaves similarly for all three cases, showing a rapid increase followed by a softer (or almost constant) one.
By analyzing the slope of this second part, we obtain an average value for case (a) as $-1.01$, while for cases (b) and (c) it is $-0.80$ and $-0.49$ respectively.
This implies that the radiation spectrum becomes harder as one increases $\alpha$ in the lobe.
The spatial extent of the region with constant spectral index is larger for case (b) as compared to cases (a) and (c).
For case (a), this directly follows from the absence of any continuous acceleration mechanism other than shocks, and the ensuing radiative cooling of the macro-particles in the back flow over a short time scale.
In contrast, for cases (b) and (c), STA provides additional continuous acceleration to the macro-particles.
For this reason, the macro-particles are able to radiate for a longer time, and the value of the spectral index is maintained over a longer distance.
Additionally, due to faster turbulence decay, case (c) maintains the spectral index for a shorter spatial extent as compared to case (b).
Our results have shown that the signature of continuous particle acceleration due to stochastic turbulence impacts several observables, including the spectral index variation along the lobe.
We also observed that while, with increasing $\alpha$ the spectral index value inside the lobe increases owing to the shorter acceleration timescale, the extent of the region with constant spectral index decreases due to turbulence decay.
The implications of the synthetic measures quantified in section~\ref{sec:results} for multi-wavelength observational signatures are discussed in the next section.
\section{Summary and Discussion}
\label{sec:summary}
In this work we have presented 2D axisymmetric large-scale numerical simulations of AGN jets using a fluid-particle hybrid approach, in order to focus on particle acceleration processes and emissions from radio lobes.
In spite of their limitations, and owing to the prohibitive cost of 3D computations, 2D models still provide fundamental insights on the interplay between different acceleration mechanisms and their influence on emission signatures.
Further, owing to the multi-scale nature of the system, the underlying turbulence is treated in a sub-grid manner, and its effect on cosmic ray transport is modelled with a phenomenologically motivated ansatz for the turbulent acceleration timescale that can mimic the turbulence decay process usually observed in various astrophysical sources.
By introducing this timescale, we solve the cosmic ray transport equation to evolve their energy distribution, accounting for diffusive shock acceleration (DSA), stochastic turbulent acceleration (STA) as well as for radiative losses (synchrotron and inverse Compton), as implemented in the PLUTO code by \cite{Vaidya_2018}.
We explore different scenarios by selectively including or excluding the aforementioned acceleration mechanisms, and study their effects on the emission signatures of the radio lobes.
Below, we summarise the primary results from this work.
\begin{itemize}
\item We observe significant modification of the energy spectra of macro-particles when turbulent acceleration is included in addition to DSA, as compared to the shock-only case.
The interplay of DSA, STA and turbulent decay results in features such as a flattening of the spectrum in the low-energy region and a dynamically evolving high energy cut-off.
These features produce curvature in the particle spectrum which can further manifest in the emission properties of the radio lobe \citep{duffy_2012}.
\item The analysis of the maximum attainable energy results in a unimodal PDF with a broken power-law tail when only shock acceleration is accounted for, while when both DSA and STA are included, the PDF exhibits a bimodal structure.
Further, the PDF of the average energy ($\gamma_{\rm avg}$) for each macro-particle shows a power-law profile with an exponential cut-off on inclusion of STA.
These distributions closely resemble the case in which STA is mediated by continuous particle injection and escape \citep[see Fig.~2b of ][]{katarzynski_2006}.
Here particle injection due to shocks acts as a source, while the escape is due to turbulence decay.
The lobe integrated spectrum exhibits a broken power-law like structure for DSA, whereas with STA it displays a high energy cut-off in addition to the low energy break.
The position of the low energy break corresponds to the $\gamma$ where radiative loss time becomes equal to the dynamical time.
The integrated spectrum generated by including STA can be utilised as a consistent input for one-zone radio lobe modelling that accounts for particle acceleration due to turbulence.
\item Further analysis of STA and its effect on sustaining the particles' energy against radiative cooling is performed through the evolution of $B_{\rm eq}/B_{\rm dyn}$ histograms, showing, for all three cases, that the system evolves to a state where $B_{\rm eq}\sim B_{\rm dyn}$.
However, with STA, the corresponding distributions become wider when compared to the only shock scenario, as a result of the additional energization.
\item The study of the synthetic SED of the simulated source demonstrates the existence of additional peaks in the radio band due to synchrotron emission and in the X-ray band through the IC-CMB mechanism when STA is taken into account.
Further analysis of the spatial distribution of the macro-particles corresponding to these additional peaks implies a more extended and diffuse emission in the X-ray band owing to the interplay of the two acceleration mechanisms.
The extent of the spatial distribution is further observed to be modulated by changing the value of $\alpha$ (see Eq.~\ref{eq:acc_time}).
This implies that, with an appropriate choice of $\alpha$, one might achieve diffuse emission around localized regions inside the radio lobe \citep[for example diffuse synchrotron emission around the hotspot of 3C445, see ][]{prieto_2002}.
\item The radio frequency spectral maps along with the spectral index profile inside the lobe indicate a harder emission spectrum due to STA as compared to the DSA case.
The spectral index is observed to remain constant over a distance inside the radio lobe whose length gets modulated with the efficiency of the turbulent acceleration.
The value of the spectral index in this region is $\sim -0.49$ for case (c), for case (b) it is $\sim -0.8$ and for case (a) it is $\sim -1.01$.
This kind of behaviour has also been found in various observations of radio lobes \citep{parma_1999}.
Radio lobes of parsec-scale AGN jets have been observed to exhibit similar characteristics \citep{hovatta_2014}.
However, it should be noted that observations of radio lobes show no evidence of spectral indices of $\approx-0.5$ or higher.
This may consequently impose a limit on the extent and effectiveness of STA in actual radio lobes.
\end{itemize}
\subsection*{Present Limitations and Future Extension}
The results shown in the present study represent a first step towards a more realistic description of the complex interaction between the turbulent radio lobe material and the non-thermal particles, and it is certainly limited by a number of considerations.
2D axisymmetric models, for instance, are similar to 3D models only in the case of stable jets and homogeneous media.
Time-dependent jet propagation is known to be prone to 3D instabilities (e.g., Kelvin–Helmholtz and current-driven modes) that cannot be captured by axisymmetric models \citep[see, e.g.][]{mignone_2010,bodo_2013,bodo_2016}.
These instabilities are known to have an effect on the jet emission \citep{acharya_2021,borse_2021} and can induce a range of non-axisymmetric structures, such as filaments and shocks along jets and in the back-flowing zone \citep[see for example][]{tregillis_2001,matthews_2019}.
Such non-axisymmetric structures are known to enhance the turbulence inside the back-flowing region and hence would strongly influence particle mixing \citep{jones_1999}.
Another potential issue with 2D axisymmetric simulations is that the induction equation (Eq.~\ref{eq:induction}) does not allow, because of the $\partial_{\phi}=0$ condition, conversion of toroidal magnetic field ($B_{\phi}$) to poloidal field \citep{porth_2013}.
This leads to the continuous amplification of the injected $B_{\phi}$ component in the computational domain over time, eventually affecting the jet dynamics.
However, for this work we consider a very small $B_{\phi}$ value to lessen any substantial impact on the dynamics.
Nevertheless, 2D computations still allow our method to be tested with finer grid spacing providing better resolution across shocked structures.
This would be computationally expensive in the fully 3D case.
Additionally, we also consider an un-magnetized ambient medium in the expectation that the magnetic field in the ambient medium will have minimal impact on the non-thermal particle transport within the lobe.
Our simulations describe the interaction between cosmic ray particles and jet materials although the former behaves essentially as a passive scalar without backreaction on the fluid.
Future extension will consider more exhaustive two-fluid approaches by also taking into account energy and momentum transfer between the two components in a self-consistent way \citep{girichidis_2020,ogrodnik_2021}.
It should be emphasized that the employment of parameters in our model is an unwanted, albeit necessary, consequence of the fact that large scale simulations cannot possibly resolve (and therefore sample) the small-scale turbulence regions.
Sub-scale micro-physical processes (such as turbulent acceleration timescale or MHD turbulence damping rate) must therefore be encoded through a sub-grid recipe.
In this work, we consider a one-parameter, exponentially decaying hard-sphere turbulence model of STA inside the radio lobe, with parameter values $\alpha=10^{4}, 10^{5}$, and compute the emission signatures from the radio lobes via synchrotron and IC-CMB processes.
Future extensions of this work will hopefully consider fully 3D investigations, where the impact of non-axisymmetric plasma instabilities may deeply affect the morphology.
Additionally, the sub-grid prescription of turbulence decay has a crucial role in governing some of the essential properties of emission.
\bibliographystyle{mnras}
\bibliography{lobe} |
Title:
Parametrizing gravitational-wave polarizations |
Abstract: We review the formalism underlying the modeling of gravitational wave (GW)
polarizations, and the coordinate frames used to define them. In the process,
we clarify the notion of "polarization angle" and identify three conceptually
distinct definitions. We describe how those are related and how they arise in
the practice of GW data analysis, explaining in detail the relevant conventions
that have become the LIGO-Virgo standard. Furthermore, we show that
any GW signal can be expressed as a superposition of elliptical (i.e.,
fully-polarized) states, and examine the properties and possible
parametrizations of such elementary states. We discuss a variety of common
parametrizations for fully-polarized modes, and compute Jacobians for the
coordinate transformations relating them. This allows us to examine the
suitability of each parametrization for different applications, including
unmodeled or semimodeled signal reconstructions. We point out that analyses
parametrized directly in terms of the plus and cross mode amplitudes will tend
to implicitly favor high signal power, and to prefer linearly-polarized waves
along a predefined direction; this makes them suboptimal for targeting face-on
or face-off sources, which will tend to be circularly polarized. We discuss
alternative parametrizations, with applications extending to continuous waves,
ringdown studies, and unmodeled analyses like BayesWave.
| https://export.arxiv.org/pdf/2208.03372 |
\title{Parametrizing gravitational-wave polarizations}
\author{Maximiliano Isi}
\email[]{misi@flatironinstitute.org}
\affiliation{Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY 10010}
\hypersetup{pdfauthor={Isi}}
\date{\today}
\begin{acronym}
\acro{GR}{general relativity}
\acro{CBC}{compact-binary coalescence}
\end{acronym}
\section{Introduction}
\label{sec:intro}
Gravitational waves (GWs) come in two distinct polarization states, whose amplitude and phase evolution reflect the structure of \ac{GR} and the dynamics of the source.
As for electromagnetic waves, these states are only unambiguously defined up to rotations of the reference frame around the wave's direction of propagation.
When analyzing data from detectors like LIGO \cite{TheLIGOScientific:2014jea} and Virgo \cite{TheVirgo:2014hva}, it is natural to parametrize polarizations differently depending on the application.
For instance, searches for \acp{CBC} aim to relate the signal observed by different detectors to templates obtained from theory, and thus benefit from describing GW polarizations in the same frame as the predictions (e.g., \cite{Faye:2012we,Kidder:2007rt}).
On the other hand, unmodeled (or semimodeled) analyses aim to reconstruct GWs without relying on detailed input from theory, and must instead make an arbitrary choice in orienting the polarization frame \cite{Klimenko:2004qh,Klimenko:2005xv,Klimenko:2008fu,Cornish:2014kda,Cornish:2020dwh}.
Furthermore, lacking waveform templates, unmodeled analyses must also decide how to parametrize the GW polarization state and its time evolution in a way sufficiently flexible to capture a range of morphologies while parsimonious enough to remain computationally tractable.
Analyses that focus on recovering signal power without coherently modeling the phase evolution may use yet different conventions \cite{Romano:2016dpx}.
The abundance of polarization parametrizations and reference directions is visible in the literature as well as in the implementation of data analysis software.
Such variety can cause confusion, and hinder comparisons across analyses with different conventions, or even complicate the interpretation of individual analysis outputs.
As an example, these conventions come into play when parametrizing continuous GWs from galactic pulsars, and in relating such (projected) measurements to electromagnetic observations of the source orientation (e.g., \cite{Ng:2007te,Dupuis:2005xv,Isi:2017equ,Pitkin:2017qfy}).
They are relevant in parametrizing ringdown signals for black hole spectroscopy, where multiple factorizations are possible for the polarization amplitudes (e.g., \cite{Isi:2021iql,Carullo:2019flw,LIGOScientific:2020tif,LIGOScientific:2021sio}).
They are also important for understanding the implications of different treatments of polarization ellipticity in unmodeled analyses, e.g., with the \textsc{BayesWave} algorithm \cite{Cornish:2014kda,Cornish:2020dwh,Chatziioannou:2021mij}, and in comparing such results to modeled \ac{CBC} inference.
This paper provides a comprehensive exposition of the formalism underlying GW polarizations as it pertains to practical applications.
The goal is twofold:
\emph{first}, pedagogical, in reviewing the relations between different polarization conventions, and in clarifying how these come to bear in real-world data analysis;
\emph{second}, technical, in explicitly working out the coordinate transformations that link different parametrizations, and providing ready-to-use expressions for the corresponding Jacobians---the mathematical factors that translate between posterior probability densities obtained under different parametrizations, which are required to exchange priors when carrying out Bayesian inference or similar applications.
In this work, the exposition is geared towards observers, or theorists interested in drawing connections to observation---as such, it strives for concreteness over abstraction, and, in particular, steers away from the rich formal connections between the treatment of GW polarizations and the mathematical structure of GR.
The review of GW polarizations begins in Sec.~\ref{sec:primer} with a derivation of signal decompositions into three different polarization bases: linear, circular and elliptical.
Having established the importance of elliptical (fully polarized) modes, Sec.~\ref{sec:ellip_modes} examines their key properties, outlines some of their uses, and sketches their connection to spin-weighted spherical harmonics.
Next, Sec.~\ref{sec:angles} carefully examines the different notions of ``polarization angle'' that arise for elliptical and nonelliptical signals, elucidating both their conceptual independence and their frequent interchangeability in practical applications.
Taking advantage of the mathematical formalism introduced in the preceding sections, Sec.~\ref{sec:jacobians} provides a census of different parametrizations of elliptical states, derives the Jacobians connecting them, and discusses their implications for parameter estimation.
Finally, Sec.~\ref{sec:nongr} briefly covers generalizations of these ideas to beyond-GR polarization states, and Sec.~\ref{sec:conclusion} concludes.
\section{Polarization primer}
\label{sec:primer}
\subsection{Linear basis}
\label{sec:linear}
In GR, there exist two propagating gravitational degrees of freedom, corresponding to two independent GW polarizations (e.g., \cite{Thorne1983,Thorne:1987af,Poisson2014,BT}).
At any given time, their local effect can be encoded in a driving matrix $h_{ij}$ representing the transverse-traceless part of the metric perturbation, also known as the \emph{gravitational-wave field}.
In a Cartesian frame with $z$-axis along the direction of propagation, we can write this matrix as
\beq \label{eq:hij}
(h_{ij}) = \begin{pmatrix}
h_+ & h_\times & 0 \\
h_\times & - h_+ & 0 \\
0 & 0 & 0
\end{pmatrix} ,
\eeq
where the plus ($+$) and cross ($\times$) polarization functions, $h_{+/\times}$, depend implicitly on the retarded time, $t - R/c$, in a way determined by the source dynamics and by the (luminosity) distance $R$ to the source, as well as on any other relevant source parameters controlling the amplitude and phase of the wave, as dictated by Einstein's equations.
It can be useful to rewrite \eq{hij} as $h_{ij} = h_+ e^+_{ij} + h_\times e^\times_{ij}$, in terms of the $e^{+/\times}_{ij}$ polarization basis tensors given by
\begin{subequations} \label{eq:lin}
\begin{align}
e^+_{ij} &\equiv \hat{x}_i \hat{x}_j - \hat{y}_i \hat{y}_j \, ,\\
e^\times_{ij} &\equiv \hat{x}_i \hat{y}_j + \hat{y}_i \hat{x}_j\, ,
\end{align}
\end{subequations}
where $\hat{x}$ and $\hat{y}$ are arbitrary orthonormal vectors that, with $\hat{z}$, form a right-handed Cartesian basis; we will call this the \emph{wave frame}.
Since this frame is constructed to have $\hat{z}$ aligned with the wavevector $\vec{k}$ (i.e., $\hat{z} = \hat{k} \equiv \vec{k}/|k|$), the polarization tensors are implicit functions of the wave propagation direction $\hat{k}$, or, equivalently, the source sky location $\hat{n} = -\hat{k}$.
For a given $\hat{k}$, it is easy to check that these tensors are orthogonal such that $e^p_{ij} e^{p'ij}=2\delta^{pp'}$ for $p,p'$ in $\{+,\times\}$.
We will refer to plus and cross jointly as the \emph{linear} polarization basis.
Their physical interpretation is best illustrated by their instantaneous effect on a small, freely-falling ring of particles, as shown in Fig.~\ref{fig:rings}.
Other polarization bases can be constructed, as we will see below, but the linear polarizations are generally the most convenient for expressing measurements.
In the small-antenna limit, the signal induced by a passing GW on a given detector can be written as the dyadic projection
\beq \label{eq:h}
h(t) \equiv D^{ij} h_{ij} = F_+ h_+ + F_\times h_\times\, ,
\eeq
with antenna patterns $F_{+/\times} \equiv D^{ij} e^{+/\times}_{ij}$ defined in terms of a detector tensor $D_{ij}$ that encodes the geometry of the measurement.
For a differential-arm detector like LIGO with arms pointing along unit vectors $\hat{X}$ and $\hat{Y}$, this is just $D_{ij} = (\hat{X}_i \hat{X}_j - \hat{Y}_i \hat{Y}_j)/2$.%
\footnote{These expressions are valid in the local Lorentz frame of the detector, so we can raise and lower indices with the flat metric.}
In this limit, the antenna patterns are thus purely geometric factors that encode the relative orientations of the detector and wave frames, as defined by $\{\hat{X},\, \hat{Y},\, \hat{Z}\}$ and $\{\hat{x},\, \hat{y},\, \hat{z}\}$ respectively.
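The geometric character of the antenna patterns is easy to make concrete. The sketch below (assuming NumPy; `antenna_patterns` is a hypothetical helper of ours, not a library routine) evaluates $F_{+/\times}$ for a wave incident from directly overhead on a detector with perpendicular arms, with the wave frame rotated by an angle $\psi$ about the line of sight:

```python
import numpy as np

def antenna_patterns(psi):
    """F_+ and F_x for a differential-arm detector with arms along the
    global x and y axes, wave propagating along +z, and wave-frame x_hat
    rotated by psi about z. Illustrative sketch, not a library API."""
    x_hat = np.array([np.cos(psi), np.sin(psi), 0.0])
    y_hat = np.array([-np.sin(psi), np.cos(psi), 0.0])
    e_plus = np.outer(x_hat, x_hat) - np.outer(y_hat, y_hat)
    e_cross = np.outer(x_hat, y_hat) + np.outer(y_hat, x_hat)
    X, Y = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    D = 0.5 * (np.outer(X, X) - np.outer(Y, Y))   # detector tensor
    return np.sum(D * e_plus), np.sum(D * e_cross)

Fp, Fc = antenna_patterns(0.3)
```

For this optimally oriented configuration the patterns reduce to $F_+ = \cos 2\psi$ and $F_\times = -\sin 2\psi$ (with the sign of $F_\times$ depending on frame conventions), so that $F_+^2 + F_\times^2 = 1$.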
After fixing the frame orientation, any plane GW may be expressed in terms of the Fourier components of its polarization amplitudes as
\begin{align}
\label{eq:planewave}
h_{ij}(t,\vec{x}) &= \frac{1}{2\pi}\int_{-\infty}^{+\infty} \tilde{h}_{ij}(\omega, \hat{k})\, e^{i\omega \left(\frac{\hat{k}\cdot\vec{x}}{c}-t\right)} \infd \omega \\
&= \frac{1}{2\pi} \sum_{p=+,\times} \int_{-\infty}^{+\infty} \tilde{h}_p(\omega)\, e^p_{ij}(\hat{k})\, e^{i\omega \left(\frac{\hat{k}\cdot\vec{x}}{c}-t\right)} \infd \omega \nonumber
\end{align}
where the sum is over linear polarization states ($+,\times$) defined in some wave frame attached to the propagation direction $\hat{k}$,%
\footnote{More broadly, the strain at any point in spacetime may be expressed with full generality as a superposition of such plane waves

by integrating over all directions of propagation (e.g., \cite{Romano:2016dpx,Isi:2018miq}).}
and we obtained the second line using the fact that the $e^{+/\times}_{ij}$ are real valued.
Equation \eqref{eq:planewave} implicitly defines the complex-valued Fourier polarization functions $\tilde{h}_p(\omega)$ to correspond to the time-domain polarizations at the spatial origin, $h_p(t) \equiv h_p(t, \vec{x}=0)$, by
\beq \label{eq:ft}
\tilde{h}_p(\omega) \equiv \int_{-\infty}^{+\infty} h_p(t)\, e^{i\omega t} \infd t \, ,
\eeq
establishing our convention for the Fourier transform.
Since $h_{ij}$ is real valued, the Fourier strain must satisfy the complex-conjugate symmetry $\tilde{h}_{ij}(-\omega, \hat{k}) = \tilde{h}_{ij}^*(\omega,\hat{k})$, where the asterisk indicates complex conjugation.
For the linear polarizations, this directly reduces to
\beq \label{eq:sym_linear}
\tilde{h}_{+/\times}(\omega) = \tilde{h}_{+/\times}^*(-\omega)\, ,
\eeq
because the linear basis tensors are themselves real valued.
As usual, then, the positive and negative frequencies must be considered as inseparable contributions to a single Fourier mode.
The existence of this symmetry reveals a redundancy in the description that we can exploit to write Eq.~\eqref{eq:planewave} more concisely.
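The conjugate symmetry of Eq.~\eqref{eq:sym_linear} is also what one finds for the discrete Fourier transform of any real-valued strain series. A quick check (assuming NumPy; note that `numpy.fft` places the minus sign in the forward-transform exponent, opposite to Eq.~\eqref{eq:ft}, which does not affect the symmetry):

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(128)     # any real-valued strain time series
ht = np.fft.fft(h)               # discrete analogue of Eq. (ft)

# Negative frequencies sit at indices N - k; reality of h(t) implies
# h~(-omega) = h~*(omega), bin by bin.
N = len(h)
k = np.arange(1, N // 2)
assert np.allclose(ht[N - k], np.conj(ht[k]))
```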
\subsection{Circular basis}
First, instead of the linear plus and cross polarizations above, we could equivalently work with the associated \emph{circular} right-handed (R) and left-handed (L) modes.
These are defined in the Fourier domain by the complex-valued basis tensors
\beq \label{eq:circ}
e^{R/L}_{ij} \equiv \frac{1}{\sqrt{2}} \left(e^+_{ij} \pm i e^\times_{ij} \right) ,
\eeq
with the plus (minus) sign corresponding to R (L).
These tensors are also orthogonal and normalized similarly to $e^{+/\times}_{ij}$ such that $(e^{p'ij})^* e^p_{ij} = 2 \delta^{pp'}$ for $p,p'$ in $\{R,L\}$.
To understand the physical significance of the circular polarizations,
consider a purely R-polarized monochromatic mode $h^R_{ij}$ with frequency $\omega > 0$, unit amplitude and zero phase offset; at the spatial origin ($\vec{x}=0$), the time-domain strain from such a mode will be given by
\begin{align} \label{eq:circ_example}
h_{ij}^R(t;\omega) &= \Re \left[ e^R_{ij}\, \exp(-i\omega t) \right] \nonumber\\
&= \frac{1}{\sqrt{2}} \left( e^+_{ij} \cos \omega t + e^\times_{ij} \sin \omega t \right) ,
\end{align}
using the definition from Eq.~\eqref{eq:circ}.
In the 2D Cartesian space defined by the linear polarization amplitudes, i.e., $\left(h_+, h_\times\right)$, this defines a circle, around which the \emph{phasor} encoding the state of the wave rotates counterclockwise (for $\omega > 0$).
This means that, at any given time, the wave will have a unit total amplitude (i.e., $h_{ij} h^{ij} = 2(h^2_+ + h^2_\times) = 1$) and the cross polarization will \emph{lag behind} the plus polarization by $\pi/2$ radians in phase.
Consequently, a purely R-polarized wave will deform a ring of freely-falling particles into an elliptical pattern that is seen to rotate counter-clockwise when looking towards the source (Fig.~\ref{fig:pol_diagram_circ}), i.e., it follows the right-hand rule relative to the direction of propagation (pointing away from the source).
The opposite will be true for purely L-polarized waves, which will result in a clockwise-rotating ellipse.
This assignment of the ``right'' and ``left'' labels is known as the ``source based'' handedness convention.
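This handedness assignment can be illustrated numerically (assuming NumPy): the phasor of the unit R mode in Eq.~\eqref{eq:circ_example} traces its circle counterclockwise at constant total amplitude:

```python
import numpy as np

omega = 2 * np.pi
t = np.linspace(0.0, 0.2, 50)

# Quadratures of the unit R mode, Eq. (circ_example).
h_plus = np.cos(omega * t) / np.sqrt(2)
h_cross = np.sin(omega * t) / np.sqrt(2)

# h_ij h^ij = 2 (h_+^2 + h_x^2) stays constant (unit amplitude) ...
assert np.allclose(2 * (h_plus**2 + h_cross**2), 1.0)

# ... and the phasor angle increases monotonically: counterclockwise rotation.
phase = np.unwrap(np.arctan2(h_cross, h_plus))
assert np.all(np.diff(phase) > 0)
```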
The orthogonality and completeness of the tensors in Eq.~\eqref{eq:circ} mean that we can rewrite Eq.~\eqref{eq:planewave} in terms of the circular polarizations without loss of generality as the sum
\beq \label{eq:planewave_circ}
h_{ij}(t,\vec{x})
= \frac{1}{2\pi} \sum_{p=R,L} \int_{-\infty}^{+\infty} \tilde{h}_p(\omega)\, e^p_{ij}(\hat{k})\, e^{i\omega \left(\frac{\hat{k}\cdot\vec{x}}{c}-t\right)} \infd \omega \, ,
\eeq
where $\tilde{h}_{R/L}$ and $e^{R/L}_{ij}$ have replaced their $+/\times$ counterparts.
As is straightforward to show from Eq.~\eqref{eq:circ}, the circular and linear polarization Fourier amplitudes are related by
\begin{align} \label{eq:circ_amps}
\tilde{h}_{R/L} = \frac{1}{\sqrt{2}} \left(\tilde{h}_+ \mp i\tilde{h}_\times \right) ,
\end{align}
with the minus (plus) sign for R (L).
Based on this, the complex-conjugate condition of Eq.~\eqref{eq:sym_linear} implies that
\beq \label{eq:sym_circular}
\tilde{h}_R(\omega) = \tilde{h}_L^*(-\omega) \, ,
\eeq
which again manifests the redundancy in Eq.~\eqref{eq:planewave_circ}, as in Eq.~\eqref{eq:planewave}.
It also reveals that R and L switch roles for $\omega \to - \omega$, as we anticipated below Eq.~\eqref{eq:circ_example}, indicating that these states are invariant under parity-time reversals.
\subsection{Elliptical basis}
\label{sec:ellip}
Next, it is convenient to encode the two linear GW polarizations as quadratures of a single complex-valued scalar field,
\beq
H(t) \equiv h_+ - i h_\times,
\eeq
in the time domain.
This complex number provides an alternative representation of the $\left(h_+, h_\times\right)$ phasor introduced in the previous section (see bottom panel of Fig.~\ref{fig:pol_diagram_circ}).
If this quantity, the \emph{complex strain}, is purely real (imaginary), then the wave is purely plus (cross) polarized.
In those same terms, a unit-amplitude circularly-polarized mode like the one in \eq{circ_example} can be expressed simply as $H(t) = \exp(\mp i \omega t)/\sqrt{2}$, with the minus (plus) sign in the exponent corresponding to R (L) for $\omega > 0$.%
\footnote{The choice of sign in the definition of the complex strain as $h_+ - ih_\times$ matches the convention of the Fourier transform in Eq.~\eqref{eq:ft} in order to make it so that $\exp(-i|\omega| t)$ encodes a right-handed mode as defined in the source-based convention.}
Using this fact, an economic way of expressing the information in Eq.~\eqref{eq:planewave} for any given direction of propagation $\hat{k}$ is to write the time-domain complex strain at the spatial origin ($\vec{x}=0$) as a Fourier integral of the form
\begin{align} \label{eq:hcomp_fd}
H(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \tilde{H}(\omega)\, e^{-i \omega t} \,\infd \omega \, ,
\end{align}
where the complex-valued Fourier amplitudes are defined by $\tilde{H}(\omega) \equiv \int (h_+ - i h_\times) \exp(i\omega t)\, \infd t = \tilde{h}_+(\omega) - i \tilde{h}_\times(\omega)$, following our Fourier transform convention in Eq.~\eqref{eq:ft}.
Unlike in Eq.~\eqref{eq:planewave}, it is clear that these Fourier amplitudes will not generally satisfy the symmetry $\tilde{H}(-\omega) = \tilde{H}^*(\omega)$, since the quantity on the left-hand side of Eq.~\eqref{eq:hcomp_fd} is not real valued unless the wave is fully plus-polarized.
In fact, given the interpretation of $\exp(\pm i \omega t)$ discussed above, the positive (negative) frequency Fourier amplitudes in Eq.~\eqref{eq:hcomp_fd} must encode contributions from the R-polarized (L-polarized) portion of the waveform.
This becomes obvious if we note that, by Eq.~\eqref{eq:circ_amps} and the definition of $\tilde{H}$, it must be the case that $\tilde{H}(\omega) = \sqrt{2}\, \tilde{h}_R (\omega) = \sqrt{2} \tilde{h}_L^*(-\omega)$, the last equality being due to Eq.~\eqref{eq:sym_circular}.
We can leverage this to rewrite Eq.~\eqref{eq:hcomp_fd} as an integral restricted to positive frequencies,
\begin{equation} \label{eq:hcomp_fd_rl}
H(t) = \frac{1}{\sqrt{2\pi^2}} \int_{0}^{\infty} \left[ \tilde{h}_R(\omega)\, e^{-i \omega t} + \tilde{h}_L^*(\omega)\, e^{i \omega t}\right] \infd \omega \, .
\end{equation}
This expression carries the same information as Eq.~\eqref{eq:planewave} without any redundancies.
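The chain of identities behind Eq.~\eqref{eq:hcomp_fd_rl}, in particular $\tilde{H}(\omega) = \sqrt{2}\, \tilde{h}_R(\omega) = \sqrt{2}\, \tilde{h}_L^*(-\omega)$, can be checked on arbitrary discrete data (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
hp, hc = rng.standard_normal(64), rng.standard_normal(64)  # arbitrary real polarizations

H = np.fft.fft(hp - 1j * hc)               # complex strain H = h+ - i hx, Eq. (hcomp_fd)
hpt, hct = np.fft.fft(hp), np.fft.fft(hc)
hR = (hpt - 1j * hct) / np.sqrt(2)         # Eq. (circ_amps)
hL = (hpt + 1j * hct) / np.sqrt(2)

assert np.allclose(H, np.sqrt(2) * hR)     # H~(w) = sqrt(2) h~_R(w)

# H~(-w) = sqrt(2) h~_L*(w): negative frequencies sit at indices N - k.
N, k = 64, np.arange(1, 32)
assert np.allclose(H[N - k], np.sqrt(2) * np.conj(hL[k]))
```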
Equation \eqref{eq:hcomp_fd_rl} lends itself to a straightforward physical interpretation.
Any plane GW, with arbitrary time evolution and polarization state (including unpolarized states), can be expressed as a superposition of fully-polarized Fourier modes;
each such monochromatic mode of frequency $|\omega|$ is made up of two counterrotating circularly-polarized contributions (R and L, the two summands) that add up to a single elliptically polarized mode.
Such elliptical, or \emph{fully-polarized}, modes are thus of fundamental importance; we discuss their properties in detail below, beginning with modes of a definite frequency as they appear in \eq{hcomp_fd_rl}.
\section{Elliptical modes}
\label{sec:ellip_modes}
\subsection{Monochromatic modes}
\label{sec:ellip:mono}
\subsubsection{Morphology}
\label{sec:ellip:mono:morph}
Elliptical GWs define an ellipse in the $\left(h_+, h_\times\right)$ phasor space (Fig.~\ref{fig:ellipse}).
We can see this explicitly for the Fourier modes in Eq.~\eqref{eq:hcomp_fd_rl} above by considering a monochromatic signal given by $\tilde{h}_{R/L}(\omega) = \pi\, \delta(\omega-\omega_0)\, C_{R/L} $,%
\footnote{The prefactor can be motivated by noting that $\tilde{h}_{R/L}(\omega) = 2\pi \delta(\omega - \omega_0) C_{R/L}/2 = \delta(f - f_0) C_{R/L} / 2$ implies that $C_{R/L}/2$ are amplitude densities with respect to the frequency $f \equiv \omega/2\pi$; the additional factor of $1/2$ normalizes the signal power such that $h_{ij} h^{ij} = 1$ for $|C_R|=1, |C_L|=0$ or $|C_R|=0, |C_L|=1$.}
isolating a single Fourier mode of frequency $\omega_0 >0$ and complex-valued amplitudes $\pi\, C_{R/L}$.
The result of the Fourier integral for $H(t) \equiv h_+ - i h_\times$ is then (relabeling $\omega_0 \to \omega$ after integration)
\begin{align} \label{eq:ellip_circ}
H(t) =\frac{1}{\sqrt{2}} \left( C_R\, e^{-i \omega t} + C^*_L\, e^{i\omega t}\right)\, ,
\end{align}
for complex amplitudes $C_{R/L} \equiv A_{R/L} \exp(i\phi_{R/L})$, where $A_{R/L}$ and $\phi_{R/L}$ are real valued.
Without loss of generality, the above expression can be refactored into%
\footnote{This is the same parametrization we defined in \cite{Isi:2021iql} up to a factor of $\sqrt{2}$ in the circular polarization amplitudes.}
\begin{align}
H(t) = \frac{1}{2}A\hspace{-2pt}\left[ \left(1+\epsilon\right) e^{-i (\omega t - \phi_R)} + \left(1-\epsilon\right) e^{i (\omega t - \phi_L)} \right]\hspace{-3pt} .
\end{align}
Here
$A \equiv (A_R + A_L)/ \sqrt{2}$ is the peak amplitude of the mode, and $\epsilon = (A_R - A_L)/(A_R + A_L)$ is its ellipticity.
With some trigonometry, it is easy to show that this corresponds to linear polarization quadratures given by
\begin{subequations} \label{eq:hcomp_ellip}
\beq
h_+ = A \left[\cos \theta \cos(\omega t - \phi) - \epsilon \sin \theta \sin(\omega t - \phi)\right] ,
\eeq
\beq
h_\times = A \left[\sin \theta \cos(\omega t - \phi) + \epsilon \cos \theta \sin(\omega t - \phi)\right] ,
\eeq
\end{subequations}
with $\phi \equiv (\phi_L + \phi_R)/2$ and $\theta \equiv (\phi_L - \phi_R)/2$.%
\footnote{Since $\phi_{R/L}$ are $2\pi$-periodic, the most generic relation between them and $\{\theta$, $\phi\}$ is actually $\theta = [\phi_L - \phi_R + 2\pi (k-j)]/2 \mod 2\pi$ and $\phi = [\phi_L + \phi_R + 2\pi (k+j)]/2 \mod 2\pi$ for any integers $k,j$. \label{foot:angles}}
In the $\left(h_+,h_\times\right)$ plane, this defines an ellipse with semimajor axis $A$ and semiminor axis $\epsilon A$, oriented so as to subtend an angle $\theta$ between the semimajor axis and the $h_+$ axis, and with an initial location around the ellipse given by $-\phi$ (Fig.~\ref{fig:ellipse}).
The total power in this mode is given by the square of the \emph{intensity amplitude}, which we define as $\hat{A} \equiv \sqrt{A_R^2 + A_L^2} = A \sqrt{1 + \epsilon^2}$.
Equation \eqref{eq:hcomp_ellip} encapsulates all possible morphologies of a monochromatic, fully polarized wave.\footnote{We use ``elliptical'' generically to also encompass circular and linear polarizations as special cases; in this sense, ``elliptical'' and ``fully polarized'' are synonyms.}
As special cases, $\epsilon = +1$ ($\epsilon = -1$) encodes an R (L) circularly-polarized wave, while $\epsilon =0$ encodes a $+$ ($\times$) linearly-polarized wave if $\theta = 0,\pi$ ($\theta = \pm \pi/2$);
an example in between, with $\epsilon=1/2$ and $\theta = \pi/2$, is illustrated in Fig.~\ref{fig:pol_diagram_ellip} (compare to Fig.~\ref{fig:pol_diagram_circ}, where $\epsilon=1$).
Each Fourier component in Eq.~\eqref{eq:hcomp_fd_rl} is a fully polarized mode of this kind, with ellipticity determined by the relative magnitudes of $\tilde{h}_{R/L}(\omega)$, and ellipse orientation determined by the difference in their Fourier phases, through $\theta = (\arg \tilde{h}_L - \arg \tilde{h}_R)/2$.
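The equivalence between the circular-basis form of Eq.~\eqref{eq:ellip_circ} and the elliptical parametrization of Eq.~\eqref{eq:hcomp_ellip} can be confirmed numerically for arbitrary amplitudes and phases (a sketch assuming NumPy):

```python
import numpy as np

AR, AL, phiR, phiL = 0.8, 0.3, 0.4, 1.1       # arbitrary circular amplitudes/phases
w, t = 2 * np.pi, np.linspace(0.0, 1.0, 200)
CR, CL = AR * np.exp(1j * phiR), AL * np.exp(1j * phiL)

# Eq. (ellip_circ): circular-basis form of the mode, H = h+ - i hx.
H_circ = (CR * np.exp(-1j * w * t) + np.conj(CL) * np.exp(1j * w * t)) / np.sqrt(2)

# Eq. (hcomp_ellip): elliptical parametrization of the same mode.
A = (AR + AL) / np.sqrt(2)
eps = (AR - AL) / (AR + AL)
phi, th = (phiL + phiR) / 2, (phiL - phiR) / 2
hp = A * (np.cos(th) * np.cos(w*t - phi) - eps * np.sin(th) * np.sin(w*t - phi))
hc = A * (np.sin(th) * np.cos(w*t - phi) + eps * np.cos(th) * np.sin(w*t - phi))

assert np.allclose(H_circ, hp - 1j * hc)
```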
\newcommand{\jonesbasis}{\vec{\mathfrak{e}}}
The domain for the parameters in \eq{hcomp_ellip} is $A \geq 0$, $-1 \leq \epsilon \leq 1$, $0 \leq \theta < 2\pi$ and $0 \leq \phi < 2\pi$ (or, equivalently, $-\pi \leq \theta < \pi$ and $-\pi \leq \phi < \pi$).
However, allowing $\theta$ and $\phi$ to vary freely over this range results in a double covering of the waveform space; this is because the template is invariant under the addition or subtraction of $\pi$ to both $\theta$ and $\phi$, i.e., under the transformations $\{\theta, \phi\} \to \{\theta \pm \pi, \phi \pm \pi\}$, for any combination of plus and minus signs.
The existence of this degeneracy is easy to infer from Fig.~\ref{fig:ellipse}, and can be traced back to the property discussed in footnote \ref{foot:angles} in relation to $\phi_{R/L}$.
Within the $[-\pi, \pi]$ branch cut, the $\{\theta, \phi\}$ space can therefore be restricted to a diamond bounded by the four diagonals satisfying $|\phi| = \pi \pm \theta$.
This comes into play in practice when translating between probability densities obtained under different parametrizations, as we do in Sec.~\ref{sec:jacobians} (see in particular Fig.~\ref{fig:jac_Aeps_Arl_angles}).
The requirement that $\theta$ extend all the way up to $2\pi$ (or $\pm \pi$) arises from our definition of the phase angle $\phi$ with respect to the semimajor axis of the ellipse (see Fig.~\ref{fig:ellipse}).
Fundamentally, however, $\theta$ need only be specified over half that range in order to determine the \emph{orientation} of the ellipse, disregarding the signal phase.
Indeed, if we instead chose to work in terms of a phase angle $\bar{\phi} \equiv \theta - \phi = -\phi_R$ measured counterclockwise from the $h_+$ axis (and thus decoupled from $\theta$), \eq{hcomp_ellip} would become
\begin{subequations} \label{eq:hcomp_ellip_2th}
\beq
h_+ = \frac{A}{2} \left[\left(1+\epsilon\right) \cos(\omega t + \bar{\phi}) + \left(1 - \epsilon\right) \cos(\omega t + \bar{\phi} - 2\theta) \right] ,
\eeq
\beq
h_\times = \frac{A}{2} \left[\left(1 + \epsilon\right) \sin(\omega t + \bar{\phi}) -\left(1-\epsilon\right) \sin(\omega t + \bar{\phi} - 2\theta) \right] ,
\eeq
\end{subequations}
where now $\theta$ only enters the template as $2\theta$, and so $0 \leq \theta < \pi$ (or $-\pi/2 \leq \theta < \pi/2$) spans the full space of waveforms, with the initial state set freely by $0 \leq \bar{\phi} < 2\pi$.
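Both the equivalence of the two phase conventions and the $\{\theta, \phi\} \to \{\theta + \pi, \phi + \pi\}$ degeneracy noted above are straightforward to verify numerically (assuming NumPy):

```python
import numpy as np

A, eps, th, phi = 1.0, 0.4, 2.5, 1.0           # arbitrary elliptical mode
w, t = 2 * np.pi, np.linspace(0.0, 1.0, 300)

# Eq. (hcomp_ellip): phase measured from the semimajor axis.
hp1 = A*(np.cos(th)*np.cos(w*t - phi) - eps*np.sin(th)*np.sin(w*t - phi))
hc1 = A*(np.sin(th)*np.cos(w*t - phi) + eps*np.cos(th)*np.sin(w*t - phi))

# Eq. (hcomp_ellip_2th): phase phibar = theta - phi measured from the h+ axis.
pb = th - phi
hp2 = (A/2)*((1+eps)*np.cos(w*t + pb) + (1-eps)*np.cos(w*t + pb - 2*th))
hc2 = (A/2)*((1+eps)*np.sin(w*t + pb) - (1-eps)*np.sin(w*t + pb - 2*th))
assert np.allclose(hp1, hp2) and np.allclose(hc1, hc2)

# Degeneracy: {theta, phi} -> {theta + pi, phi + pi} leaves the waveform unchanged.
hp3 = A*(np.cos(th+np.pi)*np.cos(w*t - phi - np.pi)
         - eps*np.sin(th+np.pi)*np.sin(w*t - phi - np.pi))
assert np.allclose(hp1, hp3)
```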
We can obtain another useful parametrization for fully polarized states by replacing the ellipticity parameter $\epsilon$ in \eq{hcomp_ellip} with an angle $\chi \equiv \arctan \epsilon$, which is also illustrated in Fig.~\ref{fig:ellipse}.
In terms of this quantity and the intensity amplitude $\hat{A}=A\sqrt{1+\epsilon^2}=A \sec\chi$, the elliptical mode of \eq{hcomp_ellip} becomes
\begin{subequations} \label{eq:hcomp_ellip_chi}
\beq
h_+ = \hat{A} \left[\cos\chi \cos \theta \cos(\omega t - \phi) - \sin\chi \sin \theta \sin(\omega t - \phi)\right] ,
\eeq
\beq
h_\times = \hat{A} \left[\cos\chi \sin \theta \cos(\omega t - \phi) + \sin\chi \cos \theta \sin(\omega t - \phi)\right] .
\eeq
\end{subequations}
Now, $\chi = 0$ gives a linearly polarized state, while $\chi=\pm \pi/4$ gives an R/L circularly polarized state.
Its domain is given by $-\pi/4 \leq \chi \leq \pi/4$, as implied by $-1 \leq \epsilon \leq 1$.
\subsubsection{Mathematical framework}
\label{sec:math}
The mathematical treatment of polarized GW states is entirely analogous to the electromagnetic case.
To start, any of these states can be represented graphically by a series of phasor diagrams like the one in Fig.~\ref{fig:ellipse}, as in the bottom of Figs.~\ref{fig:pol_diagram_circ} and \ref{fig:pol_diagram_ellip}.
For monochromatic modes (i.e., of a definite frequency $\omega$), the same information can also be encoded algebraically in a complex valued \emph{Jones vector} $\vec{C}$ like
\beq \label{eq:jones}
\begin{pmatrix}
h_+\\
h_\times
\end{pmatrix} \equiv
\Re \left[ \begin{pmatrix}
C_+\\
C_\times
\end{pmatrix} e^{-i\omega t}\right] \equiv
\Re \left[ \vec{C}\, e^{-i\omega t}\right] ,
\eeq
with $C_{+/\times} \equiv A_{+/\times} \exp(i\phi_{+/\times})$.
In that notation, $ \jonesbasis_+ \equiv \left(1, 0\right)$ encodes a unit-amplitude linearly polarized $+$ mode, and $\jonesbasis_\times \equiv \left(0,1\right)$ a $\times$ mode; meanwhile, the vectors $\jonesbasis_{R/L} \equiv \left(1,\pm i\right)/\sqrt{2}$ encode circular R/L modes, with the plus sign for R.
Thus, the generic signal in \eq{jones} can be equally conveyed by
\beq \label{eq:jones_bases}
\vec{C} = C_+ \, \jonesbasis_+ + C_\times \, \jonesbasis_\times = C_R \, \jonesbasis_R + C_L \, \jonesbasis_L\, ,
\eeq
with $C_{R/L} = (C_+ \mp i C_\times)/\sqrt{2}$ the same complex amplitudes as in \eq{ellip_circ}---although note that here $C_L$ appears without conjugation.
We will briefly make use of Jones vectors to facilitate coordinate transformations below.
Considering the parametrization in \eq{hcomp_ellip_chi}, we have two angles that fully define the shape of the polarization ellipse, $\chi$ and $\theta$.
If we interpret $-\pi/2 \leq 2\chi \leq \pi/2$ and $0 \leq 2\theta \leq 2\pi$ respectively as latitude and longitude coordinates, then the space of all unique polarization states can be arranged into a sphere such that linear polarization states of different orientations live on the equator ($\chi = 0$), and circular states live on the poles ($2\chi = \pm \pi/2$) \cite{poincare,goldstein}.
Any two antipodal states in this so-called \emph{Poincar\'e sphere} can function as a polarization basis.
In this language, reexpressing Eq.~\eqref{eq:planewave} as Eq.~\eqref{eq:planewave_circ} amounted to effecting a Poincar\'{e} rotation of our basis vectors.
The polarization ellipse (Fig.~\ref{fig:ellipse}) can be recovered from the Poincar\'e sphere by a stereographic projection.
If we scale the radius of the Poincar\'{e} sphere to be the signal intensity $I \equiv \hat{A}^2$, then it can be defined in terms of Cartesian coordinates corresponding to the three other \emph{Stokes parameters} that characterize the distribution of power in the signal across different polarization states \cite{Anile1974}.
For a fully polarized monochromatic mode, in addition to $I$ itself, these are given by
\begin{subequations} \label{eq:stokes}
\beq
Q \equiv |C_+|^2 - |C_\times|^2 = \hat{A}^2 \cos 2\chi \cos 2\theta \, ,
\eeq
\beq
U \equiv C_+ C_\times^* + C_+^* C_\times = \hat{A}^2 \cos 2\chi \sin 2\theta \, ,
\eeq
\beq
V \equiv |C_R|^2 - |C_L|^2 = \hat{A}^2 \sin 2\chi \, ,
\eeq
\end{subequations}
for $C_+ = (C_R + C_L)/\sqrt{2}$ and $C_\times = i (C_R - C_L)/\sqrt{2}$.
As implied by the definitions above, $Q/I$ controls the (power) fraction of linear polarization, $U/I$ the orientation of the linear component, and $V/I$ the fraction of circular polarization.
The Poincar\'{e} sphere is then the sphere of radius $I$ centered on $\left(Q=0, U=0, V=0\right)$.
For a fully polarized state, the Stokes parameters (quantifying signal power) are equivalent to the polarization quantities $\left\{A, \epsilon, \theta\right\}$ or $\{\hat{A}, \chi, \theta\}$ defining the ellipse in Fig.~\ref{fig:ellipse} (and quantifying signal amplitude).
Because they are defined in terms of power, Stokes parameters do not retain phasing information, but have the advantage of being easily generalizable to fully or partially unpolarized waves, which can be achieved by replacing the definition in Eq.~\eqref{eq:stokes} with corresponding two-point correlation functions (power spectra); in the fully-unpolarized case, $Q=U=V=0$ and there is no Poincar\'{e} sphere to speak of.
The Stokes parameters are thus especially useful when dealing with stochastic signals \cite{Romano:2016dpx,Conneely:2018wis,Seto:2008sr,Kato:2015bye}; since we will be dealing mainly with phase-coherent signals, we will not make further reference to Stokes parameters in what follows.
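The Stokes definitions in Eq.~\eqref{eq:stokes} can be exercised numerically. Constructing the Jones amplitudes implied by Eq.~\eqref{eq:hcomp_ellip_chi} (the explicit $C_{+/\times}$ expressions below are our own intermediate step, derived from that parametrization), one recovers $I = \hat{A}^2$ and, for a fully polarized state, $I^2 = Q^2 + U^2 + V^2$ (assuming NumPy):

```python
import numpy as np

Ahat, chi, th, phi = 1.3, 0.2, 0.7, 0.5   # arbitrary fully polarized state

# Jones amplitudes of the mode in Eq. (hcomp_ellip_chi), h_p = Re[C_p e^{-iwt}].
Cp = Ahat * np.exp(1j*phi) * (np.cos(chi)*np.cos(th) - 1j*np.sin(chi)*np.sin(th))
Cc = Ahat * np.exp(1j*phi) * (np.cos(chi)*np.sin(th) + 1j*np.sin(chi)*np.cos(th))
CR, CL = (Cp - 1j*Cc)/np.sqrt(2), (Cp + 1j*Cc)/np.sqrt(2)

# Stokes parameters from their definitions, Eq. (stokes).
Istokes = abs(CR)**2 + abs(CL)**2
Q = abs(Cp)**2 - abs(Cc)**2
U = (Cp * np.conj(Cc) + np.conj(Cp) * Cc).real
V = abs(CR)**2 - abs(CL)**2

assert np.isclose(Istokes, Ahat**2)
assert np.isclose(Q, Ahat**2 * np.cos(2*chi) * np.cos(2*th))
assert np.isclose(V, Ahat**2 * np.sin(2*chi))
assert np.isclose(Istokes**2, Q**2 + U**2 + V**2)   # fully polarized state
```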
\subsection{Non-monochromatic modes}
\label{sec:ellip:gen}
We arrived at the expression for a fully-polarized, monochromatic GW in Eq.~\eqref{eq:hcomp_ellip} by way of the generic Fourier decomposition of a plane wave in Eq.~\eqref{eq:hcomp_fd_rl}, wherein elliptical modes appear naturally with a determinate frequency.
Yet, we may also speak of fully-polarized states even if the signal is not monochromatic.
The argument applies to any high-frequency coherent wave, i.e., any signal that can be written as a slowly varying amplitude modulating a fast phase.
In that case, the polarization parameters $\{A, \epsilon, \theta\}$ can be defined instantaneously using the stationary phase approximation or similar procedures.
This way, any GW with a constant polarization state, i.e., whose polarization ellipse takes a fixed, determinate shape (but not necessarily scale), can be encapsulated by an expression of the form
\begin{subequations} \label{eq:ellip_gen}
\begin{equation} %
h_+ = \mathcal{A}(t) \left[\cos \Phi(t) \cos \theta - \epsilon \sin \Phi(t) \sin\theta \right] ,
\end{equation}
\begin{equation} %
h_\times = \mathcal{A}(t) \left[ \cos \Phi(t) \sin \theta + \epsilon \sin \Phi(t) \cos\theta \right] ,
\end{equation}
\end{subequations}
enhancing Eq.~\eqref{eq:hcomp_ellip} with a (slowly) time varying amplitude $\mathcal{A}(t)$ and a (quickly) time varying phase $\Phi(t)$, which need no longer grow linearly with time.
Following this expression, the aspect ratio and orientation of the polarization ellipse remain constant,%
\footnote{The shape of the ellipse could also be made to vary adiabatically via $\epsilon$ and $\theta$, but that is seldom done in real-world applications.}
while its size may increase or decrease according to $\mathcal{A}(t)$.
The initial state of the signal is defined by the initial amplitude $A = \mathcal{A}(t=0)$ and phase $\phi = \Phi(t=0)$.
Most conceivable signals are neither monochromatic nor fully polarized.
Nevertheless, a large variety of morphologies can be captured by a finite superposition of elliptically polarized modes, potentially with time-varying polarization parameters as above.
This should be apparent from the fact that an (uncountably) \emph{infinite} set of elliptical modes can describe \emph{any} GW signal, as we showed in \eq{hcomp_fd_rl}.
For many practical applications, it is advantageous to decompose signals into
sums of fully-polarized modes in the shape of \eq{ellip_gen},
\begin{subequations} \label{eq:ellip_sum}
\begin{equation} \label{eq:ellip_sum_p}
h_+ = \sum \mathcal{A}_n(t) \hspace{-1pt} \left[\cos \Phi_n(t) \cos \theta_n - \epsilon_n \sin \Phi_n(t) \sin\theta_n \right] ,
\end{equation}
\begin{equation} \label{eq:ellip_sum_c}
h_\times = \sum \mathcal{A}_n(t) \hspace{-1pt} \left[ \cos \Phi_n(t) \sin \theta_n + \epsilon_n \sin \Phi_n(t) \cos\theta_n \right] ,
\end{equation}
\end{subequations}
with a sum over some number of modes indexed by $n$, with amplitudes and phases taking some prescribed functional form for each $n$.
The form of Eq.~\eqref{eq:ellip_sum} is flexible enough that it can be used in practice to model arbitrary signals in real detector data.
For example, that is the strategy taken by \textsc{BayesWave} \cite{Cornish:2014kda,Cornish:2020dwh}, which reconstructs generic GW signals by fitting a variable number of elliptically-polarized sine-Gaussians.%
\footnote{\textsc{BayesWave} can currently operate in two configurations: one which assumes the overall signal is elliptically polarized, and another which does not.}
It is also the strategy taken in ringdown studies, which fit the final portion of a compact binary signal as a superposition of elliptically polarized damped sinusoids \cite{Isi:2021iql}.
For such applications, each phasing function will usually correspond to some given frequency $\omega_n$ as in a Fourier expansion, so that $\Phi_n(t) = \omega_n t + \phi_n$; meanwhile, the $\mathcal{A}_n(t)$ functions encode amplitude envelopes evolving slowly over some timescale $\tau_n \equiv 1/\gamma_n$ (or, equivalently, with some quality factor $Q_n \equiv \omega_n \tau_n/2$).
For example, in the case of ringdown templates, $\mathcal{A}_n(t) = A_n \exp(-\gamma_n t)$ and $\Phi_n(t) = \omega_n t + \phi_n$, for some set of frequencies and damping rates to be inferred from the data together with polarization parameters $\{ A_n, \epsilon_n, \theta_n, \phi_n\}$.
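As an illustration, a single damped elliptical mode of this kind can be generated as follows (a sketch assuming NumPy; the function name and parameters are ours, not any analysis library's API):

```python
import numpy as np

def ringdown_polarizations(t, A, eps, theta, phi, f, tau):
    """One elliptically polarized damped sinusoid: the n-th summand of
    Eq. (ellip_sum) with A_n(t) = A exp(-t/tau) and
    Phi_n(t) = 2*pi*f*t + phi. Illustrative sketch only."""
    amp = A * np.exp(-t / tau)
    Phi = 2 * np.pi * f * t + phi
    hp = amp * (np.cos(Phi) * np.cos(theta) - eps * np.sin(Phi) * np.sin(theta))
    hc = amp * (np.cos(Phi) * np.sin(theta) + eps * np.cos(Phi - np.pi/2) * np.cos(theta) * 0
                + eps * np.sin(Phi) * np.cos(theta))
    return hp, hc

t = np.linspace(0.0, 0.05, 1000)
hp, hc = ringdown_polarizations(t, A=1.0, eps=0.6, theta=0.3,
                                phi=0.0, f=250.0, tau=0.004)
```

A full ringdown template would sum several such modes, each with its own $\{A_n, \epsilon_n, \theta_n, \phi_n, \omega_n, \gamma_n\}$.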
Equations \eqref{eq:ellip_sum} can be equivalently written in the frequency domain, as done for the sine-Gaussian basis in \cite{Cornish:2014kda,Cornish:2020dwh}.
The elliptical decomposition of Eq.~\eqref{eq:ellip_sum} allows us to flexibly model a GW signal without assuming full independence of the two GW polarizations.
This is justified because, as argued in \cite{Chatziioannou:2021mij}, we expect both polarizations to be generated by the same physical processes, so that their spectral properties should not be totally independent.
Moreover, even if there were a choice of wave frame in which the two linear polarizations looked completely dissimilar, they would still appear spectrally similar to generic observers, whose frames are randomly oriented (see the discussion of polarization mixing in Sec.~\ref{sec:angles} below).
Besides the modeling of generic signals, \eq{ellip_sum} serves as the exact representation of several classes of astrophysically-relevant signals.
The most salient example of this, as we will see below, is that of CBCs; in particular, a nonprecessing, quasicircular CBC dominated by the quadrupolar angular harmonic of the radiation can be described by a single, fully polarized component, as in \eq{ellip_gen}.
More generally, the signal from a precessing CBC is well represented by the superposition of five fully polarized modes \cite{Fairhurst:2019vut}.
\subsection{Relation to spherical harmonics}
\label{sec:harmonics}
When modeling specific sources (e.g., in a numerical-relativity simulation), it is common to decompose the outgoing strain in terms of spin-weighted spherical harmonics ${}_{-2} Y_{\ell m}$ in the frame of the source (e.g., \cite{Kidder:2007rt}), so that, for a detector infinitely far away, we can write
\begin{align} \label{eq:spherical}
H(t) = \sum_{\ell \geq 2} \sum_{-\ell \leq m \leq \ell} H_{\ell m}(t)\, {}_{-2}Y_{\ell m} (\iota, \varphi)\, ,
\end{align}
for a source seen with inclination $\iota$ and azimuthal angle $\varphi$, with intrinsic time-dependence encoded in the $H_{\ell m}$ functions as determined by Einstein's equations.
The decomposition into spherical harmonics presumes the choice of both (1) a polar frame defining $\iota$ and $\varphi$, and (2) an orientation of the waveframe vectors with respect to the direction of propagation to establish the meaning of $h_{+/\times}$ as in \eq{hij}.
In the LIGO-Virgo convention (which follows \cite{Blanchet:2008je,Faye:2012we}), the waveframe in \eq{spherical} is defined by $\hat{x} = -\hat{e}_\iota$ and $\hat{y} = - \hat{e}_\varphi$ \cite{LALSuite:source}, and the overall polar frame is centered on and comoving with the source, with an orientation respecting its symmetries (e.g., aligned with the orbital plane).
The different $H_{\ell m}$'s in \eq{spherical} are generated by the time evolution of specific current and mass moments of the source \cite{Thorne:1980ru}.
As such, their structure must inherit the symmetries of Einstein's equations, including parity.
In particular, for any source satisfying equatorial-reflection (planar) symmetry, like a nonprecessing inspiral, parity can be shown to imply that $H_{\ell -m} = (-1)^\ell H_{\ell m}^*$ \cite{Faye:2012we}, assuming that the coordinates in \eq{spherical} are oriented such that $\iota=\pi/2$ is the plane of symmetry.
Allowing for a generic (slow) amplitude and (fast) phase evolution by writing $H_{\ell m}(t) = \mathcal{A}_{\ell m}(t) \exp[-i \Phi_{\ell m}(t)]$, this symmetry reduces to $\mathcal{A}_{\ell -m}(t) = (-1)^\ell \mathcal{A}_{\ell m}(t)$ and $\Phi_{\ell m}(t) = - \Phi_{\ell -m}(t)$, where we have taken $\mathcal{A}$ and $\Phi$ to be real valued.
With that ansatz, \eq{spherical} can be rewritten with an explicit term for negative values of $m$ (and double counting $m=0$ modes) as
\begin{widetext}
\begin{subequations} \label{eq:spherical_modes}
\begin{align}
H(t) &= \sum_{\ell \geq 2} \sum_{0\leq m \leq \ell} \left[H_{\ell m}(t)\, {}_{-2}Y_{\ell m} (\iota, \varphi) + H_{\ell -m}(t)\, {}_{-2}Y_{\ell -m} (\iota, \varphi) \right] \\
&= \sum_{\ell \geq 2} \sum_{0\leq m \leq \ell} \left[\mathcal{A}_{\ell m}(t)\, e^{-i\Phi_{\ell m} (t)} {}_{-2}Y_{\ell m}(\iota, \varphi) + \mathcal{A}_{\ell m}(t)\, e^{i\Phi_{\ell m} (t)} {}_{-2}Y_{\ell m}^*(\pi-\iota, \varphi) \right] \\
&= \sum_{\ell \geq 2} \sum_{0\leq m \leq \ell} \left[\mathcal{C}_{\ell m}(t)\, e^{-i\Phi_{\ell m} (t)} + \mathcal{C}_{\ell -m}(t)\, e^{i\Phi_{\ell m} (t)} \right]\, ,
\end{align}
\end{subequations}
\end{widetext}
for some overall complex-valued amplitudes $\mathcal{C}_{\ell \pm m}$, which absorb the angular dependence of the spherical harmonics and any potential (slow) time variation in $\mathcal{A}_{\ell m}$.
In the second line above, we took advantage of the identity relating spherical harmonics for different signs of $m$, ${}_{-2} Y_{\ell -m}(\iota,\varphi) = (-1)^{\ell} {}_{-2} Y_{\ell m}^*(\pi-\iota,\varphi)$ \cite{goldberg:1967}.
The summand in the last line of \eq{spherical_modes} takes the form of \eq{ellip_circ}, and its interpretation is the same for any fixed observation direction: each $(\ell, |m|)$ angular harmonic contributes a single, elliptically polarized mode to the waveform, composed of right- and left-handed pieces corresponding to the $m>0$ and $m<0$ modes respectively.
Thus the overall strain for such a source must be a superposition of purely polarized modes, with adiabatically evolving amplitudes as in \eq{ellip_sum}.
The amplitude and ellipticity of each mode are determined by a combination of the intrinsic amplitudes $H_{\ell \pm m}$, and the viewing angle $(\iota, \varphi)$---the latter through the ${}_{-2} Y_{\ell m}$ factors.
The intensity of the mode will vary with time following $\mathcal{A}_{\ell m}(t)$; meanwhile, its ellipticity, as observed from a given $\iota$ and $\varphi$, will be fixed by the relative amplitudes of the $\pm|m|$ spherical harmonics,
\begin{align}
\epsilon_{\ell|m|}(\iota) = \frac{\left|{}_{-2} Y_{\ell m}(\iota,\varphi)\right| - \left|{}_{-2} Y_{\ell -m}(\iota,\varphi)\right|}{\left|{}_{-2} Y_{\ell m}(\iota,\varphi)\right| + \left|{}_{-2} Y_{\ell -m}(\iota,\varphi)\right|} ,
\end{align}
which is exclusively a function of the inclination $\iota$, because $\varphi$ only affects the phase (not the magnitude) of the spin-weighted spherical-harmonic factors, with ${}_{-2} Y_{\ell m}(\iota,\varphi) = {}_{-2} Y_{\ell m}(\iota) \exp(i m \varphi)$ factoring out the $\varphi$ dependence.
The complex strain $H_{\ell|m|}(t)$ for a given elliptical $(\ell, |m|)$ mode, as given by the summand in \eq{spherical_modes}, can be further rewritten as
\begin{equation}
H_{\ell|m|}(t) = \mathcal{A}_{\ell m}(t) \left[ Y^+_{\ell m} \cos \Phi_{\ell m}'(t) -
i Y^\times_{\ell m} \sin \Phi_{\ell m}'(t) \right],
\end{equation}
where we have defined $\Phi_{\ell m}'(t) \equiv \Phi_{\ell m}(t) - m \varphi$, and
\begin{equation}
Y_{\ell m}^{+/\times}(\iota) \equiv {}_{-2} Y_{\ell m}(\iota) \pm {}_{-2} Y_{\ell m}(\pi-\iota) \, ,
\end{equation}
with the plus (minus) sign for $+$ ($\times$), and noting that, after factoring out the $\varphi$ dependence, the ${}_{-2} Y_{\ell m}(\iota)$ quantities are real valued.
For the special case of the dominant $\ell=|m|=2$ mode, the strain $H_{\ell|m|} = h_+ - i h_\times$ thus reduces to
\begin{subequations} \label{eq:nonprecessing}
\beq
h_+ = \frac{1}{2} \sqrt{\frac{5}{4\pi}} \mathcal{A}_{22}(t) \left(1 + \cos^2\iota\right) \cos \Phi'_{22}(t) \, ,
\eeq
\beq
h_\times = \sqrt{\frac{5}{4\pi}} \mathcal{A}_{22}(t) \cos\iota \sin \Phi'_{22}(t) \, ,
\eeq
\end{subequations}
as can be checked by computing explicit expressions for ${}_{-2} Y_{22}(\iota)$.
This is exactly of the form of \eq{ellip_gen}, with amplitude $\mathcal{A} = \sqrt{5/16\pi}\,\mathcal{A}_{22}\left(1+\cos^2\iota\right)$, ellipticity
\beq \label{eq:ellip_cosi}
\epsilon = \frac{2 \cos\iota}{1+\cos^2\iota}\, ,
\eeq
which we illustrate in Fig.~\ref{fig:ellip_cosi}, and $\theta = 0$.
The fact that $\theta = 0$ is a consequence of our special choice of coordinate frame in \eq{spherical}, which we constructed to reflect the symmetries of the planar source so that the equator is the plane of symmetry (we return to this point in Sec.~\ref{sec:position}).
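As a numerical sanity check of \eq{ellip_cosi}, we can evaluate the mode ellipticity directly from the harmonic amplitudes, using the explicit spin-weighted spherical harmonics ${}_{-2}Y_{2\pm 2}(\iota, \varphi) = \sqrt{5/64\pi}\,(1 \pm \cos\iota)^2 e^{\pm 2i\varphi}$ (standard expressions, not written out in the text above); the Python sketch below confirms that the viewing-angle-dependent ellipticity reduces to $2\cos\iota/(1+\cos^2\iota)$:

```python
import numpy as np

def sY2m2(iota, phi, m):
    """Spin-weight -2 spherical harmonics for ell = 2, m = +/-2."""
    pref = np.sqrt(5.0 / (64.0 * np.pi))
    if m == 2:
        return pref * (1 + np.cos(iota))**2 * np.exp(2j * phi)
    elif m == -2:
        return pref * (1 - np.cos(iota))**2 * np.exp(-2j * phi)
    raise ValueError("only m = +/-2 implemented")

def ellipticity_22(iota, phi=0.3):
    """Mode ellipticity from the harmonic magnitudes, as in the eps_{ell|m|} formula."""
    yp = abs(sY2m2(iota, phi, 2))
    ym = abs(sY2m2(iota, phi, -2))
    return (yp - ym) / (yp + ym)

# Compare against the closed form eps = 2 cos(iota) / (1 + cos^2 iota)
for iota in np.linspace(0.01, np.pi - 0.01, 50):
    closed = 2 * np.cos(iota) / (1 + np.cos(iota)**2)
    assert np.isclose(ellipticity_22(iota), closed, atol=1e-12)
```

Face-on ($\iota = 0$) yields $\epsilon = 1$ (circular polarization), while edge-on ($\iota = \pi/2$) yields $\epsilon = 0$ (linear), as expected.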
The above results, Eqs.~(\ref{eq:spherical_modes}--\ref{eq:ellip_cosi}), hold only for sources with equatorial-reflection symmetry.
The GWs for more generic, precessing, sources will not generally be given by the superposition of fully polarized modes with constant ellipticity.
However, some such signals may be decomposed into elliptical modes with a slowly evolving ellipticity; that is the case, for example, for the early stages of precessing compact-binary inspirals, whose signal can be well approximated by \eq{nonprecessing} with a slowly varying inclination.
In some cases, nonplanar sources can also give rise to superpositions of fully polarized modes.
For example, this is the case for black-hole ringdown signals \cite{Vishveshwara:1970cc, Press:1971wr, Teukolsky:1973ha, Chandrasekhar:1975zza}, which can be written as a harmonic expansion similar to \eq{spherical},
\beq \label{eq:spheroidal}
H(t) = \sum_{\ell \geq 2} \sum_{-\ell \leq m \leq \ell} \sum_{n\geq 0} C_{\ell m n} e^{-i\tilde{\omega}_{\ell m n} t} {}_{-2}S_{\ell m n} (\iota, \varphi) ,
\eeq
for complex frequencies $\tilde{\omega}_{\ell m n} \equiv \omega_{\ell m n} - i/\tau_{\ell m n}$ indexed by the usual angular numbers $\ell$ and $m$, as well as an overtone number $n$, which orders modes of a given $(\ell, m)$ by decreasing damping time; the angular dependence is encoded in the spin-weighted spheroidal harmonics, ${}_{-2}S_{\ell m n}$ \cite{Teukolsky:1973ha,Press:1973zz,Leaver:1985ax,Berti:2005gp,Cook:2014cta}, which have replaced the spherical harmonics in \eq{spherical}.
Parity in this decomposition implies $\tilde{\omega}_{\ell m n} = -\tilde{\omega}_{\ell-m n}^*$; it can thus be shown that, for fixed $\iota$ and $\varphi$, \eq{spheroidal} is equivalent to
\beq \label{eq:ringdown}
H(t) = \sum \left( C_{\ell m n}' e^{-i\omega_{\ell m n} t} + C_{\ell -m n}' e^{i\omega_{\ell m n} t} \right) e^{-t/\tau_{\ell m n}} ,
\eeq
where the $m$ sum is now restricted to nonnegative values, $0 \leq m \leq \ell$, and $C'_{\ell \pm m n}$ are redefined amplitudes absorbing angular factors.
Comparing to \eq{ellip_circ}, it is evident from \eq{ringdown} that the ringdown strain is made up from elliptically polarized components, with exponentially decaying amplitudes.
If the ringdown excitations had equatorial symmetry, then the initial amplitudes in \eq{spheroidal} would satisfy $C_{\ell -m n} = (-1)^{\ell} C^*_{\ell m n}$, and the ellipticity of the observed modes would only be a function of the observing direction.
(See Sec.~IIA and Appendix B of \cite{Isi:2021iql} for an extended discussion.)
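The pairing of the $\pm m$ terms implied by the parity relation is straightforward to check explicitly. The following sketch (with arbitrary, made-up mode numbers, and the angular factors absorbed into complex amplitudes as in \eq{ringdown}) confirms that $\tilde{\omega}_{\ell -m n} = -\tilde{\omega}^*_{\ell m n}$ turns the two-sided sum into an elliptical pair with a common decaying envelope:

```python
import numpy as np

# Illustrative values for a single (ell, |m|, n) pair of ringdown modes
omega, tau = 2 * np.pi * 250.0, 0.004          # frequency [rad/s], damping time [s]
wt_p = omega - 1j / tau                        # tilde-omega_{l, m, n}
wt_m = -np.conj(wt_p)                          # parity: tilde-omega_{l, -m, n}
Cp, Cm = 0.7 * np.exp(0.4j), 0.2 * np.exp(-1.1j)  # arbitrary complex amplitudes

t = np.linspace(0, 0.02, 400)
# Two-sided sum over both signs of m, as in the spheroidal expansion
H_full = Cp * np.exp(-1j * wt_p * t) + Cm * np.exp(-1j * wt_m * t)
# Restricted-sum form: elliptical pair with a shared decaying envelope
H_ellip = (Cp * np.exp(-1j * omega * t) + Cm * np.exp(1j * omega * t)) * np.exp(-t / tau)

assert np.allclose(H_full, H_ellip)
```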
\section{Polarization angles}
\label{sec:angles}
\subsection{Wave-frame and the angle $\psi$}
\label{sec:pol}
Equation \eqref{eq:hij} presumes a specific choice of frame orientation that defines the basis in which the $h_{ij}$ components are written and, therefore, the physical meaning of $h_{+}$ and $h_\times$.
Although \eq{hij} requires that $\hat{z}$ be parallel to the (spatial) wave vector $\vec{k}$, there is no a priori restriction on the orientation of the $x$ and $y$ axes within the plane perpendicular to $\vec{k}$.
This freedom is usually encapsulated in the choice of an arbitrary \emph{polarization angle} $\psi$, defined with respect to some convenient reference direction.
For instance, in the LIGO-Virgo convention, this angle is defined with respect to celestial coordinates such that $\psi=0$ means that the waveframe $\hat{x}$ is parallel to the celestial equator due west, and $\psi$ is measured following the right hand rule around $\hat{z}$ \cite{LALSuite:wave,Anderson:T010110}; we illustrate this in Fig.~\ref{fig:diagram_waveframe}.
With some trigonometry, it is straightforward to show that a \emph{clockwise}%
\footnote{This \emph{passive} clockwise rotation of the waveframe corresponds to an \emph{active} counterclockwise rotation of the polarization state.}
rotation of $\hat{x}$ and $\hat{y}$ by some angle $\Delta \psi$ around $\hat{z}$ leaves the form of \eq{hij} unchanged after redefining
\begin{subequations} \label{eq:htransf}
\beq
h_+ \rightarrow h_+' = h_+ \cos 2\Delta \psi - h_\times \sin 2\Delta\psi \, ,
\eeq
\beq
h_\times \rightarrow h_\times' = h_\times \cos 2\Delta \psi + h_+ \sin 2\Delta\psi \, .
\eeq
\end{subequations}
This contravariant transformation gives the polarization amplitudes that would be measured by an observer in the rotated (primed) frame, as a function of the amplitudes in the original frame.
The $2\Delta\psi$ dependence in \eq{htransf} reveals the fact that $h_+$ and $h_\times$ are nothing but the two components of a tensor field with spin-weight $|s|=2$, and the two polarizations are only defined up to an arbitrary choice of $\psi$.
The basis polarization tensors themselves transform inversely to the amplitudes, i.e., covariantly with the rotation.
Under clockwise rotations of the wave frame, therefore, the antenna patterns of \eq{h} transform through an expression complementary to \eq{htransf},
\begin{subequations} \label{eq:Ftransf}
\beq
F_+ \rightarrow F_+' = F_+ \cos 2\Delta \psi + F_\times \sin 2\Delta\psi \, ,
\eeq
\beq
F_\times \rightarrow F_\times' = F_\times \cos 2\Delta \psi - F_+ \sin 2\Delta\psi \, ,
\eeq
\end{subequations}
ensuring that the observable $h(t)$ in \eq{h} is independent of the arbitrary angle $\psi$.
More generally, any scalar like $D^{ij} e^{p}_{ij}$ will necessarily be frame invariant.%
\footnote{This extends to gauge transformations: the spacetime tensors $D_{ab}$ and $h_{ab}$ are gauge dependent, but their inner product is not.}
Unlike the linear modes of Eq.~\eqref{eq:lin}, the tensors of Eq.~\eqref{eq:circ} do not mix under rotations around the direction of propagation:
the circular polarizations are eigenstates of the helicity operator with weight $\pm 2$, corresponding to the two helicities of a spin-2 massless particle (see, e.g., \cite{Hinterbichler2011}).
The equivalent transformation to Eq.~\eqref{eq:htransf} is
\begin{subequations} \label{eq:htransf_circ}
\begin{align}
h_R &\rightarrow h_R' = h_R \exp(- i2 \Delta \psi) \, ,\\
h_L &\rightarrow h_L' = h_L \exp(+ i2 \Delta \psi)\, ,
\end{align}
\end{subequations}
meaning that a rotation around $\hat{z}$ is equivalent to a simple change in the overall phase of the circular polarization components.
As such, a change in $\psi$ can be absorbed by a redefinition of the Fourier phases in \eq{hcomp_fd_rl}, multiplying the integral through by $\exp(-i2\Delta\psi)$.
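Assuming the common normalization $h_{R/L} = (h_+ \mp i h_\times)/\sqrt{2}$ for the circular components (the overall factor is immaterial for this check), a few lines of Python verify that the linear-polarization transformation of \eq{htransf} acts on the circular components as the pure phases of \eq{htransf_circ}:

```python
import numpy as np

rng = np.random.default_rng(1)
hp, hx = rng.normal(size=2)            # h_+ and h_x in the original frame
# Assumed convention for the circular components: h_{R/L} = (h_+ -/+ i h_x)/sqrt(2)
hR = (hp - 1j * hx) / np.sqrt(2)
hL = (hp + 1j * hx) / np.sqrt(2)

for dpsi in np.linspace(0, np.pi, 13):
    c, s = np.cos(2 * dpsi), np.sin(2 * dpsi)
    # Linear components after a clockwise frame rotation, per the h transformation
    hp2 = hp * c - hx * s
    hx2 = hx * c + hp * s
    # Circular components only pick up equal and opposite phases
    assert np.isclose((hp2 - 1j * hx2) / np.sqrt(2), hR * np.exp(-2j * dpsi))
    assert np.isclose((hp2 + 1j * hx2) / np.sqrt(2), hL * np.exp(+2j * dpsi))
```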
Equations~(\ref{eq:htransf}--\ref{eq:htransf_circ}) allow us to transform predictions for the strain $h_{ij} = h_+ e^{+}_{ij} + h_{\times} e^{\times}_{ij}$ in some waveframe $\{\hat{x}, \hat{y}, \hat{z}\}$ to a different one $\{\hat{x}', \hat{y}', \hat{z}'\}$, rotated clockwise around $\hat{k}=\hat{z}=\hat{z}'$ by $\Delta \psi$ (simply labeled $\psi$, if the primed frame corresponds to the reference frame that defines $\psi=0$).
In real-world data analysis applications, however, we simply write the unprimed basis vectors in the primed basis and evaluate \eq{h} through numerical dot products using \eq{lin}.
To do this, we express the components of $e^{+/\times}_{ij}$ and $D_{ij}$ in a common basis suitably aligned with the reference waveframe---for ground-based detectors, where we take $\hat{x}'$ to be parallel to the celestial equator, these are equatorial celestial coordinates (Fig.~\ref{fig:diagram_waveframe}).
Knowing how the $\{\hat{x}', \hat{y}'\}$ vectors are expressed in such coordinates, we can construct $h_{ij}'$ by noting that $\hat{x} = \cos\psi\, \hat{x}' + \sin\psi\, \hat{y}'$ and $\hat{y}= - \sin\psi\, \hat{x}' + \cos\psi\, \hat{y}'$.
It is then straightforward to write the signal at any given detector in terms of the polarization amplitudes $h_{+/\times}$ computed in the original frame.
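As a concrete sketch of this procedure, the snippet below builds the wave-frame basis from $\psi$ as above, forms the linear polarization tensors (taking the standard definitions $e^+_{ij} = x_i x_j - y_i y_j$ and $e^\times_{ij} = x_i y_j + y_i x_j$, which we assume here for \eq{lin}), and contracts them numerically with an arbitrary symmetric detector tensor; the resulting patterns rotate exactly as in \eq{Ftransf}:

```python
import numpy as np

# Fixed reference basis (primed frame); the propagation direction is along z
xp, yp = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
# Arbitrary symmetric detector tensor for illustration
D = np.array([[0.3, 0.1, 0.0], [0.1, -0.2, 0.4], [0.0, 0.4, -0.1]])

def antenna_patterns(psi):
    """F_{+,x} from the numerical contractions D^{ij} e^p_{ij},
    with the polarization tensors built from the rotated wave-frame basis."""
    x = np.cos(psi) * xp + np.sin(psi) * yp
    y = -np.sin(psi) * xp + np.cos(psi) * yp
    ep = np.outer(x, x) - np.outer(y, y)
    ex = np.outer(x, y) + np.outer(y, x)
    return np.sum(D * ep), np.sum(D * ex)

F0p, F0x = antenna_patterns(0.0)
for psi in np.linspace(0, np.pi, 16):
    Fp, Fx = antenna_patterns(psi)
    # Consistent with the rotation rule for the antenna patterns
    assert np.isclose(Fp, F0p * np.cos(2 * psi) + F0x * np.sin(2 * psi))
    assert np.isclose(Fx, F0x * np.cos(2 * psi) - F0p * np.sin(2 * psi))
```

Note the $\pi$-periodicity in $\psi$, reflecting the spin-weight $|s|=2$ of the polarizations.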
The definition of the angle $\psi$ (as in Fig.~\ref{fig:diagram_waveframe}) is not intrinsically related to any feature of the signal: it simply chooses an absolute reference direction that defines an arbitrary frame in which to prescribe the $h_+$ and $h_\times$ polarization functions in \eq{hij}, or equivalently, the frame in which to measure the phases of the circularly polarized Fourier components.
Nevertheless, even though any choice of assignment for $\psi$ is formally valid, specific signal morphologies may make some choices more convenient than others.
\subsection{Elliptical waves and the angle $\theta$}
\label{sec:theta}
Another notion of ``polarization angle'' arises naturally in the description of elliptically polarized signals.
The expression for an elliptical wave in \eq{ellip_gen} presumes some specific choice of $\psi$ that defines the meaning of plus vs cross by orienting $\hat{x}$, as explained in the previous section.
The expression simplifies if we choose that angle such that the plus and cross axes are aligned with the principal components of the ellipse, i.e., constructing the polarization frame to ensure that $\theta = 0$ (see Fig.~\ref{fig:ellipse}).
With such a choice of wave frame (equivalently, choice of $\psi$), \eq{ellip_gen} becomes just
\begin{subequations} \label{eq:ellip_frame}
\beq
h_+ = \mathcal{A}(t) \cos \Phi(t) \, ,
\eeq
\beq
h_\times = \epsilon \mathcal{A}(t) \sin \Phi(t)\, ,
\eeq
\end{subequations}
and we may simply read off the ellipticity $\epsilon$ as the ratio of the $\times$ to $+$ amplitudes.
Crucially, an elliptical wave will generally not take the form of Eq.~\eqref{eq:ellip_frame} unless $\hat{x}$ is chosen appropriately; only circularly polarized signals ($\epsilon=\pm1$) will take this simplified form irrespective of the wave frame orientation (again showing that these are eigenstates of the helicity operator).
Given the above, when working with a single elliptically-polarized wave, Eq.~\eqref{eq:ellip_frame} defines a privileged orientation of the wave frame, unique up to rotations by $\pi/2$ around $\hat{z}$.
If we adopt $\theta =0$ as a convention (or, equivalently, $\theta=\pi$), then we \emph{define} our wave frame to lie along the principal axes of the polarization ellipse and, thus, the polarization angle $\psi$ becomes synonymous with the polarization ellipse orientation by construction.
However, the two angles $\theta$ and $\psi$ are conceptually distinct; in particular, $\theta$ is defined only for elliptically polarized waves, whereas $\psi$ is always defined.
As for any GW, the detector output for an elliptically polarized wave will be given by \eq{h}.
In this case, however, \eq{Ftransf} implies that $\psi$ and $\theta$ are degenerate, as detailed in Appendix A of \cite{Isi:2017equ}.
Concretely, for a fixed sky location (i.e., propagation direction), rotating the waveframe \emph{clockwise} around $\hat{z}$ results in a shift $\psi \to \psi' = \psi + \Delta\psi$ in the antenna patterns, which can be absorbed by a change in $\theta$.
This is because the expression for the strain at a given detector,
\beq
h = F_+(\psi + \Delta \psi)\, h_+ + F_\times(\psi + \Delta \psi)\, h_\times \, ,
\eeq
can be expanded by means of \eq{Ftransf} to read
\begin{align}
h = &\left[ F_+(\psi) \cos 2\Delta\psi + F_\times(\psi) \sin 2\Delta\psi \right] h_+\, + \\
&\left[F_\times(\psi) \cos 2\Delta\psi - F_+(\psi)\sin 2\Delta\psi\right] h_\times \, .
\end{align}
Plugging in the expressions for an elliptical wave in \eq{ellip_gen} and taking advantage of trigonometric identities, this can be rearranged into
\begin{widetext}
\begin{align} \label{eq:theta_psi}
h = & \mathcal{A}(t) \left[\cos \Phi(t) \cos(\theta + 2\Delta\psi) - \epsilon \sin \Phi(t)\sin(\theta + 2\Delta\psi) \right] F_+(\psi) +\nonumber\\
&\mathcal{A}(t) \left[\cos \Phi(t) \sin(\theta + 2\Delta\psi) + \epsilon \sin \Phi(t) \cos(\theta + 2\Delta\psi) \right] F_\times(\psi),
\end{align}
\end{widetext}
which is the same result we would have obtained by replacing $\theta \to \theta' = \theta + 2 \Delta\psi$ in \eq{ellip_gen};
for a signal made up of multiple fully-polarized components, as in \eq{ellip_sum}, the waveframe orientation affects all ellipse orientations in the same way, i.e., $\theta_n \to \theta_n'=\theta_n + 2\Delta\psi$ (Fig.~\ref{fig:diagram_ellipse_extra}).
We could have equivalently (and more quickly) derived this by noting that $\theta$ is related to the phases of the circularly-polarized components of the signal by $\theta = \left(\phi_L - \phi_R\right)/2$, as in \eq{hcomp_ellip}; the transformation rule for $\theta$ then follows from \eq{htransf_circ}, which implies $\phi_{R/L} \to \phi'_{R/L} \mp 2\Delta\psi$, with the negative (positive) sign for R (L).
This relation between $\theta$ and $\psi$ implies that elliptical-wave analyses that allow $\theta$ to vary freely should avoid degeneracies by fixing $\psi$ to an arbitrary a priori value.
Choosing this fiducial value to be $\psi=0$, the template at a given detector would be constructed as
\begin{equation}
h = F_+(\psi=0)\, h_+ + F_\times(\psi=0)\, h_\times \, ,
\end{equation}
for $h_{+/\times}$, as in Eq.~\eqref{eq:ellip_sum}, functions of $\{\theta_i, \epsilon_i\}$ and whatever other parameters are needed to evaluate the amplitude and phasing functions $\{\mathcal{A}_i(t), \Phi_i(t)\}$ (or their frequency-domain analogs).
The antenna patterns $F_{+/\times}$ are evaluated for some sky location and arrival time, which can be allowed to vary so as to measure them from the observed data.
On the other hand, the angle $\psi$ is fixed; allowing it to vary would amount to shifting all $\theta_i$ values by $2\psi$, per Eq.~\eqref{eq:theta_psi} and Fig.~\ref{fig:diagram_ellipse_extra}.
Fixing $\psi$ to some fiducial value was the approach taken in \cite{Isi:2017equ,Chatziioannou:2021mij,Isi:2021iql}.
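The $\psi$--$\theta$ degeneracy is easy to verify numerically. Taking for \eq{ellip_gen} the explicit form displayed in \eq{theta_psi} at $\Delta\psi = 0$, the following sketch (with arbitrary illustrative parameter values) confirms that shifting $\psi$ in the antenna patterns is indistinguishable from shifting $\theta$ by $2\Delta\psi$:

```python
import numpy as np

def ellip_wave(t, A, eps, theta, Phi0, omega):
    """Elliptical polarizations, in the form displayed in the theta-psi identity."""
    Phi = Phi0 + omega * t
    hp = A * (np.cos(Phi) * np.cos(theta) - eps * np.sin(Phi) * np.sin(theta))
    hx = A * (np.cos(Phi) * np.sin(theta) + eps * np.sin(Phi) * np.cos(theta))
    return hp, hx

t = np.linspace(0, 10, 200)
A, eps, theta, Phi0, omega = 1.3, 0.4, 0.7, 0.2, 2 * np.pi * 0.1
Fp, Fx = 0.35, -0.6                 # antenna patterns at the fiducial psi
dpsi = 0.25

hp, hx = ellip_wave(t, A, eps, theta, Phi0, omega)
# Response with the patterns shifted to psi + dpsi
c, s = np.cos(2 * dpsi), np.sin(2 * dpsi)
h_shifted_psi = (Fp * c + Fx * s) * hp + (Fx * c - Fp * s) * hx
# Response at the original psi with the ellipse rotated: theta -> theta + 2 dpsi
hp2, hx2 = ellip_wave(t, A, eps, theta + 2 * dpsi, Phi0, omega)
h_shifted_theta = Fp * hp2 + Fx * hx2

assert np.allclose(h_shifted_psi, h_shifted_theta)
```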
\subsection{Compact binaries and the angles $\Psi$ and $\Omega$}
\label{sec:position}
When modeling GW waveforms from specific systems, it is useful to tie the polarization frame to the geometry of the source.
This is advantageous because, in order to write out explicit expressions for $h_+$ and $h_\times$, we must make \emph{some} definite choice of frame orientation, and doing so in a way that respects the symmetries of the source (if any) can lead to simplified expressions.
That was the case in going from Eq.~\eqref{eq:ellip_gen} to Eq.~\eqref{eq:ellip_frame} above: if we know \emph{a priori} that the waves from a given source will always be elliptically polarized, then it makes sense to anchor our wave frame to some feature of the source orientation that will ensure alignment with the principal directions of the polarization ellipse (i.e., $\theta=0$).
For a nonprecessing compact binary, as we saw in Sec.~\ref{sec:harmonics}, it is natural to orient our coordinates so as to respect the planar symmetry of the source.
With that standard choice, we find that the linear polarizations take the simple form of \eq{nonprecessing}, which matches the expression for an elliptical mode with $\theta=0$ as in \eq{ellip_frame}.
This again reveals that our choice of coordinates was a good one in modeling that source: because this wave-frame orientation preserves the symmetries of the binary, it also happens to be aligned with the principal directions of the polarization ellipse.
When making predictions for the signal we may always choose this frame to simplify calculations.
Of course, the frame that is most convenient for source modeling need not be the best frame to describe measurements.
In order to compare predictions to measurements, we need to understand how the frame in which the $h_{+/\times}$ polarizations were predicted is oriented with respect to the detectors.
The frame in \eq{nonprecessing}, which we here denote with unprimed symbols $\{\hat{x}, \hat{y}, \hat{z}\}$, was constructed such that the GW direction of propagation, $\hat{z}=\hat{k}$, is purely radial, with the remaining basis elements purely polar or azimuthal.
Although different definitions may be found in the literature (e.g., \cite{Faye:2012we,Kidder:2007rt}), the LIGO-Virgo convention is to choose $\hat{y}$ such that it points towards the ascending node, i.e., parallel to the \emph{line of nodes} defined by the intersection of the orbital plane with the plane of the sky \cite{LALSuite:source}; $\hat{x}$ completes the triad (Fig.~\ref{fig:diagram_sourceframe} with $\Omega = \pi/2$).
In this convention, then, $\hat{x} = -\hat{e}_\iota$, $\hat{y} = -\hat{e}_\varphi$ and $\hat{z}=\hat{e}_r$, where $\left(r, \iota, \varphi\right)$ are the spherical coordinates associated with the spherical-harmonic frame in \eq{spherical_modes}.
Having specified $h_+$ and $h_\times$ in that standard, source-based frame, all we need to do to predict the signal at a given detector is to evaluate \eq{h}.
As described in Sec.~\ref{sec:pol}, this is done in practice by expressing $\{\hat{x}, \hat{y}\}$ in terms of canonical reference vectors $\{\hat{x}',\hat{y}'\}$, which are themselves tied to an Earth-centered celestial coordinate system (Fig.~\ref{fig:diagram_waveframe}).
By convention, we specify the relative orientation between the two frames through the angle $\psi$ defined clockwise from $\hat{x}$ to $\hat{x}'$ around $\hat{z}$, where $\hat{x}'$ is the intersection of the celestial equator with the plane of the sky \cite{LALSuite:wave,Anderson:T010110}.
Knowing that, for CBCs, $\hat{y}$ was constructed to lie along the line of nodes, then $\psi = 0$ must mean that the ascending node points towards the projected celestial north ($\hat{y}'=\hat{y}=\ascnode$), and that the projection of the orbital angular momentum onto the plane of the sky is parallel to the horizon due west ($\hat{x}'=\hat{x}=\hat{L}_\perp$); we illustrate this in Fig.~\ref{fig:diagram_skyview} from the point of view of the observer.
In this convention, $\psi$ is identical to the complement of the \emph{position angle} of the source's orbital angular momentum $\Psi$, defined to be the angle between the projected orbital angular momentum and the celestial north in the plane of the sky (i.e., the angle between $\hat{L}_\perp$ and $\hat{y}'$ in Fig.~\ref{fig:diagram_sourceframe}, shown explicitly in Fig.~\ref{fig:diagram_skyview} for $\Omega=\pi/2$).
More generally, $\Psi = \pi - \psi - \Omega$ in terms of the \emph{longitude of the ascending node} $\Omega$ with $\hat{x}$ as the origin of longitude.
LIGO and Virgo always fix $\Omega = \pi/2$ \cite{LALSuite:source}, tying the primed polarization frame, in which $h_{+/\times}$ are predicted, to the source geometry.
Thus, when LIGO-Virgo report measurements of the polarization angle in CBCs, the quantity reported is the in-plane sky angle of the orbital ascending node relative due north.
In fact, with these conventions for a nonprecessing binary, the three angles $\psi$, $\Psi$ and $\theta$ can all be subsumed by a single parameter (usually written $\psi$) simultaneously encoding the orientation of the polarization basis, the alignment of the source in the sky, and the principal axes of the GW polarization ellipse.
We can then think of this angle as a property of the source to be measured from our data, rather than an arbitrary parameter orienting our frame.
Although this conflation vastly simplifies analyses, it is helpful to keep in mind that the three angles are conceptually distinct: $\psi$ can always be defined, but $\theta$ only exists for fully polarized waves, and $\Psi$ is an orbital element, not defined for arbitrary sources (say, a stochastic source or a supernova).
If the component spins are not (anti)aligned with the orbital angular momentum, the spins and the orbital plane will both precess.
As a consequence, the system will not be reflection symmetric and the GW signal will not be elliptically polarized overall (see Sec.~\ref{sec:harmonics}).
Nonetheless, it is still conventional to tie $\hat{y}$ to the source as in Fig.~\ref{fig:diagram_sourceframe}, referring to the line of nodes as oriented at some specific point in the binary evolution (e.g., when the detected GW signal reaches 20 Hz, or at a mass-invariant reference point \cite{Varma:2021csh,Mould:2021xst}).
In that case, yet another coordinate frame is used to specify the component spins at the reference time, as specified in \cite{LALSuite:spins} and illustrated in App.~\ref{app:spins}.
In summary, we can identify three conceptually distinct Cartesian frames: a wave frame that determines the principal directions along which we \emph{define} the effect of a plus vs cross wave; for an elliptical wave, an intrinsic polarization frame, encoding the principal directions of the polarization ellipse; and a source frame, aligned with the symmetries of the source, or otherwise anchored to some defining feature of it; all of these can be specified in some astronomical frame, like ecliptic celestial coordinates.
For nonprecessing binaries, which are highly symmetric, we can define the source frame to make it always align with the polarization frame.
In unmodeled analyses, such as those discussed in Sec.~\ref{sec:ellip:gen}, it is not possible or useful to explicitly tie the polarization frame to properties of the source: these analyses are not tailored to any specific source to begin with, or they purposely disregard source-orientation information for the sake of generality.
In that case, the model for $h_+$ and $h_\times$ can be defined in any arbitrary wave frame.
A common choice is to simply set $\psi = 0$ in the standard coordinates described above, i.e., with $\hat{y}$ pointing towards the celestial north (Fig.~\ref{fig:diagram_sourceframe}).
Having done so, all information regarding polarization orientation will be encoded in the $\theta$ parameter of Fig.~\ref{fig:ellipse}, with one value per elliptical mode in the decomposition of \eq{ellip_sum}.
Varying both $\psi$ and $\theta$ simultaneously is ill-advised in that context, since the two parameters will be fully degenerate (see end of Sec.~\ref{sec:ellip}, including Fig.~\ref{fig:diagram_ellipse_extra}).
\section{Coordinate transformations}
\label{sec:jacobians}
In the previous sections, we have introduced different parametrizations of elliptical (i.e., fully polarized) waves, including Eqs.~\eqref{eq:ellip_circ}, \eqref{eq:hcomp_ellip} and \eqref{eq:hcomp_ellip_chi}.
Their use varies depending on the specific application, according to convenience and convention.
Understanding the relation between the different parametrizations becomes especially important when implementing and interpreting measurements, since the choice of parametrization often influences the prior specified in the analysis.
Measurements obtained with different parametrizations can be related via a Jacobian.
We may also want to switch parametrizations for technical reasons.
Although conceptually insightful, the manifestly elliptical parametrization in terms of $\{A, \epsilon, \theta, \phi\}$ of \eq{hcomp_ellip} contains multiple degeneracies that make it less than ideal for sampling purposes.
For instance, the angles $\theta$ and $\phi$ become totally degenerate when $\epsilon = \pm 1$.
To circumvent this, we may switch to a more suitable parametrization in the sampling process, and then translate the result back into $\{A, \epsilon, \theta, \phi\}$ for interpretation.
In that case, we can still specify a prior in terms of the elliptical quantities by again making use of a Jacobian.
If we parametrize our analysis in terms of some alternative set of parameters $\vec{\xi}$, we can impose some prior distribution defined in the space of elliptical quantities, $p({A, \epsilon, \theta, \phi})$, by choosing a corresponding prior $p(\vec{\xi})$ for the $\vec{\xi}$ quantities such that
\begin{equation} \label{eq:jac}
p \left( \vec{\xi} \right) = p \left( A, \epsilon, \theta, \phi \right) \left| \frac{\partial \{A, \epsilon, \theta, \phi\}}{\partial \vec{\xi}} \right| ,
\end{equation}
where the last factor $J \equiv | \partial (A, \epsilon, \theta, \phi)/\partial \vec{\xi} |$ is the determinant of the Jacobian matrix.
Applying the Jacobian without any further reweighting yields a flat prior on the $\{A, \epsilon, \theta, \phi\}$ quantities over the region covered by the original prior.
As with any coordinate transformation, the integration limits must be adjusted to ensure that they correspond to the targeted region in the $\{A, \epsilon, \theta, \phi\}$ space---for example, sampling uniformly in the two Cartesian quadratures $(x, y)$, we can effect a uniform prior on the polar quantities $(0 < r=\sqrt{x^2+y^2} \leq L, \theta= \arctan(y/x))$ by applying a Jacobian $\propto 1/r$ and explicitly enforcing $r \leq L$ (Fig.~\ref{fig:jac_example}).
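The polar example above can be reproduced with a short Monte Carlo sketch: sampling uniformly in $(x, y)$, discarding $r > L$, and weighting by the Jacobian $\propto 1/r$ leaves a distribution that is uniform in $r$, so that half of the weighted mass lies below $L/2$ (rather than the quarter obtained without reweighting):

```python
import numpy as np

rng = np.random.default_rng(2)
L, N = 1.0, 400_000
x, y = rng.uniform(-L, L, N), rng.uniform(-L, L, N)
r = np.hypot(x, y)

# Uniform sampling in (x, y) gives p(r) proportional to r; keeping r <= L and
# weighting by the Jacobian 1/r effects a uniform prior on r
rk = r[r <= L]
w = 1.0 / rk

frac_raw = (rk <= L / 2).mean()                    # p(r) ~ r: expect 1/4
frac_weighted = w[rk <= L / 2].sum() / w.sum()     # uniform in r: expect 1/2
assert abs(frac_raw - 0.25) < 0.01
assert abs(frac_weighted - 0.5) < 0.02
```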
In this section, we will consider four different parametrizations of an elliptical wave, and present the Jacobians relating them to the $\{A, \epsilon, \theta, \phi\}$ parametrization.
We will focus on a single elliptical component as a standin for any individual term in the sum of Eq.~\eqref{eq:ellip_sum}, so that the results are trivially generalizable to decompositions of GWs with arbitrary polarizations, as would be used by \textsc{BayesWave} or other generic analyses.
We assume the amplitude could potentially subsume any (slow) time dependence endowed by $\mathcal{A}(t)$ in Eq.~\eqref{eq:ellip_gen}, e.g., the amplitude parameters below could correspond to a reference amplitude $A=\mathcal{A}(t=0)$.
\subsection{Amplitude and ellipticity}
\label{sec:jac:Achi}
In Sec.~\ref{sec:ellip:mono}, we presented two equivalent parametrizations of the $h_{+/\times}$ components of an elliptical wave, Eqs.~\eqref{eq:hcomp_ellip} and \eqref{eq:hcomp_ellip_chi}, illustrated in Fig.~\ref{fig:ellipse}.
Equation ~\eqref{eq:hcomp_ellip} parametrizes the signal strength via the maximum amplitude achieved by the wave, $A$ (the semimajor axis in Fig.~\ref{fig:ellipse}), and the shape of the polarization ellipse via the ellipticity, $\epsilon$ (the ratio between the semiminor and semimajor axes);
meanwhile, Eq.~\eqref{eq:hcomp_ellip_chi} parametrizes the strength via the intensity amplitude $\hat{A}$, which is the square-root of the signal intensity $I$, and the shape of the ellipse through the angle $\chi$.
The two parametrizations are straightforwardly related by
\begin{equation} \label{eq:Aellip_Ahatchi}
\begin{cases}
\hat{A} = A \sqrt{1 + \epsilon^2} \\
\chi = \arctan \epsilon
\end{cases}
\end{equation}
and the inverse transformation
\begin{equation} \label{eq:Ahatchi_Aellip}
\begin{cases}
A = \hat{A} \cos \chi \\
\epsilon = \tan \chi \\
\end{cases} ,
\end{equation}
with no change to the angles $\theta$ and $\phi$.
The Jacobian relating these two transformations is simply
\begin{equation} \label{eq:jac_Aeps_Achi}
J_0 \equiv \left| \frac{\partial(A,\epsilon,\theta,\phi)}{\partial(\hat{A}, \chi, \theta, \phi)}\right| = \sec \chi = \sqrt{1 + \epsilon^2} \, ,
\end{equation}
or, equivalently $J_0 = \hat{A}/A$.
This Jacobian, illustrated in Fig.~\ref{fig:jac_Aeps_Achi}, indicates that a uniform prior in the $\{\hat{A}, \chi\}$ quantities implicitly favors linear polarizations ($\epsilon = 0$) over circular ones, although the preference is mild.
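The determinant in \eq{jac_Aeps_Achi} can be confirmed by finite differences; since $\theta$ and $\phi$ map to themselves, they drop out of the calculation, leaving a $2\times 2$ amplitude block. A minimal numerical sketch:

```python
import numpy as np

def to_A_eps(Ahat, chi):
    """Map {Ahat, chi} -> {A, eps}: A = Ahat cos(chi), eps = tan(chi)."""
    return Ahat * np.cos(chi), np.tan(chi)

def jacobian_det(Ahat, chi, d=1e-6):
    # Central finite differences of the 2x2 amplitude block
    dA_dAhat = (to_A_eps(Ahat + d, chi)[0] - to_A_eps(Ahat - d, chi)[0]) / (2 * d)
    dA_dchi = (to_A_eps(Ahat, chi + d)[0] - to_A_eps(Ahat, chi - d)[0]) / (2 * d)
    de_dAhat = (to_A_eps(Ahat + d, chi)[1] - to_A_eps(Ahat - d, chi)[1]) / (2 * d)
    de_dchi = (to_A_eps(Ahat, chi + d)[1] - to_A_eps(Ahat, chi - d)[1]) / (2 * d)
    return abs(dA_dAhat * de_dchi - dA_dchi * de_dAhat)

for chi in np.linspace(-1.2, 1.2, 9):
    eps = np.tan(chi)
    # J_0 = sec(chi) = sqrt(1 + eps^2), independent of Ahat
    assert np.isclose(jacobian_det(2.0, chi), np.sqrt(1 + eps**2), rtol=1e-4)
```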
\subsection{Circular components}
\label{sec:jac:Arl}
A monochromatic elliptical wave, Eq.~\eqref{eq:hcomp_ellip}, can be specified in terms of the circular polarization basis elements as in \eq{ellip_circ},
where the $C_{R/L} \equiv A_{R/L} \exp(i\phi_{R/L})$ quantities control the amplitude and phase of the right and left circularly-polarized components of the signal.
The representation in terms of such circular-mode amplitudes and phases is equivalent to Eq.~\eqref{eq:hcomp_ellip} if we impose
\begin{equation} \label{eq:Cphi_to_Aellip}
\begin{cases}
A = \frac{1}{\sqrt{2}}\left(A_R + A_L\right) \\
\epsilon = (A_R - A_L)/(A_R + A_L) \\
\theta = \frac{1}{2}(\phi_L - \phi_R)\\
\phi = \frac{1}{2}(\phi_L + \phi_R)\\
\end{cases}
\end{equation}
(although see footnote \ref{foot:angles} for the angles).
Equivalently, the inverse transformation is
\begin{equation} \label{eq:Aellip_to_Cphi}
\begin{cases}
A_R = \frac{1}{\sqrt{2}} A \left(1 + \epsilon\right) \\
A_L = \frac{1}{\sqrt{2}} A \left(1 - \epsilon\right) \\
\phi_R = \phi - \theta \\
\phi_L = \phi + \theta \\
\end{cases} .
\end{equation}
These expressions are particularly simple: amplitude parameters $\{ A_R, A_L\}$ transform directly into amplitude parameters $\{A, \epsilon\}$, irrespective of phasing angles.
This is a consequence of the fact that the circular polarizations are defined to be invariant under rotations around the direction of propagation, up to an overall phase as shown in \eq{htransf_circ}.
The above transformations imply a Jacobian
\begin{equation} \label{eq:jac_Aeps_Arl}
J_1 \equiv \left| \frac{\partial(A,\epsilon,\theta,\phi)}{\partial(A_R, A_L, \phi_R, \phi_L)}\right| \propto \frac{1}{A_R + A_L}\, ,
\end{equation}
with a proportionality constant of $1/\sqrt{2}$, which is ignored in most applications as it can be absorbed by an overall normalization; based on \eq{Cphi_to_Aellip}, this is also proportional to $1/A$.
Therefore, a prior uniform in $A_R$ and $A_L$ results in a triangular prior in the overall amplitude of the mode defined by $A$ (Fig.~\ref{fig:jac_Aeps_Arl}).
Equation \eqref{eq:jac_Aeps_Arl} implies that an analysis that samples uniformly in $A_R$ and $A_L$ within some range $0 \leq A_{R,L} \leq A_{R/L,\mathrm{max}}$ actually favors large overall mode amplitudes $A$, with a triangular distribution that vanishes at $A=0$ and $A=\sqrt{2}\,A_{R/L,\mathrm{max}}$, and peaks at $A = A_{R/L,\mathrm{max}}/\sqrt{2} \equiv A_{\max}$ (top panel of Fig.~\ref{fig:jac_Aeps_Arl}).
Without enforcing the $A \leq A_{\max}$ constraint, the ellipticity distribution will no longer be uniform, instead favoring linear polarizations (right panel of Fig.~\ref{fig:jac_Aeps_Arl}).
This was the case, e.g., for one of the ringdown analyses in \cite{LIGOScientific:2020tif}, which sampled uniformly in amplitude coefficients equivalent to $A_{R/L}$ up to an overall scaling.
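The Jacobian of \eq{jac_Aeps_Arl}, including the $1/\sqrt{2}$ proportionality constant quoted above, can be verified by numerically differentiating the full map of \eq{Cphi_to_Aellip}:

```python
import numpy as np

def to_ellip(AR, AL, phiR, phiL):
    """Circular amplitudes and phases -> elliptical parameters {A, eps, theta, phi}."""
    A = (AR + AL) / np.sqrt(2)
    eps = (AR - AL) / (AR + AL)
    theta = (phiL - phiR) / 2
    phi = (phiL + phiR) / 2
    return np.array([A, eps, theta, phi])

def jacobian_det(params, d=1e-6):
    # Full 4x4 Jacobian by central finite differences
    J = np.zeros((4, 4))
    for j in range(4):
        dp = np.zeros(4)
        dp[j] = d
        J[:, j] = (to_ellip(*(params + dp)) - to_ellip(*(params - dp))) / (2 * d)
    return abs(np.linalg.det(J))

params = np.array([0.8, 0.3, 0.5, -0.2])   # AR, AL, phiR, phiL
AR, AL = params[0], params[1]
# |J_1| = 1 / (sqrt(2) (AR + AL)), i.e., proportional to 1/(AR + AL)
assert np.isclose(jacobian_det(params), 1 / (np.sqrt(2) * (AR + AL)), rtol=1e-5)
```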
The absence of angles in \eq{jac_Aeps_Arl} indicates that a uniform distribution in $\phi_{R/L}$ is also uniform in terms of $\theta$ and $\phi$.
However, this feature can be obfuscated by the fact that the relation between the two sets of angles is not strictly bijective due to their $2\pi$-periodicities (see footnote \ref{foot:angles}).
When applied as written, \eq{Aellip_to_Cphi} transforms a uniform distribution over $-\pi/2 < \phi_{R/L} < \pi/2$ into a uniform distribution over a $\pi/4$-rotated square domain in the $\{\theta, \phi\}$-space, as implied by the discussion in Sec.~\ref{sec:ellip:mono:morph} and illustrated in Fig.~\ref{fig:jac_Aeps_Arl_angles}; the corresponding marginals appear to favor $\theta=0$ and $\phi = 0$ (gray in Fig.~\ref{fig:jac_Aeps_Arl_angles}).
The uniformity over the full range of angles can again be made manifest by applying the more generic transformation of footnote \ref{foot:angles} (blue in Fig.~\ref{fig:jac_Aeps_Arl_angles}), at the expense of restoring the double-covering of the waveform space described in Sec.~\ref{sec:ellip:mono:morph}.
We can also relate the circular amplitudes to the alternative parametrization of \eq{hcomp_ellip_chi}.
The straightforward relation is given by the transformations
\begin{equation} \label{eq:Cphi_to_Ahatchi}
\begin{cases}
\hat{A} = \sqrt{A_R^2 + A_L^2} \\
\chi = \arctan\left( \frac{A_R - A_L}{A_R + A_L}\right)
\end{cases} ,
\end{equation}
and
\begin{equation} \label{eq:Ahatchi_to_Cphi}
\begin{cases}
A_R = \frac{1}{\sqrt{2}} \hat{A} \left(\cos\chi + \sin \chi\right) \\
A_L = \frac{1}{\sqrt{2}} \hat{A} \left(\cos\chi - \sin \chi\right)
\end{cases} ,
\end{equation}
while the remaining angles are related as in Eqs.~\eqref{eq:Cphi_to_Aellip} and \eqref{eq:Aellip_to_Cphi}.
Accordingly, the Jacobian that takes us from the circular parametrization to one flat in $\{\hat{A},\chi\}$ can be shown to be $J \propto \hat{A}^{-1}$.
Thus, as expected from the composition of Eqs.~\eqref{eq:jac_Aeps_Achi} and \eqref{eq:jac_Aeps_Arl}, a prior uniform in $A_{R/L}$ will also favor large intensity amplitudes with probability $\propto \hat{A}$, when restricted to the appropriate range; it will also be uniform in $\chi$.
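A minimal numerical sketch (ours, in Python; the amplitude values are arbitrary) confirms that the two transformations between $\{A_R, A_L\}$ and $\{\hat{A}, \chi\}$ above are exact inverses of each other:

```python
import numpy as np

A_R, A_L = 0.8, 0.3                      # arbitrary test amplitudes

# forward map: circular amplitudes -> intensity-like parameters
Ahat = np.hypot(A_R, A_L)                # Ahat = sqrt(A_R^2 + A_L^2)
chi = np.arctan((A_R - A_L) / (A_R + A_L))

# inverse map: should recover A_R and A_L exactly
A_R_back = Ahat * (np.cos(chi) + np.sin(chi)) / np.sqrt(2.0)
A_L_back = Ahat * (np.cos(chi) - np.sin(chi)) / np.sqrt(2.0)
```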
\subsection{Linear components}
\label{sec:jac:Apc}
Rather than using the circular basis, we could instead work with the linear polarization modes as the fundamental quantity, parametrizing them directly as%
\footnote{Or, equivalently, using $\sin$ for $h_\times$ instead of $\cos$, to resemble \eq{nonprecessing}; this amounts to a redefinition of $\phi_\times$.}
\begin{subequations} \label{eq:Aphi}
\begin{equation}
h_+ = A_+ \cos (\omega t - \phi_+)\, ,
\end{equation}
\begin{equation}
h_\times = A_\times \cos (\omega t - \phi_\times) \, ,
\end{equation}
\end{subequations}
where $A_{+/\times}$ and $\phi_{+/\times}$ are initial amplitudes and phases for each polarization, as elsewhere in the text.
Structurally, this mimics the parametrization adopted by \textsc{BayesWave} for each wavelet \cite{Cornish:2020dwh}.
\newcommand{\xp}{x_{+}}
\newcommand{\xc}{x_{\times}}
\newcommand{\xpc}{x_{+/\times}}
\newcommand{\yp}{y_{+}}
\newcommand{\yc}{y_{\times}}
\newcommand{\ypc}{y_{+/\times}}
\newcommand{\xr}{x_{R}}
\newcommand{\xl}{x_{L}}
\newcommand{\xrl}{x_{R/L}}
\newcommand{\yr}{y_{R}}
\newcommand{\yl}{y_{L}}
\newcommand{\yrl}{y_{R/L}}
Equation \eqref{eq:Aphi} still represents an elliptically polarized mode.
To relate this parametrization to that in Eq.~\eqref{eq:hcomp_ellip}, it is convenient to first map Eq.~\eqref{eq:Aphi} into the circular-basis parameters of the previous section.
We can do this geometrically by considering the respective Jones vectors (Sec.~\ref{sec:math}), from which we get $C_{R/L} = (C_+ \mp i C_\times)/\sqrt{2}$, for $C_{+/\times} \equiv A_{+/\times} \exp(i\phi_{+/\times})$ as in \eq{jones_bases}.
As illustrated in Fig.~\ref{fig:diagram_apac}, trigonometry then implies that
\begin{equation} \label{eq:Aphi_to_Cphi}
\begin{cases}
A_R^2 = \frac{1}{2}\left[A_+^2 + A_\times^2 + 2 A_+ A_\times \sin(\phi_\times - \phi_+)\right] \\
A_L^2 = \frac{1}{2}\left[A_+^2 + A_\times^2 - 2 A_+ A_\times \sin(\phi_\times - \phi_+)\right] \\
\phi_R = \mathrm{atan2}\left(\yp -\xc, \xp + \yc \right)\\
\phi_L = \mathrm{atan2}\left(\yp + \xc, \xp - \yc \right)
\end{cases} ,
\end{equation}
where, to simplify the notation, we have defined the cosine and sine quadratures
\begin{subequations} \label{eq:xy}
\begin{equation}
\xpc \equiv A_{+/\times} \cos \phi_{+/\times} \, ,
\end{equation}
\begin{equation}
\ypc \equiv A_{+/\times} \sin \phi_{+/\times} \, .
\end{equation}
\end{subequations}
Together with Eq.~\eqref{eq:Cphi_to_Aellip}, this allows us to compute $\{A, \epsilon, \theta, \phi\}$ as a function of $\{A_+, A_\times, \phi_+, \phi_\times\}$.
This transformation is clearly less straightforward than those for the circular components in the previous section, with amplitude and phase parameters mixing into each other.
This is because this coordinate transformation encodes the frame rotation that would bring an arbitrarily-oriented elliptical wave into the simple form of Eq.~\eqref{eq:Aphi}, which is nothing but the special frame we identified in Eq.~\eqref{eq:ellip_frame}.
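The closed-form amplitudes and phases above can be cross-checked directly against the complex Jones amplitudes $C_{R/L} = (C_+ \mp i C_\times)/\sqrt{2}$; the sketch below (our illustration, with arbitrary test values) does so numerically:

```python
import numpy as np

A_p, A_c = 0.9, 0.4          # A_+, A_x  (arbitrary test values)
p_p, p_c = 0.3, 1.1          # phi_+, phi_x

# complex Jones amplitudes
C_p = A_p * np.exp(1j * p_p)
C_c = A_c * np.exp(1j * p_c)
C_R = (C_p - 1j * C_c) / np.sqrt(2.0)
C_L = (C_p + 1j * C_c) / np.sqrt(2.0)

# closed-form squared amplitudes from the text
AR2 = 0.5 * (A_p**2 + A_c**2 + 2 * A_p * A_c * np.sin(p_c - p_p))
AL2 = 0.5 * (A_p**2 + A_c**2 - 2 * A_p * A_c * np.sin(p_c - p_p))

# closed-form phases via the quadratures x = A cos(phi), y = A sin(phi)
xp, yp = A_p * np.cos(p_p), A_p * np.sin(p_p)
xc, yc = A_c * np.cos(p_c), A_c * np.sin(p_c)
phi_R = np.arctan2(yp - xc, xp + yc)
phi_L = np.arctan2(yp + xc, xp - yc)
```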
The overall Jacobian relating $\{A, \epsilon, \theta, \phi\}$ to $\{A_+, A_\times, \phi_+, \phi_\times\}$ is quite simple, however, when expressed in terms of the former set of parameters,
\begin{subequations} \label{eq:jac_Aphi}
\begin{align}
J_2 &\equiv \left| \frac{\partial(A, \epsilon, \theta, \phi)}{\partial(A_+, A_\times, \phi_+, \phi_\times)}\right| \nonumber \\
&= 2 A_+ A_\times \left[ \sqrt{A_+^4 + A_\times^4 + 2 A_+^2 A_\times^2 \cos 2(\phi_\times - \phi_+)} \right. \nonumber \\
& \times \left( \sqrt{A_+^2 + A_\times^2 -2 A_+ A_\times \sin(\phi_\times-\phi_+)} \right. \nonumber \\
&\left.\left. + \sqrt{A_+^2 + A_\times^2 +2 A_+ A_\times \sin(\phi_\times-\phi_+)}\right)\right]^{-1}\\
&= \frac{1}{2 A} \sqrt{\left(\frac{1 + \epsilon^2}{1 - \epsilon^2}\right)^2 - \cos^2 2\theta} \, .
\end{align}
\end{subequations}
The Jacobian factorizes into a piece for the size of the ellipse ($1/A$), and a less trivial piece for its shape and orientation (function of $\epsilon$ and $\theta$).
The $J_2 \propto 1/A$ dependence implies that an analysis with uniform priors in the linear polarization amplitudes will implicitly favor high overall signal power, as was the case for the circular amplitudes in Fig.~\ref{fig:jac_Aeps_Arl}.
Additionally, the dependence on the ellipse's shape implies that the Jacobian diverges to positive infinity for $\epsilon = \pm 1$, meaning that circular polarizations will be disfavored in this scenario.
Both those features are visible in Fig.~\ref{fig:jac_Aphi}, which shows the distribution imposed on all of our canonical parameters, $\{A,\epsilon, \theta, \phi\}$, by drawing uniformly in $0 < A_{+/\times} < A_{\max}$ and $0 < \phi_{+/\times} < 2\pi$, for some arbitrary scale $A_{\rm max}$.
The distribution increases proportionally with $A$ up to $A_{\max}$ and peaks strongly at $\epsilon = 0$, sharply favoring linear polarizations.
In fact, the $\theta$ dependence of \eq{jac_Aphi} implies that pure $+$ or $\times$ polarizations ($\theta=0$ or $\theta = \pm\pi/2$, respectively) will be favored over any other orientation, i.e., pure linear polarizations aligned with the frame used to define $+$ and $\times$ in \eq{Aphi}.
The sharpness of the $\epsilon$ and $\theta$ features in Fig.~\ref{fig:jac_Aphi} suggests that fully correcting for the Jacobian in \eq{jac_Aphi} will be challenging in sampling applications.
Therefore, the parametrization of \eq{Aphi} is likely nonperformant if the goal is to obtain results under a uniform prior in $\{A,\epsilon\}$---we found this to be the case in practice in the context of \cite{Chatziioannou:2021mij}.
The parametrization is otherwise also likely undesirable if there is no known orientation of the polarization frame to favor in writing down \eq{Aphi}, i.e., in the language of Sec.~\ref{sec:angles}, if there is no a priori preferred polarization angle $\psi$.
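As an independent cross-check of the closed-form Jacobian, one can compose the maps of this section numerically and compare a finite-difference determinant with the analytic expression. The sketch below (Python/NumPy, our construction, evaluated at an arbitrary generic point) assumes the circular-to-canonical relations $A = (A_R + A_L)/\sqrt{2}$, $\epsilon = (A_R - A_L)/(A_R + A_L)$, $\theta = (\phi_L - \phi_R)/2$, and $\phi = (\phi_R + \phi_L)/2$:

```python
import numpy as np

def canonical(params):
    # (A_+, A_x, phi_+, phi_x) -> (A, eps, theta, phi), going through the
    # circular Jones amplitudes C_{R/L} = (C_+ -/+ i C_x)/sqrt(2)
    Ap, Ac, pp, pc = params
    C_R = (Ap * np.exp(1j * pp) - 1j * Ac * np.exp(1j * pc)) / np.sqrt(2.0)
    C_L = (Ap * np.exp(1j * pp) + 1j * Ac * np.exp(1j * pc)) / np.sqrt(2.0)
    A_R, A_L = abs(C_R), abs(C_L)
    A = (A_R + A_L) / np.sqrt(2.0)
    eps = (A_R - A_L) / (A_R + A_L)
    theta = (np.angle(C_L) - np.angle(C_R)) / 2.0
    phi = (np.angle(C_R) + np.angle(C_L)) / 2.0
    return np.array([A, eps, theta, phi])

x0 = np.array([0.9, 0.4, 0.3, 1.1])  # generic point, away from phase branch cuts
h = 1e-6
# central-difference Jacobian matrix and its determinant
Jmat = np.column_stack([(canonical(x0 + h * e) - canonical(x0 - h * e)) / (2 * h)
                        for e in np.eye(4)])
J2_numeric = abs(np.linalg.det(Jmat))

A, eps, theta, _ = canonical(x0)
J2_closed = np.sqrt(((1 + eps**2) / (1 - eps**2))**2
                    - np.cos(2 * theta)**2) / (2 * A)
```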
\subsection{Linear polarization quadratures}
\label{sec:jac:Axy}
In the previous section we introduced the linear polarization quadratures $\xpc \equiv A_{+/\times} \cos \phi_{+/\times}$ and $\ypc \equiv A_{+/\times} \sin \phi_{+/\times}$, \eq{xy}, which are the Cartesian components (real and imaginary) corresponding to the complex-valued Jones amplitudes that encode the polarization state of the signal (see Fig.~\ref{fig:diagram_apac} and Sec.~\ref{sec:math}).
In \eq{Aphi_to_Cphi} we used these quantities to conveniently express the relation between the phases of the linear components of \eq{Aphi} and those of their circular counterparts, \eq{ellip_circ}, but their usefulness extends more widely.
Notably, the quadratures are usually more suitable for sampling applications, since working with periodic phases like $\phi_{+/\times}$ can be problematic for stochastic algorithms like Markov chain Monte Carlo (MCMC) \cite{Hogg:2017akh}.
Together with Gaussian priors, they can also make some problems analytically integrable \cite{Hogg:2020jwh}.
The usefulness of the linear-polarization quadratures stems from the fact that, unlike phase parameters like $\phi_{+/\times}$, they enter the waveform linearly.
Concretely, the expression for an elliptical monochromatic mode in terms of these quantities is
\begin{subequations} \label{eq:Axy}
\begin{equation}
h_+ = \xp \cos \omega t + \yp \sin \omega t \, ,
\end{equation}
\begin{equation}
h_\times = \xc \cos \omega t + \yc \sin \omega t \, .
\end{equation}
\end{subequations}
The relation of $\xpc$ and $\ypc$ to the linear polarizations of \eq{Aphi} is given directly by the definition in \eq{xy}; this relation in turn implies a transformation into the circular-polarization parameters given by
\begin{equation} \label{eq:Cphi_to_Axy}
\begin{cases}
A_R^2 = \frac{1}{2}\left[\left(\yp - \xc\right)^2 + \left(\xp + \yc\right)^2\right] \\
A_L^2 = \frac{1}{2}\left[\left(\yp + \xc\right)^2 + \left(\xp - \yc\right)^2\right] \\
\phi_R = \mathrm{atan2}\left(\yp -\xc, \xp + \yc \right)\\
\phi_L = \mathrm{atan2}\left(\yp + \xc, \xp - \yc \right)
\end{cases} ,
\end{equation}
where the last two lines are the same as in \eq{Aphi_to_Cphi}.
The inverse transformation is, as one might expect from Fig.~\ref{fig:diagram_apac},
\begin{equation} \label{eq:Axy_to_Cphi}
\begin{cases}
\xp = \frac{1}{\sqrt{2}} \left( \xr + \xl \right)\\
\yp = \frac{1}{\sqrt{2}} \left( \yr + \yl \right)\\
\xc = \frac{1}{\sqrt{2}} \left( \yl - \yr \right)\\
\yc = \frac{1}{\sqrt{2}} \left( \xr - \xl \right)\\
\end{cases} ,
\end{equation}
for circular-polarization quadratures defined as $\xrl \equiv A_{R/L}\cos\phi_{R/L}$ and $\yrl \equiv A_{R/L} \sin \phi_{R/L}$.
From \eq{Cphi_to_Aellip}, we can then derive a relation between $\{\xpc, \ypc\}$ and the canonical parameters $\{A,\epsilon,\theta,\phi\}$.
The inverse transformation, from $\{A,\epsilon,\theta,\phi\}$ into $\{\xpc, \ypc\}$ is easier to express succinctly and is given by
\begin{equation} \label{eq:Axy_to_Aellip}
\begin{cases}
\xp = A \left(\cos\theta \cos\phi + \epsilon \sin\theta \sin\phi\right)\\
\yp = A \left(\cos\theta \sin\phi - \epsilon \sin\theta \cos\phi\right)\\
\xc = A \left(\sin\theta \cos\phi - \epsilon \cos\theta \sin\phi\right)\\
\yc = A \left(\sin\theta \sin\phi + \epsilon \cos\theta \cos\phi\right)
\end{cases} ,
\end{equation}
as is straightforward to check based on \eq{Axy} and \eq{hcomp_ellip} by basic trigonometry.
The corresponding Jacobian is remarkably simple when expressed in terms of the ellipse amplitude and shape,
\begin{subequations} \label{eq:jac_Aeps_Axy}
\begin{align}
J_3 &\equiv 2 \left\{\sqrt{\left(\yp -\xc \right)^2 +\left(\xp +\yc\right)^2}
\right. \nonumber \\
&\times \sqrt{\left(\yp + \xc\right)^2 + \left(\xp - \yc\right)^2} \nonumber\\
&\times \left[ \sqrt{\left(\yp -\xc \right)^2 +\left(\xp +\yc\right)^2} \right. \nonumber\\
&+ \left.\left. \sqrt{\left(\yp + \xc\right)^2 + \left(\xp - \yc\right)^2} \right]\right\}^{-1} \\
&= \frac{1}{A^{3} \left(1 - \epsilon^2\right)} .
\end{align}
\end{subequations}
This Jacobian again factorizes into a piece for the size of the polarization ellipse and another for its shape, but without a dependence on the ellipse orientation.
The scale-dependent factor ($1/A^3$) indicates that a flat prior on $\{\xpc,\ypc\}$ will strongly favor large signal amplitudes.
Like in \eq{jac_Aphi}, this $J_3$ Jacobian diverges for $\epsilon = \pm 1$, which means that circular polarizations will be disfavored, albeit less strongly than with $J_2$.
The lack of dependence of $J_3$ on the ellipse orientation $\theta$ indicates that no specific polarization frame is preferred by this prior, reflecting the isotropy built into the definition of $\xpc, \ypc$.
The features described above are visible in the distribution imposed on $\{A,\epsilon,\theta,\phi\}$ by drawing uniformly on $\{\xpc, \ypc\}$, as shown in Fig.~\ref{fig:jac_Axy} for $A$ and $\epsilon$.
Over the targeted region ($A < A_{\max}$) the distribution steeply favors high signal amplitudes; it also favors linear polarizations ($\epsilon = 0$), although less sharply than in Fig.~\ref{fig:jac_Aphi}.
Enforcing $A < A_{\max}$, no specific value of $\theta$ or $\phi$ is preferred; without that constraint, however, a structure similar to that in Fig.~\ref{fig:jac_Aphi} would appear unless explicitly mitigated.
The constraint on the amplitude, $A < A_{\rm max}$, is crucial to guarantee isotropy in the ellipse orientation: without it, the corners of the squares defined by $-A_{\max} < \xpc,\ypc < A_{\max}$ would result in special directions of high probability, just as in the example of Fig.~\ref{fig:jac_example}.
The same result could be obtained by applying an intrinsically isotropic prior in the $\{\xpc, \ypc\}$ space, e.g., uncorrelated Gaussians.
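The closed form $J_3 = 1/[A^3(1-\epsilon^2)]$ admits the same kind of finite-difference verification as the other Jacobians; the sketch below (ours, at an arbitrary generic test point) maps the quadratures to the canonical parameters through the circular Jones amplitudes:

```python
import numpy as np

def canonical_from_quadratures(q):
    # (x_+, y_+, x_x, y_x) -> (A, eps, theta, phi): form the circular Jones
    # amplitudes, then apply the circular-to-canonical relations
    xp, yp, xc, yc = q
    C_R = ((xp + yc) + 1j * (yp - xc)) / np.sqrt(2.0)
    C_L = ((xp - yc) + 1j * (yp + xc)) / np.sqrt(2.0)
    A_R, A_L = abs(C_R), abs(C_L)
    A = (A_R + A_L) / np.sqrt(2.0)
    eps = (A_R - A_L) / (A_R + A_L)
    theta = (np.angle(C_L) - np.angle(C_R)) / 2.0
    phi = (np.angle(C_R) + np.angle(C_L)) / 2.0
    return np.array([A, eps, theta, phi])

q0 = np.array([0.7, 0.2, -0.3, 0.5])  # generic quadrature values
h = 1e-6
# central-difference Jacobian determinant of the quadrature-to-canonical map
Jmat = np.column_stack([
    (canonical_from_quadratures(q0 + h * e)
     - canonical_from_quadratures(q0 - h * e)) / (2 * h)
    for e in np.eye(4)])
J3_numeric = abs(np.linalg.det(Jmat))

A, eps = canonical_from_quadratures(q0)[:2]
J3_closed = 1.0 / (A**3 * (1 - eps**2))
```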
\subsection{Inclination of a planar source}
\label{sec:jac:cosi}
There are several applications for which it is desirable to make a connection between the ellipticity of a signal and the corresponding inclination angle of a source via \eq{ellip_cosi}.
This is because even unmodeled analyses, like \textsc{BayesWave}, often target sources that are dominated by the quadrupolar harmonic of the radiation ($\ell=|m|=2$), and that can be presumed to respect the planar symmetry that gave rise to that equation (see Sec.~\ref{sec:harmonics}).
In that case, a physically meaningful prior for the shape of the polarization ellipse is usually one that is uniform in $\cos\iota$, corresponding to an isotropic prior on the source orientation.
Such a prior is necessarily nonuniform in $\epsilon$, as can be inferred from the relation between the two quantities, illustrated in Fig.~\ref{fig:ellip_cosi}: uniform draws in $\cos\iota$ will necessarily favor circular polarizations over linear ones, since the $\epsilon$-versus-$\cos\iota$ curve flattens at the edges as $\cos\iota\to\pm 1$ and $\epsilon \to \pm 1$.
Conversely, a prior uniform in $\epsilon$ will necessarily disfavor face-on ($\cos\iota=+1$) or face-off ($\cos\iota=-1$) sources.
Indeed, the Jacobian $J_4 \equiv \left|\partial\epsilon/\partial \cos\iota\right|$, transforming from $\cos\iota$ to $\epsilon$, is
\begin{equation} \label{eq:jac_eps_cosi}
J_4 =1 - \epsilon^2 + \sqrt{1-\epsilon^2}
\propto \frac{1 - \cos^2\iota}{\left(1+\cos^2\iota\right)^2} \, ,
\end{equation}
which vanishes for $\cos\iota = \pm 1$ (or, equivalently, $\epsilon = \pm 1$) and peaks at $\cos\iota = 0$ ($\epsilon= 0$).
This indicates that a distribution uniform in $\cos\iota$ will place infinite weight on $\epsilon = \pm 1$, while a distribution uniform in $\epsilon$ will place no weight on $\cos\iota = \pm 1$ (left and right panels in Fig.~\ref{fig:jac_cosi}, respectively).
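Equation \eqref{eq:jac_eps_cosi} can be checked numerically assuming the standard quadrupolar relation $\epsilon = 2\cos\iota/(1+\cos^2\iota)$ (our assumption here, consistent with the flattening of the curve at $\cos\iota \to \pm 1$ described above); with that substitution, $1-\epsilon^2+\sqrt{1-\epsilon^2}$ agrees with $2(1-\cos^2\iota)/(1+\cos^2\iota)^2$ and with a finite-difference derivative $\mathrm{d}\epsilon/\mathrm{d}\cos\iota$:

```python
import numpy as np

# grid of inclinations, avoiding the exact endpoints where J_4 vanishes
cosi = np.linspace(-0.95, 0.95, 41)
eps = 2 * cosi / (1 + cosi**2)          # assumed quadrupolar relation

lhs = 1 - eps**2 + np.sqrt(1 - eps**2)  # J_4 in terms of eps
rhs = 2 * (1 - cosi**2) / (1 + cosi**2)**2

# central-difference d(eps)/d(cos i) should reproduce the same curve
h = 1e-6
deps = (2 * (cosi + h) / (1 + (cosi + h)**2)
        - 2 * (cosi - h) / (1 + (cosi - h)**2)) / (2 * h)
```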
The divergences of the Jacobian above complicate transformations from one prior to the other, and suggest their implementation in sampling applications is likely nonperformant---in other words, if the goal is to apply a uniform prior in $\cos\iota$, then we should sample in that quantity directly, not in $\epsilon$.
This issue becomes more pronounced if, rather than being uniform in $\epsilon$, the original prior itself disfavored $\epsilon = \pm 1$ in the first place.
This was the case for the parametrization in terms of $\hat{A}$ and $\chi$ in Sec.~\ref{sec:jac:Achi}, the linear polarization amplitudes in Sec.~\ref{sec:jac:Apc} and the linear polarization quadratures in Sec.~\ref{sec:jac:Axy}.
Of all these, the problem is most severe for the linear polarization amplitudes, since that parametrization places heavy weight (formally infinite) on $\epsilon = 0$ (Fig.~\ref{fig:jac_Aphi}).
Unfortunately, this was the parametrization used in \cite{Chatziioannou:2021mij}, which likely explains the difficulty in recovering the sky location of the circularly-polarized ($\cos\iota=1$) signal simulated in Fig.~11 of that work.
\section{Nontensor polarizations}
\label{sec:nongr}
\newcommand{\xsym}{\ensuremath{x}}
\newcommand{\ysym}{\ensuremath{y}}
\newcommand{\bsym}{\ensuremath{b}}
\newcommand{\lsym}{\ensuremath{l}}
\newcommand{\hx}{h_{\xsym}}
\newcommand{\hy}{h_{\ysym}}
\newcommand{\hb}{h_{\bsym}}
\newcommand{\hlon}{h_{\lsym}}
Metric theories beyond GR may allow for up to six independent polarizations, including the two tensor $+$ and $\times$ modes expected in GR.
The discussion above generalizes easily to include those additional modes, starting with an enhanced version of the driving matrix in Eq.~\eqref{eq:hij},
\beq \label{eq:hij_ngr}
(h_{ij}) = \begin{pmatrix}
\hb + h_+ & h_\times & \hx \\
h_\times & \hb - h_+ & \hy \\
\hx & \hy & \hlon
\end{pmatrix} ,
\eeq
where, in addition to plus and cross, there also appear the vector-$x$ (\xsym) and vector-$y$ (\ysym) modes, as well as the scalar breathing (\bsym) and longitudinal (\lsym) modes.%
\footnote{There are other possible normalizations in use in the literature, e.g., $\hlon \to \sqrt{2} \hlon$.}
Equivalently, as above, we can write this as a weighted sum over generalized polarization tensors,
\beq
h_{ij} = \sum_p h_p\, e^p_{ij} \, ,
\eeq
for $p$ in $\{+,\times, \xsym, \ysym, \bsym, \lsym\}$, and polarization tensors $e^p_{ij}$ defined implicitly by comparison with Eq.~\eqref{eq:hij_ngr}.
Generally, the $h_p$ are functions of time, as for plus and cross above.
With similar assumptions as in the GR case, the detector output can be written as a sum over polarizations weighted by antenna patterns,
\beq \label{eq:h_ngr}
h(t) = \sum_p F_p(\alpha, \delta; \psi)\, h_p(t)\, ,
\eeq
with $F_p \equiv D^{ij} e^p_{ij}$ as before.
The physical effect of the non-GR polarizations is encoded in the antenna patterns, and is illustrated in, e.g., Fig.~1 of \cite{Isi:2017equ}.
The considerations presented above regarding wave frame orientation and antenna pattern symmetries apply just as well to the generalized polarization tensor of Eq.~\eqref{eq:hij_ngr}, except for the different properties that the beyond-GR modes exhibit under rotations.
Polarizations of different spin weight do not mix with each other under rotations.
A rotation by $\Delta \psi$ around the line of propagation transforms the two vector amplitudes by
\begin{subequations} \label{eq:htransf_v}
\beq
\hx \rightarrow \hx' = \hx \cos \Delta \psi - \hy \sin \Delta\psi \, ,
\eeq
\beq
\hy \rightarrow \hy' = \hx \sin \Delta \psi + \hy \cos \Delta\psi \, ,
\eeq
\end{subequations}
reflecting the fact that these are the components of a spin weight $|s|=1$ field (hence ``vector'').
Accordingly, any transformation in which the polarization angle entered as $2\psi$ for the tensor modes will look the same for vector modes but with the angle entering simply as $\psi$.
In particular, the two vector modes allow for the definition of right and left handed combinations in full analogy with \eq{circ},
\beq \label{eq:circ_vec}
e^{v,R/L}_{ij} \equiv \frac{1}{\sqrt{2}} \left(e^\xsym_{ij} \pm i e^\ysym_{ij} \right) ,
\eeq
except that they correspond to eigenstates of the helicity operator with eigenvalues $\pm 1$, instead of $\pm 2$.
These circular vector modes transform, in analogy with \eq{htransf_circ}, by
\begin{subequations} \label{eq:htransf_circ_vec}
\begin{align}
h_{v,R} &\rightarrow h_{v,R}' = h_{v,R} \exp(- i \Delta \psi) \, ,\\
h_{v,L} &\rightarrow h_{v,L}' = h_{v,L} \exp(+ i \Delta \psi)\, .
\end{align}
\end{subequations}
Just like we can define circular vector modes, we can also construct elliptically polarized vector states.
These take on the same fundamental role for vector GWs as detailed in Sec.~\ref{sec:ellip_modes} for their tensor counterparts; in this case, the mathematical formalism for polarization states is identical to that of electromagnetic waves, which also correspond to a field of spin weight $\left|s\right|=1$.
On the other hand, the two scalar modes are invariant under rotations around $z$,
\begin{subequations} \label{eq:htransf_s}
\beq
\hb \rightarrow \hb' = \hb\, ,
\eeq
\beq
\hlon \rightarrow \hlon' = \hlon\, ,
\eeq
\end{subequations}
revealing that these behave as spin-weight $s=0$ fields (hence ``scalar'').
Since these modes are already invariant under rotations, there is no meaningful notion of a circular (or elliptical) scalar polarization.
Furthermore, in the small-antenna limit, differential-arm GW detectors are only sensitive to the traceless linear combination of the two scalar polarizations.
In terms of the breathing and longitudinal modes above, this is
\beq
h_{\rm s} \equiv \hb - \hlon\, ,
\eeq
which is the only scalar mode measurable by existing detectors.%
\footnote{The effect of this traceless scalar mode is to simultaneously stretch (squeeze) along the $x$ and $y$ directions while squeezing (stretching) along the $z$ direction; the pure-trace scalar mode, to which current detectors are insensitive, stretches and squeezes space isotropically in all three directions.}
Equivalently, the geometric antenna patterns for the breathing and longitudinal modes are the same up to an overall constant (with our normalization, $F_{\bsym} = -F_{\lsym}$).
Therefore, the two terms are degenerate in Eq.~\eqref{eq:h_ngr} and their contributions cannot be disentangled in a model-independent way, i.e., without theory- and source-specific information about the detailed morphology of the $\hb(t)$ and $\hlon(t)$ functions.
For unmodeled analyses, it thus suffices to include only one scalar term in Eq.~\eqref{eq:h_ngr}---commonly that for the breathing mode---so that the sum is over only five polarizations instead of six.
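The degeneracy can be illustrated numerically: for any trace-free detector tensor $D$, contracting with the breathing and longitudinal polarization tensors of \eq{hij_ngr} gives $F_{\bsym} = -F_{\lsym}$ identically. The sketch below (ours) checks this for a random symmetric trace-free $D$:

```python
import numpy as np

# generic symmetric, trace-free detector tensor
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
D = 0.5 * (M + M.T)
D -= np.eye(3) * (np.trace(D) / 3.0)

e_b = np.diag([1.0, 1.0, 0.0])   # breathing polarization tensor
e_l = np.diag([0.0, 0.0, 1.0])   # longitudinal polarization tensor

# geometric antenna patterns F_p = D^{ij} e^p_{ij}
F_b = float(np.tensordot(D, e_b))
F_l = float(np.tensordot(D, e_l))
```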
The rest of the mathematical formalism covered in Sec.~\ref{sec:ellip_modes} can easily be extended to accommodate nontensor modes.
In particular, a generalized definition of Stokes parameters was derived in \cite{Anile1974} to account for all helicities---this requires 36 Stokes parameters.
However, the practical utility of such fully-generalized Stokes parameters is unclear, since the polarizations of different helicities do not mix into each other under rotations.
Instead, it is possible to simply enhance the set of four tensor Stokes parameters by an additional four vector Stokes parameters (defined analogously), plus one intensity parameter for each of the two scalar modes;
this adds up to 10 polarization parameters, instead of 36, at the expense of ignoring potential coherence across modes of different spin weight.
\section{Conclusion}
\label{sec:conclusion}
We have reviewed in detail the mathematical treatment of GW polarizations as it pertains to practical applications in GW data analysis.
We began by showing how any GW signal can be decomposed into linear (Fig.~\ref{fig:rings}), circular (Fig.~\ref{fig:pol_diagram_circ}) or elliptical (Fig.~\ref{fig:pol_diagram_ellip}) modes, after choosing a physical polarization frame.
Arguing for the conceptual importance of elliptical (i.e., fully-polarized) modes, we outlined several of their key properties and reviewed a number of standard mathematical tools (Jones vectors, Poincar\'{e} sphere, Stokes parameters) useful in their description.
Since a large number of signal morphologies can be captured by superpositions of fully-polarized states, we emphasized their practical importance for GW data analysis in unmodeled (or loosely modeled) applications, as well as in connection to the decomposition of the GW strain from planar sources (e.g., nonprecessing \acp{CBC}) into spin-weighted spherical harmonics.
We then clarified the conceptual distinctions between different notions of ``polarization angle'' ($\psi$, $\theta$, and $\Psi$ or $\Omega$) and showed how the different angles can often (but not always) be used interchangeably in practice.
In the process, we described in detail the different coordinate frames that appear in the practice of GW data analysis, including for making waveform predictions, for describing the wave propagation, and for deriving the measured signal over a network of detectors.
The current LIGO-Virgo conventions for all these frames are illustrated in Figs.~\ref{fig:diagram_waveframe}, \ref{fig:diagram_sourceframe} and \ref{fig:diagram_skyview}, which clarify the relations between all the relevant angles.
(Appendix \ref{app:spins} describes an additional coordinate frame used to specify generic spins in a binary, even though it is not directly relevant to GW polarizations.)
To lay out the connections between analyses that make use of different polarization parametrizations, we computed Jacobians for the corresponding coordinate transformations.
This allowed us to understand the implications of parametrizations for the polarization of a signal assumed by, e.g., \textsc{BayesWave} or ringdown analyses.
We found that parametrizing the GW signal in terms of the circular polarization amplitudes (Fig.~\ref{fig:jac_Aeps_Arl}), the linear polarization amplitudes (Fig.~\ref{fig:jac_Aphi}), or the cosine and sine quadratures of the linear polarizations (Fig.~\ref{fig:jac_Axy}) leads to implicitly favoring high signal intensities.
The parametrizations in terms of the linear amplitudes or their quadratures, as well as a parametrization in terms of an ellipse shape angle $\chi$ (Fig.~\ref{fig:jac_Aeps_Achi}), lead to favoring linear polarizations.
This preference is particularly pronounced for the parametrization in terms of the linear amplitudes, which also picks a preferred direction for the polarization ellipse, aligned with the plus and cross axes as determined by the implicit definition of the physical waveframe (choice of $\psi$).
We also showed how to relate the ellipticity to the inclination of a planar source, and how an isotropic prior in the source inclination is highly nonuniform in terms of ellipticity, favoring circular polarizations.
In the last section, we briefly touched on the generalization to metric theories of gravity with additional (nontensor) modes, for which the mathematical treatment is, for the most part, exactly analogous.
\begin{acknowledgments}
I would like to thank Will Farr, Katerina Chatziioannou and Leo Stein for insightful discussions, as well as Jose Mar\'ia Ezquiaga and Jolien Creighton for comments on the draft.
The Flatiron Institute is a division of the Simons Foundation, supported through the generosity of Marilyn and Jim Simons.
This paper carries LIGO document number \dcc{}.
\end{acknowledgments}
\appendix
\section{Compact-binary spin frames}
\label{app:spins}
Besides the polarization-related frames discussed in the main text, additional coordinates come into play when describing a precessing \ac{CBC}.
These are required to specify the orientations of the spins of the individual objects in the binary, since these are not aligned with the orbital angular momentum for the case of a precessing system.
Even though these coordinates are not directly relevant to the description of GW polarizations, we describe them here for completeness following the current LIGO-Virgo convention \cite{LALSuite:spins} (conventions occasionally change \cite{Pfeiffer:T1800226}).
The component spin vectors, $\vec{S}_{1/2}$, are prescribed at an arbitrary reference time (e.g., the moment when the signal reaches 20 Hz at the detector) in a Cartesian frame with $z$-axis along the orbital angular momentum, $\vec{L}$, with $x$-axis along the line pointing from the lighter object ($m_2$) to the heavier object ($m_1$), and with $y$-axis completing the right-handed triad; this $L$-based coordinate frame is shown in blue in Fig.~\ref{fig:spins}.
In the above frame, the spin components are specified relative to the orbital plane.
For a precessing system, the orientation of the binary itself is set with respect to the observer through the angle $\theta_{JN}$ between the direction of propagation $\hat{k}$ and the total angular momentum of the binary, $\vec{J} \equiv \vec{L} + \vec{S}_1 + \vec{S}_2$ (Fig.~\ref{fig:spins} in black); this angle is similar to $\iota$ except for being defined relative to $\vec{J}$ instead of $\vec{L}$ (the two angles are the same for nonprecessing systems).
An additional angle, $\phi_{JL}$, establishes the orientation of $\vec{L}$ around $\vec{J}$, measured azimuthally with respect to the vector $\hat{x}_J$ perpendicular to the plane containing both $\vec{J}$ and $\hat{k}$, i.e., $\hat{x}_J = \hat{k} \times \hat{J} / |\hat{k} \times \hat{J}|$ in Fig.~\ref{fig:spins}.
The final degree of freedom is set by specifying the orbital phase $\phi_{\rm orb}$ at the reference time, defined as the angle spanned by the location of the primary body with respect to the line of nodes ($\ascnode$) within the orbital plane.
For a nonprecessing binary, $\vec{J}$ is parallel to $\vec{L}$, and so $\phi_{JL}$ is undefined.
Meanwhile, $\theta_{JN}$ reduces to the angle $\iota$, which is defined to be the angle between $\hat{k}$ and $\hat{L}$ (Fig.~\ref{fig:diagram_sourceframe}).
The term ``inclination angle'' can refer either to $\theta_{JN}$ or $\iota$ depending on context.
\bibliography{refs}
|
Title:
Target Selection and Validation of DESI Emission Line Galaxies |
Abstract: The Dark Energy Spectroscopic Instrument (DESI) will precisely constrain
cosmic expansion and the growth of structure by collecting $\sim$40 million
extra-galactic redshifts across $\sim$80\% of cosmic history and one third of
the sky. The Emission Line Galaxy (ELG) sample, which will comprise about
one-third of all DESI tracers, will be used to probe the Universe over the $0.6
< z < 1.6$ range, which includes the $1.1<z<1.6$ range, expected to provide the
tightest constraints.
We present the target selection of the DESI SV1 Survey Validation and Main
Survey ELG samples, which relies on the Legacy Surveys imaging. The Main ELG
selection consists of a $g$-band magnitude cut and a $(g-r)$ vs.\ $(r-z)$ color
box, while the SV1 selection explores extensions of the Main selection
boundaries.
The Main ELG sample is composed of two disjoint subsamples, which have target
densities of about 1940 deg$^{-2}$ and 460 deg$^{-2}$, respectively. We first
characterize their photometric properties and density variations across the
footprint. Then we analyze the DESI spectroscopic data obtained since December
2020 during the Survey Validation and the Main Survey up to December 2021. We
establish a preliminary criterion to select reliable redshifts, based on the
\oii~flux measurement, and assess its performance. Using that criterion, we are
able to present the spectroscopic efficiency of the Main ELG selection, along
with its redshift distribution. We thus demonstrate that the the main selection
with higher target density sample should provide more than 400 deg$^{-2}$
reliable redshifts in both the $0.6<z<1.1$ and the $1.1<z<1.6$ ranges.
| https://export.arxiv.org/pdf/2208.08513 | .
\begin{document}
\title{Target Selection and Validation of DESI Emission Line Galaxies}
\correspondingauthor{Anand Raichoor}
\email{araichoor@lbl.gov}
\input{DESI-2021-0104_author_list.tex}
\keywords{Emission line galaxies, Surveys, Large-scale structures}
\section{Introduction} \label{sec:intro}
Since the observation of the acceleration of the expansion of the Universe \citep{riess98a,perlmutter99a}, the cosmology community has focused its efforts on gathering data that provide the most precise possible observational constraints.
Several cosmological probes are used in order to perform independent measurements with different systematics (see \citealt{weinberg13a} for a review), the most established methods being
Type Ia supernovae and baryonic acoustic oscillations (BAO) to constrain the geometry of the Universe,
and weak lensing, galaxy clusters, and redshift-space distortions (RSD) to constrain the growth of structures.
To reach that goal, dedicated facilities will survey large fractions of the sky with high-quality imaging (e.g., DES: \citealt{des-collaboration05a}, HSC: \citealt{aihara18a}, \textit{Euclid}: \citealt{laureijs11a}, LSST: \citealt{ivezic19a}) and/or massive spectroscopy (e.g., 2dF: \citealt{colless03a}, 6dF: \citealt{jones09a}, BOSS: \citealt{dawson13a}, WiggleZ: \citealt{drinkwater10a}, eBOSS: \citealt{dawson16a}, DESI, \textit{Euclid}, PFS: \citealt{takada14a}).
Massive spectroscopic surveys probe the large-scale structures (LSS) of the matter distribution, by measuring the spectroscopic redshift ($\zspec$) of a vast number of galaxies over large areas and different epochs.
One strength of this approach is that the very same dataset allows one to constrain at the same time the geometry of the Universe with the BAO scale \citep{eisenstein98a} and the growth of structures with the RSD \citep{kaiser87a} method.
The SDSS experiment \citep{york00a} has been a pioneer of such surveys, providing one of the first BAO detections \citep{eisenstein05a}.
\citet{alam21a} summarize and analyze twenty years of SDSS spectroscopic observations of about two million $\zspec$ over $0 < z < 5$ and 10,000 deg$^2$, which led to state-of-the-art constraints on the Hubble constant ($H_0 = 68.18 \pm 0.79$ km s$^{-1}$ Mpc$^{-1}$) and the $\sigma_8$ parameter normalizing the growth of structures ($\sigma_8 = 0.85 \pm 0.03$).
The DESI experiment \citep{levi13a,desi-collaboration16a,desi-collaboration16b} will pursue this effort and increase the number of observed $\zspec$ by an order of magnitude, with about 40 million extra-galactic $\zspec$ over 14,000 deg$^2$.
DESI will follow the same approach as SDSS, and use an optimized tracer for each targeted redshift range.
About 13 million bright galaxy sample (BGS) galaxies will cover the $0.05 < z < 0.4$ range,
about 8 million luminous red galaxies (LRG) will cover the $0.4 < z < 1.1$ range,
about 16 million emission line galaxies (ELG) will cover the $0.6 < z < 1.6$ range,
and lastly about 3 million quasars (QSO) will be observed at $z > 0.9$: they will be used as direct tracers in the $0.9 < z < 2.1$ range, and their Ly-$\alpha$ forests will probe the intergalactic medium at $z > 2.1$.
Additionally, DESI will observe about 10 million stars from the Milky Way Survey (MWS).
The BGS and MWS programs will be observed in `bright' time, i.e. when the Moon is up, whereas the other tracers (LRG, ELG, QSO) will be observed in `dark' time, i.e. when the Moon is down.
This paper is dedicated to the DESI ELG sample, which is composed of star-forming galaxies.
The principle of the ELG sample is to take advantage of the two following facts:
(1) the cosmic star-formation rate density peaks at $z \sim 1$-2 \citep[e.g.,][]{madau14a}, thus star-forming galaxies were very common at that epoch;
(2) the ELG $\zspec$ can be reliably measured in a rather short amount of observation time, as it only requires a significant detection of the emission lines in the spectrum, with no need to significantly detect the continuum; in particular, the \oii doublet $\lambda \lambda$ 3726,29 \AA~offers an unambiguous signature of the $\zspec$ (see for instance \citealt{moustakas06a} for the link between the \oii line-strength and the star formation).
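As a quick numerical check of fact (2), one can compute the redshift at which the \oii doublet redshifts beyond the spectrograph coverage; the $\sim$9800~\AA~red limit assumed below is approximate and only illustrative:

```python
# Redshift reach of the [OII] doublet, assuming an approximate red limit
# of ~9800 Angstrom for the DESI spectrograph coverage (an assumption here).
OII_REST = 3727.0    # mean rest wavelength of the [OII] 3726,29 doublet [Angstrom]
LAMBDA_MAX = 9800.0  # assumed red end of the wavelength coverage [Angstrom]

def oii_observed_wavelength(z):
    """Observed wavelength of the [OII] doublet at redshift z."""
    return OII_REST * (1.0 + z)

z_max = LAMBDA_MAX / OII_REST - 1.0
print(f"[OII] leaves the assumed coverage at z ~ {z_max:.2f}")  # ~1.6
```

This is consistent with the $z < 1.6$ upper limit of the ELG sample discussed throughout this paper.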
Some reference intensive spectroscopic surveys sampled that ELG population at $z \sim 1$-2 over a few square degrees (e.g., VVDS: \citealt{le-fevre13a}, or DEEP2: \citealt{newman13a}), paving the way for their use in spectroscopic cosmological experiments.
For the above reasons, the ELG tracer is a key tracer of this decade's massive spectroscopic surveys (e.g., DESI, \textit{Euclid}, PFS), and will constitute about one-third of DESI spectra.
The DESI ELG sample will probe the Universe over the $0.6 < z < 1.6$ range, and in particular over the $1.1 < z < 1.6$ range, which will bring the tightest of the DESI cosmological constraints.
It will be the first survey to densely sample this redshift range, providing faint targets not extensively explored by any previous survey.
For instance, the eBOSS ELG sample \citep{raichoor17a,raichoor21a} has a target density about ten times smaller and targets about one magnitude brighter than the DESI ELG sample.
In that respect, the DESI ELG sample will be the first of its kind, and thus provides several challenges.
First, the target density needs to be high, about 2400 deg$^{-2}$, because ELG targets will be assigned fibers after the LRG and QSO targets; the selection must therefore provide enough targets that a target can be reached by each fiber most of the time.
This requires selecting rather faint targets; however, another constraint is the requirement that the number of $\zspec$ measurement failures in a typical DESI exposure (15 minutes in nominal conditions) remain a reasonable fraction of observed spectra.
For that purpose, a large enough fraction of the targets needs to have sufficient \oii flux to secure a reliable $\zspec$ measurement.
A quantified requirement is that the DESI ELG target sample provides at least 400 deg$^{-2}$ reliable $\zspec$ in both the $0.6 < z < 1.1$ and $1.1 < z < 1.6$ ranges, as Fisher forecasts show that this is sufficient to reach the required cosmological precision of the DESI experiment \citep{desi-collaboration22b}.
Lastly, as for other tracers, the DESI ELG sample must have a fraction of catastrophic $\zspec$ measurements (`catastrophics') as low as possible (of the order of one percent), the LSS analysis being very sensitive to catastrophic $\zspec$.
To meet these requirements, the DESI experiment conducted a Survey Validation program (December 2020 to May 2021) before starting the actual Main Survey in May 2021.
The first part of the Survey Validation program, hereafter called SV1 (December 2020 to March 2021), consisted of deep observations of extended target selections for all tracers.
Those SV1 data have been used to fine-tune the Main Survey target selections.
The second part of the Survey Validation program was the One-Percent Survey, hereafter called One-Percent (April -- May 2021), where target selections close or identical to the Main Survey ones were observed at very high completeness.
This paper is part of a series of papers presenting the DESI target selections and their characterization.
\citet{desi-collaboration22b} present an overview of the DESI spectroscopic observations and the tracers used by those papers, and \citet{myers22a} present how those target selections are implemented in DESI.
\citet{lan22a} and \citet{alexander22a} present the construction of spectroscopic truth tables based on visual inspection (VI) for the galaxy (BGS, LRG, ELG) and QSO targets, respectively.
The MWS sample is presented in \citet{cooper22a},
the BGS sample is presented in \citet{hahn22a},
the LRG sample is presented in \citet{zhou22a},
the ELG sample is presented in this paper, and
the QSO sample is presented in \citet{chaussidon22a}.
Those five target selection papers present the DESI final samples, and supersede the preliminary target selections presented in \citet{allende-prieto20a}, \citet{ruiz-macias20a}, \citet{zhou20a}, \citet{raichoor20a}, and \citet{yeche20a}.
This paper is structured as follows.
Section~\ref{sec:photdata} presents the imaging, the footprints, and the photometry used to select the ELG targets.
We then present the Main Survey ELG target selection in Section~\ref{sec:maints}, the SV1 ELG target selection in Section~\ref{sec:svts}, and the Main Survey ELG sample photometric properties in Section~\ref{sec:mainphot}.
Section~\ref{sec:specdata} introduces the DESI spectroscopic data (SV1, One-Percent, and Main survey observations up to December 2021), used in Section~\ref{sec:mainspec} to analyze the spectroscopic properties of the Main Survey ELG sample.
We conclude in Section~\ref{sec:concl}.
All magnitudes are in the AB system \citep{oke83a}, and corrected for Galactic extinction using the \citet{schlegel98a} maps.
All displayed sky maps use the \texttt{Healpix} scheme \citep{gorski05a} with a resolution of 0.21 deg$^2$ (\texttt{nside} = 128), but the computation in Section~\ref{sec:photsyst} uses a finer resolution of 0.05 deg$^2$ (\texttt{nside} = 256).\\
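The quoted pixel areas follow directly from the \texttt{Healpix} scheme, which tiles the sphere with $12 \times \texttt{nside}^2$ equal-area pixels; a minimal check:

```python
import math

def healpix_pixel_area_deg2(nside):
    """Area of one HEALPix pixel in deg^2; the sphere is divided into
    12 * nside^2 equal-area pixels."""
    npix = 12 * nside ** 2
    steradians_per_pixel = 4.0 * math.pi / npix
    return steradians_per_pixel * (180.0 / math.pi) ** 2

print(healpix_pixel_area_deg2(128))  # ~0.21 deg^2 (displayed sky maps)
print(healpix_pixel_area_deg2(256))  # ~0.05 deg^2 (finer-resolution computation)
```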
\section{Imaging, footprints, and photometry} \label{sec:photdata}
The DESI ELG targets are selected from the $grz$-photometry of the DR9 release of the Legacy Imaging Surveys\footnote{\niceurl{https://www.legacysurvey.org/dr9}} \citep[LS-DR9;][]{schlegel22a}.
This release covers about 19,700 deg$^2$ in the optical $grz$-bands -- complemented with the Wide-field Infrared Survey Explorer (\textit{WISE}) near-infrared data \citep{meisner21a}.
We present here a brief description of the optical imaging and the photometry, focusing on the part relevant for the ELG target selection, and refer the reader to \citet{schlegel22a} for more details.
\subsection{Imaging} \label{sec:imaging}
The optical $grz$-imaging comes from several observing programs.
The northern part of the North Galactic Cap (NGC) comes from two programs: the Beijing-Arizona Sky Survey \citep[BASS,][]{zou17a} provides the $g$- and $r$-bands observed with the 90Prime camera on the Bok 2.3 m telescope; and the Mayall z-band Legacy Survey (MzLS) provides the $z$-band observed with the Mosaic-3 camera on the 4 m Mayall telescope at Kitt Peak National Observatory (KPNO).
The southern part of the NGC and the South Galactic Cap (SGC) mostly come from two programs, the Dark Energy Camera Legacy Survey \citep[DECaLS,][]{dey19a} and the Dark Energy Survey \citep[DES,][]{des-collaboration05a}; both use the Dark Energy Camera \citep[DECam,][]{flaugher15a} on the 4 m Blanco telescope at the Cerro Tololo Inter-American Observatory (CTIO).
We note that the DECaLS, BASS, and MzLS surveys followed a dynamic observing strategy to achieve as uniform a depth as possible across the footprints.
In particular, the considered depths account for the Galactic extinction, i.e. the imaging is deeper in regions with high Galactic extinction, so that the target selection should be less sensitive to the Galactic dust map (see Section 6.2 of \citealt{dey19a}).
Nevertheless, because of the capping of the individual imaging exposure time, this strategy cannot be applied for the $g$-band imaging for $\rm{E(B-V)} \gtrsim 0.15$, as the $g$-band has the largest extinction factor\footnote{The A/E(B-V) coefficients are 3.214, 2.165, and 1.211 for the $g$-, $r$-, and $z$-band, respectively; see \niceurl{https://www.legacysurvey.org/dr9/catalogs/\#galactic-extinction-coefficients}}.
The DES survey did not follow such a strategy, but as it is fairly deep, the effect of the Galactic extinction on the imaging depth is less critical for the ELG targets.
Figure~\ref{fig:ebvdepth} illustrates this approach for the $g$-band where the extinction factor is the largest.
In regions of high extinction (top panel), the $g$-band depth (middle panel) is larger, which results in a rather homogeneous extinction-corrected depth map in each of the three footprints of the imaging program (bottom panel).
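The footnoted A/E(B-V) coefficients translate into per-band extinctions and transmissions as sketched below (the \texttt{mw\_transmission} convention mirrors the LS-DR9 catalog columns; this is an illustrative sketch, not pipeline code):

```python
# Galactic extinction per band, from the A/E(B-V) coefficients quoted above.
EXT_COEFF = {"g": 3.214, "r": 2.165, "z": 1.211}

def extinction_mag(band, ebv):
    """Extinction A_band in magnitudes for a given E(B-V)."""
    return EXT_COEFF[band] * ebv

def mw_transmission(band, ebv):
    """Fractional flux transmission through Galactic dust."""
    return 10.0 ** (-0.4 * extinction_mag(band, ebv))

# At E(B-V) = 0.15, where the g-band dynamic-depth strategy saturates,
# the g-band extinction is ~0.48 mag, i.e. ~36% of the flux is lost.
print(extinction_mag("g", 0.15), mw_transmission("g", 0.15))
```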
\subsection{Footprints} \label{sec:footprints}
As a result of the different programs providing the imaging, different parts of the footprint have different imaging depths, which matters for the ELG target selection, as those tracers are faint in imaging.
To illustrate this point, Figure~\ref{fig:photsnr} displays the normalized cumulative distributions of the signal-to-noise ratio (SNR) in the selection band\footnote{For instance $\rm SNR = \texttt{flux\_g} \times \sqrt{\texttt{flux\_ivar\_g}}$ for the ELG targets.} over the nominal DESI footprint for the three dark tracers of DESI.
The ELG targets have a typical SNR of 13 in the imaging, whereas the QSO and LRG targets have a typical SNR of 36 and 60, respectively, and thus are less sensitive to the depth variations across the footprints.
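The footnoted SNR definition is simply the flux divided by its uncertainty, since \texttt{flux\_ivar} is the inverse variance of the flux; a minimal sketch (fluxes in nanomaggies, the LS-DR9 flux unit):

```python
import math

def selection_band_snr(flux, flux_ivar):
    """SNR = flux * sqrt(inverse variance) = flux / sigma."""
    return flux * math.sqrt(flux_ivar)

# Hypothetical example: a 10 nanomaggy source with sigma = 0.77 nanomaggies
# has SNR ~ 13, the typical value quoted for ELG targets.
sigma = 0.77
print(selection_band_snr(10.0, 1.0 / sigma ** 2))
```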
For that reason, we define three footprints, which will be analyzed separately in this paper:
the North, corresponding to the Dec. $>$ 32.375$^\circ$ part of the NGC, covered by BASS and MzLS;
the South-DECaLS, corresponding to the non-DES SGC part and the Dec. $<$ 32.375$^\circ$ part of the NGC;
the South-DES, corresponding to the DES imaging in the SGC.
Those footprints can be visualized in the depth maps in Figure~\ref{fig:ebvdepth}, where the North is displayed in blue-green, the South-DECaLS in yellow-orange, and the South-DES in red.
Table~\ref{tab:imagdens} reports the approximate areas and imaging depths per footprint.
One notices that the North footprint is about 0.5 mag shallower in the $g$- and $r$-bands than the South-DECaLS footprint, and the South-DES footprint is about 0.5-1.0 mag deeper than the South-DECaLS footprint in all three $grz$-bands.
As ELG targets are faint, those depth differences impact the target selection, in terms of detected objects and contamination, as will be seen later in the paper.\\
\begin{table*}
\centering
\begin{tabular}{lccccccc}
\hline
\hline
Footprint & LS-DR9 area & DESI area & $g$-depth & $r$-depth & $z$-depth & ELG\_LOP density & ELG\_VLO density\\
& $[$deg$^2]$ & $[$deg$^2]$ & $[$AB mag$]$ & $[$AB mag$]$ & $[$AB mag$]$ & $[$deg$^{-2}]$ & $[$deg$^{-2}]$\\
\hline
North & 5100 & 4400 & 24.1 & 23.5 & 23.0 & 1930 & 410\\
South-DECaLS & 9500 & 8500 & 24.5 & 23.9 & 23.0 & 1950 & 490\\
South-DES & 5100 & 1100 & 24.9 & 24.7 & 23.5 & 1900 & 480\\
\hline
\end{tabular}
\caption{
Imaging properties and ELG target density per footprint for each of the two Main ELG samples (ELG\_LOP and ELG\_VLO).
Areas are approximate.
Depths are 5-$\sigma$ depths for a 0.45 arcsec radius exponential profile, typical of DESI ELGs.
}
\label{tab:imagdens}
\end{table*}
\subsection{Photometry} \label{sec:photometry}
The overall data reduction and photometry is performed with the \texttt{legacypipe}\footnote{\niceurl{https://github.com/legacysurvey/legacypipe}} pipeline.
The LS-DR9 images are astrometrically calibrated with \textit{Gaia} DR2 \citep{gaia-collaboration18a} and photometrically calibrated with Pan-STARRS 1 \citep{chambers16a}, using color terms to place the photometry on the same system as the LS-DR9 one.
The photometry is performed with the \texttt{Tractor} software \citep{lang16a,lang22a}.
Source detection is done on stacked images, then all measurements are based on individual exposures.
Each source is modeled with an analytic profile (point-source, exponential with fixed parameters, exponential, de Vaucouleurs or S\'{e}rsic) and a model image is generated for each exposure.
Increasingly more complex profiles are allowed for sources detected with higher SNR.
The source properties (position, shape, flux) are measured through a likelihood optimization ($\chi^2$ minimization) of the set of model images covering the considered region.
Based on the best-fit properties, \texttt{Tractor} provides the total flux of each source, as well as its `fiber flux', which corresponds to the predicted flux within a fiber of diameter 1.5 arcsec -- the size of a DESI fiber -- for 1 arcsec Gaussian seeing.
Those fiber fluxes hence predict the typical amount of light that a DESI fiber would see.
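For intuition, the fiber flux of a point source can be estimated analytically for a circular Gaussian PSF; the sketch below is an idealization (in practice \texttt{Tractor} convolves the full best-fit profile, so extended sources retain a smaller fraction):

```python
import math

def gaussian_aperture_fraction(fwhm_arcsec, fiber_diameter_arcsec):
    """Fraction of a point source's flux falling within a circular fiber,
    for a circular Gaussian PSF: 1 - exp(-r^2 / (2 sigma^2))."""
    sigma = fwhm_arcsec / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> sigma
    r = fiber_diameter_arcsec / 2.0
    return 1.0 - math.exp(-r ** 2 / (2.0 * sigma ** 2))

# Point source, 1 arcsec Gaussian seeing, 1.5 arcsec DESI fiber:
print(gaussian_aperture_fraction(1.0, 1.5))  # ~0.79
```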
\section{Main Survey Target Selection} \label{sec:maints}
In this section, we present the DESI Main Survey ELG target selection.
The selection cuts are detailed in Table~\ref{tab:maincuts} and illustrated in Figure~\ref{fig:maingrz}.
The extended selection explored in the DESI SV1 program, which was used to finalize those Main Survey cuts, is presented in Section~\ref{sec:svts}.
The DESI ELG sample is the first of its kind, with no previous significant reference sample observed so far.
For instance, the VVDS or DEEP2 observations covered a few square degrees; the WiggleZ \citep{drinkwater10a} or the eBOSS/ELG \citep{raichoor17a, raichoor21a} surveys observed about one thousand square degrees, but their ELG samples were more than one magnitude brighter, had a five-to-ten-times lower density (about 200-400 deg$^{-2}$), and only extended to $z < 1.1$.
The first proposed DESI ELG selection was based on a simple $g-r$ vs. $r-z$ selection \citep{desi-collaboration16a}, though it could not be spectroscopically tested at that time.
\citet{karim20a} explored that selection, along with more advanced ones, with dedicated spectroscopic observations.
This pilot program demonstrated that all tested selections had similar overall performance.
Based on that analysis, for the sake of simplicity and robustness, we chose simple color-color cuts in the $g-r$ vs. $r-z$ diagram to select the DESI ELG targets.
We describe hereafter the chosen cuts.
\subsection{ELG\_LOP and ELG\_VLO subsamples} \label{sec:mainsubs}
As mentioned in Section \ref{sec:intro}, the goal of the DESI ELG sample is to provide cosmological constraints over the $0.6 < z < 1.6$ range, favoring as much as possible the $1.1 < z < 1.6$ range, where other DESI samples are the least dense.
To do so, the ELG sample is split in two disjoint samples, ELG\_LOP at $\sim$1940 deg$^{-2}$ and ELG\_VLO at $\sim$460 deg$^{-2}$\footnote{The area to compute those densities does not account for the $\sim$1\% area removed with the angular masking described in Section \ref{sec:qualcuts}.}.
We recall that the DESI observations use priorities to assign fibers to targets \citep{raichoor22b}.
ELG\_LOP has higher priority in the fiber assignment and favors the $1.1 < z < 1.6$ range, whereas ELG\_VLO has lower priority and favors the $0.6 < z < 1.1$ range.
With that fiber assignment configuration, cosmological Fisher forecasts demonstrate that the ELG sample fulfills the expected performance \citep{desi-collaboration22b}.
The names of those two samples, ELG\_LOP and ELG\_VLO, are names assigned to targeting bits by \texttt{desitarget}, the target selection pipeline \citep{myers22a}, and indicate their ELG priority state in the fiber assignment (`low' and `very-low').
\subsection{ELG\_HIP subsample} \label{sec:mainhip}
For the dark tiles, the tracers in order of decreasing fiber assignment priorities are: QSO, LRG, ELG\_LOP, and ELG\_VLO.
This results in very-high fiber assignment rates for the QSO and LRG targets, but lower ones for the ELG\_LOP targets, and even lower ones for the ELG\_VLO targets.
In order to still have a significant number of observed pairs of ELG and LRG targets, a third ELG sample is defined, ELG\_HIP, which is a ten percent random subsampling of the ELG\_LOP and ELG\_VLO samples, with the same fiber assignment priority as that of the LRG targets.
This provides more information about the small-scale cross-correlation between the ELG targets and higher priority targets.
Without this extra sample, the small-scale effects of fiber collisions, together with the preference for always observing the higher priority objects, would significantly increase the noise for cosmological analyses cross-correlating ELG targets with LRG targets \citep[see e.g.,][for the weights computation method]{bianchi18a, mohammad20a}.
Similarly to ELG\_LOP and ELG\_VLO, the ELG\_HIP name is assigned to targeting bits by \texttt{desitarget}, and indicates the ELG priority state in the fiber assignment (`high').
As this ELG\_HIP sample is a random subsample of the ELG\_LOP and ELG\_VLO samples, we do not discuss it further in this paper.
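The ten percent promotion can be sketched as a seeded random subsampling (a minimal illustration; the actual \texttt{desitarget} implementation may differ):

```python
import random

random.seed(42)  # fixed seed so the subsample is reproducible

def draw_hip_mask(n_targets, fraction=0.10):
    """Boolean mask promoting a random `fraction` of ELG targets to ELG_HIP."""
    return [random.random() < fraction for _ in range(n_targets)]

mask = draw_hip_mask(100_000)
print(sum(mask) / len(mask))  # close to 0.10
```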
\subsection{Main Survey selection cuts} \label{sec:maincuts}
The Main Survey ELG selection cuts are detailed in Table \ref{tab:maincuts} and illustrated in Figure \ref{fig:maingrz}.
Those are of three kinds:
(1) quality cuts to ensure that the photometry is reliable;
(2) a cut in the $g$-band fiber magnitude;
(3) a selection box in the $g-r$ vs. $r-z$ diagram.
We underline that the cuts are the same in the North and in the South-DECaLS/DES footprints, even though the photometric systems are slightly different.
This choice was motivated by several reasons:
the exact color transformation between the two systems is not trivial, as it depends on the considered object (e.g., star, blue or red galaxies);
the North has different imaging systematics than the South, in particular in the $g$-band depth;
given the partial sampling of the Survey Validation program, it was not possible to securely tune selection cuts that would provide a similar ELG redshift distribution in the three footprints, as the redshift distribution depends non-trivially on imaging and foreground variations.
For the sake of simplicity, we thus keep the same cuts in the three footprints.
\subsubsection{Quality cuts} \label{sec:qualcuts}
Quality cuts are designed to select sources with reliable photometric measurements.
For computational reasons, the \texttt{legacypipe} pipeline processes the sky in $0.25^\circ \times 0.25^\circ$ bricks, which slightly overlap.
The \texttt{brick\_primary} cut requires the object to lie in the unique (non-overlapping) area of its brick.
We further require that there is at least one observation in each of the three $grz$-bands, and that the measured flux has a positive SNR in all three bands (i.e., a positive flux and a non-null inverse variance).
We also apply a minimal angular masking, to reject regions around very bright stars ($\textit{Gaia} \; \rm G < 13$), large galaxies \citep{moustakas22a}, and globular clusters.
We emphasize that this masking is purposely minimal, and common to all three DESI dark tracers.
Further \textit{a posteriori} masking will be applied to the spectroscopic data during its analysis; for instance, we already know from target density variations alone that the ELG target selection includes spurious targets around moderately bright stars ($13 < \textit{Gaia} \; \rm G < 16$).
\subsubsection{$g$-band magnitude cut}
We then make a selection on the $g$-band magnitude, which is motivated by the fact that the \oii flux correlates best with the bluest band flux \citep{comparat16a}; this ensures that the selection favors \oii emitters.
We discard bright objects, which are unlikely to be at $z > 0.6$; as those represent a marginal fraction of the ELG sample, it does not matter if we cut on the fiber or total magnitude.
The faint cut in the $g$-band fiber magnitude is tuned to reach the desired densities of about 1940 deg$^{-2}$ for ELG\_LOP and 460 deg$^{-2}$ for ELG\_VLO.
A cut on the fiber magnitude was favored over a cut on the total magnitude, because the latter yields more $\zspec$ failures from galaxies with too little flux in the DESI fiber.
\subsubsection{$g-r$ vs. $r-z$ selection}
The ELG\_LOP and ELG\_VLO samples rely on simple $g-r$ vs. $r-z$ cuts.
The primary motivation of the cuts is the redshift selection, as illustrated in Figure~\ref{fig:maingrz}.
That figure displays the density of $g_{\rm fib}<24.1$ objects in the LS-DR9 catalogs, where the color-coding indicates the mean photometric redshift ($\zphot$) from the HSC/DR2 data release \citep{aihara19a,tanaka18a}.
Those $\zphot$ measurements are of exquisite quality for our magnitude and redshift range of interest, thanks to the depth and wavelength coverage of the HSC data; \citet{karim20a} already illustrated this point with previous HSC data and we further illustrate in Appendix~\ref{app:hsczphot} how they compare with the DESI ELG $\zspec$.
The slanted cut with a positive slope ($g - r < 0.5 \times (r - z) + 0.1$) common to the ELG\_LOP and ELG\_VLO selections rejects stars and galaxies at $z < 0.6$.
The ELG\_LOP $r-z>0.15$ cut rejects $z > 1.6$ galaxies, for which the \oii doublet is outside the DESI spectrograph coverage so that no reliable $\zspec$ can be expected.
The ELG\_LOP slanted cut with a negative slope ($g - r < -1.2 \times (r - z) + 1.3$) optimizes the fraction of $1.1 < z < 1.6$ targets with high \oii flux, as this is the goal for this sample.
To first order, the redshift drives this cut, as shown by the HSC $\zphot$ measurements.
To second order, favoring the \oii emitters pushes this cut to the blue, as illustrated by the stellar evolution tracks in Figure~\ref{fig:maingrz}.
Those tracks show two simple \citet{bruzual03a} evolution models of galaxies computed with \texttt{EzGal} \citep{mancone12a}.
The two galaxy models are formed at $z = 3$ with simple exponentially declining star formation histories (i.e., with a star formation rate proportional to $e^{-\rm{age} / \tau}$).
One is moderately star-forming ($\tau = 1$ Gyr, red dashed line), the other one is more star-forming ($\tau = 5$ Gyr, green solid line); the symbols illustrate where such galaxies would be in the $g-r$ vs. $r-z$ diagram for different observation redshifts: as expected, at a fixed redshift, galaxies with bluer colors are more star-forming.
For instance one sees at $z = 1.1$ (circle) that the most star-forming model is about 0.5 magnitude bluer in $r-z$ than the other one.
The ELG\_VLO slanted cuts with a negative slope ($(g - r > -1.2 \times (r - z) + 1.3)$ and $(g - r < -1.2 \times (r - z) + 1.6)$) are an extension of the ELG\_LOP selection towards redder colors, hence lower redshifts.
The reddest cut is designed to remove $z < 0.6$ galaxies.
We recall that the ELG\_VLO sample is disjoint from the ELG\_LOP sample.
\subsubsection{Target density}
The cuts described above provide an ELG\_LOP sample of about 1940 deg$^{-2}$ and an ELG\_VLO sample of about 460 deg$^{-2}$.
Because of the different imaging properties -- in particular depths -- of the three North, South-DECaLS, South-DES footprints, the actual average density over each footprint is slightly different, as reported in the last two columns of Table~\ref{tab:imagdens}: from 1900 deg$^{-2}$ to 1950 deg$^{-2}$ for the ELG\_LOP sample and from 410 deg$^{-2}$ to 490 deg$^{-2}$ for the ELG\_VLO sample.\\
\begin{table*}
\centering
\begin{tabular}{lccl}
\hline
\hline
Sample & Density & Cuts & Comment\\
\hline
\multirow{4}{*}{Clean} & \multirow{4}{*}{-} & \texttt{brick\_primary} = True & Unique object\\
& & $\texttt{nobs\_\{grz\}} > 0$ & Observed in the $grz$-bands\\
& & $\texttt{flux\_\{grz\}} \times \sqrt{\texttt{flux\_ivar\_\{grz\}}} > 0$ & Positive SNR in the $grz$-bands\\
& & $(\texttt{maskbits} \; \& \; 2^1) = 0$, $(\texttt{maskbits} \; \& \; 2^{12}) = 0$, $(\texttt{maskbits} \; \& \; 2^{13}) = 0$ & Not close to bright star/galaxy\\
\hline
\multirow{5}{*}{ELG\_LOP} & \multirow{5}{*}{$\sim$1940 deg$^{-2}$} & Clean & Clean sample\\
& & $(g > 20)$ and $(g_{\rm fib} < 24.1)$ & Magnitude cut\\
& & $0.15 < r - z$ & $r - z$ cut\\
& & $g - r < 0.5 \times (r - z) + 0.1$ &Star/low-z cut\\
& & $g - r < -1.2 \times (r - z) + 1.3$ & Redshift/\oii cut\\
\hline
\multirow{5}{*}{ELG\_VLO} & \multirow{5}{*}{$\sim$460 deg$^{-2}$} & Clean & Clean sample\\
& & $(g > 20)$ and $(g_{\rm fib} < 24.1)$ & Magnitude cut\\
& & $0.15 < r - z$ & $r - z$ cut\\
& & $g - r < 0.5 \times (r - z) + 0.1$ &Star/low-z cut\\
& & $(g - r > -1.2 \times (r - z) + 1.3)$ and $(g - r < -1.2 \times (r - z) + 1.6)$ & Redshift/\oii cut\\
\hline
\end{tabular}
\caption{
Main Survey target selection cuts.
The cuts are the same for the North and South-DECaLS/DES regions.
We use the following definitions:
$\{grz\} = 22.5 - 2.5 \cdot \rm{log}_{10}(\texttt{flux\_\{grz\}} / \texttt{mw\_transmission\_\{grz\}})$, $g_{\rm fib} = 22.5 - 2.5 \cdot \rm{log}_{10}(\texttt{fiberflux\_g} / \texttt{mw\_transmission\_g})$.
The \texttt{brick\_primary}, \texttt{nobs\_\{grz\}}, \texttt{flux\_\{grz\}}, \texttt{fiberflux\_g}, \texttt{flux\_ivar\_\{grz\}}, \texttt{mw\_transmission\_\{grz\}}, \texttt{maskbits} columns are described here: \niceurl{https://www.legacysurvey.org/dr9/catalogs/}.
}
\label{tab:maincuts}
\end{table*}
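The cuts of Table~\ref{tab:maincuts} can be condensed into a short function; the sketch below is an illustrative re-implementation (the quality cuts on catalog columns are assumed to have been applied already), not the \texttt{desitarget} code itself:

```python
import math

def flux_to_mag(flux, mw_transmission):
    """Extinction-corrected AB magnitude from an LS-DR9 flux (nanomaggies),
    following the definitions in the table caption."""
    return 22.5 - 2.5 * math.log10(flux / mw_transmission)

def classify_elg(g, r, z, g_fib):
    """Return 'ELG_LOP', 'ELG_VLO', or None for extinction-corrected
    magnitudes g, r, z and g-band fiber magnitude g_fib."""
    if not (g > 20.0 and g_fib < 24.1):
        return None                      # magnitude cut
    if not (r - z > 0.15):
        return None                      # r-z cut (rejects z > 1.6)
    if not (g - r < 0.5 * (r - z) + 0.1):
        return None                      # star / low-z cut
    if g - r < -1.2 * (r - z) + 1.3:
        return "ELG_LOP"                 # favors 1.1 < z < 1.6
    if g - r < -1.2 * (r - z) + 1.6:
        return "ELG_VLO"                 # favors 0.6 < z < 1.1
    return None

print(classify_elg(g=23.5, r=23.4, z=23.0, g_fib=23.8))  # ELG_LOP
print(classify_elg(g=23.0, r=23.0, z=21.8, g_fib=23.5))  # ELG_VLO
```

The ordering of the last two tests makes the two samples disjoint by construction, as stated in Section~\ref{sec:mainsubs}.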
\section{SV1 Target selection} \label{sec:svts}
In this section, we describe the DESI SV1 target selection, which expands the Main Survey selections described in Section~\ref{sec:maints}.
The SV1 selection cuts are detailed in Appendix~\ref{app:svcuts} and Table~\ref{tab:svcuts}, and illustrated in Figure~\ref{fig:svgrz}.
\subsection{Motivations} \label{sec:svmotivations}
The SV1 ELG sample was designed to provide the information needed to finalize the Main Survey ELG selections.
The only existing magnitude-limited, spectroscopic reference samples probing the desired DESI ELG magnitudes are limited to a few square degrees (e.g., DEEP2: \citealt{newman13a}, VVDS: \citealt{le-fevre13a}).
Besides, DESI being a new instrument, its ability to measure reliable $\zspec$ for targets as faint as the ELG ones needed to be thoroughly tested.
For those two reasons, the DESI SV1 ELG sample explores a rather large photometric space, with a target density of about 7000 deg$^{-2}$.
\subsection{SV1 selection cuts} \label{sec:svcuts}
We hereafter detail the philosophy of the DESI SV1 ELG selection cuts reported in Table~\ref{tab:svcuts} and illustrated in Figure~\ref{fig:svgrz}.
\subsubsection{$g-r$ vs. $r-z$ extensions}
The first explored extensions relax the Main cuts in the $g-r$ vs. $r-z$ diagram, as illustrated in the top panel of Figure~\ref{fig:svgrz}.
The cuts are generously extended towards bluer $r-z$ colors, with a $g-r < 0.2$ cut to securely remove low-redshift galaxies and stars.
According to HSC $\zphot$ measurements, that region should include a significant fraction of redshifts in the range $1.1 < z < 1.6$ and has been extremely poorly explored so far.
While this region is very valuable for DESI, with $1.1 < z < 1.6$ ELG targets, it is also costly because any $z > 1.6$ target would not provide a reliable $\zspec$, as the \oii doublet is outside of the DESI spectrograph coverage.
The cuts are also slightly extended towards the low-redshift galaxies and stellar locus (positive slope cut).
Existing spectroscopic data and HSC $\zphot$ consistently show that there is a sharp transition, with a density of $z<0.6$ objects quickly rising when going to redder $g-r$ colors.
As DESI is expected to provide reliable $\zspec$ for most of those, there is only a marginal need to explore that region.
Lastly, the cuts are extended towards redder $r-z$ colors, to cover the eBOSS/ELG selection region.
From HSC $\zphot$ and eBOSS/ELG $\zspec$ measurements, we know that this region mostly hosts $z<1.1$ galaxies.
This extension is motivated by the early desire to have an overlap with the eBOSS/ELG sample, and to secure a fallback Main selection in the unlikely event that the DESI instrument performed far worse than expected.
\subsubsection{Faint extensions (sliding cut)}
An important extension explores the faint end of the target selection to test the ability of the DESI instrument to provide a reliable $\zspec$ there.
Because the target density significantly increases when going fainter, this extension is restricted to blue objects (the most interesting ones for ELGs) to prevent the SV sample from being overwhelmed by faint objects.
To do so, we adopted a sliding cut in the $g-r$ vs. $r-z$ diagram, as illustrated in the bottom panel of the Figure~\ref{fig:svgrz}.
The sliding cut uses the $\coii$ color, which broadly scales as the \oii flux.
On the red $r-z$ side, this cut restricts the sample to bright objects, as faint objects there are expected to have a marginal \oii flux, and thus are unlikely to provide a reliable $\zspec$.
On the blue $r-z$ side, this cut explores targets fainter by a few tenths of a magnitude, which are expected to have a significant \oii flux, and hence should provide a reliable $\zspec$.
\subsubsection{$g_{\rm tot}$ and $g_{\rm fib}$ extensions}
Finally, all the above cuts are applied on samples restricted in $g_{\rm tot}$, the total $g$-band magnitude or in $g_{\rm fib}$, the fiber $g$-band magnitude.
While a $g_{\rm tot}$-limited sample corresponds to a better defined galaxy population, it could contain a significant fraction of targets with too small a flux in the DESI fibers to provide a reliable $\zspec$.
A $g_{\rm fib}$-limited sample has the advantage of being more homogeneous and complete in terms of reliable $\zspec$.\\
\section{Photometric properties of the Main sample} \label{sec:mainphot}
This Section presents a preliminary discussion about the Main Survey ELG sample density fluctuations across the footprint, which are driven by variations of both the LS-DR9 imaging properties and of the astrophysical foreground maps (e.g., Galactic stellar density and dust extinction).
Because the ELG target magnitudes are close to the imaging depth, this sample is more sensitive to those imaging/foreground variations than other DESI dark tracers, and will likely require significant dedicated work to remove such dependencies before performing a cosmological analysis.
Besides, the final LSS ELG sample will be restricted to objects with a reliable redshift in the $0.6 < z < 1.6$ range.
Both ELG target density fluctuations and redshift efficiency variations with spectroscopic observing conditions will have to be corrected to produce reliable cosmological results.
This is why we hereafter restrict to simple diagnoses, in order to illustrate the overall properties of the Main Survey ELG sample.
\subsection{Magnitude distributions}
Figure~\ref{fig:mag} displays the ELG\_LOP (solid lines) and ELG\_VLO (dashed lines) normalized cumulative distributions of target magnitudes.
For the $g$-band, we both present the fiber magnitude, used for the selection, and the total magnitude.
For the $r$- and $z$-bands, we only present the total magnitude, as those come into play through colors only.
Both the ELG\_LOP and ELG\_VLO samples are selected with a $g_{\rm fib}<24.1$ cut, hence the two present very similar $g$-band magnitude distributions.
However, the different location of the selection boxes in the $g-r$ vs. $r-z$ diagram implies different magnitude distributions in the $r$- and $z$-bands, with the ELG\_LOP targets being 0.2 mag fainter than the ELG\_VLO targets in the $r$-band, and 0.5 mag fainter in the $z$-band.
We thus expect the ELG\_LOP selection to have a stronger dependency with the imaging $z$-band depth.
\subsection{Density maps}
Figure~\ref{fig:sky} displays the density fluctuations of ELG\_LOP (top panel) and ELG\_VLO (bottom panel) targets across the whole LS-DR9 footprint.
A 14 000 deg$^2$ footprint covered by DESI is indicated with thick black contours.
Several features are visible, in particular for the ELG\_LOP sample; we comment on the most noticeable ones.
The blue regions at $\rm(R.A., Dec.) \sim (130^\circ, 60^\circ)$ or $\rm(R.A., Dec.) \sim (30^\circ, 20^\circ)$ are under-densities due to regions of high Galactic dust extinction, with many small-scale structures, as illustrated in Figure~\ref{fig:ebvcutout}.
A possible interpretation is that, even though the imaging is deeper in dusty regions of the footprint, the extinction is only partially corrected there (see Section~\ref{sec:imaging}); in particular, strong small-scale variations of the dust extinction cannot be handled at the imaging level.
However, we note that some high-extinction regions can show an excess of ELG\_LOP targets, as for instance at $\rm(R.A., Dec.) \sim (345^\circ, 20^\circ)$.
Proper explanations of those effects likely require a detailed analysis of the interplay of the target selection with the dust extinction, the imaging depth, and the behavior of the \texttt{Tractor} source detection and fitting in those regions.
Approaches like \texttt{Obiwan} \citep{kong20a}, which injects fake sources in the imaging itself and then runs \texttt{Tractor}, may bring key information on such issues.
The ELG\_LOP sample seems to have an overdensity along the Sagittarius Stream, displayed as a dashed line in Figure~\ref{fig:sky}.
The Sagittarius Stream has a stellar population bluer than the Galactic population and could in principle add contaminants to the ELG\_LOP selection.
As of now it is not clear whether the ELG\_LOP overdensity is due to Sagittarius Stream stars, or merely coincident: a detailed analysis of the spectroscopic observations will be required to clarify this issue.
The overdensities in the North at $\rm{(R.A., Dec.)} \sim (180^\circ, 40^\circ)$ or $(210^\circ, 40^\circ)$ correspond to regions of shallower extinction-corrected $g$-band imaging (see bottom plot of Figure~\ref{fig:ebvdepth}).
Lastly, we comment three other features noticeable on those maps, even though they are well outside the DESI footprint, and hence not relevant for the DESI observations.
For both selections, the density becomes slightly smaller below the $\rm{Dec.} = -30^\circ$ latitude in the DES region.
This is due to a known shift of approximately 0.02 mag in the $z$-band, where the calibration method transitions from Pan-STARRS1 to Ubercal \citep{padmanabhan08a,schlegel22a}.
The large ELG\_LOP overdensity at $\rm{(R.A., Dec.)} \sim (80^\circ, -60^\circ)$, at the very southern edge of the DES region, is contamination from the Large Magellanic Cloud, which adds a high density of blue stars in that region.
And the ELG\_LOP overdensity at $\rm{(R.A., Dec.)} \sim (150^\circ, -20^\circ)$ is due to the much shallower imaging there (see Figure~\ref{fig:ebvdepth}).
\subsection{Photometric systematics} \label{sec:photsyst}
In this Section, we present how the ELG target selection depends on the imaging and foreground properties.
As already stated, ELG targets have magnitudes close to the imaging depth, which makes the sample sensitive to fluctuations in the imaging and foreground maps.
We consider here the simplest set of maps.
For the foreground, those encompass the \textit{Gaia} stellar density and the Galactic dust extinction (the E(B-V) parameter); in addition, we also consider the projected distance to the Sagittarius Stream, as the previous Section showed it could be a relevant quantity.
For the imaging, in each of the three $grz$-bands, we consider the seeing (`PSF size') and the `galaxy depth' corrected for dust extinction.
We use dust-extinction corrected depths as the target selection relies on dereddened magnitudes.
Figures~\ref{fig:elglopsyst} and~\ref{fig:elgvlosyst} show how the ELG\_LOP and ELG\_VLO target selection densities vary with those foreground and imaging maps, for each of the North, South-DECaLS, and South-DES footprints.
We note that we discard here the South-DES $\rm{Dec.} < -30^\circ$ region, because of the small photometric calibration issue mentioned in the previous section.
Those figures are based on 0.05 deg$^2$ \texttt{Healpix} pixels ($\texttt{nside} = 256$).
For each footprint, the density variations are normalized to the average density over the footprint.
The different properties of each footprint are clearly visible in this plot, and illustrate the need to analyze those separately.
In general, both selections display similar trends, the ELG\_LOP sample showing stronger trends, as expected from the fact that it contains fainter objects.
The most significant dependency is that with the $g$-band depth, which shows two behaviors.
For the North and the South-DECaLS footprints, the density decreases with increasing depth, whereas for the South-DES footprint, it increases with increasing depth.
A possible explanation could be the following: for shallow imaging, the trend would be driven by contamination from stars and $z < 0.6$ galaxies due to scattering in the $g-r$ color, which makes them move inside the selection box.
The scattering also affects $z > 0.6$ galaxies, making them go outside the selection box.
However the density of such $z>0.6$ galaxies is much smaller than that of the $z<0.6$ galaxies (see Figure~\ref{fig:maingrz}).
The net effect would be an increase in the number of selected targets.
The strength of this effect decreases as the depth increases (as the color scattering decreases), and the impact on density eventually vanishes.
Another effect could come into play with deeper imaging, namely the increase in the number of detected sources in the imaging: that second effect could explain the trend seen in the South-DES region.
There is a decrease of the target density with the Galactic extinction for $\rm{E(B-V)} > 0.05$ mag, the trend being stronger in the North than in the South-DECaLS and South-DES footprints.
This could be explained by two facts: the imaging depth does not fully correct for the Galactic extinction (Section~\ref{sec:imaging}) and the regions with high extinction are embedded in small-scale structures that cannot be correctly accounted for in the imaging strategy (Figure~\ref{fig:ebvcutout}).
Interestingly, both selections lead to increased density close to the Sagittarius Stream, which could be explained by contamination from stars from the stream.
\subsection{Sensitivity to photometric zero-point uncertainties}
We estimate the sensitivity of the Main Survey ELG target selection to $\sigma_{\rm zp}$, the imaging photometric zero-point uncertainties, using the same approach as in \citet{myers15a} and \citet{raichoor17a}.
Results are reported in Table~\ref{tab:zp}.
In each of the $g$-, $r$-, and $z$-bands, one at a time, we add $\pm0.01$ mag to the photometry and re-run the target selection algorithm to estimate $\delta N_{0.01} = \frac{|\Delta N|}{N}$, the fractional change in the target density due to this magnitude shift.
We find consistent $\delta N_{0.01}$ values across the footprints.
The ELG\_LOP selection has $\delta N_{0.01} \sim 0.05, 0.03, 0.01$ in the $g$-, $r$-, and $z$-band, respectively.
The ELG\_VLO selection has $\delta N_{0.01} \sim 0.04, 0.05, 0.04$ in the $g$-, $r$-, and $z$-band, respectively.
We notice that the selections have different sensitivities in the $z$-band, ELG\_VLO being more sensitive.
The expected rms variation in the number density due to shifts of the imaging zero-point is then estimated to be $\frac{\delta N_{0.01}}{0.01} \times \sigma_{\rm zp}$.
The LS-DR9 has $\sigma_{\rm zp}$ values of 0.003 mag in the $g$- and $r$-bands, and of 0.006 mag in the $z$-band \citep{schlegel22a}.
If we assume Gaussian errors for the zero-points, 95 percent of the footprint lies within $\pm 2\sigma_{\rm zp}$ of the expected zero-point in any photometric band, meaning that 95 percent of the footprint has a variation in target density lower than $4 \times \sigma_{\rm zp} \times \frac{\delta N_{0.01}}{0.01}$.
The resulting fluctuations for each photometric band are given in Table \ref{tab:zp}.
Both selections have density fluctuations of 1-6 percent in all cases, except for the ELG\_VLO sample in the $z$-band, where the density fluctuations are about 8-9 percent.
That level of fluctuation is reasonable, and should be addressable with the weighting scheme in the LSS analysis.\\
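The rms-fluctuation estimate above can be sketched as follows (a minimal illustration; the function name is ours, and the example values are taken from Table~\ref{tab:zp}):

```python
def density_fluctuation(delta_n_001, sigma_zp):
    """Expected density fluctuation over 95 percent of the footprint
    (fractional): 4 * sigma_zp * (delta_N_0.01 / 0.01)."""
    return 4.0 * sigma_zp * delta_n_001 / 0.01

# ELG_LOP, g-band, North: delta_N_0.01 = 0.052, sigma_zp = 0.003 mag -> ~6.2 percent
# ELG_VLO, z-band, North: delta_N_0.01 = 0.035, sigma_zp = 0.006 mag -> ~8.4 percent
```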
\begin{table*}
\centering
\begin{tabular}{lcc|cc|cc}
\hline
\hline
Band & Footprint & $\sigma_{\rm{zp}}$ & \multicolumn{2}{c|}{ELG\_LOP} & \multicolumn{2}{c}{ELG\_VLO}\\
& & & $\delta N_{0.01}$ & Fluctuations over & $\delta N_{0.01}$ & Fluctuations over\\
& & & & 95 percent of the area & & 95 percent of the area\\
& & [mag] & & [percent] & & [percent]\\
\hline
\multirow{3}{*}{$g$} & North & \multirow{3}{*}{0.003} & 0.052 & 6.2 & 0.038 & 4.6\\
& South-DECaLS & & 0.055 & 6.5 & 0.038 & 4.6\\
& South-DES & & 0.052 & 6.2 & 0.040 & 4.8\\
\hline
\multirow{3}{*}{$r$} & North & \multirow{3}{*}{0.003} & 0.030 & 3.6 & 0.046 & 5.5\\
& South-DECaLS & & 0.036 & 4.3 & 0.045 & 5.4\\
& South-DES & & 0.032 & 3.8 & 0.051 & 6.1\\
\hline
\multirow{3}{*}{$z$} & North & \multirow{3}{*}{0.006} & 0.004 & 1.0 & 0.035 & 8.4\\
& South-DECaLS & & 0.005 & 1.2 & 0.032 & 7.7\\
& South-DES & & 0.004 & 0.9 & 0.037 & 9.0\\
\hline
\end{tabular}
\caption{
Sensitivity of the Main ELG target selection to the imaging photometric zero-point uncertainties.
Column 3 is the imaging photometric zero-point uncertainty ($\sigma_{\rm zp}$).
Columns 4 and 6 are the fractional changes in target density due to a $\pm$0.01 mag shift in the zero-point ($\delta N_{0.01}$).
Columns 5 and 7 are the expected fluctuations in the number of selected targets over 95 percent of the footprint.
}
\label{tab:zp}
\end{table*}
\section{Spectroscopic data} \label{sec:specdata}
We now present preliminary results from the DESI spectroscopic observations of this ELG sample.
Those observations include three phases of the DESI experiment: the SV1, the One-Percent, and the Main surveys.
This Section introduces those observations, along with the reduction of the data, which will be used in Section~\ref{sec:mainspec} to perform the analysis.
The SV1 and One-Percent data presented below will be part of the Survey Validation data released in the DESI Early Data Release \citep{desi-collaboration23a}.
\subsection{The DESI instrument}
The DESI instrument, described in detail in \citet{desi-collaboration16b} and \citet{desi-collaboration22a}, is a multi-object spectroscopic instrument mounted at the prime focus of the 4m Mayall Telescope at Kitt Peak, Arizona.
The focal plane covers a field of view of about 8 deg$^2$ \citep{miller22a} and is equipped with 5,000 fiber positioners \citep{silber22a} distributed in ten `petals'.
For each of the 500 fibers of a given petal, the light is dispersed by one of the ten three-arm spectrographs (`B': 360 nm to 600 nm; `R': 560 nm to 780 nm; `Z': 740 nm to 990 nm).
The resolving power ($R = \lambda / \Delta\lambda$) increases with the wavelength, from $\sim$2000 at the shortest wavelengths to $\sim$5500 at the longest ones.
The wavelength coverage and the resolving power were designed to ensure that the instrument could measure and resolve the ELG \oii $\lambda \lambda$ 3726,29 \AA~doublet in the $0.6 < z < 1.6$ range.
\subsection{Observations} \label{sec:specobs}
We briefly summarize here the DESI ELG spectroscopic observations used hereafter; the interested reader can find details in \citet{desi-collaboration22b}.
DESI observations are conducted by `tile', i.e., by groups of 5,000 fibered targets observed at once.
Each tile is observed so as to reach a required SNR value on average on all spectra.
This is done through the computation of an effective exposure time, $\efftime$, which accounts for observing conditions and per-fiber properties \citep{guy22a}.
The sky map of the DESI ELG tiles used in this paper is displayed on Figure~\ref{fig:specobs}, which shows that each program has a specific tiling coverage of the footprint.
The above data contain 37 tiles with ELG targets from the SV1 (December 2020 to March 2021), which explored extended samples to finalize the target selection (see Section~\ref{sec:svts}).
Those typically have $\efftime \sim 4,000$s, i.e. four times the nominal Main survey \efftime, so that the observations provide secure data to study the faint end of the explored samples.
Three of those tiles have much higher $\efftime$ (7,000 -- 15,000s) and were used to build a truth table of about ten thousand ELG spectra with Visual Inspection (VI) \citep{lan22a}.
The One-Percent Survey (April 2021 to May 2021) observed 239 dark tiles distributed over 20 regions (`rosettes') of the NGC with an \efftime~of about 1300s, i.e. 30 percent larger than the nominal Main survey \efftime.
A specificity of the One-Percent Survey observations is that most targets which did not have a conclusive $\zspec$ after a first observation were re-observed with another tile, to increase the SNR.
This significantly complicates the analysis in Section~\ref{sec:mainspec}, as repeat observations of the faintest targets to secure a reliable $\zspec$ measurement are not representative of the Main Survey.
In what follows, repeat observations -- i.e., observations of the same target from different tiles -- are thus removed from the One-Percent survey analysis.
We use the Main Survey observations processed in \citet{guy22a}, which have been taken from May 2021 to July 2021, and include 305 dark tiles with a narrow distribution of \efftime~(1100s $\pm$ 190s).
This dataset, displayed in green in Figure~\ref{fig:specobs}, only covers part of the North and South-DECaLS footprints.
Lastly, for the redshift distribution (Figure~\ref{fig:nz}) and the comparison with the HSC $\zphot$ (Figure~\ref{fig:desi_hsc}), we complete this Main Survey sample with 973 Main dark tiles observed from September 2021 to December 2021 (in orange in Figure~\ref{fig:specobs}), which provide a significant coverage of the SGC, so that we have a representative sampling of the three footprints (North, South-DECaLS, and South-DES), in particular in terms of imaging depth, Galactic extinction, and stellar density.
The pipeline reduction for that sample is not rigorously the same as that described in Section~\ref{sec:fujalupe} -- it is a slightly less advanced, but very close, version.
\subsection{Data reduction} \label{sec:fujalupe}
The spectroscopic data reduction and the redshift fitting with the \texttt{Redrock} software\footnote{\niceurl{https://github.com/desihub/redrock}} are fully described in \citet{guy22a} and \citet{bailey22a}, respectively.
We discard any observed spectrum with flagged issues in the data ($\texttt{COADD\_FIBERSTATUS}~!= 0$).
A key output quantifying the reliability of the best-fit $\zspec$ is the $\chi^2$ difference between the best-fit template and the second best one (\texttt{DELTACHI2}).
A large \texttt{DELTACHI2} generally implies a reliable $\zspec$ measurement.
We also use the SNR of the measured \oii flux (\texttt{FOII\_SNR}) computed as follows.
The continuum is estimated in the vicinity of the doublet, from the wavelengths 200 \AA~(in rest-frame) blue-wards of the \oii doublet.
Then the \oii doublet is simply fitted with two Gaussians at the expected positions corresponding to the measured $\zspec$.
The \oii flux, the line ratio, and the line width are left free in the fit.
Figure~\ref{fig:desi_sdss} compares the DESI spectrum of an ELG target to the one observed with the eBOSS survey.
A typical eBOSS observation lasted one hour, against fifteen minutes for a typical DESI observation.
This is a representative $\zspec \sim 0.85$ ELG spectrum, with mostly undetected continuum and some significant emission lines.
The zoom panels on the bottom row show the improvement brought by DESI: its higher resolution allows one to nicely resolve the \oii doublet, which provides an unambiguous feature to estimate the redshift; it also provides sharper emission lines, with higher SNR.
\subsection{Weighting ELG and QSO targets}
Except for 12 ELG-dedicated SV1 tiles, the ELG targets always have a fiber assignment priority lower than the QSO and LRG targets.
The intersection between ELG and LRG target samples is virtually empty.
However, the ELG and QSO target samples intersection is significant: for instance, in the Main Survey, about one third of the QSO targets are also ELG targets, mostly ELG\_LOP ones; those $\sim$100 targets per deg$^2$ represent about 5 percent of the ELG\_LOP targets.
Nevertheless, this `ELGxQSO' subsample is not representative of the overall ELG sample, since it consists of bright objects with different colors.
As the QSO targets have top priority for the fiber assignment, the first passes will mostly be filled with QSO and LRG targets, and the ELG targets will be assigned during the later passes.
For the One-Percent Survey, as the program is finished, all ELG\_LOP targets were assigned.
But for the non-ELG-dedicated SV1 tiles or the -- still ongoing -- Main Survey, the `ELGxQSO' targets are strongly over-represented in the observed spectra.
For instance, in a typical first-pass tile of the Main Survey, those `ELGxQSO' targets can represent up to 30 percent of the observed ELG\_LOP targets.
In the analysis in Section~\ref{sec:mainspec} -- except for the repeat analysis -- we thus correct for that effect, and appropriately down-weight the `ELGxQSO' observed targets.
We split the sky into \texttt{Healpix} pixels large enough\footnote{\texttt{nside} = 16, i.e. a pixel area of 13.4 deg$^2$.} to reasonably track the selections' density fluctuations.
For each of the SV1, One-Percent, and Main Surveys, and each of the Main ELG\_LOP and ELG\_VLO selections, we compute $f_{\rm targ}$, the fraction of QSO targets in the considered selection, for each pixel.
If we denote by $n_{\rm QSO}$ and $n_{\rm notQSO}$ the numbers of spectroscopically observed QSO and non-QSO targets for the considered selection in each pixel, we define the per-pixel weight to be applied to the $n_{\rm QSO}$ targets as follows:
$w_{\rm QSO} = f_{\rm targ} \times n_{\rm notQSO} / (n_{\rm QSO} - f_{\rm targ} \times n_{\rm QSO})$.
This ensures that $w_{\rm QSO} \times n_{\rm QSO} / (n_{\rm notQSO} + w_{\rm QSO} \times n_{\rm QSO}) = f_{\rm targ}$, i.e. that the weighted `ELGxQSO' observed targets represent the same fraction of the observed sample as in the parent target sample for each pixel.\\
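As a sanity check, the weight definition can be verified numerically (a sketch with illustrative per-pixel counts; the variable names are ours):

```python
def qso_weight(f_targ, n_qso, n_notqso):
    """Per-pixel weight applied to the observed 'ELGxQSO' targets:
    w_QSO = f_targ * n_notQSO / (n_QSO - f_targ * n_QSO)."""
    return f_targ * n_notqso / (n_qso - f_targ * n_qso)

# Illustrative pixel: QSO targets are 5 percent of the parent selection,
# but 30 percent of the observed spectra.
f_targ, n_qso, n_notqso = 0.05, 300, 700
w = qso_weight(f_targ, n_qso, n_notqso)
# The weighted QSO fraction of the observed sample recovers f_targ.
weighted_frac = w * n_qso / (n_notqso + w * n_qso)
assert abs(weighted_frac - f_targ) < 1e-12
```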
\section{Spectroscopic properties of the Main sample} \label{sec:mainspec}
This Section presents the spectroscopic properties of the ELG targets, based on the analysis of the spectroscopic data presented in Section~\ref{sec:specdata}.
\subsection{Reliable $\zspec$ criterion} \label{sec:zcrit}
We introduce the criterion used hereafter to select a reliable $\zspec$ for the ELG spectra, which is a cut in the \{\texttt{FOII\_SNR}, \texttt{DELTACHI2}\} space.
We emphasize that this is a preliminary, simple criterion, showing at first order what can be achieved.
As for each DESI tracer, the ELG spectra require a dedicated reliable $\zspec$ measurement criterion, which maximizes the fraction of selected redshifts and minimizes the fraction of catastrophic redshifts in the selected sample, typically at the one percent level.
Such requirements are driven by the LSS analysis, which is sensitive to catastrophic redshifts.
One possible criterion is a high \texttt{DELTACHI2} value, as used for other DESI tracers \citep{hahn22a,zhou22a,chaussidon22a}.
The specificity of the ELG spectra is that they have low SNR: a $\zspec$ reliably estimated with the \oii doublet may not have a large \texttt{DELTACHI2} value, as the pixels related to the \oii doublet represent a marginal fraction of the pixels, and a fit with a single emission line at a different redshift could still provide a comparable $\chi^2$.
For that reason, selecting reliable $\zspec$ with a \texttt{DELTACHI2} criterion only would discard a large fraction of good redshifts, noticeably at high redshift, where the \oii doublet is the only feature in the spectrum.
A relevant parameter space to consider in this regard is the \{\texttt{FOII\_SNR}, \texttt{DELTACHI2}\} space.
Figure \ref{fig:zcrit} shows the criterion we adopt in this paper:
\begin{equation}
\log_{10} (\texttt{FOII\_SNR}) > 0.9 - 0.2 \times \log_{10} (\texttt{DELTACHI2})
\label{eq:zcrit}
\end{equation}
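For reference, this criterion translates into a one-line selection function (a sketch; the argument names follow the \texttt{FOII\_SNR} and \texttt{DELTACHI2} quantities defined above):

```python
import math

def reliable_zspec(foii_snr, deltachi2):
    """Reliability cut: log10(FOII_SNR) > 0.9 - 0.2 * log10(DELTACHI2)."""
    if foii_snr <= 0 or deltachi2 <= 0:
        return False
    return math.log10(foii_snr) > 0.9 - 0.2 * math.log10(deltachi2)

# At DELTACHI2 = 9, the cut requires FOII_SNR > 10**0.709 ~ 5.1; a very
# large DELTACHI2 relaxes the required [OII] SNR.
```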
This Figure is computed using approximately 3.5 thousand ELG\_LOP and ELG\_VLO VIed spectra.
For those spectra, the VI provides two pieces of information from the deep reductions: $\zspecvi$ and $\qavi$, its confidence level; both quantities are merged from the diagnoses of several inspectors.
The VI confidence level $\qavi$ ranges from 0 to 4 and, following the definition established in \citet{lan22a}, spectra with $\qavi \geq 2.5$ are considered to provide a robust $\zspecvi$, which we consider as the truth here.
We conservatively consider all spectra with $\qavi < 2.5$ to be a failure in shallower reductions.
As those observations come from the SV1 three deep, VIed tiles, which accumulated many exposures, we are able to generate several tens of coadded reductions with \efftime~ranging from 200s to 1600s.
Then, for each $\zspec$ measurement from those shallower reductions, we compare its value to the $\zspecvi$ from the deep reductions.
We consider that the $\zspec$ measurement is `VI-validated' if it satisfies the two following criteria:
\begin{subequations}
\begin{align}
\qavi \geq 2.5 \label{eq:vizok_a}\\
c \cdot |\zspec - \zspecvi| / (1 + \zspecvi) < 1000 \; \rm{km.s}^{-1}, \label{eq:vizok_b}
\end{align}
\end{subequations}
where $c$ is the speed of light in km.s$^{-1}$.
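The two VI-validation criteria can be sketched as follows (the function and variable names are ours):

```python
C_KMS = 299792.458  # speed of light in km/s

def vi_validated(z_spec, z_vi, qa_vi):
    """VI validation: QA_VI >= 2.5 and c*|z - z_VI|/(1 + z_VI) < 1000 km/s."""
    dv = C_KMS * abs(z_spec - z_vi) / (1.0 + z_vi)
    return qa_vi >= 2.5 and dv < 1000.0
```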
The top panel of Figure~\ref{fig:zcrit} shows that a simple cut in \texttt{DELTACHI2} is not optimal at all:
a sample selected with the fiducial $\texttt{DELTACHI2} > 9$ threshold (used in \texttt{Redrock} to flag a low-reliability $\zspec$ measurement) would be highly contaminated with catastrophic $\zspec$ measurements;
a more conservative threshold, e.g., $\texttt{DELTACHI2} > 25$, would discard a significant number of reliable $\zspec$ measurements.
Our simple criterion of Equation~\ref{eq:zcrit} selects more than 95 percent of the reliable $\zspec$ measurements, while keeping a very low fraction of catastrophic $\zspec$ (about one percent).
The bottom panel of Figure~\ref{fig:zcrit} displays the average redshift for each position in the \{\texttt{FOII\_SNR}, \texttt{DELTACHI2}\} plane, using the $\zspecvi$ measurements with $\qavi \geq 2.5$.
It shows that the high-redshift ELG targets, which are the most valuable ones for DESI, have a low \texttt{DELTACHI2} value, despite their reliable $\zspec$ measurement (likely from the identification of the resolved \oii~doublet).
Lastly, we illustrate in Figure~\ref{fig:foiiz} how $\zspec$ measurements selected by Equation~\ref{eq:zcrit} are distributed in the \{\texttt{FOII\_SNR}, \texttt{Z}\} plane, for the $0.6 < z < 1.6$ range (top) and zooming in on the $1.45 < z < 1.55$ range (bottom).
To have the largest sample size, this Figure displays about 600 thousand Main ELG spectra observed in the Survey Validation and Main Survey from coadded reductions with $800\rm s < \efftime < 1200\rm s$.
A noticeable feature is the drop in the measured \texttt{FOII\_SNR} for some $\zspec$ values, especially at $z > 1.5$.
Those are mostly due to sky line subtraction residuals.
To illustrate this point, sky line wavelengths converted into redshifts are displayed at the top of the plots, assuming the redshift to be that of an \oii doublet appearing at the same observed wavelength as the sky line, i.e., $\zspec = \lambda_{\rm{sky}} / 3728 - 1$.
Future data reduction improvements (sky subtraction, \oii flux measurement) will be valuable for the analysis of the ELG sample, hopefully increasing the fraction of reliable $\zspec$ at $z > 1.5$.
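The sky-line-to-redshift conversion used for those tick marks is a one-liner (a sketch; the 9320~\AA~wavelength below is purely illustrative):

```python
def skyline_to_redshift(lambda_sky):
    """Redshift at which the [OII] doublet would coincide with a sky line
    of observed wavelength lambda_sky (Angstrom): z = lambda_sky/3728 - 1."""
    return lambda_sky / 3728.0 - 1.0

# A hypothetical sky line at 9320 A maps to z = 1.5, inside the
# problematic 1.5 < z < 1.6 range.
```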
A detailed analysis of all the recent Main Survey spectra -- possibly enhanced with additional VI -- will allow us to refine this Equation~\ref{eq:zcrit} criterion.
Likely improvements would be to refine the cut in this space to enlarge the fraction of selected $\zspec$; to refine the selection in the $1.5 < z < 1.6$ range, where the \oii doublet falls in the forest of sky lines; or to refine it at $z < 1$, using other lines, such as the \oiii $\lambda \lambda$ 4960,5008 \AA~doublet.
\subsection{Fraction of selected catastrophic $\zspec$ estimated from VI} \label{sec:catavi}
An important quantity to control is the fraction of $\zspec$ selected with our reliability criterion (Equation~\ref{eq:zcrit}) which has a catastrophic $\zspec$ estimate.
We first assess this fraction using the SV1 VI sample from the three deep ELG tiles.
We identify as a catastrophic $\zspec$ estimate any measurement passing our Equation~\ref{eq:zcrit} criterion, but failing Equation~\ref{eq:vizok_a} or Equation~\ref{eq:vizok_b}.
We restrict here to spectra that would be selected for an LSS analysis, i.e. redshifts in the $0.6 < z < 1.6$ range passing our criterion (about 2.9 thousand ELG\_LOP targets and 0.7 thousand ELG\_VLO targets).
From the multiple reductions, we have at hand about sixty reductions with \efftime~values spanning 200s to 1600s.
Figure~\ref{fig:cata_vi} presents for the ELG\_LOP sample the fraction of catastrophic $\zspec$ as a function of \efftime~values (top) and $\zspec$ (bottom), and demonstrates that our criterion is effective in keeping the catastrophic $\zspec$ fraction at the order of the percent level.
For a typical $\efftime \sim 1000\rm s$ -- the nominal $\efftime$ for the Main Survey -- the catastrophic $\zspec$ fraction for the ELG\_LOP sample is $\sim$0.2 percent; for the ELG\_VLO sample, it is virtually zero.
The fraction is independent of \efftime; this is the desired behavior: a shallow reduction naturally selects fewer spectra, but those are of similar quality to spectra from a deeper reduction.
As a function of $\zspec$, the catastrophic $\zspec$ fraction is very low for $z < 1.2$, but starts to increase for $1.3 < z < 1.4$, and is more significant for $1.5 < z < 1.6$.
The reasons are twofold.
First, this reflects the fact that, as redshift increases, emission lines move redward out of the wavelength coverage, leaving only the \oii doublet.
Then, for $1.5 < z < 1.6$, the \oii~doublet falls in a region with many strong sky-line subtraction residuals (see Figure~\ref{fig:foiiz}), likely preventing the visual inspectors from securely confirming the redshift; our conservative choice to consider all redshifts from a target with $\qavi < 2.5$ as failures (see Equation \ref{eq:vizok_a}) likely explains the high catastrophic $\zspec$ fraction in $1.5 < z < 1.6$ (see next Section).
In any case, reducing the catastrophic $\zspec$ fraction in the $1.5 < z < 1.6$ range will require improving the sky subtraction in the reduction pipeline.\\
\subsection{Fraction of selected catastrophic $\zspec$ and $\zspec$ precision estimated from repeats} \label{sec:catarp}
We use repeat observations of the Main ELG targets to re-assess the catastrophic $\zspec$ fraction with an independent method, and to determine the $\zspec$ measurement precision.
We build a sample of about 19 (5) thousand pairs of independent repeat observations of the Main ELG\_LOP (ELG\_VLO) targets as follows.
We consider the SV1, One-Percent, and Main per-night reductions with $800\rm s < \efftime < 1200\rm s$, each reduction being made from independent observations (either different tile observation or different exposures for a given tile).
We then restrict to the Main ELG targets with a reliable $\zspec$ measurement (Equation~\ref{eq:zcrit}) and with one of the two measurements in $0.6 < z < 1.6$, and identify targets having two or more reductions.
For each pair, we consider the redshift difference $dv = c \cdot (z_0 - z_1) / (1 + z_0)$, where $c$ is the speed of light in km.s$^{-1}$.
First, Figure~\ref{fig:cata_rp_z} independently re-assesses with these repeat observations the catastrophic $\zspec$ fraction as a function of redshift.
We obtain sub-percent fractions for all redshifts.
This is consistent with Figure~\ref{fig:cata_vi}, except in the $1.3 < z < 1.6$ range, where the fraction from the repeats shows much less pronounced peaks.
That is consistent with our statement in the previous Section that those peaks in Figure~\ref{fig:cata_vi} are likely driven by our conservative choice to consider all redshifts from a target with $\qavi < 2.5$ as failures.
The top panel of Figure~\ref{fig:repeat} shows $dv$ as a function of \texttt{FOII\_SNR} for the ELG\_LOP sample: only 0.2 percent of the pairs have a catastrophic measurement (red dots); the ELG\_VLO sample has virtually zero catastrophic measurements.
This fraction is in agreement with the catastrophic rate estimated in Section~\ref{sec:catavi} from a totally independent method.
Besides, this panel emphasizes that the \oii doublet is crucial for the ELG $\zspec$ measurement, as it shows a clear correlation between the redshift precision and the \texttt{FOII\_SNR}.
The $dv$ distribution is reported in the bottom panel of Figure~\ref{fig:repeat}, and can be reasonably modeled by the weighted sum of two Gaussian distributions centered on zero with widths of 5 and 20 km.s$^{-1}$.
Finally, using the same statistical estimator as \citet{lan22a}, we measure for the ELG\_LOP (ELG\_VLO) sample $\rm{MAD}(dv) \times 1.48 / \sqrt{2} \sim 7 \; \rm{km.s}^{-1}$ ($9 \; \rm{km.s}^{-1}$), where MAD is the median absolute deviation, in agreement with \citet{lan22a}.
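The quoted precision statistic is straightforward to reproduce; a minimal sketch (the function name is ours, and this is not the actual implementation of \citet{lan22a}):

```python
import numpy as np

def repeat_precision_kms(dv):
    """Per-measurement redshift precision from repeat pairs:
    1.48 * MAD(dv) / sqrt(2), where MAD is the median absolute
    deviation; 1.48 converts MAD to a Gaussian-equivalent sigma and
    sqrt(2) accounts for each dv combining two independent measurements."""
    dv = np.asarray(dv, dtype=float)
    mad = np.median(np.abs(dv - np.median(dv)))
    return 1.48 * mad / np.sqrt(2.0)
```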
This $\zspec$ precision is the best among DESI tracers \citep{lan22a,alexander22a}, as the $\zspec$ is based on sharp emission lines.
\subsection{Efficiency: fraction of selected $\rm{z_{min}} < \zspec < \rm{z_{max}}$} \label{sec:selfrac}
We present the fraction of ELG spectra which is selected by our reliability criterion from Equation~\ref{eq:zcrit}.
Figure~\ref{fig:selfrac} displays that fraction as a function of \efftime, for the Main ELG\_LOP and ELG\_VLO samples, and for the three surveys (SV1, One-Percent, and Main; top panel).
Combining those three surveys (bottom panel) allows us to probe a wide range of \efftime: SV1 explores the entire range but with low statistics, while One-Percent probes values from 1000s to 1500s, in the high tail of the Main Survey range.
For this Figure, we restrict to reductions with $100\rm s < \efftime < 1600\rm s$.
As demonstrated in Section~\ref{sec:photsyst}, the ELG\_LOP and ELG\_VLO selections depend on the $g$-band imaging depth, the Galactic extinction, and the distance to the Sagittarius Stream.
To allow a comparison of the three surveys, we:
(1) restrict to regions with $g$-band depth smaller than 24.5 mag and E(B-V) smaller than 0.1 mag;
(2) subsample the SV1 and One-Percent data so that the distance to the Sagittarius Stream values are representative of the distribution probed by the Main Survey.
We call efficiency for a given $z_{\rm{min}} < z <z_{\rm{max}}$ range the fraction of observed ELG targets which obtain a reliable $\zspec$ measurement in that redshift range.
We display in this figure the efficiency for the two redshift ranges most important for controlling the target sample, namely $0.6 < z < 1.6$, the nominal redshift range, and $1.1 < z < 1.6$, the high-redshift part of that range, where ELGs are the most important tracers for DESI.
For the Main Survey nominal $\efftime \sim 1000$s, the ELG\_LOP selection has an efficiency in $0.6 < z < 1.6$ of 60-65 percent, and an efficiency in $1.1 < z < 1.6$ of 30-35 percent.
The ELG\_VLO selection has a much higher efficiency in $0.6 < z < 1.6$ of 90-95 percent, but an efficiency of only 20-25 percent in $1.1 < z < 1.6$, as expected from the designed photometric cuts.
A noticeable feature in Figure~\ref{fig:selfrac} is the overall agreement between the three surveys.
This highlights the relevance of the DESI approach: using SV1 to explore a large selection sample, then One-Percent to refine the selections and check them with observations slightly deeper than nominal, before finalizing the Main Survey selection.
This agreement is what allows us to combine the three surveys (bottom panel of Figure~\ref{fig:selfrac}), leading to a precise measurement over a large range of \efftime~values.
The second visible feature is the flattening of the efficiency curves towards large \efftime~values.
As expected, the efficiency is low for low \efftime~values, as the ELG spectra do not have enough SNR to securely measure a $\zspec$ value for most of the sample.
Then the efficiency strongly increases with increasing \efftime~up to approximately 750s.
Finally the efficiency flattens for \efftime~values larger than 750s.
As a consequence, the efficiency is rather constant over the \efftime~range probed by the Main survey (blue histogram in the top panel), which naturally has some scatter around the requested value of 1000s.
\subsection{Efficiency in the photometric space} \label{sec:effphot}
This Section analyzes the redshift efficiency in the $0.6 < z < 1.6$ and $1.1< z < 1.6$ ranges as a function of the $g$-band magnitude and of the $grz$-band colors.
We use the SV1 ELG sample in order to quantify how the efficiency varies in the photometric space for the Main ELG selection, and at the borders of that selection.
Such a study was used to finalize the Main Survey ELG selection.
We use reductions of the 25 SV1 ELG-only tiles with $800\rm{s} < \efftime < 1200\rm{s}$, i.e. \efftime~values representative of the Main Survey; those tiles include about 43.2 thousand ELG targets.
The results are presented in Figure~\ref{fig:effphot}, where the top (resp. bottom) row is the efficiency in the $0.6 < z < 1.6$ (resp. $1.1 < z < 1.6$) range.
The left column shows the $g-r$ vs. $r-z$ diagram, the middle column shows the $\coii$ vs. $g_{\rm{fib}}$ diagram, and the right column shows the $g_{\rm{tot}}$ vs. $g_{\rm{fib}}$ plane.
On all plots, the Main ELG\_LOP selection is displayed as a solid dark line, the Main ELG\_VLO selection as a dashed red line, and the SV1 ELG selection as a thick solid grey line.
The $g-r$ vs. $r-z$ plots include a $g_{\rm{fib}} < 24.1$ selection, i.e. the same magnitude limit as the Main ELG sample.
The efficiency in the $1.1 < z < 1.6$ interval (bottom) was the key motivation behind the Main ELG selection box definition.
The efficiency inside the ELG\_LOP selection box is rather stable, but sharply drops on the blue $r-z$ side (likely because any $z > 1.6$ galaxy cannot provide a reliable $\zspec$) and on the red $g-r$ side (likely due to contamination from stars and $z<0.6$ galaxies).
On the red $r-z$ side, the efficiency drops more smoothly, which justifies the ELG\_VLO selection box definition, in conjunction with the very high efficiency of that $g-r$ vs. $r-z$ region for the $0.6 < z < 1.6$ range (top plot).
The $\coii$ vs. $g_{\rm{fib}}$ diagram allows us to test the effect of selecting ELG targets fainter than $g_{\rm{fib}} < 24.1$.
At fixed $\coii$, the efficiency is actually rather stable for $\coii > 0$ when going to magnitudes fainter than the Main ELG cut (solid black line), illustrating the good performance of DESI.
Nevertheless, such a selection would also bring in many failures from the $\coii < 0.5$ region, which would mitigate the gain in efficiency; that, combined with the fact that going fainter increases the density variations with imaging and foreground properties (see Section~\ref{sec:photsyst}), explains why it was not considered in the end.
The $g_{\rm{tot}}$ vs. $g_{\rm{fib}}$ plots include the $g-r$ vs. $r-z$ selection of the Main ELG cuts.
They confirm the expectation that a total-magnitude-based selection would add poor-efficiency targets, likely extended targets with little flux inside the DESI fibers.
This justifies our choice to use a fiber magnitude-based cut for the Main selection.
\subsection{Redshift distribution} \label{sec:nz}
We present the Main Survey ELG sample redshift distribution.
We consider the Main Survey observations up to December 2021, as this allows us to build a sample observed over a footprint fairly representative of the full DESI footprint (see Section \ref{sec:specobs}).
We restrict to tiles with $800\rm s < \efftime < 1200\rm s$.
This sample contains about 1.5 million ELG\_LOP spectra (North: 0.2 million, South-DECaLS: 1 million, South-DES: 0.3 million) and about 187 thousand ELG\_VLO spectra (North: 21 thousand; South-DECaLS: 132 thousand; South-DES: 34 thousand).
For each sample and each footprint, Table~\ref{tab:eff} lists the overall efficiency, i.e. the fraction of observed ELG spectra which provide a reliable $\zspec$ (Equation~\ref{eq:zcrit}), and the efficiencies in the $0<z<0.6$, $0.6<z<1.1$, and $1.1<z<1.6$ ranges.
With our current criterion, 68 to 73 percent of the ELG\_LOP targets provide a reliable $\zspec$ ($\sim$70 percent in North and South-DECaLS, 73 percent in South-DES).
\begin{table*}
\centering
\begin{tabular}{lcccccc}
\hline
\hline
Sample & Footprint & Target density & \multicolumn{4}{c}{Efficiency}\\
& & [deg$^{-2}$] & All redshifts & $0 < z < 0.6$ & $0.6 < z < 1.1$ & $1.1 < z < 1.6$\\
\hline
\multirow{3}{*}{ELG\_LOP} & North & 1930 &0.71 & 0.05 & 0.33 & 0.32\\
& South-DECaLS & 1950 & 0.68 & 0.05 & 0.29 & 0.34\\
& South-DES & 1900 & 0.73 & 0.02 & 0.31 & 0.39\\
\hline
\multirow{3}{*}{ELG\_VLO} & North & 410 & 0.93 & 0.01 & 0.70 & 0.21\\
& South-DECaLS & 490 & 0.94 & 0.01 & 0.67 & 0.25\\
& South-DES & 480 & 0.95 & 0.00 & 0.69 & 0.26\\
\hline
\end{tabular}
\caption{
Efficiency per sample and footprint for the Main Survey ELG selection.
We call efficiency for a given $\rm{z_{min}} < z < \rm{z_{max}}$ range the fraction of observed ELG targets which provide a reliable $\zspec$ measurement (according to Equation~\ref{eq:zcrit}) in that redshift range.
}
\label{tab:eff}
\end{table*}
Figure~\ref{fig:nz} shows the expected density of observed ELG targets providing a reliable $\zspec$ at the end of the survey, when all passes will have been completed.
The distributions are normalized to the target density, multiplied by the fiber assignment rate at the end of the survey, multiplied by the fraction of observed ELG targets providing a reliable $\zspec$.
The target densities are reported in Table~\ref{tab:imagdens}.
The fiber assignment rate at the end of survey is expected to be 0.69 for the ELG\_LOP selection and 0.42 for the ELG\_VLO one \citep{raichoor22b,desi-collaboration22b}; those values account for the 1\% loss rate affecting the ELG observations, i.e. where observations are discarded because of non-valid fibers (e.g., due to mechanical issue, petal-rejection; see \citealt{desi-collaboration22b,schlafly22a}).
The fractions of observed ELG targets providing a reliable $\zspec$ are listed in the fourth column of Table~\ref{tab:eff}.
Altogether, the normalizations for the three footprints range from 910 to 960 deg$^{-2}$ for the ELG\_LOP selection, and from 160 to 190 deg$^{-2}$ for the ELG\_VLO selection.
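This normalization is a simple product of three factors; a minimal sketch with the South-DES ELG\_LOP values quoted in the text (the function name is illustrative):

```python
def reliable_z_density(target_density, fiber_assign_rate, efficiency):
    """Expected end-of-survey density of reliable zspec (deg^-2):
    target density x fiber-assignment rate x reliable-zspec fraction."""
    return target_density * fiber_assign_rate * efficiency

# ELG_LOP in the South-DES footprint: 1900 deg^-2 targets,
# 0.69 fiber assignment rate, 0.73 overall efficiency
density = reliable_z_density(1900.0, 0.69, 0.73)  # ~957 deg^-2
```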
Considering the efficiencies per redshift range in Table~\ref{tab:eff}, the ELG\_LOP redshift distribution provides more than 400 deg$^{-2}$ reliable $\zspec$ in both the $0.6<z<1.1$ and the $1.1<z<1.6$ ranges, and thus fulfills DESI requirements.
As expected, the deeper the imaging is, the more reliable $\zspec$ are gathered in the $1.1<z<1.6$ range.
The North and South-DECaLS footprints show comparable $z<0.6$ contamination of 5 percent, but the South-DECaLS footprint provides 30 deg$^{-2}$ more $z>1.1$ redshifts.
The South-DES footprint, with imaging 0.5 mag or more deeper than in South-DECaLS, has significantly better performance, with almost no $z<0.6$ contamination, and should bring 20 deg$^{-2}$ more $0.6<z<1.1$ and 60 deg$^{-2}$ more $1.1<z<1.6$ reliable redshifts than the South-DECaLS footprint, despite an overall target density smaller by 50 deg$^{-2}$.
Lastly, we stress that characterizing the redshift distribution of the $\sim$30 percent of targets which do not provide a reliable $\zspec$ will be an important topic for the DESI ELG LSS analysis, in particular estimating their fraction and distribution in the redshift range of interest.
This can be achieved using, for instance, accurate $\zphot$ estimates or the clustering-redshift method \citep[e.g.][]{newman08a}.
We note that this fraction could be decreased thanks to further developments.
First, it is very likely that some failures are spurious targets close to bright or medium stars, as the angular masking was purposely chosen to be very minimal at the targeting step.
Preparatory work has shown that the target selection has indeed some overdensity close to bright or medium stars.
Another potential improvement could come from the pipeline, with for instance a better sky subtraction.
Lastly, as noted above, the reliable $\zspec$ criterion could be refined.
Nevertheless, we expect the fraction of redshift failures to remain non-negligible in the end.
The VI analysis done on deep exposures showed that about 14 percent of the ELG\_LOP selection does not provide a VI-reliable $\zspec$; even though the VI was done on data processed with a less advanced reduction pipeline, it gives the order of magnitude of the effect.
The bottom panel of Figure~\ref{fig:nz} displays the redshift distribution of the ELG\_VLO sample.
As seen in Section~\ref{sec:photsyst}, this selection is less sensitive to the imaging depth, leading to less significant differences in the redshift distribution among the three footprints.
Nevertheless, their performances are still ordered in the same way, with the South-DES footprint providing the cleanest, highest-redshift sample, and the North footprint performing least well at high redshift.
For all footprints, the ELG\_VLO selection should provide about 130 deg$^{-2}$ reliable redshifts in the $0.6<z<1.1$ range and about 50 deg$^{-2}$ reliable redshifts in the $1.1<z<1.6$ range, mostly because of the low fiber assignment rate of that sample (0.42).\\
\section{Conclusion} \label{sec:concl}
The ELG sample will constitute one-third of the 40 million extra-galactic DESI redshifts and will be used to probe the Universe over the $0.6 < z < 1.6$ range, and in particular over the $1.1 < z < 1.6$ range, where it will bring the tightest of the DESI cosmological constraints.
We presented the DESI ELG target selection used for survey validation and the final selection that was derived from it for the Main Survey.
The Main Survey ELG selection is composed of two disjoint sets of cuts, the ELG\_LOP and the ELG\_VLO selections, which have target densities of about 1940 deg$^{-2}$ and 460 deg$^{-2}$, respectively.
The ELG\_LOP sample, which has a higher fiber assignment priority, favors the $1.1 < z < 1.6$ range, whereas the ELG\_VLO sample, at lower fiber assignment priority, favors the $0.6 < z < 1.1$ range.
These two samples are completed by the ELG\_HIP sample, a 10 percent random subsample of the ELG\_LOP and ELG\_VLO samples, which has the same fiber assignment priority as the LRG one.
The target selection is based on the $grz$-band photometry from the DR9 release of the Legacy Surveys.
As the ELG targets are at low SNR in the imaging, we define three footprints, isolating the three regions linked to the underlying observing programs with different imaging depths: North, South-DECaLS, and South-DES.
Both Main Survey ELG\_LOP and ELG\_VLO samples are selected with a $g$-band fiber magnitude cut $g_{\rm{fib}} < 24.1$, which favors \oii emitters and higher successful $\zspec$ measurement rates, and a specific $(g-r)$ vs.\ $(r-z)$ color box, which primarily selects the redshift range.
The Survey Validation SV1 ELG sample, which was used to tune the Main Survey cuts, is an extended version of those, notably towards:
(1) bluer $r-z$ targets, exploring the $1.1 < z < 1.6$ range;
(2) redder $r-z$ targets, exploring the $0.6 < z < 1.1$ range;
(3) fainter blue targets;
(4) $g$-band total magnitude selected targets.
We then presented the photometric properties of the Main Survey ELG selection.
In terms of magnitude, the ELG\_LOP sample is 0.2 mag fainter in the $r$-band and 0.5 mag fainter in the $z$-band than the ELG\_VLO sample, because of the different selection boxes in the $g-r$ vs. $r-z$ diagram.
For both samples, the target density is slightly different in each footprint, mostly because of the difference in imaging depth: ELG\_LOP ranges from 1900 to 1950 deg$^{-2}$ and ELG\_VLO from 410 to 490 deg$^{-2}$.
The imaging and foreground maps causing the largest target density fluctuations are the imaging depth, in particular in the $g$-band, and the Galactic dust extinction.
Attempts to correct those fluctuations are deferred to subsequent papers, when a cleaner sample will have been defined, e.g., after inclusion of appropriate angular masking and correction for $\zspec$ measurement failures.
Lastly, we presented the spectroscopic properties of the Main Survey ELG selection.
For that purpose, we used observations from three DESI surveys covering two phases of validation and the first 7 months of the Main Survey.
We define a preliminary criterion to select reliable $\zspec$ measurements, which requires a minimal \oii doublet flux SNR as a function of the $\chi^2$ difference between the first and second redshift values that best fit the observed spectrum.
This criterion exploits the fact that the \oii doublet is the key emission line to measure accurate redshifts of star-forming ELGs.
Using tiles with visually inspected (VI) spectra and repeat observations, we demonstrate that such a criterion is extremely efficient, since it selects most of the VI-confirmed $\zspec$ and keeps the fraction of catastrophic $\zspec$ measurements below one percent.
Nevertheless, this discards about 30 percent of the observed ELG\_LOP spectra (and 6 percent of the ELG\_VLO ones): even if some improvement in the data reduction and in the reliability criterion could reduce those percentages, we expect that it will remain non-negligible for the ELG\_LOP sample, and it will be necessary to characterize the redshift properties of the discarded spectra.
We defined the efficiency in a given redshift range as the fraction of observed ELG spectra providing a reliable $\zspec$ in that redshift range.
Depending on the footprint, the ELG\_LOP selection has an efficiency of 2 to 5, 29 to 33, and 32 to 39 percent in the $0 < z < 0.6$, $0.6 < z < 1.1$, and $1.1 < z < 1.6$ ranges, respectively, with deeper imaging providing less contamination and more high-redshift spectra.
That sample will thus fulfill the DESI requirements: with the expected fiber assignment rate of 0.69, it should provide more than 400 deg$^{-2}$ observed, reliable $\zspec$ in both the $0.6 < z < 1.1$ and the $1.1 < z < 1.6$ ranges.
The ELG\_VLO selection has an efficiency of 0 to 1, 67 to 70, and 21 to 26 percent in the $0 < z < 0.6$, $0.6 < z < 1.1$, and $1.1 < z < 1.6$ ranges, respectively.
As expected from its design, that sample has an overall very high efficiency, extremely few contaminants, and peaks in the $0.6 < z < 1.1$ range.
With the expected fiber assignment rate of 0.42, that should provide 130 deg$^{-2}$ and 50 deg$^{-2}$ reliable $\zspec$ in the $0.6 < z < 1.1$ and the $1.1 < z < 1.6$ ranges, respectively.\\
\vspace{\baselineskip}
\section*{Acknowledgments}
JM gratefully acknowledges support from the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0020086.
This research is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE–AC02–05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; additional support for DESI is provided by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to the NSF’s National Optical-Infrared Astronomy Research Laboratory; the Science and Technologies Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Science and Technology of Mexico (CONACYT); the Ministry of Science and Innovation of Spain (MICINN), and by the DESI Member Institutions: \niceurl{https://www.desi.lbl.gov/collaborating-institutions}.
The DESI Legacy Imaging Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS), the Beijing-Arizona Sky Survey (BASS), and the Mayall z-band Legacy Survey (MzLS). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF’s NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory. Legacy Surveys also uses data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Legacy Surveys was supported by: the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility; the U.S. National Science Foundation, Division of Astronomical Sciences; the National Astronomical Observatories of China, the Chinese Academy of Sciences and the Chinese National Natural Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy. The complete acknowledgments can be found at \niceurl{https://www.legacysurvey.org}.
The authors are honored to be permitted to conduct scientific research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation.
\vspace{\baselineskip}
\section*{Data availability}
The ELG targets for the SV1, the One-Percent, and the Main Surveys are accessible at \niceurl{https://data.desi.lbl.gov/public/ets/target/}.
We refer the reader to \citet{myers22a} for a description of the files and of the folders structure.
All data points shown in the published graphs are available in a machine-readable form on the following website: \niceurl{https://doi.org/10.5281/zenodo.6950999}.
\facility{Mayall}
\bibliography{ms}{}
\bibliographystyle{aasjournal}
\appendix
\numberwithin{table}{section}
\numberwithin{figure}{section}
\section{SV1 Survey Target selection cuts} \label{app:svcuts}
We present in Table~\ref{tab:svcuts} the detailed cuts of the Survey Validation SV1 Survey ELG selection discussed in Section~\ref{sec:svts}.
This selection is the union of two samples, SVGTOT and SVGFIB, which have similar cuts but are based either on $g_{\rm tot}$ or $g_{\rm fib}$.
The overall selection has a target density of $\sim$7000 deg$^{-2}$, as the SVGFIB and SVGTOT selections have a large overlap.
The names SVGTOT and SVGFIB are names assigned to targeting bits by \texttt{desitarget}, the target selection pipeline \citep{myers22a}.
For completeness, we also report the cuts for the FDRGTOT and FDRGFIB selections, which are other targeting bits.
The FDRGTOT (FDRGFIB, respectively) is fully included in SVGTOT (SVGFIB, respectively).\\
\begin{table*}
\centering
\begin{tabular}{lccl}
\hline
\hline
Sample & Density & Cuts & Comment\\
\hline
\multirow{4}{*}{Clean} & \multirow{4}{*}{-} & \texttt{brick\_primary} = True & Unique object\\
& & $\texttt{nobs\_\{grz\}} > 0$ & Observed in the $grz$-bands\\
& & $\texttt{flux\_\{grz\}} \times \sqrt{\texttt{flux\_ivar\_\{grz\}}} > 0$ & Positive SNR in the $grz$-bands\\
& & $(\texttt{maskbits} \; \& \; 2^1) = 0$, $(\texttt{maskbits} \; \& \; 2^{12}) = 0$, $(\texttt{maskbits} \; \& \; 2^{13}) = 0$ & Not close to bright star/galaxy\\
\hline
\multirow{5}{*}{SVGTOT} & \multirow{5}{*}{$\sim$5200 deg$^{-2}$} & Clean & Clean sample\\
& & $g > 20$ & Bright cut\\
& & $(g - r) + 1.2 \times (r - z) < 1.6 - 7.2 \times (g_{\rm tot} - \texttt{GTOTFAINT\_FDR})$ & Sliding faint cut\\
& & $(g - r < 0.2)$ or $(g - r < 1.15 \times (r - z) + \texttt{LOWZCUT\_ZP} + 0.10)$ &Star/low-z cut\\
& & $g - r < -1.2 \times (r - z) + 2.0$ & Redshift/\oii cut\\
\hline
\multirow{5}{*}{SVGFIB} & \multirow{5}{*}{$\sim$5600 deg$^{-2}$} & Clean & Clean sample\\
& & $g > 20$ & Bright cut\\
& & $(g - r) + 1.2 \times (r - z) < 1.6 - 7.2 \times (g_{\rm fib} - \texttt{GFIBFAINT\_FDR})$ & Sliding faint cut\\
& & $(g - r < 0.2)$ or $(g - r < 1.15 \times (r - z) + \texttt{LOWZCUT\_ZP} + 0.10)$ &Star/low-z cut\\
& & $g - r < -1.2 \times (r - z) + 2.0$ & Redshift/\oii cut\\
\hline
\multirow{5}{*}{FDRGTOT} & \multirow{5}{*}{$\sim$2400 deg$^{-2}$} & Clean & Clean sample\\
& & $g > 20$ & Bright cut\\
& & $g < \texttt{GTOTFAINT\_FDR}$ & Faint cut\\
& & $0.3 < r - z < 1.6$ & $r - z$ cut\\
& & $g - r < 1.15 \times (r - z) + \texttt{LOWZCUT\_ZP}$ &Star/low-z cut\\
& & $g - r < -1.2 \times (r - z) + 1.6$ & Redshift/\oii cut\\
\hline
\multirow{5}{*}{FDRGFIB} & \multirow{5}{*}{$\sim$2500 deg$^{-2}$} & Clean & Clean sample\\
& & $g > 20$ & Bright cut\\
& & $g_{\rm fib} < \texttt{GFIBFAINT\_FDR}$ & Faint cut\\
& & $0.3 < r - z < 1.6$ & $r - z$ cut\\
& & $g - r < 1.15 \times (r - z) + \texttt{LOWZCUT\_ZP}$ &Star/low-z cut\\
& & $g - r < -1.2 \times (r - z) + 1.6$ & Redshift/\oii cut\\
\hline
\end{tabular}
\caption{
Survey Validation SV1 Survey target selection cuts.
The following quantities have values defined for the North and the South region:
$\texttt{GTOTFAINT\_FDR}: \rm{North}=23.5, \; \rm{South}=23.4$;
$\texttt{GFIBFAINT\_FDR}: \rm{North}=24.1, \; \rm{South}=24.1$;
$\texttt{LOWZCUT\_ZP}: \rm{North}=-0.20, \; \rm{South}=-0.15$.
See Table~\ref{tab:maincuts} for the definitions of the terms in the cuts.
}
\label{tab:svcuts}
\end{table*}
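The bitwise `Clean' cuts of Table~\ref{tab:svcuts} can be sketched as follows (an illustration assuming array-like catalog columns; the bit positions $2^1$, $2^{12}$, and $2^{13}$ come from the table, while the constant and function names are ours):

```python
import numpy as np

# MASKBITS bits rejected by the "Clean" cuts (close to bright star/galaxy)
REJECT_BITS = (1 << 1) | (1 << 12) | (1 << 13)

def is_clean(brick_primary, nobs_g, nobs_r, nobs_z,
             snr_g, snr_r, snr_z, maskbits):
    """Boolean mask reproducing the 'Clean' cuts: unique object,
    observed in grz, positive SNR in grz, and no rejected MASKBITS."""
    not_masked = (np.asarray(maskbits) & REJECT_BITS) == 0
    return (np.asarray(brick_primary)
            & (np.asarray(nobs_g) > 0)
            & (np.asarray(nobs_r) > 0)
            & (np.asarray(nobs_z) > 0)
            & (np.asarray(snr_g) > 0)
            & (np.asarray(snr_r) > 0)
            & (np.asarray(snr_z) > 0)
            & not_masked)
```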
\section{HSC/DR2 $\zphot$ comparison to DESI/ELG $\zspec$} \label{app:hsczphot}
We illustrate in Figure~\ref{fig:desi_hsc} how the HSC/DR2 $\zphot$ estimates perform against the Main Survey ELG DESI $\zspec$ measurements corresponding to observations until December 2021 (see Section~\ref{sec:specobs}).
We emphasize that DESI $\zspec$ data were not used to train the HSC/DR2 $\zphot$ method.
The matched sample has $\sim$175 thousand targets with a $\zspec$ passing our reliability criterion of Equation~\ref{eq:zcrit}.
We do not make any quality cuts on the HSC $\zphot$.
The left-hand panels compare the two redshift measurements, the color-coding indicating the HSC \texttt{risk} parameter, which quantifies the reliability of the $\zphot$ estimate \citep[with $\texttt{risk} = 0$ being the most secure and $\texttt{risk} = 1$ the least secure; see][]{tanaka18a}.
Overall, the HSC $\zphot$ estimates perform very well, notably over the whole $0.6 < z < 1.6$ range, thanks to the very deep HSC imaging and the presence of $y$-band imaging.
This justifies the use of the HSC $\zphot$ in Figures~\ref{fig:maingrz} and \ref{fig:svgrz}.
In particular, the \texttt{risk} parameter provides a sensible estimation of the $\zphot$ reliability.
This last point is illustrated in the bottom panel of the left-hand plots, where we also display the median value of $\Delta z = (\zphot - \zspec) / (1 + \zspec)$ for three subsamples: all matches (black), the 50 percent lower \texttt{risk} parameter values (cyan), and the 25 percent lower \texttt{risk} parameter values (red).
The $1.48 \times \rm MAD(\Delta z)$ values are displayed as shaded regions, with typical values of 0.08 (all matches), 0.05 (50 percent lower \texttt{risk}), and 0.02 (25 percent lower \texttt{risk}).
Nevertheless, those cuts on \texttt{risk} are biasing the sample towards the redder ELG targets -- hence the lower redshift ones, as those are easier to model by the $\zphot$ algorithms; said differently, applying cuts on \texttt{risk} will exclude from the sample the blue, high-redshift ELGs.
That is illustrated in the right-hand panels, where we plot for each subsample (all, 50 percent and 25 percent lower \texttt{risk} parameter values) the DESI $\zspec$ distribution (filled histograms) and the HSC $\zphot$ distribution (empty histograms); the DESI $\zspec$ distribution with no cut on \texttt{risk} is displayed with a black, hatched histogram.
As a consequence, a careful balance between the $\zphot$ reliability and the sample representativeness will be required for any analysis using the HSC $\zphot$ distribution for an ELG DESI-like sample, as for instance trying to infer the redshift distribution of the DESI ELG targets which do not provide a reliable $\zspec$.
|
Title:
One-Electron Quantum Cyclotron as a Milli-eV Dark-Photon Detector |
Abstract: We propose using trapped electrons as high-$Q$ resonators for detecting meV
dark photon dark matter. When the rest energy of the dark photon matches the
energy splitting of the two lowest cyclotron levels, the first excited state of
the electron cyclotron will be resonantly excited. A proof-of-principle
measurement, carried out with one electron, demonstrates that the method is
background-free over a 7.4 day search. It sets a limit on dark photon dark
matter at 148 GHz (0.6 meV) that is around 75 times better than previous
constraints. Dark photon dark matter in the 0.1-1 meV mass range (20-200 GHz)
could likely be detected at a similar sensitivity in an apparatus designed for
dark photon detection.
| https://export.arxiv.org/pdf/2208.06519 |
\title{One-Electron Quantum Cyclotron as a Milli-eV Dark-Photon Detector}
\date{\today}
\author{Xing Fan}
\email{xingfan@g.harvard.edu}
\affiliation{Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA}
\affiliation{Center for Fundamental Physics, Northwestern University, Evanston, Illinois 60208, USA}
\author{Gerald Gabrielse}
\email{gerald.gabrielse@northwestern.edu}
\affiliation{Center for Fundamental Physics, Northwestern University, Evanston, Illinois 60208, USA}
\author{Peter W.~Graham}
\email{pwgraham@stanford.edu}
\affiliation{Stanford Institute for Theoretical Physics, Department of Physics, Stanford University, Stanford, CA 94305, USA}
\affiliation{Kavli Institute for Particle Astrophysics \& Cosmology, Department of Physics, Stanford University, Stanford, CA 94305, USA}
\author{Roni Harnik}
\affiliation{Superconducting Quantum Materials and Systems Center (SQMS), Fermilab, Batavia, IL 60510, USA}
\affiliation{Theoretical Physics Division, Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA}
\author{Thomas G. Myers}
\affiliation{Center for Fundamental Physics, Northwestern University, Evanston, Illinois 60208, USA}
\author{Harikrishnan Ramani}
\email{hramani@stanford.edu}
\affiliation{Stanford Institute for Theoretical Physics, Department of Physics, Stanford University, Stanford, CA 94305, USA}
\author{Benedict A. D. Sukra}
\affiliation{Center for Fundamental Physics, Northwestern University, Evanston, Illinois 60208, USA}
\author{Samuel S. Y. Wong}
\affiliation{Stanford Institute for Theoretical Physics, Department of Physics, Stanford University, Stanford, CA 94305, USA}
\author{Yawen Xiao}
\affiliation{Stanford Institute for Theoretical Physics, Department of Physics, Stanford University, Stanford, CA 94305, USA}
\preprint{FERMILAB-PUB-22-482-SQMS-T
}
The particle nature of dark matter (DM) and its interactions with the standard model (SM) of particle physics remains a mystery, despite decades of experimental scrutiny~\cite{DMDiscovery1933,DMRotationCurve1980,DMGravitationalLensing2006,DMBulletCollision2000,DM_WMAPGalaxyCenter2007,Planck2018-1}. The mass of the DM is unknown and the possibility that it is made of ultralight bosons and can be described as a classical wave has received significant inquiry in recent years~\cite{FuzzyCDM2000,DarkPhotonMisalignmentMechanism2011,ATheoryOfDarkMatter2009,WISPyDarkMatter2012,ReviewScalarFieldDarkMatter,UltraLightScalarCosmologicalDarkMatter}. One such ultralight dark matter candidate is the dark photon (DP), a hypothetical spin-1 particle~\cite{DPTheory1986,Okun:1982xi} that is theoretically well-motivated and possesses cosmological production mechanisms that can produce the observed DM abundance~\cite{Graham:2015rva,Dror:2018pdh,Agrawal:2018vin,LowEnergyFrontierReview2010, Ahmed:2020fhc,GravitationalProductionOfDP2021}.
Such a dark photon will generically have a kinetic mixing with the SM photon because this term is allowed by the symmetries of the theory (so long as the dark photon does not have a non-Abelian gauge symmetry). This kinetic mixing allows dark photon dark matter (DPDM) to be searched for in existing~\cite{DPLimitsReview2021,HuntForDarkPhoton2020} and forthcoming experiments~\cite{Antypas:2022asj}.
In this work, we propose a promising new direct detection technique using one-quantum transitions of one or more trapped electrons that are initially cooled to their cyclotron ground state. We demonstrate the viability of this technique with a proof-of-principle measurement that sets a limit 75 times better than previous constraints. This new limit is only for a narrow mass range because of limitations of an apparatus designed for making the most accurate measurements of the electron and positron magnetic moments~\cite{atomsNewMeasurement2019}---to test the Standard Model's most precise predictions~\cite{HarvardMagneticMoment2008,HarvardMagneticMoment2011,Nio2018TenthOrder,CsAlphaScience2018,RbAlpha2020Nature,Alpha2006fixed,atomsTheoryReview2019,DehmeltMagneticMoment,EfficientPositronAccumulation}. With an apparatus designed for DPDM detection, including efficient scanning of the resonant frequency, the mass range could be greatly extended.
The relevant properties of the DPDM are captured by the Lagrangian (in natural units)~\cite{DPTheory1986},
\begin{align}
\mathcal{L}\supset -\frac{1}{4}F'_{\mu\nu}F'^{\mu\nu} + \frac{\epsilon}{2} F^{\mu\nu}F'_{\mu\nu}+\frac{1}{2}m_{A'}^2A'_\mu A'^\mu.
\end{align}
Here $A'_\mu$ is the DP vector, $F'$ and $F$ are the DP and SM photon field strengths respectively, $\epsilon$ is the kinetic mixing parameter, and $m_{A'}$ is the mass of the DP. The DPDM manifests as dark electric and magnetic fields oscillating at a frequency set by the DP mass $\omega_{A'}=m_{A'}c^2/\hbar$, where $c$ is the speed of light and $\hbar$ is the reduced Planck constant. In the presence of a kinetic mixing with the SM photon, these dark fields cause effective ($\epsilon$-suppressed) SM electromagnetic fields. These can be detected by devices sensitive to tiny electric or magnetic fields at the frequency $\omega_{A'}$.
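As a quick numerical check of the mass--frequency relation $\omega_{A'} = m_{A'}c^2/\hbar$, the conversion can be sketched in a few lines (SI constants; the specific masses chosen here are illustrative):

```python
# Dark photon mass <-> frequency conversion, f = m c^2 / h, in SI units.
h = 6.62607015e-34       # Planck constant, J s
eV = 1.602176634e-19     # J per eV

def dp_frequency_GHz(mass_eV):
    """Oscillation frequency (GHz) of a dark photon of the given mass (eV)."""
    return mass_eV * eV / h / 1e9

# The 0.1--1 meV window corresponds to roughly 24--242 GHz:
print(dp_frequency_GHz(1e-4), dp_frequency_GHz(1e-3))
```

A mass of about $6.1\times10^{-4}$~eV lands on the 148~GHz cyclotron frequency used in this work.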
A plethora of complementary experiments have been designed with sensitivities to different DM masses.
The frequency range we focus on, 20~to~200~GHz (i.e.\ 0.1 to 1~meV), is particularly challenging experimentally, yet well-motivated theoretically by the minimal dark photon dark matter model with purely gravitational production \cite{Graham:2015rva}. This range is too high for extremely high-$Q$ resonators (e.g.~as used by ADMX \cite{ADMXSideCar2018,ADMXTechnicalDesignReview2021}, CAPP \cite{CAPP2020_7ueV,CAPP2020_13ueV,CAPP2021_10ueV}, and HAYSTAC \cite{HAYSTAC2018_24ueV,HAYSTAC2021_17ueV}). At the same time, the corresponding photons are below the energy threshold for existing single photon detection experiments such as~\cite{Chiles:2021gxk,Hochberg:2021yud,SinglePhotonReview2011}. Alternative experiments involving dish antennae or metal plates have been proposed or are underway around our frequency range \cite{DOSUE2022_100ueV, DarkPhoton_Tomita2020_115ueV, DarkPhoton_Knirck_2018_0.8meV, BREAD:2021tpx}.
The use of trapped ion crystals was proposed for the MHz frequency range \cite{gilmore2021quantum}.
The new DPDM detector proposed and demonstrated here is one (or more) electron in a Penning trap [Fig.~\ref{fig:TrapAndQuantumStates}(a)]---a ``one-electron quantum cyclotron'' \cite{QuantumCyclotron}. The trapped electron is a high-$Q$ resonator with a 20--200~GHz resonant frequency determined by the applied magnetic field of the trap. The DPDM wave would drive the electron to jump from the cyclotron ground state to the first excited state [Fig.~\ref{fig:TrapAndQuantumStates}(b)] if the cyclotron level spacing corresponds to the DP's frequency. The cyclotron quantum state is monitored in real-time [Fig.~\ref{fig:TrapAndQuantumStates}(c)] to search for excitations. To determine the cyclotron excitation rate, we compute the SM electric field induced by the DPDM inside the microwave cavity formed by the electrodes of the Penning trap.
Two motions of the trapped electron (mass $m_e$ and charge $-e$) are key. The quantized cyclotron oscillation (in a plane perpendicular to a strong magnetic field $B_0 \hat{z}$) is potentially excited by the DPDM-generated photon field in the $xy$-plane. For $B_0 = 5.3$~T, a photon resonant at the cyclotron frequency, $\omega_c/(2\pi) = eB_0/(2\pi m_e)=148$~GHz could increase the cyclotron energy by one quantum, $\hbar \omega_c$ [Fig.~\ref{fig:TrapAndQuantumStates}(b)]. The frequency of the electron's classical axial oscillation, along the magnetic field, is used to detect cyclotron excitations~\cite{DehmeltMagneticBottle}.
The axial oscillation frequency $\omega_z/(2\pi)=114$~MHz is set by the static potentials applied to the trap electrodes.
A quantum nondemolition (QND) coupling of the two motions makes it possible to detect one-quantum cyclotron excitations without causing a change to the cyclotron quantum number~\cite{FanBackActionPRL2021}. The monitored axial frequency shifts in proportion to the cyclotron quantum number $n_c$ by $\Delta \omega_z = n_c \delta$ [Fig.~\ref{fig:TrapAndQuantumStates}(c)], due to a magnetic bottle gradient that adds $B_2z^2 \hat{z}$ to the magnetic field~\cite{DehmeltMagneticBottle}. A nickel ring encircling the trap generates $B_2=300$~T/m$^2$, making $\delta/(2\pi) \equiv \hbar eB_2/(2\pi m_e^2\omega_z) = 1.3$~Hz.
The axial shift $\delta/(2\pi)$ from a one-quantum cyclotron excitation is 8 times larger than the $\sigma/(2\pi) = 0.16~\mathrm{Hz}$ standard deviation for fluctuations from other sources. The distribution of measured axial frequencies for 2~s averaging time is displayed in Fig.~\ref{fig:DistributionOfMeasuredNc}.
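The trap frequencies and the bottle shift quoted above can be checked directly from the formulas in the text, using CODATA constants and the stated $B_0$, $B_2$, and $\omega_z$:

```python
import math

# Back-of-the-envelope check of the trap frequencies quoted in the text.
e, m_e, hbar = 1.602176634e-19, 9.1093837015e-31, 1.054571817e-34

B0 = 5.3       # trap magnetic field, T
B2 = 300.0     # magnetic bottle gradient, T/m^2
f_z = 114e6    # axial frequency, Hz

# Cyclotron frequency f_c = e B0 / (2 pi m_e) -> ~148 GHz
f_c = e * B0 / (2 * math.pi * m_e)

# Axial shift per cyclotron quantum, delta/(2 pi) = hbar e B2 / (2 pi m_e^2 omega_z)
delta = hbar * e * B2 / (2 * math.pi * m_e**2 * (2 * math.pi * f_z))

print(f_c / 1e9, delta)   # ~148 GHz and ~1.4 Hz
```

The shift of $\approx1.4$~Hz is indeed about 8 times the quoted $\sigma/(2\pi)=0.16$~Hz fluctuation level.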
The cyclotron resonance frequency is deliberately broadened to increase as much as possible the range of DPDM frequencies that could be detected. This is done by driving the axial oscillation amplitude to $z_\text{max}=60~\mu\mathrm{m}$ by feeding back an electrical signal induced by the oscillation itself~\cite{SelfExcitedOscillator}.
The detection quality factor $Q=10^7$ (a much broader line than the unbroadened $Q=10^9$ resonance \cite{HarvardMagneticMoment2008}) slightly broadens the DPDM sensitivity bandwidth beyond the intrinsic $Q=10^6$ bandwidth of the dark matter \cite{Bovy_2012_DMVelocity,Schonrich_DMVelocity,Golubov_DMVelocity}, but does not improve the sensitivity.
The electron is suspended at the center of a trap~[Fig.~\ref{fig:TrapAndQuantumStates}(a)] that a dilution refrigerator keeps at a temperature of $T=50$~mK. The electron cyclotron motion cools via synchrotron radiation and is not excited by blackbody photons.
In thermal equilibrium, the average quantum number is \cite{QuantumCyclotron}
\begin{equation}
\bar{n}_c = \left[\exp\left(\frac{\hbar\omega_c}{k_\mathrm{B}T}\right)-1\right]^{-1}= 1.9\times10^{-62}\approx 0.
\end{equation}
The electron is thus essentially always in its quantum cyclotron ground state $n_c=0$; background excitations from blackbody photons are estimated to occur less than once in many years \cite{QuantumCyclotron}.
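The thermal occupation estimate can be reproduced numerically (the figure quoted depends slightly on the exact cyclotron frequency used):

```python
import math

# Bose-Einstein occupation of a 148 GHz cyclotron oscillator at T = 50 mK.
h, kB = 6.62607015e-34, 1.380649e-23
f_c, T = 148e9, 0.050

x = h * f_c / (kB * T)          # hbar*omega_c / (kB*T), ~142
n_bar = 1.0 / math.expm1(x)     # [exp(x) - 1]^-1, ~2e-62
print(x, n_bar)
```

With $x \approx 142$, the exponential suppression puts $\bar{n}_c$ some 60 orders of magnitude below unity, consistent with the text.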
Photons dynamically induced by the DPDM field could produce cyclotron transitions from $n_c=0\rightarrow n_c=1$.
For radiation broader than the linewidth, the transition rate is~\cite{loudon2000quantum,NoiseDrivenTransition}
\begin{equation}
\Gamma= \int \frac{\pi e^2}{2 m_e \hbar\omega_c} S_E\left(\omega\right)\chi(\omega,\omega_c)d\omega,
\label{eq:GammaDPDM}
\end{equation}
where $\chi(\omega,\omega_c)=\tfrac{1}{\sqrt{2\pi}\Delta\omega_c}\exp\left[-\tfrac{1}{2}\left(\tfrac{\omega-\omega_c}{\Delta\omega_c}\right)^2\right]$ is the normalized cyclotron line shape with linewidth $\Delta\omega_c$, and
$S_E(\omega)\,d\omega$ is the power in the interval $\{\omega,\omega+d\omega\}$
for the component of the DM-induced electric field in the $xy$-plane.
For DPDM with spread $\Delta\omega_{A'} \approx 10^{-6}\omega_{A'}$~\cite{DMDispersion2000}, and for a cylindrical cavity, $S_E(\omega)$ can be approximated as a boxcar window function with value
\begin{align}
S_E(\omega)=\kappa^2\times\epsilon^2\frac{\rho_{\rm DM}c^2}{\varepsilon_0\Delta\omega_{A'}} \langle \sin^2\theta \rangle
\label{eq:SE_DPDM}
\end{align}
in the interval $\{\omega_{A'},\omega_{A'}+\Delta\omega_{A'}\}$ and zero outside. Here $\rho_\mathrm{DM}c^2=0.3$~GeV/cm$^3$ is the local DM density~\cite{Planck2018-1} and $\varepsilon_0$ is the vacuum permittivity. We assume that the angle between the DP electric field and the $z$-axis, $\theta$, changes randomly
and is adequately sampled in observation times $T_{\rm obs}\gg {1}/{\Delta\omega_{A'}}\approx 10^{-6}~\mathrm{s}\times \left(\frac{2\pi\times 148 ~\textrm{GHz}}{\omega_{A'}}\right)$.
Thus the angular average that captures the component along the $xy$-plane evaluates to
$\langle \sin^2\theta \rangle =2/3$.
A fixed DPDM polarization~\cite{DPLimitsReview2021} would essentially not change the result given that our apparatus, off the Earth's rotation axis, changes orientation as the Earth rotates during the $T_{\rm obs}\gg $ 1 day observation time for this experiment~\cite{forthcoming}.
We calculate the effect of fixed DPDM polarization on our projected future sensitivity in \cite{forthcoming}.
Finally, $\kappa$ is the enhancement of the DPDM-induced electric field at the position of the electron by the trap's microwave structure~\cite{DMRadioPRD_2015}:
\begin{equation}
\kappa=\left|\sum_n\frac{\omega^2}{\omega^2-\omega_n^2(1-\tfrac{2i}{Q_n})}\frac{\int dV\,\vec{E}_n^*(\mathbf{r})\cdot \hat{\mathbf{x}} }{\int dV\,|\vec{E}_n(\mathbf{r})|^2}\vec{E}_{n}(\bm{0})\cdot \hat{\mathbf{x}} \right|.
\label{eq:KappaExplicitExpression}
\end{equation}
Without loss of generality, $\mathbf{x}$ is taken to be the DPDM polarization direction,
$n$ runs over all resonant modes; $\omega_n$, $Q_n$, and $\vec{E}_n(\mathbf{r})$ are the resonant frequency, quality factor, and electric field of mode $n$ at position $\mathbf{r}$, respectively.
The last factor, $\vec{E}_{n}(\bm{0})\cdot \hat{\mathbf{x}}$, captures the transverse electric field at the trap center that drives the electron cyclotron transition.
Figure~\ref{fig:kappaplot} shows the calculated frequency spectrum for $\kappa^2$ using measured resonant frequencies and $Q$ factors. The sharp peaks are from cavity modes that couple strongly to the cyclotron motion of an electron suspended at the cavity center. The microwave cavity resonances below 170~GHz for the cylindrical Penning trap \cite{CylindricalPenningTrap,CylindricalPenningTrapDemonstrated} (with radius $\rho_0=4.527$~mm and height $2z_0=7.790$~mm) have all been carefully mapped using parametrically-pumped electrons~\cite{SynchronizedElectronsPRL,SynchronizedElectronsPRA}. The measured frequencies agree with an ideal cylindrical model to within a few percent.
The high $\kappa$ values at cavity mode resonances are unfortunately not compatible with the existing $2 \, \text{s}$ averaging time needed to resolve the one-quantum cyclotron transitions in our apparatus.
For this averaging time, the magnetic field must be chosen to keep the electron cyclotron frequency far from resonance with all cavity modes that couple to the centered electron.
This inhibits the spontaneous emission of synchrotron radiation to lengthen the lifetime of the excited cyclotron state $\tau_c$ \cite{InhibitionLetter}. For this demonstration, $\tau_c = 7.2$~s is about a factor of 80 longer than its free space value \cite{InhibitionLetter,HarvardMagneticMoment2011}.
The photons induced by the DPDM, being away from resonance, thus cannot build up in a cavity radiation mode. Fortunately, our calculation shows that $\kappa^2$ remains usefully large, with $\kappa^2=2.37$ for our demonstration at 148~GHz. The cylindrical symmetry of the conducting cavity boundary enhances the DM-induced electric field at the trap center (akin to the ``focussing'' effect found in the dish antenna proposals \cite{DishAntennaProposal-1,DishAntennaProposal-2}). A possible future optimization is a larger spherical trap cavity %
that can result in a 25-fold increase in $\kappa$.
The new search for 148~GHz DPDM [Fig.~\ref{fig:MeasurementCycle}(a)] is for $n_c=0$ to $n_c\ge 1$ cyclotron excitations over $T_\mathrm{obs} = 7.4$~days (Table~\ref{tab:datasets}). Shifts of the electron's axial frequency are averaged over $t_\mathrm{ave}=2$~second intervals and recorded as a function of time. The trapping potential is slowly adjusted to eliminate slow drifts of the axial frequency. Figure~\ref{fig:MeasurementCycle}(b) shows $\Delta \omega_z/\delta$ for 24 hours of the 7.4 day search. A cyclotron excitation to the first excited state would produce $\Delta \omega_z/\delta = 1$. Any $\Delta \omega_z$ larger than the $5\sigma$ threshold (i.e.\ $\Delta \omega_z \ge 5\sigma = 0.65\,\delta$) would be interpreted as being potentially caused by DPDM. No such excitation is detected during the 7.4 days.
The search was suspended for 25~min of calibration every 6 hours, indicated by the breaks in Fig.~\ref{fig:MeasurementCycle}(b). The one-quantum detection sensitivity is verified using microwave photons sent into the cavity. The detector bandwidth of 33~kHz is also deduced by measuring the shift $\Delta \omega_z /\delta$ as a microwave drive is swept through resonance with the 148~GHz cyclotron frequency [Fig.~\ref{fig:MeasurementCycle}(c)]. The width is broadened by the large self-excited axial oscillation in the magnetic gradient described above. Cyclotron frequency shifts are negligible given the extremely low magnetic field drift rate of $\Delta B/B=10^{-10}$ per hour that is realized using a carefully shimmed self-shielding solenoid \cite{SelfShieldingSolenoid,Helium3NMR2019,atomsNewMeasurement2019}. This 33 kHz detector bandwidth slightly broadens the sensitivity bandwidth beyond the ${\sim}100 \, \text{kHz}$ bandwidth expected of the dark matter.
\begin{table}[]
\begin{tabular}{c|c|c}
\hline
run \# & time (date.hour:minute) &observation length (s) \\
\hline
\hline
1 & 11.12:46~ -- ~13.13:15 &148058 \\
2 & 14.18:26~ -- ~15.11:33 &58162 \\
3 & 15.11:50~ -- ~17.17:22 &179698 \\
4 & 17.18:38~ -- ~18.18:40 &80640 \\
5 & 19.12:15~ -- ~21.15:43 &172312 \\
\hline\hline
total & --- &638870 \\
\end{tabular}
\caption{Datasets for DPDM search in March 2022. Each run consists of the repeated measurement cycle in Fig.~\ref{fig:MeasurementCycle}.}
\label{tab:datasets}
\end{table}
The lowest cyclotron excited state decays to the ground state by the spontaneous emission of synchrotron radiation photons. The decay time for each excitation is a random selection from an exponential distribution with an average lifetime of $\tau_c =7.2$~s. The choice of a detection threshold at $5\sigma=0.65\delta$ means that an excitation that decays in less than $0.65\times t_\mathrm{ave}=1.3$~s will be missed, giving a detection efficiency
\begin{equation}
\zeta = \int_{1.3\,\mathrm{s}}^\infty\frac{1}{\tau_c}\exp\left(-\frac{t}{\tau_c}\right)dt=83\,\%.
\end{equation}
Correcting for the observed fluctuation spectrum in Fig.~\ref{fig:DistributionOfMeasuredNc} affects the result by only 1\%.
The conversion to $\Gamma$ is now straightforward. Using the standard upper-limit estimate for a null measurement \cite{LeoParticlePhysicsTextBook}, the upper limit on the DPDM excitation rate at confidence level $CL=90\%$ is
\begin{equation}
\Gamma < -\frac{1}{\zeta T_\mathrm{obs}}\log\left(1-CL\right)=4.33\times10^{-6}~\mathrm{s}^{-1}.
\end{equation}
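Both the efficiency integral and the rate limit follow from a one-line calculation each, using $\tau_c$, the $5\sigma$ threshold, and $T_\mathrm{obs}$ from Table~\ref{tab:datasets}:

```python
import math

# Detection efficiency and 90% C.L. rate limit from the null search.
tau_c = 7.2                      # s, excited-state lifetime
t_min = 0.65 * 2.0               # s; faster-decaying excitations are missed
zeta = math.exp(-t_min / tau_c)  # survival probability past the threshold, ~0.83

T_obs = 638870.0                 # s, total observation time (Table I)
CL = 0.90
Gamma_max = -math.log(1 - CL) / (zeta * T_obs)
print(zeta, Gamma_max)           # ~0.83 and ~4.3e-6 /s
```

Small rounding of $\zeta$ accounts for the last digit of the quoted $4.33\times10^{-6}\,\mathrm{s}^{-1}$.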
Using measured values and Eqs.~\eqref{eq:GammaDPDM} and \eqref{eq:SE_DPDM}, our limit on the kinetic mixing parameter is
\begin{equation}
\epsilon < 3.2\times10^{-11},
\end{equation} which improves on previous limits by a factor of 75.
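The chain from the rate limit to $\epsilon$ can be sketched numerically. This assumes the boxcar $S_E$ is broader than the cyclotron line, so the line-shape integral in Eq.~\eqref{eq:GammaDPDM} is unity and $\Gamma = [\pi e^2/(2 m_e \hbar \omega_c)]\,S_E$; all inputs are the values quoted in the text:

```python
import math

# Convert the rate limit into a kinetic-mixing limit (SI units throughout).
e, m_e, hbar = 1.602176634e-19, 9.1093837015e-31, 1.054571817e-34
eps0 = 8.8541878128e-12

Gamma_max = 4.33e-6                 # /s, 90% C.L. rate limit
omega_c = 2 * math.pi * 148e9       # rad/s
kappa2 = 2.37                       # cavity enhancement at 148 GHz
rho_c2 = 0.3e9 * e * 1e6            # 0.3 GeV/cm^3 -> J/m^3
dOmega = 1e-6 * omega_c             # DPDM bandwidth, rad/s

# Invert Gamma = (pi e^2 / 2 m_e hbar omega_c) S_E, then Eq. (4) for epsilon
S_E_max = Gamma_max * 2 * m_e * hbar * omega_c / (math.pi * e**2)
eps = math.sqrt(S_E_max * eps0 * dOmega / (kappa2 * rho_c2 * (2.0 / 3.0)))
print(eps)   # ~3.2e-11
```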
The corresponding limit on the detected microwave electric field, specified by $\sqrt{2\pi S_E(\omega)}$, is $2.5~\mathrm{pV/(cm}\sqrt{\text{Hz}})$, or 0.45~nV/cm when integrated over the measurement bandwidth.
The new $\epsilon$ limit is shown in Fig.~\ref{fig:LimitOnDPDM148GHz}(a), with the limit from the XENON1T (black hatched) \cite{DMDetectorAsDPHelioscope,Xenon1TResultDarkPhoton,Xenon1TPRL2019} and the limit from DM cosmology (dashed) \cite{WISPyDarkMatter2012}.
Only a narrow DPDM mass range is accessed in this initial demonstration due to limitations of the apparatus, which was designed to make the magnetic field exceptionally stable rather than readily swept.
Searching 20--200~GHz ($\sim$0.1~meV to 1~meV) seems feasible in an apparatus that is designed for dark matter searches. Affordably sweeping the magnetic field over such a broad range requires cooling with a refrigerator rather than cryogenic liquids.
For DM with $Q=10^6$, making $t_m=15$~s measurements spaced by $10^{-6}$ relative frequency steps would cover the mentioned range in about a year.
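The scan-time estimate follows from counting geometric frequency steps across the band (values from the text):

```python
import math

# Rough scan time for 20--200 GHz in relative steps of 1e-6, 15 s each.
f_lo, f_hi = 20e9, 200e9
rel_step = 1e-6
t_m = 15.0                                   # s per measurement

n_steps = math.log(f_hi / f_lo) / rel_step   # number of geometric steps
total_days = n_steps * t_m / 86400.0
print(total_days)                            # ~400 days, i.e. about a year
```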
The DPDM sensitivity established above is approximately
\begin{align} \label{eq:futurelim}
\epsilon\approx8\times10^{-11}\frac{\omega_{A'}}{2\pi\times150~\textrm{GHz}}\frac{40}{\kappa}\left(\frac{10}{n_{\rm e}}\right)^\frac{1}{2} \left(\frac{15~\textrm{s}}{t_{\rm m}}\right)^\frac{1}{2},
\end{align}
where $n_e$ is the number of electrons used to sense DM and $t_m$ is the measurement time. A shorter measurement time (15~s rather than 7.4~days) would weaken the sensitivity, increasing the smallest detectable $\epsilon$ by a factor of ${\sim}200$. It seems feasible to largely recapture this factor by using $n_e=10$ electrons and increasing $\kappa$ to ${\sim} 40$ by using a spherical geometry and increasing the radius of the sphere to $r=25~\textrm{mm}$~\cite{forthcoming}.
Resulting reductions in the induced axial oscillation signal needed to observe one-quantum cyclotron jumps would be compensated by greatly increasing the size of magnetic bottle gradient that couples the cyclotron and axial motion. The blue dashed line in Fig.~\ref{fig:LimitOnDPDM148GHz}(b) is an estimate of what may be possible, assuming that the trap cavity can be tuned during the sweep to avoid cavity mode resonances. For a spherical cavity, $\kappa \propto \omega_{A'} $ which cancels the $\omega_{A'}$ in Eq.~\eqref{eq:futurelim}, leading to a flat sensitivity curve. A more detailed optimization is clearly warranted \cite{forthcoming}.
In conclusion, we have proposed and demonstrated the possibility of using one-electron quantum cyclotrons within a microwave trap cavity to search for DPDM. A key advantage is that detection is essentially free of background excitations, so the detection sensitivity to the transition rate scales with observation time as $T_\mathrm{obs}^{-1}$ rather than $T_\mathrm{obs}^{-\frac{1}{2}}$. The resulting limit is the most sensitive ever obtained in the challenging meV range, and all required parameters for the DPDM search are measured \textit{in situ}. The narrow frequency range realized in this first demonstration could be greatly extended in an apparatus designed and optimized for dark matter detection. This proposal and demonstration thus opens a new direction for DPDM searches.
\begin{acknowledgments}
This work was supported by the U.S. DOE, Office of Science, National QIS Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under the contract No.\ DE-AC02-07CH11359. Additional support was provided by NSF Grant No.~PHY-1903756, No.~PHY-2110565, and No.~PHY-2014215, by the John Templeton Foundation Grant No.~61906 and No.~61039, by the Simons Investigator Award No.~824870, by the DOE HEP QuantISED Award No.~100495, by the Gordon and Betty Moore Foundation Grant No.~GBMF7946, and by the Masason Foundation. S.W.~was supported in part by the Clark Fellowship. Y.X.~was supported in part by the Vincent and Lily Woo Fellowship.
\end{acknowledgments}
\bibliographystyle{prsty_gg}
\bibliography{PenningTrapExperimentRefs}
|
Title:
Spectroscopic analysis of BPS CS 22940-0009: connecting evolved helium stars |
Abstract: BPS CS 22940-0009 is a helium-rich B-star that shares characteristics with
both helium-rich B subdwarfs and extreme helium stars. The optical spectrum of
BPS CS 22940-0009 has been analysed from SALT observations. The atmospheric
parameters were found to be $T_{\rm eff} = 34970 \pm 370$ K, $\log g/{\rm cm \,
s^{-2}} = 4.79 \pm 0.17$, $n_{\rm H}/n_{\rm He} \simeq 0.007$, $n_{\rm
C}/n_{\rm He} \simeq 0.007$, $n_{\rm N}/n_{\rm He} \simeq 0.002$, although
further improvement to the helium line fits would be desirable. This places the
star as a link between the He-sdB and EHe populations in $g$-$T$ space. The
abundance profile shows enrichment of N from CNO-processing, and C from
$3\alpha$ burning. Depletion of Al, Si, S and a low upper limit for Fe show the
star to be intrinsically metal-poor. The results are consistent with BPS CS
22940-0009 having formed from the merger of two helium white dwarfs and
currently evolving toward the helium main sequence.
| https://export.arxiv.org/pdf/2208.07720 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
stars: abundances -- stars: chemically peculiar -- stars: early-type -- stars: individual: BPS\,CS\,22940$-$0009 -- subdwarfs
\end{keywords}
\section{Introduction}
Hot subdwarf B (sdB) stars occupy the blue end of the horizontal branch and can be found in both the field and in globular clusters. They have colours typical of B stars and effective temperatures and surface gravities similar to helium (He) main sequence stars of about 0.5 solar masses. Like other horizontal branch stars, they have helium-burning cores, but their hydrogen (H) envelopes are much thinner and cannot sustain shell burning. The spectra of sdBs are typically helium-deficient and show strong Balmer lines. It is thought that gravitational settling and radiative levitation cause the atmospheric helium to sink below the hydrogen-rich surface \citep{heber86}. However, about 4\% of sdBs display very helium-rich atmospheres characterised by strong neutral He lines \citep{green86}, with similar temperatures but lower surface gravities than their H-rich counterparts. Many of these helium-rich sdB stars (He-sdBs) also have strong lines of nitrogen (N {\sc ii} and N {\sc iii}) and in some cases, carbon (C {\sc ii} and C {\sc iii}) \citep{naslim10}.
Extreme helium (EHe) stars are a rare class of peculiar supergiants with effective temperatures of 8\,000-32\,000\,K \citep{drilling84}. EHe spectra are characterised by strong lines of neutral He and singly-ionised C, with the Balmer lines being very weak or absent \citep{hunger75}. An overabundance of nitrogen in most EHe stars and a very high carbon abundance in all implies the presence of both H-processed and He-processed material at the surface \citep{heber83, jeffery96}. Two principal evolutionary models emerged. In the \textquoteleft double degenerate\textquoteright\,model \citep{webbink84,iben84,saio02}, a He white dwarf merges with a more massive C-O white dwarf companion. If the mass of the C-O component remains below the Chandrasekhar limit, the base of the accreted envelope ignites causing the envelope to expand. As the resulting H-deficient supergiant contracts, it will cool to become an EHe star and eventually a white dwarf. In the \textquoteleft final flash\textquoteright\,model \citep{iben83} a late ignition in the helium shell of a post-AGB star causes the outer layers to rapidly expand. Hydrogen in the envelope is consumed by burning and the resulting H-deficient supergiant contracts to become an EHe star.
He-sdBs pose several questions in regards to their origins, evolution, and connection to similar populations. For example, how are they formed and what causes them to be so He-rich? Why are some stars abundant in C and/or N but not others? Do He-sdBs form a sequence with normal sdBs or with the He-rich O subdwarfs? Are there any links with other types of helium-rich stars such as R Coronae Borealis variables or helium white dwarfs \citep{jeffery08b}? To answer these questions, observing campaigns by (for example) \citet{naslim10, jeffery20} have focused on identifying helium-rich objects from surveys of faint blue stars, and systematically obtaining atmospheric parameters and chemical abundances from their spectra. This builds on earlier efforts to determine the physical parameters of He-sdBs using optical spectroscopy \citep[e.g.][]{ahmad03, ahmad04}. By collecting this information, it becomes possible to search for patterns within the He-sdB class and to identify possible connections with other classes of evolved stars.
Several scenarios have been postulated to explain the evolution of helium subdwarfs. \citet{brown01} proposed that a star undergoing a late helium-core flash while descending the white dwarf cooling curve could produce enhanced He and C abundances via flash mixing. \citet{lanz04} showed how flash mixing can be \textquoteleft deep\textquoteright \,or \textquoteleft shallow\textquoteright, depending on how deeply the hydrogen envelope is mixed into the site of the flash. Both kinds produce He-rich surfaces with enhanced C and N, but shallow mixing also leaves a significant remaining hydrogen fraction. These scenarios were expanded upon by \citet{millerbertolami08} who used 1D numerical simulations to study the effects of non-standard assumptions such as chemical gradients on hot-flasher mixing. Other authors have suggested that the coalescence of two helium white dwarfs (mergers) can account for both conventional and helium-rich subdwarfs. \cite{saio00} attempted to model such a merger by computing the consequences of rapid accretion of He-rich matter onto a helium white dwarf; this accretion results in off-centre helium ignition and causes the star to first expand, then contract through repeating helium-shell flashes. The resulting stellar surface is abundant in He and N, but shows no C enrichment. \citet{zhang12} simulated similar double helium white dwarf merger models assuming three combinations of rapid and slow accretion. They found that slow accretion allows the N-rich material of the accreted white dwarf to form the surface of the product without mixing. Fast accretion leads to convection zones that dredge up C-rich material to the surface. A combination of rapid (dynamical) followed by slow (thermal) accretion yields final abundances that depend on the mass of the merger, with high-mass systems producing deeper convective dredge-up after the first helium shell flash. 
Thus low-mass systems are expected to be N-rich, whilst high-mass systems are both N-rich and C-rich.
BPS\,CS\,22940$-$0009 (EC 20262$-$6000) is a hot, He-rich star \citep{beers92} that is part of a distinct sequence of rare, H-poor stars with strong N lines. Similar objects include LS\,IV+6\textdegree2 \citep{jeffery98} and PG\,1415+492 \citep{drilling13}. These stars are typified by luminosity classes of V or less, and surface gravities similar to those of main sequence B stars. BPS\,CS\,22940$-$0009 was studied together with other N-rich He-sdBs by \cite{naslim10} and found to be the most C-rich member of the sample. Our goal for this work is to re-examine BPS\,CS\,22940$-$0009 using more recent spectra with improved signal-to-noise ratios in order to assess its relation to other He-sdB and EHe stars, and whether it represents a connection between these classes. Additionally, we aim to investigate the chemical profile of this star to search for any peculiarities. These could provide further clues to the evolution of this star and others like it.
In this paper, we present a spectroscopic analysis of BPS\,CS\,22940$-$0009. Section 2 describes the observations used for this paper. In Section 3 we obtain the atmospheric parameters by fitting optical spectra to a grid of model atmospheres in local thermodynamic equilibrium (LTE), and calculate chemical abundances by measuring the equivalent widths of key spectral lines. We also use {\it TESS} photometry and cross-correlation of the HRS spectra to search for variability in flux and radial velocity. Section 4 discusses our abundance and parameter results and compares them to similar He-sdB and EHe stars. The implications of our results on the evolutionary status of BPS\,CS\,22940$-$0009 are given in Section 5. The potential impact of the LTE approximation is considered in Section \ref{sec:nlte}.
\section{Observations}
\begin{table}
\centering
\begin{tabular}{l|c|c|c|c}
\hline
Instrument/ & $R$/ & $t_{\rm exp}$ & S/N & Sampling\\
Date & UT start & (s) & & (\AA/pix.)\\
\hline
AAT/UCLES & 32\,000 & & \\
2005 08 26 & 09:25:40 & 1800 & 10 & 0.06\\
& 09:56:33 & 1800 & 10 & 0.06\\
& 10:27:29 & 1800 & 12 & 0.06\\
& 10:58:22 & 1800 & 10 & 0.06\\
ESO/FEROS & 48\,000 & & & \\
2011 05 08 & 06:37:42 & 1800 & 20 & 0.6 \\
& 07:08:35 & 1800 & 20 & 0.6 \\
SALT/RSS & 2250 & & & \\
2018 05 05 & 01:59:14 & 100 & 73 & 1.1 \\
& 02:01:15 & 100 & 73 & 1.1 \\
& 02:08:25 & 150 & 73 & 1.1 \\
& 02:11:16 & 150 & 73 & 1.1 \\
SALT/HRS & 43\,000 & & &\\
2016 06 23 & 02:49:43 & 1050 & 19 & 0.03\\
& 03:08:24 & 1050 & 19 & 0.03\\
2016 06 30 & 02:35:32 & 1250 & 20 & 0.03\\
& 02:57:33 & 1250 & 20 & 0.03\\
2018 05 07 & 01:00:12 & 900 & 16 & 0.03\\
& 01:16:26 & 900 & 16 & 0.03\\
2019 04 25 & 01:42:05 & 1450 & 38 & 0.03\\
& 02:07:25 & 1450 & 38 & 0.03\\
\hline
\end{tabular}
\caption{Record of observations of BPS\,CS\,22940$-$0009. The sampling of the reduced spectrum is given as an average in \AA/pixel over the full spectrum.}
\label{tab:obslog}
\end{table}
BPS\,CS\,22940$-$0009 was selected for analysis due to its unique position as a bridge between stellar classes. This star was previously studied by \cite{naslim10}, who found it to be the coolest and lowest-gravity member of their sample. Additionally, a coarse analysis by \cite{jeffery20} placed the star as the hottest and highest-gravity member of a group of helium-rich stars that are not subdwarfs.
With $\log g/{\rm cm\,s^{-2}} \approx 5$, it lies below the hydrogen main sequence ($\log g/{\rm cm\,s^{-2}} \approx 4$ for B stars).
This makes BPS\,CS\,22940$-$0009 a potentially interesting edge case among the He-sdB population, bridging the gap between more conventional hot subdwarfs and hot extreme helium stars. In recent years, further observations of BPS\,CS\,22940$-$0009 have made it possible to improve the accuracy of parameter and abundance measurements by increasing both resolution and signal-to-noise ratio.
Observations of BPS\,CS\,22940$-$0009 were made at the Southern African Large Telescope ({\it SALT}). Detailed optical spectra were obtained using the High Resolution Spectrograph ({\it HRS}: $R\simeq43\,000,\,\lambda\lambda=4100-5200$\,\AA) which is an \'echelle format spectrograph. Since broad-lines frequently span more than one \'echelle order, targets are also observed with the lower-resolution Robert Stobie Spectrograph ({\it RSS}: $R\simeq 2000,\,\lambda\lambda=3850-5150$\,\AA) in order to guide the correction of the \'echelle blaze function.
The HRS spectra for this analysis were obtained on the night of 24-25 April 2019 and consisted of $4\times1450$\,s exposures: 2 red and 2 blue. The RSS observations were performed on the night of 4-5 May 2018 and consisted of $2\times100$\,s and $2\times150$\,s exposures. We used the PG2300 grating and a 1.5\,arcsec slit width to obtain a full-width half-maximum resolution $R$ between 1800 at 4000\AA\, and 2300 at 5000\AA\ (as measured from the CuAr calibration lamp), corresponding to $\Delta \lambda \approx 2.2$\AA\, throughout. The RSS detector subsystem comprises 3 CCD chips separated by 2 gaps, so double exposures are taken at 2 different grating angles and merged to produce a continuous spectrum. The data reduction process for the RSS spectra is described by \citet{jeffery20}. The HRS data were reduced using standard {\sc iraf} routines to subtract the bias signal, divide stellar spectra by combined flat-field spectra, correct pixels affected by cosmic rays, extract one-dimensional spectra, and apply the wavelength scale using thorium-argon lamp spectra. Spectra from previous observations were also used to search for variability in radial velocity over time. The full list of available observations is given in Table \ref{tab:obslog}. A combined spectrum was produced from a S/N-weighted average of all available HRS observations. This offered the highest available signal-to-noise ratio (S/N = 52), but did not produce a significantly better fit to interpolated grid models than the 2019 spectrum alone ($\chi^2_{\rm all}/\chi^2_{2019} = 0.89$).
Furthermore, including the 2016 and 2018 observations introduced noise in the \'echelle overlap regions of the combined spectrum. As a result, only the 2019 spectrum was used for the majority of the analysis.
\section{Atmospheric Analysis}
The atmospheric parameters effective temperature ($T_{\rm{eff}}$), surface gravity ($g$), and fractional helium abundance by number ($n_{\rm{He}}$) were determined simultaneously using the software package {\sc sfit} \citep{jeffery01}, which finds the best match to the observed spectrum in a grid of theoretical spectra. Chemical abundances were calculated from the equivalent widths of absorption lines in the HRS spectrum.
\subsection{Model atmospheres}
Grids of model atmosphere structures were constructed using the model atmosphere code {\sc sterne} \citep{behara06}. The models assume local thermodynamic equilibrium and hydrostatic equilibrium, and use plane-parallel geometry. Grids of high-resolution synthetic spectra were computed therefrom using the LTE formal solution code {\sc spectrum} \citep{jeffery01}. A subgrid was created by resampling the master grid onto a wavelength interval of 1.1\,\AA; this subgrid was used for fitting the RSS spectrum. Both grids covered a parameter space of $T_{\rm eff}$/K = 28\,000 (2\,000) 40\,000, $\log g$/cm\,s$^{-2}$ = 4.0 (0.25) 5.5 and $n_{\rm He} = 0.95, 0.99, 1.0$.
\subsection{Atmospheric parameters}
{\sc sfit} interpolates in a grid of model spectra using a set of initial guesses for the atmospheric parameters $T_{\rm{eff}}$, $g$ and $n_{\rm{He}}$. The interpolated spectrum is compared with the observed spectrum by evaluating the $\chi^2$ statistic. A best-fit solution is found using a Levenberg--Marquardt algorithm to iterate the guess parameters until $\chi^2$ is minimised \citep{press89}.
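The iteration can be illustrated with a minimal one-parameter stand-in: a synthetic spectrum whose line depth scales with temperature replaces the interpolated model grid, and a damped Gauss--Newton (Levenberg--Marquardt-style) step updates the guess until $\chi^2$ is minimised. This is a sketch of the method, not the {\sc sfit} implementation.

```python
import numpy as np

# Toy stand-in for the SFIT procedure: a one-parameter "model grid"
# (line depth scales with Teff) fitted to an "observed" spectrum by
# damped Gauss-Newton (Levenberg-Marquardt) steps on chi^2.
# All spectra here are synthetic illustrations.

wave = np.linspace(4000.0, 5000.0, 500)

def model_spectrum(teff):
    """Fake normalised spectrum: line depth scales with temperature."""
    depth = 0.2 + 0.5 * (teff - 28000.0) / 12000.0
    return 1.0 - depth * np.exp(-0.5 * ((wave - 4471.0) / 2.0) ** 2)

rng = np.random.default_rng(0)
obs = model_spectrum(34500.0) + rng.normal(0.0, 0.005, wave.size)
sigma = 0.005

def chi2(teff):
    return np.sum(((obs - model_spectrum(teff)) / sigma) ** 2)

teff, lam = 30000.0, 1e-3          # initial guess and damping factor
for _ in range(50):
    h = 10.0                        # finite-difference step in K
    r = (obs - model_spectrum(teff)) / sigma
    j = (model_spectrum(teff + h) - model_spectrum(teff - h)) / (2 * h) / sigma
    step = (j @ r) / ((j @ j) * (1.0 + lam))   # damped Gauss-Newton step
    if chi2(teff + step) < chi2(teff):
        teff += step                # accept step, relax damping
        lam *= 0.5
    else:
        lam *= 2.0                  # reject step, increase damping

print(round(teff))                  # recovers ~34500 K
```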
The atmospheric parameters were first measured for the RSS spectrum. Reference regions relatively free of absorption lines and distributed across the full wavelength range (4100-5100\,\AA) were defined manually.
A reference spectrum was defined using initial parameter estimates $T_{\rm eff}=34\,000$\,K, $\log g/{\rm cm\,s^{-2}} = 4.5$, $n_{\rm He}=0.99$. The reference regions were used with a high-pass filter having FWHM 200\,\AA\ to renormalise the observed spectrum to the reference spectrum. {\sc sfit} then solved for $T_{\rm eff}$, $\log g$, and $n_{\rm He}$ individually, assuming an instrumental broadening of 1\,\AA. $T_{\rm eff}$ was measured by fitting the temperature-sensitive He\,{\sc ii} 4686\,\AA\,line. $\log g$ was measured from the gravity-sensitive He\,{\sc i} 4388, 4471, and 4921\,\AA\,lines. The cores of these lines were excluded from the fit as they are not well-reproduced by the LTE model. $n_{\rm He}$ was obtained by fitting the whole spectrum; {\sc sfit} calculates the hydrogen abundance of the fit, and then reports $n_{\rm He} = 1 - n_{\rm H}$.
The parameter results were found to be $T_{\rm eff}=34\,970 \pm 370$\,K, $\log g/{\rm cm\,s^{-2}}=4.79 \pm 0.17$, $n_{\rm He}=0.978\pm0.006$ (by number) from the RSS spectrum and $T_{\rm eff}=34\,420 \pm {240}$\,K, $\log g/{\rm cm\,s^{-2}}=4.66 \pm 0.10$, $n_{\rm He}=0.974\pm0.006$ from HRS (Table\,\ref{tab:params}). Errors are the formal {\sc sfit} errors obtained by solving for each parameter individually, with the others held constant. The small error on $n_{\rm He}$ arises because this is essentially a measurement of $1-n_{\rm H}$, which is tightly constrained by the Balmer lines. A comparison of the normalised RSS spectrum and a model spectrum created with these parameters is shown in Fig.\,\ref{fig:fit}. The model atmosphere having parameters closest to those of the best-fitting RSS spectrum was then used to measure $v_{\rm t}$ and $v \sin i$ from the HRS spectrum (\S\,3.3), and the stellar radius, mass, and luminosity from the {\it TESS} spectral energy distribution (\S\,3.7).
\begin{table*}
\centering
\begin{tabular}{c|c|c|c|c}
\hline
& SALT/RSS & SALT/HRS & Naslim 2010 & Jeffery 2020\\
\hline
$T_{\rm eff}$ (K) & $34\,970 \pm 370$ & $34\,420 \pm 240$ & 33\,700 $\pm$ 800 & 34\,650 $\pm$ 110\\
log $g$ (cm\,s$^{-2}$) & $4.79 \pm 0.17$ & $4.66 \pm 0.10$ & 4.7 $\pm$ 0.2 & 4.89 $\pm$ 0.04\\
$n_{\rm He}$ & 0.978 $\pm$ 0.006 & 0.974 $\pm$ 0.006 & 0.993 & 0.98 $\pm$ 0.01\\
$v_{\rm rad}$ (km\,s$^{-1}$) & -- & 32.7 $\pm$ 2.0 & -- & 28 $\pm$ 3\\
$v_{\rm t}$ (km\,s$^{-1}$) & -- & 7.8 $\pm$ 0.2 & 10 & --\\
$v \sin i$ (km\,s$^{-1}$) & -- & 15.0 $\pm$ 3.3 & 4 $\pm$ 3 & --\\
\hline
\end{tabular}
\caption{Atmospheric parameters for the spectrum of BPS\,CS\,22940$-$0009 obtained using {\sc sfit}. Results from \citet{naslim10} and \citet{jeffery20} are shown for comparison.}
\label{tab:params}
\end{table*}
\subsection{Microturbulent velocity and chemical abundances}
Using the above parameters, the model atmosphere corresponding to the best-fit spectrum was selected from the grid. Using {\sc spectrum}, predicted equivalent widths ($W_{\lambda}$) were computed for all lines in the observed spectral window. Lines predicted to have $W_{\lambda}\geq5$\,m\AA\ were identified from this list; equivalent widths were then measured for all qualifying C, N, O, Ne, Mg, Al, Si, and S lines observable in the HRS spectrum. A complete list of all line measurements made for the abundance calculation is provided in Appendix \ref{app:linelist}.
For each line the part of the profile to be measured was defined manually (the segment), along with a region of continuum on either side of the line. A linear fit was computed from the continuum regions, and the equivalent width was obtained by integrating the line segment under the continuum fit. A parabola was fit to the line segment to provide the line wavelength. Errors were derived from an estimate of the flux error in the continuum regions and from the formal parabola fit.
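The measurement just described can be sketched as follows; the spectrum, continuum windows, and line segment are synthetic stand-ins (a Gaussian line of known equivalent width on a sloped continuum), not real HRS data.

```python
import numpy as np

# Sketch of the equivalent-width measurement described above: fit a
# linear continuum through windows either side of the line, then
# integrate (1 - flux/continuum) over the line segment. The synthetic
# Gaussian line has true W = depth * sigma * sqrt(2*pi) = 105.3 mA.

wave = np.linspace(4415.0, 4419.0, 400)
cont_true = 1.0 + 0.01 * (wave - 4417.0)             # sloped continuum
flux = cont_true * (1.0 - 0.35 * np.exp(-0.5 * ((wave - 4417.1) / 0.12) ** 2))

# Manually defined continuum windows and line segment, as in the text
left = (wave > 4415.2) & (wave < 4416.2)
right = (wave > 4418.0) & (wave < 4418.8)
seg = (wave > 4416.4) & (wave < 4417.8)

cont_fit = np.polyval(np.polyfit(wave[left | right], flux[left | right], 1), wave)
depth = 1.0 - flux[seg] / cont_fit[seg]
x = wave[seg]
W = np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(x))   # trapezoidal integral
print(round(W * 1e3, 1))   # equivalent width in mA; recovers ~105.3
```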
For a given (assumed) microturbulent velocity ($v_{\rm t}$), {\sc spectrum} may be used to compute a predicted line equivalent width for a given chemical abundance or, by Newton-Raphson iteration, the chemical abundance corresponding to a measured equivalent width. Nitrogen lines were used to determine $v_{\rm t}$ using the classical method of minimising the gradient of abundances with respect to equivalent width \citep{gray75}. Abundances were calculated for each of 36 N {\sc ii} lines with assumed $v_{\rm t}$ between 0 and 10 km\,s$^{-1}$. The abundance-equivalent width gradients ($\nabla\log\epsilon_{\rm N} = d\log\epsilon_{\rm N}/dW_{\lambda}$) were determined by $\chi^2$ minimisation. Only lines giving abundances of $8.0 \le \log\epsilon_{\rm N} \le 10.0$ were considered to avoid errors due to saturated, weak ($W_{\lambda}<5$\,m\AA) or blended lines. A linear fit of the gradients gives $\nabla\log\epsilon_{\rm N}=0$ for $v_{\rm t}=7.8 \pm 0.2\,{\rm km\,s^{-1}}$ (Fig.\, \ref{fig:vturb_regression}, Table\,\ref{tab:params}).
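The classical method can be sketched as follows. The per-line ``abundances'' here are a synthetic stand-in tuned so that the abundance--equivalent-width slope vanishes at $v_{\rm t}=7.8\,{\rm km\,s^{-1}}$; they are not output of {\sc spectrum}, and the abundance-range cut used in the text is omitted for brevity.

```python
import numpy as np

# Sketch of the microturbulence determination (Gray 1975): for each
# trial v_t, fit the slope of abundance against equivalent width, then
# find the v_t at which that slope vanishes. Synthetic toy data.

rng = np.random.default_rng(1)
W = rng.uniform(5.0, 120.0, 36)          # equivalent widths (mA), 36 N II lines
VT_TRUE = 7.8                            # km/s, value encoded in the toy model

def line_abundances(vt):
    # A wrong assumed v_t tilts derived abundance against line strength
    slope = 0.002 * (VT_TRUE - vt)       # dex per mA
    return 8.85 + slope * (W - W.mean()) + rng.normal(0.0, 0.01, W.size)

trial_vt = np.arange(0.0, 10.5, 0.5)
grads = [np.polyfit(W, line_abundances(vt), 1)[0] for vt in trial_vt]

# Linear fit of gradient vs v_t; the zero crossing is the adopted v_t
a, b = np.polyfit(trial_vt, grads, 1)
vt_best = -b / a
print(round(vt_best, 1))                 # recovers ~7.8
```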
Adopting $v_{\rm t}=7.8\,{\rm km\,s^{-1}}$, chemical abundances were derived for all lines with measured $W_{\lambda}\geq5$\,m\AA. Rejecting outliers, mean abundances were derived for 8 species (C, N, O, Ne, Mg, Al, Si, and S: Table\,\ref{tab:ew_abunds}). Abundances are given in the form $\log \epsilon_{i} = \log n_{i} +c$, where $c=11.5725$ and $n_{i}$ are number fractions\footnote{Stellar abundances are conventionally given such that for hydrogen $\log\epsilon_{\rm H}=12$, and for other species $i$ by $\log\epsilon_{i} = 12 + \log n_i/n_{\rm H}$.}. The conventional normalisation assumes a hydrogen-dominated composition, which is not appropriate for helium-rich stars like BPS\,CS\,22940$-$0009, where a) the denominator may approach zero and b) number fractions of conserved species change simply because 4 protons combine to form a single helium nucleus. With atomic weights $a_{i}$, $c$ is instead computed such that $\log \Sigma_{i} a_{i} n_{i} + c = \log \Sigma_{i} a_{i} n_{i,\odot} + c_{\odot} = 12.15$. This conserves both $\log \epsilon_{i}$ and the mass fraction of each species whose abundance is otherwise unchanged \citep{jeffery11}. The error in the mean abundances was propagated quadratically from the errors in the individual abundances of each line, and hence from the errors in the equivalent width measurements. Upper limits were obtained for P and Fe. Individual species are discussed below.
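As a worked example of this normalisation, the sketch below recomputes $c$ from the fitted H and He number fractions alone; the full composition used in the text (quoted $c=11.5725$) differs slightly from this two-species estimate.

```python
import math

# Worked example of the abundance normalisation: choose c so that
# log10(sum_i a_i n_i) + c = 12.15, preserving log eps_i and mass
# fractions for unchanged species (Jeffery et al. 2011). Only H and He
# are included here; trace metals shift c slightly.

a = {"H": 1.008, "He": 4.003}               # atomic weights
n = {"H": 0.022, "He": 0.978}               # number fractions from the fit

mean_weight = sum(a[s] * n[s] for s in a)   # ~3.94 for this He-rich mix
c = 12.15 - math.log10(mean_weight)

log_eps_He = math.log10(n["He"]) + c        # close to the quoted 11.56
print(round(c, 3), round(log_eps_He, 2))
```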
\begin{table}
\centering
\caption{Chemical abundances for BPS\,CS\,22940$-$0009. Abundances are given in the form $\log \epsilon_{i} = \log n_{i} +c$ as defined in the text. $N$ is the number of lines used in the abundance calculation. Abundance errors for metals are propagated from the errors in the individual line measurements reported by {\sc spectrum}. Errors for H and He are taken from the fractional error in $n_{\rm He}$ and $n_{\rm H}$. Results from \citet{naslim10} and photospheric solar values from \citet{asplund09} are included for comparison. These are normalised such that $\log \Sigma_{i} a_{i} n_{i}=12.15$.}
\label{tab:ew_abunds}
\begin{tabular}{lcccc} %
\hline
Species & $N$ & $\log\epsilon$ & $\log\epsilon$ (Naslim) & $\log\epsilon$ (Solar) \\
\hline
H & - & 9.91 $\pm$ 0.06 & 9.10 $\pm$ 0.20 & 12.00\\
He & - & 11.56 $\pm$ 0.01 & 11.54 & 10.93 $\pm$ 0.01\\
C & 45 & 9.43 $\pm$ 0.68 & 8.94 $\pm$ 0.35 & 8.43 $\pm$ 0.05\\
N & 69 & 8.85 $\pm$ 0.66 & 8.46 $\pm$ 0.22 & 7.83 $\pm$ 0.05\\
O & 9 & 7.31 $\pm$ 0.45 & 7.11 $\pm$ 0.34 & 8.69 $\pm$ 0.05\\
Ne & 15 & 7.93 $\pm$ 0.47 & 8.27 $\pm$ 0.45 & 7.93 $\pm$ 0.10\\
Mg & 3 & 7.81 $\pm$ 0.41 & 7.27 $\pm$ 0.18 & 7.60 $\pm$ 0.04\\
Al & 6 & 6.01 $\pm$ 0.67 & 6.12 $\pm$ 0.15 & 6.45 $\pm$ 0.03\\
Si & 5 & 6.98 $\pm$ 0.28 & 7.23 $\pm$ 0.24 & 7.51 $\pm$ 0.03\\
P & - & $\leq$ 5.25 & - & 5.41 $\pm$ 0.03\\
S & 4 & 6.25 $\pm$ 0.43 & 6.45 $\pm$ 0.15 & 7.12 $\pm$ 0.03\\
Fe & - & $\leq$ 6.30 & - & 7.50 $\pm$ 0.04\\
\hline
\end{tabular}
\end{table}
The abundances obtained by the equivalent width method were used in {\sc spectrum} to construct a formal solution for the whole spectrum. A comparison of the synthetic spectrum and the re-normalised HRS spectrum is shown in Appendix \ref{app:hrs_spec}.
\subsubsection{H and He}
The spectrum of BPS\,CS\,22940$-$0009 is dominated by very strong helium lines, while the Balmer lines are comparatively weak and blended with weak He\,{\sc ii}. The LTE models do not fit the cores of these lines well; the LTE assumption is discussed in Section \ref{sec:nlte}, in which a comparable NLTE model shows overpopulation of the lower levels of the He\,{\sc i} line transitions, leading to deeper line cores than the LTE case.
The gravity-sensitive He\,{\sc i} (and He\,{\sc ii}) lines have widths comparable with those of individual orders in the HRS \'echelle. Moreover, blaze removal and order merging in the HRS data reduction process remain imperfect. Consequently, the wings of broad lines in the HRS spectrum may be unsuitable for measuring $T_{\rm eff}$, $\log g$ and $n_{\rm He}$. The final He and H abundances were therefore obtained via $\chi^2$ minimisation using the RSS spectrum. Fractional abundances $n_{\rm H} = 0.022 \pm 0.003$ and $n_{\rm He} = 0.978 \pm 0.006$ yield $\log\epsilon_{\rm H} = 9.40 \pm 0.06$ and $\log\epsilon_{\rm He} = 11.56 \pm 0.01$ as defined above. The H abundance was also measured from the H$\beta$ and H$\gamma$ lines as a check on the results from {\sc sfit}, giving $\log\epsilon_{\rm H} = 9.44 \pm 0.07$. The atmosphere is strongly He-rich and H-poor, with $n_{\rm H}/n_{\rm He}\simeq0.007$. This is typical for He-sdB and EHe stars, though hydrogen is less depleted than in the similar stars LS\,IV$+6^{\circ}2$ \citep{jeffery98} and BX\,Cir \citep{drilling98}.
\subsubsection{C, N, and O}
The spectrum contains a large number of carbon and nitrogen lines with a wide range of strengths, including several that are blended with other lines and many that are saturated. Taking the abundance as a simple mean of all line measurements is not viable as blended lines skew the average abundance to a higher apparent value. Instead, the modal abundance from all lines with equivalent widths $\geq5$\,m\AA\, was taken as a starting point. All lines with a measured abundance outside of one standard deviation from the mode were excluded. The overall abundance was then taken to be the mean of the remaining lines, with the error being propagated forward from the errors in each abundance measurement reported by {\sc spectrum}. For carbon, this resulted in an abundance of $\log\epsilon_{\sc \rm C} = 9.43 \pm 0.68$, based on measurements of 45 lines. For nitrogen, the result was $\log\epsilon_{\sc \rm N} = 8.85 \pm 0.66$ based on 69 lines.
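The mode-based clipping just described can be sketched as follows; the abundance values are synthetic stand-ins (a clean Gaussian cluster plus a few high outliers mimicking blends), and the histogram binning is an illustrative choice.

```python
import numpy as np

# Sketch of the outlier rejection described above: take the modal
# abundance, drop lines more than one standard deviation from the mode,
# and average the remainder. Synthetic stand-in data.

rng = np.random.default_rng(4)
abund = np.concatenate([rng.normal(9.4, 0.3, 40),    # clean lines
                        rng.normal(10.4, 0.2, 5)])   # blends, biased high

hist, edges = np.histogram(abund, bins=15)
k = int(np.argmax(hist))
mode = 0.5 * (edges[k] + edges[k + 1])               # modal abundance
keep = np.abs(abund - mode) < abund.std()            # 1-sigma clip about mode
mean_abund = abund[keep].mean()
print(round(mean_abund, 2), int(keep.sum()))
```

The clipped mean lands near the clean-cluster value, whereas a simple mean of all lines would be biased high by the blends.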
There is a broad feature overlapping the C\,{\sc ii} lines at $\sim$4618\,{\AA}. These lines arise from transitions between electronic states that can both autoionise to the ground state of C\,{\sc iii}. This discrete-to-continuum transition produces the characteristic broad feature, whose absorption cross-section can be fitted by a Fano profile \citep{fano1961}, as in e.g. \citet{yan1987}. There was no need to fit this feature in our case, since the other carbon lines were adequate for abundance measurement. There are also a number of C\,{\sc ii} lines in the 4720 to 4760\,{\AA} region which appear in the modelled LTE spectrum but are absent in the data. These lines are known to be sensitive to NLTE effects and are partially in emission.
Both features are well known in the spectra of carbon-rich and hydrogen-deficient stars of appropriate temperature and surface gravity
\citep{klemola61, heber86.hdef1a, leuenhagen94a}.
The spectrum lacks strong oxygen lines. The oxygen abundance was measured from 9 relatively distinct lines. These gave a mean abundance of $\log \epsilon_{\sc \rm O} = 7.31 \pm 0.45$.
\subsubsection{Other metals}
The neon abundance was measured from 15 lines between 4219.74 and 4522.72\,\AA, giving a mean result of $\log \epsilon_{\sc \rm Ne} = 7.93 \pm 0.47$. The only magnesium feature in the spectrum is a blend of 3 lines at 4481\,\AA. The Mg abundance from this blend had to be measured carefully, since it lies within the wing of a broad He line which causes curvature in the pseudo-continuum. The equivalent width measurement was repeated 3 times, keeping the designated continuum as close to the line as possible. This gave a mean Mg abundance of $\log \epsilon_{\sc \rm Mg} = 7.81 \pm 0.41$. Using this value in the model reproduced the width of the line well, but the core was too shallow. However, the Mg line is saturated, so increasing the abundance in the model broadened the wings and reduced the accuracy of the fit. This suggests a supersolar Mg abundance, but without any other Mg features to measure, this cannot be established conclusively.
The aluminium abundance was measured from 6 lines between 4149.90 and 4529.20\,\AA\, to be $\log \epsilon_{\sc \rm Al} = 6.01 \pm 0.67$. Similarly, the sulphur abundance was found using lines at 4253.59, 4284.98, 4332.69, and 4361.53\,\AA\, to be $\log \epsilon_{\sc \rm S} = 6.25 \pm 0.43$.
The spectrum contains both strong and weak silicon lines. The shapes of the strong lines are not well-reproduced by the LTE model, so the abundance was measured from weaker lines, specifically 4212.41\,\AA\, and the four lines between 4716-4830\,\AA. The mean Si abundance was found to be $\log \epsilon_{\sc \rm Si} = 6.98 \pm 0.28$.
The iron and phosphorus abundances are important for determining the overall metallicity, but neither species has lines with $W_{\lambda}\geq5$\,m\AA\ within the 4100--5100\,\AA\ window. Upper limits on these abundances were established by finding the minimum abundance at which the strongest line in the window (Fe\,{\sc iii} 4164.73\,\AA\ and P\,{\sc iii} 4222.20\,\AA, respectively) becomes detectable at a confidence level of 68\% (Appendix \ref{app:width}). These were found to be $\log \epsilon_{\sc \rm Fe} \leq 6.30$ for iron and $\log \epsilon_{\sc \rm P} \leq 5.35$ for phosphorus. The S/N-weighted average spectrum from all available HRS observations was used to measure the Fe upper limit, as it provided an improved signal-to-noise ratio in the region around Fe\,{\sc iii} 4164.73\,\AA. P\,{\sc iii} 4222.20\,\AA\ lay in a noisy \'echelle overlap region, so the P upper limit was measured using only the 2019 HRS spectrum.
Taking the Al, Si, and S abundances as a proxy for metallicity, BPS\,CS\,22940$-$0009 is metal-poor by $\simeq 0.6 \pm 0.3$\,dex on average compared to solar values \citep{asplund09}. These measurements are $\simeq 0.2 \pm 0.2$\,dex lower on average than those of \citet{naslim10}, with the results being in agreement for most species. The noisier AAT/UCLES spectrum used in that study makes it very difficult to measure the equivalent widths of the weak lines, so those abundances were based on less sensitive saturated lines. The Mg blend is also strongly affected by noise in the continuum and in the blue wing of He\,{\sc i} 4471.48\,\AA; this makes the equivalent width of the feature harder to measure in the UCLES spectrum than in the HRS spectrum and may explain the difference in the Mg abundance results. Similarly, the weak Si lines are not easily measurable in the UCLES spectrum, which may explain the significant differences in Si abundance.
\subsection{Rotational Broadening}
The projected equatorial velocity ($v \sin i$) was assumed to be 0\,km\,s$^{-1}$ for the RSS spectrum, as it would not have a significant effect at that low resolution. For the HRS spectrum, {\sc sfit} was used to measure $v \sin i$ by fitting a formal solution to the spectrum in the region 4770--4830\,\AA, with all other parameters fixed to their values in the RSS solution. This region has a high signal-to-noise ratio and contains metal lines (3 Si\,{\sc iii}, 7 N\,{\sc ii}, 1 C\,{\sc ii}/N\,{\sc ii} blend) that are sensitive to $v \sin i$ broadening. This gave a result of $v \sin i = 15.0 \pm 3.3\,{\rm km\,s^{-1}}$.
\subsection{Spectral Energy Distribution}
The stellar radius ($R$), mass ($M$) and luminosity ($L$) were obtained by fitting the reddened spectral energy distribution (SED) of the best-fit model atmosphere ($T_{\rm eff}$ / K = 34\,970, $\log g/{\rm cm\,s^{-2}} = 4.79$) to observed visual and near-infrared photometry (Fig.\,\ref{fig:phot}), using the {\sc isis} SED fitting toolkit \citep{heber18,irrgang21}.
The reddening was calculated using the extinction law of \citet{fitzpatrick19} with $E(44-55) = 0.060 \pm 0.015$.
The fit then yielded an angular diameter $\log \theta /{\rm rad} = -11.054 \pm 0.008$ (where $\theta = 2R/d$).
Using the {\it Gaia} EDR3 parallax of $0.47\pm 0.04$\,mas and subtracting a zero-point correction of $-26\,{\rm \upmu as}$ \citep{lindegren21}, we obtain $d = 2.10^{+0.19}_{-0.16}$\,kpc and thus $R = 0.42 \pm 0.04$\,${\rm R_\odot}$. Using $R$ with $T_{\rm eff} = 34\,970 \pm 370 $\,K from the RSS spectrum, we obtain $L = 230^{+50}_{-40}\,{\rm L_\odot}$.
Using $R$ with $\log g / {\rm cm\,s^{-2}}=4.79 \pm 0.17$, we obtain $M = 0.39^{+0.08}_{-0.06}\,{\rm M_\odot}$.
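As a consistency check, the central values quoted above can be chained together; this sketch uses cgs constants and no error propagation, so it reproduces the quoted $R$, $L$, and $M$ only to within rounding and the asymmetric posteriors.

```python
import math

# Consistency check of the quoted stellar parameters from
# theta = 2R/d, L = 4 pi R^2 sigma T^4 and M = g R^2 / G,
# using central values only, in cgs units.

SIGMA_SB = 5.670374e-5     # Stefan-Boltzmann, erg cm^-2 s^-1 K^-4
G = 6.674e-8               # cm^3 g^-1 s^-2
RSUN, LSUN, MSUN = 6.957e10, 3.828e33, 1.989e33
PC = 3.0857e18             # cm

theta = 10 ** -11.054                   # angular diameter, rad
d = 2.10e3 * PC                         # 2.10 kpc
R = theta * d / 2.0                     # from theta = 2R/d
L = 4.0 * math.pi * R**2 * SIGMA_SB * 34970.0**4
M = 10**4.79 * R**2 / G                 # from g = GM/R^2

# Close to the quoted 0.42 Rsun, 230 Lsun, 0.39 Msun
print(round(R / RSUN, 2), round(L / LSUN), round(M / MSUN, 2))
```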
To indicate the impact of systematic errors, the SED fit obtained using the HRS parameters (Table\,\ref{tab:params}) yields $L = 210^{+50}_{-40}\,{\rm L_\odot}$ and $M = 0.29^{+0.06}_{-0.05}\,{\rm M_\odot}$.
\subsection{Radial velocity}
\begin{table}
\centering
\begin{tabular}{c|c|c}
\hline
Date & $v_{\rm rad}$ & $\delta v_{\rm rad}$\\
& (km\,s$^{-1}$) & (km\,s$^{-1}$)\\
\hline
2016 06 23 & 24.6 & 0.9\\
2016 06 30 & 22.8 & 1.7\\
2018 05 07 & 31.5 & 0.8\\
2019 04 25 & 32.7 & 2.0\\
\hline
\end{tabular}
\caption{Heliocentric radial velocities from cross-correlation of each available HRS observation of BPS\,CS\,22940$-$0009 with the best-fit model spectrum.}
\label{tab:vrads}
\end{table}
The radial velocity ($v_{\rm rad}$) was determined by cross-correlation of the HRS spectrum and the best-fitting model spectrum. The peak of the cross-correlation function provided a wavelength shift that was used to calculate $v_{\rm rad}$ after applying a barycentric correction. The error in $v_{\rm rad}$ was calculated by measuring $\chi^2$ between the model spectrum and the observed spectrum across the range of wavelength shifts used for the cross-correlation function. The $1\sigma$ error bar was taken to be where $\Delta\chi^2=\chi^2-\chi^2_{\rm min}=1$.
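The cross-correlation step can be sketched as follows. The spectra and line list are synthetic stand-ins, and the parabolic refinement of the CCF peak stands in for the $\chi^2$-based refinement described above; on a log-wavelength grid a Doppler shift becomes a uniform pixel offset.

```python
import numpy as np

# Sketch of the radial-velocity measurement: cross-correlate an
# observed spectrum against a template on a log-wavelength grid,
# then refine the CCF peak with a parabola for sub-pixel precision.
# All spectra here are synthetic illustrations.

C_KMS = 299792.458
loglam = np.linspace(np.log(4200.0), np.log(4800.0), 4000)
dv = (loglam[1] - loglam[0]) * C_KMS            # km/s per pixel

def spectrum(centres):
    s = np.ones_like(loglam)
    for c in centres:
        s -= 0.4 * np.exp(-0.5 * ((loglam - np.log(c)) / 2e-4) ** 2)
    return s

lines = [4267.0, 4471.5, 4686.0]                # illustrative line centres (A)
v_true = 32.7                                   # km/s, encoded in the toy data
template = spectrum(lines)
obs = spectrum([c * (1.0 + v_true / C_KMS) for c in lines])

t = template - template.mean()
o = obs - obs.mean()
ccf = np.correlate(o, t, mode="full")           # lag 0 at index len(t)-1

i = int(np.argmax(ccf))                         # CCF peak pixel
frac = 0.5 * (ccf[i - 1] - ccf[i + 1]) / (ccf[i - 1] - 2 * ccf[i] + ccf[i + 1])
v_meas = (i + frac - (len(t) - 1)) * dv
print(round(v_meas, 1))                         # recovers ~32.7
```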
Radial velocities were measured for all 4 available HRS observations of BPS\,CS\,22940$-$0009, to investigate any possible variability. Table\,\ref{tab:vrads} shows a distinct difference in $v_{\rm rad}$ between the 2016 and 2018/2019 spectra. However, pre-2018 HRS observations have shown significant velocity errors associated with the HRS calibration programme \citep{jeffery19}. Therefore it is unclear whether there is real variability in the $v_{\rm rad}$ of BPS\,CS\,22940$-$0009.
\subsection{Photometry}
BPS\,CS\,22940$-$0009 was observed at 2\,min cadence in a 600--1000\,nm bandpass with the Transiting Exoplanet Survey Satellite ({\it TESS}) during Sectors 13 and 27 (Fig.\,\ref{fig:tess}). In relative flux units, the mean individual datum error is 2.9\% and the standard deviation of the data is 2.2\%, over a total of 17\,546 observations. The Fourier transform was investigated for periodic content (Fig.\,\ref{fig:tess}). For frequencies ($\nu$) greater than 0.25\,d$^{-1}$ (periods less than 4\,d) there is no evidence for periodic variability with (semi-)amplitude $a > 0.06\%$.
Low-frequency signal at $\nu<0.25\,{\rm d}^{-1}$ with $a\approx 0.1\%$ is likely to be red noise.
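The amplitude-spectrum construction can be sketched as follows; the light curve here is pure Gaussian noise at roughly the point scatter quoted above (not the TESS data), so the maximum ``peak'' it prints is the noise floor, not a detection threshold for the real light curve.

```python
import numpy as np

# Sketch of the periodicity search: amplitude spectrum of an evenly
# sampled light curve. With mean-subtracted input, 2|FFT|/N gives the
# semi-amplitude of a sinusoid directly. Noise-only illustration.

CADENCE_D = 2.0 / (24.0 * 60.0)                 # 2-min cadence in days
rng = np.random.default_rng(2)
flux = 1.0 + rng.normal(0.0, 0.022, 17546)      # relative flux, 2.2% scatter

amp = 2.0 * np.abs(np.fft.rfft(flux - flux.mean())) / flux.size
freq = np.fft.rfftfreq(flux.size, d=CADENCE_D)  # cycles per day

high = freq > 0.25                              # periods shorter than 4 d
print(round(100.0 * amp[high].max(), 3))        # max semi-amplitude, per cent
```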
\section{Discussion}
\subsection{Atmospheric parameters}
The full list of atmospheric and stellar parameters obtained for BPS\,CS\,22940$-$0009 is given in Table \ref{tab:endtable}. The star's physical properties lie between those of the helium-rich hot subdwarfs and the extreme helium stars in $g$-$T$ space (Fig.\,\ref{fig:param_comp}). In terms of helium abundance, it lies close to the He-sdBs and is notably less H-deficient than EHe stars of similar temperature, such as LS\,IV$+6^{\circ}2$. The star has a luminosity class of V \citep{jeffery20}, compared with true hot subdwarfs, which are of classes VI--VII \citep{drilling13}. It can therefore be considered a particularly hot, high-gravity extreme helium star, with the closest counterpart being LS\,IV$+6^{\circ}2$ \citep{jeffery98}. If BPS\,CS\,22940$-$0009 formed from a double helium white dwarf merger, then it is likely to continue to evolve toward the helium main sequence and become a He-sdB \citep{saio00}.
\begin{table}
\centering
\begin{tabular}{c|c|c}
\hline
Property & Value & Error \\
\hline
$T_{\rm eff}$ (K) & 34\,970 & 370\\
$\log g$ (cm\,s$^{-2}$) & 4.79 & 0.17 \\
$n_{\rm He}$ & 0.978 & 0.006 \\
$p$ (mas) & 0.47 & 0.04 \\
$d$ (kpc) & 2.10 & $^{+0.19}_{-0.16}$ \\[.1cm]
$R/{\rm R_\odot}$ & 0.42 & ${\pm0.04}$ \\[.1cm]
$L/{\rm L_\odot}$ & 230 & $^{+50}_{-40}$ \\[.1cm]
$M/{\rm M_\odot}$ (median) & 0.39 & $^{+0.08}_{-0.06}$ \\[.1cm]
$v_{\rm rad}$ (km\,s$^{-1}$) & 32.7 & 2.0 \\
$v_{\rm t}$ (km\,s$^{-1}$) & 7.8 & 0.2 \\
$v\sin{i}$ (km\,s$^{-1}$) & 15.0 & 3.3 \\%& 9.5 & 0.6 \\
\hline
\end{tabular}
\caption{Atmospheric and stellar parameters of BPS\,CS\,22940$-$0009.}
\label{tab:endtable}
\end{table}
\subsection{Abundances}
The CNO profile of BPS\,CS\,22940$-$0009 resembles that of extreme helium stars such as BX\,Cir \citep{jeffery99} and LS\,IV$+6^{\circ}2$ \citep{jeffery98}. The depletion of H and O coupled with the enhancement of He and N indicate that the stellar material has been CNO-processed. The high C abundance also suggests some triple-$\alpha$ processing. During the merger of two helium white dwarfs, buried carbon-rich material can be dredged to the surface by opacity-driven convection caused by He shell flashes \citep{zhang12}.
The abundances of Al, Si, and S are all significantly subsolar, by $\sim$0.4--0.9\,dex, while the upper limit for Fe is subsolar by $\sim$0.8\,dex. The Ne abundance is approximately solar, which is low compared to other extreme helium stars that also show enhanced carbon (e.g. LS\,IV$+6^{\circ}2$ and LS\,IV$-14^{\circ}109$: Fig.\,\ref{fig:abund_comp}). The ratio of Ne to the light elements Al, Si, and S is in line with measurements for C-weak helium stars and subdwarfs (e.g. GALEX J175548.5+501210 (J1755+5012), Ton\,414, V652\,Her and GALEX J184559.8$-$413827 (J1846$-$4138): Fig.\,\ref{fig:abund_comp}). This suggests that the Ne abundance has not been significantly enhanced, for example by $\alpha$-captures on $^{14}$N at the time the excess carbon was produced.
The low metallicity could alternatively be caused by diffusion and stratification of these elements. For stratification to be significant, the star must be long-lived during its current evolution stage with respect to diffusion timescales ($\sim10^5$ yr). This means the star must lie on the helium main sequence or the extended horizontal branch and so must have subdwarf-like surface gravity, which BPS\,CS\,22940$-$0009 does not. Convection currents from He flashes would also disrupt the effects of any diffusion that existed prior to the flash \citep{byrne18}. Therefore if the star is currently evolving towards the helium main sequence, the low metallicity is likely to be intrinsic and not caused by diffusion.
\subsection{Effects of the local thermodynamic equilibrium approximation}
\label{sec:nlte}
For reasons of familiarity with the model atmosphere codes {\sc sterne} and {\sc spectrum}, our analysis of BPS\,CS\,22940$-$0009 has assumed the approximation of local thermodynamic equilibrium (LTE) for the determination of electron-level populations used in the equation of state and opacity calculation.
For stars with low surface gravities (or densities) or high effective temperatures, this approximation becomes less secure as the local radiation field increasingly perturbs the population distribution.
The boundary beyond which the LTE approximation is deemed inappropriate has been discussed by several authors \citep{anderson91,grigsby92,nieva07,pereira11}.
For hot subdwarfs, LTE appears satisfactory for $T_{\rm{eff}}< 30\,000$\,K and may remain useful up to $T_{\rm{eff}}\approx 40\,000$\,K (Rauch 2019, private communication).
\citet{jeffery20} found good agreement between effective temperatures and surface gravities obtained from grids of model atmospheres computed with and without the LTE approximation up to $T_{\rm{eff}}\approx 40\,000$\,K.
Indeed, the use or neglect of the correct line opacity has a far greater influence on the result \citep{anderson91}.
However, this boundary must shift to lower temperatures as surface gravity (or density) is reduced.
In the present case the surface gravity exceeds that of main-sequence stars, for which LTE should be satisfactory, but is significantly lower than that of most hot ``subdwarfs'', so the approximation should be tested.
For the purpose of validating the LTE analysis, we examine how LTE and non-LTE models differ for the properties of this star.
To this end we have computed model atmospheres and emergent spectra with the codes {\sc tlusty} and {\sc synspec} (version 208) \citep{hubeny17a,hubeny17b,hubeny17c,hubeny21}, including metals up to zinc at the same abundances as the {\sc sterne} models.
We have computed models both with and without LTE in order to avoid systematic differences arising from idiosyncrasies peculiar to either {\sc tlusty} and {\sc synspec} or to {\sc sterne} and {\sc spectrum}. The models were computed at 34\,000\,K with a $\log{g}$/cm\,s$^{-2}$ of 4.75, a helium number fraction of 99\% and a $v_{\rm t}$ of 5\,km\,s$^{-1}$.
The distribution of temperature with optical depth is shown in Fig.\,\ref{fig:tempstruc}.
The three models shown are similar at depths $\log{\tau}>0$, but begin to diverge in the higher layers of the atmosphere. Systematic differences between the {\sc sterne} and {\sc tlusty} LTE models, such as the different treatment of opacities and opacity sampling, contribute to the relatively hotter outer layers of the {\sc sterne} model.
The flux in the NLTE model is $\sim$4\% lower in the region for which broadband photometry data are available (3\,000 to 50\,000\,\AA). This effect would contribute to a $\sim$4\% lower luminosity and a $\sim$2\% lower radius estimation.
In the NLTE model, the neutral He line profiles are both deeper in the core and have less flux in the wings. An NLTE analysis with these models would likely yield a lower surface gravity, but only by $<0.25$\,dex. The temperature would likewise be measured slightly lower, by $<1000$\,K, as the NLTE model has deeper He\,{\sc ii} lines and the singly ionised states of carbon and nitrogen are underpopulated relative to the doubly ionised states. This would also affect the abundance measurements of these species, leading to an estimated increase in abundance of $\sim$20\%, given the values of the departure coefficients in the line-forming region.
Whilst models with non-LTE ion populations will certainly improve the fits to most H and He\,{\sc i} line profiles, it is not yet clear that all discrepancies in the fits will be resolved. Further work might include better theoretical profiles in {\sc synspec}, with new line broadening tables and consideration of other potential systematic errors such as departures from plane-parallel geometry.
This brief investigation of LTE vs. NLTE and {\sc tlusty} vs. {\sc sterne} has provided an order-of-magnitude estimate of the systematic uncertainty arising from the choice of input physics. It has not indicated what the optimum approach might be, though the hybrid LTE model with NLTE formal solution recommended by \citet{nieva07} may prove applicable here also.
\section{Conclusions}
We have presented a detailed analysis of optical spectra of BPS\,CS\,22940$-$0009 to investigate its properties and compare it to similar stars. Precise measurements of the atmospheric parameters and surface abundances have been obtained using grids of line-blanketed, LTE model atmospheres. We found BPS\,CS\,22940$-$0009 to lie on the boundary between the He-sdB and EHe stars in $g$-$T$ space, connecting the two classes. Due to its luminosity class, the star should not be considered as a true helium-rich subdwarf, but rather a particularly hot, high-gravity extreme helium star. The surface abundances show a composition of CNO-processed material with an intrinsically low metallicity. Strong carbon enhancement suggests that the star formed from a high-mass composite merger of two helium white dwarfs. BPS\,CS\,22940$-$0009 is likely now in a post-merger process of evolving towards the helium main sequence and becoming a true He-sdB star.
This conclusion would fail were BPS\,CS\,22940$-$0009 to be a member of a close binary. There is no evidence for a cool companion in the SED. We found no evidence of periodic variability in the {\it TESS} photometry for periods $\leq4$\,d. Our attempts to identify variability in the radial velocity (e.g. due to binarity) were inconclusive.
As a footnote (Section \ref{sec:nlte}), we investigated the impact of the LTE assumption on our analysis, and infer that a modest reduction in $T_{\rm eff}$ and $g$ and a modest increase in elemental abundances would result from a non-LTE analysis. However the differences were comparable with those obtained when comparing LTE models computed with two different model atmosphere codes. Consequently, the LTE assumption has little effect on our overall conclusion.
\section*{Acknowledgements}
The authors thank Andreas Irrgang and Matti Dorsch for the use of and assistance with the implementation of the {\sc isis} SED fitting routines and Matti Dorsch for advice on the use of {\sc tlusty}.
They thank the referee for constructive remarks which have led to substantial revision of the paper.
Some observations reported in this paper were obtained with the Southern African Large Telescope (SALT). This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by NASA's Science Mission Directorate.
This research has made use of {\sc isis} functions ({\sc isisscripts}) provided by ECAP/Remeis observatory and MIT (http://www.sternwarte.uni-erlangen.de/isis/).
This research has made use of the VizieR catalogue access tool, CDS,
Strasbourg, France \citep{vizier}.
EJS is supported by the United Kingdom (UK) Science and Technology Facilities Council (STFC) via UK Research and Innovations (UKRI) doctoral training grant ST/R504609/1.
LJAS and CSJ are supported by the STFC via UKRI research grant ST/V000438/1.
The Armagh Observatory and Planetarium (AOP) is funded by direct grant from the Northern Ireland Dept for Communities.
This funding also provides for AOP membership of the United Kingdom SALT consortium (UKSC).
For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.
\section*{Data Availability}
The raw and pipeline reduced SALT observations are available from the SALT Data Archive (https://ssda.saao.ac.za). The model atmospheres and spectra computed for this project are available via the Armagh Observatory and Planetarium web server (https://armagh.space/$\sim$SJeffery/). TESS photometric data are available through the MAST portal (https://mast.stsci.edu).
\bibliographystyle{mnras}
\bibliography{bps_spec} %
\appendix
\renewcommand\thefigure{A.\arabic{figure}}
\section{Renormalised HRS spectrum with formal solution}
\label{app:hrs_spec}
The SALT HRS spectrum of BPS\,CS\,22940$-$0009 and our computed synthetic spectrum are shown in Figs.\,\ref{fig:app1a}--\ref{fig:app1d}.
\renewcommand\thetable{B.\arabic{table}}
\begin{table*}
\centering
\begin{tabular}{ccccccccc}
& Species/ & & & Species/ & & & Species/ & \\
$\lambda$ (\AA) & $W_{\lambda}$ (m\AA) & $\log\epsilon$ & $\lambda$ (\AA) & $W_{\lambda}$ (m\AA) & $\log\epsilon$ & $\lambda$ (\AA) & $W_{\lambda}$ (m\AA) & $\log\epsilon$ \\
\hline
& H {\sc i} & & & N {\sc ii} (cont.) & & & N {\sc iii} & \\
4340.46 & 382 $\pm$ 82 & 9.44 & 4227.74 & 52 $\pm$ 30 & 8.54 $\pm$ 6.12 & 4378.93 & 85 $\pm$ 30 & 8.57 $\pm$ 0.33 \\
4861.32 & 570 $\pm$ 53 & 9.50 & 4236.93 & 103 $\pm$ 37 & 9.04 $\pm$ 0.41 & 4379.11 & 79 $\pm$ 32 & 8.40 $\pm$ 0.38 \\
& C {\sc ii} & & 4237.05 & 102 $\pm$ 38 & 8.84 $\pm$ 0.46 & 4544.80 & 15 $\pm$ 8 & 8.30 $\pm$ 1.41 \\
4267.02 & 287 $\pm$ 74 & 9.50 $\pm$ 0.27 & 4241.18 & 34 $\pm$ 15 & 8.71 $\pm$ 0.30 & 4546.32 & 7 $\pm$ 5 & 7.83 $\pm$ 0.25 \\
4267.27 & 281 $\pm$ 71 & 9.30 $\pm$ 0.26 & 4241.76 & 107 $\pm$ 30 & 8.59 $\pm$ 0.33 & 4547.30 & 7 $\pm$ 5 & 8.64 $\pm$ 0.42 \\
4285.70 & 31 $\pm$ 21 & 9.33 $\pm$ 0.41 & 4241.76 & 104 $\pm$ 28 & 8.56 $\pm$ 0.33 & 4640.64 & 133 $\pm$ 32 & 8.67 $\pm$ 0.61 \\
4307.58 & 23 $\pm$ 20 & 8.96 $\pm$ 0.50 & 4242.44 & 34 $\pm$ 16 & 8.71 $\pm$ 0.32 & 4640.64 & 131 $\pm$ 31 & 8.65 $\pm$ 0.30 \\
4313.10 & 74 $\pm$ 29 & 9.87 $\pm$ 0.53 & 4247.22 & 62 $\pm$ 24 & 9.85 $\pm$ 0.36 & 4641.85 & 59 $\pm$ 20 & 8.67 $\pm$ 0.32 \\
4317.26 & 92 $\pm$ 34 & 9.83 $\pm$ 0.58 & 4417.10 & 48 $\pm$ 19 & 9.02 $\pm$ 0.33 & 4641.85 & 58 $\pm$ 20 & 8.65 $\pm$ 0.34 \\
4318.60 & 51 $\pm$ 26 & 9.48 $\pm$ 0.49 & 4427.24 & 62 $\pm$ 28 & 8.95 $\pm$ 0.46 & & O {\sc ii} & \\
4321.65 & 44 $\pm$ 22 & 9.84 $\pm$ 0.42 & 4427.96 & 129 $\pm$ 48 & 9.99 $\pm$ 0.47 & 4319.63 & 11 $\pm$ 14 & 7.35 $\pm$ 0.61 \\
4323.10 & 16 $\pm$ 13 & 9.39 $\pm$ 0.45 & 4431.82 & 35 $\pm$ 18 & 8.60 $\pm$ 0.37 & 4345.56 & 21 $\pm$ 19 & 7.66 $\pm$ 0.55 \\
4368.27 & 102 $\pm$ 31 & 9.76 $\pm$ 0.40 & 4432.74 & 74 $\pm$ 26 & 8.53 $\pm$ 0.42 & 4349.43 & 10 $\pm$ 14 & 6.87 $\pm$ 0.66 \\
4368.27 & 102 $\pm$ 31 & 9.53 $\pm$ 0.39 & 4433.48 & 34 $\pm$ 18 & 8.46 $\pm$ 0.36 & 4351.26 & 15 $\pm$ 14 & 7.35 $\pm$ 0.49 \\
4369.87 & 54 $\pm$ 20 & 9.36 $\pm$ 0.34 & 4442.02 & 46 $\pm$ 18 & 8.35 $\pm$ 0.31 & 4414.90 & 31 $\pm$ 15 & 7.49 $\pm$ 0.34 \\
4370.69 & 5 $\pm$ 6 & 8.46 $\pm$ 0.58 & 4447.03 & 82 $\pm$ 25 & 8.67 $\pm$ 0.48 & 4590.97 & 14 $\pm$ 8 & 7.23 $\pm$ 0.31 \\
4372.33 & 129 $\pm$ 32 & 9.67 $\pm$ 0.22 & 4507.56 & 24 $\pm$ 18 & 8.96 $\pm$ 0.43 & 4661.63 & 8 $\pm$ 6 & 7.16 $\pm$ 0.34 \\
4372.33 & 129 $\pm$ 32 & 9.69 $\pm$ 0.29 & 4508.79 & 14 $\pm$ 13 & 9.11 $\pm$ 0.46 & 4699.22 & 19 $\pm$ 11 & 7.87 $\pm$ 0.32 \\
4372.49 & 130 $\pm$ 32 & 9.62 $\pm$ 0.27 & 4530.40 & 81 $\pm$ 22 & 8.57 $\pm$ 0.79 & 4705.35 & 6 $\pm$ 5 & 6.84 $\pm$ 0.39 \\
4372.49 & 130 $\pm$ 32 & 9.62 $\pm$ 0.29 & 4552.53 & 120 $\pm$ 30 & 9.51 $\pm$ 0.31 & & Ne {\sc ii} & \\
4374.27 & 95 $\pm$ 28 & 9.18 $\pm$ 0.25 & 4601.48 & 99 $\pm$ 31 & 9.31 $\pm$ 0.57 & 4219.74 & 41 $\pm$ 22 & 8.00 $\pm$ 0.39 \\
4375.01 & 58 $\pm$ 21 & 9.40 $\pm$ 0.30 & 4602.53 & 25 $\pm$ 16 & 9.01 $\pm$ 0.38 & 4233.85 & 7 $\pm$ 10 & 7.57 $\pm$ 0.66 \\
4376.56 & 52 $\pm$ 19 & 8.91 $\pm$ 0.28 & 4607.16 & 91 $\pm$ 28 & 9.27 $\pm$ 0.51 & 4250.65 & 6 $\pm$ 8 & 7.51 $\pm$ 0.63 \\
4409.16 & 42 $\pm$ 18 & 8.94 $\pm$ 0.33 & 4608.09 & 23 $\pm$ 12 & 8.86 $\pm$ 0.30 & 4290.37 & 17 $\pm$ 15 & 7.84 $\pm$ 0.50 \\
4409.99 & 38 $\pm$ 16 & 8.57 $\pm$ 0.31 & 4613.87 & 61 $\pm$ 20 & 8.80 $\pm$ 0.42 & 4290.60 & 17 $\pm$ 15 & 7.94 $\pm$ 0.48 \\
4411.20 & 94 $\pm$ 29 & 9.31 $\pm$ 0.36 & 4621.29 & 98 $\pm$ 24 & 9.40 $\pm$ 0.44 & 4369.86 & 34 $\pm$ 24 & 8.54 $\pm$ 0.48 \\
4411.52 & 108 $\pm$ 22 & 9.31 $\pm$ 0.26 & 4630.54 & 156 $\pm$ 31 & 9.60 $\pm$ 0.26 & 4379.55 & 81 $\pm$ 32 & 8.63 $\pm$ 0.41 \\
4413.26 & 23 $\pm$ 11 & 9.37 $\pm$ 0.27 & 4643.09 & 86 $\pm$ 24 & 9.09 $\pm$ 0.49 & 4391.99 & 29 $\pm$ 20 & 7.64 $\pm$ 0.43 \\
4637.63 & 32 $\pm$ 11 & 9.64 $\pm$ 0.23 & 4654.53 & 60 $\pm$ 23 & 9.56 $\pm$ 0.37 & 4397.99 & 20 $\pm$ 13 & 8.21 $\pm$ 0.36 \\
4638.92 & 51 $\pm$ 19 & 9.66 $\pm$ 0.33 & 4667.21 & 17 $\pm$ 8 & 8.78 $\pm$ 0.23 & 4409.30 & 31 $\pm$ 13 & 7.81 $\pm$ 0.28 \\
4867.07 & 38 $\pm$ 11 & 9.73 $\pm$ 0.22 & 4674.91 & 18 $\pm$ 10 & 8.83 $\pm$ 0.30 & 4413.22 & 27 $\pm$ 13 & 8.01 $\pm$ 0.30 \\
4953.86 & 34 $\pm$ 17 & 9.75 $\pm$ 0.35 & 4678.14 & 66 $\pm$ 21 & 8.59 $\pm$ 0.33 & 4428.63 & 31 $\pm$ 19 & 8.08 $\pm$ 0.40 \\
5032.13 & 58 $\pm$ 16 & 9.60 $\pm$ 0.28 & 4694.70 & 73 $\pm$ 30 & 8.97 $\pm$ 0.40 & 4430.79 & 15 $\pm$ 16 & 7.77 $\pm$ 0.58 \\
5035.94 & 56 $\pm$ 23 & 9.68 $\pm$ 0.40 & 4718.38 & 17 $\pm$ 9 & 8.87 $\pm$ 0.29 & 4430.95 & 15 $\pm$ 17 & 8.16 $\pm$ 0.60 \\
5040.71 & 36 $\pm$ 21 & 9.70 $\pm$ 0.42 & 4774.24 & 11 $\pm$ 6 & 8.40 $\pm$ 0.31 & 4511.48 & 18 $\pm$ 19 & 7.88 $\pm$ 0.59 \\
5044.36 & 23 $\pm$ 20 & 9.31 $\pm$ 0.53 & 4779.72 & 42 $\pm$ 14 & 8.79 $\pm$ 0.29 & 4522.72 & 12 $\pm$ 8 & 7.94 $\pm$ 0.36 \\
& C {\sc iii} & & 4788.13 & 57 $\pm$ 21 & 8.89 $\pm$ 0.41 & & Mg {\sc ii} & \\
4152.51 & 110 $\pm$ 50 & 9.92 $\pm$ 0.58 & 4793.65 & 18 $\pm$ 12 & 8.70 $\pm$ 0.37 & 4481.13 & 86 $\pm$ 32 & 7.86 $\pm$ 0.41 \\
4186.90 & 175 $\pm$ 62 & 9.18 $\pm$ 0.44 & 4803.29 & 131 $\pm$ 27 & 9.80 $\pm$ 0.28 & 4481.13 & 85 $\pm$ 32 & 7.84 $\pm$ 0.41 \\
4315.44 & 52 $\pm$ 25 & 9.27 $\pm$ 0.39 & 4810.31 & 8 $\pm$ 7 & 8.25 $\pm$ 0.46 & 4481.33 & 87 $\pm$ 34 & 7.74 $\pm$ 0.41 \\
4382.90 & 86 $\pm$ 38 & 9.82 $\pm$ 0.42 & 4970.23 & 15 $\pm$ 10 & 8.71 $\pm$ 0.57 & & Al {\sc iii} & \\
4515.33 & 29 $\pm$ 17 & 8.98 $\pm$ 0.40 & 4987.38 & 32 $\pm$ 15 & 8.72 $\pm$ 0.34 & 4149.90 & 7 $\pm$ 15 & 5.72 $\pm$ 1.04 \\
4515.78 & 47 $\pm$ 22 & 8.87 $\pm$ 0.33 & 4991.22 & 41 $\pm$ 22 & 9.28 $\pm$ 0.49 & 4150.14 & 5 $\pm$ 14 & 5.72 $\pm$ 1.31 \\
4516.77 & 65 $\pm$ 27 & 8.93 $\pm$ 0.37 & 4994.35 & 47 $\pm$ 30 & 9.76 $\pm$ 0.53 & 4479.89 & 19 $\pm$ 9 & 6.07 $\pm$ 0.27 \\
4651.01 & 54 $\pm$ 28 & 9.06 $\pm$ 7.20 & 4994.36 & 48 $\pm$ 29 & 9.77 $\pm$ 0.50 & 4479.97 & 17 $\pm$ 11 & 6.02 $\pm$ 0.33 \\
4651.47 & 180 $\pm$ 50 & 9.36 $\pm$ 0.38 & 4994.37 & 46 $\pm$ 29 & 9.73 $\pm$ 0.51 & 4512.54 & 20 $\pm$ 25 & 6.12 $\pm$ 0.71 \\
4659.06 & 46 $\pm$ 19 & 9.14 $\pm$ 8.01 & 4997.23 & 16 $\pm$ 12 & 9.12 $\pm$ 0.43 & 4529.20 & 41 $\pm$ 15 & 6.38 $\pm$ 0.33 \\
4663.64 & 56 $\pm$ 14 & 9.22 $\pm$ 0.26 & 5001.13 & 101 $\pm$ 30 & 9.19 $\pm$ 0.54 & & Si {\sc iii} & \\
4665.86 & 101 $\pm$ 20 & 9.39 $\pm$ 0.30 & 5001.47 & 112 $\pm$ 35 & 9.20 $\pm$ 0.56 & 4813.30 & 32 $\pm$ 11 & 7.08 $\pm$ 0.22 \\
4665.86 & 102 $\pm$ 19 & 9.40 $\pm$ 0.28 & 5002.70 & 41 $\pm$ 19 & 8.93 $\pm$ 0.41 & 4819.72 & 32 $\pm$ 11 & 6.96 $\pm$ 0.22 \\
4673.95 & 101 $\pm$ 20 & 9.54 $\pm$ 0.36 & 5005.15 & 89 $\pm$ 27 & 8.65 $\pm$ 0.54 & 4828.96 & 48 $\pm$ 16 & 7.15 $\pm$ 0.27 \\
& N {\sc ii} & & 5007.33 & 69 $\pm$ 17 & 8.71 $\pm$ 0.35 & & Si {\sc iv} & \\
4133.67 & 37 $\pm$ 22 & 9.04 $\pm$ 9.76 & 5010.62 & 62 $\pm$ 21 & 8.91 $\pm$ 0.45 & 4212.41 & 23 $\pm$ 14 & 6.69 $\pm$ 0.39 \\
4171.60 & 83 $\pm$ 28 & 8.84 $\pm$ 0.38 & 5011.30 & 20 $\pm$ 12 & 8.75 $\pm$ 0.35 & & S {\sc iii} & \\
4173.57 & 36 $\pm$ 20 & 8.81 $\pm$ 0.36 & 5012.03 & 37 $\pm$ 16 & 8.87 $\pm$ 0.36 & 4253.59 & 33 $\pm$ 21 & 6.17 $\pm$ 0.43 \\
4176.16 & 75 $\pm$ 28 & 8.39 $\pm$ 0.40 & 5023.05 & 36 $\pm$ 16 & 9.17 $\pm$ 0.36 & 4284.98 & 26 $\pm$ 16 & 6.31 $\pm$ 0.38 \\
4179.67 & 58 $\pm$ 25 & 8.92 $\pm$ 0.36 & 5025.66 & 31 $\pm$ 11 & 8.47 $\pm$ 0.26 & 4332.69 & 7 $\pm$ 8 & 5.98 $\pm$ 0.52 \\
4195.97 & 65 $\pm$ 36 & 9.16 $\pm$ 2.00 & 5045.09 & 100 $\pm$ 38 & 9.44 $\pm$ 0.66 & 4361.53 & 16 $\pm$ 12 & 6.52 $\pm$ 0.38 \\
4199.98 & 73 $\pm$ 32 & 8.96 $\pm$ 0.47 & 5073.59 & 25 $\pm$ 10 & 8.75 $\pm$ 0.25 & & & \\
\end{tabular}
\caption{All spectral line measurements used in the abundance calculations for BPS\,CS\,22940$-$0009. $W_{\lambda}$ is the measured equivalent width for each line, with associated error. $\log\epsilon$ is the abundance result and error calculated from $W_{\lambda}$ by {\sc spectrum}. No errors were reported for the hydrogen lines.}
\label{tab:linelist}
\end{table*}
\section{Table of line measurements}
\label{app:linelist}
The equivalent widths of all lines used in the measurement of elemental abundances are shown in Table\,\ref{tab:linelist}.
\section{Detection thresholds for absorption lines in stellar spectra}
\label{app:width}
Various approaches to determining the detection limit for weak spectral features may be found in the literature \citep[e.g.][]{cayrel88,stetson08}; the one adopted here is as follows.
The spectrum of radiation $f$ emitted by a body as a function of wavelength $\lambda$ is assumed to consist of a continuum $c(\lambda)$ interrupted by absorption lines. Defining $a\equiv f/c$, the equivalent width of an isolated absorption line is
\begin{equation*}
W_{\lambda}=\int_{\lambda_p}^{\lambda_q}\Big(1-\frac{f(\lambda)}{c(\lambda)}\Big)d\lambda=\int_{\lambda_p}^{\lambda_q}(1-a(\lambda))d\lambda,
\end{equation*}
where the integration limits ${\lambda_p},{\lambda_q}$ bracket the line in question.
The continuum flux $c(\lambda)$ in the vicinity of an absorption line may be approximated by a polynomial fit $c_{0}(\lambda)$ to regions of local spectrum excluding absorption lines.
Supposing that the spectrum is sampled discretely at wavelengths $\lambda_{i}$ (pixels) at intervals $\delta\lambda$, and rectifying such that $a_{i} = f_{i}/c_{0}(\lambda_{i})$, then
\begin{equation*}
W_{\lambda}\approx\sum_{i=p}^{q}(1-a_{i})\delta\lambda.
\end{equation*}
In continuum regions containing $N$ pixels, the mean value $\bar{a}\equiv\langle f_{i}/c_{0}\rangle = 1$, with standard deviation
\begin{equation*}
\sigma_{a}=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(a_{i}-\bar{a})^{2}}.
\end{equation*}
If each measurement $a_{i}$ is associated with a measurement error $\sigma_{i}$,
and assuming the errors associated with each pixel are independent and Poissonian, and that $\langle\sigma_{i}\rangle\approx\sigma_{a}$, the associated error in $W_{\lambda}$ is
\begin{equation*}
\sigma_{W}\approx\sigma_{a}\sqrt{W_{\lambda}\,\delta\lambda}.
\end{equation*}
In general, $\sigma_{i}\approx\sigma_{a}/\sqrt{a_{i}}$. Hence, more precisely,
\begin{equation*}
\sigma_{W}\approx\frac{\sigma_{a}}{\sqrt{\langle a_{i}\rangle}}\sqrt{W_{\lambda}\,\delta\lambda}.
\end{equation*}
The corollary is that, for weak lines where $\langle a_{i}\rangle \approx 1$, the signal-to-noise ratio $R_{\rm SN}$ of a line of equivalent width $W_{\lambda}$ is given by
\begin{equation*}
R_{\rm SN}\equiv\frac{W_{\lambda}}{\sigma_{W}}
\approx \frac{1}{\sigma_{a}}\sqrt{\frac{W_{\lambda}}{\delta\lambda}}
\end{equation*}
and hence
\begin{equation*}
R_{\rm SN}\geq n\;{\rm if}\;W_{\lambda}\geq n^{2}\sigma_{a}^{2}\,\delta\lambda.
\end{equation*}
The latter provides a threshold for the detection (or non-detection) of a weak line, with $n=1$, $2$, and $3$ representing confidence levels for detection of 68\%, 90\%, and 99\% respectively.
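For illustration, the quantities above can be evaluated numerically. The following Python sketch (the function names are ours, and the synthetic Gaussian line in the usage example is purely illustrative) computes $W_{\lambda}$ by the discrete pixel sum and applies an $n\sigma$ detection test via direct error propagation over the $N$ pixels in the integration window, assuming independent pixels with a common normalized noise $\sigma_{a}$:

```python
import numpy as np

def equivalent_width(wavelength, flux, continuum):
    """Discrete equivalent width W = sum_i (1 - f_i / c0_i) * dlambda.

    Returns (W, dlambda), with dlambda taken as the mean pixel spacing."""
    a = flux / continuum
    dlam = np.mean(np.diff(wavelength))
    return np.sum(1.0 - a) * dlam, dlam

def is_detected(w_lambda, sigma_a, dlam, npix, n=3):
    """n-sigma detection test: direct error propagation over npix independent
    pixels with common normalized noise sigma_a gives
    sigma_W = sigma_a * dlam * sqrt(npix)."""
    sigma_w = sigma_a * dlam * np.sqrt(npix)
    return w_lambda >= n * sigma_w
```

As a usage example, a Gaussian line of depth $0.5$ and width $0.5$\,\AA\ sampled at $0.1$\,\AA\ recovers the analytic equivalent width $0.5\times0.5\sqrt{2\pi}\approx0.63$\,\AA\ and is detected at far more than $3\sigma$ for $\sigma_{a}=0.02$.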
\bsp %
\label{lastpage} |
Title:
Dynamical cluster masses from photometric surveys |
Abstract: The masses of galaxy clusters can be measured using data obtained exclusively
from wide photometric surveys in one of two ways: directly from the amplitude
of the weak lensing signal or, indirectly, through the use of scaling relations
calibrated using binned lensing measurements. In this paper, we build on a
recently proposed idea and implement an alternative method based on the radial
profile of the satellite distribution. This technique relies on splashback, a
feature associated with the apocenter of recently accreted galaxies that offers
a clear window into the phase-space structure of clusters without the use of
velocity information. We carry out this dynamical measurement using the stacked
satellite distribution around a sample of luminous red galaxies in the fourth
data release of the Kilo-Degree Survey and validate our results using
abundance-matching and lensing masses. To illustrate the power of this
measurement, we combine dynamical and lensing mass estimates to robustly
constrain scalar-tensor theories of gravity at cluster scales. Our results
exclude departures from General Relativity of order unity. We conclude the
paper by discussing the implications for future data sets. Because splashback
mass measurements scale only with the survey volume, stage-IV photometric
surveys are well-positioned to use splashback to provide high-redshift cluster
masses.
| https://export.arxiv.org/pdf/2208.09369 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
gravitational lensing: weak -- large-scale structure of Universe -- galaxies: clusters: general
\end{keywords}
\section{Introduction}
The majority of ordinary matter, also known as baryonic matter, is trapped inside the potential wells of the large-scale structure of the Universe. The main constituent of this invisible scaffolding is dark matter, and its fully collapsed overdensities, known as haloes, contain most of the mass in the Universe. These structures are not isolated, and the process of structure formation is known to be hierarchical \citep{1974ApJ...187..425P}. In simple terms, this means that smaller haloes become subhaloes after they are accreted onto larger structures. Unsurprisingly, baryonic matter also follows this process, resulting in today's clusters of galaxies. Due to their joint evolution, a tight relationship exists between the luminosity of a galaxy and the mass of the dark matter halo it inhabits.
These galaxy clusters are associated with the largest haloes in the Universe and they are still accreting matter from the surrounding environment, i.e. they are not fully virialized yet.
Galaxies can be divided into two populations: red and blue \citep{2001AJ....122.1861S}. Whereas red galaxies derive their color from their aging stellar population, blue galaxies display active star formation, and young stars dominate their light. The exact mechanism behind quenching, i.e., the transition from star-forming to ``red and dead'', is still not fully understood \citep[see, e.g.,][]{2010MNRAS.402.1536S, 2015MNRAS.452.2879T}, but it is known to be connected to both baryonic feedback \citep[see, e.g.,][]{2008MNRAS.391..481S, 2010MNRAS.402.1536S} and interactions inside the dense cluster environment \citep[see, e.g.,][]{1980ApJ...237..692L, 1996Natur.379..613M, 2008MNRAS.387...79V}. An important consequence of this environmental dependence is the formation of a red sequence, i.e., a close relationship between the color and magnitude of red galaxies in clusters. By calibrating this red sequence as a function of redshift, it is possible to identify clusters in photometric surveys, even in the absence of precise spectroscopic redshifts \citep{2000AJ....120.2148G}.
In recent years, splashback has been recognized as a feature located at the edge of galaxy clusters. The radius of this boundary, $r_\text{sp}$, is close to the apocenter of recently accreted material \citep[see, e.g.,][]{Adhikari_2014, Diemer_2017, Diemer_2017b} and it is associated with a sudden drop in matter density. This is because it naturally separates the single and multi-stream regions of galaxy clusters: orbiting material piles up inside this radius, while collapsing material located outside it is about to enter the cluster for the first time.
In simulations and observations, the distribution of red satellite galaxies and dark matter seem to trace this feature in the same fashion \citep{2021MNRAS.tmp.1404C, 2021MNRAS.504.4649O}, but a possible dependence on satellite properties is currently being explored \citep{2021arXiv210505914S, 2022arXiv220205277O}. In fact, in the context of galaxy evolution models, the mechanism behind this feature has been known under the name backsplash for almost two decades and has been previously explored both in observations and simulations \citep{2005MNRAS.356.1327G, 2011MNRAS.416.2882M}. Compared to these efforts, however, the recent interest in this feature is guided by theoretical and observational implications for the study of the large-scale structure of the Universe.
Since haloes are perturbations on top of a background of constant density, their size can be quantified in terms of overdensity masses. For example, $M_\text{200m}$ is defined as the mass contained within a sphere of radius $r_\text{200m}$ such that the average density within it is $200$ times the average matter density of the Universe $\rho_\text{m}(z)$,
\begin{equation}
\label{eq:200m}
M_\text{200m} = 200 \times \frac{4\pi}{3} \rho_\text{m}(z) r_\text{200m}^3.
\end{equation}
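To make Eq.~(\ref{eq:200m}) concrete, it can be inverted for $r_\text{200m}$. The following Python sketch assumes illustrative values ($\Omega_\text{m}=0.3$ and the standard critical density in $h$-units); it is not part of any analysis pipeline described here:

```python
import numpy as np

# Critical density today in (Msun/h) / (Mpc/h)^3 (standard value, h-independent).
RHO_CRIT0 = 2.775e11

def r200m(m200m, z, omega_m=0.3):
    """Invert M_200m = 200 * (4 pi / 3) * rho_m(z) * r^3 for the physical
    radius r_200m [Mpc/h], with rho_m(z) = omega_m * rho_crit0 * (1 + z)^3
    and m200m in Msun/h."""
    rho_m = omega_m * RHO_CRIT0 * (1.0 + z) ** 3
    return (3.0 * m200m / (800.0 * np.pi * rho_m)) ** (1.0 / 3.0)
```

For example, a $10^{14}\,M_\odot/h$ halo at $z=0$ has $r_\text{200m}\approx1.1$\,Mpc$/h$; because $\rho_\text{m}\propto(1+z)^{3}$, the physical radius at fixed mass halves at $z=1$, which is the pseudo-evolution effect mentioned below.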
From a theoretical perspective, the splashback radius defines a more accurate cluster mass and sidesteps the issue of pseudo evolution due to an evolving $\rho_\text{m}(z)$ as a function of redshift $z$ \citep{2013ApJ...766...25D, More_2015}. Thanks to this property, this definition implies a universal mass function that is valid for a variety of cosmologies \citep{2020ApJ...903...87D}. Moreover, the shape of the matter profile around this feature can also be used to learn about structure formation, the nature of dark matter \citep{2020JCAP...02..024B} and dark energy \citep{Contigiani_2019}.
Observationally, one of the most noteworthy applications of the splashback feature is the study of quenching through the measurement of the spatial distribution of galaxy populations with different colors \citep{2020arXiv200811663A}. While notable, this was not the earliest result from the literature, and many other measurements preceded it. Published works can be divided into three groups: those based on targeted weak lensing observations of X-ray selected clusters \citep{2017ApJ...836..231U, 2019MNRAS.485..408C}, those based on the lensing signal and satellite distributions around SZ-selected clusters \citep[see, e.g., ][]{Shin_2019}, and those based on samples constructed with the help of cluster-finding algorithms applied to photometric surveys \citep[see, e.g.,][]{2016ApJ...825...39M, 2018ApJ...864...83C}. However, we note that in the case of the last group, the results are difficult to interpret because the splashback signal correlates with the parameters of the cluster detection method \citep{Busch_2017}.
In this work, we implement an application of this feature based on \cite{2021MNRAS.tmp.1404C}. The location of the splashback radius is connected to halo mass, and its measurement from the distribution of cluster members can therefore lead to a mass estimate. Because this distribution can be measured without spectroscopy, this means that we can extract a dynamical mass purely from photometric data. To avoid the issues related to cluster-finding algorithms explained above, we studied the average distribution of faint galaxies around luminous red galaxies (LRGs) instead of the targets identified through overdensities of red galaxies. If we consider only passive evolution, the observed magnitude of the LRGs can be corrected to construct a sample with constant comoving density \citep{2016MNRAS.461.1431R,2019MNRAS.487.3715V}, and, by selecting the brightest among them, we expect to identify the central galaxies of groups and clusters.
We present our analysis in Section~\ref{sec:profiles} and produce two estimates of the masses of the haloes hosting the LRGs in Section~\ref{sec:fit}. The first is based on the splashback feature measured in the distribution of faint galaxies, while the second is based on the amplitude of weak lensing measurements. After comparing these results with an alternative method in Section~\ref{sec:discussion}, we discuss our measurements in the context of modified models of gravity. We conclude by pointing out that, while we limit ourselves to redshifts $z<0.55$ here, the sample constructed in this manner has implications for the higher redshift range probed by future stage-IV photometric surveys \citep{2006astro.ph..9591A} such as
\emph{Euclid} \citep{laureijs2011euclid} and the Legacy Survey of Space and Time \citep[LSST,][]{2009arXiv0912.0201L}.
Section~\ref{sec:future} discusses these complications in more detail and explores how this method can be used to complement the use of lensing to extract the masses of X-ray \citep{2019MNRAS.485..408C} or SZ selected clusters \citep{Shin_2019}.
Unless stated otherwise, we assume a cosmology based on the 2015 Planck data release \citep{Planck2015}. For cosmological calculations, we use the Python packages \textsc{astropy} \citep{Price-Whelan:2018hus} and \textsc{colossus} \citep{Diemer:2017bwl}. The symbols $R$ and $r_\text{sp}$ always refer to a comoving projected distance and a comoving splashback radius. %
\section{Data}
\label{sec:data}
This section introduces both the Kilo-Degree Survey \citep[KiDS,][]{deJong2013} and its infrared companion, the VISTA Kilo-degree INfrared Galaxy survey \citep[VIKING,][]{Edge2013}. Their combined photometric catalog and the sample of LRGs extracted from it \citep{2020arXiv200813154V} are the essential building blocks of this paper.
\subsection{KiDS}
KiDS is a multi-band imaging survey in four filters ($ugri$) covering $1350$ deg$^2$. Its fourth data release \citep[DR4, ][]{Kuijken2019DR4} is the basis of this paper and has a footprint of 1006 deg$^2$ split between two regions, one equatorial and the other in the south Galactic cap ($770$ deg$^2$ in total after masking). The $5\sigma$ mean limiting magnitudes in the $ugri$ bands are, respectively, 24.23, 25.12, 25.02, and 23.68. The mean seeing for the $r$-band data, used both as a detection band and for the weak lensing measurements, is 0.7\arcsec. The companion survey VIKING covers the same footprint in five infrared bands, $ZYJHK_s$.
The raw data have been reduced with two separate pipelines, THELI \citep{2005AN....326..432E} for a lensing-optimized reduction of the $r$-band data, and AstroWISE \citep{2013ExA....35...45M}, used to create photometric catalogs of extinction corrected magnitudes. The source catalog for lensing was produced from the THELI images. Lensfit \citep{2013MNRAS.429.2858M, Conti:2016gav, 2019A&A...624A..92K} was used to extract the galaxy shapes.
\subsection{LRGs}
\label{sec:datalrg}
The LRG sample presented in \cite{2020arXiv200813154V} is based on KiDS DR4. In order to construct the catalogue, the red sequence up to redshift $z=0.8$ was obtained by combining spectroscopic data with the $griZ$ photometric information provided by the two surveys mentioned above. Furthermore, the near-infrared $K_s$ band from VIKING was used to perform a clean separation of stellar objects to lower the stellar contamination of the sample.
The color-magnitude relation that characterizes red galaxies was used to calibrate redshifts to a precision higher than generic photometric-redshift (photo-zs) methods, resulting in redshift errors for each galaxy below $0.02$. For more details on how the total LRG sample is defined and its broad properties, we direct the interested reader to \cite{2020arXiv200813154V}, or \cite{2019MNRAS.487.3715V}, a similar work based on a previous KiDS data release.
\cite{MCF2021} further analyzed this same catalog and calculated absolute magnitudes for all LRGs using \textsc{LePHARE} \citep{2011ascl.soft08009A} and \textsc{EZGAL} \citep{2012PASP..124..606M}. The first code corrects for the redshift of the rest-frame spectrum in the different passbands (k-correction), while the second corrects for the passive evolution of the stellar population (e-correction). For this work, we used these (k+e)-corrected luminosities as a tracer of total mass since the two are known to be highly correlated \citep[see, e.g.,][]{2006MNRAS.368..715M, 2015A&A...579A..26V}. Based on this, we then defined two samples with different absolute $r$-band magnitude cuts, $M_r<-22.8$ and $M_r<-23$, that we refer to as \emph{all} and \emph{high-mass} samples. These correspond to the $10$th and $5$th percentiles of the absolute magnitude distribution of the \emph{luminous} sample studied in \cite{MCF2021}, and the two samples contain $5524$ and $2850$ objects, respectively.
Because the (k+e)-correction presented above is designed to correct for observational biases and galaxy evolution, the expected redshift distribution of the LRGs should correspond to a constant comoving density. However, when studying our samples (see Figure~\ref{fig:redshift}), it is clear that this assumption holds only up to $z=0.55$. This suggests that the empirical corrections applied to the observed magnitudes are not optimal. It is important to stress that this discrepancy was not recognized before because our particular selection amplifies it: because we consider here the tail of a much larger sample ($N\sim10^5$) with a steep magnitude distribution, a small error in the lower limit induces a large mismatch at the high-luminosity end. To overcome this limitation, we discard all LRGs above $z=0.55$. After fitting the distributions in Figure~\ref{fig:redshift}, we obtained comoving densities $n = 7.5\times 10^{-6}~\text{Mpc}^{-3}$ and $n = 4.0\times 10^{-6}~\text{Mpc}^{-3}$ for the full and high-mass samples, respectively.
\section{Profiles}
\label{sec:profiles}
In this section, we discuss how we used our data sets to produce two stacked signals measured around the LRGs: the galaxy profile, capturing the distribution of fainter red galaxies, and the weak lensing profile, a measure of the projected mass distribution extracted from the distorted shapes of background galaxies. We present these two profiles and the $68$ percent contours of two separate parametric fits in Figure~\ref{fig:measurement}. The details of the fitting procedure are explained in Section~\ref{sec:fit}.
\subsection{Galaxy profile}
\label{sec:galaxyanalysis}
We expect bright LRGs to be surrounded by fainter satellites, i.e., we expect them to be the central galaxies of galaxy groups or clusters. To obtain the projected number density profile of the surrounding KiDS galaxies, we split the LRG samples in $7$ redshift bins of size $\delta_z = 0.05$ in the range $z\in[0.2, 0.55]$. We then defined a corresponding KiDS galaxy catalog for each redshift bin, obtained the background-subtracted distribution of these galaxies around the LRGs, and finally stacked these distributions using the weights $w_i$ defined below.
We did not select the KiDS galaxies by redshift due to their large uncertainty. Instead, for each redshift bin, we used the entire KiDS catalogs and only applied two redshift-dependent selections: one in magnitude and one in color space. The reason behind the first selection is simple: compared to a flat signal-to-noise ratio (SNR) threshold, a redshift-dependent magnitude limit does not mix populations with different intrinsic magnitudes as a function of redshift \citep[as suggested by][]{2016ApJ...825...39M}. On the other hand, the color cut has a more physical explanation. Red satellites are the most abundant population in galaxy clusters and, due to their repeated orbits inside the host cluster, they are known to better trace dynamical features such as splashback \citep[see, e.g.,][]{2017ApJ...841...18B}. Combining these two criteria also has the effect of selecting a similar population even in the absence of k-corrected magnitudes.
For the highest redshift considered here, $z_\text{max}$, we limited ourselves to observed magnitudes $m_r<23$, equivalent to a $10$ SNR cut. We then extrapolated this limit to other redshift bins by imposing
\begin{equation}
\label{eq:magcut}
m_r < 23 - 5\log \left(
\frac{d_L(z_\text{max})}{d_L(z_i)} \right),
\end{equation}
where $z_i$ is the upper edge of the redshift bin considered, and $d_L(z)$ is the luminosity distance as a function of redshift. Afterward, we divided the galaxy catalogs into two color populations by following the method of \cite{2020arXiv200811663A}. Compared to random points in the sky, the color distribution of KiDS galaxies around LRGs contains two features: an overdensity of ``red'' objects and a deficit of ``blue'' objects. Based on the red-sequence calibration of \cite{2020arXiv200813154V} and the location of the $4000$ \AA~break, we identified the ${(g-r)-(r-i)}$ plane as the optimal color space to separate these two populations at redshifts $z\leq 0.55$. We also noted that the ${(i-Z)-(r-i)}$ plane would be better suited for higher redshifts. From the distribution in the color-color plane, the two classes can then be separated by the line perpendicular to the segment connecting these two loci and passing through their midpoint. Figure~\ref{fig:color_split} provides an example of this procedure. We point out that a more sophisticated selection could be used since the structure in color space suggests the existence of a compact red cloud. For the purposes of this work, however, we do not find this to be necessary.
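As an illustration of the redshift-dependent magnitude cut above, the following Python sketch evaluates the limit as a function of redshift, assuming a flat $\Lambda$CDM cosmology with illustrative Planck-like parameters and a simple trapezoidal integrator for the luminosity distance (in practice a library such as \textsc{astropy} would be used):

```python
import numpy as np

# Illustrative flat-LCDM parameters (Planck-like); not the exact values of the paper.
C_KMS, H0, OMEGA_M = 299792.458, 67.7, 0.31

def lum_dist(z, n=2000):
    """Luminosity distance [Mpc] via a trapezoidal integral of 1/E(z)."""
    zz = np.linspace(0.0, z, n)
    ez = np.sqrt(OMEGA_M * (1.0 + zz) ** 3 + 1.0 - OMEGA_M)
    dc = C_KMS / H0 * np.sum(0.5 * (1.0 / ez[:-1] + 1.0 / ez[1:])) * (zz[1] - zz[0])
    return (1.0 + z) * dc

def mag_limit(z_i, z_max=0.55, m_max=23.0):
    """Magnitude cut m_r < m_max - 5 log10(d_L(z_max) / d_L(z_i))."""
    return m_max - 5.0 * np.log10(lum_dist(z_max) / lum_dist(z_i))
```

By construction the limit equals $m_r=23$ at $z_\text{max}=0.55$ and brightens toward lower redshift, reaching roughly $m_r\approx21$ at $z=0.25$ for these parameters.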
We used \textsc{treecorr} \citep{2004MNRAS.352..338J, 2015ascl.soft08007J} to extract the correlation functions from the red galaxy catalogs defined above
\begin{equation}
\xi_i = \frac{DD_i}{DR_i} - 1,
\end{equation}
where $DD$ and $DR$ are the numbers of LRG-galaxy pairs calculated using the KiDS catalogs or the random catalogs, respectively. These randoms are composed of points uniformly distributed in the KiDS footprint. The error covariance matrices of these measurements were obtained by dividing the survey area into $50$ equal-area jackknife regions. Because the signal is statistics limited, the off-diagonal terms of this matrix are found to be negligible. To further support this statement, we point out that due to the low number density of the sample (see Figure~\ref{fig:redshift}), the clusters do not overlap in real space.
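The estimator and jackknife error estimate described above can be sketched as follows (a minimal Python illustration; the array shapes and function names are ours, and in practice the pair counts come from \textsc{treecorr}):

```python
import numpy as np

def xi_estimator(dd, dr):
    """Natural estimator xi = DD/DR - 1, per radial bin."""
    return dd / dr - 1.0

def jackknife_errors(xi_samples):
    """Jackknife standard error from leave-one-region-out measurements.

    xi_samples has shape (n_regions, n_bins); the prefactor (n-1)/n is the
    standard jackknife variance normalization."""
    n = xi_samples.shape[0]
    mean = xi_samples.mean(axis=0)
    return np.sqrt((n - 1.0) / n * np.sum((xi_samples - mean) ** 2, axis=0))
```

Dropping the off-diagonal covariance terms, as done here, amounts to keeping only the per-bin variances returned by `jackknife_errors`.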
Formally, the correlation function written above is related to the surface overdensity of galaxies:
\begin{equation}
\Sigma_{i}(R) = \xi_i(R) \Sigma_{0, i},
\end{equation}
where $\Sigma_{0, i}$ is the average surface density of KiDS galaxies in the $i$-th redshift bin. However, since we are interested in the shape of the profile and not its amplitude, we did not take this parameter into account when stacking the correlation functions $\xi_i$.
The signal considered in this paper is a weighted sum of the individual correlation functions. Formally:
\begin{equation}
\frac{\Sigma_\text{g}(R)}{\Sigma_0} = \frac{\sum_i w_i(R) ~\xi_i(R)}{\sum_i w_i(R)},
\end{equation}
where $\Sigma_0$ is a constant needed to transform the dimensionless correlation function into the projected mass density. Because we decided to fit the combination $\Sigma_\text{g}(R)/\Sigma_0$ directly, the value of this constant is unimportant. To optimize the stacked signal, we used as weights $w_i$ the inverse variance of our measurement. This corresponds to an SNR weighted average, where the SNR is, in our case, dominated by the statistical error of the DD counts.
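The inverse-variance weighted stack can be written compactly; the following Python sketch (illustrative, with the per-bin variances of each redshift slice as input) implements the weighted average above:

```python
import numpy as np

def stack_profiles(xis, variances):
    """Inverse-variance weighted stack, per radial bin:
    Sigma_g / Sigma_0 = sum_i w_i xi_i / sum_i w_i with w_i = 1 / var_i.

    xis and variances have shape (n_redshift_bins, n_radial_bins)."""
    w = 1.0 / np.asarray(variances)
    return np.sum(w * np.asarray(xis), axis=0) / np.sum(w, axis=0)
```

With equal variances this reduces to a plain average; noisier redshift slices are down-weighted bin by bin.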
The left side of Figure~\ref{fig:measurement} presents our measurement of the galaxy profile around the LRGs. As expected, the high-mass subsample has a higher amplitude compared to the entire sample.
\subsection{Weak lensing profile}
\label{sec:lensinganalysis}
The shapes of background sources are deformed, i.e., lensed, by the presence of matter along the line of sight. In the weak lensing regime, this results in the observed ellipticity $\bm{\epsilon}$ of a galaxy being a combination of its intrinsic ellipticity and a lensing shear. If we assume that the intrinsic shapes of galaxies are randomly oriented, the coherent shear in a region of the sky can therefore be computed as the mean of the ellipticity distribution. %
Consider a circularly symmetric matter distribution acting as a lens. In this case, the shear only contains a tangential component, i.e., the shapes of background galaxies are deformed only along the directions parallel and perpendicular to the line on the sky connecting the source to the center of the lens. Because of this, we can define the lensing signal in an annulus of radius $R$ as the average value of the tangential components of the ellipticities $\epsilon^{(t)}$. The next few paragraphs provide the details of the exact procedure we followed to measure this lensing signal around the LRGs in our samples. For this second measurement, we used the weak lensing KiDS source catalog extending up to redshift $z=1.2$ \citep[see also,][]{2015MNRAS.452.3529V, Dvornik_2017}.
Based on the lensfit weights $w_s$ associated with each source, we defined \emph{lensing} weights for every lens-source combination,
\begin{equation}
\label{eq:lensingeff}
w_\text{l,s} = w_\text{s} \left(\Sigma_{\text{crit, l}}^{-1}\right)^{2},
\end{equation}
where the two indices $\text{l}$ and $\text{s}$ are used to indicate multiple lens-source pairs. The second factor in the product above represents a lensing efficiency contribution and, in our formalism, this quantity does not depend on the source. It is calculated instead as an average over the entire source redshift distribution $n(z_\text{s})$:
\begin{equation}
\label{eq:lensingeff2}
\Sigma_\text{crit, l}^{-1} = \frac{4\pi G}{c^2} \frac{d_\text{A}(z_\text{l})}{(1+z_\text{l})^2} \int_{z_\text{l}+\delta}^{\infty} dz_\text{s} \; \frac{d_\text{A}(z_\text{l}, z_\text{s})}{d_\text{A}(0, z_\text{s})} n(z_\text{s}),
\end{equation}
where $d_\text{A}(z_1, z_2)$ is the angular diameter distance between the redshifts $z_1$ and $z_2$ in the chosen cosmology. Sources that belong to the correlated structure surrounding the lens might scatter behind it due to the uncertainty of the photometric redshifts. The gap between the lens plane and the source plane in the expression above ($\delta=0.2$) ensures that our signal is not diluted by this effect \citep[see appendix A4 of][]{Dvornik_2017}.
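The evaluation of Equation~\eqref{eq:lensingeff2} can be sketched as follows in a flat $\Lambda$CDM background (an illustrative sketch with assumed cosmological parameters, omitting the constant prefactor $4\pi G/c^2$; the function names are ours):

```python
import numpy as np
from scipy.integrate import quad

# assumed flat-LCDM background; H0 and Om are illustrative, not the paper's values
H0, OM, C_KMS = 70.0, 0.3, 2.998e5

def E(z):
    return np.sqrt(OM * (1.0 + z)**3 + (1.0 - OM))

def d_A(z1, z2):
    """Angular diameter distance between z1 < z2 in a flat universe [Mpc]."""
    d_c, _ = quad(lambda z: C_KMS / (H0 * E(z)), z1, z2)
    return d_c / (1.0 + z2)

def sigma_crit_inv(z_l, n_of_z, z_max=1.2, delta=0.2):
    """Effective inverse critical surface density, Eq. (lensingeff2),
    up to the constant prefactor 4*pi*G/c^2. The gap delta=0.2 between
    lens and source planes avoids dilution by correlated sources."""
    z_s = np.linspace(z_l + delta, z_max, 200)
    ratio = np.array([d_A(z_l, z) / d_A(0.0, z) for z in z_s])
    integrand = ratio * n_of_z(z_s)
    return d_A(0.0, z_l) / (1.0 + z_l)**2 * np.trapz(integrand, z_s)
```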
Once all of these ingredients are computed, an estimate of the measured lensing signal is given by:
\begin{equation}
\label{eq:deltaS}
\Delta \Sigma (R) =
\frac{
\sum_\text{l,s} \epsilon^\text{(t)}_{\text{l,s}} w_\text{l,s} \Sigma_{\text{crit, l}}
}{
\sum_\text{l,s} w_\text{l,s}
}
\frac{1}{1+m},
\end{equation}
where the sums are calculated over every lens-source pair, and $m$ is a residual multiplicative bias of order $0.014$ calibrated using image simulations \citep{Conti:2016gav, 2019A&A...624A..92K}. This signal is connected to the mass surface density $\Sigma_\text{m}(R)$ and its average value within that radius, $\overline{\Sigma}_\text{m}(<R)$:
\begin{equation}
\label{eq:esd}
\Delta \Sigma (R) = \overline{\Sigma}_\text{m}(<R) - \Sigma_\text{m} (R).
\end{equation}
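The estimator of Equation~\eqref{eq:deltaS} amounts to a single weighted sum over all lens-source pairs in a radial bin; a minimal \textsc{numpy} sketch (our own illustrative helper, not the pipeline code) is:

```python
import numpy as np

def delta_sigma(e_t, w_ls, sigma_crit_l, m=0.014):
    """Stacked excess surface density estimator, Eq. (deltaS).

    e_t          : tangential ellipticities, one entry per lens-source pair
    w_ls         : lensing weights w_s * (Sigma_crit_inv)**2, Eq. (lensingeff)
    sigma_crit_l : critical surface density of the lens in each pair
    m            : residual multiplicative bias from image simulations
    """
    return np.sum(e_t * w_ls * sigma_crit_l) / np.sum(w_ls) / (1.0 + m)
```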
The covariance matrix of this average lensing signal was extracted through bootstrapping, i.e., by resampling $10^5$ times the $1006$ $1\times1$ deg$^2$ KiDS tiles used in the analysis. This signal, like the galaxy profile before, is also statistics limited. Therefore we have not included the negligible off-diagonal terms of the covariance matrix in our analysis.
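The tile-level bootstrap described above can be sketched as follows (a schematic version with a reduced default number of resamples; the paper used $10^5$ resamples of the $1006$ tiles):

```python
import numpy as np

def bootstrap_variance(tile_signals, n_boot=10_000, seed=0):
    """Per-bin variance of the stacked signal from bootstrap resampling
    of the per-tile measurements.

    tile_signals : (n_tiles, n_bins) array, the signal measured in each tile
    """
    rng = np.random.default_rng(seed)
    n_tiles = tile_signals.shape[0]
    means = np.empty((n_boot, tile_signals.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n_tiles, size=n_tiles)   # resample tiles with replacement
        means[b] = tile_signals[idx].mean(axis=0)
    return means.var(axis=0)
```

Since the measurement is statistics limited, only the diagonal of the resulting covariance is retained in the analysis.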
Finally, we note that we have thoroughly tested the consistency of our lensing measurement. We computed the expression in Equation~\eqref{eq:deltaS} using the cross-component $\epsilon^{(\times)}$ instead of the tangential $\epsilon^\text{(t)}$ and verified that its value was consistent with zero. Similarly, we also confirmed that the measurement was not affected by additive bias by measuring the lensing signal evaluated around random points.
\section{Three ways to measure cluster masses}
\label{sec:fit}
This section presents three independent measures of the total mass contained in the LRG haloes. We refer to these estimates as splashback (or dynamical) mass, lensing mass and abundance mass. The first two are extracted by fitting parametric profiles to the two signals presented in the previous section (Figure~\ref{fig:measurement}), and the third is based on a simple abundance matching argument. Fitting the galaxy profile allows us to constrain the splashback feature and provides a dynamical mass, while fitting the amplitude of the lensing signal provides a lensing mass.
\begin{table}
\begin{center}
\begin{tabular}{c|c}
\hline
Parameter & Prior \\ \hline
$\alpha$ & $\mathcal{N}(0.2, 2)$ \\
$g$ & $\mathcal{N}(4, 0.2)$ \\
$\beta$ & $\mathcal{N}(6, 0.2)$ \\
$r_\text{t}/(1~\text{Mpc})$ & $\mathcal{N}(1, 4)$ \\
$s_\text{e}$ & $[0.1, 2]$ \\ \hline
\end{tabular}
\end{center}
\caption{The priors used in the fitting procedure of Section~\ref{sec:fit}. When fitting the data in the left panel of Figure~\ref{fig:measurement}, we employ the model in Equation~\eqref{eq:DK14} with the priors presented above. For some parameters, we impose flat priors in a range, e.g. $[a, b]$, while for others we impose a Gaussian prior $\mathcal{N}(m, \sigma)$ with mean $m$ and standard deviation $\sigma$. We do not restrict the prior range of the two degenerate parameters $\bar{\rho}$ and $r_0$.}
\label{tab:priors}
\end{table}
\subsection{Splashback mass}
\label{sec:fitmass}
Thanks to the splashback feature, it is possible to estimate the total halo mass by fitting the galaxy distribution with a flexible enough model. The essential feature that such a three-dimensional profile, $\rho(r)$, must capture is a sudden drop in density around $r_\text{200m}$. Its most important derived quantity is the radius of steepest slope, also known as the splashback radius $r_\text{sp}$. Equivalently, this location can be defined as the radius where the logarithmic derivative $d \log \rho/d \log r$ reaches its minimum.
In general, the average projected correlation function can be written in terms of the average three-dimensional mass density profile as:
\begin{equation}
\label{eq:S}
\frac{\Sigma_\text{g} (R)}{\Sigma_0} = \frac{2}{\Sigma_0}\int_0^{\infty} d\Delta \, \rho\left(\sqrt{\Delta^2 + R^2}\right).
\end{equation}
In practice, we evaluated this integral in the range $[0, 40]$ Mpc and confirmed that our results are not sensitive to the exact value of the upper integration limit.
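Numerically, the projection of Equation~\eqref{eq:S} is a one-dimensional quadrature along the line of sight; a minimal sketch (illustrative, with the paper's $40$ Mpc truncation as the default) is:

```python
import numpy as np
from scipy.integrate import quad

def projected_profile(rho, R, d_max=40.0):
    """Line-of-sight projection of a 3D profile rho(r), Eq. (S),
    truncated at d_max along the line of sight (40 Mpc in the paper)."""
    val, _ = quad(lambda d: rho(np.sqrt(d * d + R * R)), 0.0, d_max)
    return 2.0 * val
```

As a sanity check, $\rho(r)=r^{-2}$ projects to $\pi/R$ for an infinite line of sight, which the quadrature reproduces once $d_\text{max}$ is large enough.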
The specific density profile that we have used is based on \cite{2014ApJ...789....1D}, and it has the following form:
\begin{align}
\label{eq:DK14}
\rho(r) &=
\rho_{\text{Ein}}(r) f_{\text{trans}} (r) + \rho_{\text{out}} (r), \\
\rho_{\text{Ein}}(r) &= \rho_{\text{s}} \exp \left( -\frac{2}{\alpha}\left[\left( \frac{r}{r_{\text{s}}} \right)^{\alpha} - 1 \right] \right), \\
f_{\text{trans}} (r) &= \left[ 1+ \left(\frac{r}{r_\text{t}}\right)^{\beta}\right]^{-g/\beta}, \\
\rho_{\text{out}}(r) &= \bar{\rho} \left( \frac{r}{r_0} \right)^{-s_\text{e}}.
\end{align}
These expressions define a profile with two components: an inner halo and an infalling region.
The term $\rho_\text{Ein}(r) f_\text{trans}(r)$ represents the collapsed halo through a truncated Einasto profile with shape parameter $\alpha$ and amplitude $\rho_s$ \citep{Einasto1965}.
The parameters $g, \beta$ in the transition function determine the maximum steepness of the sharp drop between the two regions, and $r_\text{t}$ determines its approximate location. Finally, the term $\rho_\text{out}(r)$ describes a power-law mass distribution with slope $s_\text{e}$ and amplitude $\bar{\rho}$, parametrizing the outer region dominated by infalling material. For more information about the role of each parameter and its interpretation, we refer the reader to \cite{2014ApJ...789....1D}, and previous measurements presented in the introduction \citep[see, e.g.,][for more details about the role of the truncation radius $r_\text{t}$]{2019MNRAS.485..408C}.
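The profile of Equation~\eqref{eq:DK14} and the extraction of $r_\text{sp}$ as the minimum of the log-log slope can be sketched as follows (parameter values here are illustrative placeholders, not the best-fitting values of this work):

```python
import numpy as np

def rho_dk14(r, rho_s=1.0, r_s=0.4, alpha=0.2, r_t=1.0, beta=6.0, g=4.0,
             rho_bar=0.005, r_0=1.5, s_e=1.5):
    """DK14 profile, Eq. (DK14): truncated Einasto inner halo plus a
    power-law infall term. All parameter values are illustrative."""
    rho_ein = rho_s * np.exp(-2.0 / alpha * ((r / r_s)**alpha - 1.0))
    f_trans = (1.0 + (r / r_t)**beta)**(-g / beta)
    rho_out = rho_bar * (r / r_0)**(-s_e)
    return rho_ein * f_trans + rho_out

def splashback_radius(r_grid, **params):
    """r_sp: the radius where d log(rho)/d log(r) reaches its minimum."""
    slope = np.gradient(np.log(rho_dk14(r_grid, **params)), np.log(r_grid))
    return r_grid[np.argmin(slope)]
```

The steepening imprinted by $f_\text{trans}$ makes the minimum slope at $r_\text{sp}$ markedly steeper than either the Einasto or the power-law term alone.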
This profile is commonly used to parameterize mass profiles, but here we use it to fit a galaxy number density profile. In this second type of fit, the amplitudes $\rho_\mathrm{s}$ and $\bar{\rho}$ are dimensionless and, together with the flexible shape of the profile, completely capture the connection between the galaxy and matter density fields. As with $\Sigma_0$, the values of these constants are not the focus of this paper.
To extract the location of the splashback radius for our two LRG samples, we fitted this model profile to the correlation function data using the ensemble sampler \textsc{emcee} \citep{Foreman-Mackey2013}. The priors imposed on the various parameters are presented in Table~\ref{tab:priors}, and we highlight in particular that the range for $\alpha$ is a generous scatter around the expectation from numerical simulations \citep{Gao2008}. The best-fitting profiles extracted from this procedure are shown in Figure~\ref{fig:measurement}.
In clusters, the location of the central galaxy might not correspond to the barycenter of the satellite distribution. While this discrepancy is usually accounted for in the modeling of the projected distribution in Equation~\eqref{eq:S}, we chose not to consider this effect in our primary analysis. This is justified by the fact that the miscentering term affects the profile within $R\sim0.1$ Mpc, while we are interested in the measurement around $R\sim 1$ Mpc \citep{2021arXiv210505914S}, and the data do not require a more flexible model to provide a good fit.
Finally, to transform the $r_\text{sp}$ measurements into a value for $M_\text{200m}$, we used the relations from \citet{2020ApJS..251...17D}, evaluated at our median redshift of $\bar{z}=0.44$. In this transformation, we employed the suggested theoretical definition of splashback, based on the $75$th percentile of the dark matter apocenter distribution. In the same paper, this definition based on particle dynamics has been found to accurately match the definition based on the minimum of $d \log \rho / d \log r$ used in this work. For more details about the relationship between these two definitions, we refer the reader to section 3.1 of \cite{2021MNRAS.tmp.1404C}.
Because the splashback radius depends on accretion rate, we used the median value of this quantity as a function of mass as a proxy for the effective accretion rate of our stacked sample. We note in particular that the additional scatter introduced by the accretion rate and redshift distributions is expected to be subdominant given the large number of LRGs we have considered.
\subsection{Lensing mass}
To extract masses from the lensing signal, we performed a fit using an NFW profile \citep{Navarro1996, Navarro1997}:
\begin{equation}
\label{eq:NFW}
\rho(r) =
\frac{1}{4 \pi F(c_\text{200m})}
\frac{M_\text{200m}}{r(r+ r_\text{200m}/c_\text{200m})^2},
\end{equation}
where $M_\text{200m}$ and $r_\text{200m}$ are related by Equation \eqref{eq:200m}, $c_\text{200m}$ is the halo concentration, and the function appearing in the first term is defined as:
\begin{equation}
\label{eq:f}
F(c) =\ln(1+c)-c/(1+c).
\end{equation}
From this three-dimensional profile, the lensing signal can be derived by replacing $\Sigma_\text{g}/\Sigma_0$ with $\Sigma_\text{m}$ in the projection Equations~\eqref{eq:S} and \eqref{eq:esd}.
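The normalization $1/(4\pi F(c_\text{200m}))$ in Equation~\eqref{eq:NFW} guarantees that the mass enclosed within $r_\text{200m}$ equals $M_\text{200m}$, which can be verified directly (an illustrative sketch; function names are ours):

```python
import numpy as np
from scipy.integrate import quad

def nfw_density(r, m200m, r200m, c):
    """NFW profile of Eq. (NFW), normalized so that the mass enclosed
    within r200m equals m200m."""
    F = np.log(1.0 + c) - c / (1.0 + c)          # Eq. (f)
    return m200m / (4.0 * np.pi * F) / (r * (r + r200m / c)**2)

# sanity check: integrating 4*pi*r^2*rho(r) out to r200m recovers m200m,
# confirming the role of F(c) in the normalization
```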
We point out that we did not use the complex model of Equation~\eqref{eq:DK14} for the lensing measurement. This is because the differences between the Einasto profile used there and the NFW profile presented above are not expected to induce systematic biases at the precision of our measurements \citep[see, e.g.,][]{2016JCAP...01..042S}. Although extra complexity might not be warranted, particular care should still be taken when measuring profiles at large scales, where the difference between the more flexible profile and a traditional NFW profile is more pronounced. Consequently, we reduced any bias in our measurement by fitting only projected distances $R<1.5$ Mpc, where the upper limit is based on the $r_\text{sp}$ inferred from our galaxy distribution measurement.
Because the mass and concentration of a halo sample are correlated, we can fix the concentration using one of the several mass-concentration relations calibrated against numerical simulations that are available in the literature.
For the measurement presented in this section, we used the mass-concentration relation of \cite{2013ApJ...766...32B}. However, because this relation is calibrated with numerical simulations based on a different cosmology, we also fit the lensing signal while keeping the concentration as a free parameter. This consistency check is particularly important because halo profiles are not perfectly self-similar \citep{2015ApJ...799..108D}, and moving between different cosmologies or halo mass definitions might require additional calibration.
We performed the fit to the profiles in the right panel of Figure~\ref{fig:measurement} using the median redshift of our samples, $\bar{z}=0.44$. We find that statistical errors dominate the uncertainties and that the assumed mass-concentration relation does not introduce any measurable systematic effect.
\subsection{Abundance mass}
In addition to the two mass measurements extracted from the galaxy and lensing profiles, we also calculated masses using an abundance matching argument.
The comoving density of haloes of a given mass is a function of cosmology \citep{1974ApJ...187..425P}. Since we expect a tight relationship between the mass of a halo and the luminosity of the associated galaxy, any lower limit in the first can be converted into a lower limit in the second. Therefore, our measurement of the comoving density in Figure~\ref{fig:redshift} can be converted into a mass measurement. We note, in particular, that this step assumes that \citet{2020arXiv200813154V} built a complete sample of LRGs with no contamination and that the luminosity estimates obtained in \citet{MCF2021} are accurate, at least in ranking.
We used the mass function of \cite{2008ApJ...688..709T} at the median redshift $\bar{z}=0.44$ to convert our fixed comoving densities into lower limits on the halo mass $M_\text{200m}$. To complete the process, we then extracted the mean mass of the sample using the same
mass function.
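The abundance-matching logic above, i.e., inverting the cumulative mass function at the measured comoving density and then averaging above the threshold, can be sketched with a toy mass function (a schematic stand-in for the \citealt{2008ApJ...688..709T} mass function, with an arbitrary normalization and an illustrative Schechter-like shape):

```python
import numpy as np

# toy Schechter-like halo mass function standing in for Tinker et al. (2008);
# dn/dlnM in arbitrary comoving units over an illustrative mass range
lnM = np.linspace(np.log(1e12), np.log(1e16), 4000)
M = np.exp(lnM)
dn_dlnM = (M / 1e13)**-0.9 * np.exp(-M / 1e15)
dln = lnM[1] - lnM[0]
n_gt = np.cumsum((dn_dlnM * dln)[::-1])[::-1]    # cumulative abundance n(>M)

def abundance_mass(n_target):
    """Invert n(>M) at the measured comoving density to obtain the mass
    threshold, then return (M_min, mean mass above the threshold)."""
    i = np.argmin(np.abs(n_gt - n_target))
    mean_m = np.sum((M * dn_dlnM * dln)[i:]) / n_gt[i]
    return M[i], mean_m
```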
The relation between halo mass and galaxy luminosity is not perfect, however, since the galaxy luminosity function is shaped by active galactic nuclei activity and baryonic feedback. These processes induce an increased scatter in the stellar mass to halo mass relation \citep{2014MNRAS.445..175G}, which we have not accounted for. This effect, combined with the uncertainties in the LRG selection and luminosity fitting, is the main source of error for our abundance-matching mass. Since we have not modeled these steps in this work, we do not quote an uncertainty for this measurement and report it without an error bar.
\section{Discussion}
\label{sec:discussion}
In this section, we compare and validate the measurements presented in the previous one. As an example of the power granted by multiple cluster mass measurements from the same survey, we also present an interpretation of these measurements in the context of modified theories of gravity.
In Figure~\ref{fig:mass} and Table~\ref{tab:table_masses}, we present the results of our two main mass measurements combined with the abundance-matching estimate introduced in the previous subsection. All measurements are in agreement, providing evidence that there is no significant correlation between the selection criteria of our LRG sample and the measurements performed here. The inferred average splashback masses of our LRG samples have an uncertainty of around $50$ percent.
The first striking feature is the varying degree of precision among the different measurements. The lensing result is the most precise, even when the concentration parameter is allowed to vary. In particular, the fact that the inferred profiles do not exhaust the freedom allowed by the error bars in the right-hand panel of Figure~\ref{fig:measurement} implies that our NFW model prior is responsible for the strength of our measurement and that a more flexible model would result in larger mass uncertainties. On the other hand, with splashback, we can produce a dynamical mass measurement without any knowledge of the shape of the average profile and, more importantly, without having to capture the exact nature of the measured scatter.
There is also a second, more important, difference between the two measurements that we want to highlight here. The SNR of the splashback mass is dominated by high-redshift LRGs since $\text{SNR}\sim \sqrt{N_\text{LRG}}$. While the ability to capture intrinsically fainter objects at low redshift might affect this scaling, we point out that the redshift-dependent magnitude cut introduced in Equation~\eqref{eq:magcut} explicitly prevents this. In contrast, the lensing weights in Equation~\eqref{eq:lensingeff} imply that the more numerous high-redshift objects do not dominate the lensing signal. This is due to a combination of the lower number of background sources available, the lower lensfit weights associated with fainter sources, and the geometrical term in Equation~\eqref{eq:lensingeff2}.
This point is explored quantitatively in Figure~\ref{fig:hz}, where we compare the two techniques for different redshift bins. The top panel is a projection of the left-hand panel of Figure~\ref{fig:mass} in terms of $r_\text{sp}$, while the other two are new results. These new measurements at higher redshift are obtained using the same methods presented in Section~\ref{sec:profiles}. To be precise: for the galaxy distribution, we impose a $10$ SNR cut for the KiDS galaxies and a subsequent color selection in the ${(i-Z)-(r-i)}$ plane; while for the lensing signal, we use the same source selection presented before. As visible in the figure, both measurements degrade at higher redshifts, but the two scale differently. If we consider the size of the $68$ percentile intervals for the two measurements, at $z=[0.2, 0.5]$ we obtain a ratio between the two of $1:7$, while at $z=[0.65, 0.7]$ we obtain a significantly better ratio of $1:2.5$. As discussed in Section~\ref{sec:future}, this different scaling has important implications for future photometric missions.
As a final note on our main results, we point out that the difference between the masses of the two samples (\emph{all} and \emph{high-mass}) is significant at the $2\sigma$ level for the lensing measurement, but it is not even marginally significant for the splashback values due to the large error bars. As already shown in \cite{2019MNRAS.485..408C}, splashback measurements are heavily weighted towards the most massive objects. To produce a non-mass-weighted measure of the splashback feature, it would be necessary to rescale the individual profiles with a proxy of the halo mass. However, because the study of $r_\text{sp}$ as a function of mass is not the main focus of this work, we leave this line of study open for future research.
\begin{table}
\hspace{-0.7cm}
\begin{tabular}{l|c|c|c|c}
\hline
Technique & \multicolumn{2}{c}{$M_\text{200m}$ ($10^{14}$ M$_\odot$)} & \multicolumn{2}{c}{$r_\text{sp}$ (Mpc)} \\
& All & High-mass & All & High-mass \\
\hline
Splashback & $0.57^{+0.36}_{-0.21}$ & $0.9^{+0.85}_{-0.38}$ & $1.48\pm 0.2$ & $1.68\pm 0.28$ \\
Lensing (fixed c) & $0.46\pm 0.03$ & $0.62\pm 0.05$ & $1.40\pm 0.01$ & $1.52\pm 0.02$ \\
\hline
Lensing (free c) & $0.44\pm 0.05$ & $0.54\pm 0.07$ & $1.39\pm 0.03$ & $1.6\pm 0.04$ \\
Abundance & $0.48$ & $0.74$ & $1.42$ & $1.6$\\
\hline
\end{tabular}
\caption{The mass measurements performed in this paper. This table summarizes the discussion of Section~\ref{sec:discussion} and the measurements presented in Figure~\ref{fig:mass} for our LRG samples (\emph{all} and \emph{high-mass}). The quoted splashback radii are in comoving coordinates. The abundance-matching measurements are provided without error bars as we have not modeled the selection function of our LRGs.
Most measurements and conversions between $M_\text{200m}$ and $r_\text{sp}$ are computed using a model at the median redshift $\bar{z}=0.44$, identical for both samples (see the end of Section~\ref{sec:fitmass} for details).
}
\label{tab:table_masses}
\end{table}
\subsection{Gravitational constants}
\label{sec:gravity}
In this subsection, we discuss how the combination of the lensing masses and splashback radii measured above can be used to constrain models of gravity. The principle behind this constraint is the fact that, while General Relativity (GR) predicts that the trajectories of light and massive particles are affected by the same metric perturbation, extended models generally predict a discrepancy between the two.
In extended models, the equations for the linearized-metric potentials
\citep[$\Phi$ and $\Psi$, see][]{1980PhRvD..22.1882B}
can be connected to the background-subtracted matter density $\rho(\bm{x})$ through the following equations \citep{2008JCAP...04..013A, 2008PhRvD..78b4015B, 2010PhRvD..81j4023P},
\begin{align}
\nabla^2 (\Phi + \Psi) = 8 \pi G \Sigma(x) \rho(x),
\label{eq:Poisson}
\\
\nabla^2 \Phi = 4 \pi G \mu(x) \rho(x).
\label{eq:Poisson2}
\end{align}
In the expressions above, the functions $\mu$ and $\Sigma$, also known as $G_\text{matter}/G$ and $G_\text{light}/G$, can in principle be functions of space and time (collectively indicated by $x$). We stress that the symbol $\Sigma$, previously used to refer to projected three-dimensional distributions ($\Sigma_\text{g}, \Sigma_\text{m}$), has a different use in this context. These equations are expressed in terms of $\Phi$ and $\Phi + \Psi$ because the trajectories of particles are affected by the first, while the deflection of light is governed by the second. In the presence of only non-relativistic matter, Einstein's equations in GR reduce to $\Phi=\Psi$ and we have $\Sigma = \mu = 1$.
The same type of deviation from GR can also be captured in the post-Newtonian parametrization by a multiplicative factor $\gamma$ between the two potentials: $\Psi = \gamma\Phi$. If $\mu, \Sigma$, and $\gamma$ are all constants, the three are trivially related:
\begin{equation}
\frac{\mu}{\Sigma} = \frac{1+\gamma}{2}.
\end{equation}
Under this same assumption, the ratio between the masses measured through lensing and the mass measured through the dynamics of test particles (e.g., faint galaxies or stars) can be used to constrain these parameters, and the literature contains multiple results concerning these extended models. Solar System experiments have constrained $\gamma$ to be consistent with its GR value ($\gamma=1$) up to $5$ significant digits \citep{2003Natur.425..374B}, but the current measurements at larger scales are substantially less precise. For kpc-sized objects (galaxy-scale), stellar kinematics have been combined with strong lensing measurements to obtain $10$ percent constraints \citep{Bolton:2006yz, 2018Sci...360.1342C}, while large-scale measurements ($\sim 10-100$ Mpc) can be obtained by combining cosmic shear and redshift space distortion measurements to achieve a similar precision \citep[see, e.g.,][]{2013MNRAS.429.2249S, 2018MNRAS.474.4894J}. As for the scales considered in this paper, a precision of about $30$ percent can be obtained by combining lensing masses with either the kinematics of galaxies inside fully collapsed cluster haloes \citep{2016JCAP...04..023P} or the distribution of hot X-ray emitting gas \citep{2015MNRAS.452.1171W}. However, in this case, the effects of the required assumptions (e.g., spherical symmetry and hydrostatic equilibrium for the gas) are harder to capture. In all cases, no deviation from GR has been measured.
As an example of the power of the measurements presented in Section~\ref{sec:fit}, we present here their implication for beyond-GR effects. On one hand, our lensing signal is a measurement of the amplitude $M_\text{200m, L}$ of the lensing matter density $\rho_L = \rho \Sigma$. On the other hand, the splashback radius $r_\text{sp}$ depends on the amplitude of $ \rho_L \times \mu/\Sigma$ and it is related to the splashback mass $M_\text{200m, sp}$. Therefore, we focus on the ratio of these two amplitudes measured in the high-mass sample:
\begin{align}
\frac{\mu}{\Sigma} = \frac{M_\text{200m, L}}{M_\text{200m, sp}} = 0.8 \pm 0.4 && \Leftrightarrow && \gamma = 0.6\pm 0.8.
\end{align}
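The propagation from the measured mass ratio to the post-Newtonian parameter is a one-line linear transformation, sketched here for transparency (an illustrative computation, assuming constant $\mu$ and $\Sigma$):

```python
# linear error propagation from the lensing-to-splashback mass ratio to
# the post-Newtonian parameter gamma, assuming constant mu and Sigma
ratio, ratio_err = 0.8, 0.4          # mu/Sigma = M_200m,L / M_200m,sp (high-mass sample)
gamma = 2.0 * ratio - 1.0            # from mu/Sigma = (1 + gamma)/2
gamma_err = 2.0 * ratio_err          # -> gamma = 0.6 +/- 0.8
```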
In high-density regions such as the Solar System, the expectation $\gamma = 1$ must be recovered with high precision. Hence, alternative theories of gravity commonly predict scale- and density-dependent effects, which cannot be captured through constant values of $\mu$ and $\Sigma$. Because $r_\text{sp}$ marks a sharp density transition around massive objects, it is well suited to test these complicated dependencies. To provide an example of the constraints possible under this second, more complex, interpretation, we followed \cite{Contigiani_2019} to model the effect of an additional scale-dependent force (also known as a fifth force) on the location of the splashback radius $r_\text{sp}$. In particular, the model we employed is an extension of self-similar spherical collapse models and neglects any non-isotropic effects, e.g. those introduced by miscentering and halo ellipticity.
In the context of symmetron gravity \citep{2011PhRvD..84j3521H}, the change in $r_\text{sp}$ introduced by the fifth force is obtained by integrating the trajectories of test particles in the presence or absence of this force. In total, the theory considered has three parameters: 1) $\lambda_0/R(t_0)$, the dimensionless vacuum Compton wavelength of the field, which we fix to $0.05$ times the size of the collapsed object; 2) $z_\text{SSB}$, the redshift at which the fifth force is turned on in cosmic history, which we fix at $z_\text{SSB}=1.25$; and 3) $f$, a dimensionless force-strength parameter that is zero in GR. The fixed values are chosen based on physical considerations, motivated by the connection of these gravity models to dark energy, while maximizing the impact on splashback. See \cite{Contigiani_2019} for more details.
To match the expectation of the model to observations, we first converted the $M_\text{200m}$ lensing measurement into an expected splashback radius $r_\text{sp, L}$ by reversing the procedure explained at the end of Section~\ref{sec:fitmass} and then compared the measured $r_\text{sp}$ to this value. From the high-mass data, we obtained the following $1\sigma$ constraints:
\begin{align}
\label{eq:resultf}
\frac{ r_\text{sp, L} - r_\text{sp}}{r_\text{sp, L}} = 0.07 \pm 0.20 && \implies && f < 1.8.
\end{align}
The symmetron theories associated with $z_\text{SSB}\sim 1$ and cluster-sized objects correspond to a coupling mass scale $M_S$ of the order of $10^{-6}$ Planck masses, a region of the parameter space which is still allowed by the Solar System constraints \citep{2011PhRvD..84j3521H} and which has not been explored by other tests of symmetron gravity \citep[see, e.g.,][]{2018PhRvD..98f4019O, 2018LRR....21....1B}. In particular, the upper limit on $f$ produced here directly translates into a constraint on the symmetron field potential of \cite{Contigiani_2019}.\footnote{However, we stress here that this constraint does not have implications for dark energy, as the model considered is not able to drive cosmic acceleration in the absence of a cosmological constant.} In terms of the explicit parameters of the potential, reported here with an additional subscript $s$ for clarity ($M_s, \lambda_s, \mu_s$), we can define the degeneracy line delimiting the boundary of the constraint using the following relations:
\begin{align}
f \propto \mu_s \lambda_s^{-1} M_s^{-4}, && (1+z_\text{SSB})^3 \propto M_s^2\mu_s^2.
\end{align}
Therefore, our result shows that we can test the existence of scalar fields with quite weak couplings and directly project these measurements into a broader theory parameter space.
\subsection{Future prospects}
\label{sec:future}
Our results show that the precision of the recovered splashback mass is not comparable to the low uncertainty of the lensing measurements. Because of this, every constraint based on comparing the two is currently limited by the splashback uncertainty. While this paper's focus is not to provide accurate forecasts, we attempt to quantify how we expect these results to improve in the future with larger and deeper samples. In particular, we focus our attention on wide stage-IV surveys such as \emph{Euclid} \citep{laureijs2011euclid} and the Legacy Survey of Space and Time \citep[LSST,][]{2009arXiv0912.0201L}.
First, we investigate how our results can be rescaled. In the process of inferring $M_\text{200m}$ from $r_\text{sp}$, we find that the relative precision of the former is always a multiple ($3-4$) of the latter. This statement, which we have verified over a wide range of redshifts ($z \in [0, 1.5]$) and masses ($M_\text{200m} \in [10^{13}, 10^{15}]~\text{M}_\odot$), is a simple consequence of the low slope of the $M_\text{200m}-r_\text{sp}$ relation. Second, we estimate the size of the cluster sample such surveys can obtain and how that translates into an improved error bar for $r_\text{sp}$. LSST is expected to reach $2.5$ magnitudes deeper than KiDS and to cover an area of the sky $18$ times larger \citep{2009arXiv0912.0201L}. Part of this region is covered by the Galactic plane and will need to be excluded in practice, but the resulting LRG sample will reach up to $z\sim1.2$ and cover a comoving volume about a factor $100$ larger than what is considered in this work. Because the selected LRGs are designed to have a constant comoving density, we can use this estimate to scale the error bars of our galaxy profile measurement. A sample $N=100$ times the size would result in a relative precision in $r_\text{sp}$ of about $2.5$ percent, which translates into a relative precision on $M_\text{200m}$ below $10$ percent. This result is obtained by simply re-scaling the error bars of the galaxy profiles by a factor $\sqrt{N} = 10$, but we stress that the effects do not scale linearly for $r_\text{sp}$ due to the slightly skewed posterior of this parameter. While this uncertainty is still larger than what is allowed by lensing measurements, we point out that this method can easily be applied to high-redshift clusters, for which lensing measurements are difficult due to the fewer background sources available (see Figure~\ref{fig:redshift}).
We note that this simple forecast sidesteps a few issues. Here we consider three of them and discuss their implications and possible solutions. 1) At high redshift, color identification requires additional bands, as the $4000$ \AA~break moves out of the LSST $grizy$ filters, so additional photometry will be required. 2) Even if we assume that an LRG sample can be constructed, the population of orbiting satellites at high redshift might not necessarily be easy to identify, as the red sequence is only beginning to form. Ideally, there is always a color-magnitude galaxy selection that provides a profile compatible with the dark matter profile, but, at this moment, further investigation is required. 3) Finally, with more depth, we also expect fainter satellites to contribute to the galaxy profile signal, but the details of this population for large cluster samples at high redshift are not known. A simple extrapolation of the observed satellite magnitude distribution implies that the number of satellites forming the galaxy distribution signal might be enhanced by an additional factor $10$, reducing the errors in mass to a few percent. This, however, is complicated by the fact that different galaxy populations might present profiles inconsistent with the dark matter features \citep{2022arXiv220205277O}.
In addition to the forecast for the galaxy profiles discussed above, we also expect a measurement of $r_\text{sp}$ with an uncertainty of a few percent directly from the lensing profile \citep{2020MNRAS.499.3534X}. This precision will only be available at relatively low redshifts ($z\sim0.45$), enabling a precise comparison of the dark matter and galaxy profiles. This cross-check can also be used to understand the effects of galaxy evolution in shaping the galaxy phase-space structure \citep{2021arXiv210505914S} and help disentangle the effects of dynamical friction, feedback, and modified models of dark matter \citep{2016JCAP...07..022A, 2020JCAP...02..024B}.
\section{Conclusions}
\label{sec:conclusion}
Accretion connects the mildly non-linear environment of massive haloes to the intrinsic properties of their multi-stream regions. In the last few years, precise measurements of the outer edge of massive dark matter haloes have become feasible thanks to the introduction of large galaxy samples, opening a new research field.
In this paper, we have used the splashback feature to measure the average dynamical mass of haloes hosting bright KiDS LRGs. To support our result, we have validated this mass measurement using weak lensing masses and a simple abundance-matching argument (see Figure~\ref{fig:mass} and Table~\ref{tab:table_masses}).
The main achievement that we want to stress here is that these self-consistent measurements are exclusively based on photometric data. In particular, the bright LRG samples used here can be easily matched to simulations, offer a straightforward interpretation, and, in general, are found to be robust against systematic effects in the redshift calibration \citep{2021arXiv210106010B}. This is in contrast to other dynamical mass results presented in the literature: such measurements are based on expensive spectroscopic data \citep[see, e.g., ][]{2016ApJ...819...63R} and are found to produce masses higher than lensing estimates \citep{2020MNRAS.497.4684H}, an effect which might be due to systematic selection biases afflicting these more accurate measurements \citep{2015MNRAS.449.1897O}.
Because the relation between $r_\text{sp}$ and halo mass depends on cosmology, this measurement naturally provides a constraint on structure formation.
In this work, we have shown how the combination of splashback and lensing masses has the ability to constrain deviations from GR and the presence of fifth forces (see Section~\ref{sec:gravity}).
Although the precision of the splashback measurement is relatively low with current data, trends with redshift, mass, and galaxy properties are expected to be informative in the future \citep{2020MNRAS.499.3534X, 2021arXiv210505914S}. Next-generation data will enable new studies of the physics behind galaxy formation \citep{2020arXiv200811663A}, as well as the large-scale environment of massive haloes \citep{2021MNRAS.tmp.1404C}.
As mentioned in Section~\ref{sec:future}, stage IV surveys will substantially advance these new research goals. In particular, we have shown that splashback masses scale purely with survey volume, unlike lensing. This implies that this technique is uniquely positioned to provide accurate high-redshift masses.
\section*{Acknowledgements}
OC is supported by a de Sitter Fellowship of the Netherlands Organization for Scientific Research (NWO) and by the Natural Sciences and Engineering Research Council of Canada (NSERC). HH, MCF, and MV acknowledge support from the Vici grant No. 639.043.512 financed by the Netherlands Organisation for Scientific Research (NWO). AD is supported by a European Research Council Consolidator Grant No. 770935. ZY acknowledges support from the Max Planck Society and the Alexander von Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award endowed by the Federal Ministry of Education and Research (Germany). CS acknowledges support from the Agencia Nacional de Investigaci\'on y Desarrollo (ANID) through FONDECYT grant no.\ 11191125. All authors contributed to the development and writing of this paper. The authorship list is given in two alphabetical groups: two lead authors (OC, HH) and a list of authors who made a significant contribution to either the data products or the scientific analysis.
\section*{Data Availability}
The Kilo-Degree Survey data is available at the following link \url{https://kids.strw.leidenuniv.nl/}. The intermediate data products used for this article will be shared at reasonable request to the corresponding authors.
\bibliographystyle{mnras}
\bibliography{bibliography}
\bsp %
\label{lastpage} |
Title:
The radiation emitted from axion dark matter in a homogeneous magnetic field, and possibilities for detection |
Abstract: We study the direct radiation excited by oscillating axion (or axion-like
particle) dark matter in a homogeneous magnetic field, and a scheme for its detection.
We concretely derive the analytical expression for the axion-induced radiated
power of a cylindrical uniform magnetic field. In the long-wave limit, the
radiated power is proportional to the squares of the B-field volume and the
axion mass $m_a$, whereas it oscillates as the short-wave limit is approached,
with peak powers proportional to the side area of the cylindrical magnetic
field and to $m_a^{-2}$. The maximum power is located at mass
$m_a\sim\frac{3\pi}{4R}$ for fixed radius $R$. Based on this characteristic of
the power, we discuss a scheme to detect axions in the mass range
$1-10^4$\,neV, in which four detectors of different bandwidths surround the
B-field. The expected sensitivity for $m_a\lesssim1\,\mu$eV under
typical parameter values can far exceed the existing constraints.
| https://export.arxiv.org/pdf/2208.10398 |
\title{The radiation emitted from axion dark matter in a homogeneous magnetic field, and possibilities for detection}
\author{Shuo Xu}
\affiliation{School of Physics, Sun Yat-sen University, Guangzhou, GuangDong, People's Republic of China}
\affiliation{Department of Astronomy, Tsinghua University, People's Republic of China}
\author{Siyu Chen}
\author{Hong-Hao Zhang}
\email[Corresponding author. ]{zhh98@mail.sysu.edu.cn}
\author{Guangbo Long}
\email[Corresponding author. ]{longgb@mail2.sysu.edu.cn}
\affiliation{School of Physics, Sun Yat-sen University, Guangzhou, GuangDong, People's Republic of China}
\section{Introduction}
\label{sec:intro}
The axion originates from the Peccei-Quinn solution to the strong $CP$ problem. Its generalizations, axionlike particles (ALPs; hereafter simply referred to as axions), are predicted by many extensions of the Standard Model~\cite{Svrcek:2006yi}. These well-motivated, electrically neutral pseudoscalar particles are also prime candidates for dark matter (DM)~\cite{Preskill1983,Abbott1983,Dine1983}.
The interaction between the axion and the photon is described by the Lagrangian ${\cal L}_{a\gamma}=g_{a\gamma}{\bf E}\cdot{\bf B}\,a$, where ${\bf E}$ represents the electric field, ${\bf B}$ the magnetic field, $a$ the axion, and $g_{a\gamma}$ the coupling constant. If the axion is the DM particle, there is an oscillating axion field $a(t)$ in our Galactic halo. Together with the axion field, an external magnetic field ${\bf B}_{\rm e}$ induces an extra source term in Maxwell's equations in the form of the effective electric current density~\cite{Millar2017JCAP}
\begin{equation}
{\bf J}_a(t)=g_{a\gamma}{\bf B}_{\rm e}\dot a(t).
\label{Ja}
\end{equation}
Therefore, the energy of axion dark matter (ADM) can be converted into electromagnetic fields, and the converted photons are expected to be observable as long as the B-field is strong enough. A number of experimental schemes have been designed to detect this axion-induced signal~\cite{Sikivie:2020zpn}.
At present, most existing or proposed experiments (haloscopes) take advantage of the resonant enhancement of the EM modes driven by ${\bf J}_a(t)$~\cite{Sikivie:2020zpn}. For $1\,\mu {\rm eV} \lesssim m_{a}\lesssim 50\,\mu$eV, cavity haloscopes (e.g.,~ADMX~\cite{ADMX:2021nhd}, WISPDMX~\cite{WISPDMX}, CAPP~\cite{CAPP:2020utb}, HAYSTAC~\cite{Brubaker:2016ktl}, QUAX~\cite{Alesini:2019ajt}, GrAHal~\cite{Grenet:2021vbb}, RADES~\cite{CAST:2020rlf}, ORGAN~\cite{Goryachev:2017wpw}) in strong magnetic fields are the optimal strategy, in which a narrow-band axion-induced excitation around each of the cavity resonances can be achieved~\cite{Sikivie:1983ip}. Smaller masses, $m_{a}\lesssim 1\, \mu$eV, can be explored with LC circuits~\cite{DMRadio,Cabrera2008,Sikivie:2013laa,Kahn:2016aff,Zhang:2021bpa,Chen:2021bgy} or radio-frequency cavity haloscopes~\cite{Berlin:2019ahk,Bogorad:2019pbu,Lasenby:2019prg, Berlin:2020vrk,Sikivie1009}. The former has been experimentally realized by SHAFT~\cite{Gramolin:2020ict}, ABRACADABRA~\cite{Ouellet:2018beu}, ADMX-SLIC~\cite{Crisosto:2019fcj}, and the BASE Penning trap~\cite{Devlin:2021fpq}. For higher masses, $50\,\mu {\rm{eV}}\lesssim m_{a}\lesssim\,1\,$eV, traditional resonant-cavity experiments are limited by the small volume required for resonant enhancement and by impractical scan rates. As a result, several novel detection techniques based on resonance or constructive interference, such as topological-insulator targets~\cite{Marsh:2018dlj,Schutte-Engel:2021bqm}, plasma haloscopes~\cite{Lawson:2019brd}, multiple cavities~\cite{Aja:2022csb,Jeong:2020mtp,Melcon:2018dba,Baryakhtar:2018doz}, and dielectric haloscopes~\cite{Caldwell:2016dcw,MADMAX:2019pub,Millar2017JCAP}, have been proposed to probe ADM in this mass band.
Another strategy to probe high-mass ADM is the dish antenna~\cite{Horns:2012jf} or solenoidal haloscope~\cite{BREAD:2021tpx}, which does not resort to resonant enhancement of the signal. The detector catches the axion-induced emission from the magnetized dish-antenna or cylindrical metal-barrel surface, and the detectable power is proportional to the area of the reflecting surface $A_{\rm ref}$~\cite{Horns:2012jf,Jaeckel:2013eha}.
These haloscopes compensate for the loss of resonant enhancement by increasing the volume of the B-field, and can operate in a broadband mode. Note that the dielectric haloscopes~\cite{Caldwell:2016dcw,MADMAX:2019pub,Millar2017JCAP} evolved from the dish-antenna scheme.
In this work, we generalize the dish-antenna strategy to a fully open configuration without a reflector in the B-field, which (or a similar case) was also mentioned in a preprint~\cite{Horns:2013ira}. We focus on the direct radiation excited by the oscillating electric current density ${\bf J}_a(t)$, and concretely derive the analytical expression of the radiated power for a cylindrical uniform B-field, a geometry common in scientific detection instruments. We then discuss the possibility of detecting ADM by means of the derived radiated power.
The plan of this paper is as follows.
In Sec.~\ref{secII}, we will derive the radiated power and the energy flux from axion conversion in a homogeneous magnetic field.
Then, in Sec.~\ref{secIII} we will discuss the experimental detection of the ADM. Finally, discussions and conclusions are presented in Secs.~\ref{sec:discussion} and~\ref{sec:conclusion}, respectively.
\section{The radiated power from axion conversion}
\label{secII}
This section derives the radiated power and the energy flux from axion-photon conversion in a homogeneous static B-field $\mathbf{B}_e$. We use Heaviside-Lorentz units, in which $c=1$, $k_{\rm B}=1$, and the permeability and permittivity of the vacuum are $\mu_0= \epsilon_0 = 1$.
\subsection{Axion electrodynamics}
We assume the axion field $a(t) = {\rm{Re}} (a_0\, e^{\rm{i}\,(\mathbf{k}_a\cdot \mathbf{x} - \omega t)})$ $\simeq$ ${\rm{Re}} (a_0\,e^{-\rm{i}\,\omega t})$, since ADM particles move non-relativistically, so that $k_a\ll\omega$ and $\omega=m_a$. The retarded potential can be obtained by solving Maxwell's equations with the single source term $\mathbf{J}_a$ of Eq.\,(\ref{Ja})~\cite{Sikivie:2020zpn}
\begin{equation}
\mathbf{A} (\mathbf{x}) ={1\over 4 \pi}
\int_V
{\mathbf{j}_a\,{\rm e}^{{\rm i}\,kr} \over r}~dV^{\,\prime},
\label{A1}
\end{equation}
where $k = \omega$, $\mathbf{j}_a={\rm i}\,g_{a\gamma} \omega a_0 \mathbf{B}_{\rm e}$, and $\epsilon=\mu=1$; the outgoing-wave phase ${\rm e}^{{\rm i}kr}$ matches the $e^{-{\rm i}\omega t}$ convention above. Here, $r=\vert\mathbf{x} - \mathbf{x}^{\,\prime}\vert>0$ is the distance from the observation location $\mathbf{x}$ to the source point $\mathbf{x}^{\,\prime}$ in the magnetic-field region, and the integration extends over the B-field volume $V$.
The energy flux $\langle \mathbf{S} \rangle$ of the axion-induced radiation, in terms of the E-field $\mathbf{E}$ and B-field $\mathbf{B}$, is given by
\begin{equation}
\langle \mathbf{S} \rangle\,=\frac{\rm Re(\mathbf{E}^{*}\times \mathbf{B})}{2}
,\,\,\, \mathbf{B}=\nabla\times \mathbf{A}, \,\,\, \mathbf{E}=\frac{\rm i}{\omega}\nabla\times \mathbf{B}.
\label{S}
\end{equation}
The bracket $\langle ... \rangle$ represents a time average.
For convenience, we calculate the radiated power from the axion conversion in the Fraunhofer (far-field) zone, i.e.,~$r>\frac{L^2\omega}{\pi}$ and $r\simeq\vert\mathbf{x}\vert\gg L$, where $L$ is the characteristic size of the B-field region.
The retarded potential in Eq.\,(\ref{A1}) can be reduced to
\begin{equation}
\mathbf{A}(\mathbf{x}) = {{\rm e}^{{\rm i}\,kr} \over 4 \pi r}\,\mathbf{j}_a(\mathbf{k})
+ {\rm O}({1 \over r^2}),
\label{A2}
\end{equation}
where $\mathbf{j}_a(\mathbf{k})=\int_V
{{\rm e}^{-{\rm i}\,\mathbf{k}\cdot\,\mathbf{x}^{\,\prime}}\mathbf{j}_a}~dV^{\,\prime}$ is the Fourier transform of $\mathbf{j}_a$. The direction of the wave vector satisfies $\mathbf{n}=\mathbf{k}/k=\mathbf{r}/r$.
The averaged electromagnetic power $dP$ radiated through the area element $d\mathbf{s}$ in
direction $\mathbf{n}$ is $dP=\langle \mathbf{S}\rangle\cdot d\mathbf{s}=\langle \mathbf{S} \rangle\cdot\mathbf{n}\,r^2 d\Omega$. The time-averaged power in the Fraunhofer zone can then be estimated as
\begin{eqnarray}
P &=&\int_\Omega \langle \mathbf{S} \rangle\cdot\mathbf{n}\,r^2\,d\Omega
\nonumber\\
&=&\int_\Omega{k \omega \over 32\pi^2}
\vert \mathbf{n} \times \mathbf{j}_a (\mathbf{k})\vert^2\,d\Omega\nonumber\\
&=&
\frac{g_{a\gamma}^2 k \omega^3 \vert a_0\vert^2}{32\pi^2}\,\int_\Omega\,\bigg| \int_V\,\mathbf{n}\times\mathbf{B_{\rm e}}\,
{{\rm e}^{-{\rm i}\,\mathbf{k}\cdot\,\mathbf{x}^{\,\prime}}}dV^{\,\prime}
\bigg|^2\,d\Omega \nonumber\\
&=& {\rho_a g_{a\gamma}^2 B_{\rm e}^2 \omega^2 \over 16 \pi^2}
\int_{0}^{2\pi}d\varphi\int_{0}^{\pi} d\theta\, {\rm sin}^3\theta\,
\bigg| \int_V\,
{{\rm e}^{-{\rm i}\,\mathbf{k}\cdot\,\mathbf{x}^{\,\prime}}}dV^{\,\prime}
\bigg|^2
\label{kpow}
\end{eqnarray}
where $\rho_a=\frac{\omega^2 \vert a_0\vert^2}{2}$ is the energy density of the ADM, and we assume $\vert\mathbf{n}\times\mathbf{B_{\rm e}}\vert=B_{\rm e}\,{\rm sin}\theta$ for the homogeneous external B-field (see Fig.\,\ref{B}).
Note that the excited E-field in the homogeneous B-field is also an oscillating field ${\bf E}_a(t)=-g_{a\gamma}{\bf B}_{\rm e}\,a(t)$~\cite{Millar2017JCAP}.
\subsection{The axion-induced radiated power for a cylindrical shape of the magnetic field}
We study the axion-induced radiation in a cylindrical uniform B-field
with height $h$ and radius $R$, as shown in Fig.\,\ref{B}.
The time-averaged radiated power can be further simplified by integrating over the volume of the B-field in Eq.\,(\ref{kpow}). We calculate $\int_V\,
{{\rm e}^{-i\,\mathbf{k}\cdot\,\mathbf{x}^{\,\prime}}}dV^{\,\prime}$ in cylindrical coordinates for $\mathbf{x}^{\,\prime}$ and in spherical coordinates for $\mathbf{k}$ as
\begin{eqnarray}
I_V &=&
\int_{-\frac{h}{2}}^{\frac{h}{2}}{\rm e}^{-{\rm i}kz{\rm cos}\theta}\, dz\,\int_{0}^{2\pi}d\phi\int_{0}^{R}\rho\, d\rho\,{\rm e}^{-{\rm i} \rho k {\rm sin}\theta\,({\rm cos}\varphi {\rm cos}\phi+{\rm sin}\varphi {\rm sin}\phi)}\nonumber\\
&=& \frac{2{\rm sin}(\frac{h}{2}k{\rm cos}\theta)}{k{\rm cos}\theta}\int_{0}^{R}\rho\, d\rho\int_{0}^{2\pi}d\phi\,{\rm e}^{-{\rm i} \rho k {\rm sin}\theta {\rm cos}(\varphi-\phi)} \nonumber\\
&=& \frac{2{\rm sin}(\frac{h}{2}k{\rm cos}\theta)}{k{\rm cos}\theta}\,\int_{0}^{R}\rho\,d\rho\int_{0}^{2\pi}d\phi\,{\rm e}^{-{\rm i} \rho k {\rm sin}\theta {\rm cos}\phi},
\label{pow}
\end{eqnarray}
where we have used the azimuthal symmetry of the radiation and have chosen $\varphi=0$. By means of the
zeroth-order Bessel function ${\rm J}_0(x)=\frac{1}{2\pi}\int_{0}^{2\pi}{\rm e}^{-{\rm i} x {\rm cos}\phi}\,d\phi$, the expression for $I_V$ can be reduced to
\begin{eqnarray}
I_V &=& \frac{2{\rm sin}(\frac{h}{2}k{\rm cos}\theta)}{k{\rm cos}\theta}\,\int_{0}^{R}2\pi{\rm J}_0(\rho k {\rm sin}\theta)\rho\, d\rho \nonumber\\
&=& \frac{2{\rm sin}(\frac{h}{2}k{\rm cos}\theta)}{k{\rm cos}\theta}\frac{2\pi}{(k {\rm sin}\theta)^2}\int_{0}^{Rk{\rm sin}\theta}(\rho k{\rm sin}\theta){\rm J}_0(\rho k {\rm sin}\theta)\nonumber\\
& &d(k\rho {\rm sin}\theta)\nonumber\\
&=& \frac{2{\rm sin}(\frac{h}{2}k{\rm cos}\theta)}{k{\rm cos}\theta}\,2\pi R^2\frac{{\rm J}_1(Rk{\rm sin}\theta)}{Rk{\rm sin}\theta},
\label{pow1}
\end{eqnarray}
where we have used the relation $\int_0^x x'{\rm J}_0(x')\,dx'=x{\rm J}_1(x)$ in the last step of the derivation. The integral over $\rho$ and $\phi$ above is the same one that governs Fraunhofer diffraction by a circular aperture.
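The Bessel identity used in the last step can be verified numerically; the following check is ours, not part of the paper:

```python
# Numerical check of int_0^X x J_0(x) dx = X J_1(X), the identity that
# reduces the circular-aperture integral above to the J_1 form.
from scipy.integrate import quad
from scipy.special import j0, j1

X = 2.5  # arbitrary test value of R*k*sin(theta)
lhs, _ = quad(lambda x: x * j0(x), 0.0, X)
rhs = X * j1(X)
print(lhs, rhs)  # agree to quadrature accuracy
```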
Therefore, one can obtain the time-averaged radiated power for the ``cylindrical'' B-field as
\begin{eqnarray}
P &=& {\rho_a g_{a\gamma}^2 B_{\rm e}^2 \omega^2 \over 16 \pi^2}
\int_{0}^{2\pi}d\varphi\int_{0}^{\pi}d\theta\,{\rm sin}^3\theta\,
|I_V|^2\nonumber\\
&=&{2\pi \rho_a g_{a\gamma}^2 B_{\rm e}^2 R^4}\int_{0}^{\pi}{\rm sin}^3\,\theta \Big(\frac{{\rm sin}(\frac{h}{2}\omega{\rm cos}\theta)}{{\rm cos}\theta}\Big)^2\,\nonumber\\
&&\Big(\frac{{\rm J}_1(R\omega{\rm sin}\theta)}{R\omega{\rm sin}\theta}\Big)^2 d\theta.
\label{pow2}
\end{eqnarray}
The integral above cannot be evaluated analytically, so we study the integrand in order to approximate it. Fig.\,\ref{fig:f} shows numerical plots of the integrand of Eq.\,(\ref{pow2}) for different sizes of the B-field volume, where the yellow lines represent an approximation obtained by setting $\theta=\pi/2$ in the ``aperture diffraction function'' $\big(\frac{{\rm J}_1(R\omega{\rm sin}\theta)}{R\omega{\rm sin}\theta}\big)^2$. The ordinates of the top and middle panels are logarithmic. We find that the radiation is concentrated around the horizontal direction ($\theta=\frac{\pi}{2}$), similar to the principal maximum of Fraunhofer diffraction. The number of secondary peaks in the angular distribution of the radiation is $N=2(N_h+ N_R-2)$ when $N_h=\frac{h \omega}{2\pi}$ and $N_R=\frac{R\omega}{\pi}$ are integers. When $h>\frac{2\pi}{\omega}$ and $R>\frac{\pi}{\omega}$, the radiation is more concentrated horizontally. The approximation is more accurate for $h\gtrsim2R$, i.e., when the longitudinal radiation dominates the overall radiation, as shown in the top panel.
Motivated by this analysis, we can approximate the longitudinal part of the integrand by a delta function. Substituting $u={\rm cos}\theta$ and using $\frac{{\rm sin}^2(\frac{\omega h}{2}u)}{u^2}\to \frac{\pi\omega h}{2}\,\delta(u)$ for large $\frac{\omega h}{2}$, we obtain
\begin{eqnarray}
P&=&{2\pi \rho_a g_{a\gamma}^2 B_{\rm e}^2 R^4}\int_{-1}^{1}{\rm sin}^2\theta\, \frac{{\rm sin}^2(\frac{\omega h}{2}u)}{u^2}\,\Big(\frac{{\rm J}_1(R\omega{\rm sin}\theta)}{R\omega{\rm sin}\theta}\Big)^2 du\nonumber\\
&\simeq& {2\pi \rho_a g_{a\gamma}^2 B_{\rm e}^2 R^4}\int_{-1}^{1}{\rm sin}^2\theta\, \frac{\pi\omega h}{2}\,\delta(u)\,\Big(\frac{{\rm J}_1(R\omega{\rm sin}\theta)}{R\omega{\rm sin}\theta}\Big)^2 du\nonumber\\
&=& {\pi^2 \rho_a g_{a\gamma}^2 B_{\rm e}^2 R^4 h \omega}
\Big(\frac{{\rm J}_1(R\omega)}{R\omega}\Big)^2
\label{pd}.
\end{eqnarray}
To ensure the validity of the $\delta$ approximation, $\frac{\omega h}{2}$ must be large compared to the scales over which the remaining factors of the integrand vary. Here, we require $\frac{\omega h}{2}>1$ (the maximum of $u={\rm cos}\theta$) to ensure the horizontal focusing of the radiation, and $h\gtrsim2R$ to ensure the dominance of the longitudinal radiation, as analyzed above.
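The quality of the $\delta$ approximation can be checked numerically; the following sketch (ours, not from the paper) compares the dimensionless angular integral of Eq.\,(\ref{pow2}) with its delta-function limit for parameters near the first radiation peak:

```python
# Compare I(A,B) = int_0^pi sin^3(t) [sin(A cos t)/cos t]^2
# [J1(B sin t)/(B sin t)]^2 dt with its approximation pi*A*[J1(B)/B]^2,
# where A = omega*h/2 and B = omega*R (dimensionless form of Eqs. pow2/pd).
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

def angular_integral(A, B):
    def f(t):
        s, c = np.sin(t), np.cos(t)
        # sin(A*c)/c written via sinc so the integrand stays finite at t = pi/2
        long_part = (A * np.sinc(A * c / np.pi))**2
        return s**3 * long_part * (j1(B * s) / (B * s))**2
    val, _ = quad(f, 1e-9, np.pi - 1e-9, limit=400)
    return val

A, B = 10.0, 3 * np.pi / 4   # omega*h/2 = 10; omega*R at the first peak
exact = angular_integral(A, B)
approx = np.pi * A * (j1(B) / B)**2
print(exact, approx)         # close near the peak, as stated in the text
```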
We further use the large-argument expansion of the Bessel function, ${\rm J}_1(x)\simeq\sqrt{\frac{2}{\pi x}}\,{\rm cos}(x-\frac{3\pi}{4})$ for $\omega R>1$, so that Eq.\,(\ref{pd}) reduces to
\begin{eqnarray}
P&\simeq& 2\pi \rho_a g_{a\gamma}^2 B_{\rm e}^2 \frac{R h}{\omega^2}
{\rm cos}^2(R\omega-\frac{3\pi}{4}),\,\,(\frac{\omega h}{2}\gtrsim\omega R>1) . \nonumber\\
\label{pJ}
\end{eqnarray}
We are most interested in the extrema (resonance points) of the power $P$, around which the $\delta$ approximation is also most accurate. The power at the extreme points is given by
\begin{eqnarray}
P &=&\rho_a g_{a\gamma}^2 B_{\rm e}^2 \frac{A}{\omega^2},\,\big(\frac{\omega h}{2}\gtrsim R\omega=(n+\frac{3}{4}) \pi, n=0,1,2\cdot\cdot\cdot\big)\nonumber \\
&=&6.9\times10^{-23}{\rm W}\,\frac{\rho_a}{0.3\, \rm GeV\,cm^{-3}}\frac{h}{2\,\rm m}\frac{R}{\rm m}\Big(\frac{B_{\rm e}}{10\,\rm T}\Big)^2 \nonumber \\
&&\Big(\frac{\rm \mu eV}{m_a}\Big)^2\Big(\frac{g_{a\gamma}}{10^{-14}\rm GeV^{-1}}\Big)^2,\,\,\,\big(h\gtrsim 2R, \nonumber \\
&&5.1 \frac{R}{\rm m}\frac{m_a}{\mu \rm eV}=(n+\frac{3}{4})\pi, \, n=0,1,2\cdot\cdot\cdot\big)\,\,\,,
\label{Pr}
\end{eqnarray}
where $A=2\pi Rh$ is the side area of the cylinder. The extreme radiated power is proportional to the side area of the B-field region and inversely proportional to the square of the frequency, the same scaling as in the dish-antenna experiment~\cite{Horns:2012jf}.
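As a cross-check (ours, not part of the original derivation), the benchmark number in Eq.\,(\ref{Pr}) can be reproduced by converting the natural-units expression $P=\rho_a g_{a\gamma}^2 B_{\rm e}^2 A/\omega^2$ to watts:

```python
# Sanity check of the 6.9e-23 W benchmark in Eq. (Pr), using
# Heaviside-Lorentz natural units (hbar = c = 1) with
# rho_a = 0.3 GeV/cm^3, g = 1e-14 GeV^-1, B = 10 T, h = 2 m, R = 1 m,
# m_a = 1 ueV.
import math

hbarc_eV_m = 1.9732698e-7                      # hbar*c in eV*m
eV2_to_W = 1.602176634e-19 / 6.582119569e-16   # 1 eV^2 of power in watts
tesla_eV2 = 195.35                             # 1 T in eV^2

rho = 0.3e9 * (1.9732698e-5)**3  # 0.3 GeV/cm^3 in eV^4 (hbar*c in eV*cm)
g = 1e-23                        # 1e-14 GeV^-1 in eV^-1
B = 10 * tesla_eV2
R = 1.0 / hbarc_eV_m             # 1 m in eV^-1
h = 2.0 / hbarc_eV_m             # 2 m in eV^-1
m_a = 1e-6                       # eV

A = 2 * math.pi * R * h          # cylinder side area in eV^-2
P_W = rho * g**2 * B**2 * A / m_a**2 * eV2_to_W
print(P_W)  # ~ 6.9e-23 W, matching Eq. (Pr)
```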
Fig.\,\ref{fig:ph} shows the comparison between the radiated power (blue lines) obtained by numerical integration of Eq.\,(\ref{pow2}) and its approximation (yellow lines) according to Eq.\,(\ref{pJ}). The approximation is good around the peaks when $\frac{\omega h}{2}\gtrsim\omega R>1$. However, the approximation fails for $\omega R<1$, i.e., at small $R$ or $\omega$, as shown in all three panels, and the middle panel shows a similar failure for $h<2 R$ when $R>2\,$m.
For a fixed B-field volume, the maximum of the radiated power is the first peak at $\omega=\frac{3\pi}{4R}$ ($R\omega=(n+\frac{3}{4}) \pi>1$ with $n=0$), before which the power $P(\omega)$ grows as $\omega^2$ and after which it decreases as $\frac{1}{\omega^2}$. This feature helps us find the best measurement frequency for a fixed B-field volume.
Fig.\,\ref{fig:pr} shows the radiated power of Eq.\,(\ref{Pr}) at the extreme points as a function of $R$. The spacing between the extreme points is very small, which means that detecting such peak powers for $m_a=100\,\mu$eV requires centimeter-level accuracy of the magnetic-field geometry.
The results above are obtained in the short-wave approximation for the radiated power. The long-wave result is easily obtained under the approximation ${{\rm e}^{-{\rm i}\,\mathbf{k}\cdot\,\mathbf{x}^{\,\prime}}}\simeq1$, that is
\begin{eqnarray}
P &=& {\rho_a g_{a\gamma}^2 B_{\rm e}^2 \omega^2 \over 16 \pi^2}
\int_{0}^{2\pi}d\varphi\,\int_{0}^{\pi}d\theta\, {\rm sin}^3\theta\,
\bigg| \int_V\,dV^{\,\prime}
\bigg|^2 ,\nonumber\\
&= &{\rho_a g_{a\gamma}^2 B_{\rm e}^2 V^2 \omega^2 \over 6 \pi},~\ (R\omega\ll1,\,h\omega\ll1)\nonumber \\
&=&7.8\times10^{-23}{\rm W}\,\frac{\rho_a}{0.3 \rm GeV\,cm^{-3}} \Big(\frac{B_{\rm e}}{10\rm T}\Big)^2 \Big(\frac{m_a}{\rm 100\,neV}\Big)^2\nonumber \\
&&\Big(\frac{g_{a\gamma}}{10^{-14}\rm GeV^{-1}}\Big)^2\Big(\frac{h}{\rm 2m}\Big)^2\big(\frac{R}{\rm m}\big)^4,\,\,\,(0.51 \frac{R}{\rm m}\frac{m_a}{100 \rm neV} \nonumber \\
&&\ll1,\,0.51 \frac{h}{\rm m}\frac{m_a}{100 \rm neV}\ll1).
\label{Pl}
\end{eqnarray}
The radiated power is proportional to the squares of the B-field volume and the frequency, and is independent of the shape of the homogeneous B-field region. Fig.\,\ref{fig:pl} (yellow lines) shows the power of Eq.\,(\ref{Pl}) as a function of $R$ and $\omega$, respectively, while the blue lines represent the radiated power according to Eq.\,(\ref{pow2}). The approximation is consistent with the numerical results when the condition $\omega R\ll1$ is satisfied.
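A companion check (ours) of the long-wave benchmark in Eq.\,(\ref{Pl}), which reproduces the quoted number to within a few percent:

```python
# Long-wave benchmark: P = rho_a g^2 B^2 V^2 omega^2 / (6 pi) in
# Heaviside-Lorentz natural units, converted to watts for
# rho_a = 0.3 GeV/cm^3, g = 1e-14 GeV^-1, B = 10 T, h = 2 m, R = 1 m,
# m_a = 100 neV.
import math

hbarc_eV_m = 1.9732698e-7                      # hbar*c in eV*m
eV2_to_W = 1.602176634e-19 / 6.582119569e-16   # 1 eV^2 of power in watts

rho = 0.3e9 * (1.9732698e-5)**3  # 0.3 GeV/cm^3 in eV^4
g = 1e-23                        # 1e-14 GeV^-1 in eV^-1
B = 10 * 195.35                  # 10 T in eV^2
R = 1.0 / hbarc_eV_m
h = 2.0 / hbarc_eV_m
omega = 1e-7                     # 100 neV in eV

V = math.pi * R**2 * h           # cylinder volume in eV^-3
P_W = rho * g**2 * B**2 * V**2 * omega**2 / (6 * math.pi) * eV2_to_W
print(P_W)  # ~ 7.6e-23 W, within a few percent of the quoted 7.8e-23 W
```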
The calculation pattern for the radiated power in the homogeneous cylindrical magnetic field above can be generalized to, e.g., a B-field of cubic volume.
\subsection{The energy flux}
The radiated power is independent of the distance to the radiation source, but the energy flux is not. In this part, we give formulas for the energy flux, which may be used to detect the axion-induced radiation.
The electric and magnetic fields can be derived from Eq.\,(\ref{A1}) and Eq.\,(\ref{S}) as
\begin{eqnarray}
\mathbf{B}&=&-{\rm i}\frac{a_0 \omega g_{a\gamma}B_e}{4\pi}\left(I_{1y}\mathbf{e}_x-I_{1x}
\mathbf{e}_y\right) \label{equ:B}\\
\mathbf{E}&=&\frac{a_0 g_{a\gamma}B_e}{4\pi}\big(I_{2xz}\mathbf{e}_x+I_{2yz}\mathbf{e}_y-(I_{2x^2}+I_{2y^2}
+2I_{0})\mathbf{e}_z\big)\nonumber\\
&&
\label{equ:E}
\end{eqnarray}
with the x-y-z coordinate basis vectors $\mathbf{e}_x$, $\mathbf{e}_y$, and $\mathbf{e}_z$. The integrals $I$ are defined as
\begin{eqnarray}
I_0&=&\int_{V'}\frac{{\rm i}\omega r -1}{r^3}e^{{\rm i}\omega r}dV' \\
I_{1x}&=&\int_{V'}\frac{{\rm i}\omega r -1}{r^3}e^{{\rm i}\omega r}(x-x')dV' \\
I_{2x^2}&=&\int_{V'}\frac{3-3{\rm i}\omega r-\omega^2r^2}{r^5}e^{{\rm i}\omega r}(x-x')^2dV' \\
I_{2xz}&=&\int_{V'}\frac{3-3{\rm i}\omega r-\omega^2r^2}{r^5}e^{{\rm i}\omega r}(x-x')(z-z')dV', \nonumber\\
&&
\end{eqnarray}
where
\[
r=\sqrt{(x'-x)^2+(y'-y)^2+(z'-z)^2}
\]
is the distance from the observation position ($\mathbf{x}$) to the source point ($\mathbf{x}'$). Then, we can obtain the average energy flux according to Eq.\,(\ref{S})
\begin{eqnarray}
S_x&=&\frac{\rho_ag_{a\gamma}^2B_e^2}{16\pi^2\omega}\textrm{Re}\left[{\rm i}I_{1x}
(I^\ast_{2x^2}+I^\ast_{2y^2}+2I^\ast_0)\right] \label{equ:numSx} \\
S_y&=&\frac{\rho_ag_{a\gamma}^2B_e^2}{16\pi^2\omega}\textrm{Re}
\left[{\rm i}I_{1y}
(I^\ast_{2x^2}+I^\ast_{2y^2}+2I^\ast_0)\right] \label{equ:numSy}\\
S_z&=&\frac{\rho_ag_{a\gamma}^2B_e^2}{16\pi^2\omega}\textrm{Re}
\left[{\rm i}
(I^\ast_{2xz}I_{1x}+I^\ast_{2yz}I_{1y})\right]. \label{equ:numSz}
\end{eqnarray}
The component of the flux is defined as $S_{x,y,z}=\langle \mathbf{S} \rangle\cdot \mathbf{e}_{x,y,z}$\,.
They can be rewritten in SI units; e.g., for $S_x$,
\begin{gather}
S_x=1.76\times10^{-20}\textrm{W}/\textrm{m}^2\frac{\rho_a}
{0.3\textrm{GeV}/\textrm{cm}^3}\left(\frac{g_{a\gamma}}{10^{-12}
\textrm{GeV}^{-1}}\right)^2\nonumber\\
\frac{100\textrm{neV}}{m_a}\left(\frac{B_e}{10\textrm{T}}\right)^2
\textrm{Re}
\left[{\rm i}\frac{I_{1x}}{\rm m}(I^\ast_{2x^2}+I^\ast_{2y^2}+2I^\ast_0)\right]. \label{equ:numSx0}
\end{gather}
When the observation point is located in the far-field region ($|\mathbf{x}|\gg| \mathbf{x}'|$) and the wavelength is much larger than the size of the radiation source ($\lambda\gg| \mathbf{x}'|$), we can easily obtain the magnitude of the flux from Eq.\,(\ref{kpow}) as
\begin{eqnarray}
S&=& {\rho_a g_{a\gamma}^2 B_{\rm e}^2 \omega^2 V^2\over 16 \pi^2r^2}{\rm sin}^2\theta\nonumber\\
&=&2.3\times10^{-25}\textrm{W}/\textrm{m}^2\frac{\rho_a}
{0.3\textrm{GeV}/\textrm{cm}^3}\left(\frac{g_{a\gamma}}{10^{-12}
\textrm{GeV}^{-1}}\right)^2\nonumber\\
&&\left(\frac{m_a}{100\textrm{neV}}\right)^2\left(\frac{B_e}{10\textrm{T}}\right)^2
\left(\frac{V}{\rm m^3}\right)^2 \left(\frac{100\rm m}{r}\right)^2 {\rm sin}^2\theta\,.
\label{equ:S0}
\end{eqnarray}
Fig.\,\ref{fig:S} shows the comparison between the energy flux (blue line) according to Eqs.\,(\ref{equ:numSx})-(\ref{equ:numSz}) and its approximation in the long-wave and far-field limit (red line), Eq.\,(\ref{equ:S0}). For $\omega=100\,$neV, when $r\gtrsim3\,$m, the condition $|\mathbf{x}|\gg| \mathbf{x}'|$ is satisfied and the approximation is consistent with the numerical result. For higher frequencies or axion masses, the integrands in Eqs.\,(\ref{equ:numSx})-(\ref{equ:numSz}) oscillate much more rapidly; specialized numerical techniques are then required, which we leave to future work.
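The far-field benchmark of Eq.\,(\ref{equ:S0}) can likewise be reproduced by unit conversion; this check is ours:

```python
# Far-field, long-wave flux S = rho_a g^2 B^2 omega^2 V^2 sin^2(theta)
# / (16 pi^2 r^2), converted from natural units (eV^4) to W/m^2 for
# rho_a = 0.3 GeV/cm^3, g = 1e-12 GeV^-1, m_a = 100 neV, B = 10 T,
# V = 1 m^3, r = 100 m, sin(theta) = 1.
import math

hbarc = 1.9732698e-7                            # hbar*c in eV*m
eV2_to_W = 1.602176634e-19 / 6.582119569e-16    # 1 eV^2 of power in watts
eV4_to_Wm2 = eV2_to_W / hbarc**2                # flux: 1 eV^4 in W/m^2

rho = 0.3e9 * (1.9732698e-5)**3  # eV^4
g = 1e-21                        # 1e-12 GeV^-1 in eV^-1
B = 10 * 195.35                  # eV^2
omega = 1e-7                     # 100 neV in eV
V = 1.0 / hbarc**3               # 1 m^3 in eV^-3
r = 100.0 / hbarc                # 100 m in eV^-1

S = rho * g**2 * B**2 * omega**2 * V**2 / (16 * math.pi**2 * r**2)
print(S * eV4_to_Wm2)  # ~ 2.3e-25 W/m^2, matching Eq. (equ:S0)
```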
\section{The experimental detection}
\label{secIII}
We now discuss the possibility of detecting the ADM-induced radiation for the homogeneous cylindrical B-field.
The radiated power increases with the volume or the side area of the magnetic field, which must however be finite. For a fixed B-field volume or cylinder side area, the maximum of the power $P(\omega)$ occurs at the first peak, determined by $\omega\simeq\frac{3\pi}{4R}$ ($R\omega=(n+\frac{3}{4}) \pi>1$ with $n=0$), before which the power $P(\omega)$ grows as $\omega^2$ and after which it decreases as $\frac{1}{\omega^2}$; see, e.g., the bottom panel of Fig.\,\ref{fig:ph}. If the maximum allowable radius $R$ is 2.5\,m, the maximum of $P(\omega)$ occurs at $\gtrsim$200\,neV.
Therefore, we focus on the band $1-10^4$\,neV, which corresponds to frequencies $0.24\,{\rm MHz}-2.4\,{\rm GHz}$. In this band, the typical detection level achievable with radio-astronomical measurement techniques is $\lesssim10^{-22}$\,W~\cite{Horns:2013ira}. For typical parameters as in Eqs.\,(\ref{Pr}) and (\ref{Pl}), the detectable axion-photon coupling $g_{a\gamma}$ can reach about $10^{-14}\,{\rm GeV^{-1}}$, which is allowed by present observations except for a very narrow mass band excluded by ADMX~\cite{ADMX:2021nhd}. It therefore seems promising to detect the radiation from such axions experimentally.
Thermal noise of the detection instrument is the dominant noise source.
Within a bandwidth $\Delta\nu$, the ratio of the signal to the one-$\sigma$ fluctuation of the noise, i.e., the signal-to-noise ratio (SNR), is given by Dicke's radiometer equation~\cite{Dicke:1946glx}:
\begin{equation}
{\rm SNR} = {P_{\rm signal} \over T_{\rm n}} \sqrt{{\Delta t\over \Delta\nu}}
\label{radio}
\end{equation}
where $T_{\rm n}$ represents the total noise temperature and $P_{\rm signal}$ is the ADM-induced radiated power.
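As an illustration (ours, with an assumed instantaneous bandwidth of 1\,GHz), the radiometer equation implies a minimum detectable power $P_{\rm min}={\rm SNR}\cdot k_{\rm B} T_{\rm n}\sqrt{\Delta\nu/\Delta t}$ for the benchmark values used below (SNR$=5$, $T_{\rm n}=1$\,K, $\Delta t=3$\,yr), of order the $\lesssim10^{-22}$\,W detection level quoted above:

```python
# Minimum detectable power implied by the Dicke radiometer equation,
# P_min = SNR * k_B * T_n * sqrt(dnu / dt). The bandwidth dnu = 1 GHz
# is an assumption for illustration, not a value fixed by the paper.
import math

k_B = 1.380649e-23       # J/K
SNR, T_n = 5.0, 1.0      # target SNR and noise temperature (K)
dt = 3 * 365.25 * 86400  # 3 years in seconds
dnu = 1e9                # assumed bandwidth in Hz

P_min = SNR * k_B * T_n * math.sqrt(dnu / dt)
print(P_min)  # ~ 2e-22 W
```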
\subsection{Broadband searches}
As seen in Figs.\,\ref{fig:ph} and \ref{fig:pl}, the radiation peaks are broad, especially at low frequencies, since our scheme gives up resonant enhancement of the ADM-induced signal.
The measurement bandwidth $\Delta \nu$ of our scheme mainly depends on detector technology. We can use dish antennas as the detectors, which are widely used in radio astronomy. For modern radio-astronomy detectors, the bandwidths $\Delta \nu$ can be
in excess of 1\,GHz and the spectral resolutions can be better than $10^6$~\cite{Horns:2013ira}. We conservatively divide the detection band of interest, $1-10^4\,$neV, into four sections: Band\,I (1-11\,neV), Band\,II (11-111\,neV), Band\,III (0.111-1.11\,$\mu$eV), and Band\,IV (1.11-11.11\,$\mu$eV). Since the cylindrical field region is very large and is symmetric with respect to the x-y plane and the z-axis, we can use four detectors with different bandwidths surrounding the magnetic field to cover Bands\,I-IV simultaneously. Each detector occupies about a quarter of a sphere in space, see Fig.\,\ref{fig:scheme}.
The ideal detectable power for each detector is a quarter of total power
\begin{equation}
P_{\rm signal} = \frac{1}{4} P.
\label{Eq:psignal}
\end{equation}
Note that the detectors should closely surround the magnetic field to avoid radiation leakage due to diffraction. Each detector with a specific bandwidth can consist of two identical sub-detectors, as a small volume is conducive to a low operating temperature.
\subsection{The expected sensitivity}
Using Eq.\,(\ref{pow2}), (\ref{radio}) and (\ref{Eq:psignal}),
the expected sensitivity of our detection scheme (Fig.\,\ref{fig:scheme}) is shown in light blue in Fig.\,\ref{fig:sensitivity}. The detection time is assumed to be $\Delta t=3\,$years, with SNR=5, total noise temperature $T_{\rm n}=1\,$K, $B=10\,{\rm T}$, and $\rho_{a}=0.3\,{\rm GeV\,cm^{-3}}$. The height and radius of the cylindrical magnetic field are 1\,m and 2\,m, respectively.
In general, the sensitivity curve is lowest at intermediate masses and rises toward both ends, a shape mainly determined by the radiated power. The strongest constraint reaches $g_{a\gamma}\simeq10^{-14}\,\rm GeV^{-1}$ at about 500\,neV, where the maximum of the radiated power occurs. The dark shading shows existing constraints. Our sensitivity far exceeds existing constraints except for ADMX, which operates in a narrow-bandwidth mode. If our scheme likewise focuses only on a very narrow bandwidth around the frequency of maximum power, obtained by adjusting the radius $R$ of the B-field to satisfy $\omega\sim\frac{3\pi}{4R}$, the sensitivity is expected to improve by an order of magnitude.
\section{Discussion}
\label{sec:discussion}
We mainly compare our scheme with other experiments, especially the dish-antenna experiment, which is similar to ours, and with high-mass axion detection.
In the dish-antenna or cylindrical metal barrel experiment, the emitted power from the dish surface is $P=\frac{1}{2}\rho_a g_{a\gamma}^2 B_{\rm e}^2 \frac{A_{\rm dish}}{m_a^2}$~\cite{BREAD:2021tpx}. It is similar to our result $P=\frac{1}{2}\rho_a g_{a\gamma}^2 B_{\rm e}^2 \frac{A}{m_a^2}$ in Eq.\,(\ref{pJ}) with $\langle \rm{cos^2} \rangle=\frac{1}{2}$. When the side area $A$ of the B-field parallel to $\mathbf{B}$ is equal to $A_{\rm dish}$, the two powers are the same. It thus seems that the photons from axion conversion are emitted from the interface between the magnetic field and the outside. However, in our scheme, when $R\omega\ll1$, the power is proportional to the squares of the volume and the mass, so that the maximum power $P(\omega)$ for $5\gtrsim R\gtrsim0.5\,$m appears in the band $0.1\lesssim\omega\lesssim1\,\mu$eV. In this sense, our scheme is better suited to detecting axions in the $0.1\lesssim\omega\lesssim1\,\mu$eV band.
Compared to the LC circuit (e.g.,\,\cite{Ouellet:2018beu}) and cavity experiments (e.g.,\,\cite{ADMX:2021nhd}), our detection scheme is simpler but requires a much larger B-field volume. It may therefore be more practical to search for the axion-induced radiation at other existing facilities (e.g.,\,stellarators) with very strong and large B-fields. The main challenge is to detect the unconcentrated radiated energy at very low temperatures.
Though the proposed detection range in our scheme is the $1-10^4\,$neV band, it is not subject to the resonance requirements of cavity detection. At higher frequencies, diffraction of the radiation is weaker, so the radiation can be focused onto a small detector according to geometric optics. The scheme could therefore, in principle, be used to detect axions with mass $m_a>100\,\mu$eV. Fig.\,\ref{fig:scheme1} shows the detection scheme for infrared radiated photons. Two parabolic mirrors surround the magnetic field and reflect the incident ADM-induced radiation to receivers at the focal points. The energy flux can be calculated by Eqs.\,(\ref{equ:numSx})-(\ref{equ:numSz}).
\section{Conclusion}
\label{sec:conclusion}
In summary, we have studied the direct radiation excited by oscillating axion DM in a homogeneous magnetic field and devised a scheme to detect such axions.
We have derived the analytical expression for the axion-induced radiated power in a cylindrical uniform magnetic field, see Eqs.\,(\ref{pow2})-(\ref{Pl}). In the long-wavelength limit $R\omega\ll1$, the radiated power is proportional to the square of the volume and the axion mass $m_a$. When $R\omega\gtrsim1$, the target radiations oscillate and their peak powers are proportional to the side area and inversely proportional to $m_a^2$. For a fixed radius $R$, the maximum power is located at $\omega\sim\frac{3\pi}{4R}$. Therefore, axions in the mass range $1-10^4$\,neV are suitable for detection with a finite magnetic-field volume. The radiated energy flux is also derived in Eqs.\,(\ref{equ:numSx})-(\ref{equ:S0}).
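The stated relation between field radius and optimal axion mass follows directly from $\omega\sim\frac{3\pi}{4R}$, converting with $\hbar c\approx1.97\times10^{-7}\,$eV\,m; a quick numerical sketch:

```python
import math

hbar_c_eV_m = 1.973270e-7  # eV m

def optimal_mass_eV(R_m):
    """Axion mass (eV) at which the radiated power peaks: omega ~ 3*pi/(4R)."""
    return 3.0 * math.pi / 4.0 * hbar_c_eV_m / R_m

for R in (0.5, 1.0, 2.0, 5.0):
    print(f"R = {R} m -> m_a ~ {optimal_mass_eV(R):.2e} eV")
# Radii of 0.5-5 m give optimal masses of roughly 0.1-1 micro-eV.
```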
Finally, we have discussed a scheme to detect axions in this mass range using four detectors of different bandwidths surrounding the B-field, see Fig.\,\ref{fig:scheme}. Our expected sensitivity under typical parameter values is far beyond existing constraints, except for the ADMX experiment, see Fig.\,\ref{fig:sensitivity}.
More advanced and realistic detection schemes and techniques can be developed in the future, and our results are useful for detecting ADM in strong man-made or natural magnetic fields.
\begin{acknowledgements}
We thank Seishi\,Enomoto, Chengfeng\,Cai, and Yi-Lei\,Tang for useful discussions and comments.
This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 11875327, the Fundamental Research Funds for the Central Universities, China, and the Sun Yat-sen University Science Foundation.
\end{acknowledgements}
|
Title:
Multiply lensed star forming clumps in the A521-sys1 galaxy at redshift 1 |
Abstract: We study the population of star-forming clumps in A521-sys1, a $\rm z=1.04$
system gravitationally lensed by the foreground ($\rm z=0.25$) cluster Abell
0521. The galaxy presents one complete counter--image with a mean magnification
of $\rm \mu\sim4$ and a wide arc containing two partial images of A521-sys1
with magnifications reaching $\rm \mu>20$, allowing the investigation of
clumps down to scales of $\rm R_{eff}<50$ pc. We identify 18 unique clumps with
a total of 45 multiple images. Intrinsic sizes and UV magnitudes reveal clumps
with elevated surface brightnesses, comparable to similar systems at redshifts
$\rm z\gtrsim1.0$. Such clumps account for $\sim40\%$ of the galaxy UV
luminosity, implying that a significant fraction of the recent star-formation
activity is taking place there. Clump masses range from $\rm 10^6\ M_\odot$ to
$\rm 10^9\ M_\odot$ and sizes from tens to hundreds of parsec, resulting in
mass surface densities from $10$ to $\rm 10^3\ M_\odot\ pc^{-2}$, with a median
of $\rm \sim10^2\ M_\odot\ pc^{-2}$. These properties suggest that we detect
star formation taking place across a wide range of scales, from cluster
aggregates to giant star-forming complexes. We find ages of less than $100$
Myr, consistent with clumps being observed close to their natal region. The
lack of galactocentric trends with mass, mass density, or age and the lack of
old migrated clumps can be explained either by the dissolution of clumps after a few
$\sim100$ Myr or by stellar evolution making them fall below the detectability
limits of our data.
| https://export.arxiv.org/pdf/2208.02863 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
gravitational lensing: strong -- galaxies: high-redshift -- galaxies: individual: A521-sys1 -- galaxies: star formation -- galaxies: star clusters
\end{keywords}
\section{Introduction}
The study of galaxies at Cosmic Noon (redshift $\rm z\sim1-3$) reveals morphologies dominated by clumpy structures, particularly at rest-frame ultraviolet (UV) wavelengths \citep[e.g.][]{cowie1995,vandenbergh1996}.
Clumps have typical sizes of $\lesssim1$ kpc \citep[e.g.][]{elmegreen2007,forsterschreiber2011b}, typical stellar masses of $\rm M_* \sim10^7-10^9\ M_\odot$ \citep[e.g.][]{forsterschreiber2011a,guo2012,soto2017} and typical star--formation rates (SFRs) from $\rm 0.1-10\ M_\odot/yr$ \citep[e.g.][]{guo2012,soto2017}.
The presence of UV clumps is closely related to gas properties observed in those galaxies, characterized by higher gas fractions \citep{daddi2010,tacconi2010,tacconi2013,genzel2015} and velocity dispersions \citep{elmegreen2005,forsterschreiber2006} than local main sequence (MS) star-forming galaxies; yet, overall they show rotation features indicating the presence of disk structure \citep{forsterschreiber2006,genzel2006,shapiro2008,wisnioski2018}.
The commonly accepted interpretation of these findings is that clumps result from \textit{in--situ} gas collapse due to gravitational instabilities in the disc, which can fragment at much larger scales at high redshift than in local MS galaxies because of the gas-rich, turbulent composition of these objects \citep[e.g.][]{elmegreen2009,immeli2004a,tamburello2015,renaud2021}. This interpretation is supported by recent observations of dense giant molecular cloud complexes from CO data in galaxies at $\rm z\sim1$ \citep{dessauges2019}, as well as by simulations of turbulent high-redshift galaxies \citep[e.g.][]{vandonkelaar2021arxiv} and by observations in nearby analogs \citep[e.g.][]{fisher2017a,fisher2017b,messa2019}.
An additional confirmation of the link between clumps and their host galaxies is given by the evolution of the clump densities with redshift (clumps are denser at higher redshifts), tracing the evolution of star formation (SF) with cosmological time \citep{livermore2015}. We note though that the interpretation of the underlying observations is complicated by the difference in surface-brightness completeness limits \citep{ma2018} and the different resolution achievable at different redshifts and at different gravitational lensing magnifications. %
In addition, high-redshift clumps may affect the process of galaxy assembly; hydro-dynamical and cosmological simulations have suggested that, if clumps are able to survive as bound systems for hundreds of Myr, dynamical friction could cause them to migrate toward the centre of the galaxy \citep{bournaud2014,mandelker2014,mandelker2017}. Such spiralling inward would generate torque that, in turn, funnels inward large amounts of gas, which, along with clump merging, could contribute to the formation of the thick galactic disk and to the bulge growth \citep{noguchi1999,immeli2004b,carollo2007,genzel2008,elmegreen2008,dekel2009,bournaud2007,bournaud2009,bournaud2011,gabor2013}. However, not all simulations predict clumps surviving for long time-scales \citep{oklopcic2017}.
Observations of individual galaxies seem to support this scenario \citep[e.g.][]{guo2012,adamo2013,cava2018}, but the large uncertainties on age determinations and the lack of larger statistical samples prevent us from assessing if, and in what conditions, clumps could survive long enough to migrate from their natal region.
High-redshift clumps contribute a large fraction of the emission in the rest-frame UV \citep{elmegreen2005b} and in nebular lines (e.g., Balmer transitions, \citealp{livermore2012,mieda2016,zanella2019}) of their host galaxies, suggesting that they trace giant star-forming regions and that those regions constitute the bulk of their host galaxy's recent star-formation activity.
Due to their elevated specific star-formation rate ($\rm sSFR = SFR/M_*$), which can exceed the integrated sSFR of their host galaxies by orders of magnitude, it has been suggested that clumps are starbursting \citep{bournaud2015,zanella2015,zanella2019}.
We expect feedback from star-forming clumps to affect the evolution of galaxies, suppressing the global star formation and leading to the formation of a multiphase interstellar medium (ISM) \citep[e.g.][]{hopkins2012,goldbaum2016}. Evidence from local analogs suggests that stellar feedback from clumps could facilitate the escape of UV radiation into the intergalactic medium (e.g., \citealp{bik2015,bik2018,herenz2017} in local galaxies, \citealp{riverathorsen2019} at $\rm z\sim2$); if this process is efficient, clump feedback could even contribute to the reionization of the Universe \citep{bouwens2015}.
Recent studies of lensed high-redshift galaxies \citep[e.g.][]{livermore2012,adamo2013,johnson2017,cava2018,mestric2022} at higher angular resolution offer the possibility to investigate the substructure of clumps \citep{meng2020}. At the highest resolution, potential clusters have been detected on scales of a few parsecs \citep{vanzella2019,vanzella2021b,vanzella2021}.
One of the challenges for the upcoming James Webb Space Telescope (JWST) and adaptive-optics instruments on the European Extremely Large Telescope (E-ELT) will be the detection of possible high-redshift progenitors of the globular clusters (GCs) observed in the local universe, to help solve the many open questions about their origin \citep[e.g.][for a review]{bastian2018}.
In the context of analyses of clumps on small physical scales, we here present the study of the strongly lensed arc at $\rm z=1.04$ in Abell 0521 (A521); following the nomenclature in \citet{patricio2018} we will refer to the galaxy as A521-sys1 in the rest of the paper.
With a stellar mass of $\rm M_*=(7.4\pm1.2)\times10^{10}\ M_\odot$ and a SFR of $\rm (26\pm5)\ M_\odot yr^{-1}$ \citep{nagy2021}, A521-sys1 can be considered a typical main-sequence star-forming galaxy at $\rm z\sim1$ \citep[e.g.][]{speagle2014}.
The kinematic analysis reveals a rotation-dominated galaxy typical of systems at cosmic noon, with a high velocity dispersion \citep{patricio2018,girard2019}.
In addition, both the molecular gas mass surface density, $\rm \Sigma(M_{mol})$, and the SFR surface density, $\rm \Sigma(SFR)$, are elevated by a factor of $\sim10$ compared to local MS galaxies, as expected for high-z gas-rich galaxies. The radial profiles of $\rm \Sigma(M_{mol})$ and $\rm \Sigma(SFR)$ are very shallow \citep{nagy2021}, suggesting an intense star-formation activity throughout the entire galaxy, as also indicated by the presence of UV clumps in various sub--regions of A521-sys1.
The gravitational lensing produced by the foreground cluster allows the analysis of A521-sys1 clumps down to scales of a few tens of parsecs. In addition, the presence of multiple images of A521-sys1 at different magnification factors allows the comparison of the same clumps seen at different resolutions, and hence tests of the effect of resolution on the study of clump populations.
This paper is structured as follows: we present the data and the lensing model in Section~\ref{sec:data}; the analyses, including the model used to fit the clumps, are described in Section~\ref{sec:datanalysis}. The results are collected in Section~\ref{sec:results_phot} (photometric properties of the clumps) and in Section~\ref{sec:results_sed} (physical properties of the clumps), followed by their discussion in Section~\ref{sec:discussion}. An overall summary of the paper is given in Section~\ref{sec:conclusion}.
Throughout this paper, we adopt a flat $\rm \Lambda$-CDM cosmology with $H_0=68$ km s$^{-1}$ Mpc$^{-1}$ and $\rm \Omega_M = 0.31$ \citep{planck13_cosmo}, and the \citet{kroupa2001} initial mass function.
\section{Data}
\label{sec:data} %
\subsection{Hubble Space Telescope (HST)}
\label{sec:data_hst}
A521-sys1 was observed with WFC3/UVIS in the F390W passband, with WFC3/IR in F105W and F160W (ID: 15435, PI: Chisholm, exposure times: $2470$, $2610$ and $5220$ s, respectively), with ACS/WFC in the F606W and F814W filters (ID: 16670, PI: Ebeling, exposure times $1200$ s).
Individual flat-fielded and CTE-corrected exposures were aligned and combined into a single image using the \texttt{AstroDrizzle} procedure from the \texttt{DrizzlePac} package \citep{hoffmann2021}; the final images have pixel scales of 0.06 arcsec/pixel. The astrometry was aligned to Gaia DR2 \citep{gaia2018}.
We model the instrumental point-spread function (PSF) from a stack of isolated bright stars within the field of view of the observations. The stack in each filter is fitted with an analytical function described by a combination of Moffat, Gaussian, and $\rm 4^{th}$ degree power-law profiles, to mitigate bias introduced by the choice of a specific function. The fit provides a good description of the stacked stars up to a radius of $\sim20$ pixels (corresponding to $1.20''$).
The minimum detectable magnitude limit, $\rm mag_{lim}$, is estimated from the standard deviation $\rm \sigma$ of the background level in the proximity of A521-sys1; we consider the minimum flux of a PSF light profile whose four brightest pixels are above the $\rm 3\sigma$ level, similarly to the procedure applied to extract sources (see Section~\ref{sec:sextraction}); this minimum flux is converted to an AB magnitude for each filter. We point out that these values are representative of the depth of the observations in the proximity of A521-sys1; the clumps within this system are observed above the diffuse galaxy background, and their detection limits are discussed in Section~\ref{sec:completeness}.
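The limit estimate can be sketched as follows; the fraction of the PSF flux contained in its four brightest pixels (here \texttt{psf\_frac4}) is a hypothetical placeholder, since in practice it follows from the empirical PSF of each filter.

```python
import math

# Magnitude limit: minimum total flux of a PSF profile whose four brightest
# pixels all exceed 3*sigma of the local background, converted to an AB mag.

def mag_limit(sigma_bkg, zp_ab, psf_frac4=0.35, nsigma=3.0):
    """sigma_bkg in e-/s per pixel; psf_frac4 is an assumed PSF flux fraction."""
    f4_min = 4.0 * nsigma * sigma_bkg   # e-/s required in the 4 peak pixels
    f_tot = f4_min / psf_frac4          # corresponding total source flux
    return zp_ab - 2.5 * math.log10(f_tot)

# A noisier background (larger sigma) yields a brighter (smaller) limit.
print(mag_limit(1e-3, 25.4))
```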
The FWHM values of the PSF, exposure times, zeropoints and depth of the exposures are listed in Tab.~\ref{tab:data}.
\begin{table}
\centering
\begin{tabular}{lrrrrr}
\multicolumn{1}{l}{Filter} & \multicolumn{1}{c}{$\rm \lambda_{rest}$} & \multicolumn{1}{c}{$\rm t_{exp}$} & \multicolumn{1}{c}{$\rm ZP_{AB}$} & \multicolumn{1}{c}{$\rm mag_{lim}$} & \multicolumn{1}{c}{$\rm PSF_{FWHM}$} \\
\multicolumn{1}{c}{\ } & \multicolumn{1}{c}{(\AA)}& \multicolumn{1}{c}{(s)} & \multicolumn{1}{c}{(mag)} & \multicolumn{1}{c}{(mag)} & \multicolumn{1}{c}{(arcsec)} \\
\hline
WFC3-UVIS-F390W & 1920 & 2470 & 25.4 & 27.6 & 0.097 \\
ACS-WFC-F606W & 2900 & 1200 & 26.5 & 27.5 & 0.112 \\
ACS-WFC-F814W & 3940 & 1200 & 25.9 & 27.2 & 0.116 \\
WFC3-IR-F105W & 5160 & 2610 & 26.3 & 27.0 & 0.220 \\
WFC3-IR-F160W & 7520 & 5220 & 26.0 & 26.8 & 0.237 \\
\hline
\end{tabular}
\caption{Rest--frame pivot wavelengths ($\rm \lambda_{rest}$), exposure times ($\rm t_{exp}$), AB magnitude zeropoints ($\rm ZP_{AB}$), depth of the observations ($\rm mag_{lim}$) and FWHM of the PSF ($\rm PSF_{FWHM}$).}
\label{tab:data}
\end{table}
A521-sys1 appears as a series of multiple distorted images (Fig.~\ref{fig:hstdata}); in particular, a complete counter--image of A521-sys1 is observed to the north-east of the brightest cluster galaxy (BCG), and two additional, partially lensed images of the galaxy (one mirrored) are observed west and north-west of the BCG. We will refer to these different images of the A521-sys1 galaxy as counter--image (CI), lensed--north (LN) and lensed--south (LS), as shown in the left panel of Fig.~\ref{fig:hstdata}. The division between LN and LS is traced following the critical line, with the help of the lens model described in Section~\ref{sec:lens_model}.
Black crosses in the left panel of Fig.~\ref{fig:hstdata} mark the position of bright foreground or cluster galaxies in the field of view; the relative contribution from such galaxies to the A521-sys1 photometry increases with the wavelength of the respective observation.
These galaxies would also have a strong effect on the analysis of the clumpiness of A521-sys1; for this reason their flux is subtracted in that analysis (see Section~\ref{sec:clumpiness} for more details). Single--band observations are shown in Fig.~\ref{fig:390data} for F390W and in Appendix~\ref{sec:app:completetab} for the other filters.
\subsection{Ancillary data}
A521-sys1 was observed with VLT-MUSE as part of the MUSE Guaranteed Time Observations (GTO) Lensing Clusters Programme (ID: 100.A-0249, PI: Richard). Observations and data reduction are presented in \citet{patricio2018}. The PSF of the MUSE observations is $0.57''$, almost 5 times larger than that of HST-F390W, the reference filter for our clump extraction and analysis; MUSE data therefore cannot be used for the study of individual clumps. We use the MUSE data to estimate the average extinction in radial regions of the galaxy, using the relative strengths of nebular emission lines, as described in Appendix~\ref{sec:app:extinction}.
ALMA observations of A521-sys1 were acquired during Cycle 4 (ID: 2016.1.00643.S) in band 6, targeting the CO(4-3) emission line, and were presented in \citet{girard2019} and \citet{nagy2021}, along with the corresponding data reduction and analysis.
The high resolution of the ALMA observations (beam size: $0.19''\times0.16''$) allows the study of molecular gas on the same scales as the stellar content; the study of the individual giant molecular clouds (GMCs) is presented in \citet{dessauges2022}.
\subsection{Gravitational lens model}
\label{sec:lens_model}
The gravitational lens model used in this paper to recover the source properties of the individual clumps was constructed using the \textsc{lenstool}\footnote{\hyperlink{https://projets.lam.fr/projects/lenstool/wiki}{https://projets.lam.fr/projects/lenstool/wiki}} software \citep{jullo2007}, and is described in detail in Appendix~\ref{sec:app:lensmodel}.
Its final Root Mean Square (RMS) accuracy in the image plane, based on the positions of 33 multiple images, is $0.08''$, i.e., comparable to the pixel scale of the HST data.
The amplification map, showing the magnification factor, $\rm \mu$, associated with each position in A521-sys1, is shown in the right panel of Fig.~\ref{fig:hstdata}. The magnification factor in the CI region ranges from $\rm \mu\sim2$ to $\rm \mu\sim6$, with a median of 4 and a shallow spatial gradient across the image. In LN and LS, magnifications are typically higher (median $\rm \mu\sim10$), reaching values $\rm \mu>20$ over much of the arc.
\section{Data analysis}
\label{sec:datanalysis}
\subsection{Clump extraction}\label{sec:sextraction}
We use the F390W observations, corresponding to the rest--frame UV, as the reference for extracting the clump catalog. F390W is the filter in which the clumps are most easily detectable; the galaxy looks less clumpy at longer wavelengths, as also shown quantitatively in the clumpiness analysis of Section~\ref{sec:clumpiness}.
We use the \texttt{SExtractor} software \citep{bertin1996} on a portion of the F390W data centred on A521-sys1 to extract sources that have a minimum of 4 pixels with $\rm S/N>3\sigma$ in background-subtracted images.
The local background is estimated using a convolution grid of 30 pixels ($\rm BACK\_SIZE=30$ in the configuration file); a smaller grid would result in sources being treated as part of the background and consequently removed.
Using the galaxy cluster mass model to trace the counter--images of all extracted sources, we notice that one clump (clump `9') is detected in LN while its counter--images in CI and LS are not, lying below the detection limits of \texttt{SExtractor}; they were therefore added manually to the catalog.
We also search the images in redder filters for red clumps that would have been missed in the F390W extraction; only one such source is found (clump `4'), lying below the detection limit in F390W but bright in all the other filters, and it is added to the sample.
Finally, by visual inspection we verify that none of the UV clumps clearly recognizable by eye are missed by our extraction, and we remove foreground galaxies from the catalog.
The final catalog counts 18 unique clumps. Many of those have multiple images; different images of the same clump have been assigned the same ID number, preceded by the sub-region where the image is observed (e.g. `$\rm ci\_1$', `$\rm ln\_1$' and `$\rm ls\_1$' are the same source `1' observed in the counter--image, the lensed-north and the lensed--south regions, respectively). The cross--identification of various images of the same clump was done with the help of the lens model.
In addition, some clumps were divided into multiple sub--peaks in the photometric analysis (see Section~\ref{sec:fitmultiple}); each peak was considered as a single entry in the catalog and we add letters to the ID to differentiate the entries (e.g. clumps `ci\_7a' and `ci\_7b' are two peaks of clump `7'). As a consequence, the final catalog counts 45 entries, spread across the 3 images of A521-sys1. The positions of all clumps on the F390W observations are shown in Fig.~\ref{fig:390data}.
\subsection{Clump modeling}
\label{sec:modelling}
We model the clumps on the image plane, deriving their sizes and magnitudes from the observed data, and later convert these to intrinsic values.
We assume that clumps have intrinsic 2D Gaussian profiles in the source plane and that local lensing transformations still result in Gaussian ellipses in the image plane; in order to describe the observed clump light profile we convolve the 2D Gaussian profiles with the instrumental point spread function i.e. the response of the instrument. Asymmetric gaussian profiles are used to take into account both intrinsic asymmetries in the clump shapes and distortions introduced by the lensing.
We perform the fits in cutouts of $9\times9$ pixels, centred on each clump. In order to take into account possible background luminosity in the vicinity of the clumps, we add to the clump model a $\rm 1^{st}$ degree polynomial function, described by three parameters ($c_0$, $c_x$ and $c_y$). The choice of a non-uniform background helps avoid contamination of the fit by the tails of nearby bright sources.
The `observable' model, $M_f$, to be fitted to the data in filter $f$ can be therefore summarized as:
\begin{multline}
M_f(x,y|x_0,y_0,F,\sigma_x,axr,\theta,c_0,c_x,c_y) = \\
F\cdot K_f\ast G_{2D}(x_0,y_0,\sigma_x,axr,\theta)+c_0+c_xx+c_yy,
\end{multline}
where $K_f$ parametrizes the PSF in filter $f$ (as described in Section~\ref{sec:data_hst}) and $F$ parametrizes the observed flux (both the PSF and the gaussian model are normalized). The gaussian model, $G_{2D}$, is parametrized by the minor-axis standard deviation $\sigma_x$, the axis ratio $axr$, defined by $axr\equiv\sigma_y/\sigma_x>1$, and the angle $\theta$, using the \texttt{astropy.modeling} package; by construction we impose that $\sigma_x$ refers to the minor axis of the 2D gaussian function.
The fit is performed with a least-squares method via the \texttt{python} package \texttt{lmfit} \citep{lmfit}. We calculate and report $\rm1\sigma$ uncertainties derived from the covariance matrix.
Each clump is fitted separately in each filter. Since the clumps are most easily detectable in F390W, we use it as the reference filter for determining the clump position and size. As a first step, we fit the clumps in F390W leaving all parameters free. The F390W data, along with the clump best--fit models and residuals, are shown in Appendix~\ref{sec:app:completetab}.
For the fit in F606W, F814W, F105W and F160W, we keep the resulting values for the clump centre ($x_0$ and $y_0$) and its size ($\sigma_x$, $axr$, and $\theta$) as fixed parameters, i.e. we fix the gaussian shape and its position, leaving free only the flux (and the background parameters). This choice assumes that the source has intrinsically the same shape and size in all bands.
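The model and fitting procedure can be sketched end to end with standard tools; here the empirical PSF is replaced by a simple Gaussian kernel of matching FWHM and the least-squares minimization uses \texttt{scipy} instead of \texttt{lmfit}, so this illustrates the model structure rather than reproducing the actual pipeline.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import fftconvolve

def gauss2d(x, y, x0, y0, sx, axr, theta):
    """Normalized elliptical 2D Gaussian; sx is the minor-axis sigma."""
    sy = axr * sx
    ct, st = np.cos(theta), np.sin(theta)
    xr = ct * (x - x0) + st * (y - y0)
    yr = -st * (x - x0) + ct * (y - y0)
    g = np.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2))
    return g / g.sum()

def model(p, x, y, psf):
    """Flux-scaled, PSF-convolved Gaussian plus 1st-degree polynomial background."""
    x0, y0, F, sx, axr, theta, c0, cx, cy = p
    src = F * fftconvolve(gauss2d(x, y, x0, y0, sx, axr, theta), psf, mode="same")
    return src + c0 + cx * x + cy * y

# Simulate a 9x9 cutout (as in the fits above) and recover the flux.
y, x = np.mgrid[0:9, 0:9].astype(float)
psf = gauss2d(x, y, 4.0, 4.0, 1.6 / 2.355, 1.0, 0.0)  # FWHM ~ 1.6 px (F390W)
truth = [4.2, 4.1, 50.0, 0.8, 1.3, 0.4, 1.0, 0.0, 0.0]
data = model(truth, x, y, psf)

guess = [4.0, 4.0, 40.0, 1.0, 1.2, 0.3, 0.8, 0.0, 0.0]
fit = least_squares(lambda p: (model(p, x, y, psf) - data).ravel(), guess)
print(fit.x[2])  # recovered flux, close to the input value of 50
```

With noiseless simulated data the input flux is recovered essentially exactly; on real data, the covariance matrix of the fit provides the $1\sigma$ uncertainties.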
\subsubsection{Fitting together multiple sources}
\label{sec:fitmultiple}
A variation of the fitting method described above is employed for clumps whose central positions are less than 4 pixels apart. Because of this proximity, the fit of each source would be strongly affected by the other, yielding unreliable results. For this reason we choose to fit nearby clumps in a single fitting run, using a larger cutout of $11\times11$ px and modelling two separate gaussians within it; this kind of fit applies only to 3 pairs of sources. In naming these cases we use the same numeric ID for the two sources, adding a letter to differentiate them (e.g. clumps `ci\_7a' and `ci\_7b' have been fitted together). In doing so we treat the two as separate peaks of the same source; this choice is driven solely by the resolution of our data.
An extreme case is clump `9': while in the LS image it appears as a single peak, it can be separated into 4 different sub-peaks (plus a separate image) in LN and into 3 sub-peaks in CI. For the fit of its LN representation we choose to fit all 4 peaks at the same time in an $11\times11$ cutout, imposing circular symmetry for the sources. This last choice is motivated by the number of free parameters being too large if asymmetric profiles were considered. The same approach is used to fit the 3 peaks in the CI region.
\subsubsection{Minimum resolvable $\rm \sigma_x$}
\label{sec:minreff}
Our fitting method has an intrinsic resolution limit driven mainly by the instrumental PSF, with a FWHM equal to 1.6, 1.9, 1.9, 3.7 and 4.0 px for F390W, F606W, F814W, F105W and F160W, respectively.
The convolution of the PSF with very narrow gaussian functions is indistinguishable from the PSF itself. To determine the minimum size we can resolve, we simulate clumps with various combinations of $\sigma_x$ and axis ratios, add them on top of the galaxy observations, and fit them in the same way as the real data. We derive a minimum resolvable size $\rm \sigma_{x,min}=0.4$ px for F390W.
All the sources whose fit results in $\rm \sigma_{x}<0.4$ px will be considered as upper limits in size, as shown in Fig.~\ref{fig:sizemag_obs}.
More details on the process to derive $\rm \sigma_{x,min}$ are given in Appendix~\ref{sec:app:reffmin}.
\subsubsection{Completeness of the sample}\label{sec:completeness}
We test the magnitude completeness of the clump sample by simulating clumps of various magnitudes, including them at random positions on top of the galaxy, and fitting them in the same way as for the real sources.
We estimate the completeness limit, $\rm lim_{com}$, as the magnitude above which the fit results become unreliable, using simulated sources of different sizes, $\rm \sigma_x=0.4$, $1.0$ and $2.0$ px, corresponding to $0.024"$, $0.06"$ and $0.12"$ respectively.
More details on the completeness test are given in Appendix~\ref{sec:app:completeness}.
The derived values for F390W are compared to the photometry of the actual clump sample in Fig.~\ref{fig:sizemag_obs}; for an easier comparison to clump magnitudes, we corrected the $\rm lim_{com}$ values for Galactic reddening in the figure. We find a completeness limit $\rm lim_{com}=27.4$ mag for point--like sources ($\rm \sigma_x\leq0.4$ px), consistent with the faintest unresolved clumps of our sample. This value is only slightly brighter than the minimum detectable magnitude ($\rm mag_{lim}$) discussed in Section~\ref{sec:data_hst}.
The completeness values get brighter for larger sources, namely $\rm lim_{com}=26.7$ mag and $25.2$ mag for sources with $\rm \sigma_x=1.0$ px ($0.06"$) and $2.0$ px ($0.12"$), respectively. These values are still consistent with the faintest clumps we observed at the corresponding sizes and suggest that $\rm lim_{com}$ traces the magnitudes of the sources which are $3\sigma$ above their local background, i.e. the lower limit chosen for extracting the clump catalog (as seen in Section~\ref{sec:sextraction}).
\subsection{Conversion to intrinsic sizes and magnitudes}\label{sec:convert_intrinsic}
The fluxes, F (in $\rm e^-/s$), are converted into observed AB magnitudes by considering the instrumental zeropoints relative to each filter (Tab.~\ref{tab:data}); the reddening introduced by the Milky Way ($0.29$, $0.19$, $0.11$, $0.07$ and $0.04$ magnitudes for F390W, F606W, F814W, F105W and F160W, respectively) is subtracted in each filter.
The photometry of all A521-sys1 clumps is collected in Appendix~\ref{sec:app:completetab} for all filters.
In order to convert observed magnitudes into absolute ones, we subtract the distance modulus ($44.3$ mag) and add the $k$ correction, a factor of $\rm 2.5\log(1+z)$.
Concerning the clump sizes measured in F390W, we calculate the geometrical mean of the minor and major $\rm \sigma$ derived from the fit, i.e. $\rm \sigma_{xy}\equiv \sqrt{\sigma_x\sigma_y}=\sigma_x\sqrt{axr}$, and we convert it to an effective radius. In the case of the gaussian function, the effective radius is equivalent to the half width at half maximum, $\rm HWHM=FWHM/2$ and therefore $\rm R_{eff,xy}\equiv FWHM/2=\sigma_{xy}\sqrt{2\ln{2}}$. The conversion from pixels to parsec is $\rm 1\ px\equiv 498.5\ pc$, derived considering the angular diameter distance of the galaxy of $1713$ Mpc and the pixel scale of the observations, $0.06$ arcsec/px.
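Both conversions are easy to verify numerically; a minimal sketch, taking the angular diameter distance at face value from the text:

```python
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

def px_to_pc(pixscale_arcsec, D_A_Mpc):
    """Physical size subtended by one pixel at angular diameter distance D_A."""
    return D_A_Mpc * 1e6 * pixscale_arcsec * ARCSEC_TO_RAD

def reff_px(sigma_x_px, axr):
    """Effective radius in pixels: R_eff = sigma_xy * sqrt(2 ln 2)."""
    sigma_xy = sigma_x_px * math.sqrt(axr)   # geometric mean of sigma_x, sigma_y
    return sigma_xy * math.sqrt(2.0 * math.log(2.0))

print(px_to_pc(0.06, 1713.0))   # ~498 pc, matching the quoted 498.5 pc/px
print(reff_px(0.4, 1.0) * px_to_pc(0.06, 1713.0))  # minimum resolvable R_eff, image plane
```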
The fitting method and the steps just described return sizes and luminosities as observed in the image plane, i.e. after the effect of the gravitational lensing. In order to recover the intrinsic properties of the clumps, we consider the lensing model, described in detail in Appendix~\ref{sec:app:lensmodel}. First, we focus on the best fit model, resulting in the magnification map shown in Fig.~\ref{fig:hstdata} (right panel);
for each clump we identify the region enclosed within $\rm R_{eff}$ and use the median amplification value of the selection as the face--value considered for de-lensing sizes and luminosities. We use the standard deviation of the values within the selected region as a first estimate of the uncertainty on the magnification, $\rm \delta\mu_1$.
Second, we consider 500 models from the MCMC chain produced with \textsc{lenstool} (Appendix~\ref{sec:app:lensmodel}). These models sample the posterior distribution of each parameter in the mass model of the cluster. For each of these realisations, we re-measure the median amplification of each clump and use their standard deviation as a measure of the uncertainty related to the best-fit model, $\rm \delta\mu_2$. We have checked that for each clump the magnification of the best-fit model is not biased with respect to the median of the distribution of magnifications over the 500 models.
We account for both the magnification uncertainty related to the clump extension ($\rm \delta\mu_1$) and the one related to the lens model uncertainties ($\rm \delta\mu_2$) by taking their root sum squared, $\rm \delta\mu = \sqrt{\delta\mu_1^2+\delta\mu_2^2}$.
Intrinsic luminosities and sizes are derived by dividing the observed quantities by the magnification value and by its square-root, respectively.
The final uncertainties combine the photometric and magnification uncertainties via the root sum squared. In this way they include possible magnification gradients close to the source positions; regions with higher magnifications also have steeper $\rm \mu$ gradients, so the sources within those regions have correspondingly large associated uncertainties.
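The de-lensing and error-propagation steps can be summarized in a few lines; the input numbers below are purely illustrative.

```python
import math

# De-lensing: fluxes scale as 1/mu, lengths as 1/sqrt(mu).  The two
# magnification uncertainties (clump extent, lens model) are combined in
# quadrature, then propagated in quadrature with the photometric error.

def delens(F_obs, dF_obs, R_obs, mu, dmu1, dmu2):
    dmu = math.hypot(dmu1, dmu2)
    F = F_obs / mu
    R = R_obs / math.sqrt(mu)
    dF = F * math.hypot(dF_obs / F_obs, dmu / mu)
    dR = R * 0.5 * dmu / mu          # from R propto mu**(-1/2)
    return F, dF, R, dR

# Illustrative clump: flux 100 +/- 5, observed R_eff 60 pc, mu = 4.0 with
# dmu1 = 0.3 (extent) and dmu2 = 0.5 (lens model).
print(delens(100.0, 5.0, 60.0, 4.0, 0.3, 0.5))
```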
\subsection{Broadband SED fitting}\label{sec:BBSED}
We use the broadband photometry to estimate ages and masses of the clumps. The limited number of filters available, covering the rest--frame wavelength range $\sim1700-8500$ \AA, does not allow us to fully break the degeneracy between ages and extinctions, nor to constrain the metallicity or the star formation history of the clumps. In order to mitigate the effect of degeneracies, we limit the number of free parameters by making some \textit{a priori} assumptions. In detail, we use the Yggdrasil stellar population synthesis code \citep{zackrisson2011};
Yggdrasil models are based on Starburst99 Padova-AGB tracks \citep{leitherer1999,vazquez2005} with a universal \citet{kroupa2001} initial mass function (IMF) in the interval $\rm 0.1-100\ M_\odot$. Starburst99 tracks are processed through the \texttt{Cloudy} software \citep{ferland2013} to obtain the evolution of the nebular continuum and line emission produced by the ionized gas surrounding the clumps. Yggdrasil adopts a spherical gas distribution around the emitting source, with hydrogen number density $\rm n_H = 10^2\ cm^{-3}$ and gas filling factor (describing the porosity of the gas) $\rm f_{fill} = 0.01$, typical of \HII\ regions \citep{kewley2002}, and assumes that the gas and the stars form from material of the same metallicity. We choose the models with a gas covering fraction $\rm f_{cov} = 0.5$, i.e. only $50\%$ of the Lyman continuum photons produced by the central source ionize the gas, but we point out that our fit results are essentially unaffected by the choice of $\rm f_{cov}$.
As our fiducial model we consider the stellar tracks obtained assuming continuous star formation for 10 Myr (C10), a Milky Way extinction law \citep{cardelli1989} and Solar metallicity ($\rm Z=0.02$, as suggested by the analysis in \citealp{patricio2018}).
The C10 assumption is motivated by most of the clumps in the sample having physical sizes of $\sim100$ pc. For star--forming regions on larger scales we can expect more complex star formation histories (SFHs), in particular prolonged star--formation events; the opposite is true at smaller scales, for stellar clusters and small clumps (a few tens of parsecs), where an instantaneous burst (`single stellar population' model, or SSP) is usually assumed.
Our clump sample contains sources with a wide range of physical scales (Section~\ref{sec:sizelum}); for this reason, in addition to the fiducial model, we consider an SSP model and a model assuming continuous star formation for 100 Myr (C100). The comparison between these two `extreme' assumptions gives the magnitude of the effect of the SFH on the derived properties.
To test the effects of the choice of the extinction curve, we consider a fourth model with the starburst curve \citep{calzetti2000} instead of the MW one.
Due to the uncertainties associated with the study of stellar metallicity in A521-sys1 in \citet{patricio2018}, we consider a further model, assuming sub--Solar metallicity ($\rm Z=0.008$).
All the models used in the SED-fitting are summarized in Tab.~\ref{tab:SED_models}.
Considering the assumptions described above, we are left with three free parameters in our fits: age, mass and extinction, the latter parametrised by the color excess $\rm E(B-V)$.%
The photometric data of our catalog are fitted to the spectra of the models considered using a minimum-$\chi^2$ technique. Only sources with magnitude uncertainties below $0.6$ mag in more than three filters have been fitted.
We report in Section~\ref{sec:results_sed} the face--values relative to the minimum reduced $\rm \chi^2$ ($\rm \chi^2_{red.,min}$) for each clump, and we assign to them an uncertainty given by the range in properties spanned by the results satisfying the condition $\rm \chi^2_{red.}\le 1.07$ (consistent with $\rm 1\sigma$ uncertainties for fits with two degrees of freedom). In cases where the minimum \chisqred\ is above that threshold, we retain within the uncertainty range the values within $10\%$ of $\rm \chi^2_{red.,min}$.%
The differences in derived properties for each clump given by the choice of the different models of Tab.~\ref{tab:SED_models} are considered and discussed in Section~\ref{sec:results_sed}.
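The selection rule above can be sketched as a simple grid search (illustrative Python only; \texttt{model\_grid}, mapping parameter tuples to predicted band fluxes, and all other names are our own, not the actual fitting code):

```python
def fit_sed(fluxes, errors, model_grid, n_free=3):
    """Minimum-chi^2 fit of broadband fluxes over a grid of model spectra.

    Returns the best-fit parameter tuple, its reduced chi^2, and the list
    of grid points retained for the uncertainty range: those with reduced
    chi^2 <= 1.07, or within 10% of the minimum when the minimum itself
    exceeds that threshold.
    """
    n_dof = len(fluxes) - n_free  # e.g. 5 bands - 3 parameters = 2 dof
    red_chi2 = {}
    for params, model in model_grid.items():
        chi2 = sum((f - m) ** 2 / e ** 2
                   for f, m, e in zip(fluxes, model, errors))
        red_chi2[params] = chi2 / n_dof
    best = min(red_chi2, key=red_chi2.get)
    chi2_min = red_chi2[best]
    threshold = 1.07 if chi2_min <= 1.07 else 1.1 * chi2_min
    accepted = [p for p, c in red_chi2.items() if c <= threshold]
    return best, chi2_min, accepted
```

The uncertainty range on each property is then the spread of that property over the `accepted` grid points.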
\begin{table}
\centering
\begin{tabular}{llll}
Model & SFH & Ext. curve & Z \\
\hline
C10 (reference) & Const. SFR (10 Myr) & MW & 0.020 \\
SSP & Single burst & MW & 0.020 \\
C100 & Const. SFR (100 Myr) & MW & 0.020 \\
C10-SB & Const. SFR (10 Myr) & Starburst & 0.020 \\
C10-008 & Const. SFR (10 Myr) & MW & 0.008 \\
\hline
\end{tabular}
\caption{Models and relative assumptions used in the broad--band SED-fitting process.
In all cases spectra from the Yggdrasil stellar population synthesis code \citep{zackrisson2011} (based on Starburst99 Padova-AGB tracks), with \citet{kroupa2001} IMF, are considered.}
\label{tab:SED_models}
\end{table}
\subsection{Alternative clump selection and photometry}\label{sec:alternative}
Literature studies offer a variety of methods for extracting clump samples and analyzing them.
To test the reliability of our extraction and photometric analysis we consider an alternative method: we draw elliptical regions that best follow the $3\sigma$ contours above the galaxy background level to define the clump extent, and measure the flux of the clumps within those regions.
This method is used in analyses of GMC complexes from CO data \citep[e.g.][]{dessauges2019,dessauges2022} but has also been applied to the study of stellar clumps \citep[e.g.][]{cava2018}.
More details on the source extraction, size and photometry measurements with this alternative method are given in Appendix~\ref{sec:app:alternative}, while the derived properties and their differences to the ones of the reference method are discussed in Section~\ref{sec:alternative_results}.
\section{Photometric Results}
\label{sec:results_phot}
\begin{table*}
\centering
\begin{tabular}{lccrrrrrrrr}
\multicolumn{1}{c}{ID} & \multicolumn{1}{c}{RA} & \multicolumn{1}{c}{Dec} & \multicolumn{1}{c}{$\mu$} & \multicolumn{1}{c}{$\rm R_{eff}$} & \multicolumn{1}{c}{$\rm Mag_{UV}$} & \multicolumn{1}{c}{Age} & \multicolumn{1}{c}{$\rm log(M)$} & \multicolumn{1}{c}{E(B-V)} & \multicolumn{1}{c}{$\rm log\langle\Sigma_M\rangle$} & \multicolumn{1}{c}{$\rm T_{cr}$} \\
\ & \multicolumn{1}{c}{[hh:mm:ss]} & \multicolumn{1}{c}{[dd:mm:ss]} & \ & \multicolumn{1}{c}{[pc]} & \multicolumn{1}{c}{[AB]} & \multicolumn{1}{c}{[Myr]} & \multicolumn{1}{c}{[$\rm M_\odot$]} & \multicolumn{1}{c}{[mag]} & \multicolumn{1}{c}{[$\rm M_\odot pc^{-2}$]} & \multicolumn{1}{c}{[Myr]} \\
\multicolumn{1}{c}{(0)} & \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} & \multicolumn{1}{c}{(7)} & \multicolumn{1}{c}{(8)} & \multicolumn{1}{c}{(9)} & \multicolumn{1}{c}{(10)} \\
\hline
\hline
ci\_1 & 4:54:07.0521 & -10:13:16.964 & $3.7^{\pm 0.2}$ & <$138.0^{\pm 3.7}$ & $-17.6^{\pm 0.1}$ & $4^{+2}_{-3}$ & $7.38^{+0.11}_{-0.06}$ & $0.22^{+0.01}_{-0.04}$ & >$2.3^{+0.11}_{-0.06}$ & <$1.7^{+0.1}_{-0.3}$ \\[1mm]
ci\_3 & 4:54:07.0607 & -10:13:17.565 & $3.9^{\pm 0.2}$ & $314.8^{\pm 89.0}$ & $-16.3^{\pm 0.2}$ & $30^{+10}_{-0}$ & $7.89^{+0.05}_{-0.02}$ & $0.18^{+0.01}_{-0.03}$ & $2.1^{+0.25}_{-0.25}$ & $3.2^{+1.4}_{-1.4}$ \\[1mm]
ci\_4 & 4:54:07.0179 & -10:13:17.879 & $4.8^{\pm 0.3}$ & $132.5^{\pm 115.8}$ & $-15.0^{\pm 0.2}$ & $11^{+2}_{-3}$ & $7.64^{+0.08}_{-0.06}$ & $0.53^{+0.07}_{-0.09}$ & $2.6^{+0.76}_{-0.76}$ & $1.2^{+1.5}_{-1.5}$ \\[1mm]
ci\_5 & 4:54:07.0897 & -10:13:17.389 & $3.5^{\pm 0.2}$ & <$237.5^{\pm 60.9}$ & $-15.7^{\pm 0.2}$ & $50^{+0}_{-0}$ & $7.28^{+0.00}_{-0.00}$ & $0.00^{+0.00}_{-0.00}$ & >$1.74^{+0.22}_{-0.22}$ & <$4.2^{+1.6}_{-1.6}$ \\[1mm]
ci\_7a & 4:54:06.9343 & -10:13:17.386 & $6.0^{\pm 0.5}$ & $196.4^{\pm 68.9}$ & $-15.5^{\pm 0.2}$ & $50^{+50}_{-49}$ & $7.94^{+0.13}_{-0.50}$ & $0.19^{+0.41}_{-0.13}$ & $2.55^{+0.33}_{-0.58}$ & $1.5^{+0.9}_{-0.8}$ \\[1mm]
ci\_7b & 4:54:06.9206 & -10:13:17.390 & $6.3^{\pm 0.5}$ & $298.0^{\pm 99.3}$ & $-16.1^{\pm 0.2}$ & $11^{+69}_{-10}$ & $7.28^{+0.54}_{-0.09}$ & $0.31^{+0.16}_{-0.31}$ & $1.53^{+0.61}_{-0.3}$ & $6.0^{+3.0}_{-8.0}$ \\[1mm]
ci\_8 & 4:54:07.0529 & -10:13:16.650 & $3.5^{\pm 0.2}$ & <$138.0^{\pm 57.7}$ & $-16.1^{\pm 0.1}$ & $20^{+0}_{-5}$ & $8.29^{+0.07}_{-0.28}$ & $0.47^{+0.06}_{-0.04}$ & >$3.21^{+0.37}_{-0.46}$ & <$0.6^{+0.4}_{-0.4}$ \\[1mm]
ci\_9a & 4:54:07.0006 & -10:13:16.819 & $4.1^{\pm 0.2}$ & <$115.7^{\pm 3.3}$ & $-14.5^{\pm 0.3}$ & $15^{+5}_{-1}$ & $6.91^{+0.34}_{-0.13}$ & $0.41^{+0.08}_{-0.09}$ & >$1.98^{+0.34}_{-0.13}$ & <$2.2^{+0.3}_{-1.3}$ \\[1mm]
ci\_9b & 4:54:06.9922 & -10:13:16.951 & $4.3^{\pm 0.3}$ & $148.9^{\pm 41.8}$ & $-15.0^{\pm 0.3}$ & $50^{+0}_{-10}$ & $6.89^{+0.04}_{-0.02}$ & $0.00^{+0.05}_{-0.00}$ & $1.75^{+0.25}_{-0.24}$ & $3.3^{+1.4}_{-1.4}$ \\[1mm]
ci\_9c & 4:54:07.0007 & -10:13:17.050 & $4.3^{\pm 0.3}$ & <$113.3^{\pm 3.4}$ & $-14.7^{\pm 0.3}$ & $60^{+40}_{-10}$ & $7.06^{+0.20}_{-0.05}$ & $0.00^{+0.04}_{-0.00}$ & >$2.15^{+0.2}_{-0.06}$ & <$1.8^{+0.1}_{-0.5}$ \\[1mm]
ci\_10 & 4:54:06.9492 & -10:13:16.684 & $4.7^{\pm 0.3}$ & $163.2^{\pm 124.4}$ & $-15.0^{\pm 0.4}$ & $14^{+26}_{-5}$ & $7.36^{+0.54}_{-0.08}$ & $0.40^{+0.17}_{-0.15}$ & $2.14^{+0.86}_{-0.67}$ & $2.2^{+2.5}_{-3.7}$ \\[1mm]
ci\_11 & 4:54:06.9141 & -10:13:17.163 & $5.9^{\pm 0.4}$ & $111.4^{\pm 26.8}$ & $-15.2^{\pm 0.2}$ & $12^{+18}_{-11}$ & $6.84^{+0.47}_{-0.08}$ & $0.26^{+0.14}_{-0.18}$ & $1.95^{+0.52}_{-0.22}$ & $2.3^{+0.8}_{-2.4}$ \\[1mm]
ci\_14 & 4:54:07.1624 & -10:13:16.335 & $2.7^{\pm 0.1}$ & $448.6^{\pm 46.3}$ & $-18.1^{\pm 0.1}$ & $40^{+0}_{-27}$ & $8.61^{+0.04}_{-0.47}$ & $0.13^{+0.15}_{-0.02}$ & $2.5^{+0.1}_{-0.48}$ & $2.4^{+0.9}_{-0.4}$ \\[1mm]
ci\_15a & 4:54:07.0211 & -10:13:16.236 & $3.6^{\pm 0.2}$ & <$278.1^{\pm 7.3}$ & $-17.1^{\pm 0.1}$ & $5^{+2}_{-1}$ & $7.76^{+0.02}_{-0.08}$ & $0.40^{+0.01}_{-0.05}$ & >$2.08^{+0.03}_{-0.09}$ & <$3.1^{+0.3}_{-0.1}$ \\[1mm]
ci\_15b & 4:54:07.0140 & -10:13:16.392 & $3.7^{\pm 0.2}$ & $137.3^{\pm 3.6}$ & $-15.6^{\pm 0.1}$ & $20^{+0}_{-0}$ & $7.73^{+0.02}_{-0.04}$ & $0.31^{+0.01}_{-0.02}$ & $2.66^{+0.03}_{-0.04}$ & $1.1^{+0.1}_{-0.1}$ \\[1mm]
ci\_16 & 4:54:07.0497 & -10:13:16.259 & $3.4^{\pm 0.2}$ & $577.3^{\pm 115.7}$ & $-17.3^{\pm 0.2}$ & $60^{+0}_{-0}$ & $8.14^{+0.02}_{-0.00}$ & $0.00^{+0.01}_{-0.00}$ & $1.82^{+0.18}_{-0.17}$ & $6.0^{+1.8}_{-1.8}$ \\[1mm]
ci\_17 & 4:54:06.9778 & -10:13:16.144 & $3.9^{\pm 0.2}$ & <$126.3^{\pm 81.2}$ & $-15.3^{\pm 0.2}$ & $4^{+2}_{-1}$ & $6.76^{+0.08}_{-0.09}$ & $0.27^{+0.04}_{-0.06}$ & >$1.76^{+0.56}_{-0.57}$ & <$3.0^{+2.9}_{-2.9}$ \\[1mm]
ci\_18 & 4:54:07.1194 & -10:13:16.912 & $3.1^{\pm 0.1}$ & $178.2^{\pm 70.9}$ & $-16.1^{\pm 0.1}$ & $20^{+0}_{-8}$ & $7.76^{+0.02}_{-0.27}$ & $0.24^{+0.12}_{-0.01}$ & $2.46^{+0.35}_{-0.44}$ & $1.6^{+1.0}_{-0.9}$ \\[1mm]
ln\_1 & 4:54:06.6065 & -10:13:20.897 & $11.0^{\pm 0.8}$ & <$80.1^{\pm 2.8}$ & $-17.6^{\pm 0.1}$ & $11^{+1}_{-2}$ & $7.12^{+0.06}_{-0.06}$ & $0.07^{+0.04}_{-0.05}$ & >$2.52^{+0.07}_{-0.07}$ & <$1.0^{+0.1}_{-0.1}$ \\[1mm]
ln\_2 & 4:54:06.5362 & -10:13:21.911 & $21.8^{\pm 1.6}$ & <$50.3^{\pm 9.9}$ & $-15.2^{\pm 0.1}$ & $11^{+1}_{-1}$ & $6.58^{+0.04}_{-0.03}$ & $0.19^{+0.03}_{-0.04}$ & >$2.38^{+0.18}_{-0.17}$ & <$0.9^{+0.3}_{-0.3}$ \\[1mm]
ln\_3 & 4:54:06.7141 & -10:13:20.003 & $6.4^{\pm 0.6}$ & $214.1^{\pm 72.9}$ & $-15.6^{\pm 0.3}$ & $7^{+93}_{-6}$ & $7.48^{+0.54}_{-0.11}$ & $0.47^{+0.13}_{-0.42}$ & $2.02^{+0.61}_{-0.32}$ & $2.9^{+1.5}_{-3.8}$ \\[1mm]
ln\_4 & 4:54:06.7692 & -10:13:19.588 & $3.4^{\pm 0.4}$ & <$140.2^{\pm 61.4}$ & $-15.0^{\pm 0.3}$ & $10^{+30}_{-9}$ & $7.61^{+0.45}_{-0.12}$ & $0.55^{+0.16}_{-0.28}$ & >$2.52^{+0.59}_{-0.4}$ & <$1.3^{+0.9}_{-1.5}$ \\[1mm]
ln\_5 & 4:54:06.6649 & -10:13:20.718 & $8.0^{\pm 0.7}$ & $170.2^{\pm 35.1}$ & $-15.3^{\pm 0.2}$ & $40^{+20}_{-32}$ & $7.15^{+0.09}_{-0.53}$ & $0.03^{+0.27}_{-0.03}$ & $1.89^{+0.2}_{-0.56}$ & $3.0^{+1.4}_{-1.0}$ \\[1mm]
ln\_6 & 4:54:06.5781 & -10:13:19.957 & $16.5^{\pm 1.0}$ & <$57.9^{\pm 36.6}$ & $-13.7^{\pm 0.3}$ & $4^{+1}_{-1}$ & $6.40^{+0.06}_{-0.06}$ & $0.37^{+0.03}_{-0.04}$ & >$2.07^{+0.55}_{-0.55}$ & <$1.4^{+1.3}_{-1.3}$ \\[1mm]
ln\_7 & 4:54:06.7850 & -10:13:18.739 & $1.5^{\pm 0.2}$ & $484.2^{\pm 112.4}$ & $-17.2^{\pm 0.2}$ & $1^{+99}_{-0}$ & $8.07^{+0.40}_{-0.31}$ & $0.46^{+0.05}_{-0.46}$ & $1.91^{+0.44}_{-0.37}$ & $5.0^{+2.1}_{-4.1}$ \\[1mm]
ln\_8 & 4:54:06.5573 & -10:13:22.002 & $20.0^{\pm 1.7}$ & <$98.1^{\pm 16.2}$ & $-14.5^{\pm 0.2}$ & $15^{+15}_{-4}$ & $7.02^{+0.47}_{-0.07}$ & $0.31^{+0.13}_{-0.10}$ & >$2.24^{+0.49}_{-0.16}$ & <$1.5^{+0.4}_{-1.5}$ \\[1mm]
ln\_9 & 4:54:06.7297 & -10:13:18.834 & $15.8^{\pm 7.8}$ & $115.8^{\pm 62.5}$ & $-14.4^{\pm 0.6}$ & $90^{+110}_{-89}$ & $7.17^{+0.40}_{-0.83}$ & $0.01^{+0.56}_{-0.01}$ & $2.25^{+0.62}_{-0.95}$ & $1.6^{+1.5}_{-1.8}$ \\[1mm]
ln\_9a & 4:54:06.6850 & -10:13:19.162 & $119.4^{\pm 76.4}$ & $25.3^{\pm 9.5}$ & $-12.1^{\pm 0.7}$ & $---$ & $---$ & $---$ & $---$ & $---$ \\[1mm]
ln\_9b & 4:54:06.6938 & -10:13:19.247 & $40.8^{\pm 11.4}$ & $42.8^{\pm 11.2}$ & $-13.2^{\pm 0.4}$ & $7^{+93}_{-6}$ & $6.39^{+0.66}_{-0.21}$ & $0.43^{+0.17}_{-0.43}$ & $2.33^{+0.7}_{-0.31}$ & $0.9^{+0.4}_{-1.7}$ \\[1mm]
ln\_9c & 4:54:06.7074 & -10:13:19.115 & $106.9^{\pm 44.8}$ & $39.1^{\pm 11.9}$ & $-12.2^{\pm 0.5}$ & $50^{+252}_{-49}$ & $6.82^{+0.44}_{-0.73}$ & $0.26^{+0.50}_{-0.26}$ & $2.84^{+0.52}_{-0.78}$ & $0.5^{+0.3}_{-0.5}$ \\[1mm]
ln\_9d & 4:54:06.6937 & -10:13:19.053 & $642.4^{+1338.6}_{-641.4}$ & $14.1^{+15.0}_{-14.1}$ & $-10.1^{+2.3}_{-1.1}$ & $---$ & $---$ & $---$ & $---$ & $---$ \\[1mm]
ln\_10 & 4:54:06.7057 & -10:13:17.705 & $2.7^{\pm 0.5}$ & <$237.1^{\pm 54.2}$ & $-16.3^{\pm 0.3}$ & $1^{+89}_{-0}$ & $7.49^{+0.44}_{-0.33}$ & $0.39^{+0.07}_{-0.39}$ & >$1.95^{+0.48}_{-0.38}$ & <$3.3^{+1.4}_{-3.1}$ \\[1mm]
ln\_12 & 4:54:06.5671 & -10:13:22.744 & $50.2^{\pm 11.4}$ & <$74.2^{\pm 15.1}$ & $-13.3^{\pm 0.3}$ & $3^{+7}_{-2}$ & $6.07^{+0.24}_{-0.16}$ & $0.34^{+0.07}_{-0.10}$ & >$1.53^{+0.3}_{-0.24}$ & <$3.0^{+1.0}_{-1.4}$ \\[1mm]
ln\_13 & 4:54:06.5553 & -10:13:22.927 & $225.6^{\pm 78.5}$ & $19.3^{+27.6}_{-19.3}$ & $-10.6^{\pm 0.5}$ & $---$ & $---$ & $---$ & $---$ & $---$ \\[1mm]
ls\_1 & 4:54:06.4604 & -10:13:24.085 & $7.8^{\pm 0.5}$ & <$84.1^{\pm 2.9}$ & $-17.3^{\pm 0.1}$ & $5^{+1}_{-2}$ & $7.18^{+0.04}_{-0.03}$ & $0.19^{+0.03}_{-0.02}$ & >$2.53^{+0.05}_{-0.04}$ & <$1.0^{+0.1}_{-0.1}$ \\[1mm]
ls\_2 & 4:54:06.4853 & -10:13:23.066 & $25.9^{\pm 2.5}$ & <$46.1^{\pm 4.1}$ & $-14.6^{\pm 0.1}$ & $3^{+1}_{-1}$ & $6.61^{+0.07}_{-0.04}$ & $0.34^{+0.02}_{-0.02}$ & >$2.48^{+0.11}_{-0.09}$ & <$0.8^{+0.1}_{-0.1}$ \\[1mm]
ls\_3 & 4:54:06.4322 & -10:13:25.466 & $4.3^{\pm 0.3}$ & <$142.2^{\pm 68.6}$ & $-15.5^{\pm 0.2}$ & $12^{+1}_{-0}$ & $7.39^{+0.02}_{-0.04}$ & $0.38^{+0.01}_{-0.06}$ & >$2.29^{+0.42}_{-0.42}$ & <$1.7^{+1.3}_{-1.3}$ \\[1mm]
ls\_4 & 4:54:06.3976 & -10:13:26.049 & $3.4^{\pm 0.2}$ & <$165.8^{\pm 54.3}$ & $-15.2^{\pm 0.2}$ & $8^{+32}_{-7}$ & $7.71^{+0.41}_{-0.07}$ & $0.58^{+0.14}_{-0.30}$ & >$2.48^{+0.5}_{-0.29}$ & <$1.5^{+0.7}_{-1.4}$ \\[1mm]
ls\_5 & 4:54:06.4618 & -10:13:24.964 & $5.5^{\pm 0.4}$ & <$142.0^{\pm 50.2}$ & $-15.4^{\pm 0.2}$ & $40^{+20}_{-32}$ & $7.20^{+0.08}_{-0.54}$ & $0.03^{+0.26}_{-0.03}$ & >$2.1^{+0.32}_{-0.62}$ & <$2.1^{+1.4}_{-1.2}$ \\[1mm]
ls\_6 & 4:54:06.3934 & -10:13:24.044 & $5.6^{\pm 0.4}$ & $276.8^{\pm 57.9}$ & $-16.0^{\pm 0.2}$ & $40^{+50}_{-33}$ & $7.71^{+0.13}_{-0.51}$ & $0.11^{+0.29}_{-0.10}$ & $2.03^{+0.22}_{-0.54}$ & $3.3^{+1.5}_{-1.2}$ \\[1mm]
ls\_7 & 4:54:06.3565 & -10:13:25.426 & $3.4^{\pm 0.2}$ & $404.7^{\pm 59.2}$ & $-17.0^{\pm 0.2}$ & $50^{+50}_{-10}$ & $8.28^{+0.14}_{-0.05}$ & $0.13^{+0.05}_{-0.12}$ & $2.27^{+0.19}_{-0.14}$ & $3.0^{+0.7}_{-0.9}$ \\[1mm]
ls\_8 & 4:54:06.4956 & -10:13:23.390 & $18.1^{\pm 1.9}$ & $115.6^{\pm 35.0}$ & $-14.3^{\pm 0.2}$ & $12^{+1}_{-2}$ & $7.12^{+0.03}_{-0.04}$ & $0.45^{+0.05}_{-0.05}$ & $2.19^{+0.27}_{-0.27}$ & $1.7^{+0.8}_{-0.8}$ \\[1mm]
ls\_9 & 4:54:06.4098 & -10:13:24.546 & $5.0^{\pm 0.3}$ & <$206.4^{\pm 78.0}$ & $-15.2^{\pm 0.3}$ & $5^{+2}_{-1}$ & $6.98^{+0.03}_{-0.10}$ & $0.38^{+0.03}_{-0.06}$ & >$1.55^{+0.33}_{-0.34}$ & <$4.9^{+2.8}_{-2.8}$ \\[1mm]
ls\_11 & 4:54:06.3552 & -10:13:25.084 & $3.6^{\pm 0.2}$ & <$168.8^{\pm 31.7}$ & $-15.7^{\pm 0.1}$ & $13^{+1}_{-1}$ & $7.20^{+0.02}_{-0.02}$ & $0.28^{+0.04}_{-0.04}$ & >$1.94^{+0.16}_{-0.16}$ & <$2.8^{+0.8}_{-0.8}$ \\[1mm]
ls\_12 & 4:54:06.5318 & -10:13:23.573 & $22.5^{\pm 2.4}$ & <$80.8^{\pm 26.1}$ & $-13.6^{\pm 0.3}$ & $3^{+3}_{-2}$ & $5.91^{+0.14}_{-0.12}$ & $0.26^{+0.03}_{-0.05}$ & >$1.3^{+0.31}_{-0.3}$ & <$4.1^{+2.0}_{-2.1}$ \\[1mm]
\hline
\end{tabular}
\caption{Main intrinsic properties of the clumps in A521-sys1 and relative uncertainties: (1)-(2) RA and Dec coordinates; (3)-(5) magnification factors, effective radii and absolute UV magnitudes (from F390W), derived as described in \S~\ref{sec:modelling} and \S~\ref{sec:convert_intrinsic} and presented in \S~\ref{sec:sizelum}; (6)-(8) ages, masses and color excesses, for the reference C10 model (Tab.~\ref{tab:SED_models}), derived as described in \S~\ref{sec:BBSED} and presented in \S~\ref{sec:results_sed}; (9) mass surface densities, defined as $\rm \langle\Sigma_M\rangle = M/(2\pi R_{eff}^2)$ and discussed in \S~\ref{sec:masses}; (10) crossing times, defined as $\rm T_{cr}\equiv 10 \sqrt{{R^{3}_{eff}/GM}}$ and discussed in \S~\ref{sec:ages}. Upper and lower limits are indicated by `$<$' and `$>$', respectively. }
\label{tab:main_prop}
\end{table*}
\subsection{UV sizes and magnitudes of the clumps}
\label{sec:sizelum}
We show the distribution of observed sizes and F390W magnitudes of the clumps in Fig.~\ref{fig:sizemag_obs}. Magnitudes have been corrected for Galactic reddening. We plot apparent sizes, i.e. not corrected for the effect of magnification. The observed magnitudes range mostly between 25 and 27 mag (AB system), while sizes are mainly clustered below 600 pc. The minimum size, $235$ pc, is set by the choice of $\rm \sigma_{x,min}=0.4$ px described in Section~\ref{sec:minreff} and Appendix~\ref{sec:app:reffmin}.
Many of the clumps observed have upper limits in size, i.e. they show a light profile consistent with the instrumental PSF, at least on their minor axis.
We do not observe systematic differences for clumps in different counter--images of the galaxy, as can be verified by comparing the median sizes and magnitudes reported at the top and on the right side of Fig.~\ref{fig:sizemag_obs}. In the same figure we report the completeness limits, $\rm lim_{com}$, derived in Appendix~\ref{sec:app:completeness} and discussed in Section~\ref{sec:completeness}, as black stars connected by a dashed line; all sources are above the $\rm lim_{com}$ value or consistent with it.
Absolute UV magnitudes and clump sizes after de-lensing are shown in Fig.~\ref{fig:sizemag_dl}. The values shown are the intrinsic sizes and luminosities of the clumps, also reported in Tab.~\ref{tab:main_prop}. De--lensing reveals a wide range of intrinsic properties, spanning $\sim$~$8$ magnitudes and sizes between $\sim10$ and $\sim600$ pc.
This suggests that we are observing a wide variety of clumps, from large star-forming regions on scales of hundreds of parsecs down to objects approaching the scale of star clusters.
The distributions of sizes and magnitudes are summarized in histograms in Fig.~\ref{fig:sizemag_dl}; while clumps in the CI and LS regions have similar property distributions, clumps in the LN region are on average smaller and fainter, as suggested by the median values, $\rm med(R_{eff})=77$, $142$ and $156$ pc and $\rm med(Mag_{UV})=-14.5$, $-15.4$ and $-15.7$ mag for LN, LS and CI, respectively. This difference is driven by the large amplification factors reached in some sub-regions of the LN image, and specifically by a few sources in LN that, thanks to such amplification, can be resolved into their sub--components; four of those sources are the peaks of the same clump `9', already described in Section~\ref{sec:fitmultiple}. We note that many size measurements return only upper limits, affecting the distributions and median values just discussed. Nevertheless, the differences found between median values in CI, LN and LS remain even when removing clumps with size upper--limits.
Some of the brightest and largest sources in the CI are outside the region that produces multiple images (see Fig.~\ref{fig:hstdata}) and therefore do not have a counterpart either in LN or in LS (black circles in the bottom panel of Fig.~\ref{fig:sizemag_dl}). Neglecting clumps without multiple images would produce a minimal effect on the median values discussed above.
Despite differences in median magnitude and sizes, clumps appear to share similar surface brightnesses between the three sub-regions, consistent with the conservation of surface brightness by gravitational lensing.
\subsection{Clumpiness}
\label{sec:clumpiness}
We measure the clumpiness of A521-sys1 in its three sub--regions for each filter; we define clumpiness as the fraction of the galaxy luminosity coming from clumps, relative to the total luminosity of the galaxy. This definition has already been used in the literature \citep[e.g.][]{messa2019}, and in high redshift galaxies it has also been used as a proxy for the cluster formation efficiency \citep{vanzella2021b}.
To avoid contamination from nearby cluster members, we subtract them from the observations using the \texttt{Ellipse} class in the \texttt{photutils} \texttt{python} library, which provides the tools for an elliptical isophote analysis (following the methods described by \citealp{jedrzejewski1987}). This subtraction was not needed in the F390W filter; at the redshift of A521-sys1 this filter corresponds to the rest--frame FUV regime, where we do not expect significant contamination, as confirmed by visual inspection.
The orange ellipse and blue and green boxes in Fig.~\ref{fig:hstdata} (left panel) mark the regions of the galaxy included in the extraction of the total flux of the system. These contours are chosen to ensure that all the extracted clumps lie within the area, and are the same for all filters.
We checked that increasing the area covered by these regions would add $<5\%$ to the galaxy flux, while including mostly local background emission.
In order to exclude the contribution of the local background from the measurement of the galaxy flux, we perform aperture photometry in the aforementioned elliptical and rectangular regions, employing an annular sky region with a width of $0.3"$ (5 px) around each of the three apertures.
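Schematically, this background-subtracted aperture photometry works as follows (a minimal pure-Python sketch in which boolean masks over a flattened pixel list stand in for the elliptical/rectangular apertures and the sky annuli; not the actual photometry code):

```python
def aperture_flux(pixels, in_aperture, in_annulus):
    """Sum the flux in an aperture after subtracting the local background,
    estimated as the median pixel value in a surrounding annulus and
    scaled by the number of aperture pixels."""
    ap = [p for p, m in zip(pixels, in_aperture) if m]
    sky = sorted(p for p, m in zip(pixels, in_annulus) if m)
    n = len(sky)
    median_sky = sky[n // 2] if n % 2 else 0.5 * (sky[n // 2 - 1] + sky[n // 2])
    return sum(ap) - median_sky * len(ap)
```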
A foreground galaxy is located on top of the northern part of the LN image. Despite the subtraction of this galaxy some residuals remain, and for this reason a small circular region covering it is excluded from the flux measurement. Since we are interested in measuring the source-plane flux of the galaxy, the nearby region within the close critical line (in red in the magnification map of the right panel of Fig.~\ref{fig:hstdata}), corresponding to the position of the clumps ln\_9a,b,c,d, is also excluded, as it represents a further multiple image of a fraction of the A521-sys1 galaxy.
The source-plane flux of each of the sub-regions is calculated by dividing the observed flux by its magnification, on a pixel-by-pixel basis.
The de-lensed flux of clumps is calculated by dividing the clump photometry by the amplification factor assigned to it, as already described in Section~\ref{sec:convert_intrinsic}. The ratios of these two measurements, for each filter and in each sub-region, give the clumpiness values, reported in Fig.~\ref{fig:clumpiness}.
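The clumpiness measurement thus reduces to a ratio of de-lensed fluxes; a minimal sketch (assuming flattened pixel lists in place of the image and magnification maps, with names of our own choosing):

```python
def clumpiness(galaxy_pixels, mu_map, clump_fluxes, clump_mus):
    """Fraction of the source-plane galaxy flux contributed by clumps.

    The galaxy flux is de-lensed pixel by pixel, while each clump flux is
    de-lensed with the single magnification value assigned to that clump.
    """
    galaxy_src = sum(f / m for f, m in zip(galaxy_pixels, mu_map))
    clump_src = sum(f / m for f, m in zip(clump_fluxes, clump_mus))
    return clump_src / galaxy_src
```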
The main trend observed is that the clumpiness is high in the UV and decreases when moving to longer wavelengths. This trend confirms what can be seen in the single-band observations collected in Appendix~\ref{sec:app:completetab}, i.e. that the galaxy has a less clumpy appearance at redder wavelengths. The clumpiness in F390W, tracing rest-frame UV wavelengths ($\sim1900$ \AA) and therefore the massive stars from recent star formation, suggests that a considerable fraction ($20\%-50\%$) of the recent star formation is taking place in the observed clumps. Redder wavelengths trace older populations of stars distributed across the entire galaxy.
The clumpiness measurement for the LN sub-region is lower than the ones for CI and LS, though consistent at $2\sigma$ in the bluest band. We attribute this difference mainly to the presence of residuals from the foreground galaxy in the northern part of LN. This is confirmed by a second measurement of the clumpiness in LN, done by excluding the northern part of the sub-region (the one encompassing the clumps ln\_4, ln\_7, ln\_9 and ln\_10); this further measurement is plotted as empty blue markers in Fig.~\ref{fig:clumpiness}.
A second cause of this difference could be the lower average physical resolution reached in CI and LS, compared to LN, as literature studies have shown that low clump resolution leads to over-estimating their contribution to the galaxy luminosity \citep{tamburello2017,messa2019}.
\subsection{Color--color diagrams}
\label{sec:colorcolor}
Color--color diagrams provide an intuitive way of estimating the age range covered by the clumps in our sample. In particular we focus in Fig.~\ref{fig:color_color} on the colors given by the filters F390W--F814W (on the x-axis) and F105W--F160W (on the y-axis); because of the rest-frame wavelengths probed by these filters ($\sim2000$, $\sim4000$, $\sim5300$ and $\sim7700$ \AA) we call these colors $UV-B$ (x-axis) and $V-I$ (y-axis), although no conversion to the Johnson filter system is applied. %
We over-plot on such a diagram the stellar evolution tracks used for the broadband SED fitting (described in Section~\ref{sec:BBSED}), and in particular the SSP and C100 tracks, i.e. the two extreme cases of SFH considered. We notice that they show similar behaviours, with the $UV-B$ color remaining almost constant for ages from $1$ to $10$ Myr and then changing by $\sim3$ magnitudes from $10$ to $500$ Myr; the opposite is true for the $V-I$ color, which changes by $1$ mag in the first 10 Myr and then remains almost constant for the rest of the stellar evolution. Extinction moves the tracks towards redder colors, and therefore towards the top-right of the diagram in Fig.~\ref{fig:color_color}.
The colors of our clump sample are scattered by $\sim1.5$ mag on both the x and y axes. They all fall in the age range $\sim10-200$ Myr if the no--extinction tracks are considered. However, while their scatter in the UV--B color can be due to a spread in ages in the range $10-200$ Myr, the large spread in $V-I$ suggests the presence of some extinction and of younger ages ($1-10$ Myr). In particular, the data--points seem to be well aligned along the track with an extinction of $\rm E(B-V)=0.3$ mag.
\section{Results of Broadband-SED Fitting}
\label{sec:results_sed}
Individual values of the derived masses, ages and extinctions in the case of our reference (C10) model are collected in Tab.~\ref{tab:main_prop}; their distributions are shown in Fig.~\ref{fig:property_comparison_values}.
Three clumps have detections in fewer than 4 filters and were therefore not fitted.
Masses range mainly between $\rm 10^6$ and $\rm 10^8\ M_\odot$, but extend up to $\rm \sim10^9\ M_\odot$; ages are distributed between $1$ and $100$ Myr, with the majority of clumps being younger than 20 Myr. %
Extinctions range between $\rm E(B-V)=0.0$ mag and $\rm E(B-V)=0.6$ mag, with a peak around $\rm E(B-V)\sim0.3$ mag.
As discussed in Section~\ref{sec:BBSED}, the limited number of filters available requires making assumptions about the models to adopt. We show in Fig.~\ref{fig:property_comparison_values} the distribution of derived properties using the combinations of assumptions listed in Tab.~\ref{tab:SED_models}, to help unveil possible biases associated with the choice of stellar models.
The assumption of a longer star formation history (C100) produces older derived ages, on average (as already pointed out in the literature, e.g. \citealp{adamo2013}), and the opposite is true for an instantaneous burst of star formation (SSP); ages derived using our reference model, C10, are on average in-between (top panel of Fig.~\ref{fig:property_comparison_values}). We point out that the difference in median ages for these three models is only $\sim10$ Myr; the main difference is the presence of a considerable fraction of sources (almost one third of the sample) with ages $\gtrsim100$ Myr in the case of C100.
The C100 model also produces on average larger masses (by only $\sim0.10$ dex) and higher extinctions (by $\sim0.1$ mag).
Smaller differences are observed if either a lower metallicity (C10-008) or a different extinction curve (C10-SB) is assumed (bottom panel of Fig.~\ref{fig:property_comparison_values}).
Overall, we notice that the distribution of ages is the one most affected by the model assumptions, while the distribution of derived masses is similar in all cases.
We point out that the lowest median \chisqred\ value is found when the reference C10 model is considered. We find 4 sources in the sample (ci\_8, ci\_9a, ci\_15b, ln\_1) whose SED fit with the SSP model gives a much lower \chisqred\ than with our reference one; the difference in derived properties between the two models is however negligible.
The distributions just discussed only show the best fit values and are associated in some cases to large uncertainties.
The uncertainties within the reference model reach $\sim0.5$ dex, $\sim1.0$ dex and $\sim0.3$ mag for log(M), log(Age) and E(B-V), respectively, but their distributions are mainly concentrated around zero.
The differences in derived properties caused by the choice of different models are mostly consistent with the intrinsic uncertainty within a single model.
\subsection{Masses and Densities}
\label{sec:masses}
We compare the derived masses to the sizes of the clumps in Fig.~\ref{fig:mass_size_density} (left panel).
As pointed out in the previous paragraph, the range of masses spans more than two orders of magnitude; this range is similar in all three images of A521-sys1, and the difference in median mass is $\sim0.4$ dex between clumps in the LN field (less massive) and the ones in CI.
We observe quite large scatter in mass ($\rm \gtrsim0.5$ dex) at any given clump size, but also a robust correlation between mass and size (Spearman's coefficient: 0.78, p-value: $10^{-9}$), probably driven by incompleteness effects, as large low--mass clumps fall below our detection limits.
By combining masses and sizes we study the average mass density of the clumps. We choose to focus on surface densities instead of volume densities because in many cases we are dealing with star-forming regions hundreds of parsecs in size whose intrinsic 3D shape is unknown, so we cannot assume spherical symmetry.
We define $\rm \langle\Sigma_M\rangle = M/(2\pi R_{eff}^2)$\footnote{The factor of 2 in the denominator follows from $\rm R_{eff}$ being defined as the radius enclosing half of the source mass.} and plot the derived values in Fig.~\ref{fig:mass_size_density} (right panel). They span $\sim2$ orders of magnitude, in the range $\rm 10-1000\ M_\odot/pc^2$. We observe only a weak anti-correlation between clump size and surface density (Spearman's $\rho_s=-0.3$, \textit{p-val}: $0.06$). There is no significant density difference between clumps in different fields, with a $0.12$ dex difference between LN (denser clumps) and CI.
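For reference, the surface density definition and a minimal (tie-free) Spearman rank coefficient can be written as follows (illustrative Python; the published coefficients were presumably computed with standard statistics routines):

```python
import math

def surface_density(mass, r_eff):
    """<Sigma_M> = M / (2 pi R_eff^2); the factor of 2 accounts for R_eff
    enclosing only half of the mass (units: M_sun and pc)."""
    return mass / (2.0 * math.pi * r_eff ** 2)

def spearman(x, y):
    """Spearman rank correlation for samples without ties."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

A $\rm 10^5\ M_\odot$ cluster with $\rm R_{eff}=4$ pc indeed gives $\rm \langle\Sigma_M\rangle\approx10^3\ M_\odot/pc^2$, the local young-massive-cluster benchmark used below.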
For comparison, a typical low-redshift young massive star cluster of $\rm 10^5\ M_{\odot}$ has a median size of $4$ pc \citep{brown2021} and therefore a typical surface density of $\rm 10^3\ M_\odot/pc^2$; this value, shown as a black solid line in the right panel of Fig.~\ref{fig:mass_size_density}, is almost one order of magnitude larger than the median values found for our sample, although we note that a good fraction of our measurements are upper limits in size and therefore lower limits in terms of mass density. Two clumps have $\rm \langle\Sigma_M\rangle$ values comparable to those of local massive clusters, namely one of the sub-peaks of clump ln\_9 and ci\_8. The latter displays a large mass density despite being observed at scales $>10$ times larger than local massive clusters and is discussed in more detail in Section~\ref{sec:radial_trends}.
\subsection{Age distributions}
\label{sec:ages}
Fig.~\ref{fig:property_comparison_values} suggests that the bulk of the clumps in A521-sys1 have ages close to $\sim10$ Myr, with a few possibly as old as $\sim100$ Myr. This picture does not change drastically when considering age uncertainties and other stellar models; all clumps have derived ages $<200$ Myr, and the majority of them $<100$ Myr. The derived age distribution is therefore consistent with the clumps being clearly detected in F390W, which covers the rest-frame $\sim2000$ \AA\ UV emission associated with young stars.
Taking $100$ Myr as an upper limit on the age of the clumps (as suggested by our reference C10 model), we estimate the SFRs of individual clumps; the derived values span the range $\rm 0.008-4\ M_\odot/yr$, consistent with the range covered by the UV magnitudes when those are converted to SFR values using the factor from \citet{kennicutt2012} (see also Section~\ref{sec:literature_clumps} and Fig.~\ref{fig:sizemag_literature}).
Summing the contributions from all clumps we obtain $12.4$, $2.9$ and $\rm 3.9\ M_\odot/yr$ in CI, LN and LS, respectively. Compared to the total SFR of the galaxy, $\rm \sim 16\ M_\odot/yr$ \citep{nagy2021}\footnote{The original value $\rm SFR=26\ M_\odot/yr$ reported in \citet{nagy2021} was derived assuming a \citet{salpeter1955} IMF and is here converted to match the \citet{kroupa2001} IMF used to derive clump masses.}, the clumps appear to account for a good fraction of the galaxy's current SFR, as already suggested by the clumpiness analysis in Section~\ref{sec:completeness}.
We note that the clump SFR values just derived are averaged over an age range of $100$ Myr and therefore constitute lower limits; values larger by a factor $\sim10$ would result from taking the best-fit individual clump ages, suggesting an increase in the very recent SF activity of A521-sys1.
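The SFR lower limits and the IMF conversion used in the footnote above can be sketched as follows; the Salpeter-to-Kroupa factor of $\sim$1.6 is our assumption here, and the clump mass is purely illustrative:

```python
def sfr_lower_limit(mass_msun, age_yr=1e8):
    """Clump SFR assuming constant star formation over `age_yr`
    (100 Myr by default); a lower limit when the true age is younger."""
    return mass_msun / age_yr

# Assumed Salpeter -> Kroupa SFR conversion factor (~1.6)
SALPETER_TO_KROUPA = 1.0 / 1.6

sfr_clump = sfr_lower_limit(2e7)        # 0.2 Msun/yr for a hypothetical 2e7 Msun clump
sfr_galaxy = 26.0 * SALPETER_TO_KROUPA  # ~16 Msun/yr, as quoted in the text
```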
Clump ages can be compared to their crossing time, which in terms of empirical parameters can be found as:
\begin{equation}
\rm T_{cr}\equiv 10 \left(\frac{R^{3}_{eff}}{GM}\right)^{1/2}
\end{equation}
Their ratio, named dynamical age $\rm \Pi\equiv Age/T_{cr}$ \citep[e.g.][]{gieles2011}, is used to distinguish bound ($\rm \Pi>1$) and unbound ($\rm \Pi<1$) agglomerates \citep[e.g.][for star clusters in local galaxies]{ryon2015,ryon2017,krumholz2019}.
Clumps in A521-sys1 have crossing times in the range $\rm T_{cr}=0.5-6.0$ Myr.
Considering the best-fit age values we derive dynamical ages $\Pi>1$ for most of the sample ($\sim90\%$), suggesting that many clumps may be gravitationally stable against expansion. This result is discussed in light of the apparent lack of old clumps in Section~\ref{sec:radial_trends}.
Similar fractions are found if either the SSP or the C100 models are assumed.
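The crossing-time and dynamical-age computation above can be sketched in Python; the clump values are hypothetical, and the unit constants (G in pc\,(km/s)$^2$/M$_\odot$, with pc/(km/s) converted to Myr) are our choice of convention:

```python
import math

G_PC_KMS2_MSUN = 4.301e-3   # G in pc (km/s)^2 / Msun
PC_PER_KMS_MYR = 0.9778     # 1 pc/(km/s) expressed in Myr

def crossing_time_myr(mass_msun, r_eff_pc):
    """T_cr = 10 * (R_eff^3 / (G M))^(1/2), returned in Myr."""
    return 10.0 * math.sqrt(r_eff_pc ** 3 / (G_PC_KMS2_MSUN * mass_msun)) * PC_PER_KMS_MYR

def dynamical_age(age_myr, mass_msun, r_eff_pc):
    """Pi = Age / T_cr; Pi > 1 suggests a clump stable against expansion."""
    return age_myr / crossing_time_myr(mass_msun, r_eff_pc)

# Hypothetical clump: 2e7 Msun, R_eff = 20 pc, best-fit age 10 Myr
t_cr = crossing_time_myr(2e7, 20.0)    # ~3 Myr, within the 0.5-6 Myr range above
pi = dynamical_age(10.0, 2e7, 20.0)    # Pi > 1 -> possibly bound
```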
\subsection{Extinctions}
\label{sec:extinctions}
As a sanity check on the extinction values obtained, we leverage archival VLT-MUSE observations of A521 to derive extinction values in annular sub--regions of the galaxy, using the Balmer decrement, i.e. the observed ratio of the $\rm H\gamma$ and $\rm H\delta$ emission lines (technical details of this analysis are given in Appendix~\ref{sec:app:extinction}). The depth of the VLT-MUSE data prevents us from constraining the extinction map of A521-sys1 with high precision, but the analysis suggests $E(B-V)$ values below $\sim0.7$ mag, confirming the range of extinctions found via the SED fitting process.
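A minimal sketch of the Balmer-decrement estimate: both the Case B intrinsic Hgamma/Hdelta ratio and the extinction-curve values at the two line wavelengths below are illustrative assumptions, not the exact values used in our analysis.

```python
import math

R_INT = 1.81      # assumed Case B intrinsic Hgamma/Hdelta (~0.469/0.259)
K_HGAMMA = 4.16   # assumed extinction-curve values near 4340 and 4102 A
K_HDELTA = 4.42

def ebv_from_balmer(r_obs):
    """E(B-V) from the observed Hgamma/Hdelta ratio.
    Dust suppresses Hdelta more than Hgamma, so r_obs > R_INT
    implies positive reddening."""
    return 2.5 * math.log10(r_obs / R_INT) / (K_HDELTA - K_HGAMMA)

ebv = ebv_from_balmer(1.95)  # ~0.3 mag for this hypothetical observed ratio
```

Because the two lines are close in wavelength, $k(\rm H\delta)-k(\rm H\gamma)$ is small and the derived $E(B-V)$ is very sensitive to the measured ratio, consistent with the limited precision noted above.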
We perform an additional test to estimate the impact of assuming an extinction value \textit{a priori} on the ages and masses derived via broadband SED fitting; this test is motivated by the lack of multi-band HST detections that affects studies of high-z clumps (whose rest-frame optical-UV emission falls beyond the observable wavelength range), forcing further assumptions on the clump models.
We consider two models with the same main assumptions as the reference C10 model, but limiting the range of extinction values allowed by the fit:
\begin{itemize}
\item C10-LE: the low extinction model, allowing extinctions only in the range $\rm E(B-V)<0.1$ mag;
\item C10-HE: the high extinction model, allowing extinctions only in the range $\rm 0.4<E(B-V)<0.5$ mag.
\end{itemize}
The results of these two models are shown in Fig.~\ref{fig:extinction_comparison_values}; as expected, lower (higher) extinctions force the fit toward older (younger) ages. For our sample the low--extinction model performs worst, with the age distribution shifted by $\sim0.75$ dex; we point out again that the masses are less affected by the choice of model, being shifted to larger values by only $0.3$ dex in the low--extinction model.
\section{Discussion}
\label{sec:discussion}
\subsection{UV size-magnitude comparison to z=0-3 literature samples}
\label{sec:literature_clumps}
We compare the intrinsic sizes and luminosities of the clumps in A521-sys1, presented in Section~\ref{sec:sizelum}, to other samples available in the literature in Fig.~\ref{fig:sizemag_literature}. Although masses and ages are derived for the A521-sys1 clumps, it is worth discussing UV magnitudes as tracers of the recent SFR and mass of the clumps for two main reasons: first, they are widely available for many systems at both low and high redshift (while mass estimates are much less common) and, second, they avoid comparing physical quantities typically derived under different assumptions in different samples.
In the same figure we show the sizes and luminosities of \HII\ regions in local ($z=0$) main-sequence (MS) galaxies from the SINGS sample \citep{kennicutt2003}. The SFR values of the SINGS sample have been converted to UV magnitudes using the conversion factor in \citet{kennicutt2012}. We observe that clumps in A521-sys1 are brighter than those in \citet{kennicutt2003} when sources at similar scales are compared, suggesting that star--forming regions in A521-sys1 are denser than local \HII\ regions.
Similar sizes and magnitudes are measured for clumps in the redshift range $\rm z=1-3$; we show in Fig.~\ref{fig:sizemag_literature} the clump samples of the Cosmic Snake (z=1.0, \citealp{cava2018}), \citet{wuyts2014} (z=1.7), \citet{johnson2017} (z=2.5) and three highly magnified clumps from \citet{vanzella2017a,vanzella2017b} ($\rm z\sim3.1$).
Studies of clumps at $z\geqslant1$ suggest an evolution of the clumps' average density with redshift \citep[e.g.][]{livermore2015}. We plot the average surface brightness at $z=0$, $1$ and $3$ derived by \citet{livermore2015} using clumps from the SINGS, WiggleZ \citep{wisnioski2012} and SHiZELS \citep{swinbank2012} samples, and the lensed arcs from \citet{jones2010}, \citet{swinbank2007,swinbank2009} and \citet{livermore2012}; our sample of clumps in A521-sys1 lies, like the other samples just presented, in the range of densities expected for redshifts $z=1-3$.
The main driver of the redshift evolution of clump densities is likely the galactic environment \citep[e.g.][]{livermore2015}: at higher redshift, higher gas turbulence and higher hydrostatic pressure at the disk midplane cause the gas to fragment into denser clouds \citep{dessauges2019,dessauges2022}. Differences in detection limits could also partly explain the trends, as galaxies at higher redshift typically have worse detection limits.
Supporting the hypothesis of an (internal) galactic environmental effect, studies of nearby samples of high-z analogs, e.g. GOALS LIRGs \citep{armus2009,larson2020}, DYNAMO gas-rich galaxies \citep{green2014,fisher2017a} and LARS starbursts \citep{ostlin2014,messa2019}, find clumps with surface densities comparable to the ones observed at redshift 1 and above. We point out that such galaxies sit above the MS of local galaxies (whereas the SINGS sample contains typical MS galaxies at z=0) but are consistent with MS galaxies at $z\gtrsim1$.
\subsection{Properties derived via the alternative photometry method}
\label{sec:alternative_results}
We compare the results presented in Sections~\ref{sec:results_phot} and \ref{sec:results_sed} to the ones obtained with the alternative extraction and photometry method introduced in Section~\ref{sec:alternative}. Overall, the alternative method fails to extract 5 sources (2 in CI, 1 in LN and 2 in LS). We checked that for bright isolated sources (e.g. top panel of Fig.~\ref{fig:contours}) the two methods give similar results (radii differ by less than a factor of 1.5, magnitude differences are $<0.3$ mag).
Large differences are observed for clumps consisting of a bright narrow peak and a diffuse tail (e.g. middle panel of Fig.~\ref{fig:contours}). The 2D fit of the reference method recovers only the bright peak, i.e. the densest core of the star-forming region, while the $3\sigma$ contour also includes the diffuse tail. This is the case for 6 clumps (ci\_1, ln\_1, ls\_1, ln\_3, ln\_5 and ls\_5); the derived sizes can differ by up to a factor of 4, and the magnitudes by up to $\sim1$ mag. These differences, in turn, translate into mass values larger by $\sim1$ order of magnitude and mass surface densities lower by $\sim0.4$ dex for sources ci\_1, ln\_1 and ls\_1 in the case of the alternative photometry. We deduce that, in the cases just mentioned, we are studying large star-forming regions via the alternative method, while the standard method focuses on their dense cores.
Another class of sources where we see differences between the two methods are clumps fitted by multiple peaks in the 2D fit but falling within the same $3\sigma$ profile and therefore considered as a single source in the alternative photometry. This is the case for 3 clumps (the groups ln\_9a,b,c,d, see bottom panel of Fig.~\ref{fig:contours}, ci\_7a,b and ci\_17a,b).
Despite the differences just mentioned, the overall distributions of clump sizes and F390W magnitudes are similar in the two analyses; the alternative method recovers, as median values, brighter (by $\sim 0.5$ mag) and larger (by less than a factor of $1.5$) clumps, but the median surface brightness of the clumps is the same with both methods. Similarly, the median mass recovered with the alternative method is larger by $0.2$ dex, but its surface density is smaller (by $0.2$ dex) with respect to the median values from the reference method. The age and extinction distributions are similar in the two cases.
We conclude that the methodology for extracting and analyzing clumps can have a strong effect, especially when studying non--Gaussian or multiple--peaked systems; on the other hand, the average differences between considering $3\sigma$ contours or 2D Gaussian fits are negligible in our sample.
\subsection{Lensing effect on derived properties}
\label{sec:lensing_effects}
Studying the same clumps imaged in the three regions introduced in Section~\ref{sec:data_hst} allows us to understand the effects of gravitational lensing on clump samples overall and on single sources.
Clumps that appear similar in size and magnitude on the image plane, i.e. in observed properties (Fig.~\ref{fig:sizemag_obs}), have intrinsic properties that differ on average by a factor $\sim2$ in size and by $\sim1$ mag when clumps in CI and LN are compared. Despite these differences, the observed surface brightness values are similar in all sub--regions, as a consequence of surface brightness conservation under gravitational lensing.
The mass values resulting from the SED fitting confirm the photometric results: clumps in the CI region appear more massive by $0.5$ dex than those in LN, but the median surface densities are similar in all sub--regions.
Overall, we observe on average smaller, less massive clumps in regions with larger magnification, but the distributions of these properties are not drastically different in the three sub--regions.
The clumpiness estimates are also similar (Fig.~\ref{fig:clumpiness}), and the slightly lower values retrieved in LN can be mainly attributed to the presence of a bright foreground galaxy, which is difficult to subtract completely from the data (Section~\ref{sec:clumpiness}).
Moving from the overall distributions to one--to--one analysis of individual clumps as observed in CI, LN and LS, we find that clumps with magnification differences smaller than a factor $\sim2$ between one image and another, e.g. source 4 (ci\_4, ln\_4 and ls\_4 have $\mu=4.8$, $3.4$ and $3.4$ respectively), display similar photometric and physical properties, consistent within uncertainties.
On the other hand, larger differences can be observed when clumps are greatly magnified in some sub--regions, as for clump 1, with an amplification $\mu=11$ in the LN image (ln\_1) but $\mu=3.7$ in the CI (ci\_1); in the latter case the derived mass value is larger by 0.25 dex, but the lower limit on the mass density is 0.25 dex smaller than the one derived for ln\_1.
A similar case is clump $9$ (bottom--right panel of Fig.~\ref{fig:contours}), which in the LS region (magnification $\mu=5$) appears as a single-peaked source with an estimated size upper limit $\rm R_{eff}<200$ pc, but at the large magnification of the LN region ($\mu\gtrsim50$) can be separated into 4 narrow peaks with physical scales between 15 and 50 pc. The individual sub--peaks have smaller derived sizes and masses than the single source ls\_9, but larger derived mass surface densities, suggesting that at smaller physical scales we are able to observe the denser cores of clumps (Fig.~\ref{fig:mass_size_density}); such a trend is confirmed by simulations of resolution effects on derived clump properties \citep{meng2020}.
An extreme case is clump 8, magnified by $\mu=20$ in the LN and LS images, compared to $\mu=3.5$ in the CI; for ci\_8 we derive a mass of $\rm \log(M/M_\odot)=8.3$, more than one order of magnitude larger than for ln\_8 and ls\_8 ($\rm \log(M/M_\odot)=7.0$ and $7.1$); its mass surface density is also one order of magnitude larger than those of ln\_8 and ls\_8.
We attribute such large mass and density values to the position of ci\_8, consistent with the bulge of the galaxy and with a massive molecular gas cloud found in the analysis of \citet{dessauges2022}. Its derived age, 20 Myr, suggests that some star formation is still ongoing even there. The image of clump 8 on the lensed arc is heavily distorted and magnified, so what we observe as ln\_8 and ls\_8 could be a dense star--forming core within source 8 itself.
\subsection{Galactocentric trends}
\label{sec:radial_trends}
Focusing on the CI, where the entire galaxy can be studied with an almost uniform magnification, we test for possible radial trends of A521-sys1 clumps' properties. In Fig.~\ref{fig:f814_vs_properties} we plot the positions of clumps in the CI, color-coded by their derived properties, on the F814W observations.
Radial trends in clump ages and masses can be used to test their survival and evolution within the host galaxy and, as a consequence, to test formation models of galaxies and their bulges. The presence of older and more massive clumps near the centre of a galaxy has been interpreted as a sign that the more massive clumps are able to survive bound for hundreds of Myr, migrate toward the centre of the galaxy, and merge there to form the galactic bulge, as suggested by simulations \citep[e.g.][]{bournaud2007,krumholz2010}; other simulations argue that such migrating clumps would have a marginal effect on bulge growth \citep[e.g.][]{tamburello2015}.
Running Spearman's correlation test, we do not find any statistically significant correlation between the clump physical properties plotted in Fig.~\ref{fig:f814_vs_properties} and galactocentric radius.
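Spearman's test is simply the Pearson correlation of the ranked data; a self-contained sketch (without the tie handling and p-value machinery of library implementations), run here on mock values rather than our catalogue:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Minimal version; assumes no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

rho = spearman_rho([0, 1, 2, 3, 4], [2, 1, 5, 4, 8])  # 0.8 for this mock data
```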
We observe massive clumps all along the spiral arms, with the most massive one at $\rm \sim7.5$ kpc from the centre (ci\_14). Similarly, we observe dense clumps both very close to the centre and further out along the spiral arms (e.g. ci\_4).
In particular, we observe two massive clumps close to the centre of the galaxy, namely ci\_1 and ci\_8 (the latter sitting at the coordinates of the bulge, \citealp{nagy2021}); their young ages (4 and 20 Myr, respectively) suggest that star formation is taking place at the centre of the galaxy as well. At the same time, the large mass, $\rm log(M/M_\odot)=8.3$, and density, $\rm \langle\Sigma_M\rangle> 10^3\ M_\odot pc^{-2}$, of clump ci\_8 may suggest that we are looking at the formation of a proto-bulge.
Fig.~\ref{fig:f814_vs_properties} suggests the presence of an age and extinction asymmetry between the two spiral arms, with the western arm being younger and more extincted than the eastern one. The difference is small (on average $\sim20$ Myr in age and $0.1$ mag in color excess) but consistent across the stellar models tested. Asymmetries are very common in late-type galaxies, but the uncertainties associated with the derived ages prevent us from drawing robust conclusions for A521-sys1.
Another useful metric to test the possible migration of clumps is the dynamical time of the galaxy, defined as the ratio between the radius and the rotation velocity; compared to the age of a clump, it probes whether the clump is still close to its natal region, age$\rm \lesssim t_{dyn}$, or whether it has survived enough dynamical times to have possibly migrated, age$\rm \gtrsim10\times t_{dyn}$ \citep[e.g.][]{forsterschreiber2011b,adamo2013}.
Considering the rotation curve of A521-sys1 \citep[][from MUSE data]{patricio2018}, we derive a $\rm t_{dyn}$ varying from $\sim10$ Myr near the centre to $\sim100$ Myr at $6$ kpc; these values are consistent with the ages spanned by the clumps, indicating that they are observed close to their natal regions. In addition, the clumpiness analysis (Section~\ref{sec:clumpiness}) shows that clumps do not dominate the light at (rest--frame) wavelengths $\rm \gtrsim3000$ \AA, suggesting that clumps do not survive as bound structures on time--scales longer than 100 Myr.
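The dynamical-time conversion can be sketched as follows; the rotation velocity used here is purely illustrative, not a value taken from \citet{patricio2018}:

```python
PC_PER_KMS_MYR = 0.9778  # 1 pc/(km/s) expressed in Myr

def t_dyn_myr(radius_kpc, v_rot_kms):
    """Galaxy dynamical time t_dyn = R / v_rot, converted to Myr."""
    return radius_kpc * 1e3 * PC_PER_KMS_MYR / v_rot_kms

# Hypothetical rotation point: v_rot = 60 km/s at R = 6 kpc
t = t_dyn_myr(6.0, 60.0)  # ~100 Myr
```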
The lack of old, migrating clumps seems in contrast with the large dynamical ages retrieved (Section~\ref{sec:ages}), which suggest that the clumps should be gravitationally stable against expansion. One possible cause of this inconsistency is that the dynamical age may not be a suitable metric for the gravitational stability of clumps on scales $>10$ pc; dynamical ages were introduced to study the stability of stellar clusters on scales of a few pc, assuming virial equilibrium \citep{gieles2011}. On the other hand, stellar evolution reddens the clump colors, such that a $500$ Myr old clump with $\rm M=2\cdot10^7\ M_\odot$ (the median value for our sample, found in Section~\ref{sec:results_sed}) would have, at the distance of A521-sys1, an observed magnitude of $29.64$ mag in F814W (and fainter magnitudes in bluer filters). While the depth of the observations in F814W reaches $27.5$ mag (Tab.~\ref{tab:data}), the completeness within A521-sys1 is shallower by $>0.5$ mag; we would therefore expect to observe such old clumps only in the case of large magnifications, $\mu\gtrsim10$, and thus only in limited regions. Moving to the NIR filters (F105W and F160W) would result in brighter observed magnitudes, but at the cost of worse spatial resolution and worse completeness, similarly leading to low chances of observing old clumps in A521-sys1.
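The distance modulus entering the observed-magnitude estimate above can be sketched for the redshift of A521-sys1, assuming an illustrative flat $\Lambda$CDM cosmology (the quoted $29.64$ mag additionally folds in the stellar-population mass-to-light ratio, which is not reproduced here):

```python
import math

H0 = 70.0          # km/s/Mpc (assumed)
OM, OL = 0.3, 0.7  # assumed flat LCDM densities
C_KMS = 299792.458

def luminosity_distance_mpc(z, n=10000):
    """D_L = (1+z) * (c/H0) * int_0^z dz'/E(z'), via the trapezoid rule."""
    e = lambda zp: math.sqrt(OM * (1 + zp) ** 3 + OL)
    dz = z / n
    integral = sum((1.0 / e(i * dz) + 1.0 / e((i + 1) * dz)) * 0.5 * dz
                   for i in range(n))
    return (1 + z) * (C_KMS / H0) * integral

def distance_modulus(z):
    """mu = 5 log10(D_L / 10 pc)."""
    return 5.0 * math.log10(luminosity_distance_mpc(z) * 1e6 / 10.0)

mu = distance_modulus(1.04)  # ~44.2 mag at the redshift of A521-sys1
```

A lensing magnification $\mu_{\rm lens}$ additionally brightens the observed magnitude by $2.5\log_{10}\mu_{\rm lens}$, which is why only highly magnified regions could reveal such old clumps.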
\section{Conclusions}
\label{sec:conclusion}
We analyzed the clump population of the gravitationally-lensed galaxy A521-sys1, a $\rm z=1.04$ galaxy with properties typical of main-sequence systems at similar redshift, i.e. elevated star formation ($\rm SFR=16\pm5\ M_\odot yr^{-1}$) and a gas-rich, rotation-dominated disk with high velocity dispersion \citep{patricio2018,girard2019,nagy2021}. A521-sys1 is characterized by a clumpy morphology in the NUV band, observed with HST WFC3-F390W; we use this as the reference filter for extracting the clump catalog and studying clump sizes and rest-frame UV photometry. Four additional HST filters, F606W and F814W from ACS and F105W and F160W from WFC3/IR, are used to characterize the ages and masses of the clumps via broad-band SED fitting.
The appearance of A521-sys1 is heavily affected by gravitational lensing, which produces multiple images of the same system and allows the study of clumps at different intrinsic scales, in the range $10-600$ pc. Roughly half of the galaxy is stretched into a wide arc, with magnification, $\rm \mu$, reaching factors of 10 and above; the arc consists of two mirrored images, which we call lensed-north (LN) and lensed-south (LS). The entire system is observable via a counter-image (CI) with a mean magnification $\rm \mu\sim4$.
A gravitational lens model was constructed for the entire A521 galaxy cluster \citep{richard2010} and later fine-tuned to constrain with better precision the area enclosing the A521-sys1 images, giving a final positional accuracy of $\rm 0.08''$, comparable to the pixel scale of the HST observations.
We derive the following results via photometric and broad-band SED analyses:
\begin{itemize}
\item we extract a sample of 18 unique clumps; many of them are imaged multiple times and some are resolved into sub-clumps when observed at high magnifications. As a consequence, the final sample counts 45 entries;
\item the intrinsic clump sizes range from $\sim10$ to $\sim600$ pc, suggesting that we are observing systems spanning from almost single clusters to large star-forming regions. Scales below $\sim50$ pc are resolved only in the LN region, which hosts small areas close to the critical lines with extreme magnifications ($\rm \mu>20$). Half of the recovered values are upper limits, suggesting that in many cases clumps are more compact than what we are able to resolve;
\item the interval of absolute UV clump magnitudes is comparable to those of other literature clump samples at similar redshifts and similar physical scales. We confirm that the surface brightnesses of clumps in $\rm z\gtrsim1$ galaxies are much larger than those of corresponding star-forming regions in local galaxies. On the other hand, the completeness analysis reveals that, given the depth of our observations, we would not be able to observe clumps with lower surface brightness;
\item the galaxy appears less clumpy in redder bands; this is quantitatively confirmed by the clumpiness analysis, measuring what fraction of the galaxy luminosity is produced by clumps. The clumpiness is high (around $40\%$) in rest-frame NUV, suggesting that a large fraction of the recent star formation is taking place in the clumps we observe, and decreases moving to V and I bands, where the old stellar population of the galaxy dominates the emission;
\item the derived clump masses range from $\rm 10^{5.9}\ M_\odot$ to $\rm 10^{8.6}\ M_\odot$, confirming that we are studying both clusters or cluster aggregations and large star-forming regions. The overall mass distribution and its median value ($\rm \sim2\cdot10^{7}\ M_\odot$) do not change considerably whether a 10 Myr continuum star formation model (C10, used as reference), a single stellar population model (SSP) or a 100 Myr continuum star formation model is considered; the same is true when testing different extinction models (\citealt{cardelli1989} and \citealt{calzetti2000}) and different metallicities.
The clump sample has a median mass surface density of $\rm \sim10^2\ M_\odot\ pc^{-2}$, but a few clumps reach densities typical of the most massive compact ($<5$ pc) stellar clusters observed in local galaxies ($\rm \sim10^3\ M_\odot\ pc^{-2}$). No statistically significant galactocentric trend is observed in either mass or mass density. Dense and massive clumps are observed both close to the galactic bulge and along the outskirts of the spiral arms;
\item the majority of the derived ages are $<100$ Myr, with many clumps having a best-fit age close to $10$ Myr. Such young ages are consistent with clumps being observed close to their natal regions, making the study of possible clump migration impossible. The dynamical age, defined by comparing clump ages with their crossing times, suggests that most of the clumps may be gravitationally stable against expansion;
\item clump extinctions are distributed in the range $\rm E(B-V)=0.0-0.6$ mag, consistent with the analysis of the Balmer decrement derived from VLT-MUSE observations. Testing the SED fitting with the extinction fixed in narrow intervals reveals that inaccurate assumptions (e.g. $\rm E(B-V)\sim0.0$ mag for the entire sample) would bias the derived ages by roughly a factor of 10, while having a much smaller impact on the masses;
\item the lack of galactocentric trends in any of the available physical properties and the lack of old, migrated clumps can be explained either by the dissolution of clumps after a few $100$ Myr or by stellar evolution making them fall below the detection limits of our data;
\item when comparing the properties observed in the different galaxy images (CI, LN and LS), clumps appear on average smaller, fainter and less massive in LN, suggesting that in regions with large magnifications we are able to observe the cores of the $>100$ pc star-forming regions seen with little or no magnification. Surface brightnesses and mass surface densities are overall very similar in all sub-regions.
\end{itemize}
\section*{Acknowledgements}
We thank the anonymous referee for the useful comments that helped improve the quality of the paper.
This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources \citep{bradley2020}.
M.M. acknowledges the support of the Swedish Research Council, Vetenskapsrådet (internationell postdok grant 2019-00502).
\section*{Data Availability}
The HST data underlying this article are accessible from the Hubble Legacy Archive (HLA) at \url{https://hla.stsci.edu/} or through the MAST portal at \url{https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html} (proposal IDs 15435 and 16670). The derived data generated in this research will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\bibliography{references} %
\appendix
\section{Supplementary photometric table and figures}
\label{sec:app:completetab}
We report in Tab.~\ref{tab:photometry} the clump photometry in all filters; we provide apparent magnitudes (and uncertainties), corrected for Galactic reddening, but uncorrected for lensing.
Data, best--fit clump models and fit residuals in F390W are shown in Fig.~\ref{fig:app:hstdata_390}; the observations in all the other filters are shown in Fig.~\ref{fig:app:hstdata}.
\begin{table*}
\centering
\begin{tabular}{lccccc}
\multicolumn{1}{c}{ID} & $\rm mag_{F390W}$ & $\rm mag_{F606W}$ & $\rm mag_{F814W}$ & $\rm mag_{F105W}$ & $\rm mag_{F160W}$ \\
\multicolumn{1}{c}{(0)} & (1) & (2) & (3) & (4) & (5) \\
\hline
\hline
ci\_1 & $24.45^{\pm0.05}$ & $24.34^{\pm0.05}$ & $24.30^{\pm0.10}$ & $24.12^{\pm0.21}$ & $24.71^{\pm0.05}$ \\
ci\_3 & $25.75^{\pm0.21}$ & $25.56^{\pm0.09}$ & $24.89^{\pm0.11}$ & $24.74^{\pm0.07}$ & $24.42^{\pm0.07}$ \\
ci\_4 & $26.79^{\pm0.21}$ & $25.72^{\pm0.08}$ & $25.17^{\pm0.08}$ & $24.67^{\pm0.07}$ & $24.22^{\pm0.07}$ \\
ci\_5 & $26.39^{\pm0.21}$ & $26.69^{\pm0.21}$ & $26.02^{\pm0.21}$ & $26.04^{\pm0.26}$ & $26.64^{\pm0.64}$ \\
ci\_7a & $26.02^{\pm0.23}$ & $25.39^{\pm0.14}$ & $24.82^{\pm0.12}$ & $24.36^{\pm0.08}$ & $24.26^{\pm0.10}$ \\
ci\_7b & $25.43^{\pm0.22}$ & $25.10^{\pm0.13}$ & $24.74^{\pm0.16}$ & $24.50^{\pm0.09}$ & $24.36^{\pm0.11}$ \\
ci\_8 & $26.01^{\pm0.13}$ & $25.89^{\pm0.11}$ & $25.25^{\pm0.10}$ & $24.23^{\pm0.07}$ & $23.33^{\pm0.07}$ \\
ci\_9a & $27.44^{\pm0.33}$ & $27.63^{\pm0.05}$ & $27.84^{\pm0.05}$ & $26.42^{\pm0.05}$ & $26.00^{\pm0.05}$ \\
ci\_9b & $26.91^{\pm0.24}$ & $27.30^{\pm0.05}$ & $26.98^{\pm0.05}$ & $26.63^{\pm0.05}$ & $---$ \\
ci\_9c & $27.26^{\pm0.32}$ & $27.28^{\pm0.05}$ & $26.33^{\pm0.05}$ & $26.05^{\pm0.05}$ & $26.88^{\pm0.05}$ \\
ci\_10 & $26.83^{\pm0.38}$ & $26.37^{\pm0.18}$ & $25.81^{\pm0.18}$ & $25.15^{\pm0.15}$ & $24.76^{\pm0.12}$ \\
ci\_11 & $26.38^{\pm0.15}$ & $25.96^{\pm0.12}$ & $25.82^{\pm0.19}$ & $25.60^{\pm0.05}$ & $25.36^{\pm0.25}$ \\
ci\_14 & $24.30^{\pm0.10}$ & $24.06^{\pm0.07}$ & $23.58^{\pm0.08}$ & $23.30^{\pm0.07}$ & $23.09^{\pm0.08}$ \\
ci\_15a & $25.06^{\pm0.05}$ & $24.66^{\pm0.08}$ & $24.32^{\pm0.11}$ & $23.91^{\pm0.05}$ & $24.21^{\pm0.05}$ \\
ci\_15b & $26.43^{\pm0.05}$ & $26.13^{\pm0.16}$ & $25.46^{\pm0.13}$ & $25.26^{\pm0.05}$ & $24.40^{\pm0.05}$ \\
ci\_16 & $24.88^{\pm0.23}$ & $24.64^{\pm0.09}$ & $24.07^{\pm0.14}$ & $23.93^{\pm0.08}$ & $24.29^{\pm0.19}$ \\
ci\_17 & $26.68^{\pm0.23}$ & $25.96^{\pm0.09}$ & $26.18^{\pm0.13}$ & $25.93^{\pm0.11}$ & $26.63^{\pm0.36}$ \\
ci\_18 & $26.18^{\pm0.14}$ & $25.49^{\pm0.08}$ & $25.29^{\pm0.10}$ & $24.98^{\pm0.11}$ & $24.52^{\pm0.12}$ \\
ln\_1 & $23.34^{\pm0.05}$ & $23.14^{\pm0.05}$ & $23.32^{\pm0.06}$ & $23.78^{\pm0.05}$ & $23.51^{\pm0.08}$ \\
ln\_2 & $25.00^{\pm0.12}$ & $24.51^{\pm0.06}$ & $24.68^{\pm0.07}$ & $24.58^{\pm0.07}$ & $24.47^{\pm0.05}$ \\
ln\_3 & $25.86^{\pm0.23}$ & $25.23^{\pm0.10}$ & $24.68^{\pm0.09}$ & $24.36^{\pm0.08}$ & $24.22^{\pm0.08}$ \\
ln\_4 & $27.19^{\pm0.27}$ & $26.18^{\pm0.11}$ & $25.66^{\pm0.11}$ & $25.09^{\pm0.09}$ & $24.73^{\pm0.08}$ \\
ln\_5 & $25.95^{\pm0.17}$ & $25.90^{\pm0.13}$ & $25.68^{\pm0.19}$ & $25.37^{\pm0.11}$ & $25.39^{\pm0.10}$ \\
ln\_6 & $26.72^{\pm0.33}$ & $26.01^{\pm0.15}$ & $25.86^{\pm0.17}$ & $25.47^{\pm0.18}$ & $26.08^{\pm0.26}$ \\
ln\_7 & $25.84^{\pm0.16}$ & $25.41^{\pm0.10}$ & $24.81^{\pm0.10}$ & $24.62^{\pm0.11}$ & $24.74^{\pm0.18}$ \\
ln\_8 & $25.71^{\pm0.16}$ & $25.20^{\pm0.08}$ & $24.77^{\pm0.11}$ & $24.35^{\pm0.08}$ & $23.88^{\pm0.05}$ \\
ln\_9 & $26.13^{\pm0.15}$ & $25.95^{\pm0.13}$ & $25.34^{\pm0.13}$ & $25.07^{\pm0.12}$ & $25.52^{\pm0.41}$ \\
ln\_9a & $26.25^{\pm0.18}$ & $25.98^{\pm0.05}$ & $25.53^{\pm0.18}$ & $25.17^{\pm0.05}$ & $25.19^{\pm0.05}$ \\
ln\_9b & $26.29^{\pm0.18}$ & $25.63^{\pm0.05}$ & $25.21^{\pm0.14}$ & $24.94^{\pm0.05}$ & $24.84^{\pm0.05}$ \\
ln\_9c & $26.19^{\pm0.19}$ & $25.53^{\pm0.05}$ & $24.77^{\pm0.10}$ & $24.27^{\pm0.05}$ & $24.06^{\pm0.05}$ \\
ln\_9d & $26.35^{\pm0.20}$ & $26.07^{\pm0.05}$ & $25.75^{\pm0.23}$ & $25.67^{\pm0.05}$ & $26.73^{\pm0.05}$ \\
ln\_10 & $26.10^{\pm0.13}$ & $25.79^{\pm0.10}$ & $25.31^{\pm0.14}$ & $25.29^{\pm0.19}$ & $25.98^{\pm0.85}$ \\
ln\_12 & $25.96^{\pm0.21}$ & $25.43^{\pm0.09}$ & $25.18^{\pm0.05}$ & $25.33^{\pm0.09}$ & $---$ \\
ln\_13 & $27.02^{\pm0.37}$ & $28.44^{\pm0.72}$ & $28.77^{\pm0.05}$ & $---$ & $---$ \\
ls\_1 & $24.00^{\pm0.05}$ & $23.81^{\pm0.06}$ & $23.93^{\pm0.07}$ & $23.91^{\pm0.05}$ & $24.31^{\pm0.11}$ \\
ls\_2 & $25.33^{\pm0.08}$ & $24.74^{\pm0.07}$ & $24.51^{\pm0.07}$ & $24.75^{\pm0.07}$ & $24.73^{\pm0.05}$ \\
ls\_3 & $26.42^{\pm0.19}$ & $25.67^{\pm0.07}$ & $25.46^{\pm0.09}$ & $24.94^{\pm0.08}$ & $24.68^{\pm0.05}$ \\
ls\_4 & $27.00^{\pm0.23}$ & $26.00^{\pm0.08}$ & $25.29^{\pm0.09}$ & $24.87^{\pm0.07}$ & $24.50^{\pm0.07}$ \\
ls\_5 & $26.25^{\pm0.18}$ & $26.17^{\pm0.12}$ & $26.02^{\pm0.18}$ & $25.62^{\pm0.18}$ & $25.67^{\pm0.22}$ \\
ls\_6 & $25.67^{\pm0.17}$ & $25.39^{\pm0.11}$ & $24.89^{\pm0.12}$ & $24.72^{\pm0.11}$ & $24.56^{\pm0.05}$ \\
ls\_7 & $25.20^{\pm0.15}$ & $24.88^{\pm0.08}$ & $24.22^{\pm0.09}$ & $23.96^{\pm0.08}$ & $23.85^{\pm0.09}$ \\
ls\_8 & $26.10^{\pm0.15}$ & $25.15^{\pm0.08}$ & $24.86^{\pm0.10}$ & $24.32^{\pm0.09}$ & $23.92^{\pm0.09}$ \\
ls\_9 & $26.55^{\pm0.25}$ & $26.03^{\pm0.11}$ & $25.90^{\pm0.13}$ & $25.36^{\pm0.11}$ & $25.75^{\pm0.05}$ \\
ls\_11 & $26.46^{\pm0.13}$ & $26.03^{\pm0.09}$ & $25.98^{\pm0.16}$ & $25.30^{\pm0.09}$ & $25.20^{\pm0.05}$ \\
ls\_12 & $26.57^{\pm0.22}$ & $26.17^{\pm0.16}$ & $26.11^{\pm0.25}$ & $26.37^{\pm0.26}$ & $27.33^{\pm0.88}$ \\
\hline
\end{tabular}
\caption{Apparent AB magnitudes (and relative uncertainties), corrected for Galactic reddening. Empty entries indicate a non-detection in the corresponding filter.}
\label{tab:photometry}
\end{table*}
\section{Update on lensing model}
\label{sec:app:lensmodel}
The starting point of our lens model is the LoCuSS cluster mass model presented in \citet{richard2010}, which was based on a limited number of star-forming clumps in the giant arc at $z=1$. The cluster RXCJ0454 has the smallest Einstein radius ($3.6"$) among the 20 LoCuSS clusters analysed in \citet{richard2010}, making it more similar to a group-like lens dominated by the brightest cluster galaxy (BCG). We follow the same approach in the parametrisation but improve the model to include new constraints from the HST images and from cluster members identified in the MUSE observations; we summarise the elements of the modelling below. The mass distribution of the cluster is parametrised as the sum of double Pseudo Isothermal Elliptical (dPIE) potentials: 1 cluster-scale component and multiple galaxy-scale components. These potentials are characterised by their center, ellipticity, position angle, velocity dispersion $\sigma$, and two characteristic radii, $r_{\rm core}$ and $r_{\rm cut}$.
We selected cluster members using the color selection of \citet{richard2010}, complemented by spectroscopically confirmed cluster members from MUSE, leading to a total of 52 galaxy-scale cluster members (indicated with white arrows in Fig.~\ref{fig:app:members}). To reduce the number of free parameters in the model we assume, as in previous works (e.g. \citealt{richard2014}), a mass-traces-light approach for these galaxy-scale components, in which the geometry (center, ellipticity and position angle) follows the light distribution and the other dPIE parameters are scaled with respect to the values of an L$^*$ galaxy ($\sigma^*$, $r_{\rm core}$ and $r_{\rm cut}$). The two exceptions are the BCG and the brightest galaxy located in the arc, whose $\sigma$ and $r_{\rm cut}$ parameters are fit independently. For the cluster-scale component, we only fix $r_{\rm cut}=1000$ kpc, as it is unconstrained. In total our model comprises 12 free parameters.
Regarding the constraints, we have complemented those used in \citet{richard2010} and reach 13 multiple systems of matched clumps in the giant arc, forming a total of 33 multiple images, all of them included at their spectroscopic redshift. Unfortunately, the Einstein radius is too small, and the MUSE data are not deep enough, to provide additional spectroscopic redshifts for multiple images. Accounting for the image multiplicity and the unknown source locations, these clump positions give 40 constraints, so the model is well constrained with respect to its 12 free parameters. The 33 multiple images of the clumps used to constrain the lens model are shown in Fig.~\ref{fig:app:members} as red circles.
The best-fit parameters of the \textsc{lenstool} mass model are presented in Table~\ref{tab:lenstool_params}. The model reproduces the observed locations of all constraints with an rms of $0.08"$, close to the precision of the HST positions. The velocity dispersion of the main (cluster-scale) dark matter halo component is $\sim$ 600 km/s, again confirming that the lens lies somewhere between a massive group and a low-mass cluster.
\begin{table*}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline\hline
Potential & $\Delta\alpha$ & $\Delta\delta$ & $e$ & $\theta$ & r$_{\rm core}$ & r$_{\rm cut}$ & $\sigma$ \\
& [arcsec] & [arcsec] & & [deg] & kpc & kpc & km$\,$s$^{-1}$ \\
\hline
DM1 & $ -0.7^{+ 0.2}_{ -0.2}$ & $ -0.5^{+ 0.2}_{ -0.3}$ & $ 0.65^{+ 0.02}_{-0.03}$ & $ 53.2^{+ 0.2}_{ -0.2}$ & $23^{+1}_{-1}$ & $[1000]$ & $610^{+4}_{-5}$ \\
BCG & $[ 0.0]$ & $[ -0.0]$ & $[0.24]$ & $[ 47.6]$ & $[0]$ & $81^{+47}_{-18}$ & $215^{+33}_{-12}$ \\
GAL1 & $[ 2.1]$ & $[ 6.8]$ & $[0.13]$ & $[ 58.0]$ & $[0]$ & $5^{+1}_{-1}$ & $27^{+29}_{-50}$ \\
L$^{*}$ galaxy & & & & & $[0.15]$ & $10^{+3}_{-1}$ & $180^{+4}_{-13}$\\
\hline
\end{tabular}
\caption{Best fit parameters of the \textsc{lenstool} mass model.}
\label{tab:lenstool_params}
\end{table*}
\section{Minimum resolvable size}
\label{sec:app:reffmin}
To determine the minimum clump size measurable with our method, we simulate synthetic sources with asymmetric Gaussian profiles and fit them in the same way as the real clumps. In more detail, we produce 3 sets of synthetic sources, with axis ratios uniformly distributed in the ranges $[1.0; 1.5]$, $[1.5; 2.0]$ and $[2.0; 4.0]$, respectively. We add a fourth set of sources with the axis ratio fixed at $axr=1.0$, i.e. with a circularly symmetric Gaussian profile.
For each set we simulate 500 sources with sizes uniformly distributed in the range $\rm \log(\sigma_{x,in}/[px])=[-2; 0.6]$, fluxes uniformly distributed in the range $\rm \log(flux_{in}/[e/s])=[0.0; 0.5]$ and a random angle $\theta$. These ranges are chosen to cover the properties of the A521-sys1 clump catalog. The sources are introduced at random positions in the region of the observations covered by the images of the A521-sys1 galaxy and then fitted one at a time, to avoid the artificial crowding that adding all 500 sources at once would introduce.
We define the Gaussian standard deviations derived from the fit as $\rm \sigma_{x,out}$, in contrast to the intrinsic ones used as input for the simulated clumps, $\rm \sigma_{x,in}$. We consider a fit good when the relative difference $\rm \sigma_{x,rel}\equiv|\sigma_{x,out}-\sigma_{x,in}|/\sigma_{x,out}$ is less than 0.2, i.e. when the relative error on the retrieved size is below $20\%$. We show the results of the test in Fig.~\ref{fig:test_reffmin}. In the left panel the fraction of good fits steeply increases for $\rm \sigma_{x,out}>0.4$ px. Above this value, the fraction of good fits stabilizes above $\sim50\%$, with a clear dependence on the axis ratio: more circular sources return better fits, on average. If, instead of $\rm \sigma_x$, we consider the geometric mean of the minor and major axes of the Gaussian, $\rm \sigma_{xy}\equiv\sqrt{\sigma_{x}\cdot\sigma_{y}}=\sigma_x\sqrt{axr}$, as done for estimating the effective radius of the real clumps, the fraction of good fits with $\rm \sigma_{xy}>0.4$ px flattens to $\sim80\%$, indicating that, for large $axr$, the derived $\rm \sigma_{xy}$ is more robust than either $\rm \sigma_{x}$ or $\rm \sigma_{y}$ alone. We observe a small decline of the fraction of good fits for the largest sizes, possibly driven by their lower average surface brightness; we deal with the completeness in surface brightness in detail in Appendix~\ref{sec:app:completeness}. We take $\rm \sigma_{x}=0.4$ px as the smallest size recoverable by our routine, since below this value the derived sizes appear uncorrelated with the input values.
We use $\rm \sigma_{x,out}$ rather than $\rm \sigma_{x,in}$ as the reference because the former is the quantity we derive for the real clumps.
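The good-fit bookkeeping used in this test can be sketched as follows (a minimal Python illustration; the function names and the toy data are ours, not part of the paper's pipeline):

```python
import numpy as np

def good_fit_fraction(sigma_in, sigma_out, tol=0.2):
    """Fraction of sources whose recovered width agrees with the input:
    the appendix classifies a fit as good when
    |sigma_out - sigma_in| / sigma_out < 0.2."""
    rel = np.abs(sigma_out - sigma_in) / sigma_out
    return float(np.mean(rel < tol))

def sigma_xy(sigma_x, axr):
    """Geometric mean of the two Gaussian axes,
    sigma_xy = sqrt(sigma_x * sigma_y) = sigma_x * sqrt(axr)."""
    return sigma_x * np.sqrt(axr)

# toy usage: log-uniform input sizes recovered with 10 per cent scatter
rng = np.random.default_rng(0)
s_in = 10 ** rng.uniform(-2.0, 0.6, size=500)
s_out = s_in * rng.normal(1.0, 0.1, size=500)
frac = good_fit_fraction(s_in, s_out)
```

The same binned fraction can then be plotted against $\rm \sigma_{x,out}$ or $\rm \sigma_{xy}$ to reproduce the panels of the figure.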
\section{Completeness test}
\label{sec:app:completeness}
We test the luminosity completeness of our observation in a similar way as described in Appendix~\ref{sec:app:reffmin}, i.e. by introducing synthetic sources in the field of view of the galaxy and fitting them in the same way as for the real clumps.
We use the map of the galaxy after subtracting the flux of the real clumps. Although most of the observed clumps have profiles consistent with the instrumental PSF, we simulate sources with different sizes in order to derive a surface-brightness limit. In more detail, we simulate 3 sets of clumps, with $\rm \sigma_x=0.4$, $1.0$ and $2.0$ px ($0.024"$, $0.06"$ and $0.12"$, respectively); larger sources are not measured in this galaxy and therefore do not need to be simulated. For all sets we simulate circularly symmetric sources, i.e. we set $axr\equiv1$. For each set we simulate 500 sources with fluxes drawn from a uniform distribution in the range $\rm \log(flux_{in}/[e/s])=[-2.0; 1.0]$ for $\sigma_x=0.4$ and $1.0$ px, and in the range $\rm \log(flux_{in}/[e/s])=[-1.0; 2.0]$ for $\sigma_x=2.0$ px.
Some of the synthetic clumps have recovered fluxes $\rm flux_{out}$ consistent with zero ($<10^{-4}$ e/s, i.e. more than two orders of magnitude below the input values), meaning that the fitting process does not recognize the source and treats the cutout as containing only background emission. These are 27 sources with $\rm \sigma_{x,in}=0.4$ px and $\rm flux_{in}<0.07$ e/s ($\rm 28.3\ mag$), 31 with $\rm \sigma_{x,in}=1.0$ px and $\rm flux_{in}<0.12$ e/s ($\rm 27.7\ mag$), and 11 with $\rm \sigma_{x,in}=2.0$ px and $\rm flux_{in}<0.39$ e/s ($\rm 26.4\ mag$). We call these $\rm flux_{in}$ values the detectability limits, $\rm lim_{det}$.
We observe that some sources with fluxes higher than the detectability limits are still not well fitted, and we therefore investigate the precision with which the input properties are recovered.
We calculate for each synthetic source the relative error on the recovered flux, $\rm flux_{rel}=|flux_{in}-flux_{out}|/flux_{in}$. The values of $\rm flux_{rel}$ cluster around zero for bright sources, but deviate toward larger values (indicating larger fitting uncertainties) for dimmer sources.
Fits where the relative error on the recovered flux is above $50\%$, i.e. $\rm flux_{rel}\ge0.5$, can be considered unreliable. We plot the fraction of acceptable fits, satisfying $\rm flux_{rel}<0.5$, as a function of $\rm flux_{in}$ in the left panel of Fig.~\ref{fig:app:completeness}. We define the completeness limits, $\rm lim_{com}$, as the flux values at which this fraction rises above $80\%$; these are more conservative than the detectability limits described above. The completeness limits for the three sets of sources are $\rm lim_{com,0.4}=0.15$ e/s ($\rm 27.5\ mag$), $\rm lim_{com,1.0}=0.30$ e/s ($\rm 26.7\ mag$) and $\rm lim_{com,2.0}=1.20$ e/s ($\rm 25.2\ mag$).
We repeat this process by calculating the relative error on the recovered size, $\rm \sigma_{rel}=|\sigma_{in}-\sigma_{out}|/\sigma_{in}$, and plotting the fraction of acceptable fits with $\rm \sigma_{rel}<0.5$ in the right panel of Fig.~\ref{fig:app:completeness}. The $\rm flux_{in}$ values corresponding to fractions above $80\%$ are equal to or smaller than the $\rm lim_{com}$ discussed above; we therefore keep the latter as the more conservative values.
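The completeness-limit bookkeeping described in this appendix can be sketched in Python (a minimal illustration with hypothetical names; the real pipeline's binning choices may differ):

```python
import numpy as np

def completeness_limit(flux_in, flux_out, rel_tol=0.5, frac_thresh=0.8, nbins=15):
    """Scan logarithmic flux bins from bright to faint and return the flux
    above which the fraction of acceptable fits
    (|flux_in - flux_out| / flux_in < rel_tol) stays above frac_thresh."""
    ok = np.abs(flux_in - flux_out) / flux_in < rel_tol
    edges = np.logspace(np.log10(flux_in.min()), np.log10(flux_in.max()), nbins + 1)
    idx = np.digitize(flux_in, edges) - 1
    limit = edges[0]                       # default: complete over the whole range
    for b in range(nbins - 1, -1, -1):     # bright -> faint
        sel = idx == b
        if sel.any() and ok[sel].mean() < frac_thresh:
            limit = edges[b + 1]           # faint edge of the last passing bin
            break
    return float(limit)

# toy usage: fits fail completely below 1 e/s
flux_in = np.logspace(-2.0, 1.0, 3000)
flux_out = np.where(flux_in >= 1.0, flux_in, 0.0)
lim = completeness_limit(flux_in, flux_out)   # close to 1 e/s
```

The same routine applies unchanged to the size-based criterion by passing $\rm \sigma_{in}$ and $\rm \sigma_{out}$ in place of the fluxes.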
In Section~\ref{sec:sizelum} of the main text we compare $\rm lim_{com}$ values found with this analysis to the magnitudes of the observed clumps.
As a final remark, we note that we measured an average completeness over the entire area covered by the 3 images of A521-sys1; treating the 3 regions defined in Section~\ref{sec:data_hst} separately would not significantly change the recovered values.
\section{Extinction map from MUSE}\label{sec:app:extinction}
We leverage the VLT-MUSE observations of A521 to estimate the nebular extinction of the galaxy. The spectrum at the redshift of A521-sys1 covers the wavelengths of two Balmer lines, namely $\rm H\gamma$ and $\rm H\delta$. At fixed gas density and temperature these lines have a fixed ratio, $\rm R_{\gamma\delta,intr}\equiv L_{H\gamma}/L_{H\delta}=1.81$ for electron density $\rm n_e=10^2\ cm^{-3}$ and electron temperature $\rm T_e=10000\ K$. The ratio changes only by $\pm0.01$ if $\rm T_e$ varies in the range $5000-20000$ K (values from \citealp{dopita2003}, based on \citealp{storey1995}).
A non-zero extinction changes this ratio by a factor that depends on the amount of extinction. We can therefore use the observed line ratio $\rm R_{\gamma\delta,obs}$ to derive the color excess $\rm E(B-V)$ from:
\begin{equation}\label{eq:ebv_muse}
\rm R_{\gamma\delta,obs} = R_{\gamma\delta,intr}\cdot10^{0.4\cdot E(B-V) [k(H\delta)-k(H\gamma)]}
\end{equation}
where $\rm k(H\gamma)$ and $\rm k(H\delta)$ are set by the extinction curve considered, in this case the Milky Way one \citep{cardelli1989}.
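This relation can be inverted directly for the colour excess. A minimal Python sketch follows; the $k$ values below are illustrative round numbers for a Milky Way-type curve, not taken from the text, and we write the exponent as $k(\rm H\delta)-k(\rm H\gamma)$, the sign for which positive extinction increases the observed ratio:

```python
import numpy as np

# Illustrative extinction-curve coefficients (assumed round numbers for a
# Cardelli-type Milky Way curve; the paper takes the exact values from
# Cardelli et al. 1989):
K_HGAMMA = 4.15   # k(4340 A), hypothetical
K_HDELTA = 4.41   # k(4102 A), hypothetical
R_INTR = 1.81     # intrinsic H-gamma / H-delta ratio from the text

def ebv_from_balmer(r_obs, r_intr=R_INTR, k_g=K_HGAMMA, k_d=K_HDELTA):
    """Invert R_obs = R_intr * 10**(0.4 * E(B-V) * (k(Hdelta) - k(Hgamma)))
    for the colour excess; H-delta, being bluer, is suppressed more by
    dust, so extinction raises the observed ratio above 1.81."""
    return 2.5 * np.log10(r_obs / r_intr) / (k_d - k_g)

# round trip at an assumed E(B-V) = 0.3
r_test = R_INTR * 10 ** (0.4 * 0.3 * (K_HDELTA - K_HGAMMA))
```

An observed ratio equal to the intrinsic one returns $\rm E(B-V)=0$, as expected.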
We divide the galaxy into 6 concentric annular regions with radii of 2 kpc, using the source-plane image to define the annuli and transposing them to the CI, LN and LS images using the lensing model, as described in \citet{nagy2021}. This division assumes that the largest extinction differences appear when studying the galaxy radially.
In each of the 6 bins, we use the \texttt{pPXF} tool \citep{cappellari2017} to fit and subtract the spectral continuum (including self-absorption of the lines) and the \textsc{Pyplatefit} tool to perform the line fit of the $\rm H\gamma$ and $\rm H\delta$ lines\footnote{\textsc{Pyplatefit} is a tool developed for the MUSE deep fields and is a simplified python version of the \textsc{Platefit IDL} routines developed by \citet{tremonti2004} and \citet{brinchmann2004} for the SDSS project.}.
Before deriving $\rm R_{\gamma\delta,obs}$ we de-redden the line fluxes for the Milky Way extinction ($\rm A_{V,MW}=0.21$ mag), using the \citet{cardelli1989} extinction function. We use the same extinction function to derive $\rm E(B-V)$ in Eq.~\ref{eq:ebv_muse}.
The derived $\rm E(B-V)$ values are shown in Fig.~\ref{fig:app:ebv}, along with the uncertainties coming from the line and continuum fitting. Due to the large uncertainties, all values are consistent within $1\sigma$ with zero extinction. However, we notice that the extinction in the 3 internal bins is consistently higher than in the external bins (where the face values become unphysically negative). The outermost bin has lower S/N than the others, translating into very large uncertainties that make it unreliable.
If differential extinction is considered, as in \citet{calzetti2000}, the nebular extinction we derived should be rescaled, $\rm E(B-V)_{star} = 0.44\cdot E(B-V)_{gas}$; in this case, the stellar extinction within the galaxy would be even lower.
Despite not being able to put hard constraints on the extinction values, this analysis suggests only a low average extinction in A521-sys1, reaching up to $E(B-V)\approx0.5$ mag in the internal regions and close to $E(B-V)\approx0.0$ mag in the outskirts. These values are consistent with the extinction values of the individual clumps, mainly distributed in the range $E(B-V)=0.0-0.5$ mag (Section~\ref{sec:results_sed}).
\section{Comparison between fiducial and alternative extraction and photometry}
\label{sec:app:alternative}
To test the reliability of our results we implement an alternative method for extracting and analyzing the clumps.
We measure the properties of the galactic diffuse background (median value and standard deviation, $\rm \sigma$) in a region within the galaxy devoid of clumps. We use contours at the $3\sigma$ level above the median background (after a 3-pixel smoothing) to extract clumps and define their extent. The sizes of the clumps are measured using ellipses that best trace the $3\sigma$ contours. We use $6\sigma$ contours to separate multiple peaks within the same $3\sigma$ contour, considering them as separate clumps.
When two $6\sigma$ peaks lie within the same $3\sigma$ contour, two ellipses are drawn, chosen to cover the region within the contour without intersecting each other. We take the geometric mean of the major and minor axes of each ellipse, $R_3 = \sqrt{ab}$, where the subscript $3$ indicates that this radius refers to the extent of the $3\sigma$ contours. In order to convert $R_3$ into an effective radius, we assume that clumps have Gaussian profiles and first derive an \textit{observed} effective radius:
\begin{equation}
\rm R_{eff,obs}=R_3\sqrt{\frac{\ln{(2)}}{\ln{(r_{peak}/3)}}},
\end{equation}
where $\rm r_{peak}$ is the ratio of the peak of each region to the RMS value. We then find the intrinsic effective radius by subtracting, in quadrature, the HWHM of the instrumental PSF, which, for F390W, is $0.8$ px,
\begin{equation}\label{eq:app:reff}
\rm R_{eff} = \sqrt{R_{eff,obs}^2-0.8^2}.
\end{equation}
Where $\rm R_{eff,obs}$ is smaller than the HWHM of the PSF, we manually set the intrinsic $\rm R_{eff}$ to the minimum detectable value, $\rm R_{eff,min} = \sigma_{x,min}\sqrt{2\ln{2}} \approx 1.2\sigma_{x,min} = 0.47$ px, described in Section~\ref{sec:minreff}.
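The two conversion steps (contour radius to observed half-light radius, then PSF subtraction in quadrature) can be sketched as follows; the function name is ours and the routine is a simplified stand-in for the actual pipeline:

```python
import numpy as np

HWHM_PSF = 0.8    # F390W PSF half-width at half-maximum [px], from the text
REFF_MIN = 0.47   # minimum detectable effective radius [px], from the text

def reff_from_contour(r3, r_peak, hwhm_psf=HWHM_PSF, reff_min=REFF_MIN):
    """Convert the 3-sigma contour radius R_3 of an assumed Gaussian clump
    into an intrinsic effective radius: scale R_3 to the observed
    half-light radius, then subtract the PSF HWHM in quadrature,
    falling back to the minimum resolvable size for unresolved sources."""
    reff_obs = r3 * np.sqrt(np.log(2.0) / np.log(r_peak / 3.0))
    if reff_obs <= hwhm_psf:
        return reff_min
    return float(np.sqrt(reff_obs**2 - hwhm_psf**2))

r_resolved = reff_from_contour(1.0, 6.0)    # r_peak = 6 makes R_eff,obs = R_3
r_unresolved = reff_from_contour(0.5, 6.0)  # below the PSF HWHM -> R_eff,min
```

The scaling factor follows from the Gaussian profile: the $3\sigma$ contour sits where the surface brightness equals $3$ times the RMS, i.e. at $R_3^2 = 2\sigma^2\ln(r_{\rm peak}/3)$, while the half-light radius is $\sigma\sqrt{2\ln 2}$.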
Photometry is performed using aperture photometry within the ellipses defined above, subtracting the background estimated as the median value of the sky in an annular region around the aperture. An aperture correction is needed to account for flux losses due to the finite apertures. We simulate sources with the sizes found using Eq.~\ref{eq:app:reff}, perform aperture photometry using the same apertures used on the real data, and calculate the fraction of flux that is missed. The missing flux is then converted into an aperture correction, so each source has its own specific correction. The values found in this way may in some cases be overestimates: some clumps show a bright peak plus more diffuse light filling the $3\sigma$ contour, and the assumption of a 2D Gaussian profile may not be accurate in these cases.
The conversion of sizes from pixels to parsecs and of flux into observed and absolute magnitudes is done in the same way as for the reference sample, as well as the de-lensing\footnote{with the only exception that we use different apertures, i.e. the ones also used for photometry, to estimate the median amplification factors and their uncertainties.} and the SED fitting (see Sections~\ref{sec:convert_intrinsic} and \ref{sec:BBSED}).
\bsp %
\label{lastpage} |
Title:
Assessing the robustness of sound horizon-free determinations of the Hubble constant |
Abstract: The Hubble tension can be addressed by modifying the sound horizon ($r_s$)
before recombination, triggering interest in $r_s$-free early-universe
estimates of the Hubble constant, $H_0$. Constraints on $H_0$ from an
$r_s$-free analysis of the full shape BOSS galaxy power spectra within LCDM
were recently reported and used to comment on the viability of physics beyond
LCDM. Here we demonstrate that $r_s$-free analyses with current data depend on
the model and the priors placed on the cosmological parameters, such that LCDM
analyses cannot be used as evidence for or against new physics. We find that
beyond-LCDM models which introduce additional energy density with significant
pressure support, such as early dark energy (EDE) or additional neutrino energy
density ($\Delta N_{\rm eff}$), lead to larger values of $H_0$. On the other
hand, models which only affect the time of recombination, such as a varying
electron mass ($\Delta m_e$), produce $H_0$ constraints similar to LCDM. Using
BOSS data, constraints from light element abundances, cosmic microwave
background (CMB) lensing, a CMB-based prior on the scalar amplitude ($A_s$),
spectral index ($n_s$), and $\Omega_m$ from the Pantheon+ supernovae data set,
we find that in LCDM, $H_0=64.9\pm 2.2$ km/s/Mpc; EDE, $H_0=68.7^{+3}_{-3.9}$;
$\Delta N_{\rm eff}$, $H_0=68.1^{+2.7}_{-3.8}$; $\Delta m_e$,
$H_0=64.7^{+1.9}_{-2.3}$. Using a prior on $\Omega_m$ from uncalibrated BAO and
CMB measurements of the projected sound horizon, these values become in LCDM,
$H_0=68.8^{+1.8}_{-2.1}$; EDE, $H_0=73.7^{+3.2}_{-3.9}$; $\Delta N_{\rm eff}$,
$H_0=72.6^{+2.8}_{-3.7}$; $\Delta m_e$, $H_0=68.8\pm 1.9$. With current data,
none of the models are in significant tension with SH0ES, and consistency tests
based on comparing $H_0$ posteriors with and without $r_s$ marginalization are
inconclusive with respect to the viability of beyond LCDM models.
| https://export.arxiv.org/pdf/2208.12992 |
\title{Assessing the robustness of sound horizon-free determinations of the Hubble constant}
\author{Tristan L.~Smith}
\affiliation{Department of Physics and Astronomy, Swarthmore College, Swarthmore, PA 19081, USA}
\author{Vivian Poulin}
\affiliation{Laboratoire Univers \& Particules de Montpellier (LUPM), CNRS \& Universit\'e de Montpellier (UMR-5299), Place Eug\`ene Bataillon, F-34095 Montpellier Cedex 05, France}
\author{Th\'eo Simon}
\affiliation{Laboratoire Univers \& Particules de Montpellier (LUPM), CNRS \& Universit\'e de Montpellier (UMR-5299), Place Eug\`ene Bataillon, F-34095 Montpellier Cedex 05, France}
\section{\label{sec:Intro}Introduction}
As cosmological measurements have become more precise they have revealed a few potential issues within the core cosmological model. This model, referred to as `\LCDM', consists of a geometrically flat universe filled with baryons, photons, three flavors of neutrinos with the standard weak interactions, cold dark matter (CDM), and a cosmological constant, $\Lambda$, with dynamics described by general relativity. The success of this model to describe an exceedingly wide variety of measurements-- from light element abundances produced during big bang nucleosynthesis (BBN), to the cosmic microwave background (CMB), to the clustering of galaxies and the more recent expansion history-- is remarkable (e.g., Refs.~\cite{2018FoPh...48.1226E,1804633}).
These measurements allow for a number of non-trivial consistency tests, the most discerning of which take observations of the early universe (roughly at or before recombination) and \emph{predict}, within a given model, the values of quantities measured in the late universe. Interestingly, within \LCDM, applying this to the present-day expansion rate of the universe, known as the Hubble constant ($H_0$), and to the current amplitude of the clustering of galaxies, quantified by the standard deviation of the mass contained within spheres of radius $8 h^{-1}\ {\rm Mpc}$ ($\sigma_8$), leads to mismatches between the predicted and directly measured values (known as the `Hubble tension' and `$\sigma_8$ tension', respectively).
Barring the presence of systematic errors affecting multiple, independent measurements (see Refs.~\cite{Freedman:2021ahq,Riess:2021jrx,Abdalla:2022yfr,Amon:2022azi,Amon:2022ycy} for discussion), this would indicate that one needs to modify \LCDM, and in the process identify new physics that dictate some aspects of the structure and evolution of the universe \cite{Knox:2019rjx,DiValentino:2021izs,Schoneberg:2021qvd}.
The statistical significance of these mismatches depends on the particular measurements. However, in all cases the values of $H_0$ predicted within \LCDM\ from measurements of pre-recombination physics (the CMB or the baryon acoustic oscillations-- BAO) are smaller than the direct measurements, and in all cases the predicted value of $\sigma_8$ is larger (see, e.g., Refs.~\cite{Freedman:2021ahq,Riess:2021jrx,Riess:2022mme,Abdalla:2022yfr,DiValentino:2021izs,HSC:2018mrq,Heymans:2020gsg,DES:2021wwk}). For individual experiments the mismatch for $H_0$ reaches $\sim 5 \sigma$ (between \textit{Planck} and \shoes~ \cite{Riess:2021jrx,Riess:2022mme}), whereas for $\sigma_8$ it is $\sim 3 \sigma$ (between \textit{Planck} and KiDS-1000 \cite{Heymans:2020gsg}). Regardless of whether these discrepancies are due to physics beyond \LCDM\ or to yet undiscovered experimental complexities, the increased precision of current cosmological data sets gives us clear motivation to identify additional ways to assess the consistency of \LCDM.
A fundamentally different type of consistency test focuses on whether a given set of measurements are internally consistent. In the context of CMB measurements, one such approach is to split the data up in multipoles (for the {\it Planck} satellite the split has been typically taken at $\ell \sim 700-800$) and compare the inferred values of the \LCDM\ cosmological parameters \cite{Addison:2015wyg,Planck:2016tof}. Another approach proposes a set of parameters that divides the CMB data into pre- and post-recombination physics \cite{Vonlanthen:2010cd,Audren:2012wb,Audren:2013nwa,Verde:2016wmz}.
Here we focus on tests based on obtaining constraints on $H_0$ using observations of pre-recombination physics with and without information on the sound horizon, $r_{s}$.\footnote{The sound horizon is time-dependent and hence there are two different values of the sound horizon that impact cosmological measurements: the sound horizon at recombination, $r_{s,{\rm rec}}$, and at baryon decoupling, $r_{s,d}$. The first value is relevant for the CMB and the second for BAO. While the value of either sound horizon can be different in different cosmological models, the difference between them is relatively model-independent with $(r_{s,d}-r_{s,{\rm rec}})H_0 \simeq 6 \times 10^{-4}$ \cite{Lin:2021sfs}.} In general, determinations of $H_0$ rely on a calibrator, usually in the form of a standard ruler (for CMB/BAO) or a standard candle (for Type Ia supernovae, SNeIa), that breaks the degeneracy between the observed angular size/relative flux of an object, and its true distance to us. In fact, the Hubble tension is often described as a tension between calibrators of the distance ladder, which rely either on the Cepheid variable calibration for the absolute magnitude of SNeIa or the \LCDM\ value of the sound horizon inferred from CMB data \cite{Bernal:2016gxb,Aylor:2018drw}. Consequently, all currently successful attempts to construct `beyond'-\LCDM\ models to address the Hubble tension propose new physics that changes $r_{s}$\footnote{We note that it is not possible to address the Hubble tension by modifying the late-time expansion history \cite{Benevento:2020fev,Efstathiou:2021ocp}.} \cite{Knox:2019rjx,Schoneberg:2021qvd}.
A determination of $H_0$ using observations sensitive to pre-recombination physics which is independent of $r_{s,d}$ (i.e., `$r_{s}$-free') has the potential to provide useful evidence for or against these models \cite{Farren:2021grl}.
A program of conducting $r_{s}$-free analyses using CMB lensing (along with priors on some of the cosmological parameters) has the potential to achieve this goal, but is fundamentally limited by cosmic variance \cite{Baxter:2020qlr}. It is also possible to use measurements of galaxy clustering, along with an effective marginalization over the value of $r_{s,d}$ \cite{Philcox:2020xbv,Farren:2021grl,Philcox:2022sgj}. Since galaxy surveys have access to a large number of independent modes this has the potential to significantly increase the precision of such an analysis.
To do so, one uses the effective field theory (EFT) of large scale structure \cite{Baumann:2010tm,Carrasco:2012cv,Senatore:2014via,Senatore:2014eva,Senatore:2014vja,Perko:2016puo} applied to the BOSS DR12 galaxy clustering data (EFT BOSS) \cite{BOSS:2016wmc}. The EFT BOSS data have been shown to allow for determination of the $\Lambda$CDM parameters at a precision higher than that from conventional BAO and redshift space distortions, as well as to provide interesting constraints on models beyond $\Lambda$CDM (see, e.g., Refs.~\cite{DAmico:2019fhj,Ivanov:2019pdj,Colas:2019ret,DAmico:2020kxu,DAmico:2020tty,Chen:2021wdi,Zhang:2021yna,Zhang:2021uyp,Philcox:2021kcw,Simon:2022ftd,Kumar:2022vee,Nunes:2022bhn,Lague:2021frh,Carrilho:2022mon,Simon:2022adh}).
The way in which $r_{s}$-free inferences of $H_0$ may impact models that attempt to resolve the Hubble tension is two-fold. First, as a predictive test, it could indicate that models which alter $r_{s}$ to address the Hubble tension are disfavored if the $r_{s}$-independent value of $H_0$ is in tension with direct measurements of $H_0$ \cite{Philcox:2022sgj}. Second, as an internal consistency test, a comparison between constraints to $H_0$ with and without $r_{s,d}$ can serve as an indicator for or against beyond-\LCDM\ physics \cite{Farren:2021grl}.
Here we explore whether these analyses provide a robust test of new physics by considering three \LCDM\ extensions which affect $r_{s,d}$: an axion-like model of early dark energy (EDE), a model with additional free-streaming ultra-relativistic energy density ($\Delta N_{\rm eff}$), and a model with a value of the electron mass which is different at recombination than it is today ($\Delta m_e$). We also investigate how various external priors affect these results. Fig.~\ref{fig:intro_whisker} summarizes the 1D posteriors of $H_0$ in the $r_{s,d}$-marginalized analysis for the four models considered in this work.
Using an $r_{s}$-free analysis of BOSS DR12, \textit{Planck} CMB lensing, a BBN prior, and $\Omega_m$ estimated from Pantheon+ \cite{Brout:2022vxf}, we find that both $\Delta N_{\rm eff}$ and EDE open up a new degeneracy between $H_0$ and the primordial power spectrum (i.e., the scalar amplitude, $A_s$, and index, $n_s$) leading to a posterior distribution for $H_0$ that is shifted to higher values compared to \LCDM. We find that for all four models we consider, the posterior for $H_0$ is consistent with the \shoes~ determination of $H_0$ at $\sim 1.5\sigma$\footnote{In this paper we quote tension assuming Gaussian posteriors for simplicity. This slightly overestimates the level of tension, due to the long tails of the distributions, but does not affect our conclusions.}. When imposing an additional CMB-inspired prior on the primordial power spectrum, the inferred value of $H_0$ in \LCDM\ and $\Delta m_e$ is in tension with \shoes~ at $\sim 3.5\sigma$, whereas for $\Delta N_{\rm eff}$ and EDE the tension drops to $1.7 \sigma$ and $1.3 \sigma$, respectively. As a result, we find that the value of $H_0$ inferred from an $r_{s,d}$-marginalized analysis is model dependent. We also find that, as an internal consistency test, with and without $r_{s,d}$ marginalization, the $H_0$ posteriors are in statistical agreement for all of the models we consider.
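The Gaussian tension metric described in the footnote can be made concrete with a short sketch; the SH0ES numbers ($73.04 \pm 1.04$ km/s/Mpc, Riess et al. 2022) and the symmetrisation of the asymmetric error bars are our illustrative inputs, not the paper's exact procedure:

```python
import numpy as np

# SH0ES measurement (Riess et al. 2022), km/s/Mpc
H0_SHOES, SIG_SHOES = 73.04, 1.04

def gaussian_tension(h0, sig, h0_ref=H0_SHOES, sig_ref=SIG_SHOES):
    """Number of sigma between two Gaussian H0 posteriors, i.e.
    |difference| divided by the errors added in quadrature."""
    return float(abs(h0_ref - h0) / np.hypot(sig, sig_ref))

# e.g. the abstract's LCDM value with the CMB-based A_s, n_s and
# Pantheon+ Omega_m priors, H0 = 64.9 +/- 2.2 (errors symmetrised by us)
t_lcdm = gaussian_tension(64.9, 2.2)   # roughly 3.3 sigma
```

Because the true posteriors have long tails, this Gaussian estimate slightly overstates the tension, as the footnote notes.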
This paper is organized as follows. In Sec.~\ref{sec:scalings} we establish the way in which the various quantities in the $r_{s}$-free data depend on \LCDM\ parameters. This allows us to anticipate the various degeneracies in a full analysis of these data and establish the role played by $A_s$ and $n_s$ in constraining $h$. In Sec.~\ref{sec:data} we describe the data sets we use as well as the Markov Chain Monte Carlo (MCMC) analysis we perform. In Sec.~\ref{sec:LCDM} we establish that within \LCDM, constraints to $h$ are driven by measurements of the amplitude of the matter power spectrum and constraints to $\Omega_m h^p$ with $1\lesssim p \lesssim 2$. In Sec.~\ref{sec:beyondLCDM} we perform $r_{s}$-free analyses on three beyond-\LCDM\ models. We conclude and discuss the implications of our results in Sec.~\ref{sec:conclusions}. In Appendix \ref{app:scaling} we give details about how the various data we use depend on cosmological parameters, in Appendix \ref{app:peak} we demonstrate that the peak of the matter power spectrum does not play a significant role in constraining the Hubble constant, in Appendix \ref{app:check} we demonstrate that the broadband/BAO split algorithm works even in cases where the sound horizon deviates significantly from the \LCDM\ value, and in Appendix \ref{app:PanPlus_check} we show that for the models we consider the full Pantheon+ likelihood is well captured by using a prior on $\Omega_m$.
\section{$H_0$ from galaxy clustering and CMB lensing}
\label{sec:scalings}
To build an intuition as to how $h$ can be constrained without the sound horizon, it is helpful to establish the approximate relationship between the galaxy power spectrum/CMB lensing and the \LCDM\ parameters whose values we infer from these data.
In this discussion we make the important distinction between how the \emph{amplitude} (i.e., $k$-independent part) and \emph{shape} (i.e., $k$-dependent part) of the galaxy power spectrum provides information about the Hubble constant. The work in Refs.~\cite{Philcox:2020xbv,Farren:2021grl, Philcox:2022sgj} emphasizes the role that the shape of the galaxy power spectrum plays-- in particular the wavenumber which enters the horizon at matter/radiation equality $k_{\rm eq}$. Here we show that the amplitude of the $k>k_{\rm eq}$ part of the galaxy power spectrum also plays an important role in constraining $h$.
The basic shape of the galaxy power spectrum is set by two main scales: the sound horizon at baryon decoupling
\begin{equation}
r_{s,d} \equiv \int_{z_d}^{\infty} \frac{c_s(z')}{H(z')} dz', \label{eq:rs}
\end{equation}
where $z_d$ is the redshift at which baryons decouple and $c_s(z)$ is the photon/baryon sound speed (see, e.g., Ref.~\cite{Planck:2013pxb})
and the wavenumber which enters the horizon at matter/radiation equality,
\begin{equation}
k_{\rm eq} = \frac{\omega_m}{h\sqrt{\omega_r/2}} \frac{100\ h {\rm km/s/Mpc}}{c},
\end{equation}
where the last term comes from introducing $h \equiv H_0/(100 \ {\rm km/s/Mpc})$.
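As an order-of-magnitude check, this expression can be evaluated numerically; the fiducial densities below are illustrative Planck-like values, not fits to our data:

```python
import math

# Illustrative fiducial densities (assumed, Planck-like): omega_m = Omega_m h^2,
# and omega_r includes photons plus three relativistic neutrino species.
omega_m, omega_r, h = 0.143, 4.15e-5, 0.674
H0_over_c = h / 2997.9  # (100 h km/s/Mpc) / c, in Mpc^{-1}

# k_eq = omega_m / (h sqrt(omega_r/2)) * (100 h km/s/Mpc) / c
k_eq = omega_m / (h * math.sqrt(omega_r / 2.0)) * H0_over_c  # in Mpc^{-1}
k_eq_h = k_eq / h  # in h Mpc^{-1}
```

This gives $k_{\rm eq} \approx 0.010\ {\rm Mpc}^{-1}$, consistent with the approximate value quoted below.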
The effects of baryons are imprinted through $r_{s,d}$ and an additional scale, $k_d \equiv H(z_d)/(1+z_d)$, the wavenumber corresponding to the comoving horizon when baryons decouple from photons and start to fall into the gravitational potentials. The largest effect is a suppression of power at wavenumbers larger than $k_d$ compared to a CDM-only universe. The acoustic oscillations in the baryon/photon fluid (i.e., the BAO) are also imprinted into the galaxy power spectrum as oscillations with a frequency set by integer multiples of $k_{s,d} \equiv 2\pi/r_{s,d}$ \cite{Brieden:2021edu}. We note that since $k_d < k_{\rm eq} \simeq 0.01\ h{\rm Mpc^{-1}}$, this scale is too large to be probed with current galaxy surveys.
The value of $k_{\rm eq}$ plays two important roles in the galaxy power spectrum: it sets the wavenumber at the peak, as well as the range of scales experiencing a logarithmic enhancement in power at $k>k_{\rm eq}$. In practice, measurements of the galaxy power spectrum cannot probe scales large enough to get a precise measure of the location of the peak \cite{Philcox:2020xbv} (though we note that future HI surveys will be able to measure the peak \cite{Cunnington:2022ryj}). For the main analysis presented here we take $k_{\rm min} = 0.01\ h{\rm Mpc}^{-1}$ which is just slightly smaller than the typical values of $k_{\rm eq}$. In Appendix \ref{app:peak} we also perform an analysis with a larger $k_{\rm min}$ in order to demonstrate that the location of the peak of the galaxy power spectrum does not play a dominant role in constraining $h$. Because of this, most of the sensitivity to $k_{\rm eq}$ comes not from the peak of the galaxy power spectrum, but from the amplitude at scales $k>k_{\rm eq}$ \cite{Philcox:2020xbv}.
Yet, the measurements of $h$ do not only rely on $r_{s,d}$ and $k_{\rm eq}$, but also on the overall amplitude of the galaxy power spectrum. As discussed in more detail in Appendix \ref{app:scaling}, the galaxy power spectrum amplitude reflects the fact that during radiation domination Hubble friction limits the growth of dark matter perturbations. Once radiation domination ends, the dark matter perturbations grow in proportion to the scale factor, $a$. Therefore, the amplitude of the matter power spectrum scales with $(a/a_{\rm eq})^2 \propto a^2 \Omega_m^2 h^4$. In this way information about $h$ contained in the amplitude of the galaxy power spectrum provides us with a `standard clock', measuring how much the dark matter perturbations have grown since matter/radiation equality.
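The $h^4$ scaling can be made explicit. With $a_{\rm eq} = \omega_r/\omega_m$ and linear growth $\delta \propto a$ after equality (holding $\omega_r$ fixed),

```latex
P_m \propto \delta^2 \propto \left(\frac{a}{a_{\rm eq}}\right)^{2}
  = a^2 \, \frac{\omega_m^2}{\omega_r^2}
  = a^2 \, \Omega_m^2 \, \omega_r^{-2} \, h^4 \,,
```

which supplies the dominant part of the $\Omega_m^{2.25}\,\omega_r^{-2}\,h^4$ dependence of Eq.~(\ref{eq:galAmp}), with the remaining fractional power of $\Omega_m$ absorbed into the more careful scaling derived in Appendix \ref{app:scaling}.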
Summarizing the results of Appendix \ref{app:scaling}, we can write how the galaxy power spectra and CMB lensing potential power spectrum depend on the amplitude of the primordial power spectrum, $A_s$, the normalized Hubble constant, $h$, the `geometric' matter density, $\Omega_m$, and the physical radiation energy density, $\omega_r \equiv \Omega_r h^2$. Note that all measured quantities are dimensionless, so in the following equations lengths are written in $h^{-1}\ {\rm Mpc}$.
The overall amplitudes of the galaxy power spectrum, $P_{\rm gal}$, and the CMB lensing power spectrum, $C_L^{\phi \phi}$, scale as
\begin{eqnarray}
P_{\rm gal} &\propto& b^2 R_c^2 A_s\Omega_m^{2.25} \omega_r^{-2}\boldsymbol{h^4} , \label{eq:galAmp}\\
L^4 C_L^{\phi \phi} &\propto& A_s \Omega_m^{3.5} \omega_r^{-1}\boldsymbol{h^{2.6}}, \label{eq:lensAmp}
\end{eqnarray}
where $b$ is the linear bias, $R_c \equiv \omega_{cdm}/\omega_m = 1-\omega_b/\omega_m$ is the baryon suppression \cite{Bernal:2020vbb}, and $\omega_{cdm}$ is the physical cold dark matter density today.
It is interesting to note that, while these are all proportional to $A_s$, they depend on different powers of $h$, indicating that the combination of $P_{\rm gal}$ and $C_L^{\phi \phi}$ can break the $A_s-h$ degeneracy. It is also evident that additional information on $\Omega_m$ will further help constrain $h$.
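The degeneracy breaking can be illustrated with a toy noiseless version of the two amplitude constraints; the parameter values below are assumptions for illustration only:

```python
import math

# At fixed Omega_m and omega_r, the two amplitude scalings reduce to
#   ln P_gal      ~ ln A_s + 4.0 ln h + const.
#   ln (L^4 C_L)  ~ ln A_s + 2.6 ln h + const.
# Hypothetical "true" parameters (illustrative, constants dropped):
ln_As, ln_h = math.log(2.1e-9), math.log(0.70)
obs_gal = ln_As + 4.0 * ln_h   # mock noiseless galaxy amplitude
obs_lens = ln_As + 2.6 * ln_h  # mock noiseless lensing amplitude

# Different powers of h (4.0 vs 2.6) make the 2x2 system nonsingular,
# so the pair of amplitudes determines A_s and h separately:
ln_h_fit = (obs_gal - obs_lens) / (4.0 - 2.6)
ln_As_fit = obs_gal - 4.0 * ln_h_fit
```

In a real analysis the constants and noise matter, but the same counting applies: two independent amplitudes with different $h$ exponents fix both $A_s$ and $h$.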
The shape of these power spectra depends on
\begin{eqnarray}
\left(\frac{k}{k_p/h}\right)^{n_s-1}&=&\left(\frac{k}{(0.05/{\bf h})\ h{\rm Mpc}^{-1}}\right)^{n_s-1},\\
k_{\rm eq}/h &\propto& \Omega_m\omega_r^{-0.5}\boldsymbol{h},\\
\ell^{\phi \phi}_{\rm peak} &\propto& \Omega_m^{0.75}\omega_r^{-0.5}\boldsymbol{h}\,,
\end{eqnarray}
where $n_s$ is the primordial scalar spectral index and $k_p = 0.05\ {\rm Mpc}^{-1}$ is the standard pivot scale \cite{Planck:2013pxb}. Note that the different $\Omega_m$ scalings provide a way to break the degeneracy between $\Omega_m$ and $h$. Moreover, the baryon suppression and the amplitude of the BAO in the galaxy power spectrum give information about the ratio $\omega_b/\omega_{m}$.
Finally, redshift space distortions provide additional sensitivity to
\begin{equation}
f \sigma_8 \propto A_s^{1/2} \Omega_m^{1.25} \omega_r^{-0.65} \boldsymbol{h^{1.75}},\label{eq:rsd}
\end{equation}
where $f$ is the growth rate and $\sigma^2_8$ is the variance of the fractional mass fluctuations in spheres of comoving radius $R = 8 h^{-1}\ {\rm Mpc}$.
These scaling equations allow us to understand general trends in the posterior distributions. First, as noted before, it is clear that knowledge of $k_{\rm eq}\propto \Omega_m h$ and $\Omega_m$ from, e.g., SNeIa, provides a constraint on $h$ \cite{Philcox:2020xbv}. In fact, the whole shape of the power spectra, through $k_{\rm eq}$, baryon suppression, and $\ell_{\rm peak}^{\phi \phi}$, provides constraints to $\Omega_m h^p$, where $1 \lesssim p \lesssim 2$. Yet, these are not the only parameters appearing in this scaling: $A_s$ and $n_s$ also play an important role. For example, in \LCDM, $\omega_r$ is fixed, and with a Pantheon+ prior on $\Omega_m$, Eqs.~(\ref{eq:galAmp}), (\ref{eq:lensAmp}) and (\ref{eq:rsd}) show that an increase in $h$ must be accompanied by a decrease in $A_s$ in order to keep the amplitudes unaffected. This additional degeneracy will be even more important for beyond-$\Lambda$CDM determinations of $h$.
\section{Cosmological models and data analysis}
\label{sec:data}
In order to explore the extent to which an $r_{s}$-free analysis may depend on the cosmological model, we consider three beyond-\LCDM\ models that affect the value of the sound horizon in different ways.
The sound horizon is inversely proportional to the Hubble parameter before recombination [see Eq.~(\ref{eq:rs})]. We consider two models which lead to changes in the early-universe Hubble parameter: variations in the number of ultra-relativistic neutrinos, $\Delta N_{\rm eff}$ (we always take one neutrino to have a mass of 0.06 eV), and the ultra-light axion-inspired model for early dark energy (EDE) \cite{Poulin:2018cxd,Smith:2019ihp} (we use the scalar field potential $V = m^2 f^2 [1-\cos(\phi/f)]^3$, where $m$ is the axion mass, $f$ is the axion decay constant, and $\phi$ is the field value). As described in Ref.~\cite{Smith:2019ihp}, we use a shooting method to map the set of phenomenological parameters $\{\log_{10}(z_c), f_{\rm EDE}(z_c)\}$ (which describe when the field becomes dynamical and its maximum fractional contribution to the total energy density, respectively) to the theory parameters $\{m,f\}$. A major difference between these two `energy density modification' models is that while a change to the neutrino energy density has an impact throughout radiation domination, the EDE's energy density makes a dynamically relevant contribution to the total energy density over a relatively short period of time.
The sound horizon also depends on the redshift at which baryons decouple from photons, $z_d$ [see Eq.~(\ref{eq:rs})]. We therefore also consider a model in which the mass of the electron around recombination may differ from its value today, leading to a change in the Thomson scattering cross section and hence in $z_d$ (see, e.g., Refs.~\cite{Hart:2017ndk,Hart:2021kad}).
Our MCMC analyses use the \texttt{MontePython-v3}\footnote{\url{https://github.com/brinckmann/montepython_public}} code \cite{Audren:2012wb,Brinckmann:2018cvx} interfaced with modified versions of \texttt{CLASS-PT},\footnote{\url{https://github.com/Michalychforever/CLASS-PT}} which is itself a modified version of \texttt{CLASS}\footnote{\url{https://lesgourg.github.io/class_public/class.html}} \cite{Blas:2011rf}.
In this paper, we carry out various analyses using a combination of the following data sets:
\begin{itemize}
\item {\bf Full-shape galaxy power spectra (FS):} The effective field theory (EFT) of large scale structure applied to the BOSS DR12 galaxy clustering data. For the main analysis we use the same data and code as in Ref.~\cite{Philcox:2022sgj}: we use the power spectrum measured in Ref.~\cite{Philcox:2021kcw} from the $z=0.38$ and 0.61 redshift bins at the Northern and Southern Galactic Caps \cite{2015ApJS..219...12A}. We use the unreconstructed monopole, quadrupole, and hexadecapole galaxy power spectrum multipoles with\footnote{For one part of our analysis we increase the minimum $k$ to $0.05\ h{\rm Mpc}^{-1}$ for the galaxy power spectrum multipoles.} $0.01\ h{\rm Mpc}^{-1} \leqslant k \leqslant 0.2\ h{\rm Mpc}^{-1}$ and the real-space extension, $Q_0$, with $0.2\ h{\rm Mpc}^{-1} \leqslant k \leqslant 0.4\ h{\rm Mpc}^{-1}$. We include EFT parameters and priors as described in Refs.~\cite{Philcox:2021kcw,Simon:2022lde}. Note that these priors were shown to be informative, and parts of our results could be affected by the choice of priors at the $1\sigma$ level \cite{Simon:2022lde}, but we do not expect our main conclusions to change.
\item {\bf BBN:} The BBN measurement of $\omega_b$ \cite{Schoneberg_2019} that uses the theoretical prediction of \cite{Consiglio_2018}, the experimental deuterium fraction of \cite{Cooke_2018}, and the experimental helium fraction of \cite{Aver_2015}. Note that this likelihood also tightly constrains $\Delta N_{\rm eff}$ \cite{Schoneberg_2019}. As we are interested in computing constraints driven by galaxy clustering/CMB lensing, when varying $\Delta N_{\rm eff}$ we instead use a Gaussian prior on $\omega_b = 0.02268 \pm 0.00038$ \cite{Ivanov:2019pdj}.
\item {\bf CMB lensing (CMBLens):} The CMB-marginalized gravitational lensing potential from \Planck{} 2018 temperature and polarization data with $8\leqslant L \leqslant 400$ \cite{Planck:2018lbu}.
\item {\bf Pantheon+ (PanPlus):} The Pantheon+ measurement of $\Omega_m=0.338 \pm 0.018$ using uncalibrated Type Ia supernovae (SNeIa), modeled as a Gaussian likelihood \cite{Brout:2022vxf}. In Appendix \ref{app:PanPlus_check} we explicitly check that this prior captures all of the information contained within the full likelihood.
\item {\bf Uncalibrated BAO and CMB measurements of the projected sound horizon ($\boldsymbol{\theta_{s,d}^{\rm BAO/CMB}}$):} In some of our analyses we replace the $\Omega_m$ prior from PanPlus with one from an analysis of uncalibrated BAO measurements of $r_{s,d}H(z)$ and $\theta_{s,d}(z) =r_{s,d}/D_A(z)$ and the \textit{Planck}-inferred value of $100\theta_{s,d}(z_{\rm CMB})$: $\Omega_m = 0.30 \pm 0.01$~\cite{Lin:2021sfs}. We take the uncertainty to be 25\% larger in order to account for variations in $\theta_{s,d}(z_{\rm CMB})$ when fit to the cosmological models we consider. We note that some of the data used to generate this prior is correlated with the FS data, but stress that our use of this prior is primarily meant to highlight how constraints on $\Omega_m$ affect the $r_s$-free results.
\item {\bf CMB priors:} For some of our analyses we use the Gaussian priors $\ln 10^{10} A_s = 3.044\pm0.08$ and $n_s = 0.96 \pm 0.03$. The prior on $A_s$ is 8\% around the \textit{Planck} mean value \cite{Baxter:2020qlr,Philcox:2022sgj} and the prior on $n_s$ is based on the one used in Ref.~\cite{Philcox:2022sgj}, but is slightly wider in order to account for the fact that some of the beyond-\LCDM\ models we consider, when fit to the CMB, lead to larger values for $n_s$ (see, e.g., Refs.~\cite{Smith:2019ihp,Ye:2021nej,Aloni:2021eaq}).
\end{itemize}
In the following we denote the combination of FS, BBN, CMBLens, and PanPlus as `All', to distinguish it from analyses that just combine a subset of these data sets.
All MCMCs use wide uninformative flat priors on the physical CDM energy density, $\omega_{cdm}$, the Hubble parameter today in units of $100\ {\rm km/s/Mpc}$, $h$, the logarithm of the variance of curvature perturbations centered around the pivot scale $k_p = 0.05\ {\rm Mpc}^{-1}$ (according to the \Planck{} convention \cite{Planck:2013pxb}), $\ln 10^{10}A_s$, and the scalar spectral index, $n_s$.
We marginalize over information about the sound horizon in the galaxy power spectra following the procedure introduced in Ref.~\cite{Farren:2021grl}. This involves splitting the linear power spectrum into its broadband (BB) shape and the BAO and marginalizing over a new scaling parameter, $\alpha_{r_s}$,
\begin{equation}
P_{\rm lin}(k) = P_{\rm BB}(k) + P_{\rm BAO}(\alpha_{r_s} k).
\end{equation}
As with the cosmological parameters, we use a wide uninformative flat prior on $\alpha_{r_s}$. We note that Ref.~\cite{Philcox:2022sgj} places a Gaussian prior with mean equal to 1 and a standard deviation of 0.5. Since the value of $\alpha_{r_s}$ only varies by $\sim 0.1$, their choice of prior is also uninformative.
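As a schematic of how this split behaves, the following sketch uses toy analytic templates in place of the actual broadband and wiggle components; the functional forms and numbers are assumptions for illustration, not our pipeline:

```python
import math

r_sd = 147.0  # sound horizon in Mpc/h; illustrative value

def P_BB(k):
    """Toy smooth broadband (power law), standing in for the no-wiggle spectrum."""
    return 2.0e4 * k ** -1.5

def P_BAO(k):
    """Toy damped BAO wiggle whose k-space frequency is set by r_{s,d}."""
    return 1.0e3 * math.sin(k * r_sd) * math.exp(-((k / 0.15) ** 2))

def P_lin(k, alpha_rs):
    """Broadband/BAO split: only the wiggle component is rescaled by alpha_rs,
    so marginalizing over alpha_rs with a wide flat prior removes the
    sound-horizon information while leaving the broadband shape intact."""
    return P_BB(k) + P_BAO(alpha_rs * k)
```

At $\alpha_{r_s}=1$ the original linear spectrum is recovered; varying $\alpha_{r_s}$ shifts only the wiggle positions.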
For the three free parameters of the EDE model, we impose a logarithmic prior on $z_c$ and flat priors on $f_{\rm EDE}(z_c)$ and $\theta_i$:
\begin{align*}
&3 \le \log_{10}(z_c) \le 4, \\
&0 \le f_{\rm EDE}(z_c) \le 0.5, \\
&0 \le \theta_i\equiv \phi_i/f \le 3.1.
\end{align*}
When we vary the electron mass we use the prior $0.8 \leqslant m_e/m_{e,0} \leqslant 1.2$, while we take $\Delta N_{\rm eff} \geqslant 0$ when we vary the amount of free-streaming ultra-relativistic energy density.
We define our MCMC chains to be converged when the Gelman-Rubin criterion satisfies $R-1 < 0.05$ \cite{Gelman:1992zz}. Finally, we produce our figures using \texttt{GetDist} \cite{Lewis:2019xzd}.
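For concreteness, a minimal single-parameter sketch of the Gelman-Rubin statistic is given below; actual MCMC codes implement more refined variants of this diagnostic:

```python
def gelman_rubin_minus_one(chains):
    """R - 1 for one parameter from several equal-length chains
    (minimal sketch of the Gelman-Rubin convergence diagnostic)."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand_mean = sum(means) / m
    # Between-chain variance of the chain means:
    B = n / (m - 1) * sum((mu - grand_mean) ** 2 for mu in means)
    # Mean within-chain variance:
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * W + B / n  # pooled posterior-variance estimate
    return (var_hat / W) ** 0.5 - 1.0
```

Chains sampling the same distribution give $R-1$ near zero, while chains stuck in different regions give $R-1 \gg 0.05$.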
\section{Constraints on $h$ in \LCDM}
\label{sec:LCDM}
We start by comparing the 1D posterior distributions of $h$ from analyzing FS+BBN+CMBLens+PanPlus (the `All' dataset), with and without marginalizing over $r_{s,d}$, without applying any CMB priors on $A_s$ or $n_s$.
The \LCDM\ posterior distributions are summarized in Tab.~\ref{tab:LCDM}, and we find:
\begin{eqnarray}
h &=&0.697^{+0.014}_{-0.016}\,~{\rm w/o}~r_{s,d} {\rm -marg}.,\nonumber\\
h &=&0.687^{+0.030}_{-0.050}\,~{\rm w/}~r_{s,d} {\rm -marg}.\nonumber\,
\end{eqnarray}
which shows no significant tension with \shoes~even when marginalizing over $r_{s,d}$. We note that, as shown in Table \ref{tab:LCDM}, the mean value of $r_{s,d}$ is $\sim 5$ Mpc smaller than the value preferred by \textit{Planck} \cite{Planck:2018vyg}. This is due to the fact that these data prefer a significantly larger mean physical CDM density, $\omega_{cdm} \sim 0.14$, compared to \textit{Planck}, $\omega_{cdm} \sim 0.12$. The larger $\omega_{cdm}$, combined with the relatively large value of $\Omega_m$ from PanPlus, leads to the statistical agreement between constraints to $h$ from the two datasets. We also note that we find no significant shift between the two values of $h$; the absence of such a shift has been advocated as a hint against the presence of new physics affecting the sound horizon \cite{Philcox:2021kcw,Philcox:2022sgj}.
\begin{table}
\def\arraystretch{1.0}
\scalebox{1.0}{
\begin{tabular}{|l|c|c|}
\hline
Parameter & \LCDM\ (no $r_{s}$-marg) & \LCDM\ ($r_{s}$-marg) \\
\hline
\hline
$10^{2}\omega_{b}$ & $2.273\pm 0.038$ & $2.273\pm 0.037$ \\
$\omega_{cdm}$ & $0.1395^{+0.0091}_{-0.012}$ & $0.137^{+0.011}_{-0.022}$\\
$\Omega_{m}$ & $0.335\pm 0.013$ & $0.340\pm 0.015$ \\
$h$ & $0.697^{+0.014}_{-0.016}$ & $0.687^{+0.030}_{-0.050}$ \\
$\ln 10^{10}A_s$ & $2.839\pm 0.096$ & $2.86^{+0.16}_{-0.13}$ \\
$n_{s}$ & $0.853\pm 0.052$ & $0.863^{+0.081}_{-0.060}$ \\
$r_{s,d}$ [Mpc] & $141.9^{+2.7}_{-2.4}$ & $142.7^{+5.1}_{-3.2}$ \\
$\alpha_{r_s}$ &-- & $1.011^{+0.036}_{-0.028}$\\
\hline
\end{tabular} }
\caption{The mean and $\pm 1\sigma$ uncertainties of the \LCDM\ cosmological parameters with and without marginalization over $r_{s,d}$ and using `All' of the data.}
\label{tab:LCDM}
\end{table}
This $r_{s,d}$-marginalized value is larger than the main value reported in Ref.~\cite{Philcox:2022sgj} because here we have not imposed any external priors on $n_s$ or $A_s$.\footnote{Ref.~\cite{Philcox:2022sgj} argues that their results are robust to dropping any priors on $n_s$ or $A_s$, in this case reporting $h= 0.660^{+0.027}_{-0.034}$. However, even after adopting the same parameter settings as they use, without these priors we find $h=0.677_{-0.037}^{+0.028}$, giving a posterior on $h$ that is consistent with the \shoes~value at $\sim 2 \sigma$.} As shown in Table \ref{tab:LCDM}, both $A_s$ and $n_s$ are lower than what is found using CMB data, and when imposing priors from the CMB the posterior on $h$ can change appreciably. In Sec.~\ref{sec:As/ns} we explore the degeneracy between $h$ and $A_s/n_s$, and in Sec.~\ref{sec:LCDM_priors} we show the impact of imposing the CMB priors.
\subsection{The $A_s/n_s$-degeneracy}
\label{sec:As/ns}
To understand the role that the FS galaxy power spectra are playing in constraining $h$, Fig.~\ref{fig:LCDM_comp} shows a comparison between two different data analyses: FS+BBN+CMBLens+PanPlus with and without $r_{s}$-marginalization. First, focusing on the constraints to $h$ (left-most column) and on the analysis without $r_{s,d}$-marginalization, one can see that the constraint on $h$ is less degenerate with $A_s$, $n_s$, and $\Omega_m$ than with $r_{s,d}$-marginalization. This shows that when including information on $r_{s,d}$ one gains independent information on $h$ through its effect on the projected size of the sound horizon. When marginalizing over $r_{s,d}$, on the other hand, one can see that $h$ is anti-correlated with $A_s$/$\Omega_m$, as expected from the discussion in Sec.~\ref{sec:scalings}. In particular, the degeneracy between $h$ and $A_s$ provides evidence that constraints on $h$ when marginalizing over $r_{s,d}$ come, at least in part, from the amplitude of the galaxy power spectrum.
Fig.~\ref{fig:LCDM_comp} clearly shows that when marginalizing over $r_{s,d}$, $h$ and $n_s$ are anti-correlated. This anticorrelation is also related to the primordial amplitude of the fluctuations which can be seen in the 3D plot in Fig.~\ref{fig:LCDM_ns_vs_h}. There we can see that a decrease in $n_s$ is compensated by a decrease in $A_s$ and an increase in $h$. This relationship is due to a balance between the enhancement of power for $k>k_{\rm p} = 0.05/h\ h{\rm Mpc^{-1}}$ (for $n_s<1$) and the shift with $h$ in scale at which the logarithmic enhancement starts, $k_{\rm eq} = \Omega_m h$.
We further explore the exact shape of the degeneracies introduced through the amplitudes of the $k>k_{\rm eq}$ galaxy power spectra and the CMB lensing potential power spectrum as discussed in Sec.~\ref{sec:scalings} and Appendix \ref{app:scaling}. The top left panel of Fig.~\ref{fig:h_degen} shows the results of using FS + BBN. The dashed red curves show the mean and $\pm 1 \sigma$ of $A_s \Omega_m^{2.25} h^4$, which sets the $k>k_{\rm eq}$ amplitude of the galaxy power spectrum [see Eq.~(\ref{eq:galAmp})]. The agreement between the red curves and the 2D posterior definitively demonstrates the importance of the $k>k_{\rm eq}$ amplitude of the galaxy power spectrum in constraining $h$ with these data.
The bottom left panel of Fig.~\ref{fig:h_degen} shows the same curve/2D posteriors but with FS+BBN+CMBLens. There we can see that the addition of CMB lensing data shifts the $h$ vs.~$A_s \Omega_m^{2.25}$ contour, and decreases the width of the posterior, indicating that CMB lensing adds information on $h$. The blue curve in this panel shows the $\propto h^{2.6}$ scaling from the amplitude of the lensing potential power spectrum [see Eq.~(\ref{eq:lensAmp})]. Its shape at least partially explains the shift in this parameter plane when the lensing is included.
\subsection{The impact of priors on $n_s$ and $A_s$ in constraining $h$ in \LCDM}
\label{sec:LCDM_priors}
Given the correlation between $h$ and $A_s$/$n_s$, it is of interest to consider how placing priors on the primordial power spectrum affects $h$. Ref.~\cite{Philcox:2022sgj} imposed $n_s = 0.96 \pm 0.02$ or an 8\% prior on $A_s$ centered on the \textit{Planck} value, $\ln 10^{10} A_s = 3.044 \pm 0.08$. Since here we consider both \LCDM\ and beyond-\LCDM\ models which prefer larger values of $n_s$ when fit to CMB data \cite{Smith:2019ihp,Ye:2021nej,Aloni:2021eaq}, we use the same prior on $A_s$ but a slightly wider prior for $n_s = 0.96 \pm 0.03$. We find good agreement with Ref.~\cite{Philcox:2022sgj} when imposing the same priors, and the specific choice of priors does not affect our overall conclusions.
When imposing both $A_s$ and $n_s$ priors, we find that the resulting posterior on $h$ decreases from $h=0.687_{-0.05}^{+0.03}$ to $h=0.649\pm0.022$. This significant downward shift is not surprising, given that $h$ is anti-correlated with both $A_s$ and $n_s$ as discussed above, and that these priors are centered on larger values of $A_s$ and $n_s$ than `All' of the data prefer (see Figs.~\ref{fig:LCDM_comp} and \ref{fig:LCDM_ns_vs_h}). When imposing these priors, we find that the value of $h$ is in $3.3\sigma$ tension with \shoes. The tension level is slightly stronger than that reported in Ref.~\cite{Philcox:2022sgj} because we impose both priors at the same time.
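The quoted tension follows from the standard Gaussian measure; the \shoes\ value used below, $h = 0.7304 \pm 0.0104$, is assumed here for illustration:

```python
def tension_sigma(m1, s1, m2, s2):
    """Gaussian tension between two independent measurements, in units of sigma."""
    return abs(m1 - m2) / (s1 ** 2 + s2 ** 2) ** 0.5

# r_s-marginalized LCDM result with the A_s/n_s priors, against the SH0ES
# measurement h = 0.7304 +/- 0.0104 (assumed value for this sketch):
t = tension_sigma(0.649, 0.022, 0.7304, 0.0104)  # ~3.3 sigma
```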
\subsection{The role of $\Omega_m$ in constraining $h$}
The scaling equations discussed in Sec.~\ref{sec:scalings}, along with the right-hand contours in Fig.~\ref{fig:h_degen}, indicate that the shapes of both the FS data and the CMB lensing potential power spectrum constrain various combinations of $\Omega_m h^p$, where $1 \lesssim p \lesssim 2$. These constraints, together with a prior on $\Omega_m$, provide a constraint on $h$.
The red/blue dashed curves in the top right panel of Fig.~\ref{fig:h_degen} show the mean and $\pm 1 \sigma$ of $\omega_m = \Omega_m h^2$ and $\Omega_m h$, respectively. The rough agreement indicates that some combination of $\Omega_m h^p$, with $1 \lesssim p \lesssim 2$, plays a role in constraining $h$. Since the $\Omega_m h^p$ constraint comes from several aspects of the measurements with slightly different dependencies-- baryonic effects ($\Omega_m h^2$), the logarithmic enhancement of the $k>k_{\rm eq}$ part of the galaxy power spectrum ($\Omega_m h$), the peak of the lensing potential power spectrum ($\Omega_m h^{1.33}$)-- we expect the degeneracy between $h$ and $\Omega_m$ to be less well-defined. The bottom right panel shows that, as with FS+BBN, some combination of $\Omega_m h^p$, with $1 \lesssim p \lesssim 2$, continues to play a role in constraining $h$ in the FS+BBN+CMBLens analysis.
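A quick way to read off the effective exponent $p$ from such a contour: along a direction of constant $\Omega_m h^p$, $p = -\mathrm{d}\ln\Omega_m/\mathrm{d}\ln h$. The two sample points below are illustrative, not actual chain values:

```python
import math

# Along a contour of constant Omega_m * h^p:  d ln(Omega_m) = -p d ln(h).
# Estimate p from two illustrative points on a degeneracy contour:
(Om1, h1), (Om2, h2) = (0.34, 0.68), (0.30, 0.74)
p = -(math.log(Om2) - math.log(Om1)) / (math.log(h2) - math.log(h1))  # ~1.5
```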
These scaling equations indicate that if the prior on $\Omega_m$ decreases then the inferred value of $h$ will increase.
So far we have used a prior on $\Omega_m$ from PanPlus: $\Omega_m = 0.338 \pm 0.018$ \cite{Brout:2022vxf}. This value is $\sim 2 \sigma$ larger than the value of $\Omega_m$ inferred from the uncalibrated BAO and CMB measurements of the projected sound horizon: $\Omega_m = 0.3 \pm 0.01$ \cite{Lin:2021sfs}.
Replacing the PanPlus prior on $\Omega_m$ with the BAO/CMB angular prior allows us to explore how information about $\Omega_m$ impacts the $r_{s,d}$-marginalized \LCDM\ posterior on $h$. As expected, with the lower $\Omega_m$ the mean value of $h$ increases from $h=0.687^{+0.030}_{-0.050}$ to $h=0.734^{+0.033}_{-0.063}$. This demonstrates that at least part of the apparent tension in $\Lambda$CDM with SH0ES comes from the relatively high value of $\Omega_m$ favored by Pantheon+. If we also impose the $A_s$/$n_s$ prior then the posterior distribution for $h$ increases from $h= 0.649\pm 0.022$ to $h=0.688^{+0.018}_{-0.021}$. With the $A_s/n_s$ prior we can see that the change in the $\Omega_m$ prior leads to a $1.3 \sigma$ shift in the mean of $h$.
\section{Constraints in beyond-\LCDM\ models}
\label{sec:beyondLCDM}
We first establish that without marginalizing over $r_{s,d}$ the three beyond-\LCDM\ models that we consider have the expected effect on the value of $r_{s,d}$.
Fig.~\ref{fig:vs_rsd} shows that the three beyond-\LCDM\ models affect $r_{s,d}$ as expected. In particular, $f_{\rm EDE}(z_c)$-- which controls the maximum contribution that the EDE field makes to the total energy density-- is only able to \emph{increase} the pre-recombination value of $H$, and therefore it can only lead to a \emph{decrease} in $r_{s,d}$. Variations in the number of massless neutrinos, $\Delta N_{\rm eff}>0$, can only cause a decrease in $r_{s,d}$ from its \LCDM\ value (shown by the vertical dashed line). The Thomson scattering cross-section scales as $1/m_e^2$, so a larger electron mass leads to a decrease in the scattering rate, which in turn causes the baryons to decouple earlier than they would have. Therefore, as $m_e$ increases, $z_{d}$ increases, leading to a decrease in $r_{s,d}$ [see Eq.~(\ref{eq:rs})].
\begin{table*}[!t]
\def\arraystretch{1.0}
\scalebox{1.0}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\hline
& \LCDM & EDE & $\Delta N_{\rm eff}$ & $\Delta m_e$ \\
\hline
w/o $r_{s,d}$ marg & $0.697^{+0.014}_{-0.016}$ & $0.736^{+0.027}_{-0.036}$ & $0.724^{+0.021}_{-0.030}$ & $0.671^{+0.031}_{-0.040}$ \\
\hline
with $r_{s,d}$ marg & $0.687^{+0.030}_{-0.050}$ & $0.708^{+0.038}_{-0.049}$ & $0.699^{+0.034}_{-0.050}$ & $0.684^{+0.031}_{-0.049}$ \\
\hline
PanPlus$\rightarrow \theta_{s,d}^{\rm BAO/CMB}$ & $0.734^{+0.033}_{-0.063}$ & $0.748^{+0.038}_{-0.046}$ & $0.739^{+0.035}_{-0.052}$ & $0.716^{+0.032}_{-0.038}$ \\
\hline
+$A_s$ \& $n_s$ prior & $0.649\pm 0.022$ & $0.687^{+0.030}_{-0.039}$ & $0.681^{+0.027}_{-0.038}$ & $0.647^{+0.019}_{-0.023}$ \\
\hline
PanPlus$\rightarrow \theta_{s,d}^{\rm BAO/CMB}$ & $0.688^{+0.018}_{-0.021}$ & $0.737^{+0.032}_{-0.039}$ & $0.726^{+0.028}_{-0.037}$ & $0.688\pm 0.019$ \\
\hline
\hline
\end{tabular} }
\caption{The mean and $\pm 1\sigma$ uncertainties of $h$ in the four models we explore. The `PanPlus' prior is $\Omega_m = 0.338 \pm 0.018$ and the uncalibrated BAO and CMB measurements of the projected sound horizon, `$\theta_{s,d}^{\rm BAO/CMB}$', prior is $\Omega_m = 0.3 \pm 0.01$. When we replace the $\Omega_m$ prior we apply it to the analysis described in the above row.}
\label{tab:h_beyondLCDM}
\end{table*}
\subsection{$r_{s}$-marginalized constraints on $H_0$ beyond $\Lambda$CDM}
We provide the marginalized constraints on $h$ for \LCDM\ and the three beyond-\LCDM\ models we consider in Table \ref{tab:h_beyondLCDM}.
The oscillation frequency of the BAO in $k$-space is set by $r_{s,d}$. Since $\alpha_{r_s}$ enters only through the product $\alpha_{r_s} k$ in the BAO component, and observations are made in angular/redshift space, the directly measured quantity is $\alpha_{r_s} h r_{s,d}$, which should therefore be relatively stable between the different models we have analyzed. In Fig.~\ref{fig:prod} we show that for the three cosmological models we analyze, this combination is indeed relatively unchanged. This provides evidence that our marginalization over $\alpha_{r_s}$ is correct even in these extended cosmologies. This is discussed further in Appendix \ref{app:check}.
The main result of this section is shown in Fig.~\ref{fig:vs_As}.
There we can see how both $\Delta N_{\rm eff}$ and EDE produce similar posteriors in the $h$ vs. $\ln 10^{10} A_s/n_s$ plane, whereas the varying $m_e$ model is qualitatively different, and similar to what we obtain in \LCDM\ (shown in the brown contour in the bottom plot). The color bars of Fig.~\ref{fig:vs_As} show that the larger values of $\Delta N_{\rm eff}$ and $f_{\rm EDE}(z_c)$ open up a new degeneracy, allowing for a simultaneous increase in $h$, $A_s$, and $n_s$.
This is due to the fact that, unlike $\Delta m_e$, both EDE and $\Delta N_{\rm eff}$ introduce additional energy density with significant pressure support. This suppresses the growth of matter perturbations, producing a degeneracy with the primordial power spectrum-- i.e., $A_s$ and $n_s$-- for these models, and allowing these parameters to take on larger values than they do in \LCDM\ and $\Delta m_e$.
One can see how this increase in parameter space affects the 1D marginalized posterior distribution for $h$ in Fig.~\ref{fig:h_alone}. Without marginalizing over $r_{s,d}$ (top panel) the posterior distribution for $h$ varies significantly between the different models. When marginalizing over $r_{s,d}$ both EDE and $\Delta N_{\rm eff}$ are shifted to larger values of $h$ than \LCDM\ and $\Delta m_e$. It is also clear that EDE opens up more parameter space volume than $\Delta N_{\rm eff}$. An important distinction between the physics of these two models is that the additional neutrino energy density has an effect throughout radiation domination whereas the additional energy density in EDE is only briefly relevant. This leads to a different scale dependence of their effects and different degeneracies with $A_s$ and $n_s$, which allows EDE to achieve a larger posterior for $A_s$ and $n_s$, as shown in Fig.~\ref{fig:EDE_zc}, with larger values of $A_s/n_s$ corresponding to smaller values of $\log_{10} z_c$.
We note that even if the EDE/$\Delta N_{\rm eff}$ posteriors for $h$ were not shifted to larger values, the width of all of the $h$ posteriors can easily accommodate both the \textit{Planck} and \shoes-inferred values. As such, none of these analyses rule out these models as resolutions to the Hubble tension.
\subsection{Impact of $n_s$ and $A_s$ priors }
Just as in \LCDM, given the negative degeneracy between $A_s/n_s$ and $h$ in an $r_{s}$-free analysis (see Fig.~\ref{fig:vs_As}) any priors on these parameters will lead to a significant change in the 1D posterior distribution for $h$.
The result of including a CMB prior on $A_s$ and $n_s$ is shown in the second-to-last row of Table \ref{tab:h_beyondLCDM}. There we can see that the degeneracies introduced by EDE and $\Delta N_{\rm eff}$ lead to a $\sim 1\sigma$ shift in $h$ to higher values compared to \LCDM\ and $\Delta m_e$.
We can better understand how the $A_s$ and $n_s$ priors affect these analyses by examining Fig.~\ref{fig:As_vs_ns}. There we show the 3D posterior for $n_s$, $\ln 10^{10} A_s$, and $h$. The gray contour shows the $n_s$ vs.~$\ln 10^{10} A_s$ posterior distribution in \LCDM~ and the red contour shows the CMB prior on $A_s$ and $n_s$. The EDE and $\Delta N_{\rm eff}$ panels clearly show that these models open new parameter space to allow for larger values of $A_s$ and $n_s$ at correspondingly larger values of $h$. When placing a prior on $A_s$ and $n_s$ this additional volume leads to a 1D posterior distribution for $h$ which is shifted to larger values (i.e., cyan and yellow points) than in $\Delta m_e$ or \LCDM.
\subsection{Impact of the $\Omega_m$ priors }
Replacing the PanPlus prior on $\Omega_m = 0.338 \pm 0.018$ with $\theta_{s,d}^{\rm BAO/CMB}$, $\Omega_m = 0.3 \pm 0.01$, results in an increase in the 1D posterior for $h$ for all models with or without the $A_s/n_s$ prior, as shown in Table \ref{tab:h_beyondLCDM}.
The red contours in Fig.~\ref{fig:Om_prior} show the $h$ vs.~$\Omega_m$ degeneracy in all four models using FS+BBN+CMBLens (i.e., no prior on $\Omega_m$). One can see that a negative degeneracy between $h$ and $\Omega_m$ is present in all four models we consider. The dashed blue contours show the posterior when we include the PanPlus prior, the solid blue contours further include the CMB-inspired priors on $A_s/n_s$, and the black contours show the posteriors when PanPlus is replaced with $\theta_{s,d}^{\rm BAO/CMB}$. One can see that, when the prior on $\Omega_m$ decreases, the contours shift along the $h$/$\Omega_m$ degeneracy, leading to larger values of $h$ (see the blue vs.~black contours in Fig.~\ref{fig:Om_prior}). In addition, this figure clearly shows how the inclusion of the $A_s/n_s$ prior significantly reduces the range of $h$ for both \LCDM\ and $\Delta m_e$, but has a much smaller effect for EDE and $\Delta N_{\rm eff}$ (see the dashed blue vs.~solid blue contours in Fig.~\ref{fig:Om_prior}).
Note that in our analysis we fixed the sum of the masses of the neutrinos to their minimum value (0.06 eV). We expect that also allowing the neutrino mass to vary, as done in Ref.~\cite{Philcox:2022sgj}, would make the constraints on $h$ even weaker.
\section{Conclusions}
\label{sec:conclusions}
Full-shape information from measurements of galaxy clustering is poised to contribute important cosmological information when investigating beyond-\LCDM\ models. It is therefore important to clarify what aspects of these measurements are driving the constraints.
The constraining power on $h$ predominantly comes from the BAO sensitivity to the sound horizon, and the same is true of measurements of the CMB. This has led to the development of a number of beyond-\LCDM\ models which change the value of the sound horizon in order to address the Hubble tension. In order to further test these models, it is of interest to develop new analysis methods that extract information about $h$ from observations of pre-recombination physics without relying on the value of the sound horizon.
The full-shape analysis of measured galaxy power spectra can provide such a data set \cite{Philcox:2020xbv}. By marginalizing over the sound horizon and using a BBN prior on $\omega_b$, SNeIa prior on $\Omega_m$, and the measured CMB lensing from \textit{Planck}, the inference of $h$ relies on the amplitude and broad-band shape of the small-scale power spectrum. Previous work has focused on the sensitivity of these data to $k_{\rm eq} = \Omega_m h$, along with a SNeIa-inspired prior on $\Omega_m$, as the main source of sensitivity to $h$.
Here we have demonstrated that the sensitivity is also driven by the amplitude of the small-scale power spectrum.
As a result, beyond-\LCDM\ models which are degenerate with $A_s/n_s$ have the ability to affect the $r_{s}$-free value of $h$.
This also has potential implications for using the extended BAO parameter set presented in Ref.~\cite{Brieden:2021edu} and known as `ShapeFit'. In this approach, the standard BAO and redshift-space distortion parameters are augmented with a parameter that measures the slope of the galaxy power spectrum at $k_{\rm slope} = 0.03\, h {\rm Mpc}^{-1}$. It has been shown that this extended parameter set is competitive with the full-shape analysis of \LCDM\ \cite{Brieden:2022lsd}. Since we show here that constraints on the amplitude of the galaxy power spectrum play an important role when considering beyond-\LCDM\ models, it will be interesting to check whether ShapeFit is able to capture some of the important effects of these models.
We find that beyond-\LCDM\ models which introduce additional energy density with significant pressure support lead to increased values of $h$ in an $r_s$-independent analysis.
This is due to the ways in which these models suppress the growth of structure and are therefore degenerate with the amplitude of the clustering.
Since the amplitude of the small-scale galaxy power spectrum and lensing potential power spectrum play a central role in determining the $r_{s}$-free value of $h$, models which attempt to address both the Hubble and $S_8$ tensions through a suppression of small-scale power \cite{Allali:2021azp,Clark:2021hlo,Joseph:2022jsf} may be particularly interesting to consider in light of the analysis presented here.
We have also explored how various priors on cosmological parameters affect these conclusions.
When using a CMB-inspired prior on $A_s$ and $n_s$ we found that the model-dependence of these results is even more stark, with EDE and $\Delta N_{\rm eff}$ giving posteriors for $h$ which are $\sim 1\sigma$ larger than in \LCDM.
However, the $\Delta m_e$ model, which only affects recombination, has a posterior for $h$ that is statistically identical to the result in \LCDM.
Additionally, we have emphasized the role played by the Pantheon+ prior on $\Omega_m$ in driving the low-$h$ constraints.
Replacing the Pantheon+ prior on $\Omega_m = 0.338 \pm 0.018$ with one from the uncalibrated BAO and CMB measurements of the projected sound horizon, $\Omega_m = 0.30 \pm 0.01$, leads to a shift to higher values of $h$ for all models, with EDE and $\Delta N_{\rm eff}$ still $\sim 1 \sigma$ larger than \LCDM.
The posteriors for $h$ are listed in Tab.~\ref{tab:h_beyondLCDM}.
We conclude that the Hubble constant inferred from these data depends on both the model and the choice of priors on the cosmological parameters.
Our analysis also allows us to determine whether a comparison between the $H_0$ posteriors with and without marginalizing over $r_{s,d}$ in \LCDM\ provides a robust internal consistency test for physics beyond \LCDM.
A summary of these results is shown in Fig.~\ref{fig:whisker2}. Using FS+BBN+CMBLens+PanPlus, one can see that without any prior on $A_s$ and $n_s$, the agreement in \LCDM\ is better than $1\sigma$. The agreement is slightly worse ($\sim 2-2.5\sigma$) once the $A_s$ and $n_s$ priors are included, with a shift in the means of $\Delta H_0\sim 3$~km/s/Mpc.\footnote{Using the same priors on $n_s$ and the sum of the neutrino masses as in Ref.~\cite{Philcox:2022sgj}, we find a similar result: without (with) marginalizing over $r_{s,d}$, $h=0.682_{-0.012}^{+0.011}$ ($h=0.652_{-0.026}^{+0.022}$).} Keeping the $A_s/n_s$ prior and changing the $\Omega_m$ prior to the uncalibrated BAO and CMB measurements of the projected sound horizon brings the $H_0$ values back into excellent agreement.
Given these results, at a minimum we conclude that the consistency of $H_0$ with and without $r_{s,d}$-marginalization in \LCDM\ depends on the choice of priors on the cosmological parameters. In addition, when the \LCDM\ posteriors are consistent, we do not find any indication that the beyond-\LCDM\ models are in tension with the data.
Given this, our results indicate that with current data the internal consistency test proposed in Refs.~\cite{Farren:2021grl,Philcox:2022sgj} is inconclusive.
The results presented here complement those presented in Ref.~\cite{Simon:2022adh}, where we show that BOSS full-shape analyses using both \texttt{PyBird} and \texttt{CLASS-PT} do not rule out the EDE resolution of the Hubble tension. In light of Ref.~\cite{Simon:2022lde}, it will be useful to perform an analysis similar to the one presented here using \texttt{PyBird}, since this code relies on a different choice of EFT priors and BOSS power-spectrum measurements. Indeed, the constraints from these two codes may differ by up to $\sim 1 \sigma$ for \LCDM\ due (mostly) to the impact of priors \cite{Simon:2022lde}.
However, we do not expect the overall conclusions to change, as we have identified physical effects at play in driving degeneracies between $h$ and other parameters.
Current galaxy clustering measurements are not precise enough to rule out or favor beyond-\LCDM\ models which address the Hubble tension. However, unlike for CMB lensing \cite{Baxter:2020qlr}, several near-future galaxy surveys (e.g., DESI \cite{DESI:2016fyo}, Euclid \cite{EUCLID:2011zbd}, VRO \cite{2009arXiv0912.0201L}) will significantly improve sound-horizon-independent constraints on $h$ relative to BOSS DR12.
The work presented here highlights the ways in which beyond-\LCDM\ models which address the Hubble tension may affect the value of $h$ even in $r_{s}$-free analyses.
\begin{acknowledgements}
We thank Pierre Zhang for contributions at early stages of this work, and his comments and insights throughout the project; Adam Riess, Jose Bernal, and Blake Sherwin for helpful comments on the draft; and Eric Jensen and Gerrit Farren for useful conversations. We thank Antony Lewis for help with \texttt{getdist}, Oliver Philcox for help with \texttt{CLASS-PT}, and Adam Riess for providing us with the Pantheon+ likelihood. This work used the Strelka Computing Cluster, which is run by Swarthmore College. TLS is supported by NSF Grant No.~2009377, NASA Grant No.~80NSSC18K0728, and the Research Corporation. This project has received support from the European Union’s Horizon 2020 research and innovation program under the Marie Sk{\l}odowska-Curie grant agreement No.~860881-HIDDeN.
\end{acknowledgements}
\appendix
\section{Derivation of the \LCDM\ parameter scaling equations}
\label{app:scaling}
\subsection{Approximate scalings for the galaxy power spectrum}
It is helpful to recall the basic physics that determines the small-scale ($k>k_{\rm eq}$) form of the matter power spectrum. Roughly speaking, dark matter modes with $k>k_{\rm eq}$ enter the horizon during radiation domination and experience a large Hubble friction, significantly limiting their growth. Once the universe becomes matter dominated all of those modes are able to collapse, growing proportional to $a/a_{\rm eq}$. This scaling gets modified in detail since the dark matter perturbations do grow logarithmically with scale factor during radiation domination \cite{1974A&A....37..225M}, giving an amplitude of the galaxy power spectrum
\begin{eqnarray}
P_{\rm gal}(k>k_{\rm eq}) &\propto& b^2 R_c^2 g(z)^2A_s (a/a_{\rm eq})^{2} \\&\times& [1+\ln(4 a_{\rm eq}/a_k)]^2 \left(\frac{k}{k_p}\right)^{n_s-1}(h/k)^3,\nonumber\\
&=& b^2 f_b\left[\frac{\omega_b}{\omega_{cdm}}\right]\Omega_m^{0.25} A_s a^2 \Omega^2_m h^4 \label{eq:gal} \\&\times& \left[1+\ln\left(\frac{4k/h}{\Omega_m h}\right)\right]^2 \left(\frac{k}{k_p}\right)^{n_s-1}(h/k)^3,\nonumber
\end{eqnarray}
where $b$ is the linear galaxy bias, $R_c \equiv \omega_{cdm}/\omega_m = 1-\omega_b/\omega_m$ is the baryon suppression \cite{Bernal:2020vbb}, horizon crossing occurs when $k=a_k H(a_k)$, $g(z)^2 \propto \Omega_m^{0.25}$ is the growth function at $z \sim 0.3-0.6$, and $k_p$ is the pivot scale (usually chosen to be $k_p = 0.05\ {\rm Mpc}^{-1}$). We note that information about the bias comes from redshift space distortions and the use of informative priors. The second line shows the explicit dependence on $h$ in \LCDM. During radiation domination we have $a_k = 100\ {\rm km/s/Mpc}/c \sqrt{\omega_r}h/k$. Using the fact that $a_{\rm eq} \equiv \omega_r/\omega_m$ we can write $a_{\rm eq}/a_k \simeq k/k_{\rm eq}$. We can see that for $k>k_{\rm eq}$ the logarithmic term enhances the amplitude. A more careful treatment shows that the logarithmic term is $\ln[k/(8k_{\rm eq})]$, so for $k_{\rm eq} \sim 0.01\ h{\rm Mpc}^{-1}$ and $k_{\rm max} = 0.4\ h{\rm Mpc}^{-1}$ we get an enhancement of power at the smallest scales of a factor of $\sim 7$ \cite{Eisenstein:1997ik,Dodelson:2003ft}. This enhancement gives the sensitivity to $k_{\rm eq}$.
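As a quick numerical check of these scalings, the second form of the equation above can be transcribed directly. This is a minimal sketch, not the paper's code: the function name and fiducial defaults are ours, the baryon-suppression factor is taken as $R_c^2$ from the first form, and the overall normalization is arbitrary, so only ratios between parameter choices are meaningful.

```python
import numpy as np

def pgal_scaling(k, h, Omega_m, omega_b, omega_cdm, A_s, n_s,
                 a=0.7, b=2.0, k_p=0.05):
    """Approximate small-scale (k > k_eq) galaxy power spectrum scaling,
    following the second form of the equation above.

    k and k_p are in 1/Mpc; the normalization is arbitrary.
    """
    R_c = omega_cdm / (omega_cdm + omega_b)       # baryon suppression, 1 - omega_b/omega_m
    growth2 = Omega_m**0.25                        # g(z)^2 at z ~ 0.3-0.6
    log_boost = (1.0 + np.log(4.0 * (k / h) / (Omega_m * h)))**2
    return (b**2 * R_c**2 * growth2 * A_s * a**2 * Omega_m**2 * h**4
            * log_boost * (k / k_p)**(n_s - 1.0) * (h / k)**3)
```

Doubling $A_s$ doubles the amplitude at every $k$, which is the degeneracy direction exploited by the beyond-\LCDM\ models discussed in the text.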
The correlation of the monopole and quadrupole moments of the galaxy clustering power spectrum gives us redshift space distortion information which provides sensitivity to the product of the growth rate, $f(z)$, and the variance of mass fluctuations in spheres of radius $R = 8\ {\rm Mpc} h^{-1}$ ($\sigma_8^2$). First, from Ref.~\cite{Planck:2015mym} we have
\begin{equation}
\sigma_8^2 \propto A_s (a/a_{\rm eq})^2 \Omega_m^{0.25} (k_{\rm eq} h^{-1})^{-1.4} \omega_m^{0.45},
\end{equation}
where the dependence on $\Omega_m$ comes from the growth function around the BOSS DR12 redshift bins ($z\sim 0.5$).
In $\Lambda$CDM, the growth rate is approximately \cite{1992ARA&A..30..499C}
\begin{equation}
f(z\sim0.5) \propto \Omega_m^{0.6}.
\end{equation}
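These two scalings can likewise be written down directly (a sketch with our function names; normalizations are arbitrary):

```python
def sigma8_sq_scaling(A_s, a_over_aeq, Omega_m, keq_over_h, omega_m):
    """sigma_8^2 ~ A_s (a/a_eq)^2 Omega_m^0.25 (k_eq h^-1)^-1.4 omega_m^0.45,
    up to an overall constant (Eq. above, from Planck 2015)."""
    return A_s * a_over_aeq**2 * Omega_m**0.25 * keq_over_h**-1.4 * omega_m**0.45

def growth_rate(Omega_m):
    """f(z ~ 0.5) ~ Omega_m^0.6, up to an overall constant."""
    return Omega_m**0.6
```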
\subsection{Approximate scaling for the lensing potential power spectrum}
Since the \textit{Planck} inferred lensing potential power spectrum provides measurements between $8 \leqslant L \leqslant 400$ \cite{Planck:2018lbu}, there are two relevant quantities in the CMB lensing: position of the peak $\ell^{\phi \phi}_{\rm peak}$ and the amplitude of high $L$ power spectrum, $L^4C_L^{\phi \phi}$.
First, the peak of the spectrum is set by $\theta_{\rm eq}$ at $z\sim 2$ \cite{Lewis:2006fu}, so that $\ell^{\phi \phi}_{\rm peak} \propto \Omega_m^{0.75} h\omega_r^{-0.5}$.
Second, the CMB lensing potential power spectrum also has sensitivity to $k_{\rm eq}$. A rough approximation to the combination of parameters measured by estimates of the lensing potential power spectrum is given by \cite{Planck:2015mym}
\begin{eqnarray}
L^4 C_L^{\phi \phi} &\propto& A_s \ell_{\rm eq}^{2} \omega_m^{0.3},\\
&=& A_s h^{2.6}\Omega_m^{3.5},
\end{eqnarray}
where $\ell_{\rm eq} \equiv \chi_{\rm dec} k_{\rm eq}$, $\chi_{\rm dec}$ is the comoving distance to photon decoupling, the power-law index for $\omega_m$ is fit around $L \simeq 200$, and the primordial power spectrum was taken to be scale invariant. The product $A_s \ell_{\rm eq}^2$ can be simply understood: the gravitational potential power spectrum is nearly scale invariant up until $k_{\rm eq}$, at which point it becomes small. The number of collapsed halos of size $r \sim k^{-1}$ that a CMB photon passes by is given by $\chi_*/r \sim k\chi_*$, where $\chi_*$ is the comoving distance to the surface of last scattering, and the typical halo potential is $\sim A_s^{1/2}$. Since $\ell_{\rm eq} = k_{\rm eq} \chi_*$ gives the largest number of halos along the line of sight, the overall amplitude of the deflection power spectrum (which, in turn, is proportional to the lensing potential power spectrum) is proportional to $A_s \ell_{\rm eq}^2$ \cite{2010GReGr..42.2197H}. Additionally, it is straightforward to show that the angular scale of matter-radiation equality at the CMB is $\ell_{\rm eq} \propto \Omega_m^{0.6} h \omega_r^{-0.5}$.
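The lensing scalings can be sketched the same way (our function names; the $\omega_r$ default is the standard photon-plus-neutrino radiation density and is our assumption; normalizations are arbitrary):

```python
def lensing_peak_ell(Omega_m, h, omega_r=4.15e-5):
    """ell_peak^{phi phi} ~ Omega_m^0.75 h omega_r^-0.5, up to a constant;
    omega_r default is an assumed fiducial value."""
    return Omega_m**0.75 * h * omega_r**-0.5

def lensing_amp(A_s, h, Omega_m):
    """L^4 C_L^{phi phi} ~ A_s h^2.6 Omega_m^3.5 around L ~ 200, up to a constant."""
    return A_s * h**2.6 * Omega_m**3.5
```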
\section{The effect of removing the galaxy power spectrum peak}
\label{app:peak}
To demonstrate that the location of the peak of the galaxy power spectrum does not play a role in constraining $h$, we performed an analysis with $k_{\rm min} = 0.05\ h {\rm Mpc}^{-1}$ for the galaxy power spectrum multipoles. This choice is $\sim 5$ times larger than $k_{\rm eq}$, fully removing the peak from the data. The resulting 1D posterior for $h$ is shown in Fig.~\ref{fig:h_kmin}. We can see that the posterior is statistically identical to our fiducial choice of $k_{\rm min} = 0.01\ h {\rm Mpc}^{-1}$. This is not surprising given that the fiducial $k_{\rm min}$ is just slightly less than $k_{\rm eq}$. We note that the signal-to-noise ratio is smallest in the lowest measured modes, since at the largest scales we have the fewest independent measurements.
We note that, although galaxy power spectra may not be able to probe scales large enough to measure the peak, future HI surveys will have enough coverage \cite{Cunnington:2022ryj}.
\section{Checking the BAO smoothing algorithm}
\label{app:check}
Fig.~\ref{fig:vs_rsd} indicates that part of the parameter space may not be modeled correctly. As discussed in Ref.~\cite{Chudaykin:2020aoj}, the BAO smoothing algorithm used in \texttt{CLASS-PT} is constructed to work well for $130\ {\rm Mpc} \leqslant r_{s,d} \leqslant 170\ {\rm Mpc}$. Clearly the \LCDM\ MCMCs have samples which are slightly beyond the lower end of this range. The algorithm performs a sine transform of the matter power spectrum, excises the BAO bump, interpolates between the two smooth regions on either side, and then inverse transforms back to Fourier space. The excision of the BAO bump is done using fixed boundaries, and so will fail if the BAO bump gets close to either of those boundaries. In order to investigate whether this causes an issue at the lower boundary, we modified the algorithm slightly by allowing the boundary to move as the value of $r_{s,d}$ changes.
Our modified algorithm keeps the width of the excised region fixed and shifts it linearly with the value of $r_{s,d}$, recovering the standard values at $r_{s,d}=150$ Mpc. The original algorithm fixes the region of the real-space correlation function that is excised in order to remove the BAO bump: in terms of indices, it removes all points between $N_{\rm left} = 120$ and $N_{\rm right} = 240$ \cite{Chudaykin:2020aoj}. We have modified the range of indices which are removed so that it translates as the value of $r_{s,d}$ changes:
\begin{equation}
N_{\rm left} = 120-20(1-r_{s,d}/150)/(1-120/150),
\end{equation}
and $N_{\rm right} = N_{\rm left} + 120$. We have verified that this algorithm properly excises the BAO bump when $r_{s,d}$ is varied between $110\ {\rm Mpc} \leqslant r_{s,d} \leqslant 170\ {\rm Mpc}$.
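In code form, the shifted excision window is a direct transcription of the formula above (variable names are ours):

```python
def excision_indices(r_sd):
    """Return (N_left, N_right) for the BAO-bump excision window in the
    real-space correlation function, shifted linearly with r_{s,d} (in Mpc).
    Reproduces the standard values (120, 240) at r_sd = 150 Mpc."""
    n_left = 120 - 20 * (1 - r_sd / 150) / (1 - 120 / 150)
    return n_left, n_left + 120
```

At $r_{s,d}=120$ Mpc the window shifts down to $(100, 220)$, keeping the BAO bump inside the excised region as it moves.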
The comparison between the standard and modified algorithm for EDE is shown in Fig.~\ref{fig:EDE_comp}. We focus on EDE here since the value of $r_{s,d}$ has the largest range in this model. There we can see that when `All' of the data is included the two methods are nearly identical. We have checked the other cosmological models we consider show similar insensitivity to the change in the broadband/BAO split.
\section{Verifying the Pantheon+ prior on $\Omega_m$}
\label{app:PanPlus_check}
Instead of using the full Pantheon+ likelihood, we have used a prior on $\Omega_m = 0.338 \pm 0.018$. In order to verify that this prior properly captures all aspects of the likelihood, we have compared the constraints on \LCDM, marginalizing over $r_{s,d}$ and using FS+BBN+CMBLens+PanPlus, obtained by implementing the full Pantheon+ likelihood vs.~using the prior on $\Omega_m$.
The comparison between these two analyses is shown in Fig.~\ref{fig:PanPlus_comp}. There we can see that the posteriors are nearly identical, verifying our use of the Pantheon+ prior on $\Omega_m$. The conclusions of this comparison also hold in the beyond-\LCDM\ models we consider since they all introduce new physics at or before recombination, and therefore are identical to \LCDM\ in the late universe when SNeIa measurements are made.
\newpage
\bibliography{biblio}
|
Title:
Long-Timescale Stability in CMB Observations at Multiple Frequencies using Front-End Polarization Modulation |
Abstract: The Cosmology Large Angular Scale Surveyor (CLASS) is a telescope array
observing the Cosmic Microwave Background (CMB) at frequency bands centered
near 40, 90, 150, and 220 GHz. CLASS measures the CMB polarization on the
largest angular scales to constrain the inflationary tensor-to-scalar ratio and
the optical depth due to reionization. To achieve the long time-scale stability
necessary for this measurement from the ground, CLASS utilizes a front-end,
variable-delay polarization modulator on each telescope. Here we report on the
improvements in stability afforded by front-end modulation using data across
all four CLASS frequencies. Across one month of modulated linear polarization
data in 2021, CLASS achieved median knee frequencies of 9.1, 29.1, 20.4, and
36.4 mHz for the 40, 90, 150, and 220 GHz observing bands. The knee frequencies
are approximately an order of magnitude lower than achieved via CLASS
pair-differencing orthogonal detector pairs without modulation.
| https://export.arxiv.org/pdf/2208.04996 |
\keywords{Cosmic Microwave Background, telescopes, polarization, modulation}
\section{INTRODUCTION}
\label{sec:intro} %
Originating only 380,000 years after the Big Bang, the Cosmic Microwave Background (CMB) is a direct window into the early Universe. The polarization of the CMB contains a wealth of information relating to inflation, reionization, dark matter interactions, and more.\cite{wmapResults,plank2018params,bicep,actBi} The Cosmology Large Angular Scale Surveyor (CLASS) measures the CMB polarization at frequency bands centered near 40, 90, 150, and 220 GHz.\cite{TEH2014,KH2016} Each CLASS telescope consists of a novel, front-end polarization modulator, custom optics and cryogenics, and a focal plane of transition-edge sensor (TES) bolometers\cite{jeff2018, dahal2022}. CLASS has observed from the Atacama Desert in Chile since 2016 with the goal of measuring the largest scales accessible from the ground.
The first optical element in each CLASS telescope is a variable-delay polarization modulator (VPM), which consists of a polarizing wire grid in front of and parallel to a movable mirror.\cite{Chuss2012} This creates a phase delay between incident polarization parallel and perpendicular to the wires. The CLASS VPM mirrors move at a frequency of 10 Hz relative to the wire grid. This rapid modulation encodes the CMB linear polarization (Stokes $Q$/$U$) at a frequency above typical atmospheric and instrumental noise sources that would otherwise impact large angular-scale observations.\cite{miller2016} The VPM also provides sensitivity to circular polarization (Stokes $V$). The CLASS VPM design and 40 GHz performance have been described in Harrington et al. (2018)\cite{KH2018} and Harrington et al. (2021)\cite{KH2021}. In this proceeding we provide an analysis of an initial, representative data set at all four CLASS frequencies.
\section{Data \& Modeling}
\label{sec:data}
Most CMB polarimeters today utilize total power sensors (such as TESs) that are coupled (e.g., by probe antennas) to a single linear polarization. Thus these detectors measure a combination of the total intensity (Stokes $I$), linear polarization (Stokes $Q$/$U$), and circular polarization (Stokes $V$). Orthogonal detector pairs can be differenced to isolate the linear polarization signal. In principle, the pair difference cancels the common, dominant unpolarized component and the circular polarization. In practice, the cancellation is imperfect, and the pair-differenced data will also include spurious signal (i.e., intensity-to-polarization leakage) that could overwhelm the cosmological signal CLASS aims to measure. To suppress the spurious signal on the largest angular scales, CLASS modulates Stokes $U$ and $V$ at a frequency of 10 Hz. (Here, $+Q$ is defined parallel to the VPM wires.) The 10 Hz modulation frequency of the CLASS VPMs is much higher than the rate at which this spurious signal is expected to change, so the spurious signal can be removed via high-pass filtering, leaving the modulated $U$ and $V$. CLASS tracks the relative position of the VPM wire grid and mirror; thus, the modulation functions are known at all times. This allows the recovery of the incident $U$ and $V$ signals. To observe the $Q$ signal, CLASS performs daily boresight rotations that range between $-45^{\circ}$ and $+45^{\circ}$. For a more rigorous discussion of demodulation, see Harrington et al. (2021).\cite{KH2021}
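The lock-in principle behind this demodulation can be illustrated with a toy timestream. The sinusoidal modulation functions and all amplitudes below are illustrative assumptions only; the real VPM transfer functions are more complex:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f_mod, T = 200.0, 10.0, 30.0        # sample rate (Hz), modulation rate (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)

# Toy sky signal plus a slow, unpolarized drift (all amplitudes illustrative)
U_true, V_true = 0.8, 0.3
m_U = np.cos(2 * np.pi * f_mod * t)     # idealized modulation functions; the real
m_V = np.sin(2 * np.pi * f_mod * t)     # VPM transfer functions are more complex
drift = 5.0 * (1.0 + 0.1 * np.sin(2 * np.pi * 0.01 * t))
d = drift + U_true * m_U + V_true * m_V + 0.05 * rng.standard_normal(t.size)

# Lock-in demodulation: multiply by the known reference and average.
# The slow drift averages away because it lives far below the 10 Hz carrier.
U_est = 2.0 * np.mean(d * m_U)
V_est = 2.0 * np.mean(d * m_V)
```

Multiplying by the known reference and averaging recovers the incident $U$ and $V$ while the slow, unmodulated component is strongly suppressed, which is the stability gain the VPM provides.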
We perform a similar analysis to Harrington et al. (2021),\cite{KH2021} focusing on a representative subset of data collected during November 2021, when CLASS observed at all four frequencies. CLASS observes by performing continuous 720$^{\circ}$ rotations in azimuth at constant elevation. These observations were broken into 3-hour segments, then corrected for glitches and other errors. Data segments from known poorly performing detectors or with high glitch rates are excluded. We also excluded segments when the average wind speed at the CLASS site was above 5 m/s and the precipitable water vapor (PWV) was above 3 mm.\footnote[2]{PWV data were acquired from the APEX radiometer website, \url{https://archive.eso.org/wdb/wdb/asm/meteo_apex/form}} After pair differencing and demodulation, power spectral densities are calculated for the pair-differenced, demodulated-$U$, and demodulated-$V$ data. Once spectra have been produced for each data segment, a noise model is fit to each spectrum. This model, given in Equation \ref{eq:noiseModel}, consists of two components: a constant white noise level and a power law rising toward low frequencies (red noise).
\begin{equation}
\label{eq:noiseModel}
PSD(f) = w_n^2 \Bigg(1 + \Bigg( \frac{f}{f_k} \Bigg)^\alpha \Bigg)
\end{equation}
The model parameters are the power-law slope $\alpha$, the white noise level $w_n$, and the knee frequency $\fk$, which is the frequency at which the two components of the model are equal. This is demonstrated in Figure \ref{fig:fkExample}, which shows spectra of CLASS pair-differenced, demodulated-$U$, and demodulated-$V$ data from a single 3-hour segment with arbitrary scale and offset for visual clarity. The dashed lines in the $U$ spectrum represent the separate red and white noise components. The colored arrows indicate the knee frequency of each spectrum. Using a CLASS azimuthal scan speed of $\omega=2^{\circ}$/s and elevation $\theta=45^{\circ}$, each knee frequency $f$ can be converted into an angular scale on the sky and a corresponding multipole $\ell \approx 360^{\circ} f/(\omega\sin{\theta})$. The multipoles are shown in the top axis of Figure \ref{fig:fkExample}.
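A minimal sketch of the noise model and the knee-frequency-to-multipole conversion (our function names; the CLASS fitting pipeline itself is not reproduced here):

```python
import numpy as np

def noise_model(f, w_n, f_k, alpha):
    """PSD(f) = w_n^2 (1 + (f/f_k)^alpha), Eq. (1).  For red noise rising
    toward low frequencies alpha < 0, and PSD(f_k) = 2 w_n^2 by definition
    of the knee frequency (the two components are equal there)."""
    return w_n**2 * (1.0 + (f / f_k)**alpha)

def knee_to_ell(f_k, scan_speed=2.0, elevation=45.0):
    """Convert a knee frequency (Hz) into a sky multipole,
    ell ~ 360 f_k / (omega sin theta), with the scan speed in deg/s
    and the elevation in degrees."""
    return 360.0 * f_k / (scan_speed * np.sin(np.radians(elevation)))
```

With the CLASS scan parameters, $f_k = 40$ mHz maps to $\ell \approx 10$, consistent with the target discussed in the Results section.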
\section{Results}
\label{sec:results}
Data from all four observing bands were modeled to extract knee frequency values. Figure \ref{fig:dists} shows the distribution of knee frequencies for each band. The medians and 16th and 84th percentiles of each distribution are given in Table \ref{tab:fks}. For all four bands, the overall distribution of knee frequencies is significantly lower for demodulated compared to pair-differenced data. The median knee frequency decreases by approximately an order of magnitude between the pair-differenced and $U$ data. CLASS aims to constrain the CMB polarization down to multipoles $\ell < 10$. From Figures \ref{fig:fkExample} and \ref{fig:dists}, given the CLASS scan speed, this roughly corresponds to a knee frequency $\fk \lesssim 40\,\mathrm{mHz}$. The median knee frequency for all four CLASS bands is below 40 mHz for the $U$ data. The lower knee frequencies provided by demodulation significantly reduce the noise present in the CLASS data at low $\ell$. The knee frequencies achieved by CLASS are necessary to measure the CMB on the largest angular scales. However, there are other systematic effects, such as scan-synchronous pick-up, present at low $\ell$ that are the subject of separate studies.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Band & \# segments & pair-diff (mHz) & demod-$U$ (mHz) & demod-$V$ (mHz)\\
\hline
40 GHz & 946 & 170$_{-78.9}^{+297}$ & 9.1$_{-5.4}^{+27}$ & 2.9$_{-2.0}^{+9.6}$ \\
\hline
90 GHz & 2429 & 503$_{-305}^{+752}$ & 29.1$_{-16.7}^{+41.9}$ & 7.1$_{-5.4}^{+12.1}$ \\
\hline
150 GHz & 3447 & 244$_{-134}^{+464}$ & 20.4$_{-13.3}^{+32.0}$ & 6.2$_{-4.7}^{+10.5}$ \\
\hline
220 GHz & 561 & 391$_{-243}^{+629}$ & 36.4$_{-22.1}^{+58.3}$ & 11.5$_{-7.1}^{+20.6}$ \\
\hline
\end{tabular}
\caption{\label{tab:fks} Median knee frequencies for each CLASS observing band and data set from November 2021. Error values give the 68-percentile range of the knee frequency distribution around the median. The second column gives the number of 3-hour data segments analyzed for each band.}
\end{table}
As is clear from Figure \ref{fig:dists} and Table \ref{tab:fks}, the $V$ knee frequencies are systematically lower than those for $U$. This implies that, compared to $U$, the $V$ data has a higher white noise level relative to the red noise component. This may be due to higher $V$ white noise in the presence of comparable red noise, or due to overall weaker red noise in the CLASS $V$ data. Sources of noise in the polarization data that are not modulated by the VPM have their amplitudes increased by correcting the signal and data for the VPM modulation efficiency.\cite{KH2018} White noise for the $V$ data is approximately double the $U$ white noise due to this correction.\cite{KH2021} Whether the higher $V$ white noise is responsible for the lower knee frequencies depends on whether the red noise is uncorrelated with the $V$ modulation function or if it represents on-sky signal.\footnote[2]{While CLASS has made the first detection of continuum circular polarization due to Zeeman splitting of oxygen in the atmosphere\cite{petroff2020, padilla2020}, we do not expect this component to have significant random fluctuations, and there are no other known sources of $V$ fluctuations on sky.}
\section{Conclusions}
\label{sec:conc}
CLASS observes the CMB polarization on the largest angular scales to constrain many cosmological phenomena such as inflation and reionization. Employing a VPM on each telescope provides CLASS with additional observational stability beyond that of a standard, pair-differenced-only polarization measurement. This is demonstrated here by the reduction in knee frequencies of demodulated CLASS data compared to pair differencing alone. For the data considered here, CLASS achieved median knee frequencies of 9.1, 29.1, 20.4, and 36.4 mHz for the $U$ data at 40, 90, 150, and 220 GHz, respectively. While red noise is not the sole factor in the final low-$\ell$ sensitivity of CLASS, these results provide a first look at the long-timescale stability of CLASS data across all four frequencies. On-going work will expand this analysis to include the whole multi-frequency data set.
\acknowledgments
We acknowledge the National Science Foundation Division of Astronomical Sciences for their support of CLASS under Grant Numbers 0959349, 1429236, 1636634, 1654494, 2034400, and 2109311. The CLASS project employs detector technology developed under several previous and ongoing NASA grants. Detector development work at JHU was funded by NASA cooperative agreement 80NSSC19M0005. Data analysis for CLASS is conducted using computational resources of the Advanced Research Computing at Hopkins (ARCH). We further acknowledge the very generous support of Jim and Heather Murren (JHU A$\&$S ’88), Matthew Polk (JHU A$\&$S Physics BS ’71), David Nicholson, and Michael Bloomberg (JHU Engineering ’64). R.R. is supported by the ANID BASAL projects ACE210002 and FB210003. Z.X. is supported by the Gordon and Betty Moore Foundation through grant GBMF5215 to the Massachusetts Institute of Technology. CLASS is located in the Parque Astronómico Atacama in northern
Chile under the auspices of the Agencia Nacional de Investigación y Desarrollo (ANID).
\bibliography{report} %
\bibliographystyle{spiebib} %
|
Title:
Search for LBVs in the Local Volume galaxies: study of four stars in NGC 4449 |
Abstract: We continue to search for LBV stars in galaxies outside the Local Group. In
this work, we have investigated four luminous stars in NGC 4449. Multiple
spectral observations carried out for J122810.94+440540.6, J122811.70+440550.9,
and J122809.72+440514.8 revealed the emission features in their spectra that
are characteristic of LBVs. Photometry showed noticeable brightness changes of
J122809.72+440514.8 ($\Delta I=0.69\pm0.13^m$) and J122817.83+440630.8 ($\Delta
R=2.15\pm0.13^m$), while the variability of J122810.94+440540.6 and
J122811.70+440550.9 does not exceed $0.3^m$ regardless of the filter. We have
obtained estimates of the interstellar reddening, photosphere temperatures, and
bolometric luminosities $\log(\text{L}_\text{Bol}/\text{L}_{\odot}) \approx
5.24-6.42$. Using the CMFGEN code, we have modelled the spectrum of the cold
state of J122809.72+440514.8 ($T_{\text{eff}}=9300\,$K) and have obtained
possible value of the mass loss rate $\dot{M} =
5.2\times10^{-3}\,M_{\odot}\,yr^{-1}$. Based on the observational properties,
J122809.72+440514.8 and J122817.83+440630.8 were classified as LBVs, while the
other two stars were classified as LBV candidates or B[e]-supergiants
candidates
| https://export.arxiv.org/pdf/2208.05892 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords} stars: emission lines, Be -- stars: variables: S Doradus -- stars: massive -- galaxies: individual: NGC\,4449
\end{keywords}
\section{Introduction}
Luminous blue variables (LBVs) are massive ($M\geq25\,M_{\odot}$, \citealt{Humphreys16}) stars with high luminosity ($\gtrsim10^5\,L_{\odot}$), characterized by significant spectral and photometric variability.
A characteristic feature of many LBV stars is the S Dor-type variability \citep{vanGenderen01}. During this cycle, both photometric and spectral variability are observed. The brightness variation amplitude in the V band can reach 2.5 magnitudes, with the cold state of the star corresponding to maximum brightness and the hotter state to minimum. Spectra of LBVs are similar to those of A-F supergiants, hot B supergiants, or Of/late--WN stars, depending on the photosphere temperature \citep{Vink12,Humphreys94}. A luminous star exhibiting the described variability can be unambiguously classified as an LBV in the S Dor cycle. In addition, some LBV stars experience dramatic brightness changes in the form of giant eruptions of more than $2.5^m$. This type of brightness change is known as $\eta$ Car-type variability \citep{Humphreys99}. An important feature of the spectral energy distribution of LBVs is the absence of an IR excess associated with the emission of hot dust \citep{Humphreys14}.
The evolutionary status of LBVs remains poorly known. In the accepted view, they correspond to a transitional phase from single massive O stars to Wolf-Rayet stars \citep{Groh14}. Some studies have shown that rotating LBVs with initial masses of 20--25 $M_{\odot}$ can explode directly as core-collapse supernovae, bypassing the Wolf-Rayet stage \citep{Groh13}. The possibility of the appearance of LBVs as a result of the evolution of close binaries is considered in \citet{Smith15}.
A classification of high-luminosity stars observed in the galaxies M\,33 and M\,31 was proposed by \citet{Humphreys14} according to their spectral and photometric characteristics: B[e]-supergiants, Of/late--WN stars, LBVs, warm hypergiants, Fe\,II-emission stars, and hot and intermediate supergiants. These stars have similar spectral characteristics, which complicates the search for LBVs; however, only LBVs show significant brightness variability. Despite their similarity, the evolutionary connections between these luminous stars have not yet been clarified.
To date, only about 40 LBVs and about a hundred LBV candidates (cLBVs) are known in our and other galaxies, mostly belonging to the Local Group \citep{Richardson18}. The information about these stars in the Local Volume is incomplete, and only a few LBVs and cLBVs are known beyond 1 Mpc (for example, \citealt{Pustilnik17, Humphreys19, Drissen97, Goranskij16, Solovyeva19}). Confirmation of LBV status requires a large amount of observational time to reveal photometric and spectral variability, but the discovery of new LBVs and candidates may clarify the origin of the LBV phenomenon and the evolutionary status of LBVs.
We search for LBVs and similar objects in the Local Volume galaxies by selecting point-like H$\alpha$ sources associated with blue stars. For this purpose we use archival broadband and narrowband H$\alpha$ images obtained with the Hubble Space Telescope (HST). The first results of our search in the NGC\,4736 and NGC\,247 galaxies were published in \citet{Solovyeva19, Solovyeva20}. This paper presents the results of a study of the NGC\,4449 galaxy (distance D=4.27 Mpc, \citealt{Tully13}). This dwarf irregular galaxy of Magellanic type (Ibm type) has a specific star formation rate higher than that of the Large Magellanic Cloud (according to the Catalog \& Atlas of the LV galaxies\footnote{https://serv.sao.ru/lv/lvgdb/}). We have discovered four new cLBVs in this galaxy: J122810.94+440540.6, J122811.70+440550.9, J122809.72+440514.8 and J122817.83+440630.8 (Fig.~\ref{Fig1}). In this paper, we present the results of a detailed study of the detected stars based on spectral and photometric observations.
\section{OBSERVATIONS AND DATA REDUCTION}
\subsection{Spectroscopy}
The spectra of all four LBV candidates were obtained with the 6-m telescope of SAO RAS (BTA) using the SCORPIO or SCORPIO-2 focal reducers \citep{Afanasiev05,AfanasievSco2}. The slit width was 0.5--1.2\arcsec. The observation dates, seeing and grisms used are listed in Table~\ref{Tab1}. Spectral data processing was performed with the \textsc{long} context of \textsc{midas} using the standard algorithm. The spectra were extracted with the \textsc{spextra} package \citep{Sarkisyan17}, intended for long-slit spectra in crowded stellar fields.
\subsection{Imaging}
Photometric data were obtained with the 2.5-m telescope of the Caucasian Mountain Observatory of SAI MSU (2.5-m CMO), the BTA and the Zeiss-1000 telescope of SAO RAS (Zeiss-1000). We also used archival data of the HST (ACS, WFPC2 and WFC3 cameras) and the Bok Telescope of Kitt Peak National Observatory (Bok). Details are given in Tables~\ref{Tab2}, \ref{Tab3} and \ref{Tab4}. Primary processing of the 2.5-m CMO, BTA and Zeiss-1000 data was carried out with \textsc{midas}.
To determine stellar magnitudes in the ground-based observations, we performed point spread function (PSF) photometry, since the objects are located in crowded stellar fields and aperture photometry leads to overestimation of the source fluxes. The PSF photometry was performed using the \textsc{daophot\,ii} package \citep{Stetson87}. Absolute calibration of the ground-based observations was performed using 27--32 reference stars whose fluxes were measured from the HST images. The only exception was the U-band data (observation taken with the 2.5-m CMO, Table~\ref{Tab3}), where we used reference star fluxes from the SDSS Photometric Catalog \citep{Abazajian09} in order to avoid incorrect flux estimates for stars with a deep Balmer jump: the U filter is centered ($\lambda_{c}\approx3600$\AA) near the Balmer jump, while the HST F336W (or F330W) filter covers this wavelength range only partly. The resulting magnitudes are shown in Table~\ref{Tab3}; their errors include the statistical errors of the flux measurements and of the absolute calibration, and also account for background irregularities around the objects. For the objects J122810.94+440540.6 and J122811.70+440550.9 we do not provide ground-based photometry, because they have nearby stars of comparable brightness and many faint neighbours, which complicates the background subtraction and makes the results unstable. For these sources we show only magnitudes obtained from the HST data.
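The absolute-calibration step can be illustrated with a minimal sketch: each reference star gives an independent zero-point estimate, and a robust combination sets the calibration. The helper name `zero_point` and the counts/magnitudes below are invented for illustration; the actual calibration used 27--32 HST-measured reference stars.

```python
import numpy as np

def zero_point(instr_counts, ref_mags):
    """Photometric zero point from reference stars.

    With m = ZP - 2.5 log10(counts), each star yields one ZP
    estimate; the median guards against outliers. Returns the
    median ZP and the scatter among the per-star estimates."""
    zps = np.asarray(ref_mags) + 2.5 * np.log10(np.asarray(instr_counts))
    return float(np.median(zps)), float(np.std(zps))

# illustrative counts and catalogue magnitudes (not real data)
counts = [12000.0, 8300.0, 20500.0]
mags = [18.92, 19.32, 18.34]
zp, zp_scatter = zero_point(counts, mags)
print(round(zp, 2))  # 29.12 for these illustrative values
```

A calibrated magnitude then follows as `zp - 2.5 * log10(counts)` for any target measured on the same frame.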
For the HST/WFPC2 data, we performed PSF photometry using the \textsc{hstphot}\,1.1 package \citep{Dolphin2000} on c0f images. The data from the ACS and WFC3 cameras were analyzed by the aperture photometry method using the APPHOT package of \textsc{iraf}. The calibrated pipeline-processed drizzled images were acquired from the MAST archive. Radii of circular apertures ($r_{\rm ap}$) for the source flux measurements and radii of annuli ($r_{\rm in}, r_{\rm out}$) for the background were chosen depending on the camera: $r_{\rm ap} = 0.15\arcsec$, 0.075\arcsec\ and 0.12\arcsec, $r_{\rm in} = 0.25\arcsec$, 0.25\arcsec\ and 0.20\arcsec, $r_{\rm out} = 0.45\arcsec$, 0.50\arcsec\ and 0.44\arcsec\ for ACS/WFC, ACS/HRC and WFC3/UVIS, respectively.
The aperture corrections were determined from photometric measurements of 20 (ACS/WFC), 35 (ACS/HRC), and 36 (WFC3/UVIS) isolated bright stars.
The object J122809.72+440514.8 has a saturated central pixel in the F435W, F555W, F814W images of the HST/ACS/WFC data obtained on 2005 November 11.
To exclude the damaged part of its PSF, we carried out photometry of this source in an annular aperture with inner and outer radii of 0.1\arcsec\ and 0.2\arcsec. To obtain valid absolute magnitudes, this aperture was calibrated on $\sim40$ reference stars. The sky background for this source was measured from an annulus with $r_{\rm in}=0.3$\arcsec\ and $r_{\rm out}=0.5$\arcsec. The correctness of this approach is confirmed by the fact that, after conversion to the standard (Johnson-Cousins) system, the V magnitudes from the damaged observation (obtained from the F555W image) and from the closest subsequent observation (the F550M filter, November 18) are consistent within the measurement errors.
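The annular measurement that excludes the saturated core can be sketched with a simple pixel mask. This pure-NumPy helper (`annulus_sum`, a name introduced here) and the uniform test image are illustrative, not the actual ACS frames:

```python
import numpy as np

def annulus_sum(image, x0, y0, r_in, r_out):
    """Sum pixel values in an annulus centred on (x0, y0).

    A mask-based sketch of aperture photometry that skips the
    inner (e.g. saturated) region: pixels with r_in <= r < r_out
    are summed. Returns (total flux, number of pixels used)."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    mask = (r >= r_in) & (r < r_out)
    return float(image[mask].sum()), int(mask.sum())

# on a uniform image, the flux equals the pixel count in the annulus
img = np.ones((21, 21))
flux, npix = annulus_sum(img, 10, 10, 2.0, 4.0)
print(npix)  # 36 pixels fall in this annulus on the integer grid
```

In practice one would also subtract a per-pixel sky level estimated from an outer annulus, exactly as described above.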
The measured HST magnitudes were converted into the standard Johnson-Cousins system using the calcphot function of the \textsc{PySynphot} package. For J122810.94+440540.6 and J122817.83+440630.8, we assumed a power law as the model spectrum and determined the spectral indices from the fluxes in adjacent filters. The other two objects, J122811.70+440550.9 and J122809.72+440514.8 (according to the 1997 data), show evidence of a Balmer discontinuity (see below). Therefore, to convert the F336W or F330W magnitudes into the U band, we used the model spectra presented in Sec.~\ref{res}. The remaining magnitudes were converted using a power law with spectral indices calculated in the manner described above.
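For the power-law conversions, the spectral index follows directly from two adjacent broadband fluxes. The helper `spectral_index` and the numbers below are illustrative; this stands in for, rather than reproduces, the \textsc{PySynphot} calcphot step:

```python
import math

def spectral_index(f1, lam1, f2, lam2):
    """Power-law index alpha for F_lambda ∝ lambda**alpha,
    determined from fluxes in two adjacent filters."""
    return math.log(f1 / f2) / math.log(lam1 / lam2)

# a flux doubling between ~5500 A and ~8140 A implies a red, rising slope
alpha = spectral_index(2.0, 8140.0, 1.0, 5500.0)
print(round(alpha, 2))  # 1.77
```

The resulting index fixes the model spectrum used to synthesize the magnitude in the target (Johnson-Cousins) bandpass.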
Resulting magnitudes are shown in Tables~\ref{Tab3} and \ref{Tab4}. The quoted errors include, besides the statistical error of the flux measurement, the accuracy of the conversion between the photometric systems \citep{Sirianni05, Harris18}, the stability of the zero points\footnote{https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/photometric-calibration,\\ https://www.stsci.edu/itt/review/ihb\_cy15/
ACS/c03\_intro\_acs6.html} and the stability of the filter PSF in each particular observation (for example, \citealt{Anderson06}). Thus, since the instrumental errors have a characteristic value of the order of 1--3\%, the total photometric errors are not better than 3\% even in cases where the statistical errors were insignificant.
We also analysed infrared images of three LBV candidates obtained with the WFC3/IR camera on 2019 April 4 (Table~\ref{Tab4}); J122817.83+440630.8 was outside the field of view of this pointing. We carried out aperture photometry using radii $r_{\rm ap}=0.26\arcsec$, $r_{\rm in}=0.43$\arcsec\ and $r_{\rm out}=0.82$\arcsec. The aperture correction factor was determined from measurements of $\sim$20 isolated bright stars in 0.26\arcsec\ and 0.40\arcsec\ apertures. The obtained magnitudes are shown in Table~\ref{Tab4}, where we also present the results of photometry in the H$\alpha$ filter (F658N) of the ACS/WFC and ACS/HRC cameras.
\begin{table*}
\caption{Log of spectral observations on the BTA.}
\begin{tabular}{ccccccc}
\hline\hline
Star& Date& Spectrograph, grism & Spectral & Spectral & Seeing & Total exp., s\\
& & & resolution,\AA & range, \AA & & \\ \hline
J122810.94+440540.6 & 2014/01/02 & SCORPIO/VPHG1200R & 5.0 & 5700-7400 & 1.6\arcsec & 2400 \\
& 2015/01/18 & SCORPIO/VPHG1200G & 5.0 & 3900-5700 & 1.7\arcsec & 2400 \\
& 2020/01/18 & SCORPIO/VPHG1200B & 5.5 & 3600-5400 & 1.8\arcsec & 1800\\ \hline
J122811.70+440550.9 & 2014/01/02 & SCORPIO/VPHG1200R & 5.0 & 5700-7400 & 1.6\arcsec & 2400\\
& 2015/01/18 & SCORPIO/VPHG1200G & 5.0 & 3900-5700 & 1.7\arcsec & 2400\\
& 2020/01/18 & SCORPIO/VPHG1200B & 5.5 & 3600-5400 & 1.8\arcsec & 1800\\ \hline
J122817.83+440630.8 & 2021/02/11 & SCORPIO-2/VPHG1200@540 & 5.2 & 3650-7250 & 2.3\arcsec & 1200\\ \hline
J122809.72+440514.8 & 2014/01/02 & SCORPIO/VPHG1200R & 5.0 & 5700-7400 & 1.5\arcsec & 2400 \\
& 2015/01/18 & SCORPIO/VPHG1200G & 5.0 & 3900-5700 & 1.7\arcsec & 2400 \\
& 2017/03/31 & SCORPIO/VPHG1200G & 5.0 & 3900-5700 & 1.8\arcsec & 3000 \\
& 2018/02/18 & SCORPIO/VPHG1200G & 5.0 & 3900-5700 & 1.1\arcsec & 2400\\
& 2018/02/18 & SCORPIO/VPHG550G & 7.3 & 3100-7300 & 1.2\arcsec & 1800\\
& 2020/08/18 & SCORPIO/VPHG1200B & 5.5 & 3600-5400 & 1.6\arcsec & 2700\\ \hline
\end{tabular}
\label{Tab1}
\end{table*}
\begin{table*}
\caption{Log of HST observations. The objects J122810.94+440540.6, J122811.70+440550.9, J122817.83+440630.8 and J122809.72+440514.8 are denoted 1, 2, 3 and 4, respectively.}
\begin{tabular}{cccc}
\hline\hline
Date& Camera& Filters& Objects\\ \hline
1995/05/10 & WFPC2 & F606W & 1, 2, 3, 4 \\
1997/07/28 & WFPC2 & F170W, F336W, F555W, F814W & 4\\
1998/01/09 & WFPC2 & F170W, F336W, F555W, F814W & 1, 2, 3\\
2005/11/10 & ACS/WFC & F435W, F555W, F814W & 1, 2\\
2005/11/11 & ACS/WFC & F435W, F555W, F658N, F814W & 1, 2, 3, 4\\
2005/11/17 & ACS/WFC & F814W & 1, 2, 3, 4\\
2005/11/18 & ACS/WFC & F550M & 1, 2, 3, 4 \\
2006/01/26 & ACS/HRC & F330W, F550M, F658N, F814W & 1, 2 \\
2014/07/09 & WFC3/UVIS & F275W, F336W& 1, 2, 3, 4 \\
2019/04/04 & WFC3/IR & F110W, F160W & 1, 2, 4\\
\hline
\end{tabular}
\label{Tab2}
\end{table*}
\begin{table*}
\begin{minipage}{18cm}
\caption{Results of the optical and UV photometry. The columns show the instruments, dates and observed stellar magnitudes (not corrected for reddening). All magnitudes are given in the VEGAMAG system.}
\begin{tabular}{lcccccccc} \hline\hline
\centering
Telescope & Date & F170W, mag & F275W, mag & U, mag & B, mag & V, mag &R, mag & I, mag \\ \hline
\multicolumn{9}{c}{J122810.94+440540.6} \\ \hline
WFPC2 &1995/05/10 & ---& ---&---& --- & --- & $19.15\pm0.04$ & --- \\
WFPC2 &1998/01/09 & $>20.00$& ---&$18.91\pm0.04$ & --- & $19.34\pm0.03$ & --- & $18.60\pm0.03$ \\
ACS/WFC& 2005/11/10& ---& ---& --- & --- & $19.34\pm0.03$ & --- & $18.63\pm0.03$\\
ACS/WFC& 2005/11/11& ---& ---& --- & $19.67\pm0.03$ & $19.33\pm0.03$ & --- & $18.64\pm0.03$\\
ACS/WFC& 2005/11/17& ---& ---& --- & --- & --- & --- & $18.67\pm0.03$ \\
ACS/WFC& 2005/11/18& ---& ---& --- & --- & $19.45\pm0.03$ & --- & --- \\
ACS/HRC& 2006/01/26& ---& ---& $19.09\pm0.04$ & $19.60\pm0.03$ & $19.31\pm0.03$ & --- & $18.66\pm0.03$ \\
WFC3/UVIS & 2014/07/09& --- & $19.69\pm0.05$& $18.80\pm0.04$ & --- & --- & --- & --- \\
\hline
\multicolumn{9}{c}{J122811.70+440550.9} \\ \hline
WFPC2 &1995/05/10 & ---& ---&---& --- & --- & $19.74\pm0.04$ & --- \\
WFPC2 &1998/01/09 & $18.83\pm0.11$& ---&$19.19\pm0.04$ & --- & $20.19\pm0.03$ & --- & $19.50\pm0.03$ \\
ACS/WFC& 2005/11/10& ---& ---& --- & --- & $20.12\pm0.03$ & --- & $19.48\pm0.03$\\
ACS/WFC& 2005/11/11& ---& ---& --- & $20.29\pm0.03$ & $20.15\pm0.03$ & --- & $19.51\pm0.03$\\
ACS/WFC& 2005/11/17& ---& ---& --- & --- & --- & --- & $19.47\pm0.03$ \\
ACS/WFC& 2005/11/18& ---& ---& --- & --- & $20.25\pm0.03$ & --- & --- \\
ACS/HRC& 2006/01/26& ---& ---& $19.33\pm0.04$ & $20.32\pm0.03$ & $20.28\pm0.04$ & --- & $19.60\pm0.03$ \\
WFC3/UVIS & 2014/07/09& --- & $18.40\pm0.05$& $19.12\pm0.04$ & --- & --- & --- & --- \\
\hline
\multicolumn{9}{c}{J122817.83$+$440630.8} \\ \hline
WFPC2 &1995/05/10 & ---& ---&---& --- & --- & $22.03\pm0.06$ & --- \\
WFPC2 &1998/01/09 & $>20.29$& ---&$21.25\pm0.08$ & --- & $22.01\pm0.06$ & --- & $21.60\pm0.07$ \\
BOK &2001/03/31 &---&---&---&---&---&$19.88\pm0.12$& --- \\
ACS/WFC& 2005/11/11& ---& ---& --- & $20.98\pm0.03$ & $20.69\pm0.03$ & --- & $20.03\pm0.03$\\
ACS/WFC& 2005/11/17& ---& ---& --- & --- & --- & --- & $20.00\pm0.03$ \\
ACS/WFC& 2005/11/18& ---& ---& --- & --- & $20.72\pm0.03$ & --- & --- \\
WFC3/UVIS & 2014/07/09& --- & $21.28\pm0.05$& $21.79\pm0.03$ & --- & --- & --- & --- \\
2.5m CMO & 2020/03/07 & --- & --- & ---& --- & ---& $22.04\pm0.20$&---\\
\hline
\multicolumn{9}{c}{J122809.72+440514.8} \\ \hline
WFPC2& 1995/05/10 & ---& ---& --- & --- & ---& $18.22\pm0.04$ & ---\\
WFPC2& 1997/07/28& $>20.20$& ---& $18.14\pm0.03$ & --- & $17.98\pm0.04$ & --- & $17.48\pm0.03$ \\
BOK &2001/03/31 &---&---&---&---&---&$17.71\pm0.11$& --- \\
ACS/WFC& 2005/11/11& ---& ---& --- & $18.30\pm0.08$ & $18.05\pm0.09$ & --- & $17.56\pm0.07$ \\
ACS/WFC& 2005/11/17& ---& ---& --- & --- & --- & --- & $17.50\pm0.03$ \\
ACS/WFC& 2005/11/18& ---& ---& --- & --- & $17.94\pm0.03$ & --- & --- \\
WFC3/UVIS & 2014/07/09& --- & $19.19\pm0.05$& $17.98\pm0.03$ & --- & --- & --- & --- \\
BTA & 2018/02/18& --- & --- &--- & $18.61\pm0.15$ & $18.23\pm0.15$ & $18.01\pm0.11$ & $17.73\pm0.07$ \\
2.5m CMO & 2019/01/18 & --- & --- & ---&$18.49\pm0.11$ & $18.13\pm0.10$& $18.02\pm0.07$&---\\
Zeiss-1000 & 2019/04/09& --- & --- & --- &$18.53\pm0.09$ &$18.28\pm0.09$ & --- & --- \\
Zeiss-1000 & 2019/05/22& --- & --- & --- &$18.51\pm0.11$ &$18.16\pm0.09$ & --- & --- \\
Zeiss-1000 & 2019/11/24& --- & --- & --- &$18.60\pm0.18$ &$18.28\pm0.06$ & $18.18\pm0.11$ & --- \\
BTA & 2020/01/18& --- & ---& --- & $18.61\pm0.21$ &$18.38\pm0.19$& --- & --- \\
Zeiss-1000 & 2020/02/17& --- & --- & --- &--- &$18.46\pm0.14$ & $18.20\pm0.05$ & $18.17\pm0.13$\\
2.5m CMO & 2020/03/07 & --- & --- & $17.72\pm0.13$&$18.51\pm0.09$ & $18.32\pm0.07$& $18.14\pm0.06$&$18.02\pm0.11$\\
Zeiss-1000 & 2020/11/14 & --- & --- & ---&$18.60\pm0.16$&$18.28\pm0.10$ & $18.19\pm0.07$ & $18.03\pm0.12$ \\
2.5m CMO & 2020/12/03 & --- & --- & ---&$18.54\pm0.08$ & $18.27\pm0.12$& $18.17\pm0.08$&---\\
2.5m CMO & 2021/03/05 & --- & --- & ---&$18.48\pm0.13$ & $18.23\pm0.05$& $18.13\pm0.07$&---\\
\hline
\end{tabular}
\label{Tab3}
\end{minipage}
\end{table*}
\begin{table*}
\caption{Results of the photometry in H$\alpha$ and near-IR.}
\begin{tabular}{ccccc} \hline\hline
\centering
& \multicolumn{1}{c}{HST/ACS/WFC} & \multicolumn{1}{c}{HST/ACS/HRC} & \multicolumn{2}{c}{HST/WFC3/IR} \\
& \multicolumn{1}{c}{(2005/11/11)} & \multicolumn{1}{c}{(2006/01/26)} & \multicolumn{2}{c}{(2019/04/04)} \vspace{1ex} \\
Object & F658N, & F658N, &F110W, & F160W, \\
& mag & mag & mag & mag \\ \hline
J122810.94+440540.6 &-- & $17.44\pm0.05$ & $18.46 \pm 0.03$ & $18.06 \pm 0.04$ \\
J122811.70+440550.9 &-- & $17.41\pm0.05$ & $19.35 \pm 0.03$ & $18.87 \pm 0.04$ \\
J122817.83+440630.8 &$18.33\pm0.05$ &-- & -- & -- \\
J122809.72+440514.8 &$17.44\pm0.05$ &-- & $17.83 \pm 0.02$ & $17.57 \pm 0.02$ \\ \hline
\end{tabular}
\label{Tab4}
\end{table*}
\section{Results}
\label{res}
\subsection{J122810.94+440540.6}
\subsubsection{Spectra}
\label{specJ122810}
The J122810.94+440540.6 spectra obtained with the BTA are shown in Fig.~\ref{Fig2}. The spectra contain hydrogen Balmer emission lines with obvious broad components. The presence of narrow components in these lines, associated with nebular emission, makes it difficult to estimate the FWHM of the broad components. The bright narrow emission lines [\ion{O}{iii}]\,$\lambda$4959, $\lambda$5007, [\ion{N}{ii}]\,$\lambda$6548, $\lambda$6583, [\ion{S}{ii}]\,$\lambda$6717, $\lambda$6731 must also belong to the nebula, which is most likely background rather than physically related to the object. The spectra also contain a large number of \ion{Fe}{ii} and [\ion{Fe}{ii}] emission lines and weak \ion{He}{i} lines. There is no significant change in the fluxes and line shapes between the spectra obtained in 2015 and 2020. We have estimated the interstellar reddening as $A_V=0.2\pm0.2^m$ based on the ratio of the hydrogen lines of the nebula measured slightly away from the object, assuming case B photoionization \citep{Osterbrock06}.
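The reddening estimate from the nebular hydrogen lines can be sketched as a Balmer-decrement calculation. The helper `balmer_av`, the case~B intrinsic ratio of 2.86 and the approximate extinction-curve coefficients $k(\mathrm{H}\beta)\approx3.61$, $k(\mathrm{H}\alpha)\approx2.53$ are assumptions of this sketch (the observed ratio below is invented), not the paper's exact numerical inputs:

```python
import math

def balmer_av(f_halpha, f_hbeta, r_v=3.07,
              k_ha=2.53, k_hb=3.61, intrinsic=2.86):
    """A_V from the observed Halpha/Hbeta flux ratio.

    Assumes case B recombination (intrinsic ratio ~2.86 at 10^4 K)
    and approximate k(lambda) values for a standard extinction
    curve; E(B-V) follows from the excess of the observed ratio
    over the intrinsic one, and A_V = R_V * E(B-V)."""
    ebv = 2.5 / (k_hb - k_ha) * math.log10((f_halpha / f_hbeta) / intrinsic)
    return r_v * ebv

# an illustrative observed ratio of 3.1 implies a small reddening
print(round(balmer_av(3.1, 1.0), 2))  # 0.25
```

A ratio equal to the intrinsic value returns $A_V=0$, as expected for an unreddened nebula.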
\subsubsection{Photometry and spectral energy distribution}
Photometry of J122810.94+440540.6 did not reveal a significant change in its brightness (the maximal variation is $\Delta U = 0.29\pm0.06^m$; Table~\ref{Tab3}).
The photosphere temperature of J122810.94+440540.6 was estimated from the photometric data over a wide range of wavelengths. For this purpose we fitted the spectral energy distribution (SED) of the object, constructed from the magnitudes in the original HST filters obtained in the 1998 observation (Fig.~\ref{Fig3}a). The ACS/HRC (2006), IR (2019) and UV (2014) photometry is also shown in Fig.~\ref{Fig3}a but was not used in the fitting procedure. The SED demonstrates a downturn in the range of the F275W filter, probably associated with strong absorption by metals. The SED was fitted with a blackbody model accounting for interstellar extinction with $R_V=3.07$ \citep{Fitzpatrick99}. The interstellar reddening was restricted to the range $A_V=0.2\pm0.2^m$ derived from the spectroscopy. We obtained a best-fitting temperature $\rm{T}_{\rm{eff}} = 10000\pm500$\,K at $A_V \approx 0.4^m$, and the corresponding bolometric magnitude and luminosity $\textrm{M}_\textrm{{Bol}}=-9.60\pm0.23^m$ and $\log(\text{L}_\text{Bol}/\text{L}_{\odot})=5.76\pm0.09$. It is worth noting the presence of a notable near-IR excess in the F110W and F160W bands, probably associated with free-free emission of the wind and/or with the presence of warm circumstellar dust.
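The blackbody part of such an SED fit can be sketched as a least-squares fit of a scaled Planck function to broadband fluxes. This is a simplified sketch: the fluxes are synthetic and already dereddened, the wavelengths are rough filter pivots, and the \citet{Fitzpatrick99} extinction treatment is omitted; the function names are introduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def planck_nu(nu, t):
    """Blackbody specific intensity B_nu(T) in cgs units."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * t))

def fit_blackbody(wave_aa, flux, t0=10000.0):
    """Fit a scaled blackbody to dereddened broadband F_nu fluxes.

    wave_aa: effective wavelengths in Angstrom. Returns the
    best-fitting (temperature, scale)."""
    nu = C / (wave_aa * 1e-8)
    model = lambda n, t, s: s * planck_nu(n, t)
    scale0 = flux.max() / planck_nu(nu.max(), t0)
    popt, _ = curve_fit(model, nu, flux, p0=[t0, scale0])
    return popt

# synthetic check: recover the temperature of a 10 kK blackbody
waves = np.array([3360.0, 5550.0, 8140.0])  # ~F336W, F555W, F814W pivots
fluxes = 1e-20 * planck_nu(C / (waves * 1e-8), 10000.0)
t_fit, _ = fit_blackbody(waves, fluxes)
print(round(float(t_fit)))  # recovers ~10000
```

The real fit additionally varies $A_V$ within the spectroscopic bounds, dereddening the photometry at each step.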
\subsection{J122811.70+440550.9}
\subsubsection{Spectra}
The spectra of J122811.70+440550.9 (Fig.~\ref{Fig4}) show very broad hydrogen lines H$\alpha$, H$\beta$, H$\gamma$ and H$\delta$, many emission lines of \ion{Fe}{ii} and [\ion{Fe}{ii}], as well as \ion{He}{i} lines. In addition, the spectrum contains the [\ion{O}{i}] $\lambda\lambda$6300, 6363 lines; however, it is difficult to determine whether these lines belong to the object or to the surrounding nebula. We do not note significant changes in the spectral lines between the spectra of 2015 and 2020 (Fig.~\ref{Fig4}). Based on the ratio of the hydrogen lines emitted by the nebula, we estimated the interstellar reddening as $A_V=0.3\pm0.2^m$.
\subsubsection{Photometry and spectral energy distribution}
The HST photometry did not reveal noticeable brightness variability of J122811.70+440550.9 (Table~\ref{Tab3}): the brightness of the star is constant within $\approx0.2^m$.
As in the case of the previous object, we fitted the HST data of 1998 with a blackbody model, with $A_V$ restricted within the uncertainties $0.3\pm0.2^m$ obtained from the nebular hydrogen lines. As a result, we obtained a best-fitting temperature $\textrm{T}_{\textrm{eff}}=17000\pm7000$\,K for $A_V\approx0.5^m$, and the bolometric magnitude and luminosity estimates $\textrm{M}_\textrm{{Bol}}=-10.00\pm2.00^m$ and $\log(\text{L}_\text{Bol}/\text{L}_{\odot})=5.92\pm0.80$.
The blackbody model alone gave poor agreement with the observed fluxes (black solid line in Fig.~\ref{Fig3}b), showing a strong discrepancy near the Balmer and Paschen series limits. This situation is similar to that of the B[e]-supergiant J004415.00 in the galaxy M31 studied by \cite{Sarkisyan2020}. Examining the SED of that object, the authors found a powerful contribution of free--free (f--f) and free--bound (f--b) radiation, consistent with the presence of an ionized circumstellar envelope typical of B[e]-supergiants \citep{Zickgraf1985, Zickgraf1986}. Following the methods described by \cite{Sarkisyan2020}, we approximated the observed SED by a blackbody model with extra spectral components accounting for both f--f and f--b radiation\footnote{When fitting with this more complicated model, in addition to the simultaneous data points of 1998, we utilized the WFC3/UVIS/F275W and ACS/HRC/F435W observations. This choice is explained by the desire to obtain a more detailed SED and thereby reach higher accuracy of the model parameters. Taking into account the low photometric variability of the object, this choice should not distort the observed SED.} using the \textsc{chianti} package \citep{Dere1997, Landi2013}. We consider the case of an isothermal pure-hydrogen plasma at a temperature of $\textrm{T}_e = 10000$\,K \citep{Lamers1998}. The reddening was restricted within the uncertainties as above. As a result, we obtained a best-fitting temperature $\textrm{T}_{\textrm{eff}}=20800\pm4500$\,K, an emission measure $EM = 1.47\times10^{39}\pm4.37\times10^{38}$ cm$^{-5}$, $A_V\approx0.5^m$, and the bolometric magnitude and luminosity estimates $\textrm{M}_\textrm{{Bol}}=-9.90\pm1.09^m$ and $\log(\text{L}_\text{Bol}/\text{L}_{\odot})=5.88\pm0.44$\footnote{Including only the blackbody component.}.
The result of the SED fitting with the composite model is shown in Fig.~\ref{Fig3}b in red: the dashed line shows the blackbody radiation, the dotted line designates the f--f and f--b emission, and the solid red line indicates the total model spectrum. The black solid line represents the result of the blackbody model alone. The IR data (WFC3/IR/F110W, F160W, 2019) and optical data (ACS/HRC/F330W, F550M, F658N, F814W, 2006; WFC3/UVIS/F336W, 2014) are also plotted, but not used in the fit.
\subsection{J122817.83+440630.8}
\subsubsection{Spectra}
The spectrum of the region containing J122817.83+440630.8 was obtained with SCORPIO-2 at the BTA on 2021 February 11, when the magnitude of the object was $\approx22^m$. During this observation the seeing was about 2.3\arcsec; the object was therefore too faint for these conditions, and we took only a spectrum of a nearby H\,II complex in order to measure the interstellar extinction. Using the ratio of the hydrogen lines, we obtained $A_V=0.4\pm0.2^m$.
\subsubsection{Photometry and spectral energy distribution}
The most dramatic changes in the brightness of this star occurred in the R band (Table~\ref{Tab3}). The source brightness increased by $\Delta R=2.15\pm0.13^m$ from the 1995 (HST) to the 2001 (Bok) observations, and returned to its previous state by 2020 (2.5-m CMO). The light curves in different filters are shown in Fig.~\ref{Fig5}.
The shape of the object's SED also changed dramatically (compare the data of 1998 and 2005 in Fig.~\ref{Fig3}c). To measure the photosphere temperatures, we carried out a joint approximation of these SEDs with a blackbody model, taking into account that the interstellar extinction and the bolometric luminosity of the two data sets must be the same if the observed variability is of the S Dor type. As before, $A_V$ was limited to the range measured from the spectroscopy. As a result, we obtained the temperatures $\textrm{T}_{\textrm{eff}}=19000\pm1200$\,K (1998) and $\textrm{T}_{\textrm{eff}}=9000\pm600$\,K (2005) for an interstellar extinction $A_V\approx0.6^m$, which gives estimates of the bolometric magnitude $\textrm{M}_\textrm{{Bol}}=-8.30\pm0.38^m$ and the corresponding bolometric luminosity $\log(\text{L}_\text{Bol}/\text{L}_{\odot})=5.24\pm0.15$.
\subsection{J122809.72+440514.8}
\label{Spec_J122809}
\subsubsection{Spectra}
The spectra of J122809.72+440514.8 are shown in Fig.~\ref{Fig6}.
The spectrum obtained in 2015 contains emission lines of \ion{He}{i} $\lambda\lambda$4472, 4922 and \ion{Fe}{ii} $\lambda\lambda$4400--4700, $\lambda\lambda$5100--5400 with P Cyg profiles, which became less noticeable in 2017 and 2018, and then completely disappeared by 2020.
The wings of the hydrogen lines H$\beta$ and H$\gamma$ broadened noticeably from 2015 to 2020, and the [\ion{Fe}{ii}] $\lambda$5157 and \ion{Fe}{ii} $\lambda$5169 lines became brighter. At the same time, there is a notable weakening of the other \ion{Fe}{ii} and [\ion{Fe}{ii}] emission lines at $\lambda\lambda$4500--4700 and $\lambda\lambda$5200--5400. In addition, the spectra show a few \ion{Cr}{ii} and \ion{Ti}{ii} lines. The emission lines \ion{He}{i} $\lambda$5876, $\lambda$6678 are also seen, but their FWHM corresponds to the spectral resolution, so these lines are probably emitted by the surrounding nebula.
The study of the hydrogen lines of the surrounding nebula allowed us to estimate the interstellar extinction $A_V=0.8\pm0.2^m$.
\subsubsection{Photometry and spectral energy distribution}\label{sed_J122809.72+440514.8}
The brightness variations of J122809.72+440514.8 are $\Delta U=0.28\pm0.24^m$, $\Delta V=0.48\pm0.14^m$, $\Delta I = 0.69\pm0.13^m$ from 1997 (HST) to 2020 (Zeiss-1000, 2.5-m CMO). The light curve is shown in Fig.~\ref{Fig7}.
Monitoring of the star with ground-based telescopes, carried out over the past three years, has revealed the strongest changes in the star's colour and brightness. The earlier (HST) data show weaker variability and, at the same time, demonstrate (up to 2014 inclusive) a clear downturn in the range 3000--4000\,\AA\ (Fig.~\ref{Fig8}), which we identify as a quite deep Balmer jump. The observations of 2020 over a wide range of wavelengths (from the U to the I band) show a significantly hotter SED without a clear Balmer jump.
In the case of J122809.72+440514.8, a joint approximation of the two extreme states of the star (we used the data of 1997 from the HST and of 2020 from the 2.5-m CMO) with a blackbody model and tied $A_V$ and $\text{L}_\text{Bol}$ values for the two data sets, as carried out for the previous object, did not give satisfactory results. The reason could be variations either in the absorption value or in the bolometric luminosity. A change of $A_V$ was observed in $\eta$~Car, associated with condensation of dust in the shell ejected during the Great Eruption \citep{DavidsonHumphreys1997}. However, we did not find in the literature any examples of absorption changes in the environment of LBVs undergoing the S Dor cycle.
Bolometric luminosity changes, nevertheless, have been observed in several confirmed LBVs during their S Dor-type variability cycles. An example is AG Car, whose bolometric luminosity decreased by a factor of 1.5 when the source was at its brightness maximum in the V band \citep{Groh09}. A similar behaviour of S Dor was noted by \cite{Lamers95}, who also showed that the bolometric luminosity of LBVs can vary within 0.2 dex. This may occur due to the loss of energy to the expansion of the envelope during transitions of the star from brightness minimum to maximum.
In accordance with the above, we repeated the fit with the bolometric luminosity left free, while the extinctions of the two SEDs remained tied and restricted within the uncertainties. The best-fitting models gave $\textrm{T}_{\textrm{eff}}=7600\pm300$\,K, $\log(\text{L}_\text{Bol}/\text{L}_{\odot})=6.20\pm0.11$ ($\textrm{M}_{\textrm{Bol}}=-10.70\pm0.27^m$) for the `cold' state (1997), and $\textrm{T}_{\textrm{eff}}=13500\pm4300$\,K, $\log(\text{L}_\text{Bol}/\text{L}_{\odot})=6.44\pm0.64$ ($\textrm{M}_{\textrm{Bol}}=-11.30\pm1.60^m$) for the `hot' state (2020), with $A_V=0.6^m$. Both SEDs, together with the corresponding models, are shown in Fig.~\ref{Fig3}d.
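The conversion between $\textrm{M}_{\textrm{Bol}}$ and $\log(\text{L}_\text{Bol}/\text{L}_{\odot})$ used throughout is the standard magnitude-luminosity relation; a one-line sketch (the IAU zero point $M_{\rm Bol,\odot}=4.74$ is an assumption of this sketch, and the paper's own rounding conventions may differ slightly):

```python
M_BOL_SUN = 4.74  # solar absolute bolometric magnitude (IAU 2015 zero point, assumed)

def log_lum(m_bol):
    """log10(L/L_sun) from an absolute bolometric magnitude,
    via M_Bol = M_Bol,sun - 2.5 log10(L/L_sun)."""
    return (M_BOL_SUN - m_bol) / 2.5

# sanity check: the Sun itself gives log L = 0
print(log_lum(M_BOL_SUN))  # 0.0
```

With this zero point, $\textrm{M}_{\textrm{Bol}}=-10.70^m$ corresponds to $\log(\text{L}/\text{L}_{\odot})\approx6.18$, consistent within rounding with the values quoted above.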
The difference between the two obtained values of $\text{L}_\text{Bol}$ lies within the 0.2~dex limit discussed above. However, it is still possible that this variation of $\text{L}_\text{Bol}$ is an artefact associated with underestimation of the photosphere temperature in the cold state due to the presence of the Balmer jump. To test this assumption, we used the CMFGEN non-LTE models \citep{Hillier98}, which are capable of giving more accurate estimates of the fundamental stellar parameters under conditions of a strong gas outflow in the form of a wind.
\subsubsection{CMFGEN model}
The photometric data taken in 2014 in the F275W and F336W filters of HST/WFC3/UVIS are in good agreement with the UV-range data of 1997, which indicates the absence of a significant difference in the states of the star between these observation dates. The spectral features in the first spectrum of the star, obtained in 2015, are consistent with relatively low temperatures of the outflowing gas. Therefore, we assumed that this spectrum also corresponds to the cold state of the star, observed at least in 1997 and 2014, and used it for modelling with the \textsc{cmfgen} code. Besides the spectrum, we used the photometric data of these years to estimate the star's luminosity.
As a first approximation of the J122809.72+440514.8 spectrum, we utilised models from our previously calculated CMFGEN grids of extended atmosphere models (Kostenkov et al. 2021, in prep.). A detailed description of the modelling method and algorithm is given in \citet{Kostenkov20a, Kostenkov20b}. The velocity distribution in the stellar wind was assumed to obey a simple velocity law with $\beta=1$ \citep{Lamers1996}. This value of $\beta$ provides a very rapid increase of the gas velocity near the stellar surface, which leads to a compact photosphere, consistent with the absorption Balmer jump observed in the star's SED. The absence of forbidden lines formed in distant wind regions (for example, [\ion{N}{ii}] $\lambda5755$) does not allow us to accurately estimate the terminal wind velocity. We therefore adopted a value of 300 km s$^{-1}$, which made it possible to reproduce the P Cyg profiles of the iron lines quite accurately for the given velocity law. Since we cannot observe the electron-scattering wings of the hydrogen lines due to the low signal-to-noise ratio, we assume a homogeneous wind structure (filling factor $f=1$). The metal abundances are assumed to be equal to the metallicity of NGC\,4449, $Z=0.5Z_{\odot}$ \citep{Annibali2017}, with the exception of the nitrogen and carbon abundances, which were changed to be consistent with the values observed in well-studied LBVs (Table~\ref{Tab5}). The hydrogen abundance was determined iteratively based on the ratio [\ion{O}{iii}]$\lambda$5007/H$\beta$: we subtracted from H$\beta$ its stellar component obtained from the model of the current iteration, and sought to make the ratio of [\ion{O}{iii}]$\lambda$5007 to the `H$\beta$ remnant' the same as observed in the pure nebular spectrum extracted from a nearby region.
\begin{table}
\centering
\caption{The main model parameters and chemical abundances ($X$) for J122809.72+440514.8.}
\begin{tabular}{ | p{130pt} | r | }
\hline
$L_{*}$, $L_{\odot}$ & 2.58 $\times$ $10^6$ \\
$\dot{M}$, $M_{\odot}$ yr$^{-1}$ & 5.2 $\times$ $10^{-3}$ \\
$R_{2/3}$ $^{*}$, $R_{\odot}$ & 620 \\
$R_{*}$ $^{*}$, $R_{\odot}$ & 410 \\
$T_{\text{eff}}$ $^{*}$, K & 9300 \\
$T_{*}$ $^{*}$, K & 11360 \\
$\beta$ & 1.0\\
$V_{\infty}$, km s$^{-1}$ & 300\\
$f$ & 1 \\
H, \% & 20\\
$X_{\text{C}}/X_{\odot}$ & 0.2\\
$X_{\text{N}}/X_{\odot}$ & 5.4\\
$X_{\text{Si}}/X_{\odot}$ & 0.5\\
$X_{\text{Fe}}/X_{\odot}$ & 0.5\\
\hline
\end{tabular}
\label{Tab5}
\textit{Notes.} $^{*}$ $R_{2/3}$ and $T_{\text{eff}}$ are the radius and temperature at $\tau = 2/3$; $T_{*}$ is the temperature at the hydrostatic radius $R_{*}$ ($\tau \gtrsim 20$).
\end{table}
A relatively good agreement between the observed spectra and the model was reached at $\dot{M} = 5.2\times10^{-3}\,M_{\odot}\,\text{yr}^{-1}$ and a temperature $T_{*} = 11360$\,K at the hydrostatic radius of the star, $R_{*} \approx 410\,R_{\odot}$ (Fig.~\ref{Fig9}). The main model parameters are presented in Table~\ref{Tab5}. The extremely powerful outflow appearing in this model increases opacities at large distances from $R_{*}$ and significantly decreases the temperature at the photospheric radius, $T_{\text{eff}} = 9300$\,K. However, this estimate of the effective temperature still significantly exceeds the value obtained from the black-body approximation of the SED ($T_{\text{eff}}=7600\pm300$\,K), which indicates that the simple black-body model underestimates the temperature.
The accuracy of the mass loss rate estimate is limited by the uncertainty of the nebular contribution to the hydrogen lines observed in the object's spectrum. This uncertainty could be reduced by further observations with a higher spectral resolution. The accuracy of the temperature estimates is determined by several factors. The first is the absence of \ion{Si}{ii} lines in the spectrum, which are quite sensitive to changes in the ionization state of the wind material. Therefore, the upper limit of the photospheric temperature in the model was set to $\sim11000\,$K. The lower limit for the effective temperature ($\sim 9000\,$K) was estimated based on the strength of the absorption components of the H$\beta$ and \ion{Fe}{ii} lines. A decrease in temperature leads to an increase in the amount of neutral hydrogen, and hence to a stronger absorption component of the H$\beta$ line. Another consequence of a decrease in temperature below $\approx9000\,$K is a strong weakening of the \ion{Fe}{ii} lines. In principle, an increase in the mass loss rate can compensate for the weakening of the ionized iron lines; however, the resulting increase in the wind density also strengthens the absorption components of these lines. Since strong absorption lines are not observed in the object's spectrum, we adopt $9000\,$K as the lower limit on the photospheric temperature.
The bolometric luminosity was determined by fitting the model spectrum, corrected for interstellar extinction, to the SED observed in 1997.
In this fit, the extinction $A_V$ was a free parameter. The best agreement between the observed energy distribution and the model was obtained at $A_V=1.05\pm0.07^m$, which is consistent with the value obtained from observations but much higher than the estimate based on the black-body fit ($0.6^m$). The resulting luminosity estimate is $\log(\text{L}_\text{Bol}/\text{L}_{\odot})=6.41\pm0.03$. The model and observed SEDs are shown in Fig.~\ref{Fig8}, where the observational data are plotted in black and the model fluxes in grey. The figure also shows the observed fluxes from the ACS/WFC data (2005) and the WFC3/UVIS data (2014). The agreement between all the observed and calculated fluxes confirms the initial assumption that the star was in approximately the same state in 1997, 2005, and 2014--2015.
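The fit with $A_V$ as a free parameter can be illustrated schematically as a one-dimensional $\chi^2$ grid search (a hedged sketch only: the crude $A(\lambda) \propto 1/\lambda$ extinction law, the grid range, and all numbers here are our illustrative assumptions, not the extinction curve or fitting code actually used):

```python
LAMBDA_V = 5500.0  # reference V-band wavelength, Angstroms

def reddened(model_flux, lam, a_v):
    """Dim a model flux at wavelength lam (A) using an assumed
    A(lambda) = A_V * (5500 / lambda) toy extinction law."""
    a_lam = a_v * LAMBDA_V / lam
    return model_flux * 10.0 ** (-0.4 * a_lam)

def fit_av(lams, obs, model, sig, grid=None):
    """Grid-search the A_V that minimizes chi^2 between the reddened
    model fluxes and the observed SED points."""
    grid = grid or [0.01 * i for i in range(301)]  # A_V from 0 to 3 mag
    return min(grid, key=lambda a_v: sum(
        ((reddened(m, l, a_v) - o) / s) ** 2
        for l, o, m, s in zip(lams, obs, model, sig)))
```

A usage pattern would be to pass the photometric wavelengths, observed fluxes, model fluxes, and flux errors; the returned $A_V$ is the grid point with the lowest $\chi^2$.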
To summarize, we carried out modelling of the optical spectrum of the cold state of J122809.72+440514.8. However, the poor quality of the available data, their contamination by nebular lines, and the absence of IR and UV spectra did not allow us to properly constrain such important parameters as $\beta$, the terminal velocity, and the hydrogen abundance, which in turn reduced the accuracy of the mass loss rate and photospheric temperature estimates. Nevertheless, our model reproduces many of the observed features both in the spectrum and in the SED.
\section{Discussion}
\subsection{Spectral classification}
As noted in Sec.~\ref{specJ122810}, the spectrum of J122810.94+440540.6 contains \ion{Fe}{ii} and [\ion{Fe}{ii}] lines, but no [\ion{O}{i}] $\lambda\lambda$ 6300,6363 or [\ion{Ca}{ii}] $\lambda\lambda$ 7291,7323 lines, which are indicators of circumstellar gas \citep{Aret12} and can be observed in the spectra of B[e]-supergiants and warm hypergiants \citep{Humphreys13, Humphreys14}. At the same time, \ion{Fe}{ii} and [\ion{Fe}{ii}] lines can be observed in stars of various types, including LBVs. The SED of this object shows a noticeable IR excess in the 1--2 $\mu$m region (Fig.~\ref{Fig3}a). However, the excess is not large (less than a factor of two above the black-body model in the F160W band, roughly corresponding to the H band); therefore, it is not possible to unambiguously establish its source without mid- and far-IR data: the observed excess could originate either from free--free wind emission or from a warm circumstellar dust envelope. Moreover, J122810.94+440540.6 did not show any noticeable brightness variability (less than 0.3$^m$ in all the studied bands). Considering these facts, we cannot classify the star with certainty, but we suggest that J122810.94+440540.6 is either an LBV or a B[e]-supergiant.
According to its spectral features, the star J122811.70+440550.9 is similar to J122810.94+440540.6; however, its spectrum contains the [\ion{O}{i}] $\lambda\lambda$ 6300,6363 lines but no obvious [\ion{Ca}{ii}] lines. Lines of neutral oxygen can be excited both in the envelope of the star itself and in the gas of the surrounding nebula. The SED of this object is similar to that of the B[e]-supergiant J004415.00 \citep{Sarkisyan2020}. Applying the same model as was proposed for that B[e]-supergiant, which takes into account f-f and f-b radiation, to the SED of J122811.70+440550.9, we obtained good agreement between the model and the observed data, demonstrating the presence of an ionized envelope around the star. Thus, J122811.70+440550.9 shows characteristics typical of B[e]-supergiants. However, the fragmentary nature of the available data does not allow us to draw definitive conclusions.
The star J122817.83+440630.8 demonstrated significant variability in brightness ($\Delta R=2.15\pm0.13^m$) and SED shape. The lack of spectral data does not allow us to draw an unambiguous conclusion about the object's type. Nevertheless, we note that such brightness changes are characteristic only of LBVs, which are the only highly variable objects among the entire set of high-luminosity stars \citep{Humphreys17}. Various transient X-ray binaries are another type of object with a large amplitude of optical variability and a hot photosphere. However, we examined Chandra images and found no X-ray sources at the position of J122817.83+440630.8, so this interpretation is unlikely. Thus, this object can probably be attributed to the LBV stars.
The spectrum of J122809.72+440514.8 contains many lines characteristic of LBVs. This star also demonstrated spectral and photometric variability, with the largest amplitude in the red range, $\Delta I=0.69\pm0.13^m$. The SED of this object shows no clear IR excess, indicating the absence of a hot gas and dust envelope around the star. Based on the observed features, J122809.72+440514.8 can be classified as an LBV star.
The spectrum of J122809.72+440514.8 obtained in 2015 is similar to the spectra of many confirmed LBVs in the cold state (e.g. HR\,Car, \citealt{Szeifert2003}; AG\,Car, \citealt{Stahl2001}; Var\,C, \citealt{Humphreys2014b}), but it is especially similar to the 2001 spectrum of $\eta$~Car (Fig.~7 in \citealt{Groh2012}). Those authors performed a quantitative analysis of the $\eta$~Car spectrum and modelled it with the CMFGEN code.
The modelling of the J122809.72+440514.8 spectrum yielded an extremely high mass loss rate, $\dot{M}=5.2\times10^{-3}\,M_{\odot}\,\text{yr}^{-1}$, which is higher than but still comparable to that of $\eta$~Car ($\dot{M}=2.4\times10^{-3}\,M_{\odot}\,\text{yr}^{-1}$, \citealt{Groh2012}).
Nevertheless, we note that it is difficult to assess the reliability of the obtained value because of a degeneracy between the mass loss rate and the hydrogen-to-helium abundance ratio at low temperatures: an increase in the hydrogen content can be compensated for by a corresponding increase in $\dot{M}$. The same difficulty arose when modelling the $\eta$~Car spectrum \citep{Hillier2001}.
Moreover, the use of other spectral lines cannot remove the degeneracy.
For example, a decrease in $\dot{M}$ would decrease the model intensities of the \ion{Fe}{ii} lines, but this decrease can be compensated for by an increase in the iron abundance, which is not known precisely due to the uncertainty in the metallicity of the star. Nevertheless, despite these difficulties, we can assert that both stars have very powerful winds with high mass loss rates.
$\eta$~Car has a significantly larger photosphere ($\approx 860\,R_{\odot}$) than J122809.72+440514.8 ($\approx 620\,R_{\odot}$) at almost the same temperature and with a lower mass loss rate. We attribute this to two factors: first, the significantly different adopted clumping values, and second, the use in \cite{Groh2012} of a more complex velocity law, derived from spectral data available over a very wide wavelength range (far-UV to IR). In the case of J122809.72+440514.8, there were not enough spectral lines in the optical range, so we applied the simplest $\beta$-law. The less pronounced Balmer jump indicates a more extended photosphere of $\eta$~Car compared to J122809.72+440514.8.
\subsection{Age of the environment stars}
All the stars we study are located near stellar associations in which they were probably formed. To estimate the age of the stellar environment, we performed PSF photometry and constructed colour--magnitude diagrams (CMDs). For J122810.94+440540.6, J122817.83+440630.8 and J122809.72+440514.8, we selected two regions each: a small one, corresponding to the size of the nearest stellar group, and a large one, covering the entire star-forming region near the star (Fig.~\ref{Fig10}). In the case of J122811.70+440550.9, we selected two neighbouring large stellar groups. Different regions were chosen because of the uncertainty in where the stars formed.
Photometry for the selected regions was performed on archival HST/ACS/WFC data obtained in the F550M (2005/11/18) and F814W (2005/11/17) filters using the \textsc{dolphot} package \citep{Dolphin2016}. Fig.~\ref{Fig11} shows the colour--magnitude diagrams for the studied associations. The theoretical isochrones\footnote{Obtained from http://stev.oapd.inaf.it/cgi-bin/cmd} from \citet{Marigo2017} for a metallicity of $Z = 0.5Z_{\odot}$ and the positions of the studied objects are also plotted on the diagrams. When calculating the tracks, the canonical two-part power law corrected for unresolved binaries was chosen as the initial mass function \citep{Kroupa2001, Kroupa2002}. The isochrones were reddened, with the extinction varied from the Galactic value to the maximum value measured from our spectral data. The optimal reddening values were determined individually for each region by comparing the reddened isochrones with the observed main sequence of the selected regions. As a result, we obtained reddening estimates of $A_V \approx 0.5^m$ for the stellar environment of J122810.94+440540.6 and J122817.83+440630.8, consistent with the average internal reddening in NGC\,4449 \citep{Hill1998}, and $A_V \approx 0.1^m$ for the stellar groups near J122811.70+440550.9 and J122809.72+440514.8, which is below the average internal reddening.
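Reddening an isochrone on a CMD amounts to shifting each point in magnitude and colour by the band-dependent extinction. A minimal sketch (illustrative only; the extinction ratios $A(\text{filter})/A_V$ below are rough assumed values approximating F550M~$\sim V$ and F814W~$\sim I$, not the coefficients used in the paper):

```python
# Assumed extinction ratios A(filter)/A_V for the ACS bands
# (rough Cardelli-like values; our assumption, not from the paper).
RATIO = {"F550M": 1.00, "F814W": 0.60}

def redden_isochrone(points, a_v):
    """Shift isochrone (colour, magnitude) points for extinction A_V.
    Magnitude is in F550M; colour is F550M - F814W, so the colour
    shift is the difference of the two band extinctions."""
    d_550 = RATIO["F550M"] * a_v
    d_814 = RATIO["F814W"] * a_v
    return [(c + (d_550 - d_814), m + d_550) for c, m in points]
```

Comparing the shifted isochrone with the observed main sequence for a grid of $A_V$ values then selects the best reddening, as described above.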
The positions of stars in the selected regions on the CMD correspond either to continuous star formation within the studied regions for at least the last 100 Myr, or to several episodes of star formation during the same period. At the same time, the age of the youngest stars is approximately 5--10 Myr. We suppose that they (like the objects under study) were formed in one of the last bursts of star formation.
Moreover, we did not find a difference between the positions of the stars from the small and large regions on the CMD. More accurate age estimates require information on many parameters, such as stellar rotation speeds, local metallicity (which may vary even within one region), the initial mass function, etc.
Other factors that can lead to either underestimation or overestimation of the age of the surrounding population are related to the relatively small number of stars in the studied associations. The upper part of the diagram is thus poorly populated, so we may overestimate the age of the last starburst, as in the case of J122809.72+440514.8, which is located well above the brightest stars in the diagram. At the same time, there may be unresolved groups of stars among the bright stars, which can lead to underestimation of the age of the youngest objects.
\subsection{Masses and luminosities}
To estimate the masses of the studied objects, we constructed a temperature--luminosity diagram (Fig.~\ref{Fig12}) and compared the positions of the objects with evolutionary tracks of massive stars of different initial masses \citep{Tang14}. We assumed a metallicity of $Z\approx0.5Z_{\odot}$ \citep{Annibali2017} when choosing the evolutionary tracks. Estimates of the photospheric temperatures and luminosities of our stars (for J122809.72+440514.8, in its hot state) were obtained from the SED fitting. The temperature and luminosity of the cold state of J122809.72+440514.8 were taken from the results of the CMFGEN modelling.
The object J122817.83+440630.8 has an initial mass of about 20 $M_\odot$. Stars with such masses can pass through the red supergiant stage during their evolution \citep{Humphreys16}. J122809.72+440514.8 has the largest initial mass (more than 100 $M_\odot$). The mass loss rates characteristic of such massive stars do not allow them to become red supergiants \citep{Humphreys1979}; instead, they evolve into WR stars. The masses of J122810.94+440540.6 and J122811.70+440550.9 lie between these extremes. Thus, the masses of all four stars are quite typical of LBV stars or massive supergiants.
\section{Conclusions}
In this work, we have studied four massive (from $\sim$20 to $\gtrsim$100\,$M_\odot$) stars in the NGC\,4449 galaxy. Two stars have shown strong photometric variability over several years, which allows us to classify them as LBVs. Modelling the cold state ($T_{\text{eff}}=9300\,$K) of the brightest one with the CMFGEN code revealed a very high mass loss rate, $\dot{M} = 5.2\times10^{-3}\,M_{\odot}\,\text{yr}^{-1}$. In this state, the star is a close analogue of $\eta$~Car.
The classification of the remaining two stars is complicated by their small photometric variability and the low signal-to-noise ratio of their spectra, which do not allow us to reliably confirm or exclude intrinsic emission in such characteristic lines as [\ion{O}{i}] $\lambda$6300 and [\ion{Ca}{ii}]. A significant contribution of both f-f and f-b radiation to the SED of one of them indicates the presence of an ionized envelope, characteristic of B[e]-supergiants. However, a more precise classification of both stars requires further observations.
\section*{Acknowledgements}
This research was supported by the Russian Foundation for Basic Research grant 19-52-18007. Observations with the SAO RAS telescopes are supported by the Ministry of Science and Higher Education of the Russian Federation (including agreement No.\,05.619.21.0016, project ID RFMEFI61919X0016). The renovation of telescope equipment is currently provided within the national project ``Science''. The spectral modelling was performed as part of the government contract of the SAO RAS approved by the Ministry of Science and Higher Education of the Russian Federation. The research made use of equipment purchased with funds from the Program of Development of M.\,V.\,Lomonosov Moscow State University.
\section*{Data Availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras} \bibliography{bibtexbase.bib}
\bsp \label{lastpage}
Title: Stellar Populations of Lyman-alpha Emitting Galaxies in the HETDEX Survey I: An Analysis of LAEs in the GOODS-N Field
Abstract: We present the results of a stellar-population analysis of Lyman-alpha
emitting galaxies (LAEs) in GOODS-N at 1.9 < z < 3.5 spectroscopically
identified by the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX). We
provide a method for connecting emission-line detections from the blind
spectroscopic survey to imaging counterparts, a crucial tool needed as HETDEX
builds a massive database of ~1 million Lyman-alpha detections. Using
photometric data spanning as many as 11 filters covering 0.4-4.5 microns from
the Hubble and Spitzer Space Telescopes, we study the objects' global
properties and explore which properties impact the strength of Lyman-alpha
emission. We measure a median stellar mass of 0.8 (^+2.9_-0.5) x 10^9 Msol and
conclude that the physical properties of HETDEX spectroscopically-selected LAEs
are comparable to LAEs selected by previous deep narrow band studies. We find
that stellar mass and star formation rate correlate strongly with the
Lyman-alpha equivalent width. We then use a known sample of z>7 LAEs to perform
a proto-study of predicting Lyman-alpha emission from galaxies in the Epoch of
Reionization, finding agreement at the 1-sigma level between prediction and
observation for the majority of strong emitters.
PDF: https://export.arxiv.org/pdf/2208.01660
\title{Stellar Populations of Ly$\alpha$ Emitting Galaxies in the \hd\ Survey I:\\ An Analysis of \laes\ in the \gn\ Field}
\author[0000-0002-3912-9368]{Adam P. McCarron}
\affiliation{Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\email{apm.astro@utexas.edu}
\author[0000-0001-8519-1130]{Steven L. Finkelstein}
\affiliation{Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\author[0000-0003-2332-5505]{Oscar A. Chavez Ortiz}
\affiliation{Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\author[0000-0002-8925-9769]{Dustin Davis}
\affiliation{Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\author[0000-0002-2307-0146]{Erin Mentuch Cooper}
\affiliation{Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\affiliation{McDonald Observatory, University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\author[0000-0003-1187-4240]{Intae Jung}
\affil{Department of Physics, The Catholic University of America, Washington, DC 20064, USA }
\affil{Astrophysics Science Division, Goddard Space Flight Center, Greenbelt, MD 20771, USA}
\affil{Center for Research and Exploration in Space Science and Technology, NASA/GSFC, Greenbelt, MD 20771}
\author[0000-0002-7707-9437]{Delaney R. White}
\affiliation{Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\author[0000-0002-9393-6507]{Gene C. K. Leung}
\affiliation{Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\author[0000-0002-8433-8185]{Karl Gebhardt}
\affiliation{Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\author[0000-0002-6788-6315]{Viviana Acquaviva}
\affiliation{Physics Department, NYC College of Technology, 300 Jay Street, Brooklyn, NY 11201, USA}
\affiliation{Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA}
\author[0000-0003-4381-5245]{William P. Bowman}
\affil{Department of Astronomy \& Astrophysics,
The Pennsylvania State University, University Park, PA 16802, USA}
\affil{Institute for Gravitation and the Cosmos, The Pennsylvania
State University, University Park, PA 16802, USA}
\author[0000-0002-1328-0211]{Robin Ciardullo}
\affil{Department of Astronomy \& Astrophysics,
The Pennsylvania State University, University Park, PA 16802, USA}
\affil{Institute for Gravitation and the Cosmos, The Pennsylvania
State University, University Park, PA 16802, USA}
\author[0000-0003-1530-8713]{Eric Gawiser}
\affiliation{Department of Physics and Astronomy, Rutgers, The State University, Piscataway, NJ 08854, USA}
\author{Caryl Gronwall}
\affil{Department of Astronomy \& Astrophysics,
The Pennsylvania State University, University Park, PA 16802, USA}
\affil{Institute for Gravitation and the Cosmos, The Pennsylvania
State University, University Park, PA 16802, USA}
\author[0000-0001-6717-7685]{Gary J. Hill}
\affiliation{McDonald Observatory, University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\affiliation{Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\author{Wolfram Kollatschny}
\affiliation{Institut f\"ur Astrophysik, Universit\"at G\"ottingen, Friedrich-Hund Platz 1, D-37077 G\"ottingen, Germany}
\author[0000-0003-1838-8528]{Martin Landriau}
\affiliation{Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720, USA}
\author[0000-0001-5561-2010]{Chenxu Liu}
\affiliation{Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA}
\author[0000-0003-4237-2470]{Daniel N. Mock}
\affiliation{Department of Physics, Florida State University, Tallahassee, Florida 32306}
\author[0000-0003-1198-831X]{Ariel G. S\'anchez}
\affiliation{Max-Planck-Institut f\"ur extraterrestrische Physik,
Postfach 1312, Giessenbachstr., 85748 Garching, Germany}
\section{Introduction}
\label{sec:intro}
Lyman-alpha emitting galaxies (hereafter \laes) have fascinated astronomers for decades, from when \citet{partridge67} first predicted that primitive galaxies in formation could emit a detectable \lya\ line, through their discovery by \citet{cowie98} and \citet{rhoads00}. These objects exhibit strong emission in the \lya\ line, corresponding to the resonant $n \! = \! 2$ to $n \! = \! 1$ transition in hydrogen. These photons face high optical depths from neutral hydrogen as they escape the galaxies in which they are generated, and dust grains along their paths can absorb them. To date, despite enormous effort (see \citealt{ouchi20} for a review), the community has not formed a strong consensus on exactly how \lya\ radiation escapes its host galaxy, and no reliable model exists to predict the \lya\ luminosity or equivalent width, $\ewlya$, of a galaxy given its global physical properties, such as stellar mass, metallicity, age, star formation rate, and dust extinction.
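The rest-frame equivalent width referred to here follows the standard relation $\mathrm{EW}_0 = (F_{\mathrm{line}}/f_{\lambda,\mathrm{cont}})/(1+z)$, relating an observed integrated line flux to the observed continuum flux density near the line. A minimal helper (our own sketch, not code from the paper):

```python
def ew_lya_rest(line_flux, cont_flam, z):
    """Rest-frame Lyman-alpha equivalent width in Angstroms.

    line_flux -- observed integrated line flux (erg/s/cm^2)
    cont_flam -- observed continuum flux density near the line (erg/s/cm^2/A)
    z         -- redshift; the (1+z) factor converts observed EW to rest frame
    """
    ew_obs = line_flux / cont_flam
    return ew_obs / (1.0 + z)
```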
Part of the problem arises from discrepant conclusions drawn from studying LAEs identified using different selection techniques. Locally ($z \ll 1$), the ultraviolet (UV) flux measured in wide or narrow-band filters often defines LAE samples, biasing studies to brighter, higher mass systems than those found spectroscopically \citep{hayes14}. Observations in the nearby universe paint \laes\ as low mass galaxies with young stellar ages as determined from spectral energy distribution (SED) fitting, and many studies concur on trends showing an increase in \lya\ luminosity with decreasing dust and metals \citep{hayes15}. Nonetheless, many galaxies show stronger \lya\ emission than models would predict based on dust extinction (e.g., \citealt{martin15}, \citealt{atek14}, \citealt{scarlata09}, \citealt{fink09}), and a satisfactory explanation of this \lya\ enhancement does not currently exist.
With narrow-band selected LAEs at higher redshift, discrepant results still persist. \citet{fink09} found \laes\ at $z \sim 4.5$ represent a diverse population in terms of stellar age, mass, and dust extinction. \citet{keely15} modeled the SEDs of IRAC-detected LAEs at $z \sim 5$ and found a third have old stellar populations, contrasting with the young populations found in the local universe, and \citet{guaita11} observed similar heterogeneous populations in a narrow-band selected sample at $z \simeq 2.1$. Moreover, \citet{gawiser07} found NB-selected LAEs at $z=3.1$ to generally be low mass, dust-free objects, but their model allowed for both young and more evolved stellar populations, and \citet{acquaviva12} found LAEs at $z=3.1$ to be older than those at $z=2.1$. \citet{kornei10} compiled a UV continuum selected sample of $z\sim3$ galaxies, finding those with strong \lya\ emission had older stellar populations with lower star formation rates and less dust. Recently, \citet{santos20} used SED fitting of nearly 4000 LAEs in the COSMOS field at $2<z<6$ to find that LAEs were younger and/or more dust-poor than other UV-selected objects based on their UV slopes.
Studies of LAE samples compiled using detection of the Ly$\alpha$ emission line itself in the high-redshift universe confound consensus as well. \citet{hagen16} used the Hobby Eberly Telescope Dark Energy Experiment (HETDEX) pilot survey (\citealt{adams11}, \citealt{blanc11}) to compare properties of LAEs at $z \sim 2$ with optical emission line-selected galaxies (oELGs) and found no significant differences between the populations. Remarkably, even the UV-slope did not differ in the two samples, implying either that diffuse dust in the interstellar medium (ISM) did not modulate \lya\ emission or that oELGs strongly emit \lya. Recently, spectroscopic surveys have also yielded confusing results about LAEs at $z>2$. Using data from the VANDELS survey, \citet{marchi19} suggested LAEs have low mass and low dust extinction, but found no correlation with star formation rate. From the VIMOS Ultra-Deep Survey, \citet{hathi16} concurred with LAEs having lower mass and lower dust extinction, but they found that the objects have lower SFRs than non-LAEs. Approaching the problem from the other direction, \citet{oyarzun17} found from studying the spectra of stellar mass selected galaxies at $3<z<4.6$ that a negative correlation existed between \lya\ equivalent width and both stellar mass and star formation rate. A review of the field's current knowledge of high-redshift \lya\ emission can be found in \citet{ouchi20}.
A deeper understanding of what makes \laes\ unique from other star forming galaxies (SFGs) tantalizes astronomers because of the profound implications for leveraging \laes\ as sensitive probes of reionization at $z \gtrsim 6$. Whether the Universe re-ionized rapidly at late times (e.g. \citealt{robertson15}) or gradually beginning very early in its history (e.g. \citealt{fink19}) can determine if massive, rare galaxies or low-mass, ubiquitous objects emitted the needed ionizing photons. Answering such a fundamental cosmological question hinges on our ability to detect neutral hydrogen in the Universe's infancy. Crucially, the attenuation of \lya\ photons can probe the presence of neutral hydrogen in the intergalactic medium (IGM) \citep[e.g,][]{miralda-escude98, malhotra04, dijkstra14}, but the photons also undergo complicated resonant scattering within the galaxy, complicating our understanding of how much of the emission exits the ISM and circumgalactic medium (CGM) and enters the IGM in the first place. Recent attempts to use Ly$\alpha$ as a reionization probe have struggled to account for the intrinsic effects of host galaxy properties on the Ly$\alpha$ luminosity before the radiation encounters the IGM, leaving an unknown systematic uncertainty present in their results. The most detailed spectroscopic studies of post-reionization LAEs point to the covering fraction of optically thick neutral hydrogen (e.g. \citealt{reddy21}) as the key predictor of \lya\ escape, but such observations remain expensive and time intensive. Finding correlations between \lya\ emission and global properties such as mass and star formation activity, which photometry can reliably measure even at very high redshifts, could be a path forward to predicting galaxies' intrinsic \lya\ output.
Small LAE sample sizes ($<20$) were typical a decade ago, and although recently large samples with $>$1000 objects have been amassed using narrow-band surveys (e.g. \citealt{sobral18}, \citealt{ono2021}), spectroscopically confirmed samples remain small. This has statistically hindered the efficacy of studies of global property correlations with Ly$\alpha$ emission. The HETDEX project (\citealt{hill08}, \citealt{hill21}, \citealt{gebhardt21}) is in the process of discovering a transformative sample of LAEs, clearing the way for the community to obtain a better understanding of this intriguing population. The un-targeted (targets not pre-selected), spectroscopically selected
\hd\ \lae\ sample at \zrange{1.9}{3.5} provides a unique vantage point on galaxy evolution, as these galaxies probe the lower-mass end of the galaxy distribution, making them analogous to typical galaxies discovered in the epoch of reionization (e.g., \citealt{fink10}).
As the first step toward realizing \hd's ability to unlock \laes\ as probes of reionization, we present an initial study detailing how to link detections from the survey to imaging counterparts, and we provide an SED fitting analysis of their stellar population properties. Our modest sample of \nsamp\ \laes\ in the \gn\ field will pave the way for future large samples from \hd\ to obtain the best understanding of \laes\ to date. In \S\ref{sec:method} we describe how we built our sample and selected imaging counterparts. In \S\ref{sec:analysis} we describe our SED fitting procedure. We present our results in \S\ref{sec:results}, comparing them to other studies, and we discuss our interpretations in \S\ref{sec:discussion}. Finally, we attempt to predict the \lya\ emission from a sample of epoch of reionization (EoR) galaxies in \S\ref{sec:predictions} and summarize this study in \S\ref{sec:summary}. In our analysis, we adopt a flat $\Lambda$CDM cosmology with $H_0 = 70 \ \mrm{km\ s^{-1}\ Mpc^{-1}}$ and $\Omega_{\mathrm{m}} = 0.30$.
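For the adopted cosmology ($H_0 = 70\ \mrm{km\ s^{-1}\ Mpc^{-1}}$, $\Omega_{\mathrm{m}} = 0.30$, flat), distances can be reproduced by direct numerical integration. A stdlib-only sketch (an illustration of the adopted parameters; real analyses would typically use a cosmology library):

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def lum_dist_mpc(z, h0=70.0, om=0.30, n=10000):
    """Luminosity distance (Mpc) in flat LCDM, via trapezoidal
    integration of 1/E(z') from 0 to z, where
    E(z) = sqrt(Om*(1+z)^3 + (1-Om)) and D_L = (1+z) * D_C."""
    e = lambda zp: math.sqrt(om * (1.0 + zp) ** 3 + (1.0 - om))
    dz = z / n
    integral = sum((1.0 / e(i * dz) + 1.0 / e((i + 1) * dz)) * 0.5 * dz
                   for i in range(n))
    d_c = (C_KM_S / h0) * integral  # comoving distance, Mpc
    return (1.0 + z) * d_c
```

For example, at the upper end of the HETDEX Ly$\alpha$ window ($z \approx 3$) this gives a luminosity distance of roughly 25 Gpc for the adopted parameters.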
\section{Methodology}
\label{sec:method}
In order to explore how \lya\ emission from galaxies depends on stellar population properties, we built a sample of \laes\ using emission line detections from the \hd\ survey, carefully identifying them as \lya\ or other contaminant features, such as \oiill, which is unresolved at \hd\ resolution. We then created a procedure for assigning the line detections to imaging counterparts in \hst\ data so that we could proceed with fitting their SEDs.
\subsection{The HETDEX Survey}
With \hd\,
the upgraded Hobby-Eberly Telescope (\citealt{rams94}, \citealt{hill21}) is observing an area of $540 \ \mathrm{deg}^2$ in the north Galactic cap and on the celestial equator using up to 78 pairs of integral-field spectrographs that span $350-550 \ \mathrm{nm}$ at $R \sim 800$. Each spectrograph pair is fed by an integral field unit (IFU) of 448 1.5$\arcsec$-diameter fibers which cover a 51$\arcsec$ $\times$ 51$\arcsec$ region on the sky with 1$/$3 fill factor \citep{kelz14, hill21}. Each HETDEX observation consists of three 6-min dithered exposures to fill in the area between fibers, each with $>$30,000 individual fibers. The majority of these fibers just contain blank sky, but some subset contain continuum sources such as stars or emission lines from both nearby and distant galaxies.
\citet{gebhardt21} describe the data reduction and calibrations needed to convert the raw observations into a three dimensional spectroscopic data set as well as the methods used to detect emission lines contained in the millions of observed spectra. As a brief summary, HETDEX reductions involve three types of calibration frames: biases (taken nightly), pixel flats (taken yearly using a laser-driven light source), and twilight sky flats (taken nightly and averaged monthly), which are used for bias subtraction, bad pixel masking, fiber profile tracing, wavelength calibration, scattered light removal, spectral extraction, fiber normalization, spectral masking, and sky subtraction. These frames, combined with sky background on science images, produce a wavelength calibrated, sky-subtracted spectrum for each fiber in the array.
Astrometric calibrations are achieved by measuring the centroid of each field star and comparing their positions on the IFUs to the stars' equatorial coordinates in the Sloan Digital Sky Survey \citep[SDSS;][]{york00, abazajian09} and \textit{Gaia} \citep{gaia18} catalogs. This process typically results in global solutions which are good to $\sim 0.2\arcsec$ and no worse than $\sim 0.5\arcsec$, with the exact precision of a measurement dependent upon the number of IFUs in operation at the time of the observation. %
To find emission lines, the data pipeline searched every spatial and spectral resolution element in the internal HETDEX data release 2 (HDR2) to look for a peak in signal. Regions of enhanced signal were fit with a single Gaussian model with a constant continuum level, a model found adequate for potentially asymmetric line profiles by \cite{gebhardt21} because of the low resolution of the VIRUS spectrographs and low signal to noise (\snr) of typical sources. The exact location was determined by rastering on a grid and maximizing the line's signal-to-noise. An internal catalog of high-quality emission lines was generated by Mentuch Cooper et al. (in preparation),
and we drew our initial sample from the \hdr\ version of that catalog. The catalog reduced the raw line detections described in \citet{gebhardt21} to a more robust sample by passing the observations through a quality assessment pipeline and limiting various fitted line parameters. Specifically, emission lines were required to have a goodness of fit $\chi^2 < 1.2$ and a Gaussian-model linewidth $\sigma$ between 1.7\,\AA\ and 8\,\AA.
The full \hd\ survey will eventually detect $\sim1$ million LAEs, providing an incredible opportunity to study such objects, but our analysis is focused on \laes\ discovered in 2018--2020 data from a \hd\ science verification field in \gn, a roughly $10^\prime \times 16^\prime$ field centered at (J2000) $\mrm{12^h 36^m 55^s}$, $62^\circ 14^\prime 15^{\prime\prime}$ (\citealt{giavalisco04}, \citealt{grogin11}, \citealt{koekemoer11}), because we required deep, multi-band imaging to study each galaxy's stellar populations.
\subsection{Sample Selection}
\label{ssec:sampleselection}
We visually inspected \hd\ detections in \gn\ to obtain a clean sample of \laes. To get initial candidates, we applied various quality cuts to the curated catalog for data release \hdra\ (Mentuch Cooper et al., in preparation). We restricted emission line detections to those with signal-to-noise ratio $S/N>5.5$ to limit the fraction of spurious detections from noise fluctuations to less than 5\% (see \citealt{gebhardt21}), as well as $\chi^2 < 1.6$ for the Gaussian model fit, a value tuned to remove the most obvious artifacts while retaining the largest sample for inspection. We required the emission line full-width at half-maximum (FWHM) to lie between 3.4\,\AA\ and 24\,\AA, where the lower bound removed exceedingly narrow peaks arising from unidentified cosmic rays and the upper bound removed emission from broad-line AGNs, which we considered contaminants in this study (see \S\ref{ssec:sedfitting}). We further included only observations with throughput $>0.07$, for reliable flux measurements minimally affected by cloud cover, and seeing below 2.8$\arcsecs$, to enable continuum counterpart identification. We did not remove ``repeat'' detections, coincident spatially and spectrally, that resulted from the survey revisiting the field multiple times, in order to ensure we found as many \lya\ detections as possible. We excluded data in \gn\ taken prior to 2018, as they included significant artifacts from early CCDs that had been replaced by 2018.
Finally, we did not initially remove any detections based on the Bayesian probability values used to help determine the identity of an emission line as \lya\ versus \oiill, such as \plae. These probabilities, which are calculated by the \hd\ team based on the work of \citet{leung17} and \citet{farrow21}, leverage the inherent differences between the emission line luminosity and equivalent width (EW), $\ew$, distribution functions of LAEs and \oii\ emitters to identify single emission line detections using information about the line flux and continuum emission, when available. During the process of visual inspection, we used the statistic to guide our identifications, and we make recommendations for quality cuts based on this statistic at the end of this section. We do not believe that keeping this statistic visible to the classifier biased our results because we implemented an independent procedure (see \S \ref{ssec:cptident}) to distinguish LAEs from low-redshift counterparts that relied on SED fitting.
After applying quality cuts, we began with \ninspect\ detections (of which $\sim$500 were ``unique'' in the sense that there were no other emission line detections within 3$\arcsec$ spatially and 6\,\AA\ spectrally). To inspect each detection, we used the \hd\ Emission Line eXplorer tool \elixer\ (Davis et al., in preparation), %
which shows measured quantities for the emission lines such as \snr, line width, line fit $\chi^2$, the continuum estimate, the Bayesian probability for \lya\ emission described above, as well as useful visual information, such as cutouts of the 2D spectra for several fibers containing the feature, the Gaussian model fit to the feature, the full 1D spectrum, and any imaging and catalog data uploaded in the \hd\ pipeline.
We rated our confidence in a detection on a scale of 0--5 using a customized widget tool that allows interactive classification of detected sources based on their \elixer\ reports (see Figure~\ref{fig:elixer_lae}). Other available classifications included ``artifact,'' a false detection caused by a malfunction in the instrument or the reduction pipeline; low-redshift source; and ``other,'' for miscellaneous objects like meteors. To qualify for a classification of 4 or 5 (a high-confidence LAE by our definition), a detection had to meet the following criteria:
\begin{itemize}
\item A clear emission line in at least one fiber in the un-smoothed 2D spectrum, or a probable emission line in at least two fibers. Since each point-spread-function (PSF) covers multiple fibers (due to the dithering pattern), we expected strong emission to be seen in more than one fiber, increasing the likelihood of a real detection.
\item No obvious defects at the emission line location in the pixel flat or sky subtraction cutouts. This eliminated hot pixels, sky model residuals, charge traps, and other artifacts from the sample.
\item A Gaussian plus constant continuum model fit that adequately matched the data and did not have a FWHM far below the spectral resolution of $\sim 6$\,\AA\null.
\item A line peak that exceeded the typical noise level in multiple pixels in the 1D spectrum.
\item No source at the line's detection position brighter than roughly $m_{AB} = 24$ in the imaging cutouts, if available. The high equivalent widths of sources fainter than this threshold drastically decrease the likelihood of contamination by \oii\ emitters (see Figure 6 in \citealt{leung17}), though a few low equivalent width, luminous LAEs can be missed with this requirement.
\end{itemize}
As the \oiill\ emission feature falls into the \lrange{3500}{5500} spectral range for $z<0.5$, the imaging proved crucial in choosing between high-redshift \laes\ and interloping \oii\ emitting galaxies.
Figure~\ref{fig:elixer_lae} shows an example \elixer\ report for a source classified as a high-redshift \lae\null. Note that for readability, tabulated numeric information such as \plae, line flux, line model $\chi^2$, and more was cropped out of this visualization, but was visible to the classifier. In Figure~\ref{fig:elixer_lae}, a clear emission feature is present as a black signal in three out of the four 2D un-smoothed fiber spectra, the sky subtraction looks clean, the model fit accurately represents the data, and the image stamps show a number of faint sources with photometric redshift estimates reasonably close to the \lya\ redshift (shown by the vertical red dashed line). Figure~\ref{fig:elixer_lowz} shows a clear example of a low-redshift object detected by its \oii\ emission line. As in Figure~\ref{fig:elixer_lae}, the line appears strong in multiple fibers, and the sky subtraction and model fit present no concerns. As is characteristic of a brighter low-redshift galaxy, continuum emission is visible as a horizontal black trace in the fiber spectra, and a large, bright object appears in the \hst\ image stamps. In this case, the object is in fact a cataloged \oii\ emitter, but even without such information this would be a clear low-redshift classification. In both of these cases, no other emission lines are detected, or would be expected to be detectable, across the observed wavelength range.
After classifying each detection, we obtained $\sim 200$ detections categorized as high confidence \laes\ (scores of 4-5) and almost three times as many classified as low-$z$ sources (Figure~\ref{fig:class_dist}). Note that we did not include detections with scores of 3 or below for initial study, as we wanted the cleanest sample possible. To assess the \hd\ collaboration's built-in Bayesian classification probability, \plae, we plotted that statistic for all of our detections classified as either low-$z$ galaxies or \laes. Figure~\ref{fig:plya_comp} shows that true LAE detections rarely score low in the \plae\ statistic, but a few low-$z$ sources can score in the intermediate range. For this reason, we suggest future studies can dramatically reduce the number of visual inspections needed by adopting a cutoff of \plae $\gtrsim 0.6$ for \lae\ candidates.
To finalize our sample, we removed detections of the same source (since the \gn\ field was observed multiple times between 2018--2020), by selecting the highest \snr\ measurement of all detections grouped within 2$\arcsec$ and one spectral resolution element (6~\AA\null). Our final emission line sample consisted of \nlyalines\ high-confidence \lya\ detections (with classification scores of 4-5).
\subsection{Counterpart Identification}
\label{ssec:cptident}
In order to study the stellar populations of the \laes\ in our sample, we developed a method to match the un-targeted spectroscopic detections to counterparts in \hst\ imaging of the \gn\ field.
The overall astrometric precision of a \hd\ observation is $\sim$0.2\arcsec. However, due to the 1.5\arcsec\ diameters of the fibers, the typical seeing, and the 3-dither pattern, the position of an individual (faint) LAE is known to no better than $\sim$0.5\arcsec. Since the HST images have a resolution that is $\sim$20 times higher than this, great care is needed to ensure an emission-line source is matched with the correct counterpart.
We used the imaging obtained by the Great Observatories Origins Deep Survey \citep[GOODS;][]{giavalisco04} with the optical ACS camera, and the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey \citep[CANDELS;][]{grogin11,koekemoer11} with the WFC3/IR infrared camera, using the internal CANDELS team's reduced mosaics for each filter. This dataset consists of imaging in nine filters (F435W, F606W, F775W, F814W, F850LP with ACS, and F105W, F125W, F140W and F160W with WFC3/IR). We made use of the photometric catalogs derived by \citet{fink21}, which used \srcex\ \citep{sextractor} in two-image mode to create an F160W-selected catalog, coupled with the Tractor \citep{lang16} to perform deblended photometry on the deep S-CANDELS \citep{ashby15} {\it Spitzer}/IRAC 3.6 and 4.5 $\mu$m imaging. Further details on the cataloguing process are available in \citet{fink21}. Similar to the widget used to classify detections as \lya, we created a visual inspection tool that provided information about the distance between the centroid of the HETDEX emission location and a given imaging source, the HETDEX emission-line strength when re-extracted centered at the imaging counterpart position, and the goodness of an SED fit assuming the \lya\ redshift, $z_{\mrm{Ly\alpha}}$.
Before selecting counterpart candidates, we optimized our search by developing a deep photometric catalog using a stacked image across all \hst \ filters in \gn. Each pixel value in this image and its error was computed using an inverse variance weighted average across $N=9$ filters with pixel value $p_i$ and rms error $\sigma_i$ given by Equation~\ref{eqn:weightedsum}.
\begin{equation}
\label{eqn:weightedsum}
\bar{p} = \frac{\sum_i^N p_i \sigma_i^{-2}}{\sum_i^N \sigma_i^{-2}} \ , \
\sigma_{\bar{p}} = \left( \sum_i^N \sigma_i^{-2} \right)^{-1/2}
\end{equation}
Since LAEs are often low-mass, faint systems, this stacked image improved our chances of identifying the continuum source corresponding to the detected emission line.
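As a concrete illustration of Equation~\ref{eqn:weightedsum}, the per-pixel stack can be sketched in \python\ as follows (the function name and array layout are our own; this is an illustrative snippet, not the pipeline code):

```python
import numpy as np

def ivw_stack(pixels, errors):
    """Inverse-variance-weighted stack of N filter images (per-pixel mean).

    pixels, errors: arrays of shape (N, ny, nx) giving each filter's pixel
    values p_i and rms errors sigma_i. Returns the stacked image p_bar and
    its error map sigma_bar.
    """
    w = 1.0 / np.asarray(errors, dtype=float) ** 2    # weights sigma_i^{-2}
    p_bar = np.sum(np.asarray(pixels) * w, axis=0) / np.sum(w, axis=0)
    sigma_bar = np.sum(w, axis=0) ** -0.5             # error on the weighted mean
    return p_bar, sigma_bar
```

Pixels with smaller rms errors dominate the average, which is what makes the stack deeper than any single filter for faint continuum sources.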
We then used \srcex\ \citep{sextractor} to detect the faintest possible sources in the stacked image, requiring a source to have 5 contiguous pixels with \snr$>1.6$. Following the procedures outlined in \citet{fink21}, we used the same software in two-image mode to measure the flux in each filter and applied the appropriate aperture correction obtained from simulations. We performed extinction corrections using a Cardelli extinction law with $R_V=3.1$ for the Milky Way \citep{cardelli89}. We then compared the fluxes measured in this catalog to the F160W-selected catalog of \citet{fink21} and found the flux measurements to have no systematic offset and minimal scatter. Figure~\ref{fig:photflux} shows the fractional error of the stacked catalog photometry compared to the \citet{fink21} photometry as a function of source brightness in the $I$-band. The median offset is zero with scatter of roughly 25\% for fluxes near 100 nJy, in agreement with the typical error bars for such sources, providing confidence in the fidelity of the stacked catalog. In all subsequent analysis, we defaulted to using measurements from the \citet{fink21} catalog for sources detected in both, and we only used photometry from the stacked catalog for five LAEs in our sample unique to it.
After generating the catalog from stacked imaging, we identified all imaging sources within $3\arcsec$ of the \hd\ detection position as possible \lae\ counterparts. Since the typical image quality of the HETDEX observations used here has a point spread function (PSF) of $\sim 1.7\arcsec$, the $3\arcsec$ radius served as a generous search aperture around the \lya\ centroid, encompassing all possible counterparts for the detected emission.
We selected imaging counterparts based on the neighboring sources' angular distances from the detection, significance of emission extracted at the source positions, and goodness of SED fits performed by fixing the redshift assuming a \lya\ detection. First, we measured the on-sky angular separation from the detection position to the position of each possible source in the photometric catalog (labeled $\theta$ in Figure~\ref{fig:cpt_inspect}). Then, for each source, we used the \hd\ API script, \href{https://github.com/HETDEX/hetdex_api}{\tt get\_spectrum.py}\footnote{https://github.com/HETDEX/hetdex\_api}, to perform an aperture-weighted optimized spectral extraction (following \citealt{horne86}) at the source position to obtain a 1D spectrum. We created a Markov chain Monte Carlo (MCMC) line-fitting code using \emcee\ \citep{emcee} to fit a model to the feature to estimate its flux and significance. Our model consisted of two components: a linear trend with slope $m$ and intercept $b$, which captured any underlying continuum, and a Gaussian with total flux $F$ and standard deviation $\sigma$ to fit the line profile.
\begin{equation}
\label{eqn:linefit}
f_\lambda = m(\lambda - \lambda_0) + b + \frac{F}{\sqrt{2\pi} \sigma}
\exp \left[- \frac{(\lambda - \lambda_0)^2}{2 \sigma^2} \right]
\end{equation}
In the model, $\lambda_0$, the wavelength of the emission line, was allowed to vary by $\pm$ one pixel (2\,\AA) from the detection wavelength reported by \hd\null. For each fit, we measured an effective \snr\ (labeled SNR in Figure~\ref{fig:cpt_inspect}) by comparing the median value of the line flux to the standard deviation of the line flux for the last 20\% of the MCMC sampling chain, which had converged at that stage of sampling. To limit computation time for future counterpart identification steps, we ruled out any counterpart candidates that showed no indication ($S/N > 1$) of an emission feature at the pixel corresponding to the detected wavelength. Finally, for those sources with significant emission, we performed SED fitting with \bagpipes\ (see Section~\ref{ssec:sedfitting} for a full description of this procedure), fixing the redshift as $z_{\mrm{Ly\alpha}}$. Our simple SED model for counterpart identification included free parameters for stellar mass, metallicity, dust extinction, and SFH, and we adopted the \citet{calzetti94} dust attenuation law, the \cite{chabrier03} initial mass function, and a delayed-$\tau$ SFH\null. At this stage, we did not include any IRAC fluxes in our fits since those fluxes depend sensitively on deblending, which is unreliable when sources are crowded. Furthermore, between $1.9 < z < 3.5$, there are no strong spectral features at the rest-frame wavelengths probed by IRAC, and redshift-sensitive features such as the 4000\,\AA\ break are adequately covered by \hst. We then visually inspected the separations, spectral extractions, and SED fits of all candidate counterparts to choose the one most likely to be the detected LAE.
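The model of Equation~\ref{eqn:linefit}, together with the Gaussian log-likelihood one would pass to an \emcee\ sampler, can be sketched as follows (an illustrative snippet with our own function names; the priors, including the $\pm 2$\,\AA\ bound on $\lambda_0$, and the chain-convergence bookkeeping described above are omitted):

```python
import numpy as np

def line_model(lam, m, b, F, sigma, lam0):
    """Linear continuum plus Gaussian emission line, per Eq. (linefit)."""
    cont = m * (lam - lam0) + b
    line = F / (np.sqrt(2.0 * np.pi) * sigma) * np.exp(
        -((lam - lam0) ** 2) / (2.0 * sigma ** 2))
    return cont + line

def log_likelihood(theta, lam, flux, flux_err):
    """Gaussian (chi^2) log-likelihood suitable for an MCMC sampler."""
    m, b, F, sigma, lam0 = theta
    model = line_model(lam, m, b, F, sigma, lam0)
    return -0.5 * np.sum(((flux - model) / flux_err) ** 2)
```

The Gaussian is normalized so that $F$ is the integrated line flux, which is why the fitted flux can be read off directly from the posterior samples.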
Figure~\ref{fig:cpt_inspect} shows an example of our approach. Separate sources are marked with an ``X,'' and the color of the mark corresponds to the color of the table row, spectrum, and SED in the subsequent plots. In this case, the red and orange sources within $0\farcs 5$ of the detection position (magenta star) show similar extracted emission line flux at the detection wavelength. Crucially, the SED fit for the red object poorly matches the data when fixing the redshift as $z_{\mrm{Ly\alpha}}$, but the orange object has a fit in excellent agreement with its observed SED based on the $\chi^2$ statistic. Therefore, in this example case we selected the orange object as the detected LAE at $z=2.90$. We followed the same process to identify counterparts for the other \lya\ lines in our sample.
By studying the distribution of our counterparts in the parameter space of separation, signal-to-noise of emission, and SED $\chi^2$, we found no obvious way to select counterparts reliably based on these numbers alone, but we did find favorable regions. Figure~\ref{fig:cpt_stats}a shows the distribution of separation from the detection positions for sources we identified as \laes\ and sources that just happened to be nearby. Clearly, it was exceedingly unlikely that the true counterpart lay farther than 1$\arcsec$ away on-sky. For this reason, we could very reasonably shrink our selection criteria from all sources within 3$\arcsec$ to roughly 1$\arcsec$ without significant loss of \laes. In terms of emission line \snr\ (compared to the measured value of the detection itself), we found that, while typically the identified counterparts had stronger emission, the \hd\ PSF caused the extracted flux to not depend sensitively enough on position to clearly identify the counterpart for sources separated by less than 1$\arcsec$. This is clearest in Figure~\ref{fig:cpt_sep_snr}a, which shows that true counterparts and close neighbors show overlap in the \snr, separation plane. Note that the different on-sky centroids for emission line extraction between the counterparts and the original detection allow for the values of the \snr\ ratios in Figure \ref{fig:cpt_stats}b to be greater than unity. Finally, we note that, while most of the \laes\ in our sample had $\chi^2$ values in good agreement with the $z_\mrm{Ly\alpha}$ hypothesis, many neighboring galaxies also had low $\chi^2$, as shown in Figure~\ref{fig:cpt_stats}c. We attribute the low $\chi^2$ values for non-counterparts to our inclusion of such faint objects, which have large flux errors and are thus easily fit by a wide range of models.
After visually vetting all detections in our sample of \nlyalines\ \lya\ lines, we found 6 instances of detected emission with no continuum-detected counterpart. Since we could not study the properties of an LAE without photometry, we removed these objects from the final analysis. Furthermore, we removed 16 objects from the sample due to the following quality concerns. We eliminated the LAE corresponding to \hd\ detection ID 2100245124 (RA,DEC=189.346621$^\circ$,62.260662$^\circ$) from our sample as it was the only counterpart with an X-ray detection in the catalog of \citet{xue16}, indicating the galaxy hosted an AGN\null. Since our SED fitting code did not have an AGN template, we could not reliably report the physical properties of this object. We also eliminated the detection for ID 2100171783, as the counterpart inspection revealed the \lya\ emission line came from two probable LAEs separated by less than $0\farcs 5$ meaning we could not assign flux accurately to each source. Finally, we only analyzed objects detected in the $H$-band (F160W) of the \hst\ imaging as well as at least two bluer bands in order to span the rest-frame 4000\,\AA\ break at the sample redshift range. This serves as a crucial feature for constraining galaxy masses and ages with SED fitting (e.g., see \citealt{shapley03}). These choices limited our final sample size to \nsamp\ LAEs in \gn\ spanning \zrange{1.98}{3.48}. For 5 of these objects, photometry was not present in the catalog of \citet{fink21}, so we used photometry from the stacked catalog described in \S \ref{ssec:cptident}. Appendix \ref{sec:appendixb}, Figure~\ref{fig:all_lines} shows all the \hd\ emission lines for the LAEs in the final sample, and Figure~\ref{fig:hst_stamps}
shows \hst\ imaging in F160W ($H$-band) for all objects.
\section{Analysis}
\label{sec:analysis}
After connecting \hd\ emission line detections with \hst\ imaging counterparts, we leveraged SED fitting to measure the galaxies' stellar population properties. From the SED fits and emission line detections, we also inferred the UV-slope and \lya\ equivalent width.
\subsection{SED Fitting with \bagpipes}
\label{ssec:sedfitting}
We fit all LAEs in our final sample with \bagpipes\ \citep{carnall18}, a flexible \python\ code that rapidly generates galaxy model spectra through stellar population synthesis using the 2016 version of the \citet{bruzual03} stellar spectral libraries. It explores the high-dimensional, multi-modal, and degenerate (e.g., age-dust-metallicity) model parameter space using the \multinest\ algorithm \citep{multinest}.
Our sample in \gn\ had photometry across nine \hst\ filters ranging from 0.4 to 1.6 $\mic$ as well as two \spitzer/IRAC channels centered at 3.6 $\mic$ and 4.5 $\mic$. Translating to the rest-frames of the objects in our sample at \zrange{1.9}{3.5}, these filters probed the UV, optical, and near-infrared (NIR) energy output of our objects.
The filter coverage of our sample of \laes\ motivated our choice of SED modeling parameters. Table~\ref{table:sedparams} shows the names and units of the free parameters in our model, as well as the prior probability distributions assumed in our Bayesian framework. We adopted a delayed-$\tau$ SFH, defined as:
\begin{equation}
\label{eqn:sfh}
\mrm{SFR}(t) \propto
\begin{cases}
(t-t_0) e^{-(t-t_0) / \tau} & t > t_0 \\
0 & t \leq t_0
\end{cases}
\end{equation}
This flexible SFH allows for star formation to be either rising, peaking, or falling, as opposed to the common exponentially declining model that only allows for falling SFRs over time. For example, \citet{lee10} found that SED fitting that adopted rising SFHs matched the stellar masses and SFRs from semi-analytic models for galaxies at \zrange{3}{6} better than exponentially declining models, while \citet{papovich11} found similar results favoring rising SFHs for real galaxies at $z =$ 4--7. We fit the $e$-folding scale of the SFH, $\tau$, the age of the Universe at the onset of star formation, $t_0$, the stellar mass formed, $M_\mrm{form}$, the global metallicity, $Z$, the dust extinction in the $V$-band, $A_V$, and the ionization parameter, $\log U$, defined as the log of the ratio of the number densities of ionizing photons and hydrogen atoms. Though we fit the total stellar mass formed by a galaxy, $M_\mrm{form}$, we report its stellar mass at the redshift of observation excluding remnants, and we denote that stellar mass $M_\star$. We note that some of the parameters (namely $Z$ and $U$) are not expected to be well-constrained by our photometric data. Nonetheless we allow them to vary within our imposed priors such that the uncertainties in the other parameters include the uncertainties in these parameters. We adopted the \citet{calzetti94} dust attenuation law for star-forming galaxies and the \citet{chabrier03} initial mass function.
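For illustration, the delayed-$\tau$ SFH of Equation~\ref{eqn:sfh} can be written as the following (unnormalized) sketch; it rises linearly from $t_0$, peaks at $t = t_0 + \tau$, and then declines exponentially (function name ours):

```python
import numpy as np

def delayed_tau_sfr(t, t0, tau):
    """Unnormalized delayed-tau star-formation history.

    t, t0, tau in Gyr; t and t0 are measured from the Big Bang.
    SFR is zero before t0, rises linearly at first, peaks at
    t = t0 + tau, then declines with e-folding scale tau.
    """
    t = np.asarray(t, dtype=float)
    return np.where(t > t0, (t - t0) * np.exp(-(t - t0) / tau), 0.0)
```

Because the peak location and width are set by only two parameters, the same form covers rising, peaking, and falling SFRs at the epoch of observation.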
\begin{table}
\centering
\begin{tabular}{ |c|c|c|c| }
\hline
Parameter & Prior & Bounds & Units \\
\hline\hline
$t_0$ & Uniform & 0, $T(z)$ & Gyr \\
\hline
$\tau$ & Uniform & 0.3, 10. & Gyr \\
\hline
$M_\mrm{form}$ & Log Uniform & $10^6$, $10^{12}$ & $\msun$ \\
\hline
$Z$ & Log Uniform & $10^{-5}$, 2 & $\zsun$ \\
\hline
$A_V$ & Uniform & 0, 2 & mag \\
\hline
$\log U$ & Uniform & $-4$, $-2$ & -- \\
\hline
\end{tabular}
\caption{Free parameters and their prior probability distributions for SED fitting. In our galaxy models, the redshift, $z$, was fixed based on the observed wavelength of \lya\ from \hd. $T(z)$ refers to the age of the Universe at redshift $z$. Note that we fit the cumulative stellar mass formed, $M_\mrm{form}$, from which the stellar mass (excluding remnants) at the object redshift was computed within the \bagpipes\ \citep{carnall18} code.}
\label{table:sedparams}
\end{table}
Not all 11 filters were included in every galaxy SED fit in our sample. For example, due to the large PSF of the IRAC imager, modeling sources in crowded fields of view and deblending the flux contribution of each source is crucial to accurately measuring the NIR fluxes of our LAEs. Although the catalog we used performed deblended photometric modeling with the IRAC PSF, this process can fail in crowded regions. We thus visually inspected all IRAC residual maps for objects in our sample and removed the IRAC fluxes from our SED fitting if there were obvious problems in the deblending procedure. For the 5 objects not present in the catalog of \citet{fink21}, we did not have IRAC measurements. Furthermore, because the purpose of our analysis was to study the SED-derived properties of our \laes\ in relation to their \lya\ emission, we did not want \bagpipes's modeling of \lya\ emission or the IGM attenuation to bias our results. For this reason, we masked out all filters whose bandpass extended blue-ward of the observed \lya\ line; thus the $B$-band (F435W) was always excluded, and sometimes the $V$-band (F606W) as well, depending on redshift.
Figure~\ref{fig:sedfit} shows an example \bagpipes\ SED fit for an \lae\ in our sample. We plotted the $1\sigma$ spread on the model photometry as rectangles as well as the $1\sigma$ spread on the underlying model spectrum computed by evaluating the 16th and 84th percentiles of the posterior models. In this example, the fit did an excellent job matching salient features like the rest-frame 4000\,\AA\ break and nebular emission in the rest-frame optical region. We estimated galaxy properties using the posterior distributions for all free parameters explored by \bagpipes. Figure~\ref{fig:corner} shows an example ``corner'' plot (produced via \citealt{corner}), where all free parameters are plotted against each other for easy assessment of constraints and correlations. Stellar mass, time since the onset of star formation, and dust extinction were constrained well, while metallicity, $\tau$, and ionization parameter were not well-constrained by our broadband photometry data. Figure~\ref{fig:all_sed_2} in Appendix \ref{sec:appendixb} shows the SED fits for all \nsamp\ LAEs in our final sample.
\subsection{Measuring $\ewlya$ and $\beta$}
Emission line strengths can be characterized by the equivalent width (EW or $\ew$): the width of a rectangle whose height equals the continuum level and whose area matches that under the emission line. To estimate the equivalent width of \hd\ \lya\ detections, we used the measured line flux and error from the internal \hdrb\ catalog, computed by optimally extracting flux from all fibers within a 3.5$\arcsec$ radius circular aperture (roughly 15-20 individual fiber spectra) contributing to the emission line detection (following \citealt{horne86}), weighted by the PSF of a point-source. We approximated the continuum flux density using the \bagpipes\ sampled model spectra from the SED fit. We took the continuum flux density to be the median value of all 500 sampled spectra averaged between 1250 and 1300\,\AA\ in the given object's rest-frame, and we computed the $1\sigma$ error using half the spread between the 16th and 84th percentiles of those values. This method allowed us to take advantage of complex computations performed by \bagpipes\ to get a statistically representative estimate of the continuum flux density instead of using a coarse approximation based on the flux in one of our photometric bands. We evaluated the \lya\ flux and the continuum flux density in the observer-frame and translated to the galaxy rest-frame by dividing by a factor of $(1+z)$, using the detected wavelength of \lya\ to determine $z$:
\begin{equation}
\label{eqn:eqwidth}
\ewlya = \frac{F_{\mrm{Ly\alpha}}}{f_{\lambda}} (1+z)^{-1}
\end{equation}
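Equation~\ref{eqn:eqwidth} amounts to the following one-liner (an illustrative sketch with our own naming; in practice the continuum flux density and its error come from the \bagpipes\ posterior as described above):

```python
def rest_frame_ew(line_flux, cont_flux_density, z):
    """Rest-frame equivalent width in Angstroms.

    line_flux: observer-frame integrated line flux [erg s^-1 cm^-2]
    cont_flux_density: observer-frame continuum f_lambda
        [erg s^-1 cm^-2 A^-1]
    z: redshift inferred from the detected Ly-alpha wavelength
    """
    return line_flux / cont_flux_density / (1.0 + z)
```

For example, a $10^{-16}$ erg s$^{-1}$ cm$^{-2}$ line over a $10^{-18}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$ continuum at $z=3$ corresponds to a rest-frame EW of 25\,\AA.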
We measured $\beta$, the UV continuum slope (un-corrected for dust), using the model spectra for galaxies in our sample following the method described in \citet{fink12a}. We masked the stellar and interstellar absorption features in the rest-frame UV using the windows provided by \citet{calzetti94}, and we fit a linear model to the spectrum in log space ($\log f_\lambda = \beta \log\lambda + C$) using \code{polyfit} from the \python\ package \code{NumPy} \citep{numpy}. We determined $1\sigma$ uncertainties on $\beta$ for each object by measuring the distribution of values fitted to 500 spectral models sampled from the posterior by \bagpipes.
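The $\beta$ measurement reduces to a masked linear fit in log--log space; a sketch follows (our own function name, with the \citet{calzetti94} windows passed in rather than hard-coded):

```python
import numpy as np

def fit_uv_slope(wave_rest, f_lambda, windows):
    """Fit the UV continuum slope beta via log f_lambda = beta*log(lambda) + C.

    wave_rest: rest-frame wavelengths [A]; f_lambda: model flux density;
    windows: list of (lo, hi) wavelength ranges free of absorption features.
    Returns beta (the slope; the base of the log cancels out of the slope).
    """
    mask = np.zeros_like(wave_rest, dtype=bool)
    for lo, hi in windows:
        mask |= (wave_rest >= lo) & (wave_rest <= hi)
    beta, C = np.polyfit(np.log10(wave_rest[mask]), np.log10(f_lambda[mask]), 1)
    return beta
```

Repeating this fit over spectra sampled from the posterior yields a distribution of $\beta$ from which the $1\sigma$ uncertainty can be read off.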
\section{Results}
\label{sec:results}
We measured various physical properties of objects in our \lae\ sample using the posterior distributions returned by \bagpipes' exploration of the parameter space. We took the 16th and 84th percentiles of the posterior distributions to represent the error bars on physical properties. Examples of such measurements are shown in Figure~\ref{fig:corner} for a representative LAE in our sample.
\subsection{SED-Derived Properties}
\label{ssec:sedprops}
Figure~\ref{fig:distributions} shows the 1D distributions of posterior median values of stellar mass ($M_\star$), star formation rate (SFR), specific star formation rate (sSFR), dust extinction ($A_V$), mass-weighted age, and UV-slope ($\beta$) for all objects in our final LAE sample. We found the median stellar mass of our \hd\ \laes\ to be $0.8^{+2.9}_{-0.5} \times 10^{9} \ \msun$. This stellar mass value lies near the median masses of \laes\ selected in narrowband imaging surveys covering redshifts similar to this study (e.g. \citealt{guaita11}, \citealt{gawiser07}, \citealt{vargas14}, \citealt{kusakabe18}, \citealt{santos20}) and well below typical masses of Lyman-break selected objects (e.g. \citealt{shapley03}, \citealt{papovich01}, \citealt{trainor19}), which often have minimum masses an order of magnitude larger due to the depth of the broadband imaging used in their selection.
We used our SED fitting procedure to estimate the $V$-band dust attenuation of starlight for galaxies in the sample, obtaining a median value of $A_V = 0.3^{+0.4}_{-0.1}$ mag. Dust has been measured in many other samples of \laes, with values of $A_V$ or $E(B-V)$ often falling within a factor of two of this study (e.g. \citealt{guaita11}, \citealt{fink09}, \citealt{hathi16}, \citealt{kusakabe18}, \citealt{matthee21}).
As with dust reddening, our \lae\ sample has ages and star formation rates similar to \lae\ samples in the literature compiled using narrow-band or continuum selection methods. Our SED-derived mass-weighted ages, typically spanning 0.05-0.5 Gyr, broadly agree with the narrow-band samples of \citet{acquaviva11}, \citet{fink09}, \citet{gawiser07}, and \citet{vargas14}. Our median SFR, $4.8^{+10.4}_{-3.8} \mathrm{M_\odot / yr}$, falls near values reported by \citet{gawiser07}, \citet{hathi16}, and \citet{kusakabe18}, but falls 1 dex above the median SFR for \laes\ found in the MUSE HUDF Survey \citep{feltre20}. This discrepancy does not surprise us: the MUSE HUDF LAE sample had a median mass roughly 0.5 dex lower than this study, and it spanned \zrange{2.9}{4.6}, probing an era of lower star formation activity in the Universe than the one studied here (see \citealt{madau14}).
Our model included stellar metallicity and ionization parameter as free parameters, but our broadband photometric data could not constrain those values precisely (see Figure~\ref{fig:corner}), since reliable estimates typically require sensitive emission line diagnostics (e.g. \citealt{reddy21}), which were coarsely probed at best by our filter set. For this reason, we do not present or discuss our galaxies' metallicities or ISM ionization conditions, but we note that by letting these parameters vary, our posterior constraints on all other parameters include the uncertainties in these quantities.
\subsection{$\ewlya$ Distribution}
The equivalent width distribution of LAEs has been modeled by various authors as exponential with the form given by Equation~\ref{eqn:ewdist} (e.g. \citealt{gronwall07}, \citealt{guaita10}, \citealt{wold14}, \citealt{jung18}).
\begin{equation}
\label{eqn:ewdist}
\frac{\mrm{d}N}{\mrm{d}\ew} \propto e^{-\ew / W_0}
\end{equation}
We show our sample's rest-frame $\ewlya$ distribution in Figure~\ref{fig:ew_dist} with an $e$-folding scale $W_0=100 \ \ang$ drawn for comparison. We cannot measure the underlying distribution for LAEs from our sample since we have not measured the completeness as a function of equivalent width (which is complex due to our method of sample creation, and not crucial for our study of stellar population properties). Various other studies have precisely measured the \lya\ equivalent width distribution, such as \citet{gronwall07}, who found an $e$-folding scale of $76^{+11}_{-8} \ \ang$ for a deep, narrow-band (NB) selected \lae\ sample at $z=3.1$, \citet{guaita10}, who measured $W_0=50 \pm 7 \ \ang$ for a NB sample at $z=2.1$, and recently \citet{santos20}, who measured $W_0=129 \pm 11 \ \ang$ for the full SC4K sample at \zrange{2}{6}. We plot some of these measured distributions in Figure~\ref{fig:ew_dist} for comparison. It is apparent that our sample becomes increasingly incomplete at $\ewlya \lesssim 50 \ \ang$, due to a combination of the HETDEX flux limit, the emission-line identification process, and our counterpart selection process.
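Because the exponential form of Equation~\ref{eqn:ewdist} is memoryless, a maximum-likelihood estimate of $W_0$ above a completeness limit reduces to a mean excess. A minimal sketch (our own, assuming a sharp completeness cut rather than the survey's true selection function):

```python
import numpy as np

# Sketch: MLE of the e-folding scale W0 for dN/dEW ~ exp(-EW/W0),
# using only EWs above an assumed sharp completeness limit ew_min.
# (The memoryless property makes this the mean excess above the cut.)
def efolding_scale(ew, ew_min=50.0):
    above = np.asarray(ew, float)
    above = above[above >= ew_min]
    return float(np.mean(above - ew_min))
```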
\subsection{Correlations between $\ewlya$ and Galaxy Properties}
\label{ssec:corrs}
We combined our SED-derived galaxy properties with the $\ewlya$ measurements described above in order to assess correlations between \lya\ emission and global galaxy properties. We used $\ewlya$ as a proxy for the fraction of photons emitted as \lya\ as opposed to $L_{\mathrm{Ly\alpha}}$, for example, because the equivalent width more closely probes the physics governing \lya\ escape, whereas the flux also includes physics related to the \lya\ production rate. Figure~\ref{fig:correlations} shows $M_\star$, specific star formation rate (sSFR), star formation rate (SFR), dust extinction ($A_V$), mass-weighted stellar population age, and UV-slope ($\beta$) plotted against each galaxy's $\ewlya$ measurement. In the figure, error bars denote the 16th to 84th percentile range, and we indicate Pearson's linear correlation coefficient, $r_p$, and its significance ($p$-value) with text.
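The quoted statistic can be reproduced with a few lines of \code{Numpy}. This sketch (ours, not the paper's code) returns $r_p$ together with the $t$-statistic from which the two-sided $p$-value follows (e.g. via a Student-$t$ survival function):

```python
import numpy as np

# Sketch: Pearson's linear correlation coefficient r_p between a galaxy
# property x and EW(Lya) y, plus the t-statistic with n - 2 degrees of
# freedom used to assess its significance.
def pearson_r(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    r = (xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum())
    t = r * np.sqrt((len(x) - 2) / (1.0 - r**2))
    return r, t
```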
Stellar mass and star formation rate both correlate strongly with $\ewlya$, with low mass, low SFR systems achieving larger $\ewlya$ than higher mass systems. The correlation with mass has been established in the literature from studies of a wide variety of galaxies such as LBGs, oELGs, and LAEs. It was noticed early by \citet{ando06} and measured recently by many works such as \citet{du18}, \citet{marchi19}, \citet{oyarzun17}, and \citet{shimakawa17}. Specifically, \citet{weiss21} found a negative correlation between the \lya\ escape fraction, \fesc, and stellar mass using data from the \hd\ survey. Additionally, \citet{khostovan21} found an intrinsic, negative correlation between H$\alpha$ equivalent width and galaxy stellar mass from a NB survey at $z \sim 5$. While the lack of low-EW, low-mass systems can be driven by selection incompleteness, we should be complete to high-mass, high-EW systems, yet these are seemingly rare.
Notably, our results show no significant anti-correlation between $\ewlya$ and dust extinction ($A_V$), whereas numerous other studies of Lyman-alpha emission measured a clear relationship that indicates dust hinders the ability of the \lya\ photon to escape the galaxy. For example, \citet{shapley03}, \citet{guaita11}, \citet{du18}, \citet{hathi16}, \citet{huang21}, \citet{marchi19}, \citet{matthee16}, \citet{reddy21}, \citet{trainor19}, and \citet{weiss21}, all showed that dustier galaxies exhibit weaker Lyman-alpha emission measured as $\ewlya$ or have smaller \fesc. However, the lack of a significant anti-correlation may be due to our limited sample size and small dynamic range in dust attenuation. Moreover, objects with significant amounts of dust that suppress their \lya\ fluxes would not become members of our science sample in the first place. The majority of our sample has $A_V <$ 0.3. We do observe multiple galaxies with $A_V >$ 0.5, and interestingly these do not all have low $\ewlya$, implying that \lya\ can escape even from modestly dusty galaxies, which could indicate enhanced escape due to outflows (e.g., \citealt{steidel10}, \citealt{erb12}) or a multi-phase ISM (e.g. \citealt{fink09}, \citealt{neufeld91}).
Our Pearson correlation coefficient suggests a moderate correlation between $\ewlya$ and galaxy stellar mass-weighted age ($r_p=0.32$) in the sense that older galaxies exhibit larger $\ewlya$. \citet{marchi19} found a similar result, obtaining a Spearman rank correlation coefficient of 0.40. This contrasts with \citet{pentericci09} and \citet{pentericci10} who found no strong dependence of \lya\ equivalent width on age for \laes\ and LBGs, as well as \citet{reddy21} who found a weak negative correlation between the two measurements for star-forming galaxies in the same redshift range probed by this study.
Finally, a moderate negative correlation exists between sSFR and $\ewlya$, though the large error bars for our measurements of sSFR weaken the reliability of the correlation. For comparison, \citet{hathi16} found no significant correlation between the two properties for a sample including \lya\ in absorption and emission.
We also plot SFR against $M_\star$ for all objects in our sample in Figure~\ref{fig:sfms} to see how our galaxies compare to other objects at similar redshift in relation to the star-forming main sequence (SFMS). We include the best-fit line found by \citet{sanders18} for star forming galaxies in the MOSDEF survey at $z \sim 2.3$. Note that masses derived for that study used the \citet{chabrier03} IMF and \citet{calzetti00} dust curve but stellar population synthesis models from \citet{conroy09}. We also use a colorbar to show the value of $\ewlya$ for each galaxy. The position of LAEs on the SFMS remains somewhat controversial. Studies such as \citet{vargas14}, \citet{keely15}, \citet{hagen16}, and \citet{santos20} found LAEs to lie above the relation, while other studies have interpreted them as lying directly on the low-mass end of the relation (e.g. \citealt{kusakabe18}). Figure~\ref{fig:sfms} shows that the LAEs in our sample lie largely on the SFMS, though a significant fraction lie below the relation of \citet{sanders18} for $M_\star < 10^9 \ \msun$.
In Appendix \ref{sec:appendixa}, we explore the model-dependence of our measured galaxy properties, since the parameters derived from SED fitting can be systematically different using different models (see \citealt{conroy13}). We conclude that our results, including the median physical properties and the correlations with $\ewlya$ are not driven by our specific choice of model.
\section{Discussion}
\label{sec:discussion}
\subsection{Are HETDEX \laes\ Special?}
The question, ``What is a HETDEX LAE?'' holds particular importance for astronomers studying galaxy science with this survey. A vast sample of HETDEX LAEs is upcoming, and samples of such objects selected by emission line detection from a blind spectroscopic survey remain rare in the literature (with the exception of the HETDEX Pilot Survey (\citealt{adams11}, \citealt{blanc11}), which probed a smaller area to a brighter flux limit, and MUSE surveys, which probe much smaller areas to fainter flux limits with only a small overlap in redshift with \hd). Characterizing any idiosyncrasies in the HETDEX LAE population will put these objects in context relative to the numerous LAEs found by previous studies, and it will aid the interpretation of future blind spectroscopic surveys for these objects in the EoR.
As described above, in our $f_{\mathrm{Ly\alpha}} \gtrsim 6 \times 10^{-17}\ \flux$ flux-limited sample \citep{gebhardt21},
the median galaxy mass of $0.8^{+2.9}_{-0.5} \times 10^{9} \ \msun$ lies very close to many LAE samples selected through narrow band imaging. For example, \citet{gawiser07} found a median mass of $1^{+0.6}_{-0.4} \times 10^9 \ \msun$ with a flux limit of $1.5 \times 10^{-17} \mrm{erg \ s^{-1} \ cm^{-2}}$ at $z=3.1$. \citet{guaita11} pushed to an even lower median mass of $\sim 4 \times 10^8 \ \msun$, roughly a factor of two less massive than this sample's median, with a flux limit of $2.0 \times 10^{-17} \ \flux$ at $z=2.1$. The MUSE HUDF went even deeper, finding sources at $z>3$ with \lya\ line fluxes as small as $\sim2 \times 10^{-18} \ \flux$ and obtaining a median sample mass of $\sim 2.5 \times 10^8 \ \msun$. The sample of \citet{santos20} was limited by medium-band line flux limits spanning $3.0-4.8 \times 10^{-17} \ \flux$ over \zrange{2}{6} \citep{sobral18} and measured a median \lae\ mass of $\sim 2 \times 10^9 \ \msun$, consistent with this study. Of course, the mass range probed by HETDEX falls far below samples selected using the Lyman/Lyman-alpha break (for example, the lowest mass probed by \citet{papovich01} was $10^{10} \ \msun$ at \zrange{2.0}{3.5}). Thus, the HETDEX flux limit explores an \lae\ mass range comparable to NB surveys, yet slightly more massive than the deepest NB and spectroscopic surveys. At the expense of sensitivity, the HETDEX survey can find fairly low-mass LAEs over a large continuous redshift interval, reducing the effects of cosmic variance compared to NB observations.
As mentioned in \S\ref{ssec:sedprops}, the LAEs in this sample do not stand out from NB samples at similar redshift in terms of age, star formation rate, and dust extinction. Thus, we can conclude that the HETDEX survey selects a typical LAE having properties consistent with the general NB-selected population, but it may have slightly higher stellar mass based on the line flux limit of the survey.
Nonetheless, our sample may stand out in its relation to the SFR-$M_\star$ relation shown in Figure~\ref{fig:sfms}. Compared to the relation measured in \citet{sanders18}, LAEs in the sample with $M_\star \lesssim 10^9 \ \msun$ appear to lie below the trend. This contrasts markedly with the work of \cite{hagen16}, who compiled their sample using the HETDEX Pilot survey (\citealt{adams11}, \citealt{blanc11}) and found their LAEs to lie above the SFMS. Interestingly, the LAEs lying below the SFMS in Figure~\ref{fig:sfms} have very high $\ewlya$, which correlates with lower $M_\star$ and SFR in Figure~\ref{fig:correlations}. We are not surprised that the lowest mass systems in our sample have the highest values of $\ewlya$ given the negative correlation with $M_\star$ and the fact that low mass objects need large $\ewlya$ to be detected by \hd, but their position below the SFMS is peculiar. It could be related to the weak negative correlation we found between $\ewlya$ and sSFR, or could simply be an artifact of our small sample size. This motivates further study of the positions of LAEs on the SFMS with larger samples.
\subsection{Which Properties Drive \lya\ Emission?}
While the size of the sample analyzed in this study is small, we were still able to extract important information linking galaxy stellar-population properties to \lya\ emission strength. As the number of LAEs detected by HETDEX grows in fields with rich photometric data, such as the Spitzer-HETDEX Exploratory Large-Area Survey (SHELA) \citep{papovich16}, the number of LAEs with measured galaxy properties will grow by many orders of magnitude. This will provide a trove of useful data for explaining why some galaxies shine brightly in \lya\ while others do not, as well as exploring the effects of galaxy environment on \lya\ emission.
We found a significant, strong negative correlation between $\ewlya$ and stellar mass in our sample (see the top left panel of Figure~\ref{fig:correlations}). This trend is often theoretically attributed to low mass, star-forming galaxies having less neutral gas to resonantly scatter the \lya\ photon (as well as less dust) leading to a shorter total path length to exit the galaxy without absorption by dust (see \citealt{ando06}). In this sample, $\ewlya$ also negatively correlated (even more strongly) with SFR, and the fact that stellar mass and star formation rate correlate strongly with each other complicates the interpretation of this result. \citet{weiss21} addressed this issue by binning their sample of [O\,{\sc iii}]-emitting galaxies with \lya\ line flux measurements from \hd\ according to stellar mass and SFR. They found mass to better predict \fesc\ at fixed SFR than SFR did at fixed mass.
Fascinatingly, we did not find even a weak correlation between dust extinction and $\ewlya$. This seems surprising given that many authors have noted such a correlation and that the theoretical explanation is inarguable: resonantly scattered \lya\ photons can get absorbed readily in the presence of even a small amount of dust. A partial explanation for our sample's behavior with $A_V$ could be that it consists of systems exhibiting strong \lya\ emission, not absorption. For example, \citet{reddy21} studied systems with \lya\ in net absorption or emission and found a strong correlation between $\ewlya$ and $E(B-V)$. If our sample contained objects with negative $\ewlya$, perhaps those objects would reveal the correlation. Nevertheless, other studies of only emitters ($\ewlya > 0$) have also noted a trend with dust extinction, such as \citet{marchi19}, though a close examination of their Figure 7 shows that the negative correlation is largely driven by weak emitters with $\ewlya < 10\ \ang$. Our small dynamic range in $\ewlya$ may obfuscate a correlation with dust extinction. This interpretation may also be complicated by the \lya\ photon's ability to escape the galaxy even in the presence of large amounts of dust. Given a clumpy ISM geometry, clumps of gas and dust can act as mirrors to \lya\ photons, which ``bounce'' off the surfaces of these clumps through resonant scattering by neutral gas, while continuum photons pass through and thus experience extinction. \citet{gronke16} found that simulated \lya\ emission lines agreed well with observations for models with clumpy ISM geometries, and \citet{fink09} found that clumpy-ISM models better fit the SEDs of over half their NB-selected sample of LAEs at $z \sim 4.5$. \citet{vargas14} also found their sample of 20 NB-selected LAEs at $z=2.1$ favored clumpy-ISM models.
Lastly, we found a moderate correlation between $\ewlya$ and galaxy mass-weighted age. The strength of \lya\ emission depends on both its production through recombination in HII regions as well as its escape through channels in the ISM with low neutral gas covering fractions, so the interplay between these processes determines $\ewlya$. As noted by \citet{marchi19}, who obtained a similar result, the trend with age could arise from older systems having experienced intense star formation in their past, where stellar winds and radiation cleared out neutral gas and dust, leaving channels for \lya\ escape. Through ongoing star formation or recent bursts, these objects can still produce \lya\ photons, and the ISM conditions favor their escape. For the youngest galaxies, even though the most massive, ionizing photon-producing stars are present, it is possible that a significant amount of dust and neutral gas has yet to be swept away, hindering the escape of \lya.
\section{Predicting Lyman-alpha emission in the Epoch of Reionization}
\label{sec:predictions}
Using our knowledge of \lya\ emission from \hd\ galaxies situated in an ionized IGM, we can attempt to predict the intrinsic emission strength of LAEs at $z>7$, an era where starlight from galaxies was still actively re-ionizing the universe.
\subsection{An LAE Sample in the Epoch of Reionization}
Our sample at \zrange{1.9}{3.5} provides a view of \lya\ emission unobscured by a significant IGM neutral fraction. By creating a predictive model that connects global galaxy properties to their intrinsic $\ewlya$ in this pristine era, we can apply it to LAEs in the EoR to derive their expected intrinsic $\ewlya$, and then attribute any deficiency of \lya\ emission from objects in the EoR to an increasing neutral fraction. This does require the assumption that the production and escape of \lya\ photons do not evolve with redshift for fixed galaxy properties, which will require further testing. As a pilot attempt here, we took advantage of the sample of $z > 7$ \laes\ that \citet{jung20} found in \gn\ to test our ability to predict \lya\ emission from EoR galaxies.
Using a deep, spectroscopic survey conducted with Keck/MOSFIRE, \citet{jung20} found 10 $>4\sigma$ \lya\ detections at $z > 7$ among 72 high-$z$ candidate galaxies. Such objects likely reside in ionized bubbles of the IGM, allowing the \lya\ photon to redshift away from the resonant frequency, thereby lowering the absorption cross-section with neutral hydrogen. These emitters thus serve as direct tests of our understanding of the galaxy properties that modulate \lya\ emission strength from the ISM/CGM.
Because the photometric catalog for the \gn\ field contains the \laes\ discovered by \citet{jung20}, we performed the same SED analysis detailed in section~\ref{ssec:sedfitting} for those objects. We again masked all photometric bands including and blueward of \lya\ given the object's spectroscopic redshift. For most of the $z>7$ LAEs, this left 3 \hst\ filters as well as both \spitzer/IRAC channels. We again used \bagpipes\ to estimate the galaxy properties, adopting our fiducial model (delayed-$\tau$ SFH, \citealt{calzetti94} dust law). Figure~\ref{fig:intae_sed} shows an example fit for an object at $z=7.51$.
\subsection{A Predictive Model for $\ewlya$}
To predict the \lya\ equivalent widths of the $z>7$ sample, we chose several properties that strongly impact the emergent \lya\ emission from galaxies: stellar mass, dust extinction, and star formation rate. As discussed above, stellar mass may determine the amount of neutral hydrogen gas (and thus dust) in the galaxy as well as the total path length needed to escape. In the presence of dust, \lya\ photons may terminate their resonant scattering process through absorption by a dust grain followed by re-emission at longer wavelengths, limiting the likelihood of escape. Finally, the global star formation rate impacts the production of UV photons that can create \lya\ through recombination, and feedback from star formation may impact the structure of the ISM itself, creating ionized channels for escape.
Using the posterior distributions sampled by \bagpipes, we matched each $z>7$ emitter to \laes\ in the \hd\ sample based on SED-derived properties. To do this, we calculated the ``separation'' in the log mass, SFR, dust attenuation parameter space from the EoR \laes\ to each \lae\ in the \hd\ sample. For the separation calculation, we divided each parameter value by the full range of values in the sample to normalize the parameter space. For example, for log stellar mass, an object in the \hd\ sample with log mass halfway between the sample minimum and maximum would have a value of 0.5, so the difference between 0.5 and the EoR \lae\ log stellar mass scaled the same way would become input to the Euclidean distance formula. We then ranked the \hd\ \laes\ by separation in parameter space and constructed the prediction using the $N = 3$, $5$, and $7$ closest neighbors. We computed the posterior $\ewlya$ distribution by co-adding Gaussian distributions with mean and standard deviation set by the $\ewlya$ measurements and error bars in our sample. To give more importance to those LAEs that closely resembled the EoR galaxy, we weighted each Gaussian distribution by the inverse of its squared distance in parameter space from the EoR galaxy when co-adding to obtain the final prediction. The predicted $\ewlya$ distributions are normalized such that the integral over all equivalent widths equals unity.
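The matching and co-addition scheme described above can be sketched as follows (an illustrative reimplementation with our own function and variable names, assuming one row of [log $M_\star$, log SFR, $A_V$] per galaxy):

```python
import numpy as np

# Illustrative sketch of the EW(Lya) prediction: scale each parameter by
# its full sample range, find the N nearest HETDEX LAEs in that space,
# and co-add Gaussians weighted by inverse squared distance.
def predict_ew_pdf(target, params, ew, ew_err, grid, n_neighbors=5):
    params = np.asarray(params, float)           # (n_gal, n_param)
    span = params.max(axis=0) - params.min(axis=0)
    dist = np.linalg.norm((params - np.asarray(target)) / span, axis=1)
    idx = np.argsort(dist)[:n_neighbors]
    weights = 1.0 / dist[idx] ** 2               # inverse-square weighting
    pdf = np.zeros_like(grid, dtype=float)
    for i, w in zip(idx, weights):
        gauss = np.exp(-0.5 * ((grid - ew[i]) / ew_err[i]) ** 2)
        pdf += w * gauss / (np.sqrt(2.0 * np.pi) * ew_err[i])
    pdf /= pdf.sum() * (grid[1] - grid[0])       # unit integral on the grid
    return pdf
```

A point prediction and its error bar then follow from the moments of this distribution.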
Figure~\ref{fig:ew_pred_sn4} shows our predicted $\ewlya$ distributions for \laes\ in the \citet{jung20} sample with Ly$\alpha$ $S/N>4$. We show predictions using three different values of $N$, the number of nearest neighbors in parameter space, to reveal any stochasticity in the prediction. The measured \lya\ equivalent widths from \citet{jung20} are indicated by vertical dashed lines with $1\sigma$ error intervals shaded grey. Importantly, we only expect our predictions to match the observed equivalent widths of EoR \laes\ if they exist in ionized bubbles. If the EoR \laes\ instead exist in regions of the IGM with significant neutral fractions, we expect to over-predict the \lya\ emission. On the other hand, an under-prediction of the \lya\ emission from an EoR object would imply our sample size is too small to account for the diversity in physical properties of the \lae\ population.
In Figure~\ref{fig:intae_pred_vs_obs}, we plot the predicted versus observed equivalent widths with a one-to-one line drawn to facilitate comparison. Each object's predicted value and error were calculated as the first moment and square root of the second moment of the $N=5$ curves in Figure~\ref{fig:ew_pred_sn4}, respectively. In five out of ten cases (ID z7\_GND\_18626, z7\_GND\_44088, z7\_GND\_42912, z7\_GND\_22233, and z7\_GND\_39781) the $1\sigma$ interval of our $\ewlya$ predictions overlapped with the $1\sigma$ interval of the observational measurement, indicating moderate agreement. For strong emitters (observed $\ewlya > 20\ \ang$), our prediction overlapped with observation five out of eight times. Furthermore, two strong emitters (z7\_GND\_42912 and z7\_GND\_16863), postulated by \citet{jung20} to inhabit ionized bubbles, had observed equivalent widths greater than or equal to the majority of our predicted $\ewlya$ distributions, as one might expect for sources with little IGM attenuation.
It is not surprising that our model failed to predict weak \lya\ emission accurately. First, our model predicts \lya\ EWs in the absence of IGM absorption, thus an under-prediction could imply significant absorption of \lya\ photons by neutral hydrogen in the IGM. Second, as our sample by construction contains far more strong emitters than weak ones (see Figure~\ref{fig:ew_dist}), this could presently bias us drastically towards an over-prediction of \lya\ emission strength. We note that we under-predicted the emission from ID z7\_GND\_34204 (indicated by an arrow in Figure \ref{fig:intae_pred_vs_obs}), which could be attributed to the dearth of objects in our sample with very high equivalent widths to match with that object's value, $\sim 280 \ \ang$.
ID z7\_GND\_42912 offers a good example of how challenging predicting \lya\ emission can be. As $N$ increases, the peak of the predicted distribution shifts from agreeing well with the observation to under-predicting it. It is clear that our sample is presently too small to fully span the parameter space in both $\ewlya$ and physical properties. Future analyses with much larger samples made possible by \hd\ should be able to better capture the mean trends as well as variance in galaxy parameters that determine \lya\ emission strength.
Some of the predictions in Figure~\ref{fig:ew_pred_sn4} bode well for constraining the expected $\ewlya$ given a suite of galaxy properties measured from broadband SED fitting. With larger samples that suffer less from the inherent idiosyncratic behavior of \lya\ emission (for example, its dependence on the observer's line-of-sight), a rigorous, statistical understanding of the properties that drive that emission will arise, unlocking the potential of LAEs to probe cosmic reionization. We further note that, with larger samples, machine learning (ML) may prove an invaluable tool in making the nuanced connection between global galaxy properties and \lya\ emission strength, as the problem requires a regression analysis well suited for ML techniques.
\section{Summary}
\label{sec:summary}
We used SED fitting to study the properties of a sample of LAEs from the \hd\ survey in \gn\ to better understand the phenomenology behind \lya\ emission and ultimately leverage these beacons of light in the distant Universe as probes of cosmic reionization.
To build the sample, we inspected \ninspect\ emission line detections to determine if the line was \lya\ or a feature from a low-redshift galaxy, such as \oii. We then created a procedure to synthesize information about angular separation from the emission line detection position, extracted emission line flux, and $\chi^2$ of the SED fit assuming $z_\mathrm{Ly\alpha}$ to identify the continuum counterpart in our deep, multi-band \hst\ imaging in \gn. After removing detections with no counterparts, AGN contaminants, and sources with insufficient photometric data, we analyzed a sample of \nsamp\ LAEs using SED fitting performed by \bagpipes.
Our sample's properties were consistent with studies of LAEs from NB imaging surveys at similar redshifts. Our median sample mass was $0.8^{+2.9}_{-0.5} \times 10^{9} \ \msun$, and the galaxies' SFRs appeared to put them approximately on the star-forming main sequence, except for at $M_\star < 10^9\ \msun$. Using \lya\ emission line flux measurements from \hd, we also studied correlations between $\ewlya$ and galaxy properties. We found strong correlations between $\ewlya$ and stellar mass as well as SFR. We additionally found a moderate correlation where galaxies with older stellar populations had larger \lya\ equivalent widths. Interestingly, we did not find a significant impact of dust extinction on $\ewlya$, whereas many other studies have. Overall, this paints a picture of LAEs as low-mass systems with moderate star formation activity wherein \lya\ photons can escape even in the presence of dust. Also, the LAEs detected by \hd\ do not stand out significantly in terms of their stellar population properties from LAEs found using NB imaging with comparable flux limits.
Finally, we used our LAE sample to try to predict the value of $\ewlya$ for ten LAEs at $z>7$ by matching the distinct samples in the parameter space of mass, SFR, and dust extinction. Our prediction matched the data at the $1\sigma$ level five out of ten times (5/8 for strong emitters); the three over-predictions could indicate significant absorption by neutral hydrogen in the IGM. With larger sample sizes in the near future and tools such as machine learning, we are optimistic about the ability of \hd\ LAEs to unlock the potential of \lya\ as a reliable reionization probe.
\section{Acknowledgements}
APM and SLF acknowledge support from the National Science Foundation, through grants AST-1908817 and AST-1614798. I.J. acknowledges support from NASA under award number 80GSFC21M0002.
HETDEX is led by the University of Texas at Austin McDonald Observatory and Department of Astronomy with participation from the Ludwig-Maximilians-Universität München, Max-Planck-Institut für Extraterrestrische Physik (MPE), Leibniz-Institut für Astrophysik Potsdam (AIP), Texas A\&M University, Pennsylvania State University, Institut für Astrophysik Göttingen, The University of Oxford, Max-Planck-Institut für Astrophysik (MPA), The University of Tokyo and Missouri University of Science and Technology. In addition to Institutional support, HETDEX is funded by the National Science Foundation (grant AST-0926815), the State of Texas, the US Air Force (AFRL FA9451-04-2- 0355), and generous support from private individuals and foundations.
The observations were obtained with the Hobby-Eberly Telescope (HET), which is a joint project of the University of Texas at Austin, the Pennsylvania State University, Ludwig-Maximilians-Universität München, and Georg-August-Universität Göttingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly.
VIRUS is a joint project of the University of Texas at Austin, Leibniz-Institut f{\" u}r Astrophysik Potsdam (AIP), Texas A\&M University (TAMU), Max-Planck-Institut f{\" u}r Extraterrestrische Physik (MPE), Ludwig-Maximilians-Universit{\" a}t M{\" u}nchen, Pennsylvania State University, Institut f{\" u}r Astrophysik G{\" o}ttingen, University of Oxford, and the Max-Planck-Institut f{\" u}r Astrophysik (MPA).
The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing high performance computing, visualization, and storage resources that have contributed to the research results reported within this paper. URL: \url{http://www.tacc.utexas.edu}
The Institute for Gravitation and the Cosmos is supported by the Eberly College of Science and the Office of the Senior Vice President for Research at Pennsylvania State University.
\software{get\_spectrum.py \\ (https://github.com/HETDEX/hetdex\_api), emcee (Foreman-Mackey et al. 2013), Bagpipes (Carnall et al. 2018), Numpy (Harris et al. 2020)}
\bibliographystyle{aasjournal}
\clearpage
\appendix
\section{Model-Dependence of Measured Galaxy Properties}
\label{sec:appendixa}
Bayesian approaches to SED fitting, like the one implemented in \bagpipes, provide robust constraints on the parameter uncertainties and their interdependence, but the model chosen for comparison to the data (as well as the chosen priors) determines the accuracy of those estimates. In other words, an inaccurate model yields inaccurate measurements of galaxy properties. Many galaxy SED fitting studies have shown that model choices, such as the SFH, systematically impact the measured galaxy properties (see \citealt{conroy13} for a review).
To test the robustness of our results to different modeling choices, we performed an additional analysis of our entire sample using an alternate model. We did not seek to find a more (or less) accurate model; we simply wanted a different model to determine if the median properties or correlations between \lya\ emission and galaxy properties changed. To this end, we adopted a constant SFH parametrization as well as the dust absorption model of \citet{CharlotFall}. The constant SFH required two parameters: the time when star formation began and the constant star formation rate. For dust attenuation, we adopted the recipe given in \citet{CharlotFall} by using an absorption curve proportional to $\lambda^{-0.7}$, and a factor of three reduction in the dust extinction normalization for stellar populations older than $10^7$ years to account for the dispersal of stellar birth clouds. The authors found this recipe to match the absorption of stellar continuum and nebular emission for nearby starburst galaxies very well, and the differential extinction toward young stars differs markedly from the treatment by \citet{calzetti94} used in our ``fiducial'' model presented above.
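The alternate recipe amounts to a two-component attenuation law; a minimal sketch follows (our own illustration; the function name and the 5500 \ang\ pivot wavelength are assumptions for concreteness, not \bagpipes\ internals):

```python
import numpy as np

# Sketch of the Charlot & Fall (2000)-style recipe described above:
# attenuation proportional to lambda^-0.7, with the normalization reduced
# by a factor of three for populations older than 10^7 yr (dispersed
# birth clouds). The 5500 Angstrom pivot is an illustrative assumption.
def attenuation_mag(wave_aa, a_v, age_yr, pivot_aa=5500.0):
    norm = a_v if age_yr <= 1e7 else a_v / 3.0
    return norm * (np.asarray(wave_aa, float) / pivot_aa) ** -0.7
```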
Figure~\ref{fig:modcomphist} shows the distribution of \lae\ properties measured using the alternate model compared with the fiducial model. The sample median stellar mass increased by 0.1 dex, as did the median SFR. These two changes do not affect our results or interpretation significantly. The median dust attenuation dropped from $A_V = 0.30$ to 0.17, a fairly substantial change, but not unusual given the factor-of-a-few discrepancies commonly found between different models and SED-fitting codes (see \citealt{leja17}). Nonetheless, the correlations between galaxy properties and $\ewlya$ remained unaffected by the model modifications, as shown in Figure~\ref{fig:modcompscat}. Stellar mass and SFR correlated strongly and negatively with \lya\ emission strength, while other parameters, like dust extinction, continued to show no significant correlations.
\section{Imaging, Emission lines, and SED fits for LAEs in this study}
\label{sec:appendixb}
In this section, for all \nsamp\ \laes\ in our sample, we present \hst\ imaging cutouts in Figure~\ref{fig:hst_stamps} showing the sources and any neighbors, the \hd\ \lya\ emission line detections in Figure~\ref{fig:all_lines}, and the SED fits with \bagpipes\ \citep{carnall18} used to measure physical properties in Figure~\ref{fig:all_sed_2}.
|
Title:
Dust, CO and [CI]: Cross-calibration of molecular gas mass tracers in metal-rich galaxies across cosmic time |
Abstract: We present a self-consistent cross-calibration of the three main molecular
gas mass tracers in galaxies, the $\rm ^{12}CO$(1-0), [CI]($^3P_1$-$^3P_0$)
lines, and the submm dust continuum emission, using a sample of 407 galaxies,
ranging from local disks to submillimetre-selected galaxies (SMGs) up to $z
\approx 6$. A Bayesian method is used to produce galaxy-scale universal
calibrations of these molecular gas indicators, that hold over 3-4 orders of
magnitude in infrared luminosity, $L_{\rm IR}$. Regarding the dust continuum,
we use a mass-weighted dust temperature, $T_{\rm mw}$, determined using new
empirical relations between temperature and luminosity. We find the average
$L/M_{\rm mol}$ gas mass conversion factors to be $\alpha_{850}=
6.9\times10^{12}\,\rm W\,Hz^{-1}\,M_{\odot}^{-1}$, $\alpha_{\rm CO} = \rm
4\,M_{\odot} (K\,km\,s^{-1}\,pc^2)^{-1}$ and $\alpha_{\rm CI} = \rm 17.0
\,M_{\odot} (K\,km\,s^{-1}\,pc^2)^{-1}$, based on the assumption that the mean
dust properties of the sample ($\kappa_H$ = gas-to-dust ratio/dust emissivity)
will be similar to those of local metal rich galaxies and the MW. The tracer
with the least intrinsic scatter is [CI](1-0), while CO(1-0) has the highest.
The conversion factors show a weak but significant correlation with $L_{\rm
IR}$. Assuming dust properties typical of metal-rich galaxies, we infer a
neutral carbon abundance $X_{\rm CI} = [C^0/\rm mol]=1.6\times 10^{-5}$,
similar to that in the MW. We find no evidence for bimodality of $\alpha_{\rm
CO}$ between main-sequence (MS) galaxies and those with extreme star-formation
intensity, i.e. ULIRGs and SMGs. The means of the three conversion factors are
found to be similar between MS galaxies and ULIRGs/SMGs, to within 10-20%. We
show that for metal-rich galaxies, near-universal average values for
$\alpha_{\rm CO}$, $X_{\rm CI}$ and $\kappa_H$ are adequate for global
molecular gas estimates.
| https://export.arxiv.org/pdf/2208.01622 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
ISM: dust, extinction; Galaxies: high redshift; Submillimetre: galaxies, ISM; Radio lines: galaxies, ISM
\end{keywords}
\section{Introduction}
The cosmic star-formation rate (SFR) density has declined by more than
an order of magnitude during the past $\approx 8$\,Gyr of cosmic
history \citep{Lilly1996,Madau1996,Madau2014}. The driver of star
formation is the molecular gas supply in galaxies,
and indeed the SFR--stellar mass (SFR--$M_\star$) relationship known
as the galaxy main sequence (MS) is purely a by-product of the
relationship between SFR and molecular gas \citep[e.g.][]{baker2022},
for unperturbed galaxies with significant gas reserves. A major
observational goal is to produce a combined census of the molecular
gas -- the `potential for future star formation' -- and the stellar
content -- the `record of past star formation' -- over this period
\citep[e.g.][]{Keres2003,Dunne2003,Dunne2011,Zwaan2004,
Zafar2013,Walter2014,Decarli2016,Saintonge2017,Driver2018,Rhee2018,Decarli2019,
Riechers2019}.
The molecular gas fraction of a galaxy is a crucial component in
models of galaxy formation \citep[e.g.][]{Obreschkow2009,Popping2014,
Lagos2015,Chen2018} and thus measurements of $\rm H_2$ and stellar
mass over large representative galaxy samples are key requirements for
understanding how galaxies have transformed from clouds of gas
residing in dark matter haloes into the regular agglomerations of
stars we see in the local Universe. While it is clear that
CO(1--0)-luminous gas is the phase linked with star formation
\citep[e.g.][]{Wong2002}, observations of molecules with higher critical densities
(e.g.\ HCN) revealed that it is the dense H$_2$ gas phase
($n> 10^4$\,cm$^{-3}$) that correlates most tightly and linearly with tracers of star-formation \citep{Gao2004}.
Atomic hydrogen (\HI), on the other hand, constitutes a longer-term gas
reservoir for star formation, where under certain conditions of pressure, far-UV radiation field, density and metallicity, a phase transition \HI\ $\rightarrow$ H$_2$ takes place, catalysed by dust grains \citep[e.g.][]{ElmegreenH21993,PPP2002,Blitz2006}: a picture supported
by numerous observations \citep[e.g.][]{Honma1995,Leroy2008,Bigiel2008,Schruba2011}. This transition occurs in the inner \HI\ distribution of galaxies, in the cold neutral medium (CNM: $n\sim 50$--100\,cm$^{-3}$, $T_{\rm kin}\sim 100$--200\,{\sc k}), whereas pure \HI\ gas often extends many
optical radii beyond the luminous stellar disk
\citep[e.g.][]{peroux2020}, where it can be found
concomitant with cold dust (e.g. \citealp{thomas2002}).
Unlike \HI\ and its hyperfine line emission at 21\,cm, the H$_2$ molecule in its S(0): $J=2$--$0$ transition at 28\,$\mu$m (the least excitation-demanding $\rm H_2$ line) is essentially invisible at temperatures typical of giant molecular clouds (10--20\,{\sc k}). This is because its $\Delta E/k_{\rm B}\sim 510$\,{\sc k} limits its excitation and detection to shocked regions of molecular clouds, where gas temperatures can rise past $\sim 1000$\,{\sc k}, for small ($\sim 1$--2 per cent) gas mass fractions. Even then, observing this \mol\ line at 28\,$\mu$m requires space-borne telescopes.
For these reasons the rotational transitions of CO (the next most abundant
molecule with $\rm [CO/H_2]\sim 10^{-4}$) are commonly used to trace $\rm H_2$
gas, with the lowest transition ($^{12}$CO $J=1$--0) being the most established
tracer. Its $E_{10}/k_{\rm B}\sim 5.5$\,{\sc k} ensures a well-populated
upper level even in the coldest gas, while its low critical density,
$n_{\rm cr}\sim 400$\,cm$^{-3}$, ensures its excitation even at low
densities\footnote{Because the CO(1--0) line is typically
optically thick, with $\tau_{10}\sim 5$--10 (\citealp[e.g.][]{BS1996,Papadopoulos2012}: their Eqn. 11), the {\it effective}
critical density is lower still: $n_{\rm cr}(\beta_{10})= \beta_{10}
n_{\rm crit}\sim 40$--80\,cm$^{-3}$, where $\beta_{10}=(1-e^{-\tau
_{10}})/\tau_{10}$ is the line escape probability.}.
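The footnote's effective critical densities follow directly from the quoted numbers; a minimal numerical sketch (Python, illustrative only, using the $\tau_{10}$ and $n_{\rm cr}$ values stated above):

```python
import math

def effective_critical_density(n_crit, tau):
    """Effective critical density n_cr(beta) = beta * n_crit, where
    beta = (1 - exp(-tau)) / tau is the line escape probability."""
    beta = -math.expm1(-tau) / tau   # (1 - e^-tau) / tau, numerically stable
    return n_crit * beta

# CO(1-0): n_crit ~ 400 cm^-3 and tau_10 ~ 5-10 as quoted in the footnote;
# this reproduces the stated ~40-80 cm^-3 range.
for tau in (5.0, 10.0):
    print(tau, round(effective_critical_density(400.0, tau), 1))
```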
The CO(1--0) line has significant optical depths in the typically macro-turbulent $\rm H_2$ gas, though these arise locally within the velocity-coherent gas cells, allowing the CO emission to trace gas
mass throughout molecular clouds \citep[e.g.][]{Dickman1986}. The conversion factor, \aco, in the relation $\Mh=\aco\lcoa$ cannot be determined using standard optically thin line formation physics because of the high line optical depths. This created the need for an \aco\ calibration as soon as the ubiquity of CO line emission in
H$_2$ clouds was established. Observational and theoretical investigation of \aco\ suggests it is sensitive to metallicity, molecular gas surface density and kinematic state in galaxies \citep[e.g.][]{Pelupessy2009,Narayanan2011,Papadopoulos2012,Bolatto2013}.
Three distinct problems are now recognised regarding the use of CO as a global tracer of H$_2$ mass
in galaxies:
\begin{enumerate}
\item{The \aco\ factor is sensitive -- in a highly non-linear fashion -- to
the ISM metallicity and ambient far-UV radiation fields
\citep[e.g.][]{Israel1997, Pak1998, Bolatto2013}.}
\item{Non-self-gravitating molecular clouds -- and/or very different average ISM states in terms of average temperature and gas density range from those found in spiral galaxies where \aco\ was
first calibrated -- can yield systematically different \aco\
factors. For example, \aco $\sim 1/5-1/4\times$ Galactic
was initially reported for a sample of four ULIRGs
by \citet{Downes1998}.}
\item{Elevated cosmic ray (CR) energy densities can destroy CO below a certain gas density threshold, leaving behind more C-rich gas. This density threshold depends on the CR energy density in a highly non-linear fashion, as explored by \citet{Bisbas2015}, who found that regions of CO suppression may occur even in moderately enhanced CR conditions if the gas density is low, while the very high CR energy densities expected in ultraluminous infrared galaxies (ULIRGs) may be partly compensated by higher gas densities in such starbursts. Modelling [\CI/CO] ratios as a function of CR, turbulence, gas density and metallicity is an active area of theoretical research \citep[e.g.][]{Bisbas2015,Bisbas2017,Bisbas2021,Glover2016, Clark2019ci,Papadopoulos2018,Gong2020}.}
\end{enumerate}
In the distant Universe, additional problems arise. High-redshift galaxies are often observed solely in high-$J$ CO lines ($J=3$--2
and higher), due to the observational challenge of observing the two low-$J$ CO lines\footnote{Prior to the commissioning of its bands 1 and 2, low-$J$ lines from high-redshift galaxies were inaccessible to the Atacama Large Millimetre Array (ALMA). The Jansky Very Large Array (JVLA), the Australia Telescope Compact Array (ATCA) and the Green Bank Telescope (GBT) have in some cases been able to access the faint low-$J$ ($J_{\rm u}\leq2$) CO
lines, but this requires huge amounts of observing time in the best available weather.}. Using the high-$J$ lines means that global CO($J+1,J$)/(1--0) ratios must be assumed
before an \aco\ factor can be used; given the wide range of CO spectral-line energy distributions (SLEDs)
found for LIRGs for $J=3$--2 and higher \citep{PPP2012xco,Greve2014,Kamenetzky2016}, these assumptions come with large uncertainties. Finally, at the highest redshifts ($\ga $4),
low-$J$ CO lines (and dust emission) can be severely suppressed for cold gas (and dust) reservoirs due to their low contrast against
the ambient, rest-frame cosmic microwave background \citep{daCunha2013,Zhang2016}.
In principle, radiative transfer models of well-sampled CO (and
$^{13}$CO) SLEDs can yield \aco\ values appropriate for a particular
galaxy (or even galaxy class) \citep[e.g.][]{PPP2012xco,PPP6240, Harrington2021}. Nevertheless, the size of the CO line datasets per galaxy required to do this makes it impractical (in terms of telescope time) to obtain $M$(H$_2$) for large galaxy samples. Amassing a large sample typically means only one or two lines can be gathered per galaxy, and thus a calibration of \aco\ and its uncertainties remains very valuable. The only practical way to achieve this is to cross-calibrate against the other galaxy-scale $\rm H_2$ mass tracers.
Large-area far-infrared (FIR) and submillimetre (submm) surveys
\citep[e.g.][]{Armus2009,Eales2010,Vieira2010,Kennicutt2011,Oliver2012,Hodge2013}
ushered in a new era in which submm continuum emission from dust has been used widely as an
alternative tracer of \Mh, although it has been clear that
submm-derived dust masses ($\propto \lsub$) and CO-derived molecular
gas masses ($\propto \lcoa$) are tightly correlated ever since the
first statistical submm survey of 100 local FIR-bright galaxies
\citep[SLUGS --][]{Dunne2000}. The first suggestions to use dust as an
alternative to CO at high redshift
\citep[e.g.][]{Santini2010,Magdis2012,Scoville2014} were followed quickly by work
demonstrating its potential
\citep[e.g][]{Scoville2016,Hughes2017,Orellana2017}.
An advantage of using submm continuum emission from dust as an $\rm
H_2$ gas tracer is that it becomes easier to measure at high redshift,
because of the negative $K$-correction \citep[e.g.][]{blain1993},
while recent technological advances made it possible to image areas
large enough to be free of cosmic variance, leading to the FIR/submm
detection of many thousands of galaxies by the {\it Herschel Space
Observatory}, for example. The use of dust as a gas mass proxy
requires an estimate of metallicity, since the dust-to-gas ratio,
\gdr, is roughly proportional to metallicity
\citep[e.g.][]{MM2009,Magdis2012, Sandstrom2013,Draine2014}. The
appropriate \gdr\ can then be applied
\citep[e.g.][]{Valentino2018}. Whilst this requirement is often raised
as a problem regarding the use of dust as a gas mass tracer, its
dependence on metallicity is in fact weaker than that of CO\footnote{Moreover,
since $\rm H_2$ cannot be traced (in bulk) by any of its own lines, regardless of which other tracer (X) is used
(dust emission, CO, or $^{13}$CO, or \CI\ line emission), it will always be necessary to assume a $\rm[X/H_2]$ abundance in
order to proceed to a final $\rm H_2$ gas mass estimate.}.
For galaxies selected at FIR/submm/mm wavelengths, it is safe to
assume that the metallicity will be high, such that \gdr\ will be
broadly similar to those found for local metal-rich spirals and the
Milky Way \citep{Dunne2001,
Draine2009,Magdis2012,Sandstrom2013,Rowlands2014,
Yang2017,Berta2021}. A detailed discussion of the advantages and disadvantages of using dust as a tracer of gas can be found in
\citet{Genzel2015} and \citet{Scoville2017}\footnote{Continuum dust emission does not yield information on kinematics, unlike spectral lines.}.
A third method of tracing molecular gas -- the use of atomic carbon
lines -- has come to the fore since ALMA became operational. Its promise was recognised by
\citet{PPP2004} and its first application as a tracer for molecular gas mass in galaxies gave good results
\citep{Weiss2003,PPP&Greve2004}, implying that: a) the
\CIfull\ lines are optically thin for the bulk of $\rm H_2$ gas
\citep{PerezB2015} and b) atomic carbon is present throughout CO-rich
molecular cloud volumes.
The latter contradicts the earlier simple plane-parallel PDR model where atomic carbon (and its line
emission) occupied only a thin layer, sandwiched between $\rm
C^{+}$ in the outer and CO in the inner regions of FUV-illuminated
molecular clouds
\citep{Tielens1985}. However, observations have repeatedly shown
excellent concomitance of \CI\ line emission with CO
line emission, by area and by velocity, and \CI\ shows a tighter correlation with $^{13}$CO than with
$^{12}$CO. \CI\ is now thought to arise from the same volume as the CO,
with similar excitation conditions
\citep[e.g.][]{Plume1999,Ikeda2002,Schneider2003,Beuther2014,PerezB2015}.
Moreover, it may be that \CI\ lines can also trace
CO-dark molecular gas, should such a phase exist in galaxies in
significant amounts, e.g. due to CR-induced dissociation of CO to C (and O) \citep{Bisbas2015}.
Despite being much fainter than the $\rm C^{+}$ line at 158\,$\mu$m
(the prime ISM cooling line), atomic carbon lines do hold certain
advantages, namely: a) they solely trace $\rm H_2$ gas, whereas the $\rm C^{+}$ line also traces the \HI\ and H\,{\sc ii} gas reservoirs, which can
be significant, especially in metal-poor systems
\citep[e.g.][]{Madden1997,Liszt2011,PPP&Geach2012,PerezB2015,Clark2019ci}; b) the \CI\ lines can
remain excited for cold gas (e.g.\ for [\CI](1--0): $E_{10}/k_{\rm B}\sim
24$\,{\sc k}) unlike the $\rm C^{+}$ line, where the $\Delta E/k_{\rm B}\sim 92$\,{\sc k} will keep it very faint for cold gas; c) the frequencies of the two
\CI\ lines, at 492 and 809\,GHz, remain accessible for
galaxies over a much larger redshift range (and thus cosmic volume) than
the $\rm C^{+}$ line. In the latter case, its rest-frame frequency,
$\rm \nu (C^+) \sim 1.9$\,THz, means the $\rm C^{+}$ line is observable
by ALMA's most sensitive receivers only at $z\ga 4$.
Nevertheless, the high rest-frame frequencies of the \CI\ lines made
early observations (and thus any calibration efforts) in the local
Universe very difficult. Initially there had been relatively little
observational work outside of the Milky Way, largely confined to
extreme systems such as quasars and starburst nuclei
\citep[e.g.][]{White1994,Weiss2005, Walter2011}. These studies
advocated a higher carbon abundance for these extreme systems,
$\Xci=\rm [C^0/H_2] = 5$--$12\times 10^{-5}$, compared to the
$\Xci=1$--$2.5\times 10^{-5}$ seen in the Milky Way
\citep{Frerking1989}.
More recently, {\it Herschel} observed many local galaxies in \CI,
although the [\CI](1--0) line was at the edge of the observable range
for the {\it Herschel} Fourier Transform Spectrometer (FTS), such that the
sensitivity was somewhat compromised. As a result, most of the
detected galaxies were either ULIRGs, starbursts or low-metallicity
dwarfs \citep{Kamenetzky2014, Rosenberg2015,Lu2017,Jiao2017}. A small
sample of normal disk galaxies was mapped in \CI\ \citep[][hereafter
J19 -- see also
\citealp{Crocker2019}]{Jiao2019}. J19 studied the spatial
distribution of \lci\ and \lcoa\ at a $\sim 1$-kpc scale in 15 local
galaxies. They concluded that \CI\ is a good tracer of molecular gas,
in the sense that it correlates well with CO and the ratio
\lci/\lcoa\ is distributed smoothly across galaxies. Comparing
against CO(1--0) maps and the independent estimates of \aco\ from
\citet{Sandstrom2013}, these resolved studies suggested
$\Xci$=1.3--2.5$\times10^{-5}$, similar to the range in the Galaxy,
and that found by the absorber study of \citet{Heintz2020}.
\begin{table*}
\caption{\label{SampleT} Samples used for our comparisons.}
\begin{adjustbox}{center}
\begin{tabular}{lccccclcc}
\toprule
Sample & Selection & $z$ & $N_{\rm CO}$ & $N_{\CI}$ &
$N_{\rm
sub}$ &
Notes & SF mode & References\\
name & $\lambda_{\rm obs}$ (\mic) &&&&&&&(see below)\\
\midrule
high-$z$ SMG & 850--2000 & 2--6 & 89 & 42 & 114 & Corrected for lensing & Both & $a$\\
Local SF & & 0 & 35 & 19 & 35 & \CI\ from FTS & MS &$b$\\
(U)LIRGs & 60 & 0 & 85 & 19 & 114 & \CI\ from FTS & Both & $c$\\
$z=1$ & 850 & 1 & 11 & 18 & 9 & CO(2--1) & MS & $d$\\
$z=0.35$ & 250 & 0.35 & 12 & 12 & 12 & & MS & $e$\\
$0.04<z<0.3$ & 160 & 0-0.3 & 48 & 0 & 54 & VALES & MS & $f$\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{In columns 4--6, $N$ refers to the number of detections in each of the tracers.\\
$a$: \citet{Chapman2005,Chapman2010,Weiss2005,Weiss2013,Coppin2006,Hainline2006,Kovacs2006,Daddi2009,Wu2009,Carilli2010,Carilli2011,Engel2010,Harris2010,Ivison2010,Ivison2011,Ivison2013,Frayer2011,Frayer2018,Riechers2011,Riechers2013,Riechers2020aspecs,Walter2011,Walter2012,Cox2011,Danielson2011,Lestrade2011,McKean2011,Magnelli2012,Thomson2012,AZ2013,Bothwell2013,Bothwell2017,Bussmann2013,Bussmann2015,Emonts2013,Sharon2013,Sharon2016,Cooray2014,Messias2014,Messias2019,Negrello2014,Negrello2017,Swinbank2014,Tan2014,Canameras2015,Dye2015,Aravena2016,Scoville2016,Spilker2016,Huynh2017,Oteo2017,OteoGRH,Popping2017,Falgarone2017,Wong2017,Yang2017,Yang2019,Bethermin2018,Enia2018,Pavesi2018coldz,Pavesi2018cosmos,Perna2018,Valentino2018,Valentino2020,Wang2018,Dannerbauer2018,GomezG2019,Jin2019,Kaasinen2019,Leung2019,Nesvadba2019,Bakx2020z,Boogaard2020,Berta2021,Ciesla2020,Drew2020,Neri2020,Harrington2021}.\\
$b$: \citet{Mirabel1990,Tinney1990,Young1995,Casoli1996,Zhu1999,Curran2000,Dunne2000,Dunne2001,Gao2004,Thomas2004,Stevens2005,Albrecht2007,Kuno2007,Ao2008,Baan2008,Young2008,Galametz2011,Koda2011,Iono2012,Pappalardo2012,Schruba2012,Alatalo2013,PS2013,Wong2013,Ueda2014,Liu2015,Rosenberg2015,Bolatto2017,Cao2017,Jiao2019,Jiao2021,Clark2018,Valentino2018,Valentino2020,Hunt2019,Lapham2019,Sorai2019};\\
$c$:\citet{Dunne2000,Yao2003,Gao2004,Wilson2008,Chung2009,PPP2010,Papadopoulos2012,GarciaB2012,Alatalo2016,Chu2017,Jiao2017,Lu2017,Yamashita2017,herI19,Michiyama2020,Izumi2020};\\
$d$: \citet{Valentino2018,Valentino2020,Bourne2019};\\
$e$: \citet{Dunne2021};\\
$f$: \citet{Villanueva2017,Hughes2017}.}
\end{table*}
With ALMA now in routine operations, studies of \CI\ have expanded to
a broader range of galaxies, with a greater variety of average
ISM conditions, over a wider range of redshift. These include
SMGs, which lie mainly at $z>1$ \citep[e.g.][]{AZ2013, Bothwell2017,
Popping2017, OteoGRH,Nesvadba2019, Dannerbauer2018, GomezG2019}, and
main-sequence (MS) galaxies at $z=0.35$--1.2
\citep{Valentino2018,Bourne2019,Valentino2020,Dunne2021}. \CI\ has
even been detected in the intracluster medium of the Spiderweb galaxy
cluster at $z=2.16$, as well as in several of its individual galaxies
\citep{Emonts2018}. Routine use of \CI\ as a tracer of molecular gas
is currently limited by the lack of calibration studies to explore and
determine the values and behaviour of the parameters involved, i.e.\
\Xci\ and \aci. Almost all recent studies have adopted $\Xci=3\times10^{-5}$, taken from \citet{Weiss2003}, who determined it from a
comparison of analyses of CO and \CI\ in the centre of M\,82, which
has an unusually high $\rm [C^0/CO]\sim 0.5$, whereas attempts to estimate
\Xci\ in other ways -- e.g.\ from absorption studies of Gamma-ray bursts and
quasar absorbers \citep{Heintz2020} -- have found lower values,
consistent with the range seen in the Milky Way.
This paper presents the first dedicated cross-calibration study of the
dust, \COa\ and \CIfull\ emission in a sample of \Ntot\ galaxies from
the literature, including MS galaxies and SMGs, such that we can
compare their properties and tracer-($\rm H_2$ mass) conversion factors. We include the
250-$\mu$m-selected galaxies at $z=0.35$ observed with ALMA in all
three tracers by \citet{Dunne2021} where our method was first briefly
presented.
In \S\ref{obsS} we describe the samples used in this analysis, the
observables, and the derived quantities. In \S\ref{optS} we describe
the Bayesian approach for producing optimised, self-consistent
tracer-($\rm H_2$-mass) conversion parameters between multiple tracers simultaneously. We
then examine correlations of the observables to look for trends in
\S\ref{correlationsS}. In \S\ref{caltrendS} we investigate the trends
we have found in the conversion factors and provide refined calibration
recipes. Finally, in \S\ref{DiscS} we
discuss the results and highlight the open questions. Throughout, we
use a cosmology with $\Omega_{\rm m} = 0.27, \Omega_{\Lambda} =0.73$
and $H_0 = 71$\,km\,s$^{-1}$\,Mpc$^{-1}$.
\section{Deriving observational quantities}
\label{obsS}
\subsection{Sample}
\label{sampleS}
The samples used in our study are those available in the
literature -- up-to-date as of early 2022 -- which have at least two
of the three tracers: submm dust continuum emission at
$\lambda_{\rm rest} >500$\mic, \COa\ or (2--1), and
\CIfull. Summarising: \Nad\ galaxies have both CO and submm continuum
detections; \NXd\ have both \CI\ and submm dust continuum detections;
\NXa\ have both \CI\ and CO detections; \NdaX\ have all three
tracers. The sample covers the redshift range $0<z<6$, and includes galaxies lying within 1\,dex of the MS as well as extreme starbursts such as local ULIRGs and most high-$z$ submm-selected galaxies. Full details and references
are listed in Table~\ref{SampleT}. Lensed galaxies are included only
where there is an estimate of the magnification, $\mu$, and all luminosities have been corrected by the magnification factor. Our sample includes the galaxies from one of the most comprehensive studies of dust as a tracer of molecular
gas across cosmic time -- \citet{Scoville2016},
henceforth Sco16\footnote{Although lensed galaxies were included in their work, the luminosities
were not de-magnified.}. The \citeauthor{Scoville2016} sample has been updated as described in
Appendix~\ref{notesS}.
In order to test for the effect of SF intensity or `SF mode' on any later results, we divide the sample into two groups, referred to hereafter as `MS galaxies' and `SMGs'; the group names are not meant to be precise definitions but rather references to familiar categories. For this heterogeneous data-set, defining a simple criterion for two groups is not possible, and even if it were, a fuzzy boundary would still remain due to measurement errors and the inability to capture the complexity in a single parameter. The extreme starburst `SMG' group contains the high-redshift submillimetre-selected galaxies discovered in the pre-ALMA era, which are extreme star-forming systems (else they could not have been detected), plus the local ULIRGs and some LIRGs with evidence for very intense and obscured regions (e.g.\ NGC~4418, IC~860) where conditions are likely to be extreme \citep{DiazSantos2017,Falstad2021}. The `MS galaxy' group contains the lower-luminosity local disk galaxies, the LIRGs which are not extreme, the intermediate-redshift sources selected at 250\,\mic\ from {\em Herschel}-ATLAS ($z=0.35$ galaxies from \citealt{Dunne2021}), the $z<0.3$ VALES galaxies \citep{Hughes2017}, the $z\sim 1$ galaxies \citep{Valentino2018,Bourne2019}, and the ASPECS sources denoted as `MS' in that survey \citep{Boogaard2020}. (Full references are provided in Table~\ref{SampleT}.)
There are two situations where corrections to luminosities may be required:
\paragraph*{\HI-dominated galaxies at low \textit{L}$_{\textbf{IR}}$.}
For galaxies with a large fraction of \HI\ within their
optical disk, the dust associated with \HI\ rather than \mol\ makes a significant
contribution to the submm continuum emission. Since our intention is
to provide a calibration for \mol\ rather than total gas, we apply a
correction to \lsub\ for galaxies with $\fhi= \HI/\mol>1$, as described
in Appendix~\ref{HIS}. Galaxies corrected in this way are shown as
cyan diamonds in the plots.
\paragraph*{Local galaxies mapped in \CI\ by the \textbf{\textit{Herschel}} FTS.}
The local galaxies mapped using the {\it Herschel} FTS by J19 present
some complex issues. Some do not have \CI\ and dust continuum
measurements in matched apertures, and those same galaxies are often
only detected in \CI\ in the inner few kpc of the galaxy, where the
ratios of \lci/\lcoa\ may also be biased -- for example, by a lower
\aco\ in galaxy centres. We discuss the issues in more detail in
\S\ref{J19S} and Appendix~\ref{J19A}. Galaxies requiring a significant
correction ($>0.1$ dex) to \lci\ are labelled as \CIcor; they are
shown in the plots as pink diamonds, but not included in the analysis
unless specified.
\subsection{Observables}
\label{calparamS}
\label{observablesS}
We will compare three tracers of molecular gas, where the observables
(the luminosities \lsub, \lcoa\ and \lci) are empirically related to
the molecular gas mass as
\begin{equation}
\label{MhE}
\Mmol=\lsub/\asub =\aco\lcoa =\aci\lci
\end{equation}
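For orientation, Eqn~(\ref{MhE}) can be applied with the sample-mean conversion factors quoted in the abstract of this work; a minimal sketch (these mean values assume MW-like, metal-rich dust properties and are not a substitute for the Bayesian calibration described below):

```python
# Mean conversion factors from the abstract of this work, valid under the
# assumption of MW-like (metal-rich) dust properties.
ALPHA_850 = 6.9e12   # W Hz^-1 Msun^-1; note Mmol = L850 / alpha_850
ALPHA_CO  = 4.0      # Msun (K km s^-1 pc^2)^-1
ALPHA_CI  = 17.0     # Msun (K km s^-1 pc^2)^-1

def mmol_from_dust(l850):      # L850 in W Hz^-1
    return l850 / ALPHA_850

def mmol_from_co(lco_prime):   # L'_CO(1-0) in K km s^-1 pc^2
    return ALPHA_CO * lco_prime

def mmol_from_ci(lci_prime):   # L'_[CI](1-0) in K km s^-1 pc^2
    return ALPHA_CI * lci_prime
```

Note the asymmetry of Eqn~(\ref{MhE}): the dust factor divides the luminosity while the line factors multiply it.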
\noindent The goal of this analysis is to determine self-consistent
conversion factors \asub, \aco\ and \aci\ and study the physical
properties they depend on, e.g.\ C abundance, gas-to-dust ratio (\gdr),
dust emissivity. Our definition of the `observables' is intended to
be independent of as many assumptions as possible. For CO and \CI, we
use $L^{\prime}$ as defined by \citet{Solomon2005}:
\begin{equation}
L^{\prime} = \frac{3.25\times10^7}{\nu_{\rm rest}^2} \left(\frac{D_{\rm
L}^2}{1+z}\right) \left[\frac{\int _{\Delta V} S dv}{\rm Jy\, km\, s^{-1}}\right] \,\,\,\,{\rm K\,km\,s}^{-1}\,{\rm
pc}^2 ,
\end{equation}
where $\int _{\Delta V}S dv$ is the velocity-integrated line flux density,
$D_{\rm L}$ is the luminosity distance (Mpc), and $\nu_{\rm rest}$ is
the rest frequency\footnote{Where we use $\nu_{\rm rest}$,
\citet{Solomon2005} use $\nu_{\rm obs}$, hence the different
exponent for $(1+z)$ cf.\ their equation (3).} of the transition in
GHz.
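A minimal numerical sketch of this definition (the CO(1--0) rest frequency is standard; the flux and distance inputs are illustrative only):

```python
def line_luminosity_prime(sdv, nu_rest, z, d_l):
    """L' in K km/s pc^2, following Solomon & Vanden Bout (2005) but
    recast with nu_rest, so the (1+z) exponent is 1 (see footnote).

    sdv     : velocity-integrated line flux, Jy km/s
    nu_rest : rest frequency of the transition, GHz
    z       : redshift
    d_l     : luminosity distance, Mpc
    """
    return 3.25e7 * sdv * d_l**2 / ((1.0 + z) * nu_rest**2)

# Illustrative inputs only: 1 Jy km/s of CO(1-0) (115.271 GHz) from a
# galaxy at z = 0.35 with an assumed luminosity distance of ~1850 Mpc.
lp = line_luminosity_prime(1.0, 115.271, 0.35, 1850.0)
```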
Most of the galaxies in our compilation have been observed in the
$^{12}$CO($J=1$--0) transition. However, some observations at high redshift target the
$^{12}$CO($J=2$--1) line. We convert \lcob\ to $L^{\prime}_{10}$ using
the line luminosity ratio $R_{21}=0.8$; if instead we were to set
$R_{21}$ to unity, this would not affect any of our conclusions. We do
not use $J\geq 3$ CO lines because the uncertainties in the global excitation
corrections become too large for a useful calibration study.
We use only the \CI\ $^3P_1$--$^3P_0$ line, as it is the least
sensitive to the average excitation conditions, and correlates better with the low-$J$ CO
emission \citep{Jiao2017,Jiao2019,Crocker2019}. Moreover, there is now evidence
of strongly sub-thermal excitation for \CI(2--1) \citep{Harrington2021,PPPDunne2022},
making it difficult to use this line as an $\rm H_2$ mass tracer since its excitation
is extremely uncertain.
For the dust continuum emission, we use \lsub, the luminosity at
rest-frame 850\,\mic:
\begin{equation}
\lsub= 4\pi S_{\rm{\nu(obs)}}\times K \left(\frac{D_{\rm
L}^2}{1+z}\right) \,\,\,\,\, {\rm W\,Hz}^{-1} ,
\end{equation}
where $D_{\rm L}$ is the luminosity distance, $S_{\rm{\nu(obs)}}$ is the
observed flux density and $K$ is the $K$-correction to rest-frame 850\mic,
defined as
\begin{equation}
\label{KcorE}
K=\left(\rm{\frac{353\,GHz}{\nu_{\rm
rest}}}\right)^{3+\beta}\,\left(\frac{e^{\rm{h\nu_{\rm
rest}/k\td}}-1}{e^{16.956/\td}-1}\right) .
\end{equation}
Here, $\nu_{\rm rest}= \nu_{\rm obs}(1+z)$, \td\ is the
luminosity-weighted dust temperature, from an isothermal fit to the
spectral energy distribution (SED) with the dust emissivity, $\beta$,
allowed to vary between 1.8--2.0.
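Eqn~(\ref{KcorE}) can be evaluated directly; a minimal sketch (the constant $16.956$\,{\sc k} is $h\times 353\,$GHz$/k_{\rm B}$, as in the equation; the temperature and emissivity inputs are illustrative):

```python
import math

def k_correction(nu_rest_ghz, t_dust, beta=1.8):
    """K-correction of observed flux density to rest-frame 850 um
    (353 GHz) for an isothermal modified black body, as in Eqn. KcorE."""
    h_over_kb = 16.956 / 353.0   # K per GHz, matching the 16.956 K constant
    planck_rest = math.expm1(h_over_kb * nu_rest_ghz / t_dust)
    planck_850 = math.expm1(16.956 / t_dust)
    return (353.0 / nu_rest_ghz) ** (3.0 + beta) * planck_rest / planck_850

# Sanity check: at nu_rest = 353 GHz the correction is unity by construction.
assert abs(k_correction(353.0, 25.0) - 1.0) < 1e-12
```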
Sco16 assumed \td=25\,{\sc k} and $\beta=1.8$ to extrapolate (or $K$-correct) their observed
submm luminosities to rest-frame 850\,\mic. We make full use of the available data to refine this procedure as follows: 1) with sufficient data, we fit the SED ourselves with $\beta=1.8$ and estimate the rest-frame 850\,\mic\ luminosity directly from the SED fit; 2) failing that, we use the \td\ reported in the literature to extrapolate from the longest-wavelength measurement available; 3) for SMGs with insufficient data points to have had their SEDs fitted, we adopt their observed average, \td=38\,{\sc k} \citep{daCunha2015}. The bulk of the high-$z$ samples now have observations between 2--3\,mm with ALMA, so the extrapolation to rest-frame 850\,\mic\ is small, even at the highest redshifts. The shortest rest-frame wavelengths we deal with are $\lambda_{r}\sim 250$\,\mic, for sources at $z\sim 2$--3 observed at 850\,\mic, which require $K$-corrections in the range 50--140. However, the important consideration is the potential uncertainty in the $K$-correction, not its absolute value. We tried instead using the Sco16 method of assuming \td=25\,{\sc k} to extrapolate to rest-frame \lsub\ and found a maximum difference of a factor of 1.6, with an average of 1.15. The true uncertainty due to the SED sampling and $K$-correction will be smaller than this, as we know from our work in Section~\ref{dustS} and Figure~\ref{tdhistF} that the dust temperatures in SMGs are much higher than 25\,{\sc k}. We thus do not consider the extrapolation to rest-frame 850\,\mic\ to be a significant source of uncertainty or bias in this analysis.
\subsection{Physical dependencies of gas mass tracers}
\label{physparamS}
\subsubsection{Dust--$\rm{H_2}$ calibration}
\label{dustS}
Large dust grains ($a\sim 0.1$\mic) in thermal equilibrium with their
incident radiation field emit as a modified black body (MBB), where
the emission is related to the mass of hydrogen as:
\begin{equation}
\Mh = \frac{L_{\nu}}{4\pi B(\nu,\mwtd)}\,\,\,\kh(\nu) . \label{MdE}
\end{equation}
\noindent The two physical quantities needed to calibrate dust
continuum emission as a tracer of gas are therefore \mwtd\ and
\kh. Expressing Eqn~\ref{MdE} in astronomical units for
$\lambda= 850$\,\mic, we can write:
\begin{equation}
\frac{\Mh}{[\msun]} = 6.14\times10^{-17}\frac{\kh}{[\rm{kg\,m^{-2}}]}\frac{\lsub}{[\rm{W\,Hz^{-1}}]}\left(\frac{24.5}{\mwtd}\right)^{1.4} , \label{MdaE}
\end{equation}
\noindent
where we have simplified the exponential term in the Planck function
as $\sim (24.5/\mwtd)^{1.4}$ for $17<\mwtd<30$\,{\sc k}.
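As a quick numerical check of this simplification (our illustration, not from the analysis itself), one can compare the exact exponential term in the Planck function at rest-frame 850\,\mic\ with the power-law shorthand over the quoted temperature range:

```python
import math

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8
nu_850 = c / 850e-6  # rest-frame 850 um

def exp_term(T):
    """The (e^{h nu / k T} - 1) factor that the gas mass inherits
    from the 1/B(nu, T) dependence in Eqn (MdE)."""
    return math.expm1(h * nu_850 / (k_B * T))

# Ratio of the exact term to the power-law shorthand (24.5/T)^1.4;
# the shorthand tracks the exact term to within a few per cent over 17-30 K
ratios = {T: exp_term(T) / (24.5 / T) ** 1.4 for T in (17.0, 20.0, 24.5, 30.0)}
```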
The mass-weighted dust temperature, \mwtd, is often lower than the
luminosity-weighted dust temperature, \td, as derived from an
isothermal MBB fit to the dust SED, because warm dust outshines cold
dust per unit mass. There is an excellent discussion of this in the
Appendix to Sco16 which we will not repeat here \citep[see
also][]{Dunne2001}.
To determine \mwtd, we require a multi-component MBB fit to a
well-sampled dust SED \citep[e.g.][]{Dunne2001}, or an SED fit using a
model that allows a range of radiation field strengths, leading to a
range of dust temperatures \citep[e.g.][]{Draine2007}. These methods
give broadly consistent results. As a rule of thumb, the range of
\mwtd\ in local star-forming galaxies is 15--25\,{\sc k}
\citep{Dunne2001,Draine2007,Hunt2015,Dale2002,daCunha2008,Bendo2014,Clark2015},
increasing to 25--30\,{\sc k} in luminous starbursts at higher
redshifts \citep{Rowlands2014,daCunha2015}.
As the dust (\asub) factor is only weakly dependent on the
assumed temperature at rest-frame 850\,\mic, Sco16 and others assumed
a constant \mwtd=25\,{\sc k}. If instead the true \mwtd\ were to be 15
[30]\,{\sc k}, the dust and gas mass would be under-[over-]estimated
by a factor $\sim 2\times$ [$1.3\times$], which is overshadowed by the
other uncertainties. On the other hand, failure to account for any
{\it systematic} trend of \mwtd\ with another physical parameter can
introduce or mask correlations of the conversion factors with
that physical parameter.
We therefore explore the validity of assuming constant \mwtd\ by
collating measurements of \mwtd\ from the literature
\citep{Dunne2001,Hunt2019}, and additionally making our own fits where possible. In
Fig.~\ref{mwtdzF} we show that there are indeed strong correlations of
\mwtd\ with the observables, namely $z$, \Lir\ and the SED colour,
\Lir/\lsub. There is a clear difference between samples with low and
high SFRs, with $\langle\mwtd(\rm MS)\rangle=23.0\pm0.4$\,{\sc k}
while $\langle\mwtd(\rm SMG)\rangle=30.1\pm0.7$\,{\sc k}. We fit
empirical relations for the correlations in Fig.~\ref{mwtdzF} (see
Table~\ref{fitsT}). Where there is no direct estimate of \mwtd\ from
an SED fit, which is the case for two-thirds of galaxies, we use these
empirical relations\footnote{We restrict the predicted \mwtd\ such
that $\mwtd\leq\td$.} to derive \mwtd\ for use in our subsequent
analysis. Appendix~\ref{QTS} compares our approach, where we use
individual estimates of \mwtd, to the adoption of a constant
\mwtd=25\,{\sc k}, where we will discuss those findings in
\S\ref{caltrendS}.
In their work on strongly lensed SMGs at high redshift, which includes
dust continuum emission as a constraint in a large-velocity gradient
(LVG) model, \citet{Harrington2021} find that \td $\sim$ \mwtd\ for
most SMGs, with both measures of temperature higher than the
\mwtd=25\,{\sc k} commonly used in the literature. To ensure
consistency with our other estimates of \td, we fitted the
\citet{Harrington2021} photometry with three simple models: 1) an
isothermal optically thin MBB; 2) an MBB with variable optical depth
and -- where there were enough data -- 3) a two-component MBB. In
agreement with \citeauthor{Harrington2021}, we find that a single dust
temperature adequately describes the SED of these galaxies, in contrast
to lower redshift (U)LIRGs and normal galaxies which are better fit
with multiple dust components (\td\ $>$ \mwtd) and/or fits with higher
FIR optical depths. The temperatures from the \citeauthor{Harrington2021}
turbulence model correlate best with our isothermal \td\ measurements
for these galaxies, and the temperatures returned when allowing
variable optical depth, $\td(\tau)$, are always significantly higher
than those from the \citeauthor{Harrington2021} model. We therefore do
not use optically thick fits to yield \mwtd\ for our high-redshift
SMGs. We instead use two-component SED fits to the lensed {\it Planck} sources
and the handful of SMGs with sufficient data for the
empirical relations shown in Fig.~\ref{tdcorrF}.
The other key physical parameter in the dust--\mol\ conversion is \kh,
which is a combination of the dust mass absorption coefficient (\kd)
and \gdr, such that\footnote{Literature studies generally present the
dust-\mol\ conversion in terms of \gdr\ for a fixed emissivity,
\kd. Given the mounting evidence that \kd\ varies within our own
\citep{Remy2017,Ysard2015,Ysard2018,Kohler2015} and other galaxies
\citep[e.g.][but see also \citealt{Priestly2020}]{Clark2019}, we
prefer to work with \kh\ to avoid projecting all the variation in the
$\rm H_2$-dust conversion factor onto \gdr.} \kh = \gdr/\kd. Briefly, \kd\ is sensitive
to the grain composition and structure (amorphous; crystalline;
coagulated; mantled), while \gdr\ is roughly proportional to
metallicity and, for galaxies with metallicity within a factor 2 of $Z_{\odot}$, as expected for those in our
samples, can be taken to be roughly constant, at \gdr $= 100$--150
\citep{Sodroski1997,Dunne2000,Dunne2001,
Draine2007,MM2009,Leroy2011,Sandstrom2013,Planck2011,Jones2017,deVis2021}.
Fortunately, observational measures of \kh\ are available, both for
the Milky Way and for external galaxies, with values of \kh\
$\sim 2000$\,\khunit\ in the Milky Way's diffuse interstellar medium
(ISM) and \kh\ $\sim 800$\,\khunit\ in dense
clouds. Appendix~\ref{kappaA} discusses in more detail how it is
measured, and Table~\ref{kappalitT} provides a comprehensive set of
observational and theoretical values for \kh\ from the literature.
It is impossible to disentangle the effect of changing dust properties
(\kd) from changes in \gdr\ in observational determinations of
\kh. While the decrease in \kh\ towards denser sightlines in the Milky
Way is thought to be due to the dust grains coagulating in denser
environments -- a process expected to increase their emissivity
\citep[e.g.][]{Kohler2015} -- there may also be some decrease in \gdr\
if the gas is accreted into dust mantles or ices (i.e.\ grain
growth). Both effects are to be expected
\citep[e.g.][]{Jones2017,Jones2018} and both act to decrease
\kh. Counter to that, the higher estimates of \kh\ in the diffuse
atomic phase (lowest $N_{\rm H}$ sightlines at high latitudes) in the
Milky Way may be due in part to a lower dust emissivity for grains
without ice mantles, where only the refractory cores remain, subjected
to harsher ultraviolet (UV) irradiation. Additionally, there is likely
a metallicity gradient at high latitudes, leading to a higher \gdr,
further increasing \kh. There is thus a qualitative expectation that
denser regions with higher metallicity will have higher dust
emissivity, \kd, and lower \gdr, producing a lower \kh. More diffuse
regions with lower metallicity will move in the opposite direction. In
\S\ref{caltrendS}, we find that we can constrain the {\it range} of
\kh, at least, and therefore the combination of \gdr/\kd.
\subsubsection{$\rm C\,I$--$\rm{H_2}$ calibration}
\label{CIS}
Here, we introduce the two physical parameters pertinent to the use of \CI\ as a tracer of \mol: the average
abundance ratio \Xci=$\rm [C^0/H_2]$ and the average excitation factor \Q$=\rm N_1/N_{tot}$.
The relationship between \Mh\ and the `observable' -- [\CI](1--0) line emission -- is (in astronomical units):
\begin{equation}
\Mh ({\rm M}_\odot) = \frac{0.0127}{X_{\rm
C\,I}\,\,Q_{10}}\left(\frac{D_{\rm L}^2}{1+z}\right)\,\,\left[\frac{\int _{\Delta V}S_{\rm [CI](1-0)} dv}{\rm Jy\, km\, s^{-1}}\right]%
\end{equation}
with $D_{\rm L}$ in Mpc and $\int_{\Delta V}S_{\rm [CI](1-0)}\,dv$ in Jy\,\kms. Expressed in
units of line luminosity, this becomes:
\begin{equation}
\Mh (\msun) = \frac{9.51\times 10^{-5}}{\Xci\, Q_{10}}\,\lci .\label{MhciE}
\end{equation}
\noindent
The excitation term, \Q, describes the relative fraction of carbon atoms in the
$J=1$ state. Under general non-LTE conditions it is a function of both gas density, $n$, and \tk\, and is
derived analytically in the Appendix to
\citet*{PPP2004}. A recent study of the [\CI](2--1)/(1--0) line ratio
\citep*{PPPDunne2022} finds that the \CI\ lines are both sub-thermally excited in the ISM of galaxies, with the [\CI](2--1) especially
so \citep[see also][]{Harrington2021}. Thus the LTE expressions for \Q\
should not be used, nor will the \CI\ line ratio produce an estimate of \tk\ (both methods having been widely used in the literature to date). Details for \Q\ are in the Appendix~\ref{QA}, but in summary we find:
\begin{enumerate}
\item The [\CI](1--0) excitation term, \Q, is a non-trivial function of
density and temperature, but for the range $\tk \geq 20$~K and log $n\geq 2.5$ -- which is where the bulk of \mol\ in star forming galaxies is thought to reside -- $\langle\Q\rangle=0.48\pm 0.08$ where the 99
per cent confidence range is quoted (see \citealt{PPPDunne2022} and Figure~\ref{QulF} for details).
\item Due to a slight super-thermal behaviour, higher density, higher \tk\
conditions can produce similar or even lower \Q\ than lower
density, lower \tk\ conditions. This breaks any intuitive link
between \Q\ and the ISM conditions, i.e.\ we do not necessarily
expect a higher \Q\ in SMGs compared to MS galaxies (see
Fig.~\ref{QulF}).
\item As the [\CI](2--1) line is even more strongly sub-thermally excited, its $Q_{21}=N_2/N_{\rm tot}$ factor varies strongly\footnote{The $Q_{21}(n, T_k)$ that enters the estimates of molecular gas
mass when the [\CI](2--1) line is used can vary almost by a factor of $\sim5$, depending on $(n, T_k)$.}. This is the main reason why our current
study is restricted to the [\CI](1--0) line.
\end{enumerate}
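For orientation, Eqn~\ref{MhciE} is trivial to evaluate; the sketch below is our illustration, using the fiducial $\langle\Q\rangle=0.48$ quoted above and an illustrative $\Xci=1.6\times10^{-5}$ (a value discussed elsewhere in this work), not a prescription from the analysis itself.

```python
def m_h2_from_ci(L_ci, X_ci=1.6e-5, Q10=0.48):
    """Hydrogen gas mass (Msun) from the [CI](1-0) line luminosity
    L'_CI (K km/s pc^2), per Eqn (MhciE):
        M_H2 = 9.51e-5 / (X_CI * Q10) * L'_CI.
    Defaults are the fiducial X_CI and <Q> quoted in the text."""
    return 9.51e-5 / (X_ci * Q10) * L_ci

# With the defaults, the implied alpha_CI (He included, factor 1.36)
# comes out close to the 16.8 of Eqn (aciE)
alpha_ci = 1.36 * m_h2_from_ci(1.0)
```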
As the \CIfull\ line is optically thin for most conditions expected in
spiral disks \citep{Weiss2005,PerezB2015,Harrington2021}, the
relationship between \lci\ and \Mh\ is proportional to \Xci\ -- the abundance
of carbon atoms relative to H$_2$. This dependence on abundance is as expected for any method that employs tracers of $\rm H_2$ gas mass other than
the $\rm H_2$ lines themselves\footnote{Even for optically thick
tracers of $\rm H_2$ gas, such as CO(1--0) line emission, a $\rm
[CO/H_2]$ abundance still enters the method via the CO--H$_2$ cloud
volume-filling factor, $f_{\rm CO}$, albeit not in a sensitive fashion
unless a combination of strong FUV radiation and/or low
metallicities selectively dissociate CO in the outer cloud layers while
leaving the largely self-shielding $\rm H_2$ intact (then $f_{\rm CO}$ can be $\ll 1$,
see Pak et al. 1998 for details).}.
With the excitation factor \Q\ varying no more than 16 per cent over
the typical range of \mol\ conditions in galaxies ($\tk\geq 20$~K, log $n\geq 2.5$), the major source of uncertainty in \CI-based
molecular gas mass estimates (and thus the major source of scatter in the \aci\ conversion factor) is the neutral carbon abundance, \Xci. The relatively recent introduction of the [\CI](1--0) line as a gas tracer means that \Xci\ has not been widely explored -- constraining it and investigating any potential trends is a key outcome of our
cross-calibration work.
In the Milky Way, \Xci\ is found to vary only modestly, from
0.8--$2.2\times10^{-5}$ \citep[e.g.][]{Zmuidzinas1988,
Frerking1989,Tauber1995,Ikeda2002}, while a much higher value
($\Xci=5\times10^{-5}$) has been inferred for the nearby starburst nucleus of M\,82 \citep{Schilke1993,White1994,
Stutzki1997}\footnote{The measurement is in fact the $\rm [C^0/CO]$ abundance, and a value for [CO/\mol] then has to be assumed to infer \Xci.}. Thanks to ALMA, very high localised ratios of \lci/\lcoa\ (translating to high $\Xci=5$--$7\times 10^{-5}$) have also been measured in extreme regions, such as the Circum-Nuclear Disk (CND) of NGC~7469, which is believed to host an X-ray Dominated Region (XDR) \citep{Izumi2020}, and the outflow region in NGC~6240 \citep{Cicone2018}. More modestly elevated \lci/\lcoa\ ratios tend to be found in the central nuclear regions of starburst galaxies \citep{Jiao2019,Salak2019,Saito2020}. However, when averaged over larger, kpc-scale regions, the ratios become consistent with the average global ratios measured for this sample (see Figure~\ref{tdcorrF}). Independent measurements of $\Xci=1.6^{+1.3}_{-0.7} \times 10^{-5}$ (for solar
metallicity) were made by
\citet{Heintz2020} using UV absorption measures for a range of absorber systems across cosmic time. Cosmic rays (and X-rays) are
expected to dissociate CO in favour of atomic carbon, increasing $[{\rm C}^0/{\rm CO}]$, a
hypothesis supported by both simulations and observations
\citep[e.g.][]{Bisbas2015,Clark2019ci,Israel2020,Izumi2020}.
\subsubsection{\rm CO--$\rm{H_2}$ calibration}
\label{COS}
The $^{12}$CO(1--0) line is optically thick in most (but not all; see
\citealp[e.g.][]{Aalto1995}) ISM conditions expected in galaxies. Unlike
dust continuum emission where optical depths build up over large
columns of dust, the entire CO line optical depth builds up within
very small gas `cells' ($<$0.1\,pc) due to the very turbulent nature of the velocity fields,
and the small thermal line widths \citep{Tauber1991,Falgarone1998}. This localised nature of CO line optical depths and
the macro-turbulent CO line formation mechanism allows a great
simplification of the radiative transfer models of such lines, i.e.\
the use of the so-called Large Velocity Gradient (LVG) approximation. However, it also complicates the relationship between the CO line luminosity and
the underlying $\rm H_2$ gas mass, making the corresponding conversion
factor, \aco, dependent on the thermal state of the gas, its average
density, as well as its dynamic state.
Following \citet{PPP2012xco} the \aco\ factor in an LVG setting is given by:
\begin{equation}
\aco = 2.65\frac{\sqrt{n_{\rm H2}}}{T_{\rm b}}\,K_{\rm vir}^{-1}\,\,\,\, [\aunit] \label{acoE}
\end{equation}
\noindent
where $n_{\rm H2}$ and $T_{\rm b}$ are the average density (in \cc)
and the CO(1--0) brightness temperature\footnote{Here the cloud
CO-H$_2$ volume filling factor is set $f_{\rm CO}=1$.} for the
molecular cloud ensemble while $K_{\rm vir}$ describes the average
dynamic state of the gas (self-gravitating clouds $K_{\rm vir}\sim 1$, unbound clouds $K_{\rm vir}>1$). In principle, multi-phase LVG
models of CO (and $^{13}$CO) SLEDs can be used to constrain \aco, but in
practice this demands large line datasets per galaxy \citep[e.g.][]{PPP6240, Harrington2021}, making it
impractical for use in large galaxy samples. This is why in our study
\aco\ remains an empirical conversion factor to be (cross)-calibrated.
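Eqn~\ref{acoE} can be evaluated directly; the numbers below are illustrative, Milky Way-like cloud conditions of our own choosing, not fits from this work.

```python
import math

def alpha_co(n_h2, T_b, K_vir=1.0):
    """LVG expression for alpha_CO (Msun (K km/s pc^2)^-1), Eqn (acoE):
    alpha_CO = 2.65 * sqrt(n_H2) / T_b / K_vir,
    with n_H2 the average density in cm^-3, T_b the CO(1-0) brightness
    temperature in K, and K_vir the dynamic state (~1 if self-gravitating)."""
    return 2.65 * math.sqrt(n_h2) / T_b / K_vir

# Self-gravitating (K_vir ~ 1) clouds with n_H2 ~ 300 cm^-3, T_b ~ 10 K
a_gal = alpha_co(300.0, 10.0)            # ~4.6, near the canonical Galactic value
# Warm, dense, unbound gas gives a smaller conversion factor
a_starburst = alpha_co(1e4, 30.0, K_vir=5.0)
```

This illustrates why \aco\ depends on the thermal and dynamic state of the gas: the same observed line luminosity implies several times less \mol\ under starburst-like conditions.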
\subsubsection{Conversion factors and physical parameters}
The two optically thin tracers -- thermal dust continuum emission and
\CI\ -- have a simple relation between the empirical `mass-to-light' conversion
parameter ($\alpha_{\rm X}$) and the physical conditions in the ISM
(e.g.\ abundance, emissivity, temperature). We can write the empirical
factors (Eqn.~\ref{MhE}) in terms of these physical parameters as
follows\footnote{Hereafter we omit the units for \aco, \aci\ and
\asub.}:
\begin{equation}
\asub =
\frac{1.628\times10^{16}}{1.36\,\kh}\left(\frac{24.5}{\mwtd}\right)^{-1.4}
\,\,\,{\rm W\,Hz}^{-1}{\rm M}_{\rm mol}^{-1} \,\, , \label{asubE}
\end{equation}
where the factor 1.36 corrects to total molecular mass, including
He.
\begin{equation}
\aci=16.8\,\left[\frac{\Xci}{1.6\times10^{-5}}\right]^{-1}\left[\frac{Q_{10}}{0.48}\right]^{-1} \,\,\,\rm{\msun\,(K\,km\,s^{-1}\,pc^2)^{-1}} \label{aciE}
\end{equation}
Eqn.~\ref{aciE} also includes the factor 1.36 for He.
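These parametrisations are straightforward to evaluate; the sketch below is our illustration of Eqns~\ref{asubE} and \ref{aciE}, using $\kh=1884$\,\khunit\ and $\mwtd=25$\,{\sc k} purely as example inputs.

```python
def alpha_850(kappa_H, T_mw):
    """a_850 in W Hz^-1 per Msun of molecular gas (He included via the
    factor 1.36), per Eqn (asubE)."""
    return 1.628e16 / (1.36 * kappa_H) * (24.5 / T_mw) ** -1.4

def alpha_ci(X_ci, Q10):
    """alpha_CI in Msun (K km/s pc^2)^-1 (He included), per Eqn (aciE)."""
    return 16.8 / (X_ci / 1.6e-5) / (Q10 / 0.48)

# Example inputs: kappa_H = 1884 kg m^-2 and T_mw = 25 K
a850 = alpha_850(1884.0, 25.0)   # ~6.5e12 W Hz^-1 Msun^-1
aci = alpha_ci(1.6e-5, 0.48)     # 16.8 by construction
```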
\section{Deriving self-consistent calibration of conversion factors}
\label{optS}
We next describe how we combine the measurements of multiple gas
tracers in the most efficient way, in order to determine their
cross-calibrations. Our goal is to find the empirical conversion factors (\asub, \aci, \aco) or physical parameters (\kh, \Xci), which
produce a consistent estimate for \Mh\ in a given galaxy.
Our dataset provides nested samples, each with a different set of
available gas tracers. The daX sample has all three tracers
available -- dust continuum, CO and \CI, and contains \NdaX\ galaxies
($N_{\rm daX}$ = \NdaX). The names and statistics for the other
samples are as follows: ad -- CO and dust, $N_{\rm ad}$ = \Nad; Xd
-- \CI\ and dust, $N_{\rm Xd}$ = \NXd; Xa -- \CI\ and CO, $N_{\rm Xa}$
= \NXa. The properties of these samples are in
Table~\ref{namesT}.
The best constraints at log$_{10}$ \Lir\ $> 11$\footnote{Hereafter we will refer to $\log_{10}$ as simply log.} come from the daX
sample because it has three independent tracers of gas mass, but
it lacks coverage of luminosities below $l_{\rm IR} = 11$. The ad sample is the
largest and spans the widest range in \Lir, reflecting the longer time
for which CO observations have been possible for nearby galaxies.
We begin with the daX sample, to illustrate the method of
optimisation for the estimates of all three conversion factors
simultaneously.\footnote{This method was first presented in brief in
\citet{Dunne2021}, where it was applied to the sample of $z=0.35$
galaxies.} There are four unknowns, namely $m={\rm log} (\Mh)$, $X={\rm log} (\Xci)$,
$\kappa = {\rm log} (\kh)$, and $\alpha= {\rm log} (\aco)$, and
three observables: \lcoa, \lci\ and \lsub.
With an independent measure of the true \Mh, the observables would
provide direct estimates of the three conversion factors; however,
the value of \Mh\ is not known {\it a priori}, so we must use a
probabilistic argument based on the fact that the observations do provide
constraints on the {\it relative} values of the conversion factors
for each galaxy. There is thus a set of self-consistent conversion
factors which link the observables to the true \Mh, with an unknown
common constant factor.
The Bayesian approach we use is described in detail in
Appendix~\ref{bayesS} and requires an estimate of the intrinsic
scatter for the logarithms of each of the factors:
$s_{\rm X}$, $s_{\kappa}$ and $s_{\alpha}$. The observable luminosities relate
to these factors as follows, where the coefficients of proportionality are listed in Table~\ref{methodT}:
\begin{equation}
\begin{aligned}
\label{ratioE}
\frac{\lcoa}{\lci}\propto\aco\Xci,\,\, &
\frac{\lsub}{\lci}\propto\kh\Xci,\, &
\frac{\lcoa}{\lsub}\propto\frac{\aco}{\kh} .
\end{aligned}
\end{equation}
We begin by measuring the intrinsic scatter between the three pairs of
observables using an orthogonal distance regression (ODR) fitting
method, which includes the intrinsic scatter, $\lambda$, as a third
parameter in the analysis\footnote{The $\lambda$ returned by the ODR
fitting routine is the orthogonal scatter; for our near-unit slopes,
multiplying it by $\sqrt{2}$ gives the intrinsic scatter of the
difference $X-Y$, which is what is needed to determine the intrinsic
scatter of each conversion factor in turn.} (see
Appendix~\ref{ODRS} for full details). The three pair variances
derived from the data are then used to estimate the intrinsic variance
of the three individual conversion factors (the derivation can be
found in Appendix~\ref{pairwiseS}). The values of the intrinsic
scatter for the parameters are given in Table~\ref{methodT}, with
\Xci\ having the smallest scatter between galaxies. This finding is
purely empirical, requiring no assumptions about the values or trends
of the conversion factors, which makes it particularly robust.
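The pairwise-to-individual step can be sketched in a few lines. This is our minimal sketch of the idea behind Appendix~\ref{pairwiseS}, assuming independent scatters in the three factors so that each pair variance is the sum of two individual variances.

```python
def individual_variances(v_ab, v_ac, v_bc):
    """Recover the three individual intrinsic variances from the three
    pairwise variances, assuming independent scatters so that
    var(a - b) = s_a^2 + s_b^2, and similarly for the other pairs."""
    s_a2 = (v_ab + v_ac - v_bc) / 2.0
    s_b2 = (v_ab + v_bc - v_ac) / 2.0
    s_c2 = (v_ac + v_bc - v_ab) / 2.0
    return s_a2, s_b2, s_c2

# Round-trip check with made-up scatters s = (0.10, 0.15, 0.20)
v_ab = 0.10**2 + 0.15**2
v_ac = 0.10**2 + 0.20**2
v_bc = 0.15**2 + 0.20**2
recovered = individual_variances(v_ab, v_ac, v_bc)  # (0.01, 0.0225, 0.04)
```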
\begin{table}
\caption{Samples used in cross-calibration analysis.}
\label{namesT}
\begin{adjustbox}{center}
\begin{tabular}{cccc}
\toprule
Sample & Tracers present & $N$ & median log \Lir \\
\midrule
daX & Dust, CO and \CI & \NdaX\ (\NdaXhi) & 11.65 (11.77) \\
Xa & CO and \CI & \NXa\ (\NXahi) & 11.66 (11.88) \\
Xd & Dust and \CI & \NXd\ (\NXdhi) & 11.88 (12.06)\\
ad & CO and dust & \Nad\ (\Nadhi) & 11.54 (12.07) \\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{$N$ is the size of the sample upon which the analysis has been performed, excluding those with uncertain and potentially large corrections -- see \S\ref{J19S}. Values in parenthesis are the number of galaxies in the samples with \Lhi\ and their median log \Lir.}
\end{table}
\begin{table}
\caption{\label{methodT} Summary of the parameters required to reproduce this analysis.}
\begin{adjustbox}{center}
\begin{tabular}{ccccc}
\toprule
Quantity & Set & \CI & CO & Dust\\
\midrule
physical & & \Xci & \aco & \kh\\
empirical & & \aci & \aco & \asub\\
\midrule
$s_{X,\alpha,\kappa}$ & \Lhi & 0.082 & 0.1646 & 0.1339\\
& BL & 0.1125 & 0.1436 & 0.1294\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{BL = baseline (excludes \CIcor\ and lo-VALES galaxies).}
\vspace*{1cm}
\begin{adjustbox}{center}
\begin{tabular}{lccc}
\toprule
\multicolumn{1}{c}{Pair}&\multicolumn{1}{c}{Set}&\multicolumn{2}{c}{Mean log pair}\\
\cmidrule{3-4}
\multicolumn{2}{c}{}&\multicolumn{1}{c}{BL}&\multicolumn{1}{c}{\Lhi}\\
\midrule
\aco\Xci & Xa & $-4.400\pm0.020$ & $-4.383\pm0.021$ \\
& daX & $-4.393\pm0.021$ & $-4.373\pm0.022$ \\
\aco/\kh & ad & $-2.769\pm0.015$ & $-2.798\pm0.018$ \\
& daX & $-2.867\pm0.025$ & $-2.875\pm0.027$ \\
\kh\Xci & Xd & $-1.529\pm0.021$ & $-1.509\pm0.021$ \\
& daX & $-1.526\pm0.024$ & $-1.498\pm0.024$\\ \bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{Notes: Values here can be used to reproduce our method
and should be applicable to other metal-rich
samples. $s_{X,\alpha,\kappa}$ are the intrinsic scatter on
the log conversion factors, $X$, $\alpha$ and
$\kappa$. `Mean log pair' are the means of the log
combinations of calibration factors listed in the `Pair'
column, quoted with the standard error on the mean. We list
in the second column the sample used to derive these
means, both the sample with the largest number of pairs
and also for daX, which provides our reference set. We
provide numbers both for the BL galaxies (excluding those
discussed in \S\ref{J19S}) and also those with
\Lhi. The differences are not significant.
}
\end{table}
As we do not have any independent measure of the gas mass with which
to normalise our cross-calibration (four unknowns but only three
measurements), we must make an assumption about the sample average of
one of the physical or empirical conversion factors. However, with that assumption made
transparent, the individual values can always be scaled to whichever
normalisation a reader wishes to adopt. {\it The relative values, however,
are always the optimal solution.}
We choose to use the dust parameter \kh =
\gdr/\kd\ for this normalisation, because there are no trends of
\lsub/\lcoa\ or \lsub/\lci\ with \Lir\ (see
Figs~\ref{LhistsF},~\ref{tdcorrAF}) and \kh\ also has the best
observational constraints.
For the Xa sample, where there are no dust continuum measurements,
we normalise to $\Xci^{\rm N}=1.6\times10^{-5}$, which is the
mid-range of the values suggested by the independent study of
absorption lines by \citet{Heintz2020}.
All the galaxies in our sample are metal rich ($0.5 < Z/Z_{\odot} < 2$) and so we assume that
\kh\ (\gdr) should be similar to that in the Milky Way and other local
disks. Throughout the rest of this work, we will use as our reference
point the mid-range of extragalactic determinations, \kh =
1500--2200\,\khunit, which are consistent with measurements of the
diffuse ISM in the Milky Way. Our chosen normalisation value, then, is
\kh$^{\rm N}$ = 1884\,\khunit, which is a good match to current
theoretical dust models (THEMIS: \citealp{Jones2017}, and the updated
\citealp{Draine2007} modified by \citealp{Hensley2021}).
For the standard Milky Way value of \gdr\ \citep[=
135,][]{Jones2017,Magdis2012}, \kh\ = 1884\,\khunit\ implies that \kd\ =
0.071\,\kunit, similar to that used in many extragalactic studies
\citep{Dunne2000,James2002,daCunha2013}. For a \gdr\ fixed to 135, the
range of \kh\ in extragalactic studies implies a range in \kd\ of
0.06--0.09\,\kunit. Table~\ref{kappalitT} lists \kh\ values from
extragalactic and Galactic observations, as well as from theoretical
dust models.
Note that the choice of normalisation does not affect any of the
trends, nor the ratio of the conversion parameters in the pairings;
it merely sets the average value of the reference calibration
parameter, to which the others are relative.
The sample mean expectation values for the other two
parameters, $\langle\aco\rangle$ and $\langle\Xci\rangle$, are next derived
from our assumed value of $\langle \kh \rangle$, together with the mean
ratios of the observables listed in Table~\ref{methodT}. The {\it effective standard
deviation} is also calculated -- the intrinsic scatter of each parameter added in
quadrature to the measurement error for that gas tracer. For example,
for CO:
\[
\sigma_{\rm eff} = \sqrt{s_\alpha^2 + \sigma_{\rm CO}^2},
\]
where $s_{\alpha}$ is the intrinsic scatter in log(\aco), and
$\sigma_{\rm CO}$ is the measurement error on log(\lcoa).
We can now estimate the probability of finding a particular set of
conversion factors for any given galaxy. We use $a_i,\, i=1,2,3$ to
denote the logarithms of the three conversion factors\footnote{For
ease of representation, $a_{850}=-\log(\asub)$.}, and write the mean
expectation values and effective standard deviations as
$\langle a_i \rangle$, and $\sigma_{i,\rm eff}$ respectively.
Assuming that these follow Gaussian distributions, the probability of
finding the factors, $a_i$, for any galaxy is:
\begin{multline}
\label{eqn:chi2}
P \propto \displaystyle\prod_{i=1}^N \exp\left(-\frac{(a_i-\langle a_i \rangle )^2}{2\sigma_{\rm i, eff}^2}\right)\\
= \exp \left(- \displaystyle\sum_{i=1}^N \frac{(a_i-\langle a_i\rangle
)^2}{2\sigma_{\rm i, eff}^2} \right) .
\end{multline}
\noindent
Thus, the ratios of observable luminosities for any given galaxy can
be used to determine the ratios of conversion factors
(Eqn.~\ref{ratioE}), and the common scaling factor that maximises the
probability in Equation~\ref{eqn:chi2} is the best estimate of
\Mh. The derivation in Appendix~\ref{bayesS} shows that this reduces
analytically to a simple inverse variance weighted mean, such that:
\begin{equation}
\log M_{\rm H_2}^{\rm opt} = \frac{\sum^N_{i=1}(m_i\times
w_i)}{\sum^N_{i=1} w_i} \label{MoptE} ,
\end{equation}
\noindent
where $w_i = 1/\sigma_{i, \rm eff}^2$, and $m_i$
is the log mass estimate for each tracer.
\[
m_i = l_i + \langle a_i \rangle ,
\]
\noindent where $l_i$ is the measured observable (log luminosity) and
$\langle a_i \rangle$ is the sample mean expectation value for the
conversion factor. Once the optimal mass is determined this way, we
can then estimate the corresponding optimal conversion factor on a
per-galaxy basis, as:
\begin{equation}
a_i = m^{\rm opt} - l_i .
\end{equation}
\noindent The error on the optimal mass is simply the error on the
inverse variance weighted mean:
\begin{equation}
\sigma_m^{\rm opt} = \left(\sum^N_{i=1} w_i\right)^{-1/2} ,
\end{equation}
\noindent and the error on each of the conversion factors, accounting
for co-variance is:
\begin{equation}
\sigma_{ai} = \sqrt{\sigma_{\rm m^{opt}}^2 + \sigma_{li}^2 \left(1 -
\frac{2 w_i}{\sum^N_{j=1} w_j}\right)} ,
\end{equation}
\noindent where $\sigma_{li}$ is the logarithmic measurement error on
the observable quantity, e.g.\ \lcoa, \lci, \lsub.
By design, each tracer for a given galaxy, together with its optimised
conversion factors, will produce the same gas mass, such that
$\Mh^{\rm CO}=\Mh^{\rm C\,I}=\Mh^{\rm dust}$.
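The whole optimisation reduces to a few lines. The sketch below is our illustration of Eqn~\ref{MoptE} and the per-galaxy factors; the input numbers are placeholders chosen only to be roughly plausible, not real sample values.

```python
def optimal_mass(l, a_mean, sigma_eff):
    """Inverse-variance-weighted optimal log gas mass (Eqn MoptE).
    l[i] are the log observables (with the a_850 = -log(alpha_850) sign
    convention for dust), a_mean[i] the sample-mean log conversion
    factors, and sigma_eff[i] the effective scatters. Returns m_opt,
    its error, and the optimised log factors a_i = m_opt - l_i."""
    w = [1.0 / s**2 for s in sigma_eff]
    m_i = [li + ai for li, ai in zip(l, a_mean)]   # per-tracer log masses
    m_opt = sum(mi * wi for mi, wi in zip(m_i, w)) / sum(w)
    sigma_m = sum(w) ** -0.5                       # error on the weighted mean
    a_opt = [m_opt - li for li in l]
    return m_opt, sigma_m, a_opt

# Placeholder galaxy: log L'_CO, log L'_CI, log L_850 chosen so each tracer
# implies log M_H2 ~ 10.05 with plausible mean factors (aco ~ 4, aci ~ 16.8,
# asub ~ 6.4e12; note the minus sign convention for the dust factor)
l = [9.45, 8.82, 22.85]
a_mean = [0.60, 1.23, -12.80]
s_eff = [0.18, 0.14, 0.15]
m_opt, sig_m, a_opt = optimal_mass(l, a_mean, s_eff)
```

By construction, $l_i + a_i$ is then identical for all three tracers, which is the $\Mh^{\rm CO}=\Mh^{\rm C\,I}=\Mh^{\rm dust}$ property noted above.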
\section{Trends in the luminosity ratios}
\label{correlationsS}
As our cross-calibration process relies on measurements of the
luminosity ratios, it is first instructive to look at the trends in
these observables to better understand any subsequent trends in the
derived conversion factors.
Histograms of the tracer ratios are shown in Fig.~\ref{LhistsF}, split
by factors such as lensing, redshift, SFR, and other notable
quantities. The correlations of the three tracer luminosities are
shown in Fig.~\ref{LcorF}, where the various samples are colour coded
and labelled and in each panel the blue line and shaded region
represent the best fit and $2\sigma$ error interval. Fitting was
performed using our own Orthogonal Distance Regression (ODR) method, which includes $x$ and $y$ errors,
intrinsic scatter as a third parameter, and covariance in errors where
required. The method is described in detail in
Appendix~\ref{ODRS}. Fit parameters (slope $m$, intercept $c$, and
intrinsic scatter $\ln \lambda$) are listed in Table~\ref{fitsT}
and statistics for the various subsets from Fig.~\ref{LhistsF} are given
in Table~\ref{hypT}. It is instructive to look at these two plots
together for the same luminosity pairs.
\subsection{\lsub\ vs.\ \lcoa}
\label{L850LCOS}
The left-hand column of Fig.~\ref{LhistsF} and Fig.~\ref{LcorF}(a)
show the quantities \lsub\ vs.\ \lcoa. There are no significant differences in the distribution of
\lsub/\lcoa\ with lensing, redshift or SF-mode, but the observed
uncorrected ratio is significantly higher for galaxies with log \lcoa\
$<8.9$ (Fig.~\ref{LhistsF}; left green histogram). These log \lcoa\ $<8.9$
galaxies tend to have optical disks dominated by atomic hydrogen
(\fhi\ $>1$) and the likely increased contribution to \lsub\ from dust
associated with H\,{\sc i} rather than \mol\ results in the offset to
higher \lsub/\lcoa\ ratios. For local galaxies with \fhi\ $>1$ we
apply a correction (Appendix~\ref{HIS}) which appears to remove this
offset (cyan diamonds in Fig.~\ref{LcorF}(a)). Galaxies with log \lcoa\
$<8.9$ from the VALES sample at $0.04<z<0.3$
\citep{Villanueva2017,Hughes2017} have noticeably higher \lsub/\lcoa\
ratios than VALES galaxies with log \lcoa\ $>8.9$ (peach circles in
Fig.~\ref{LcorF}(a) and Table~\ref{hypT}). There are no published
H\,{\sc i} measurements for VALES, but we suspect that the lo-VALES
sample with log \lcoa\ $<8.9$ is likely to be H\,{\sc i}-rich, based on
the similarity between the log \lcoa\ $<8.9$ and \fhi\ $>1$ categories in
the third panel. We therefore exclude lo-VALES from our averages, as
we suspect they are in need of a correction for H\,{\sc i} but we have
no means to apply one. We also recommend that a low \lcoa\ requires
careful consideration of H\,{\sc i}-related dust. In
Fig.~\ref{LcorF}(a), the tracers show a linear dependence, regardless
of exactly which galaxies are included (Table~\ref{fitsT}). The
lo-VALES galaxies are excluded from all the fits.
\subsection{\lsub\ vs.\ \lci}
The central column of Fig.~\ref{LhistsF} and Fig.~\ref{LcorF}(b) show
\lsub\ vs.\ \lci, the pair with the least scatter
($\ln \lambda$ in Table~\ref{fitsT}). There are no significant differences in the distributions of \lsub/\lci\ as a function of lensing, redshift or SF-mode. The \lsub--\lci\ sample has
12 galaxies with large angular sizes from J19 that have only part of
their disk detected in \CI, denoted \CIcor. To compare \lci\ and
\lsub\ for these galaxies, we need to apply an aperture correction to
their \CI\ fluxes, thereby assuming that their \lci/\lcoa\ ratios
remain roughly constant across the disk (see
Appendix~\ref{J19S}). Some of the correction factors are very large
(up to 0.7 dex); even after correction, their \lsub/\lci\ ratios are
significantly offset from the rest of the \ms\ sample (green
histogram). Fig.~\ref{LcorF}(b) shows \lsub\ vs.\ \lci, with the
\CIcor\ galaxies as pink diamonds. The fit to all galaxies including
the \CIcor\ subset is shown as a dotted blue line; its slope is
sub-linear at the $3\sigma$ level, although the deviation from
linearity is small ($m=0.937\pm0.020$). Excluding the \CIcor\ galaxies from the fit
leaves a linear relationship, shown by the blue solid line. At first
glance, this and the green histogram in Fig.~\ref{LhistsF} suggest
that our CO-based corrections are insufficient; however,
Fig.~\ref{LcorF}(d) shows that the problem is not simply the
assumption used to correct \lci\ to match the global \lsub. This plot
shows the dust emissivity cross-section, \cs, equivalent to \lsub\ but
with the temperature sensitivity removed\footnote{\cs\ is derived from
the data provided in \citet{Jiao2021} by multiplying the dust mass
in the \CI\ aperture by the \kd\ used in their method, \kd\ =
0.034\,\kunit\ \citep{draine2003,Draine2007}.}. Importantly, this quantity is
measured in the same aperture as \lci. The \CIcor\ galaxies are shown
as pink stars, and they -- along with many other low-luminosity
galaxies in J21 -- still appear to have less \CI\ emission for a given
amount of dust.
\begin{table*}
\caption{Parameters of robust ODR fits between variables using
MCMC, co-variant errors and including intrinsic scatter, $\ln \lambda$.}
\begin{adjustbox}{center}
\begin{tabular}{llcrrrrlc}
\toprule
\multicolumn{1}{l}{Log $x$}&\multicolumn{1}{l}{Log
$y$}&\multicolumn{1}{c}{Group}&\multicolumn{1}{c}{$m$}&\multicolumn{1}{c}{$c$}&\multicolumn{1}{c}{ln
$\lambda$}&\multicolumn{1}{c}{$r_{\rm
s}$}&\multicolumn{1}{c}{$p$}&\multicolumn{1}{c}{$N$}
\\
\midrule
\lcoa & \lci & BL & 1.023 (0.029) & $-0.93$ (0.29) & $-2.09$ (0.09) & 0.94& & 109\\
\lcoa & \lci & BL+\CIcor & 1.078 (0.026) & $-1.49$ (0.25) & $-2.03$ (0.09) & 0.96 & & 121\\
\lcoa & \lci & \Lhi & 0.950 (0.035) & $-0.18$ (0.37) & $-2.07$ (0.10) & 0.92 & &97\\
\midrule
\lci & \lsub & BL & 0.976 (0.024) & 14.33 (0.23) & $-2.11$ (0.09) & 0.95 && 140\\
\lci & \lsub & BL+\CIcor & 0.937 (0.020) & 14.71 (0.19) & $-2.08$ (0.08) & 0.96 & &152\\
\lci & \lsub & \Lhi & 1.024 (0.030) & 13.86 (0.30) & $-2.17$ (0.09) & 0.94 & &128\\
\lci & \cs & BL & 1.024 (0.025) & $-2.28$ (0.24) & $-2.05$ (0.08) & 0.95 & & 140\\
\lci & \cs & BL+\CIcor & 0.997 (0.021) & $-2.01$ (0.20) & $-2.07$ (0.08) & 0.96 & & 152\\
\midrule
\lcoa & \lsub & BL & 1.003 (0.015) & 13.42 (0.15) & $-2.01$ (0.05) & 0.96 & & 326\\
\lcoa & \lsub & \fhi<1 & 1.002 (0.017) & 13.43 (0.16) & $-1.99$ (0.05) & 0.96 & & 310\\
\lcoa & \lsub & \Lhi & 0.983 (0.026) & 13.63 (0.26) & $-1.91$ (0.06) & 0.93 & & 226\\
\midrule
\Lir & \lci/\lcoa & BL+\CIcor & 0.071 (0.02) & $-1.57$ (0.23) & $-1.70$ (0.09) & 0.26 & 0.005 & 121\\
\Lir & \lci/\lcoa & BL & 0.034 (0.02) & $-1.11$ (0.25) & $-1.70$ (0.09) & 0.09 & 0.34 & 109\\
\midrule
\td & \lci/\lcoa & BL+\CIcor & 1.23 (0.23) & $-2.60$ (0.35) & $-2.20$ (0.12) & 0.31 & & 115\\
\td & \lci/\lcoa & BL & 0.83 (0.28) & $-2.0$ (0.4) & $-2.00$ (0.15) & 0.15 & 0.12 & 103\\
\midrule
\Lir/\lsub & \mwtd & & 0.216 (0.010) & 3.90 (0.12) & $-3.76$ (0.15) & 0.82 && 152\\
\Lir & \mwtd & & 0.070 (0.004) & 0.60 (0.05) & $-3.10$ (0.10) & 0.80 && 152\\
\midrule
\Lir & \asub & BL & 0.045 (0.007) & 12.271 (0.082) & $-3.36$ (0.30) & 0.46 && 230\\
\Lir & \aco & BL & 0.59 (0.09) & $-0.91$ (1.10) & $-1.00$ (0.50) & 0.46 && 230\\
\Lir & \aci & BL & $-0.052$ (0.010) & 1.896 (0.124) & $-3.90$ (0.20) & $-0.48$ && 82\\
\Lir & \Xci & BL & $0.028$ (0.011) & $-5.136$ (0.133) & $-3.70$ (0.23) & 0.29 & 0.008 & 82\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{Notes: $y=mx+c$ fit parameters are given with 1$\sigma$
errors in parentheses. Parameters are calculated accounting for the
errors in both $x$ and $y$ using the robust orthogonal distance
regression described in Appendix~\ref{ODRS}. Errors are sampled
using the {\sc emcee} MCMC sampler. Intrinsic scatter ($\lambda$) is
fitted as a third parameter. $r_{\rm s}$ is the Spearman rank
correlation coefficient, and $p$ is the probability, shown when
$p>0.005$. $N$ is the number of galaxies in that
regression. `Group' defines the galaxies on which the regression is
performed: BL = baseline (excludes \CIcor\ and lo-VALES galaxies),
while galaxies with \fhi>1 are corrected as described in
Appendix~\ref{HIS}.}
\label{fitsT}
\end{table*}
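The fits in Table~\ref{fitsT} can be illustrated with a simplified stand-in for the robust ODR machinery of Appendix~\ref{ODRS}: a maximum-likelihood straight-line fit in which the $x$-errors are propagated through the slope and the intrinsic scatter $\lambda$ is a third free parameter. The Python sketch below uses synthetic data with illustrative errors and slope values (it omits the robust re-weighting and {\sc emcee} sampling of the full method):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(theta, x, y, sx, sy):
    """y = m*x + c with Gaussian intrinsic scatter lambda = exp(ln_lam).
    The x-errors are folded through the slope -- an approximation to
    full orthogonal-distance regression."""
    m, c, ln_lam = theta
    var = sy**2 + (m * sx)**2 + np.exp(2.0 * ln_lam)
    resid = y - (m * x + c)
    return 0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))

# Synthetic data: slope 1, intercept -0.9, ln(lambda) = -2.1 (illustrative)
rng = np.random.default_rng(0)
n = 200
x_true = rng.uniform(8.5, 11.5, n)
y_true = x_true - 0.9 + rng.normal(0.0, np.exp(-2.1), n)
sx = np.full(n, 0.05)
sy = np.full(n, 0.05)
x = x_true + rng.normal(0.0, sx)
y = y_true + rng.normal(0.0, sy)

res = minimize(neg_log_like, x0=[0.9, 0.0, -1.5], args=(x, y, sx, sy),
               method="Nelder-Mead")
m_fit, c_fit, ln_lam_fit = res.x
```

Fitting $\ln\lambda$ rather than $\lambda$ keeps the scatter positive without a bound constraint; the recovered slope and intrinsic scatter should match the inputs to within the statistical uncertainty.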
\begin{table*}
\caption{Two-sample KS-test results and Z-test statistics for the parameter pairs shown in Figs~\ref{LhistsF} and \ref{tdhistF}.}
\begin{adjustbox}{center}
\begin{tabular}{@l^l^c^l^c^c^c^c^c}
\toprule
Quantity & $A$ & $N_{\rm A}$ & $B$ & $N_{\rm B}$ & $\bar{A}$ & $\bar{B}$ & $Z(\sigma)$ & $P_{\rm KS}$\\
\midrule
\lsub/\lcoa$^\dag$ & log \lcoa<8.9 & 50 & \lcoa>8.9 (MS) & 138 & $13.600\pm 0.030$ & $13.420\pm 0.016$ & 5.3 & 3e-5\\
\lsub/\lcoa$^\dag$ & \fhi<1 (MS) & 168 & \fhi>1 (MS) & 24 & $13.456\pm 0.014$ & $13.685\pm 0.042$ & 5.2 & 3e-6\\
\lsub/\lcoa$^\dag$ & lo-VALES & 7 & \fhi>1 & 17 & $13.741\pm 0.061$ & $13.663\pm 0.053$ & & 0.57\\
\lsub/\lcoa & \fhi<1 (MS) & 168 & \fhi>1 (MS$^{\ast}$) & 17 & $13.456\pm 0.014$ & $13.448\pm 0.039$ & & 1.0\\
\lsub/\lci & MS & 61 & \CIcor & 12 & $14.108\pm 0.027$ & $14.36\pm 0.036$ & 5.6 & 6e-4\\
\lci/\lcoa & $z<3$ (SMGs) & 37 & $z>3$ & 11 & $-0.686\pm 0.031$ & $-0.512\pm 0.044$ & 3.2 & 0.012\\
\lcoa/\lci & MS & 55 & SMGs & 54 & $-0.743\pm 0.028 $ & $-0.651\pm0.028$ & 2.3 & 0.056\\
\lcoa/\lci & MS+\CIcor & 66 & SMGs & 54 & $-0.786\pm0.026$ & $-0.651\pm0.028$ & 3.5 & 0.006 \\
\midrule
\td & MS & 174 & SMGs & 160 & $31.1\pm0.4$ & $38.3\pm0.7$ & 8.8 & 2e-12\\
\mwtd & MS & 82 & SMGs & 52 & $23.0\pm0.4$ & $30.1\pm0.7$ & 8.8 & 7e-13\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{$^{\dag}$Using \lsub\ without correction for \fhi>1 (as
this is the driver of the difference).\\
$^{\ast}$Not including the lo-VALES galaxies and with the \HI\ correction applied. }
\label{hypT}
\end{table*}
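The statistics in Table~\ref{hypT} are straightforward to reproduce in outline: a Z-test on the difference of the subsample means and a two-sample KS test on the full distributions. A minimal Python sketch with synthetic numbers loosely modelled on the \fhi\ comparison (means, scatters and sample sizes are illustrative, not the real measurements):

```python
import numpy as np
from scipy import stats

# Two synthetic subsamples of a log luminosity ratio (illustrative values)
rng = np.random.default_rng(1)
a = rng.normal(13.46, 0.18, 168)   # e.g. galaxies with fhi < 1
b = rng.normal(13.69, 0.20, 24)    # e.g. galaxies with fhi > 1

# Z-test: difference of means in units of the combined standard error
se_a = a.std(ddof=1) / np.sqrt(a.size)
se_b = b.std(ddof=1) / np.sqrt(b.size)
z = abs(a.mean() - b.mean()) / np.hypot(se_a, se_b)

# Two-sample KS test compares the full cumulative distributions
p_ks = stats.ks_2samp(a, b).pvalue
```

The Z-test is sensitive only to a shift in the mean, whereas the KS test responds to any difference between the distributions; quoting both, as in the table, guards against the two subsamples differing in shape rather than location.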
\subsection{\lci\ vs.\ \lcoa}
The right-hand column of Fig.~\ref{LhistsF} and Fig.~\ref{LcorF}(c)
show \lci\ vs.\ \lcoa. There are no significant differences in \lci/\lcoa\ for strongly lensed vs. unlensed sources,
but the highest redshift, $z>3$, galaxies have higher \lci/\lcoa\
ratios at marginal significance ($p=0.012$). There are only 11
galaxies at $z>3$ and a larger sample is needed to determine if this
is a genuinely significant trend. Fig.~\ref{LcorF}(c) shows the
\CIcor\ galaxies as pink diamonds, where the \CI\ and CO fluxes are
measured in the same apertures by J19. The solid line shows the fit to
all galaxies, whose slope is super-linear at the 3$\sigma$ level
($m=1.078\pm0.026$). The slope becomes consistent with unity once the \CIcor\
galaxies are removed. The green histogram in Fig.~\ref{LhistsF}
(right) shows more clearly why we see this: the \CIcor\ galaxies have
significantly lower \lci/\lcoa\ ratios compared to other galaxies at
similar or higher luminosity. The bottom histogram shows a marginal
difference between galaxies with different SFRs ($p=0.056$) when
excluding the \CIcor\ galaxies, which becomes significant when they
are included ($p=0.006$).
\subsection{Resolved \CI\ fluxes from Herschel FTS mapping}
\label{J19S}
The local resolved galaxies observed with {\it Herschel} FTS \citep{Jiao2019} lie off the global trends seen in Fig.~\ref{LcorF}. There are possible physical explanations for why lower luminosity and more quiescently star-forming galaxies might have lower \CI/CO line ratios (for example, different ISM environments in terms of their position in the CR energy density vs.\ average molecular gas density diagram: see Figure 1 in \citealp{Bisbas2015}). Low ratios of \lci/\lcoa\ have been found in other studies, most intriguingly in the case of the interacting LIRG NGC~6052 using ALMA \citep{Michiyama2020}, and in some high-$z$ strongly lensed sources \citep{Harrington2021}. Such ratios tend to be unusual in higher luminosity samples, however, whereas the resolved FTS sample has a very low {\em average} for the \CI\ line ratios with both CO and dust.
We recommend caution in the interpretation of the data for these resolved FTS sources because another team subsequently presented the same data but drew different conclusions \citep{Crocker2019}. We can therefore only note
that the \CI\ fluxes from {\it Herschel} FTS mapping datasets are not always consistent when analysed by different teams.\footnote{\citeauthor{Crocker2019}
did not provide integrated fluxes, nor a method to determine them from their published measurements; hence, we cannot use their work directly in our analysis. Q.~Jiao has provided us with the maps used in J19, enabling us to check the measurements independently and extend our analysis, but we have had no responses to our requests for integrated fluxes or the details of the method used from the Crocker team.} As these resolved FTS measurements are
essentially the only source of \CI\ data at \Llo, and carry a lot of weight in \Lir\ and SFR correlations, we chose not to include the \CIcor\ galaxies in the statistical analysis. If, instead, the J19 measurements are taken at face value, they signpost a fundamental physical change in \CI\ properties, a finding which clearly warrants further study with ground-based facilities. We discuss possible
physical mechanisms for changes in the \lci/\lcoa and \lci/\lsub\ ratios in Appendix~\ref{J19A}.
\subsection{Trends with global indicators of star formation}
Finally, we check to see if any of the tracer ratios are sensitive to
SFR indicators. In our data-set the dust observables \Lir\ and \td\ are indicators of the intensity and magnitude
of star formation in galaxies
\citep[e.g.][]{Kennicutt1998,Foyle2012,Liu2021}. As expected, the
distribution of \td\ is very different for the MS galaxies and SMGs
(Fig.~\ref{tdhistF} and Table~\ref{hypT}), reflecting the increase in
the intensity of star formation in the SMGs. The only tracer ratio
sensitive to these SF indicators is \lci/\lcoa, which in Fig.~\ref{tdcorrF} is seen to increase with
\Lir\ and \td\ when all galaxies are considered. Such a trend was not
reported for smaller samples over a more limited range of luminosity
\citep[e.g.][]{Jiao2017,Jiao2019}, presumably because of limited
statistics. However, if the \CIcor\ galaxies (pink diamonds) are
excluded, the correlation all but disappears (blue dotted line).
Naively, we might expect \lci/\lcoa\ to rise with increasing SFR intensity, due to the expected destruction of CO by cosmic rays (CR) in high-SFR environments \citep[e.g.][]{Bisbas2015}. For a given range of \mol\ densities in the typically hierarchical molecular clouds, any increased CR-induced ionisation rate, \zcr\ (due to a rising average CR energy density, $U_{\rm CR}$), will destroy CO in the lower-density, more extended areas, while leaving CO still tracing \mol\ in the more compact, denser regions (see Figure 1 of \citealp{Papadopoulos2018} for a visualisation of this effect). Intriguingly, the gas density, $n$(\mol), and the CR ionisation rate, \zcr, will compete against each other in ULIRG/SMG environments, with the higher $\langle n \rangle$ expected in their highly turbulent ISM
tending to keep the ordinary CO/\CI\ chemistry in place, even when
exposed to the higher \zcr\ values.\footnote{We here assume that the CR energy density $U_{\rm CR}\propto \rho_{\rm SFR}$ and the CR ionisation rate $\zcr \propto U_{\rm CR}$.}
Guessing which one will win this highly non-linear competition
(see Fig 1, 8 in \citealp{Bisbas2015}) is dangerous in the absence of CO and \CI\ line data. These effects have been probed with a variety of simulations \citep[e.g.][]{Bisbas2015,Bisbas2021,Clark2019ci,Gong2020} and while showing similar trends, they are not easily parameterisable in terms of $n$(\mol) and \zcr; one reason why such cross-calibration efforts of the available gas mass tracers are so important.\footnote{On an individual galaxy basis one could assemble well-sampled CO, $^{13}$CO and \CI\ line SLEDs and overcome these problems with detailed analysis \citep[e.g.][]{PPP6240}. However, even in the ALMA era this remains very expensive in terms of telescope time making it prohibitive for large samples of galaxies.}
The other two tracer ratios show no trends with either \Lir\ or \td\
(our proxies for SFR); we present the relevant plots in
Appendix~\ref{tdcorrAF} for completeness.
\section{Results}
\label{caltrendS}
In this section we present the results of the optimisation method,
firstly for the daX sample, for which we have all three gas
tracers, and later for the other three samples, for which we have
pairs of tracers. We investigate trends of the conversion factors
with \Lir\ and SFR. Mean values for the conversion factors are
listed in Table~\ref{caloptT}.
Fig.~\ref{caldaXF} shows the results for the daX sample
(Figs~\ref{calXdF}--\ref{calaodF} present the same results for each of
the samples in turn). The top row of each plot shows the
distributions of the relevant physical parameters for \CI\ and dust, and of the conversion factor for CO: \Xci, \kh\ and \aco.
The lower left panels show the same quantities for the individual
galaxies as a function of \Lir; each panel indicates a reference
measure to give context. For \Xci, the horizontal lines indicate the
measured extremes found in the local Universe: Orion A/B clouds in the
Milky Way \citep{Ikeda2002} and the starburst centre of M\,82
\citep{White1994}, while the grey shaded region shows the range of
values inferred from observations of GRB hosts and QSO absorbers for
solar metallicity by \citet{Heintz2020}. They use a method which does
not rely on emission measures of dust, CO or \CI\ and so can be
considered independent. For \aco, the horizontal lines indicate the
typical \aco\ for the Milky Way \citep{Bolatto2013} and that commonly
adopted\footnote{In this panel, \aco\ does not include the factor 1.36
for He.} for ULIRGs and SMGs \citep{Downes1998}. For \kh, we show a
shaded band indicating the range derived for local galaxies (see
Table~\ref{kappalitT}), along with lines showing the value for the
most diffuse and dense sight lines in the Milky Way
\citep{Remy2017}. The right lower panels show the running log-means as
a function of \Lir\ to make it easier to see any trends, and
additionally includes\footnote{As elsewhere, the empirical parameters
\aci\ and \asub\ include the factor 1.36 for He.} the empirical
parameters, \aci\ and \asub. The solid shaded bins are the means for
the grey points, which are those used to determine the calibration;
the yellow points are \CIcor\ and the semi-transparent pentagon is the
mean of those -- see \S\ref{J19S} for more details.
Fig.~\ref{caldaXF} shows that for galaxies with \Lhi\ there are only
weak trends of the conversion factors with \Lir. While the
normalisation ($\kh^{\rm N}=1884$\,\khunit) was chosen to produce average dust
properties consistent with the Milky Way and other nearby spirals, the
CO and \CI\ conversion factors derived from the luminosity ratios
also lie within the ranges expected from independent studies. The
averages at \Llo\ are based on only a small number of points (12) and
more \CI\ studies are required to probe quiescent local galaxies.
\begin{table*}
\caption{Mean optimised conversion factors for our various samples.}
\begin{adjustbox}{center}
\begin{tabular}{clcccccccc}
\toprule
Sample & Selection & $N$ & \Xci & $\sigma_{\bar{X}}$ & \aco &
$\sigma_{\bar{\alpha}}$ & \kh & $\sigma_{\bar{\kappa}}$ & \gdr \\
\cmidrule(lr){4-5}
\cmidrule(lr){6-7}
\cmidrule(lr){8-9}
& & & \multicolumn{2}{c}{/$\times 10^{-5}$} & \multicolumn{2}{c}{\aunit} & \multicolumn{2}{c}{\kunit} & \\
\midrule
daX & \Lhi & 90 & $1.59^{+0.45}_{-0.38}$ & 0.04 & $2.66^{+0.96}_{-0.70}$ & 0.10 & $1990^{+738}_{-607}$ & 86 & 141\\ \addlinespace[1pt]
daX & \Llo & 12 & $1.18^{+0.60}_{-0.29}$ & 0.13 & $2.44^{+0.56}_{-0.51}$ & 0.17 & $1571^{+732}_{-525}$ & 163 & 112\\ \addlinespace[1pt]
Xd & \Lhi & 128 & $1.59^{+0.47}_{-0.35}$ & 0.04 & & & $1946^{+654}_{-464}$ & 53 & 138\\
\addlinespace[1pt]
Xd & \Llo & 12 & $1.24^{+0.57}_{-0.27}$ & 0.13 & & & $1503^{+604}_{-369}$ & 145 & 107\\
\addlinespace[1pt]
ad & \Lhi & 240 & & & $3.08^{+1.32}_{-0.81}$ & 0.07 & $1936^{+658}_{-504}$ & 45 & 137\\
\addlinespace[1pt]
ad & \Llo & 88 & & & $3.52^{+0.95}_{-0.84}$ & 0.10 & $1718^{+502}_{-339}$ & 44 & 122\\
\midrule
Xa & \Lhi & 97 & $1.61^{+0.39}_{-0.31}$ & 0.04 & $2.57^{+0.71}_{-0.62}$ & 0.08 & & &\\
\addlinespace[1pt]
Xa & \Llo$+$\CIcor & 24 & $1.30^{+0.2}_{-0.23}$ & 0.05 & $1.88^{+0.41}_{-0.34}$ & 0.10 & & &\\
\addlinespace[1pt]
Xa & \Llo & 12 & $1.37^{+0.34}_{-0.31}$ & 0.09 & $2.11^{+0.18}_{-0.57}$ & 0.15 & & &\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{Means of the optimal conversion parameters (\Xci, \aco,
\kh) and the error on the mean ($\sigma_{\bar{X}}$,
$\sigma_{\bar{\alpha}}$, $\sigma_{\bar{\kappa}}$) for each
subset. We calculate the log-mean and express it here in linear
form. We also report the gas-to-dust ratio, \gdr, for a
fiducial \kd=0.071\,\kunit. We use two normalisations: where dust is
one of the tracers, we use $\kappa^{\rm N}=1884$\,\khunit (equivalent to Milky Way
$\gdr=135$ for $\kd=0.071$\,\kunit); otherwise, for the Xa
sample we use $\Xci^{\rm N}=1.6\times10^{-5}$ -- the mid-range of the
values found by \citet{Heintz2020} for solar metallicity. The errors
are the 16th and 84th percentiles of the distribution. The \CIcor\
and lo-VALES galaxies are removed for analysis and the variances are
derived from the same set.}
\label{caloptT}
\end{table*}
\subsection{A calibration for the gas masses}
\label{prescS}
We next give a prescription for estimating gas mass, tailored to how
many tracers are available and -- where appropriate -- the type of
galaxy being investigated.
\subsubsection{Dual-band}
While the information content is greatest for the daX sample, which
has three tracer pairs to optimise, the method presented in
\S\ref{optS} still improves the cross-calibration for samples which
have two tracer measurements, i.e.\ one pair. The results for the Xd,
Xa and ad samples are shown in Figs~\ref{calXdF}--\ref{calaodF} and
behave similarly to the daX sample, as one would hope given that the
daX galaxies are a subset of the others. The pink diamonds in the
lower-left panels in Figs~\ref{calXdF} and \ref{calXaF} denote the
\CIcor\ galaxies. The cyan diamonds in the lower-left panel of
Fig.~\ref{calaodF} are galaxies with \fhi>1 which have been corrected
for the contribution of dust mixed with the \HI\ gas, as described in
Appendix~\ref{notesS}. The open peach circles are the lo-VALES
galaxies, which we suspect to have \fhi>1 (see \S\ref{L850LCOS}) but
which we cannot correct. We do not include these in any averages or
histograms.
The method previously used in the literature
\citep[e.g.][]{AZ2013,Scoville2016,Orellana2017,Hughes2017,Valentino2018}
has been to assume one tracer in a pair (e.g.\ \lcoa) has a known
conversion (\aco), then to fix that factor for all
galaxies in order to estimate the second (i.e.\ the one of
interest). We show this simple method alongside our optimal method as
grey lines and dashed grey error bars in the relevant panels of
Figs~\ref{calXdF}--\ref{calaodF}. The scatter in the conversion
factors for the optimised estimates is governed by the intrinsic
scatter we inferred in our analysis of the data in
Appendix~\ref{pairwiseS}, free of assumptions. In contrast, the simple
method prescribes that there is no scatter in the known conversion
factor, so all of the intrinsic scatter in the luminosity ratio is
attributed to the second conversion factor of interest. The optimised
method presented here
does not assume an {\it ad hoc} preference for any particular
conversion factor: as it is based on empirical variance analysis, it
uses more of the available information to improve the accuracy of the
estimated conversion factors. The histograms in
Figs~\ref{calXdF}--\ref{calaodF} show that the scatter in the factor
of interest is larger when using the simple method, and the trends in
the running medians are also more exaggerated.
Comparing the parameter estimates using three tracers to those using
two tracers for the same galaxies allows us to test the accuracy of
these two-tracer estimates. The details are in Appendix~\ref{testsA},
but in summary there is a reasonable correlation between the
three-tracer and two-tracer estimates, without bias
(Fig.~\ref{acocompF}) and an average scatter of 0.06--0.08 dex.
Thus, when multiple \mol\ tracers are available, we recommend
the procedure outlined in the example below.
\paragraph*{Example:}
Take the example of a galaxy with observations of both dust and CO. We
take the mean value for the appropriate pair combination from
Table~\ref{methodT}: \aco/\kh=0.00133. For our adopted sample mean
normalisation of $\kappa^{\rm N} = 1884$\,\khunit, we now infer the sample mean
expectation value of $\langle\alpha\rangle= 0.00133 \times 1884 = 2.5$
(excluding He). The sample means, $\kappa^{\rm N}$ (assumed) and
$\langle\alpha\rangle$ (derived), are next used to estimate an initial
gas mass for our galaxy in each of the two tracers, \lsub\ and \lcoa.
\begin{align}
M_{\kappa} &= \frac{\kappa^{\rm N} \lsub}{4\pi B(\nu_{850},\mwtd)},\\
M_{\alpha} &= \langle\alpha\rangle \lcoa.
\end{align}
\noindent
Next, we calculate the effective standard deviation by adding the
observational error on the tracer luminosities in quadrature to the
intrinsic scatter for $\alpha$ and $\kappa$.
\begin{align}
\sigma^{\kappa}_{\rm eff} &= \sqrt{s_\kappa^2 + \sigma_{850}^2},\\
\sigma^{\alpha}_{\rm eff} &= \sqrt{s_\alpha^2 + \sigma_{\rm CO}^2},
\end{align}
where $\sigma_{850}$, $\sigma_{\rm CO}$ are the errors on $\log (\lsub)$ and
$\log (\lcoa)$, and $s_{\kappa}$ and $s_{\alpha}$ are the intrinsic
scatter on $\log (\kh)$ and $\log (\aco)$ listed in Table~\ref{methodT}.
\noindent
The optimal \mol\ mass estimate is then calculated thus:
\begin{equation}
M^{\rm opt} = \frac{M_{\kappa}/\sigma^2_{\rm \kappa, eff} + M_{\alpha}/\sigma^2_{\rm \alpha, eff}}{1/\sigma^2_{\rm \kappa, eff} + 1/\sigma^2_{\rm \alpha, eff}}
\end{equation}
We now work back to find the optimal conversion parameters for this galaxy:
\begin{align}
\kh^{\rm opt} &= \kappa^{\rm N} M^{\rm opt}/M_{\kappa},\\
\aco^{\rm opt} &= \langle\alpha\rangle M^{\rm opt}/M_{\alpha}.
\end{align}
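The steps above amount to an inverse-variance weighted average of the two initial mass estimates. A minimal numerical sketch, with hypothetical inputs for a single galaxy (the masses, scatters and errors below are illustrative only):

```python
# Hypothetical inputs for one galaxy (all numbers illustrative):
kappa_N = 1884.0                 # adopted normalisation, kappa^N
alpha_mean = 2.5                 # <alpha> = 0.00133 * kappa^N, excluding He
M_kappa = 2.6e10                 # Msun: initial estimate from L850 via kappa^N
M_alpha = 2.1e10                 # Msun: initial estimate from L'CO via <alpha>

# Effective variances: intrinsic scatter plus log-luminosity errors in quadrature
s_kappa, s_alpha = 0.10, 0.12    # illustrative intrinsic scatters (dex)
sig_850, sig_co = 0.05, 0.08     # illustrative observational errors (dex)
var_k = s_kappa**2 + sig_850**2
var_a = s_alpha**2 + sig_co**2

# Inverse-variance weighted optimal mass estimate
M_opt = (M_kappa / var_k + M_alpha / var_a) / (1.0 / var_k + 1.0 / var_a)

# Back out the per-galaxy optimal conversion factors
kappa_opt = kappa_N * M_opt / M_kappa
alpha_opt = alpha_mean * M_opt / M_alpha
```

Because the dust route has the smaller effective variance in this example, the optimal mass lands closer to $M_{\kappa}$ than to $M_{\alpha}$, and the back-solved conversion factors shift accordingly.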
\subsubsection{Single band: empirical conversion factors}
\label{empS}
\begin{table}
\caption{Empirical conversion factors recommended
for use when only a single gas tracer observation is in hand. These
include a factor 1.36 to account for He.}
\begin{adjustbox}{center}
\begin{tabular}{lccc}
\toprule
Sample & \asub ($\times10^{12}$) & \aco & \aci \\
& W Hz$^{-1}$ \msun$^{-1}$ & \multicolumn{2}{c}{\aunit}\\
\midrule
\Llo & $5.8\pm0.1$ & $4.7\pm0.1$ & -- \\
\Lhi & $6.9\pm0.1$ & $4.0\pm0.1$ & $17.0\pm0.3$\\
MS & $6.2\pm0.1$ & $4.4\pm0.1$ & $19.1\pm0.6$\\
SMGs & $7.3\pm0.1$ & $3.8\pm0.1$ & $16.2\pm0.4$\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{Values are the weighted means and errors from each
of the daX, ad and dX samples which all have the same
normalisation of $\kh^{\rm N}=1884$\,\khunit. Low luminosity (\Llo) values
are based only on the ad sample due to the small numbers of
reliable \CI\ measurements in this luminosity range. The differences in the
weighted means for SMGs and MS galaxies are significant in
all cases, but are likely driven by the trends with
luminosity seen in Figs~\ref{caldaXF}, \ref{calaodF} and
\ref{empLF}. A more accurate approach is to adopt one of the
relationships from Table~\ref{calfitsT}.}
\label{prescT}
\end{table}
\begin{table*}
\caption{Log means and statistical tests for conversion factors for
MS galaxies and SMGs.}
\begin{adjustbox}{center}
\begin{tabular}{@l^c^c^c^c^c^c^c^c}
\toprule
\CI & Sample & $\Xci{\rm (MS)}$ & $\Xci{\rm (SMG)}$ & $\aci{\rm (MS)}$ &
$\aci{\rm (SMG)}$& $Z(\sigma)$ & $P_{\rm KS}$ & $P_{\rm KS}$\,(25\,{\sc k}) \\
\midrule
& daX & {\boldmath $1.41\pm0.07$} & {\boldmath $1.70\pm 0.06$} & {\boldmath $19.1\pm0.9$} & {\boldmath $15.8\pm0.5$} & {\bf 3.1} & {\bf 0.016} & 0.074\\
& Xd & $1.44\pm0.05$ & $1.65\pm0.05$ & $18.7\pm0.8$ & $16.4\pm0.5$ & 2.6 & 0.013 & 0.48\\
& Xa & {\boldmath $1.44\pm0.04$} & {\boldmath $1.64\pm 0.05$} & {\boldmath $18.7\pm0.6$} & {\boldmath $16.4\pm0.5$} & 3.0 & 0.028 & \\
\addlinespace[0.5em]
\cmidrule(r){1-1}
\cmidrule(lr){3-4}
CO & & $\aco{\rm (MS)}$ & $\aco{\rm (SMG)}$ & & & & & \\
\cmidrule(r){1-1}
\cmidrule(lr){3-4}
& daX & $2.56\pm0.14$ & $2.61\pm0.12$ & & & 0.3 & 0.35 & 0.017\\
& Xa & $2.45\pm0.12$ & $2.70\pm0.10$ & & & 1.6 & 0.05 &\\
& ad & {\boldmath $3.32\pm0.07$} & {\boldmath $2.87\pm0.09$} & & & {\boldmath $3.8$} & {\boldmath $0.0008$} & 0.04 \\
\addlinespace[0.5em]
\cmidrule(r){1-1}
\cmidrule(lr){3-4}
\cmidrule(lr){5-6}
Dust & & $\kh{\rm (MS)}$ & $\kh{\rm (SMG)}$ & $\asub{\rm (MS)}$ & $\asub{\rm (SMG)}$ & & & \\ \cmidrule(r){1-1}
\cmidrule(lr){3-4}
\cmidrule(lr){5-6}
& daX & $1758\pm90$ & $2039\pm113$ & $6.8\pm0.3$ & $7.7\pm0.3$ & 2.2 & 0.04 (0.25) & 0.76 \\
& Xd & {\boldmath $1757\pm66$} & {\boldmath $2025\pm58$} & $6.9\pm0.2$ & $7.4\pm0.2$ & 3.0 & 0.015 (0.41) & 0.58 \\
& ad & {\boldmath $1722\pm36$} & {\boldmath $1981\pm65$} & {\boldmath $6.0\pm0.1$} & {\boldmath $7.3\pm0.2$} & {\boldmath $3.6$} & {\boldmath $0.0017$} & 0.11\\
& & & & & & \boldmath$(6.5)$ & \boldmath$(4\times10^{-11})$ & \\
\bottomrule
\end{tabular}
\end{adjustbox}
\noindent{\flushleft We compare MS galaxies and SMGs for each subset
to look for differences in the parameters. The MS group excludes
\CIcor\ and lo-VALES galaxies, but it does include the \fhi>1
galaxies, after applying the correction from Appendix~\ref{HIS}. The
numbers in each subset are: daX: MS=46, SMG=55; Xd: MS=60, SMG=79;
Xa: MS=54, SMG=55; ad: MS=184, SMG=144. {\bf Bold} indicates
parameters which are significantly different between the MS galaxies
and SMGs in the Z-test and KS tests. The \CI\ parameters, \Xci\ and
\aci, are simply linked due to our adoption of constant $\Q=0.48$,
meaning that the distributions have the same KS results. The dust
parameters, \kh\ and \asub, are related to each other as a function of
\mwtd\ and so they can behave differently, e.g.\ \kh\ can be
indistinguishable between samples but the \asub\ can be significantly
different. Thus in the dust section, there are two $P_{\rm KS}$
values: those for \kh\ and then, in parentheses, those for
\asub. The final column, $P_{\rm KS}$(25\,{\sc k}), is the KS result
when \mwtd\ is fixed to 25\,{\sc k}, which makes
the distributions of \kh\ and \asub\ identical.}
\label{hyp2T}
\end{table*}
\begin{table*}
\caption{Fits to the conversion parameters and their tracer luminosities. All quantities include He.}
\begin{adjustbox}{center}
\begin{tabular}{ccccccc}
\toprule
$y$ & $x$ & $m$ & $c$ & $r_{\rm s}$ & $N$ & Sample\\
\midrule
log \aco & log \lcoa & $-0.062$ (0.009) & 1.25 (0.09) & $-0.33$ & 335 & ad\\
log \asub & log \lsub & $0.052$ (0.008) & 11.60 (0.18) & 0.37 & 335 & ad\\
log \aci & log \lci & $-0.052$ (0.013) & 1.73 (0.12) & $-0.3$ (0.0003) & 140 & Xd\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{Fits in the form $y=mx+c$ for the empirical
conversion parameters and their tracer luminosities. The
\CIcor\ galaxies are excluded from the fits but are shown in
the plots (Fig.~\ref{empLF}). $r_{\rm s}$ is the Spearman
rank correlation coefficient with probability of the null
hypothesis of no correlation in parentheses for
$p\geq0.0001$. The effects of co-variance in the errors have
been accounted for.}
\label{calfitsT}
\end{table*}
The three empirical conversion factors, \asub, \aci\ and \aco,
directly relate the observable tracer luminosity to a gas mass,
according to Eqn.~\ref{MhE}. If only one tracer (\lcoa, \lci\ or
\lsub) is available, the empirical conversion factor we have estimated in Table~\ref{prescT}
is the best choice. We adopt a convention that the empirical
parameters are referenced to \Mmol, which includes a factor 1.36 for
He.
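As a minimal numerical sketch of this single-tracer route (the CO luminosity below is a hypothetical input; the conversion factor is the MS value from Table~\ref{prescT}):

```python
alpha_co_ms = 4.4            # Msun (K km/s pc^2)^-1, MS value incl. He
L_co = 8.0e9                 # hypothetical L'CO(1-0) in K km/s pc^2
M_mol = alpha_co_ms * L_co   # molecular gas mass incl. He, in Msun (3.52e10)
```

Since the factor already includes the 1.36 correction for He, the result is \Mmol\ directly, not \Mh.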
\begin{table*}
\caption{Empirical calibration factors derived from
this study. \Mmol\ columns
include a factor of 1.36 for He.}
\begin{adjustbox}{center}
\begin{tabular}{lccccccc}
\toprule
\multicolumn{2}{l}{} & \multicolumn{3}{c}{\Mh} & \multicolumn{3}{c}{\Mmol} \\
\cmidrule(r){3-5} \cmidrule(l){6-8}
Sample & $N$ & \asub $\,(\times10^{12})$ & \aco & \aci & \asub $\,(\times10^{12})$ & \aco & \aci \\
& & $\rm{W\,Hz^{-1}\msun^{-1}}$ & \multicolumn{2}{c}{\aunit} & $\rm{W\,Hz^{-1}\msun^{-1}}$ & \multicolumn{2}{c}{\aunit} \\
\midrule
daX & 101 & $9.9\pm0.2$ & $2.6\pm0.1$ & $12.6\pm0.4$ & $7.3\pm0.2$ & $3.5\pm0.1$ & $17.2\pm0.5$\\
ad (\Lhi) & 240 & $9.1\pm0.1$ & $3.1\pm0.1$ & & $6.7\pm0.1$ & $4.2\pm0.1$ & \\
ad & 326 & $8.8\pm 0.1$ & $3.2\pm0.1$ & & $6.5\pm0.1$ & $4.3\pm0.1 $ & \\
Xd & 140 & $9.8\pm0.3$ & & $12.7\pm0.4$ & $7.2\pm0.2$ & & $17.3\pm0.5$\\
Xa & 109 & & $2.5\pm0.1$ & $12.5\pm0.3$ & & $3.4\pm0.1$ & $17.0\pm0.4$\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{The samples which include dust continuum used a normalisation of
$\kh^{\rm N}=1884$\,\khunit, while for the Xa sample, $\Xci^{\rm N}=1.6\times 10^{-5}$ was used. In this analysis we excluded the \CIcor\ and lo-VALES galaxies (see \S\ref{J19S}).}
\label{empT}
\end{table*}
Figs~\ref{caldaXF}--\ref{calaodF} show the empirical conversion
factors as a function of \Lir; Fig.~\ref{empLF} shows the empirical
conversion factors as a function of the tracer luminosity, and
Fig.~\ref{arghF} shows their distribution when the sample is split
into MS galaxies and SMGs (see Table~\ref{hyp2T} for details). All empirical factors show significant but shallow
correlations with the tracer luminosity. We have carefully accounted
for the co-variance between the $x$ and $y$ parameters when fitting,
so the correlations we find are not caused by the
involvement\footnote{The inclusion of the co-variance matrix in the
fit reduces the slope (closer to zero) by 0.05.} of \lsub, \lci\ and
\lcoa\ in the derivation of \asub, \aci\ and \aco.
Table~\ref{calfitsT} lists the fit parameters. The intrinsic scatter
in all of these relationships is very small once the measurement
errors are accounted for. Correlations are also seen between \Lir\ and
\asub, \aci\ and \aco (Fig.~\ref{calXdF}--\ref{calaodF}), albeit
with more scatter.
A correlation between \asub\ and \lsub\ was also noted by Sco16 (shown
as the red dashed line on our plot) but ours is somewhat shallower
($m=0.052$ vs $m=0.07$ from Sco16), although the difference is
unlikely\footnote{Sco16 do not quote an error on their fit, so it is
difficult to be certain, but the error on Sco16 would likely be
larger than ours, which means they are consistent to within
2$\sigma$.} to be significant. These shallow but significant
relationships with the tracer luminosity could be applied to give a
more accurate calibration (see Table~\ref{calfitsT}). The trends of
\aci\ and \aco\ with their respective tracers are explored here for
the first time with large samples.
\subsection{Discussion of empirical factors relative to the literature}
\label{litdiscS}
\subsubsection{Submillimetre dust empirical calibration, \asub}
The final calibration factors from this work are provided in Table~\ref{empT}.
A compilation of \asub\ values from our optimal method\footnote{There
is no significant difference if we fix $\aco$ to 4.3.} and those
from the literature, referenced to a common $\aco=4.3$ (the Galactic
value including He; \citealp{Bolatto2013}), is presented in
Table~\ref{asublitT}. Literature values cover the range $\asub =
3.6$--$10.1\times10^{12}$, comfortably within the range of estimates here:
$\asub^{\rm opt}=$ 6.5--$7.2\times10^{12}$. The lowest value,
$\asub =3.6^{+3.6}_{-1.9}\times10^{12}$, comes from the local sample
of \citet{Orellana2017}, who include H\,{\sc i} as well as \Mmol\
(from \lcoa). Their \asub\ refers to the total gas mass -- sensibly,
since their lower luminosity sample is more H\,{\sc i}-dominated than
the others we compare to -- meaning that a lower value for \asub\ is
required. The highest value, $\asub=(10.1\pm0.3)\times10^{12}$, is
from Sco16.\footnote{The original value of $\asub=6.7\times10^{12}$
quoted by Sco16 assumed that $\aco=6.5$ to calibrate the gas mass
from CO. Re-normalising the Sco16 result to the same $\aco=4.3$ as
our literature comparison increases the Sco16 value to
$\asub({\aco=4.3})=10.1\times10^{12}$.} There are two reasons
why Sco16 found a significantly higher \asub\ compared to our analysis. The first is simple
mathematics, as Sco16 quote a linear mean for a distribution that has
a significant tail to higher values; in contrast, we quote a log-mean
which is less sensitive to tails. This statistical bias results in a linear mean
for \asub\ that is 20 per cent higher than the log-mean
\citep{Behroozi2013}. Assuming the shape of our \asub\ distribution
is similar to that from Sco16, we adjust their linear mean down by 20
per cent to approximate our log-mean method. Thus our log-mean
estimate of the Sco16 value is
$\asub^{\rm LM}(\rm Sco16) = 8.4\times10^{12}$. Secondly, there have
been changes in the versions of the {\it Herschel} pipeline data used
as the basis for the local portion of the Sco16 sample (see
Appendix~\ref{notesS}). When we re-fitted the local galaxy SEDs to
estimate \mwtd\ and \lsub\ using the most recent {\it Herschel} flux
densities \citep{Chu2017,Clark2018} we found an increase in \lsub\ of
$\sim 0.1$\,dex compared to that reported in Sco16\footnote{This
difference is not just the photometry change; Sco16 used a different
method to estimate \lsub\ from the Herschel 500-\mic\ fluxes.}. Once
these factors are accounted for, the Sco16 result is comparable to
ours.
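Both adjustments to the Sco16 value are simple arithmetic, and can be sketched in a few lines of Python (the 0.26\,dex scatter is an assumed value, chosen so that a log-normal ratio distribution reproduces the quoted 20 per cent linear-mean bias; the true distribution shape is not specified here):

```python
import numpy as np

rng = np.random.default_rng(1)

# (1) Re-normalise the Sco16 value from its assumed aco = 6.5 to the
# common aco = 4.3: asub is submm luminosity per unit molecular gas
# mass, so it scales inversely with the gas mass, i.e. by aco_old/aco_new.
asub_sco16 = 6.7e12 * (6.5 / 4.3)        # -> ~10.1e12

# (2) Linear mean vs log-mean for a skewed (log-normal) ratio
# distribution: a scatter of ~0.26 dex (assumed) makes the linear mean
# ~20 per cent higher than the log-mean.
sigma_dex = 0.26
ratios = 10 ** rng.normal(np.log10(7.0e12), sigma_dex, size=200_000)
bias = ratios.mean() / 10 ** np.log10(ratios).mean()   # ~1.2

# Approximating the Sco16 linear mean by our log-mean convention:
asub_logmean = asub_sco16 / bias         # -> ~8.4e12
```

The rescaling recovers $\asub(\aco=4.3)\approx10.1\times10^{12}$, and dividing out the $\sim$20 per cent skew bias gives the log-mean estimate $\asub^{\rm LM}({\rm Sco16})\approx8.4\times10^{12}$ quoted above.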
\begin{table*}
\caption{Summary of our empirical dust continuum--\Mmol\
calibration factor, \asub, compared to literature values referenced to $\aco=4.3$ (i.e.\ Galactic
\aco, including He).}
\begin{adjustbox}{center}
\begin{tabular}{clcll}
\toprule
\asub ($\times10^{12}$) & Sample & $N_{\rm gal}$ & Notes & Reference\\
\asunit & & & & \\
\midrule
$6.4\pm0.1$ & all & 328 & log-mean opt & this work\\
$7.2\pm0.2$ & SMGs & 144 & log-mean opt & this work\\
$5.9\pm0.1$ & MS & 184 & log-mean opt & this work\\
$10.1\pm 0.3$ & local galaxies and SMGs & 72 & linear mean & \citet{Scoville2016}\\
8.3 & MS & 30 & linear mean & \citet{Scoville2016}\\
12.7 & SMGs & 30 & linear mean & \citet{Scoville2016}\\
$3.6^{+3.6}_{-1.9}$ & local galaxies & 136 & median H\,{\sc i} $+$ 4.3\lcoa & \citet{Orellana2017}\\
$6.1\pm0.14$ & $z<0.4$ 160-\mic\ selected & 41 & log-mean (ex lovales) &\citet{Hughes2017}\\
$8.4\pm1.0$ & $z=1.6$--2.9 unlensed SMGs$^{\dag}$ & 9 & log-mean & \citet{Kaasinen2019}\\
$11.6\pm1.2$ & $z=1.6$--2.9 unlensed SMGs$^{\dag}$ & 9 & linear mean & \citet{Kaasinen2019}\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{Errors quoted are the standard error on the mean, from the
variance of the \lsub/\lcoa\ ratio. Where we have the data
for \lcoa\ and \lsub, we calculate the mean log \asub\ because
the distribution of ratios is skewed in linear space \citep{Behroozi2013}, leading
to a significantly higher value for \asub\ in the linear
averaging. We also cite the linear average, scaled to $\aco=4.3$
where that is presented in the original literature
reference. $^\dag$This small sample may potentially be biased by
choosing the brightest 850-\mic\ galaxies from the parent sample.}
\label{asublitT}
\end{table*}
\subsubsection*{Atomic carbon empirical factor: \aci}
We compare our optimised \aci\ estimates with others from the literature in Table~\ref{acilitT}, and for reference we also compile the literature values for \Xci\ in Table~\ref{XCIlitT}; the two are related simply by the excitation factor \Q, as given in Eqn.~\ref{aciE}. Our values are a weighted average
of the three samples that contain \CI, where we find
$\langle\aci\rangle=17.3\pm0.3$ (standard error on the mean). This
compares well with the only truly independent measure,
$\aci=21.4^{+13}_{-8}$, from absorber systems across a range of
redshift by \citet{Heintz2020}, but is considerably higher than
reported by many literature studies, e.g.\ $\aci=4.9$--10.3 for a
large study of (U)LIRGs by \citet{Jiao2017} and
$\aci=7.3^{+6.9}_{-3.6}$ for local disks by \citet{Crocker2019}. Many of these studies assume a
fixed value for either \Xci\ or \aco\ in order to derive \aci\ (typically $\Xci=3\times 10^{-5}$ or $\aco=1$ for high-$z$ SMGs or local (U)LIRGs). These assumed values are very different from those we have derived
here under our minimal assumption that metal rich galaxies have similar dust properties. Table~\ref{acilitT} describes the assumptions made for each literature source.
The MS galaxies in this work have a value of $\aci$ of $19.1\pm0.6$, again significantly higher than that found in \citet{Crocker2019}. However, J19, using a largely overlapping sample, derived a value of $\aci=19.9\pm1.9$ using the same {\em Herschel} FTS \CI\ mapping observations.
\citeauthor{Crocker2019} use $L^{\prime}_{\rm [CO](2-1)}$ images and spatially resolved \aco\ estimates from \citet{Sandstrom2013}, which were derived using a robust method that minimises the scatter in the
gas-to-dust ratio (see also \citealp{Eales2012}). While there are no {\em a priori} assumptions about \aco\ in \citet{Crocker2019}, implicit assumptions are required for the average CO $r_{21}$ excitation. The limited sensitivity of the FTS instrument to [\CI](1--0) meant that \CI\ was primarily detected in the brighter nuclear regions, where \aco\ tends to be lower than is typical in spiral disks ($\sim 1$ compared to $\sim 3$--4) \citep[e.g.][]{Sandstrom2013} -- as noted by \citeauthor{Crocker2019}. Their measured \lci/\lcoa\ ratios in the resolved regions
are compatible with other MS galaxies in our sample (though still higher than the ratios derived for the same set of sources by J19, see
Fig.~\ref{tdcorrF}), meaning that the \aci/\aco\ ratios are also similar. As \aco\ in these regions is determined to be low in the \citet{Sandstrom2013} analysis, the \aci\ inferred by \citeauthor{Crocker2019} is correspondingly lower as well.
\citet{Jiao2021} (henceforth J21) use CO(1--0), H\,{\sc i}, [\CI](1--0),
dust continuum and metallicity maps to investigate the variation of
\aci\ and \aco\ across the disks of six well-resolved local galaxies
from their J19 study. They use the FIR/submm dust maps from {\it
Spitzer} and {\it Herschel} and the method of \citet{Draine2007} to
model the dust mass across the galaxy and relate this to a gas mass
via a relationship between dust-to-gas and metallicity
\citep{MM2009,Draine2007,Sandstrom2013}. As they explicitly use the dust mass together with a model of the \gdr\ dependence on metallicity, their results are normalised to the dust properties of
the \citet{Draine2007_kappa} model (hereafter \citetalias{Draine2007_kappa}), which assumes $\gdr=100$ and
$\kd=0.034$\,\kunit\ at solar metallicity. Their self-consistent
DGR(ii) method derives weighted mean values of
$\langle\aci\rangle=19.9\pm 1.9$ (including the information from lower
limits) and $\langle\aco\rangle=2.0\pm0.3$ ($2.6\pm0.4$) over the same
area as the \CI\ observations (the entire CO detection
region)\footnote{We have multiplied the values in J21 by 1.36 to
include He for consistency with our convention.}. In the central
region, the \aco\ values are significantly lower at
$\aco^{\rm C}=1.5\pm0.3$, while \aci\ is not found to be significantly
different with $\aci^{\rm C}=21.8\pm 0.5$. These values are comparable
to our average of $\aci=19.1\pm0.6$ for MS galaxies, and to the
average value for $Z_{\odot}$ derived independently by
\citet{Heintz2020} of $\aci(\rm
HW20)=21.4^{+13.3}_{-8.2}$\footnote{While there are differences in
the normalisation for the dust mass model chosen by J21 and
ourselves, the introduction of a metallicity dependence
for the dust-to-gas ratio by J21 means that there is no simple way to scale
their results to our method. However, we can calculate their average
`effective \kh', $\kappa_{\rm eff}=1300$, which indicates that J21
derive a lower \mol\ for a given \lsub\ compared to our
normalisation (and hence a lower value of \aci\ and \aco). However,
the six galaxies in J21 are a subset of the \CIcor\ objects, at the
low luminosity end where there are potential decreases in \kh\ and
increases in \aci.}.
\begin{table*}
\caption{Summary of our empirical \aci\ calibration compared to work from the literature; \aci\ is quoted including He.}
\begin{adjustbox}{center}
\begin{tabular}{clcll}
\toprule
\aci & Sample & $N_{\rm gal}$ & Notes & Reference\\
\aunit & & & & \\
\midrule
$17.0\pm0.3$ & \Lhi & & weighted average & this work\\
$19.1\pm0.6$ & MS & & weighted average & this work\\
$16.2\pm0.4$ & SMGs & & weighted average & this work\\
$10.3\pm0.3$ & (U)LIRGs & 71 & assuming $\Xci=3\times10^{-5}$ & \citet{Jiao2017} \\
$4.9\pm0.3$ & (U)LIRGs & 71 & CO(1--0) with $\aco=1.1$ & \citet{Jiao2017} \\
$7.3^{+6.9}_{-3.6}$ & resolved local disks & 18 & CO(2--1) and resolved $\aco$ from S13 & \citet{Crocker2019} \\
$19.9\pm1.9$ & resolved local disks & 6 & H\,{\sc i}, CO(1--0), \CI, dust, $Z$ ($\kappa_{\rm eff}\sim 1300$) & \citet{Jiao2021} \\
$16.2\pm7.9$ & $z=3$ lensed SMGs & 16 & multi-$J$ CO, \CI, dust modelling & \citet{Harrington2021}\\
$21.4^{+13.3}_{-8.2}$ & GRB/QSO absorbers & 19 & $H_2$ and \CI\ absorption lines at $\rm{Z_{\odot}}$ & \citet{Heintz2020}\\
17.6 & theory & & for $\zcr=5\times10^{-17}\rm{s^{-1}}$ & \citet{Offner2014}\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{The values from this work are the weighted averages of the
results from each of the three sub-groups containing \CI\
information.}
\label{acilitT}
\end{table*}
\begin{table*}
\caption{Summary of our \Xci\ calibrations compared to other work in the literature.}
\begin{adjustbox}{center}
\begin{tabular}{cccccc}
\toprule
\Xci ($\times 10^{-5}$) & Sample & $N_{\rm gal}$ & Notes & Reference\\
\midrule
$1.6^{+0.5}_{-0.4}$ & $z=0$--5 \Lhi & 90 & \lsub, CO, \CI\ with $\kh^{\rm N}=1884$\,\khunit & this work\\
$2.5\pm1.0$ & local SF & 11 & CO(1--0) and $\aco=1$ & \citet{Jiao2019}\\
$1.3\pm $ & local SF & 9 & CO(1--0) and \aco\ from S13 & \citet{Jiao2019}\\
$1.6\pm0.7$ & $z\sim1.2$ MS & 11 & CO(2--1) and \aco(Z) ($\langle\aco\rangle=3$) &\citet{Valentino2018}$^\dag$\\
$2.0\pm0.5$ & $z\sim1.2$ MS & 11 & dust and \gdr(Z) ($\langle\gdr\rangle=134$) &\citet{Valentino2018}$^\dag$\\
$3.9\pm0.4$ & $z=2-3$ SMGs & 14 & CO(4--3), CO(1--0) and $\aco=1$ & \citet{AZ2013}$^\dag$\\
$8.4\pm3.5$ & SMGs/QSOs & 10 & CO(3--2) and $\aco=0.8$ & \citet{Walter2011}\\
$8.3\pm3.0$ & local (U)LIRGS & 23 & CO(1--0) and $\aco=0.8$ & \citet{Jiao2017,Jiao2019}$^\dag$\\
$0.9\pm0.3$ & $z=1$ ISM selected & 2 & CO(2--1), \CI\ and $\aco=2.6$ & \citet{Boogaard2020}\\
$2.0\pm0.4$ & $z=1$ ISM selected & 3 & 1.2\,mm, \CI\ and $\asub=6.7\times10^{12}$ from Sco16 & \citet{Boogaard2020}\\
$^{\ast}1.6^{+1.3}_{-0.7}$ & $z=2$--4 GRB/QSO absorbers & 19 & \mol\ and $\rm{C^0}$ absorption lines for $\rm{Z_{\odot}}$ & \citet{Heintz2020}\\
$^{\ast}7^{+7}_{-3.5}$ & NGC\,7469 (CND) & 1 & AGN, dynamical mass, \lci\ and \lcoa & \citet{Izumi2020}\\
$^{\ast}$1.4--5 & NGC\,6240 & 1 & \aco\ from CO SLED, high-density tracers & \citet{Cicone2018} \\
\multicolumn{3}{c}{}& and two-phase LVG modelling & \citet{PPP6240}\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{$^{\ast}$ indicates estimates of \Xci\ independent
of assumptions for \aco\ or \kh. $^\dag$ indicates that this
sample forms part of the literature sample we have used,
although we have calibrated \Xci\ using the submm luminosity
and an average normalisation of $\kh^{\rm N}=1884$\,\khunit
($\gdr=135$ for $\kd=0.071\kunit$) for the sample, rather
than \lcoa\ and a fixed \aco. A breakdown of our
results by intensity of star formation can be found in
Table~\ref{hyp2T}.}
\label{XCIlitT}
\end{table*}
\subsubsection*{CO empirical factor: \aco}
We compare our optimised \aco\ estimates with others from the literature in Table~\ref{acolitT}.
Sophisticated LVG modelling with very large datasets which include
high-density gas tracers, optically thin CO isotopologues, full CO
SLEDs, and sometimes the \CI\ lines and dust emission
\citep[e.g.][]{Weiss2007,PPP2012xco,PPP6240,Israel2020,Harrington2021}
can break some of the model degeneracies of the optically thick CO
lines, though the method is still reliant on assumptions for
[CO/\mol], isotopologue ratios, the number of components allowed
(single components give very different results to multiple components)
and the allowed range of velocity gradients in the models.
The best examples are NGC\,6240 \citep{PPP6240} and the {\it Planck}
lensed galaxies \citep{Harrington2021} where detailed LVG modelling
and comprehensive datasets have sufficient constraints to break the
degeneracies which usually bedevil this method. The two-component LVG
result for NGC\,6240 is $\aco=2$--4 \citep{PPP6240} (cf.\ \aco=0.6
when using a single-component LVG model; \citealp{PPP2012xco}) and we
can further use the ratio of $\lci/\lcoa$ measured by
\citet{Cicone2018} and our relationship,
$\lci/\lcoa=\aco/\aci = 3324\,\aco\Xci$, to infer that
$\Xci=1.4$--$2.9\times10^{-5}$ in the starburst region. In fact, our
optimised values for this galaxy using global fluxes are
$\aco({\rm daX}) = 2.9\pm0.6$,
$\Xci({\rm daX})=(2.4\pm0.5) \times 10^{-5}$,
$\kh({\rm daX})=2800\pm700$ ($\gdr=200$), in excellent agreement. The
{\it Planck} lensed galaxies analysed by \citet{Harrington2021} do not
have the same degeneracy-breaking lines used by \citet{PPP6240} in
their analysis, but they do have multi-$J$ CO coverage and incorporate
the \CI\ lines and the dust continuum emission in their model fitting,
based on \citet{Weiss2007}. They assume dust parameters similar to
ours for their normalisations ($\gdr=120$--150 with
$\kd=0.08$\,\kunit). With this, they infer an average $\aco=3$--4 and
an average $\aci=16.2\pm7.9$ (incl.\ He), remarkably consistent with
our results, given our very simple approach.
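The $\Xci$ inference for NGC\,6240 is a one-line inversion of the relation above; a minimal Python sketch (the global $\lci/\lcoa=0.19$ is an assumed, illustrative value standing in for the \citet{Cicone2018} measurement, chosen to reproduce the quoted range):

```python
# Invert  lci/lcoa = aco/aci = 3324 * aco * Xci  for Xci.
def xci_from_ratio(ratio, aco):
    """[C0]/[H2] abundance implied by an observed L'_CI/L'_CO ratio
    and an assumed CO conversion factor aco (He included)."""
    return ratio / (3324.0 * aco)

ratio_6240 = 0.19  # assumed global L'_CI/L'_CO for NGC 6240 (illustrative)

# The two-component LVG range aco = 2--4 then brackets Xci:
xci_hi = xci_from_ratio(ratio_6240, 2.0)   # ~2.9e-5
xci_lo = xci_from_ratio(ratio_6240, 4.0)   # ~1.4e-5
```

With this ratio, the LVG range $\aco=2$--4 brackets $\Xci\approx1.4$--$2.9\times10^{-5}$, consistent with the optimised daX value of $2.4\times10^{-5}$.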
\begin{table*}
\caption{Summary of our empirical \aco\ calibrations compared to work in the literature; \aco\ is quoted including a factor of 1.36 for He.}
\begin{adjustbox}{center}
\begin{tabular}{cccccc}
\toprule
\aco & Sample & $N_{\rm gal}$ & Notes & Reference\\
\aunit & & & & \\
\midrule
$3.6^{+1.3}_{-1.0}$ & $z=0$--5, \Lhi & 90 & \lsub, CO, \CI\ with $\kh^{\rm N}=1884$\,\khunit & this work \\
\addlinespace[1pt]
$4.2^{+1.8}_{-1.1}$ & $z=0$--5, \Lhi & 240 & \lsub, CO with $\kh^{\rm N}=1884\,\khunit$ & this work\\
\addlinespace[1pt]
$4.8^{+1.3}_{-1.1}$ & local MS, \Llo & 88 & \lsub, CO with $\kh^{\rm N}=1884$\,\khunit & this work\\
\addlinespace[1pt]
$^{\ast}3.1^{+3.1}_{-1.5}$ & local disks & 26 & CO(2--1), $r_{21}=0.7$, H\,{\sc i}, dust & \citet{Sandstrom2013}\\
$^{\ast}4.2$ (3.5--5.4) & MW large scale & & $\gamma$-ray various & \citet{Remy2017}\\
$^{\ast}3.4\pm2.1$ & Planck lensed SMGs & 24 & LVG: multi-$J$ CO, \CI\ and dust & \citet{Harrington2021}\\
$^{\ast}4.1^{+4}_{-2}$ & NGC\,7469 (CND) & 1 & AGN, dynamical mass, \lcoa & \citet{Izumi2020}\\
$^{\ast}2-4$ & NGC\,6240 & 1 & LVG: multi-$J$ CO, dense gas tracers & \citet{PPP6240}\\
$^{\ast}3.8^{+1.0}_{-0.7}$ & $z=0$--5 & 22 & CO, \CI, Z and absorber based \aci & \citet{Heintz2020}\\
\addlinespace[1pt]
$^{\ast}4.4^{+2.0}_{-1.4}$ & local galaxies & 24 & C\,{\sc ii}, CO(1--0) and modelling at $\rm{Z_{\odot}}$ & \citet{Accurso2017} \\
$^{\ast}0.6\pm0.2$ & (U)LIRGs & 28 & LVG:
Single component, multi-$J$ CO & \citet{PPP2012xco} \\
$^{\ast}2-6$ & (U)LIRGs & 28 & LVG: Two-comp, free d$V$/d$R$, dense-gas tracers& \citet{PPP2012xco}\\
$^{\ast}3.9\pm1.1 $ & local disks & 9 & CO(1--0), H\,{\sc i}, dust & \citet{Eales2012} \\
$1.8\pm0.5$ & MW local clouds & 6 & H\,{\sc i}, CO(1--0), $\gamma$-ray & \citet{Remy2017} \\
$2.9\pm0.5$ & MW local clouds & 6 & H\,{\sc i}, CO(1--0), 850-\mic\ dust & \citet{Remy2017} \\
2.9 & Taurus & 1 & H\,{\sc i}, CO(1--0) and extinction/reddening & \citet{Chen2015} \\
$2.4\pm0.4$ & local disks & 7 & CO(1--0), H\,{\sc i}, dust & \citet{Cormier2018} \\
$1.9\pm0.3$ & resolved local disks & 6 & H\,{\sc i}, CO(1--0), [\CI](1--0), dust, $Z$, $\kappa_{\rm eff}\sim 1300$ & \citet{Jiao2021}\\
$3.2\pm1.0$ & $z=4$ lensed SMGs & 9 & CO(2--1), [\CI](1--0) $\Xci=3\times10^{-5}$ & \citet{Bothwell2017}\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{Errors are 1$\sigma$ standard deviations (or 16--84
percentiles). A breakdown of our results by intensity of
star formation can be found in Table~\ref{hyp2T}. $^{\ast}$
indicates estimates which do not rely on assumptions for
\Xci\ or \kh.}
\label{acolitT}
\end{table*}
\subsection{Lack of bi-modality in the conversion factors}
\label{calCOS}
Our sample contains normal star forming galaxies -- those obeying the SFR--$M_\star$
correlation that forms as a result of the more intimate relationship
between SFR and H$_2$ -- as well as many extreme star-forming systems, which belong to the (U)LIRG and high-$z$ submillimetre-selected samples. Here we remind the reader that we refer to the extreme SF group -- those that supposedly require a
lower \aco\ -- as `SMGs', and the normal star forming sources as `MS galaxies', or
sometimes just `MS'. As mentioned in Section~\ref{obsS}, the assignment of the galaxies to either category is, by the nature of the data, rather `fuzzy', as we do not have a measure of SFR or stellar mass for all sources, nor any homogeneous way to estimate them. We thus rely on the categories used by previous authors where possible, especially for high-$z$ sources. The $z=1$ galaxies from the samples of \cite{Bourne2019,Valentino2018,Valentino2020} are deemed to be `MS', as are the sources from ASPECS \citep{Boogaard2020}. Most low-redshift sources with $l_{\rm IR}<12$ are classed as `MS', though there are some exceptional LIRG-class sources in the local Universe which have extreme properties, as evidenced by their FIR and MIR lines and vibrational HCN \citep{DiazSantos2017,Falstad2021}.
We note that using a more conservative separation when assigning galaxies into MS and extreme starburst categories does not change any of the results. We therefore conclude that while our assignment of sources into the two SF categories is not perfect, this categorisation is not capable of masking any strong bi-modality in the observable ratios.
Fig.~\ref{arghF} and Table~\ref{hyp2T} detail the distributions of
conversion factors for each sample, split into MS galaxies and
SMGs. While formally there are significant differences in the
parameters for some samples, these are very small -- around 10--20 per
cent in the mean, rather than the factor $\sim 3$--$4\times$ often assumed for
\aco\ \citep[e.g.][\aco=0.8, derived for four ULIRGs]{Downes1998}. In fact, only the ad sample
shows any difference in \aco\ between the MS galaxies and SMGs, while
the estimates based on \CI\ and CO, or on all three tracers, show no
significant difference. This is partially explained by the larger
luminosity range in the ad sample, combined with the previously noted
negative correlation between \aco\ and luminosity (Figs~\ref{calaodF}
and \ref{empLF}), with a factor $\sim 2\times$ reduction in \aco\ for
a factor $\sim 100\times$ increase in \lcoa. We cannot rule out that
the correlation of \aco\ with luminosity is the true reason that the
ad sample shows a significant difference between MS galaxies and
SMGs\footnote{$\Lir({\rm ad,MS})=10.95$
($\lcoa({\rm ad,MS})=9.31$) while
$\Lir({\rm Xa,MS})=11.22$ ($\lcoa({\rm Xa,MS})=9.52$). Using the relation in Table~\ref{calfitsT}, the
expected $\Delta \aco=\aco({\rm MS}) - \aco({\rm SMG})=0.36$ for the
Xa sample and -- due to the lower numbers in the Xa and daX samples
-- such a difference would not be detected at a significant level,
if it existed.}.
This is not the first time\footnote{However our current dataset is
more homogeneous, using only CO(1--0) or CO(2--1) and a consistent
approach to modelling the dust with our empirical relations for
\mwtd.} that a lack of bi-modality in \aco\ has been reported when
compared to dust-based determinations
\citep[e.g.][]{Magdis2012,Rowlands2014,Genzel2015}. The range of
\aco\ we find for SMGs (see Fig.~\ref{calaodF}) is well within the
framework set out by \citet{Papadopoulos2012}, who noted that galaxies
with a highly turbulent ISM (e.g. ULIRGs and SMGs) can have
\aco\ similar to galaxies with a much more quiescent ISM, the only
difference being that in a turbulent ISM, the distribution of gas mass
as a function of density is weighted to higher densities
than in a less-turbulent ISM.
Recent joint SLED/SED modelling of an
exquisite dataset that includes CO, \CI\ and dust continuum for lensed
SMGs \citep{Harrington2021} finds a mean $\aco =3.4$--4.2 for these highly
turbulent galaxies (albeit with a large dispersion). The \citeauthor{Harrington2021} radiative transfer models
employ a continuum distribution of molecular gas mass as a function
of average Mach number (and average density of the molecular cloud ensemble), making them better equipped to `capture' any re-distribution of
the underlying molecular gas mass towards higher densities.
While important for other
issues (e.g. the initial conditions of star formation in
SMGs/ULIRGs), such a re-distribution in a highly turbulent ISM may actually leave \aco\ statistically unaffected. The initial reports of a bimodal \aco\ factor in
the local Universe, with $\sim$4--5$\times$ lower values for ULIRGs
than LIRGs and ordinary spirals, can possibly be explained by a CO-luminous, strongly unbound,
low-density molecular gas component found preferentially in ULIRGs. Such a component can dominate the global CO(1--0) line luminosities of
ULIRGs/SMGs (even if containing only small fractions of their total
molecular gas), while its large $\rm K_{vir}$ values will yield systematically low \aco\ factors, under
one-component LVG modelling (Equation 9).\footnote{Also we must consider the size of the original sample -- four ULIRGs in the first study by
\citet{Downes1998}.}
For individual galaxies, only multi-component models of SLED/SED (that
also include molecules/transitions tracing the dense gas) can properly
account for this effect \citep[e.g.][]{PPP6240,Harrington2021}, while for large galaxy samples, our
cross-calibration of \aco\ against the other two molecular gas mass
tracers, is the most economical method. In that regard it is worth
noting that {\it dust continuum is immune to the gas-dynamics effects
described above,} i.e. a diffuse low-density, unbound, $\rm H_2$
gas component will contribute very little to the total dust continuum
if its gas/dust mass is indeed low. The optically thin \CI\ line
emission will also be much less sensitive than CO(1--0) to such
gas-dynamics effects exactly because of its low optical depths. These
are perhaps the reasons why our cross-calibration of \aco\ against
dust and \CI\ emission has not uncovered any obvious bimodality of its values in
MS galaxies compared to SMGs.
The range of values we find for \aco\ is consistent with expected values
for $Z>0.5\,Z_{\odot}$ galaxies (\citealt{Accurso2017}, based on calibrating \aco\ using C\,{\sc ii}). Using their predictions, we would expect
$2.7<\aco<15.2$ for the likely range of metallicity and offset from the MS in our sample.
Our underlying assumption, that the dust--gas properties of MS galaxies and SMGs can be described as a uni-modal distribution with a well-defined mean and scatter, is based on our finding that the luminosity ratios (Fig.~\ref{LhistsF}) -- the most basic observables used in deriving the empirical conversion factors -- have such a distribution. They show no evidence for the strong bi-modality advocated for \aco\ in some of the literature. That the distribution of the observed luminosity ratios is, to first order, similar to the distribution of the conversion parameters is the simplest `Occam's Razor' assumption we can make.
To see what a different initial assumption would mean for the conversion factors, we repeated our analysis, this time
inserting the popular bi-modal behaviour in \aco\ \citep{Greve2005,Weiss2005,
Tacconi2006,Tacconi2008,Genzel2010,Walter2011,AZ2013,Jiao2017,Valentino2018}
as our prior, such that the sample mean normalisations for SMGs and
MS galaxies are set to be different: $\aco^{\rm N}(\rm SMG)= 0.8$ and
$\kh^{\rm N}(\rm MS)=1884$\,\khunit. Our optimal method then allows
the data to return the most likely values for the other
parameters\footnote{We did not re-calculate the intrinsic scatter
split into MS galaxies and SMGs, only the values of the \Xci/\aco\
and \aci/\kh\ pairs.} under these assumptions.
Fig.~\ref{smgF} shows the results with this bi-modal normalisation,
(blue points: MS galaxies, red points: SMGs). By design, we have
reproduced the extreme bi-modality $\aco(\rm MS)\sim 3$--4 and
$\aco(\rm SMG)\sim 1$, but Fig.~\ref{smgF} clearly shows that the
same extreme bi-modality has to be present in \kh\ (\gdr) and \Xci,
giving a clear prediction that $\Xci(\rm SMG)\geq 4\times 10^{-5}$ if
the bimodality in \aco\ really exists, with essentially no overlap in
\Xci\ between the MS galaxies and SMGs. To test this will require an
independent determination of \Xci\ in SMGs, without reference to dust
or CO calibration. To date, there is no such determination of \Xci, although \citet{Izumi2020} observed the nearby LIRG NGC~7469 with ALMA, using kinematic data to derive $\rm{M_{dyn}}$, which is the sum of \Mh, stellar mass and dark matter. This method has promise, but the systematic uncertainties in \Mh\ from this analysis are too large (0.3\,dex) to answer our question. While the \citeauthor{Izumi2020} study clearly indicates\footnote{via the extremely high observable ratio $\lci/\lcoa=0.92$ in the CND} that the [$\rm{C^0}/CO$]
abundance can be enhanced in extreme environments, the CND is only a tiny region and
the {\em global ratio} for this source is very similar to other LIRGs, with $\lci/\lcoa = 0.20\pm0.04$. Any study which wishes to test the bi-modality hypothesis must also be representative of the galaxy global properties.
Here we must stress again that for individual galaxies, joint
SLED/SED radiative transfer models of well-sampled SLEDs and dust
emission SEDs do recover Galactic-valued \aco\ factors even in
(U)LIRGs or SMGs \citep{PPP6240,Harrington2021}. However,
such results cannot be used in a statistical sense, i.e.\ as typical
of the respective galaxy populations, and our
statistical approach remains the sole avenue.
Indeed, the only way that a bi-modal \aco\ for MS galaxies and SMGs can be
reconciled with \Mh\ estimates using dust or \CI\ is to impose
the same bi-modality on their conversion parameters (\kh, \Xci, \asub, \aci). A reduction of
\aco\ by a factor $3\times$ necessitates a decrease [increase] in
\kh\ [\Xci] by the same factor. Thus, if $\aco=0.8$ is preferred for
extreme star-forming galaxies \citep[e.g.][]{Walter2011}, then
$\kh=600$ ($\gdr=43$ for $\kd=0.071$\,\kunit) and
$\Xci=5.3\times10^{-5}$ must also be adopted (statistically, for this
galaxy population). This discrepancy was previously noted by
\citet{Bothwell2017} and \citet{Valentino2018} who found that using
$\Xci(\rm SMG)=3\times10^{-5}$ with \lci\ as a tracer resulted in
larger gas masses than using \lcoa\ with the `ULIRG' value of
$\aco=0.8$. {\em Therefore, the popular `choices' of $\aco=0.8$ and $\Xci=3\times10^{-5}$ are incompatible with each other.}
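Because only ratios and products of the conversion factors are fixed by the observables, adopting the `ULIRG' \aco\ forces the other factors to shift by the same amount. A minimal sketch of this bookkeeping (the rescaling factor $s=3.3$ is our reading of the factor-$\sim$3 reduction discussed above, and $\gdr=\kh\,\kd$ follows from the normalisations quoted in the text; small differences from the quoted $\kh=600$ reflect rounding):

```python
# Only ratios and products of the conversion factors are constrained by
# the observables, so rescaling one factor forces the others to follow.
kd = 0.071                        # dust emissivity kappa_d (from the text)
kh_ref, xci_ref = 1884.0, 1.6e-5  # reference normalisation (He included)

def rescale(kh, xci, s):
    """Reduce aco by a factor s: the fixed observable ratios then
    require kh to drop and Xci to rise by the same factor s."""
    return kh / s, xci * s

s = 3.3                           # assumed ~3x aco reduction (aco -> 0.8)
kh_ulirg, xci_ulirg = rescale(kh_ref, xci_ref, s)
gdr_ulirg = kh_ulirg * kd         # gdr = kh * kd for this normalisation
```

With $s=3.3$ this returns $\kh\approx570$ ($\gdr\approx41$) and $\Xci\approx5.3\times10^{-5}$, matching the values quoted above to within rounding.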
Based on our current understanding, there are two plausible physical mechanisms which may cause an increase in \Xci\ and a decrease in \kh\ in extreme ISM conditions. Enhanced cosmic ray densities affect carbon chemistry \citep{Bisbas2015,Bisbas2021,Glover2016,Gong2020} and favour a higher $\rm [C^0/CO]$ abundance; however, this mechanism is density dependent and is less effective in dense regions, which typify the ISM of SMGs. Thus, while extreme environments with elevated cosmic rays or X-rays would certainly act to increase \Xci\ at a fixed density, it does not simply follow that extreme SF activity will produce high \Xci, since such regions (CRDRs/XDRs) typically coincide with regions of increased density.
The higher dense gas fractions common in SMGs may favour higher rates of grain growth, or mantling, both of which would reduce the value of \kh\ -- i) by decreasing the \gdr\ and ii) by increasing
the dust emissivity, \kd. Our results imply, however, that any such changes must act in harmony with each other so as to maintain the same observable ratios, so an increase in
\Xci\ must correlate directly with a decrease in \kh\ and \aco. This
prediction is a clear challenge to models, and full astro-chemical simulations for the extreme physical conditions expected in the ISM of SMGs and ULIRGs will be needed to explore how the three tracers can vary in the exact same way through very different physical mechanisms.
\subsection{On the robustness of our choices}
\subsubsection{Impact of uncorrected $\fhi>1$ galaxies}
Statistically, the effect of having uncorrected $\fhi>1$ galaxies is
small, since there are only 15 such galaxies, where
$\langle\aco\rangle$ increases\footnote{Note that this offset is
linear, not logarithmic.} by +0.27 in their luminosity bin compared
to when they are removed altogether (by +0.18 compared to when they
have been corrected). We are thus confident that dust associated with
\HI\ is not biasing our overall determination of conversion factors
and their trends, at least in this sample. For individual galaxies,
however, the difference in \aco\ can be very large. When using this
method for low-redshift galaxies with significant \HI\ within the
dust-emitting region, corrections are needed.
\subsubsection{Impact of using a constant \mwtd}
\label{fixTS}
The strong correlation found between \mwtd\ and luminosity
(Fig.~\ref{mwtdzF}) has not been considered in previous works. In
Appendix~\ref{QTS}, we show a comparison of results using our
empirical \mwtd\ relations to those for a constant \mwtd=25\,{\sc
k}. Summarising these findings:
\begin{enumerate}
\item{The median offsets between parameters when using constant vs.\
variable \mwtd\ are $<0.015$\,dex. The scatter in a parameter is
generally within 0.1\,dex (Fig.~\ref{delcalqu_histF}). Thus the
global averages we present in this paper are not affected by a
change to constant \mwtd=25\,{\sc k}.}
\item{Allowing \mwtd\ to vary with \Lir\ is more realistic and leads
to a shallow but significant trend with luminosity, such that
\aco\ decreases with increasing \Lir, while \kh, \asub\ and \Xci\
increase slightly. Using a constant \mwtd\ of 25\,{\sc k} produces
no trends of any conversion factor with \Lir\
(Fig.~\ref{fixQT_lirF}).}
\item{Using a constant \mwtd\ of 25\,{\sc k} results in gas masses up
to 0.1\,dex lower at \Llo\ and 0.1\,dex higher at $l_{\rm IR}<12.0$
compared to the variable \mwtd\ used in the main analysis
(Fig.~\ref{delMhQT_lirF}).}
\end{enumerate}
\section{Discussion}
\label{DiscS}
The diverse galaxies in this study show a remarkable consistency in
their gas mass tracers, with linear relationships between all three
pairs of observables, \lsub, \lci\ and \lcoa.
We find weak trends in the conversion factors with \Lir: \aco\ and
\aci\ decrease, while \asub, \kh\ and \Xci\ increase. These trends are very shallow, amounting to a factor $<2\times$ change
over 2--3 orders of magnitude in luminosity. The intrinsic variation
in \kh\ and \Xci\ (the physical quantities encompassing most of the
uncertainties in the corresponding conversion factors) is likely very small,
and approximating them with a single constant value should be robust.
For the sub-samples with a \CI\ tracer (daX, Xa, Xd), we see decreases in all three tracer conversion factors at \Llo: \Xci\ (15--25 per cent), \aco\ (10--30 per cent) and
\kh\ (\gdr: 20--25 per cent). However, the data indicating a drop in conversion factors originate from the J19 sample (see earlier discussions). More
\CI\ studies of normal star-forming galaxies in the local Universe are
urgently required to further explore any such trend, in particular using the global
\CI\ line emission, rather than that of the central few kpc of a
galaxy.
The average values of \Xci, \aco\ and \kh\ (\gdr) for galaxies with
all three tracers (the daX sample) and \Lhi\ are our `reference'
values, $\acoR =3.7$ (including He), $\XciR = 1.6\times10^{-5}$, $\kh^{\rm R} =
1990$ ($\gdrR=141$). These agree within the errors with the mean
values determined using only two tracers. These reference values are
{\em not unique} because only the ratios and products of the conversion
factors are constrained by the observables, \lci/\lcoa,
\lci/\lsub\ and \lsub/\lcoa.
Once a conversion factor is known or assumed, however, the others can
be determined by the self-consistent ratios listed in
Table~\ref{methodT}. For example, using the ad sub-sample and normalising to $\kh^{\rm
  N}=2800$\,\khunit\ would produce $\Xci=1.1\times10^{-5}$ and
$\aco=4.7$, in reasonable agreement with \citet{Accurso2017} for
$Z=0.6\,Z_{\odot}$, while normalising to $\aco^{\rm N}=0.8$ gives
$\Xci=5.8\times 10^{-5}$ and $\gdr=34$. While the data are consistent
with both of these possibilities, or any other combination of the
above ratios, we must caution that the low values of \aco\ often
recovered from CO-only methods (and after modelling only a few low-J CO
lines) may be an artifact of well-known gas-dynamics effects, which are
expected to have very little impact on the global \CI\ line emission and
none whatsoever on the corresponding dust continuum.
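To make the degeneracy concrete, the renormalisation described above can be sketched in a few lines of code. This is an illustrative sketch only: the reference values are those quoted for the daX sample, and the proportional (\aco) and inverse (\Xci) scalings are the ones stated in our Conclusions, so it will not reproduce the ad sub-sample numbers exactly.

```python
# Only ratios and products of the conversion factors are constrained by
# the observed luminosity ratios, so fixing one factor fixes the others.
# Illustrative sketch using the daX-sample reference values from the text.
KH_REF, XCI_REF, ACO_REF = 1990.0, 1.6e-5, 3.7

def renormalise(kh_new):
    """Rescale alpha_CO and X_CI for a new dust normalisation kh_new.

    Assumes alpha_CO scales proportionally with kh and X_CI inversely,
    so that the observable luminosity ratios are left unchanged.
    """
    scale = kh_new / KH_REF
    return ACO_REF * scale, XCI_REF / scale

aco, xci = renormalise(2800.0)   # e.g. a higher dust normalisation
print(f"aco = {aco:.2f}, Xci = {xci:.2e}")
```
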
For galaxies at \Llo\ we can only use the ad sample (CO and dust
continuum) because of the uncertainties surrounding the \CIcor\
galaxies. For the 88 galaxies at \Llo\ with CO and dust
measurements, $\aco = 4.8^{+1.4}_{-1.1}$ (including He), with
$\langle\kh\rangle=1718$\,\khunit (\gdr=122) but note that this is
still a sample of massive and metal-rich galaxies, just at lower $\log L_{\rm IR}\sim 9$--$11$.
This study is not applicable to low mass metal-poor
galaxies.
\section{Conclusions}
We have cross-calibrated the three mainstays of molecular gas
measurements in extra-galactic astronomy: $^{12}$CO(1--0), \CIfull\
and submm continuum emission from dust. This analysis uses galaxy
samples spanning $0<z<5$ and more than four orders of magnitude in
\Lir. All the galaxies are metal rich and/or massive, to remove the
need for large corrections for metallicity effects.
\begin{itemize}
\item{We present a new method of optimising gas mass estimation when
multiple tracers are observed, making use of the intrinsic scatter
in all three pairs of gas tracers. We demonstrate its
effectiveness compared to the simpler method used previously in
the literature, and give examples and prescriptions for its use.}
\item{In a purely empirical analysis, we show that \lci\ is the
molecular gas tracer with the least intrinsic scatter,
particularly at \Lhi. In such galaxies, \lci\ should be
the preferred tracer, all other considerations being equal.}
\item{Using our optimised method, we determine the mean empirical
conversion factors for \Mmol\ (including He). For \Lhi\
these are: ${\aci^{\rm R}=\acir\pm\eacir}$,
$\asub^{\rm R}=(\asubr\pm\easubr)\times10^{12}$,
$\aco^{\rm R}=\acorm\pm\eacorm$, with a scatter of 0.11--0.15\,dex. These values are for an overall normalisation set to the average dust properties of local galaxies and diffuse dust in the Milky Way ($\kh = \gdr/\kd = 1884$\,\khunit). A change in this choice of normalisation will affect \aco\ and \aci\ proportionally, and \asub\ inversely}. Our reference conversion values
can be applied to any metal-rich galaxy with $Z>0.5\,Z_{\odot}$ in
the range $0<z<6$.
\item{Using the same method we determine the principal mean
physical parameters on which these conversion values depend. For galaxies at \Lhi:
${\Xci^{\rm R}=\xcir\times10^{-5}}$, $\kh^{\rm R}=\khr\, (\gdr^{\rm R}=141)$.}
\item{The relationships between the observables, \lsub, \lcoa\ and \lci\ are consistent with being linear and the ratios of these
observables do not show a strong dependence on IR luminosity, dust
temperature, redshift or the intensity of star formation.}
\item{The ratio of \lci/\lcoa\ is marginally (3$\sigma$) different for
MS galaxies and SMGs, with the latter having higher \lci/\lcoa,
broadly consistent with expectations from astro-chemical cloud
models that include enhanced cosmic rays.}
\item{We find $\Q=0.48$ to be a reasonable choice for the excitation
function (required to convert \lci\ to $M_{\rm C\,I}$), based on
recent analysis showing that \Q\ has a super-thermal behaviour in
non-LTE conditions \citep{PPPDunne2022}. For a range of plausible
galaxy ISM density and gas temperatures, the 99th percentile
confidence interval on this value is $\pm 16$ per cent.}
\item{We present empirical relations for the mass-weighted dust
temperature, \mwtd, to allow observers to better estimate their
dust calibration factors. We find a significant trend, where
\mwtd\ increases with \Lir. The median \mwtd\ for SMGs at
$z\sim 2.5$ is $\mwtd^{\rm SMG}=\tmwSMG\pm\etmwSMG$\,{\sc k},
while for MS galaxies $\mwtd^{\rm \ms}=\tmwMS\pm \etmwMS$\,{\sc
k}.}
\item{We find a weak trend for \kh\ and \Xci\ to increase with \Lir,
and a similar trend for \aco\ to decrease. The empirical
conversion factors (\aco, \aci\ and \asub) also show a shallow
but significant correlation with their tracer luminosities. These
trends are not apparent if a constant \mwtd\ is adopted. They are
therefore driven by the change in \mwtd\ with luminosity.}
\item{Using an Occam's Razor assumption that metal-rich galaxies have similar dust emissivity per unit gas mass, we find no evidence for the factor 3--4 \aco\ bi-modality between SMGs and MS galaxies often adopted in the literature. The shallow trends we do find reflect the common assumption that extreme SF systems have lower \aco\ and higher \Xci, albeit at a far more subtle level, with only a $\sim 15$\,per\,cent difference in the sample mean \asub\ (higher), \aci\ (lower) and \aco\ (lower) for extreme star-forming galaxies versus `normal' MS star-formers.}
\item{With the Occam's Razor assumption, we also find no evidence to support the extremely high global estimates of
$\Xci$ ($\sim 6\times 10^{-5}$) reported in some literature for
ULIRGs/SMGs -- the high reported values are a consequence of assuming a low $\aco\sim 1$. High \Xci\ values may be expected, and indeed have been measured, in small ($<500$\,pc) regions such as M82 (nuclear starbursts) and XDR regions around AGN, but the extent to which a global estimate would be enhanced depends on the dominance of that extreme environment in the galaxy's \mol\ reservoir.}
\item{One can, however, still postulate a different prior for the normalisation assumption and impose the popular bimodality in \aco. The constancy of the measured tracer luminosity ratios then forces the conversion factors for the other two tracers (dust and \CI) to become bi-modal in the same way.}
\end{itemize}
We conclude by noting that lacking a direct \Mh\ measurement method
(i.e. via the $\rm H_2$ lines themselves), one must assume a
normalisation for one of the sample mean conversion factors in statistical studies like ours. In the present study we choose to benchmark to the dust emission, with $\kh^{\rm N}=1884$\,\khunit. Other
normalisation choices can of course be made, but currently dust emission is the simplest and best understood tracer, and has the advantage of being totally insensitive to the
gas-dynamic effects that affect the \aco\ conversion factor (e.g. unbound molecular gas components in the winds that exist in actively star-forming galaxies; winds which can be CO-bright
while carrying little mass).
The [\CI](1--0) line emission will also be largely unaffected by these
gas-dynamics effects, and as such the corresponding conversion factor, \aci,
shows promise as a good benchmark, borne out by the empirical finding that it has the least intrinsic scatter of the three tracers. With
more extensive observational and theoretical studies of \CI\ line emission (particularly in galaxies of lower IR luminosity), the limits of its usefulness as a gas tracer can be determined.
\section*{Data Availability}
Data tables based on the samples used in this paper are available via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5), or via \url{http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/MNRAS/}. The datasets were derived from sources in the public domain, which are listed in Table~\ref{SampleT}.
\section*{Acknowledgments}
The authors thank the referee for their careful reading and insightful comments on the original version of the paper.
LD thanks P.~Clark, S.~Glover, Q.~Jiao and T.~Bisbas for helpful discussions. LD, SJM
and HLG acknowledge support from the European Research Council
Consolidator grant, Cosmicdust.
This paper makes use of the following software available publicly from
github: corner.py, emcee.py \citep{DFM2013,corner}.
\bibliographystyle{mnras/mnras}
\bibliography{masterbib}
\bsp
\appendix
\section{Notes on the literature fluxes}
\label{notesS}
In order to produce a homogeneous and up-to-date set of fluxes, we have
applied the following corrections.
\paragraph*{Corrections to previously published work:}
\begin{enumerate}
\item{Since Sco16 was published, the 500-\mic\ flux densities used for
their local sample \citep{Dale2012} were updated following the
latest {\it Herschel} calibration. To estimate \mwtd, \td\ and
\lsub\ we fitted the photometry presented by \citet{Chu2017} and
\citet{Clark2018} using the method described in
\citet{Dunne2001}.}
\item{The 850-\mic\ photometry for local galaxies in the SLUGS sample
\citep{Dunne2000} is contaminated by the CO(3--2) line. We have
corrected for this using the results of \citet{Seaquist2004},
where for galaxies with $D<148$\,Mpc we reduce the 850\,\mic\ flux
density by 25 per cent.}
\item{It appears that the CO(2--1) data from \citet{Aravena2016}, as
reproduced in \citet{Bothwell2017}, has been incorrectly converted
to $L^{\prime}_{10}$ (\lcob\ appears to have been multiplied by
0.9 instead of being divided by it). We have corrected this error
and applied our chosen value of $r_{21}=0.8$ for the conversion.}
\end{enumerate}
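For item (iii), the combined correction can be sketched as follows. This is a sketch under the stated assumptions: that the erroneous factor was exactly 0.9, and that our adopted $r_{21}=0.8$ applies.

```python
def lco10_from_lco21(lco21, r21=0.8):
    """L'_CO(1-0) from L'_CO(2-1), with r21 = L'_21 / L'_10."""
    return lco21 / r21

def corrected_lco10(lco10_published):
    """Undo the apparent error (L'_21 multiplied by 0.9 rather than
    divided by it) and re-convert with our adopted r21 = 0.8."""
    lco21 = lco10_published / 0.9   # recover the original L'_21
    return lco10_from_lco21(lco21)
```

The net effect is an upward revision of the published $L^{\prime}_{10}$ by a factor $1/(0.9\times0.8)\approx1.39$.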
\paragraph*{Homogenisation of distances:}
The most local galaxies ($D<30$\,Mpc) often have a variety of
distances used in the literature. As we have often taken \Lir, \lci,
\lcoa\ and \lsub\ from different papers, we have had to homogenise the
literature luminosities to correspond to a common distance. The
distance chosen is that listed in \citet{Dale2017} and presented in
Table~\ref{SampleT}.
\paragraph*{Updating local CO data:}
The Sco16 local galaxy sample used CO(1--0) fluxes from the FCRAO
single-dish survey of \citet{Young1995}, which has significant and
uncertain extrapolations to total fluxes for extended galaxies. We
have updated the CO data for these very local galaxies to use CO(1--0)
maps from the COMING survey \citep{Sorai2019} where possible as well
as from other mapping datasets from the literature
\citep{Gao2004,Kuno2007,Young2008,Galametz2011,Koda2011,Schruba2012,Ueda2014}.
\paragraph*{New CO measurement for ID141:}
We use an unpublished CO(1--0) flux for ID141, which was observed with
the Jansky Very Large Array and has ${S_{10}=0.61\pm0.09\,\rm Jy\,\kms}$.
\section{Required corrections}
\subsection{H\,{\sc i}-dominated galaxies at lower \Lir}
\label{HIS}
There is a potential source of bias when deriving calibration factors
involving \lsub\ for galaxies with large ratios of $\fhi= \HI/\mol$,
as the dust may be tracing \HI\ as well as \mol. If we apply our
method from \S\ref{optS} to such H\,{\sc i}-dominated galaxies, we
will infer the presence of more \mol\ due to the dust which resides
only in the \HI\ phase. Because we calibrate in pairs of tracers, this
leads to an over-estimate of \aco\ or \aci\ as well as a bias in the
dust-based calibration factor.
To investigate this, we estimated \fhi\ in the same regions as the
submm flux densities for the local galaxies we could find in the
literature \citep{Dunne2000,Spekkens2004,Wong2013,Groves2015,
Thuan2016,Dale2017,Koribalski2018,Jiao2021}. As \fhi\ correlates
inversely with \Ms, metallicity and \Lir\
\citep[e.g.][]{Bothwell2014,Saintonge2016}, this issue affects more of
the low \Lir\ galaxies (mostly in the ad sample). For any galaxies
with $\fhi>1$ within the optical disk, we make a correction to \lsub,
removing that portion of the dust emission which is likely associated
with the excess \HI. This correction is designed to produce the same
\lsub/\mol\ ratio as a galaxy with $\fhi=1$.
\begin{equation}
\lsub^{\rm cor} = \lsub \left(\frac{2}{\fhi+1}\right) \label{HIE}
\end{equation}
\noindent Galaxies with $\fhi>1$ are shown with this correction
applied as cyan diamonds in the figures. The higher luminosity
(U)LIRGs and SMGs are dominated by molecular gas
\citep[e.g.][]{Yao2003} so we do not need to correct these.
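Equation~\ref{HIE}, together with the condition under which we apply it, can be sketched as:

```python
def lsub_corrected(lsub, fhi):
    """Apply the HI correction of Eq. (HIE).

    fhi = M_HI / M_H2.  For fhi > 1 we remove the dust emission likely
    associated with the excess HI, so that the corrected lsub/M_H2
    ratio matches that of a galaxy with fhi = 1; molecular-dominated
    galaxies are left unchanged.
    """
    if fhi <= 1.0:
        return lsub
    return lsub * 2.0 / (fhi + 1.0)
```
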
\subsection{Discussion of local \CI\ data}
\label{J19A}
For the {\it Herschel} FTS measurements of local (U)LIRGs
\citep{Lu2017}, we only include local galaxies with $D>27$\,Mpc to avoid
issues with mis-matched beams. We also rejected galaxies where there was
a large discrepancy between the measurement of \citet{Lu2017} and that
of \citet{Kamenetzky2016} (using the same data).
The set of local galaxies which were mapped by the {\it Herschel} FTS
and presented by J19 are shown in the figures, but not included in the
averages for the following reasons:
\begin{enumerate}
\item{The \CI\ and CO measurements are made in matched apertures,
however the area mapped in \CI\ is sometimes much smaller than
that used for the 500--850\,\mic\ flux densities reported in the
literature. Any analysis which involves both \lci\ and \lsub\
requires a correction to \lci\ to address the mis-match in
apertures. We attempted to do this by taking the global CO
luminosities (which are global fluxes equivalent to the submm
continuum measurements) and assuming that the deficit between the
global \lcoa\ and that measured in the same aperture as the \CI\
by J19 is the same as the deficit in \lci:
\begin{equation}
\lci^{\rm cor} = \lci^{\rm J19} \frac{\lcoa^{\rm global}}{\lcoa^{\rm J19}} \label{CIcorE}
\end{equation}
These corrections range from $\rm JC = 0.00$ to 0.74\,dex, and the
pink diamonds in the figures indicate those galaxies that have
$\rm JC>0.07$\,dex. Even after applying the corrections, the J19
galaxies have different average properties in the \lsub/\lci\ ratio
(see Fig.~\ref{LhistsF}). We therefore do not have confidence in our
comparison of \lci\ to \lsub\ for these galaxies and so exclude them
from the statistics.}
\item{Although the CO and \CI\ luminosities from J19 are measured in
the same apertures, there is a trend for these resolved galaxies
to have lower \lci\ for a given \lcoa\ compared to galaxies which
have more global flux measurements. There could be a sampling bias
because \CI\ is only detected over the inner kpc or so of the
larger galaxies. The CO luminosity per mass of gas (\aco) has been
found to be lower in the central regions of many galaxies
\citep{Sandstrom2013}, which would produce a decrease in
\lci/\lcoa. Since we wish to compare the same averaged global
fluxes across all galaxies, we remove these `centrally-biased'
galaxies from our statistical analysis, but we show them in the
figures for completeness.}
\item{Finally, a more recent paper by \citet{Jiao2021} did produce
matched dust and \CI\ measurements for a subset of the J19
galaxies. The results are shown in Fig.~\ref{LcorF}(d) where it
can be seen that the J21 galaxies are still deficient in \CI\
compared to the higher luminosity galaxies. This cannot be due to
a mis-matched aperture but the same sampling bias is present
toward the inner regions of the resolved galaxies. An offset to
lower \lci\ per \lsub\ implies either depressed \lci\ (lower \Xci)
or increased \lsub\ per unit gas mass (lower \gdr, or
higher dust emissivity).}
\item{Unfortunately, this is the only published set of \CI\ fluxes for
galaxies with $\log \lci<8$ and the only set of fluxes published
for the mapping mode of the {\it Herschel} FTS. There is no
description in the literature of how the processing for this mode
should be made, and there are differences in the results of J19
and \citet{Crocker2019}, who analyse some of the same mapping
data. Despite our best attempts to contact the relevant team, we
have not been given the details of their flux measurements. We
can only note that the \CI\ fluxes from {\it Herschel} FTS mapping
are not necessarily repeatable when analysed by different teams
and so elect to exclude the resolved J19 galaxies from the
statistical analysis. }
\end{enumerate}
\noindent Excluded galaxies are denoted as `\CIcor' and they are
shown as pink diamonds on the relevant figures.
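The aperture correction of Equation~\ref{CIcorE}, and the size of the applied correction in dex (JC), can be sketched as:

```python
import math

def lci_corrected(lci_j19, lco_global, lco_j19):
    """Scale the J19 CI luminosity to a global value, assuming the CI
    aperture deficit equals the CO aperture deficit (Eq. CIcorE)."""
    return lci_j19 * lco_global / lco_j19

def jc_dex(lco_global, lco_j19):
    """Size of the applied correction in dex."""
    return math.log10(lco_global / lco_j19)
```
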
\section{Dust mass opacity and the relationship of dust to gas}
\label{kappaA}
The dust mass opacity coefficient, $\kappa_{\rm d}(\lambda)$, is
proportional to the emissivity per unit mass of dust. It is related to
the calibration parameter we use in our analysis,
$\kh=\gdr/\kappa_{\rm d}$, where \kh\ refers to the dust emission per
H mass, thus encompassing the two unknowns of dust optical properties
and gas-to-dust ratio (\gdr).
The dust optical properties are not easily measured, and can vary
enormously from laboratory-based studies to theoretical dust models
and from those inferred from observations (for a review see
\citealp[e.g.][]{Dunne2003,Clark2019}).
\begin{table*}
\caption{Summary of our physical dust calibrations (\kd, \gdr)
compared to other work in the literature, where \gdr\ and \kh\
refer to the mass of hydrogen in all forms, excluding He.}
\begin{adjustbox}{center}
\begin{tabular}{cccc}
\toprule
$\kh = \gdr/\kd$ & Sample & Notes & Reference\\
\khunit & & & \\
\midrule
1884 (1500--2200) & ex-gal & average of extragalactic estimates & this work\\
\multicolumn{4}{c}{}\\
\multicolumn{4}{c}{\bf Milky Way diffuse and atomic regions}\\
\midrule
$2352\pm198$ & diffuse & 850\,\mic, H\,{\sc i} very diffuse sight lines & \citet{Planck2014xvii}\\
$1988\pm710$ & all sky & 850\,\mic, H\,{\sc i}, CO(1--0) with $\aco=3.2$ & \citet{Planck2014xi}\\
$1380\pm251$ & Taurus H{\sc i} & H\,{\sc i} with 25\% opacity correction, Planck, scaled $\beta=1.8$ & \citet{Planck2011xix}\\
$1518$ &  & 250\,\mic\ scaled to 850\,\mic\ with $\beta=1.8$, H\,{\sc i} & \citet{Boulanger1996}\\
\multicolumn{4}{c}{}\\
\multicolumn{4}{c}{\bf Milky Way molecular/higher density regions}\\
\midrule
$1392$ & $\rm{log(N_H)>20}$ & 850\,\mic, H\,{\sc i}, CO(1--0) with $\aco=3.2$ & \citet{Planck2014xi}\\
$1392$ & $\rm{log(N_H)\sim21}$ & 850\,\mic, H\,{\sc i}, CO(1--0) with $\aco^\gamma$ & \citet{Remy2017}\\
$1012$--$1044$ & DNM & Dark neutral medium, 850\,\mic, $\gamma$-rays & \citet{Remy2017,Remy2018}\\
$700\pm200^{\dag}$ & local clouds (\mol) & 850\,\mic, CO(1--0), \aco\ from $\gamma$ & \citet{Remy2017}\\
$1210\pm184^{\dag}$ & local clouds (H\,{\sc i}) & 850\,\mic, H\,{\sc i} & \citet{Remy2017} \\
$654\pm85$ & Taurus \mol & NIR extinction, Planck, scaled $\beta=1.8$ & \citet{Planck2011xix}\\
\multicolumn{4}{c}{}\\
\multicolumn{4}{c}{\bf Local galaxies}\\
\midrule
$1663\pm333$ & 9 & CO(1--0), H\,{\sc i}, 500\,\mic\ dust scaled to 850\,\mic\ with $\beta=1.8$ & \citet{Eales2012}\\
2296 (163/0.071) & 101 Sab--Sbc & CO(1--0) with $\aco=3.2$, H\,{\sc i}, dust SED fits & \citet{Casasola2020}\\
$1692$--$2169$ & 130 Sa--Sc & CO(1--0), H\,{\sc i}, dust MBB, \aco(Z) & \citet{Bianchi2019}\\
$2096$ & 26 & CO(2--1), H\,{\sc i}, dust \citetalias{Draine2007_kappa} fits & \citet{Sandstrom2013}\\
2402 (92/0.0383) & 189 & CO(1--0), H\,{\sc i}, dust \citetalias{Draine2007_kappa} fits $\aco=3.2$ & \citet{Orellana2017}\\
$1500-2200$ & M74, M83 & $Z$, H\,{\sc i}, CO(2--1), 500\,\mic\ with \citet{James2002} method & \citet{Clark2019}\\
\multicolumn{4}{c}{}\\
\multicolumn{4}{c}{\bf Physical dust models commonly used in the literature.}\\
\midrule
3232 (109/0.034) & theoretical & physical dust model producing too
much $A_V/N_{\rm H}$ & \citet{draine2003}; \citetalias{Draine2007_kappa} \\
& & & \citet{Planck2016xxix}\\
$1972$ & theoretical & up-dated \citetalias{Draine2007_kappa} dust model & \citet{DH2020}\\
1901 (135/0.071) & theoretical & physical dust model THEMIS & \citet{Jones2017,Jones2018}\\
\bottomrule
\end{tabular}
\end{adjustbox}
\flushleft{The first column is \kh, the ratio of the
gas-to-dust ratio (\gdr) and the dust mass opacity coefficient. Where
there is an explicit assumption for \gdr\ or \kd\ in a reference, we
include it in parentheses. $^\dag$The clouds in these rows are
the same; \citeauthor{Remy2017} have calculated the dust opacity
for each gas phase separately. \aco(Z) from \citet{Amorin2016}.}
\label{kappalitT}
\end{table*}
Commonly adopted extragalactic estimates range from
$\kappa_{850}=0.03$--$0.08\,\rm m^2\,kg^{-1}$
\citep{Li2001,Dunne2000,James2002,draine2003,
Planck2011xix,Eales2012,Clark2016,Bianchi2019}, though higher values
(by factors of several) are inferred for the very densest and coldest
environments where grains can grow icy mantles and coagulate
\citep{Kohler2015,Remy2017,Ysard2018}. These changes in opacity have
also been correlated with a loss of PAH and stochastically heated
small grains \citep{Flagey2009,Ysard2013}. \citet{Remy2017} suggest
that regions of the ISM with dust opacities a factor $\sim 2$ higher
than the diffuse ISM (and with cold dust, $\td\sim16$--18\,{\sc k}),
would be those where grains are accreting carbonaceous mantles, as in
the THEMIS dust model \citep{Jones2017,Jones2018}. This carbon
mantle-accreting regime is largely assumed to be the dark neutral
medium (close to the atomic-molecular transition, where there is low
CO emission and high H\,{\sc i} opacity). Deeper within clouds, where
the temperature drops to $\td<16$\,{\sc k}, the dust begins to
aggregate and accrete ice mantles, which increases the opacity
further. These very dense, cold environments do not, however, contain
the bulk of the ISM mass and certainly do not emit a dominant fraction
of \lsub\ in a galaxy \citep{Draine2007,Bianchi2019}. The increase in
dust emissivity (\kd) from atomic to moderately dense molecular
material is in the range 1.2--2.0 \citep{Remy2017}.
In fact, it is \kh\ -- the parameter relating the dust emissivity to
the gas mass -- that can be measured in astrophysical situations,
since we have no absolute knowledge of \gdr. Table~\ref{kappalitT}
lists a comprehensive set of observational and theoretical values for
\kh\ from the literature. Estimates of \kh\ in the Milky Way are made
across a number of sight-lines, from H\,{\sc i}-only (diffuse) to
H$_2$-dominated clouds (dense) where CO emission is used with
assumptions about \aco\ in order to determine $N_{\rm H}$. Independent
confirmation is provided by studies \citep[e.g.][]{Remy2017} using
$\gamma$-ray observations to determine the gas column; the resulting
values of \kh\ are in good agreement (see Table~\ref{kappalitT}), with
\kh\ being higher along diffuse sight-lines (1800--2400), dropping to
700--1500 in denser molecular or dark neutral media.
In extragalactic studies, a similar method is used, although with
larger uncertainties as it is less straightforward to decompose the
atomic and molecular components along the line of sight. These studies
find a range of $\kh=1500$--2200\,\khunit, closer to the diffuse ISM
measurements in the Milky Way.
For a given dust model, we can also calculate the theoretical \kh\
given the assumed dust optical properties, chemical abundances and
depletions. The theoretical values are also listed in
Table~\ref{kappalitT} where the current consensus is for
$\kh \sim 1900$--2000\,\khunit. The popular
\citet{draine2003} model has a significantly
higher $\kh=3200$\,\khunit\ (lower $\kd=0.034$\,\kunit\ for
$\gdr=109$) than all of the empirical measurements. This was noted by
\citet{Draine2014} and \citet{Planck2016xxix} and has been updated in
the more recent version of this model by \citet{Hensley2021}. We
encourage readers to use the updated version in order to produce
dust-based measurements which are consistent with what we know about
dust from observations.
\section{Deriving gas mass from observations of \CIfull}
\label{QA}
The excitation term, $Q_{\rm ul}$, which describes the fraction of C
atoms in each excited state, is a function of ($n$,\tk) in non-LTE
conditions, and is derived analytically in the Appendix to
\citet{PPP2004}. A recent study of the [\CI](2--1)/(1--0) line ratio
found that [\CI](2--1) is strongly sub-thermally excited, and [\CI](1--0)
presents interesting super-thermal behaviour in the range of density
and temperature expected for galaxies. We illustrate the dependence of $Q_{\rm ul}$ on ($n$,\tk) in
Fig.~\ref{QulF}. As discussed by \citet{PPPDunne2022}, the value of
\Q\ for lower densities ($n=300$--3000\,\cc) can exceed the LTE value
at $\tk>20$\,{\sc k}, but Fig.~\ref{QulF} shows that for a reasonable
range of $n$ and \tk\ ($300<n<10,000$\,\cc, $25<\tk<80$\,{\sc k}) \Q\
does not go outside the range 0.35--0.53. In fact, for a uniform
probability of ($2.4< \log n <4.0$) and ($25<\tk<80$\,{\sc k}) the 99
per cent range for \Q\ is 0.40--0.54, median=0.48. The relative
uncertainty on the calibration of \CI\ mass from the lack of knowledge
of ($n$,\tk) is thus $<\pm 16$ per cent. We will therefore use the
median value of $\Q=0.48$ throughout, because even though we may be
able to use the measured \td\ to infer the galaxies with higher or
lower \tk\ (assuming $\tk=\alpha^{\rm TD} \, \td$ -- see
\citealp{PPPDunne2022}), the lack of knowledge of the density and the
super-thermal behaviour in the $J=1$ state means that there is no
direct correlation between \Q\ and \tk. Using sensible average
parameters for MS galaxies [and SMGs], i.e. $n=500$ [5000]\,\cc\ and
$\tk=40$ [80]\,{\sc k}, we find only a small ($\sim10$ per cent) difference in the expected \Q\ values.
The LTE expressions for \Q\ and \tx\ should not be used \citep{PPPDunne2022}, as $Q_{10}^{\rm LTE}$ is actually {\em lower} than the non-LTE \Q\ for densities above a few hundred \cc, and its use would therefore lead to a systematic bias: e.g. for $\tk=60$\,{\sc k} and $n=1000$\,\cc, the LTE value of \Q\ is 18 per cent lower than the appropriate non-LTE value, leading to an 18 per cent over-estimate of the \mol\ mass derived from \CI.
For the [\CI](2--1) line, things are not so promising
(Fig.~\ref{QulF}, right). The range of possible values of $Q_{21}$ is
large, from 0.07 to 0.37 at the 99 per cent level. The median
is $Q_{21}=0.22$, giving an uncertainty range of $\pm 68$ per cent for
reasonable values of ($n$,\tk). Because of the sub-thermal behaviour,
the [\CI](2--1) line is a sensitive indicator of density
\citep{PPPDunne2022} and galaxies with strong [\CI](2--1) emission will
have a larger fraction of their \mol\ in a dense state.
\section{From pairwise variances to individual variances}
\label{pairwiseS}
We have measurements of different tracers of gas mass for several
galaxies, but no direct measurements of \Mh\ itself. Hence, it is not
possible to measure directly how well each tracer follows the gas
mass. However, we do have measurements of the different tracers for
each galaxy, so we can estimate the scatter in the difference between
the tracers. Under some assumptions this allows us to infer the
scatter between each tracer and the gas mass.
To simplify the notation, we write the log of observed quantities
and corresponding standard deviation of errors as
\begin{equation}
\label{eqn:x_errs}
\begin{aligned}
x_1& = \rm \log(L_{850}), & \quad& \sigma_1 = \rm \log(1+\sigma_{850}/L_{850}), \\
x_2 &= \rm \log(L'_{\rm CO}), & &\sigma_2 = \rm
\log(1+\sigma_{\rm CO}/L'_{CO}),\\
x_3 &= \rm \log(L'_{\rm C\,I}), & & \sigma_3 = \rm
\log(1+\sigma_{\rm C\,I}/L'_{\rm C\,I}).
\end{aligned}
\end{equation}
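The log-space error assignment above amounts to the following one-line sketch (base-10 logarithms, i.e. dex):

```python
import math

def log_sigma(lum, sigma_lum):
    """Standard deviation in log10 space for a luminosity lum with
    measurement error sigma_lum, as in the error definitions above."""
    return math.log10(1.0 + sigma_lum / lum)
```
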
If the true value of the log of the gas mass is
$ \hat m = \log(\Mh) $, and the true values of the observed
quantities are $\hat x_i$, where $i=1,\ldots,3$, then we can write
\begin{equation}
\hat m = \hat x_i + \hat a_i,
\label{eqn:cal-const}
\end{equation}
where $\hat a_i $ are the true calibration factors for each galaxy,
\begin{equation}
\begin{aligned}
\hat a_1 = &- \rm \log(\hat \alpha_{850}) \\
\hat a_2 = &\quad \rm \log(\hat \alpha_{CO}) \\
\hat a_3 = &\quad \rm \log(\hat \alpha_{C\,I})\\
\end{aligned}
\end{equation}
Note that the true values $\hat a_i$ may be different for each galaxy,
depending on the individual physical conditions within the galaxies.
If we choose a particular set of calibration factors for all galaxies,
say $\tilde{a}_i$, this provides three estimates of the gas mass for
each galaxy,
\begin{equation}
m_i = x_i + \tilde{a}_i
\end{equation}
The error in each mass estimate is
\begin{equation}
\begin{aligned}
m_i -\hat{m} & = x_i - \hat{x_i} + \tilde{a}_i - \hat{a}_i \\
& = \delta x_i + \delta a_i
\end{aligned}
\end{equation} where $\delta a_i$ is the difference between the true
factor for this galaxy and the value we have chosen, and $\delta x_i$
are the measurement errors of the observations.
If we assume that the errors on $x_i$ are not correlated
with the errors on $a_i$, then the variance of the mass errors is given by:
\begin{equation}
\var(m_i -\hat{m}) = \sigma_i^2 + s_i^2
\end{equation} where $s_i^2$ is the variance of the
true calibration factors.
The value of $s_i^2$ gives a direct measure of how accurate
the particular tracer is when using a universal calibration factor
for all galaxies. Without knowing the true gas mass, we do not have a direct
measure of this value, but we can obtain an estimate by considering the
differences between the mass measurements:
\begin{equation}
\begin{aligned}
m_i - m_j &= x_i - x_j + \tilde{a}_i - \tilde{a}_j \\
&= \delta x_i -\delta x_j + \delta a_i -\delta a_j
\end{aligned}
\end{equation}
If we ignore all co-variance terms, the variance of the differences is
given by:
\begin{equation}
\label{pair_var}
v_{ij} = \var(m_i - m_j) = \sigma_i^2 + \sigma_j^2 + s_i^2 + s_j^2
\end{equation}
It is straightforward to re-arrange these equations to find the
intrinsic variance of the calibration factors as:
\begin{equation}
s_1^2 = \left(v_{12} + v_{31} - v_{23} \right)/2 - \sigma_1^2
\end{equation}
with similar equations for $s_2^2$ and $s_3^2$. So long as we have
good estimates of the measurement errors, $\sigma_i$, for the observed
quantities, we can estimate the scatter in calibration constants for
each tracer. Using our dataset we have measured the variance for each
pair of factors in Eqn.~\ref{pair_var}. Assuming that the co-variance
between the calibration factors is zero, we use the three pair
variances to estimate the intrinsic variance of the three individual
calibration factors. The resulting standard deviations are
$s_{\kappa} = 0.1294$, $s_{\alpha} = 0.1436$ and
$s_{\mathrm{X}} = 0.1125$, using all galaxies except the
\CIcor\footnote{When restricting the analysis to \Lhi\ galaxies,
\CI\ produces notably less scatter than both CO and dust continuum, with
$s_{\kappa} = 0.1339$, $s_{\alpha} = 0.1646$ and
$s_{\mathrm{X}} = 0.082$.}. Values are listed in
Table~\ref{methodT}.
This analysis shows that \Xci\ has the smallest scatter between
galaxies, especially when considering \Lhi\ galaxies, which is a new
result, independent of any assumptions.
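The algebra of this appendix can be summarised in a short sketch (indices 1--3 for the three tracers as above; assumes, as in the text, zero covariance between the calibration factors):

```python
def intrinsic_variances(v12, v23, v31, sig1, sig2, sig3):
    """Invert v_ij = sig_i^2 + sig_j^2 + s_i^2 + s_j^2 (Eq. pair_var)
    for the intrinsic variances s_i^2 of the calibration factors."""
    s1_sq = (v12 + v31 - v23) / 2.0 - sig1**2
    s2_sq = (v12 + v23 - v31) / 2.0 - sig2**2
    s3_sq = (v23 + v31 - v12) / 2.0 - sig3**2
    return s1_sq, s2_sq, s3_sq
```
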
\section{A Bayesian approach to combining gas mass estimates}
\label{bayesS}
Our method of combining the three gas mass tracers is based on the
idea that the conversion factors for any particular galaxy come
from parent distributions with variances as derived in
Appendix~\ref{pairwiseS}. This means that we should allow for the
expected scatter in conversion factors as well as the observational
error when combining estimates from the different tracers. Using a
Bayesian approach to the problem, we show the most likely mass
estimate is simply the inverse variance weighted mean of the tracers,
where the weights include both measurement error and the variance in
conversion factors.
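The resulting estimator, derived below, can be sketched as:

```python
def combined_log_mass(x, abar, sigma, s):
    """Inverse-variance weighted combination of the tracer estimates.

    Each tracer i gives m_i = x_i + abar_i, with total variance
    sigma_i^2 + s_i^2 (measurement error plus intrinsic scatter of the
    conversion factor); the maximum-posterior log mass is the weighted
    mean of the m_i with weights 1/(sigma_i^2 + s_i^2).
    """
    weights = [1.0 / (sig**2 + si**2) for sig, si in zip(sigma, s)]
    estimates = [xi + ai for xi, ai in zip(x, abar)]
    return sum(w * m for w, m in zip(weights, estimates)) / sum(weights)
```
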
We continue to use the notation of Appendix~\ref{pairwiseS}, where the
observed quantities are $x_i$ with errors $\sigma_i$.
Assuming the measurement errors are Gaussian, the probability of
measuring the observed value of $x_i$ is
\begin{equation}
P(x_i| \hat x_i, \sigma_i) = {\cal N}(x_i | \hat x_i, \sigma_i^2)
\end{equation}
where $\cal N$ represents the normal distribution centred on $\hat x_i$
and with variance $\sigma_i^2$.
Now, for each observation we can use Bayes' theorem to estimate the
posterior probability that the gas mass is $m$,
\begin{equation}
P(m,\hat a_i | x_i) = P(x_i| m, \hat a_i) P(\hat a_i)P(m) / P(x_i)
\end{equation}
where we have assumed $m$ and $\hat a_i$ are independent. For the
prior on $\hat a_i$, we assume a normal distribution with mean
$\bar a_i$ and variance $s_i^2$, as discussed in
Appendix~\ref{pairwiseS}. We assume a flat prior on $m$, implying
that $P(m)$ is constant. Since $P(x_i)$ is also constant, the position
of the maximum posterior probability does not depend on the actual
value of $P(m)/P(x_i)$, and for convenience we set this to
1. Therefore:
\begin{equation}
\begin{split}
P(m,\hat a_i | x_i) &\propto P(x_i| m , \hat a_i) P(\hat a_i)\\
&= {\cal N}(x_i| m - \hat a_i, \sigma_i^2)
{\cal N}(\hat a_i | \bar a_i,
s_i^2)\\
&= {\cal N}(\hat a_i| m - x_i, \sigma_i^2)
{\cal N}(\hat a_i | \bar a_i,
s_i^2) .
\end{split}
\end{equation}
Here we have used Equation~\ref{eqn:cal-const} to go from $m-\hat a_i$
to $m-x_i$. Since we are interested primarily in the value of the gas
mass, and not explicitly in the values of the calibration factors, we
can marginalise over the values of $\hat a_i$. Ignoring the
uncertainties on the variances, $\sigma_i^2$ and $s_i^2$, leads to:
\begin{equation}
P(m | x_i) = {\cal N}(m | x_i + \bar a_i, \sigma_i^2+s_i^2) .
\end{equation}
Including all three observations for the galaxy this becomes
\begin{equation}\begin{split}
P(m | \{x_i\}) &= \prod_{i=1}^{3} {\cal N}(m | x_i + \bar a_i, \sigma_i^2+s_i^2) \\
&\propto %
\exp\left( -\sum_{i=1}^3 \frac{(m-x_i-\bar
a_i)^2}{2(\sigma_i^2+s_i^2)}\right) .
\end{split}
\end{equation}
So maximising the posterior probability with respect to $m$ is
equivalent to minimising $\chi^2$, where:
\begin{equation}
\chi^2 = \sum_{i=1}^{3} \frac{(m-x_i-\bar a_i)^2}{ \sigma_i^2+s_i^2} .
\end{equation}
The minimum with respect to $m$ is given by
\begin{equation}
\begin{split}
m^{\rm opt} &= \left(\sum_{i=1}^3
\frac{x_i+\bar{a_i}}{\sigma_i^2+s_i^2} \right) \Bigg/
\left(\sum_{i=1}^3 \frac{1}{\sigma_i^2+s_i^2} \right) \\
&= \left(\sum_{i=1}^3 (x_i+\bar{a_i})w_i \right) \Bigg/
\left(\sum_{i=1}^3 w_i \right),
\end{split}
\end{equation}
\noindent where $w_i = 1/(\sigma_i^2+s_i^2)$. So the optimal mass estimate is simply the inverse variance-weighted
mean of the three estimates, where each uses the mean conversion
factor, and where the variance for each measure is the sum of the
measurement error and the expected variance of the conversion
factor.
The uncertainty on $m^{\rm opt}$ follows from the variance of the weighted mean,
\begin{equation}
\label{eqn:var_m}
\sigma_{m^{\rm opt}}^2 = 1 \Bigg/
\left(\sum_{i=1}^3 w_i \right).
\end{equation}
The corresponding estimates of the conversion factors for a
particular galaxy are then simply given by:
\begin{equation}
a_i = m^{\rm opt}-x_i, \quad i=1,\ldots,3 .
\end{equation}
The uncertainty on the factor $a_i$ depends on the uncertainty on
$m^{\rm opt}$, from equation \ref{eqn:var_m}, and the uncertainty on the
measurement $x_i$, from equation \ref{eqn:x_errs}. Since the
estimate of $m$ depends on the measurements $x_i$, there is a non-zero
covariance between $m$ and $x_i$. Allowing for this covariance, the expected
uncertainty on $a_i$ is given by:
\begin{equation}
\label{eqn:a_i_errs}
\sigma_{a_i}^2 = \sigma_{m^{\rm opt}}^2 + \sigma_i^2 \left( 1-
\frac{2 w_i}{\sum_{j=1}^3 w_j } \right).
\end{equation}
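The full combination procedure above (the optimal mass, its variance, and the per-galaxy conversion factors with their covariance-corrected errors) reduces to a few lines. A sketch using the intrinsic-scatter values quoted earlier but invented observations, errors, and mean conversion factors:

```python
import numpy as np

# Inverse-variance weighted combination of three tracers.  All values in
# dex (log10); x, abar and sigma below are illustrative, not from the paper.
x = np.array([8.10, 8.25, 8.05])       # observed quantities x_i
abar = np.array([1.30, 1.10, 1.40])    # mean conversion factors abar_i
sigma = np.array([0.05, 0.08, 0.06])   # measurement errors sigma_i
s = np.array([0.129, 0.144, 0.113])    # intrinsic scatter of conversion factors

w = 1.0 / (sigma**2 + s**2)            # weights w_i
m_opt = np.sum((x + abar) * w) / np.sum(w)
var_m = 1.0 / np.sum(w)                # variance of the weighted mean

# Per-galaxy conversion factors and their uncertainties, including the
# covariance between m_opt and each x_i:
a = m_opt - x
var_a = var_m + sigma**2 * (1.0 - 2.0 * w / np.sum(w))
```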
\section{Sensitivity of tracer to SFR and radiation field intensity}
\label{tdcorrA}
Fig.~\ref{tdcorrAF} shows the observable ratios, \lsub/\lci\ and
\lsub/\lcoa, as a function of \td\ (left) and \Lir\ (right). There is
no significant trend for \lsub/\lci\ or \lsub/\lcoa\ with either \td\
or \Lir. There is a noticeable offset to higher \lsub/\lci for the
\CIcor\ galaxies (pink diamonds), which also have lower \td\ and \Lir\
than the other samples. As these galaxies require large corrections to
\lci\ in order to compare to \lsub, we cannot be sure whether this is a
real effect or just an underestimate of the required correction. A
larger sample of low-temperature, low-luminosity galaxies with matched
apertures will be required to investigate this.
\section{Tests of robustness}
\label{testsA}
\subsection{Consistency of parameter estimates}
We investigated the consistency of our parameter estimates for the
same galaxies when three tracers are used compared to only
two. Fig.~\ref{acocompF} shows that there is a reasonable correlation
between the three-tracer and two-tracer estimates, with only small
differences in the sample medians when different numbers of tracers
are used. The Xd pair produces the closest match to the three-tracer
method (Fig.~\ref{acocompF}, centre and lower-right panels), with
no bias and a small scatter. If restricted to choosing only one pair
to observe, the best choice seems to be \lsub\ and \lci.
\subsection{Impact of using fixed vs. variable \mwtd}
\label{QTS}
In this section we test a different approach to \mwtd, one of the main
physical dependencies that impact the calibration of gas
masses\footnote{This is not to suggest that \aco\ is not dependent on
the physical properties of the gas, but being optically thick, this
line does not have any simple relationship with anything we can
empirically determine. Similarly, we have shown that \Q\ is not easy
to determine per galaxy, but its range is small enough to have no
significant impact on our calibration study.}. To estimate gas mass
from \lsub, the mass-weighted dust temperature, \mwtd, is required.
\mwtd\ has been set to 25\,{\sc k} in previous studies
\citep[e.g.][]{Scoville2014,Scoville2016,Hughes2017}, adding to the
uncertainty in gas-mass estimates for individual galaxies. However, as
we wish to study trends in the conversion factors, we are concerned
about the possible effects of systematic trends in \mwtd, since these
may affect the resulting behaviour of the conversion factors if
ignored.
Having determined empirical relationships between $z$, \Lir, SED
colour (\Lir/\lsub) and \mwtd\ in \S\ref{dustS}, we compare the
calibration results using these empirically determined \mwtd\ to the
standard assumption of constant \mwtd=25\,{\sc k} made in the
literature. Fig.~\ref{fixQT_lirF} shows the impact of using our
empirical relations (coloured points), versus keeping \mwtd\ fixed
(grey points). Each panel shows one of the affected conversion
factors derived from either the ad or Xd samples. The trends with
luminosity -- visible for our default prescription -- disappear when
a constant $\mwtd=25$\,{\sc k} is used.
The histogram of the offsets in each conversion factor when using
the empirical \mwtd\ compared to constant $\mwtd=25$\,{\sc k}
(Fig.~\ref{delcalqu_histF}) shows that the choice of \mwtd\ makes no
significant difference to the median values of the parameters
($<0.015$\,dex). For individual galaxies, the average uncertainty
introduced by using a constant \mwtd\ is 0.046--0.06\,dex (1$\sigma$),
with a maximum of $\sim 0.2$\,dex.
Finally, the difference in the gas-mass estimates, \Mh, when using
constant \mwtd\ versus our empirical prescription is shown in
Fig.~\ref{delMhQT_lirF}. At lower \Lir, a constant \mwtd\ produces
lower \Mh\ compared to our empirical method, because these galaxies
are local disks which tend to have colder diffuse dust
temperatures. At higher log \Lir$>12$, the trend reverses as the
diffuse dust temperatures increase to $\sim30$\,{\sc k}.
\section{A robust Orthogonal Distance Regression algorithm}
\label{ODRS}
In order to fit the most robust linear model to the data, we have
employed an Orthogonal Distance Regression (ODR) that includes intrinsic
scatter.\footnote{These ideas are outlined in \citet{Hogg2010} and
\citet{dfmplane}; however, both of their Bayesian implementations
result in biases in the estimated slope. The biases are quite
pronounced when the range sampled by the data is not much larger than
the errors on the data, but are significant even when the range
sampled is $\sim$10$\sigma$. The biases also depend on which axis is
chosen as the ``true'' independent variable and whether the errors are asymmetric ($\sigma_x \ll \sigma_y$, or $\sigma_x \gg \sigma_y$). We found that an
ODR which does not use the Bayesian likelihood formalism is the only
one which does not have such biases; hence our choice to use it here.}
We use the {\sc emcee} MCMC sampler \citep{DFM2013} to explore the
$\chi^2$ space and compute robust confidence intervals. Our algorithm
results in parameters which are symmetric under transformation of $x$
and $y$, allowing us to utilise the full co-variance matrix,
including the intrinsic scatter as a third variable.
The MCMC is set up to explore the following log-likelihood function:
\begin{equation}
\ln L = -\frac{1}{2}\sum^{N}_{i=1}\left(\Delta^2/\sigma^2 + \ln(\sigma^2/S_2)\right)
\end{equation}
\[
\Delta = {\bf v}\cdot{\bf Z} - b\,\cos\theta
\]
with ${\bf Z}$ as the data array of $x$ and $y$ values, $b$ as the intercept,
and $\theta$ related to the slope as $m = \tan\theta$. The unit vector
${\bf v} = [-\sin\theta, \cos\theta]$ projects onto the direction
perpendicular to the line, so that $\Delta$ is the orthogonal distance.
\[
\sigma^2 = \left(({\bf S}+{\bf \Lambda}_{\rm m})\cdot{\bf v}\right)\cdot{\bf v}
\]
where ${\bf S}$ is the co-variance matrix. To include intrinsic
scatter in the orthogonal direction, as well as measurement errors
into the fitting, we add a term to the co-variance matrix, as
suggested in \citet{dfmplane}:
\begin{equation}
{\bf \Lambda_{\rm m}} =
\begin{pmatrix}
\tan(\theta)^2 & -\tan(\theta)\\
-\tan(\theta) & 1.0\\
\end{pmatrix}
\times \cos(\theta)^2 \times e^{2\,\ln(\lambda)}
\end{equation}
\[
S_2 = ({\bf S}\cdot{\bf v})\cdot{\bf v} .
\]
The initial conditions were given by the ordinary least-squares fit
parameters for variance in the $y$ direction. The run was checked to
ensure adequate burn-in and independence between samples. We used 32
random walkers with 6,000 steps each.
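The likelihood above is straightforward to code; a sketch as a Python function suitable for passing to emcee's \verb|EnsembleSampler| (the parametrization follows the equations above; the toy data at the end are invented):

```python
import numpy as np

def ln_like(params, Z, S):
    """Orthogonal-distance log-likelihood sketched from the equations above.

    params = (theta, b, ln_lambda): slope angle (m = tan(theta)), intercept,
    and log of the intrinsic orthogonal scatter.  Z is an (N, 2) array of
    (x, y) pairs; S is an (N, 2, 2) array of per-point covariance matrices.
    """
    theta, b, ln_lambda = params
    v = np.array([-np.sin(theta), np.cos(theta)])    # unit normal to the line
    delta = Z @ v - b * np.cos(theta)                # orthogonal distances
    # intrinsic-scatter term added to the covariance matrix
    Lam = (np.array([[np.tan(theta)**2, -np.tan(theta)],
                     [-np.tan(theta), 1.0]])
           * np.cos(theta)**2 * np.exp(2.0 * ln_lambda))
    sigma2 = np.einsum('i,nij,j->n', v, S + Lam, v)  # total projected variance
    S2 = np.einsum('i,nij,j->n', v, S, v)            # measurement-only variance
    return -0.5 * np.sum(delta**2 / sigma2 + np.log(sigma2 / S2))

# Toy check: points drawn on y = 2x + 1 should prefer theta = arctan(2), b = 1.
xs = np.linspace(0.0, 10.0, 50)
Z = np.column_stack([xs, 2.0 * xs + 1.0])
S = np.tile(0.01 * np.eye(2), (len(xs), 1, 1))
good = ln_like([np.arctan(2.0), 1.0, np.log(1e-3)], Z, S)
bad = ln_like([np.arctan(1.0), 1.0, np.log(1e-3)], Z, S)
```

In practice `ln_like`, plus priors on $\theta$, $b$ and $\ln\lambda$, would be handed to `emcee.EnsembleSampler` as the log-probability function.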
\label{lastpage} |
Title:
Searching for cataclysmic variable stars in unidentified X-ray sources |
Abstract: We carry out a photometric search for new cataclysmic variable stars (CVs),
with the goal of identifying candidates for AR~Scorpii-type binary
systems. We select GAIA sources that are likely associated with unidentified
X-ray sources, and analyze the light curves taken by the Zwicky Transient
Facility, Transiting Exoplanet Survey Satellite, and Lulin One-meter Telescope
in Taiwan. We investigate eight sources as candidates for CVs, among which six
sources are new identifications. Another two sources have been recognized as
CVs in previous studies, but no detailed investigations have been done. We
identify two eclipsing systems that are associated with an unidentified
XMM-Newton or Swift source, and one promising candidate for polar associated
with an unidentified ASKA source. Two polar candidates may locate in the
so-called period gap of a CV, and the other six candidates have an orbital
period shorter than that of the period gap. Although we do not identify a
promising candidate for AR~Scorpii-type binary systems, our study suggests that
CV systems that have X-ray emission and do not show frequent outbursts may have
been missed in previous surveys.
| https://export.arxiv.org/pdf/2208.01833 |
\title{Searching for cataclysmic variable stars in unidentified X-ray sources}
\author{J. Takata}
\affiliation{Department of Astronomy, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China}
\author{X.F. Wang}
\affiliation{Department of Astronomy, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China}
\author{A.K.H. Kong}
\affiliation{Institute of Astronomy, National Tsing Hua University, Hsinchu 30013, Taiwan}
\author{J. Mao}
\affiliation{Yunnan Observatories, Chinese Academy of Sciences, Kunming, 650216, China}
\affiliation{Key Laboratory for the Structure and Evolution of Celestial Objects, Chinese Academy of Sciences,
Kunming, 650216, China}
\author{X. Hou}
\affiliation{Yunnan Observatories, Chinese Academy of Sciences, Kunming, 650216, China}
\affiliation{Key Laboratory for the Structure and Evolution of Celestial Objects, Chinese Academy of Sciences, Kunming, 650216, China}
\author{C.-P. Hu}
\affiliation{Department of Physics, National Changhua University of Education, Changhua 50007, Taiwan}
\author{L. C.-C. Lin}
\affiliation{Department of Physics, National Cheng Kung University, Tainan 701401, Taiwan}
\author{K.L. Li}
\affiliation{Department of Physics, National Cheng Kung University, Tainan 701401, Taiwan}
\author{C.Y. Hui}
\affiliation{Department of Astronomy and Space Science, Chungnam National University, Daejeon 305-764, Korea}
\email{takata@hust.edu.cn}
\section{Introduction}
The cataclysmic variable star (hereafter CV) is a binary system composed of a white dwarf primary (hereafter WD)
and a low-mass main-sequence star \citep{1995cvs..book.....W}. In the usual CV system, the mass is transferred from the companion star to the WD, and it forms an accretion disk extending down to the WD surface or accretion column on the WD's pole, toward which the accreting matter is channeled by the WD's magnetic field.
The former and latter systems usually belong to nonmagnetic and magnetic CV systems, respectively. A nonmagnetic CV shows frequent outbursts due to an instability of the accretion disk (dwarf nova).
Magnetic CVs, in which the WD's magnetic field is $B_{WD}>10^5$~G,
are divided into two types, namely, intermediate polar (hereafter IP) and polar. In an IP system, the spin period of the WD
is different from the orbital period of the system. A polar has the strongest magnetic field and shows spin--orbit phase
synchronization. CVs are usually observed in the optical to X-ray bands,
for which emission originates from the boundary layer of the accretion disk, the companion star surface, or the WD surface/accretion column.
Numerous efforts to identify new CVs and candidates have been made in previous works, and the number of known CVs is
rapidly increasing with recent photometric and spectroscopic all-sky surveys \citep{1998A&AS..129...83R, 2016MNRAS.456.4441C,2021arXiv211113049S,2021AJ....162...94S}.
The methods of confirming CVs are mainly divided into three types, namely, the observation of dwarf-nova outbursts,
identification of orbital/WD spin variations in photometric light curves, and confirmation of CV-like spectral properties. The Open Cataclysmic Variable Catalog \citep{2020RNAAS...4..219J} offers a vast list of the known CVs and the CV candidates found in previous studies.
Among known WD binary systems, AR Scorpii is one of the special classes in terms of the observed emission properties.
The emission in the radio to X-ray bands is modulated at the spin period of the WD ($\sim117$~s) and/or the beat period
($\sim 118$~s) between the WD spin and orbital motion, and its broadband spectrum is described by a nonthermal emission process plus thermal emission from the companion star surface \citep{2016Natur.537..374M, 2017NatAs...1E..29B,2018A&A...611A..66S,2018ApJ...853..106T}. Although there has not yet been a direct measurement of the magnetic field of the WD \citep{2021ApJ...908..195G}, the observational properties suggest a magnetic WD binary system. The emission in the optical and X-ray bands is also modulated at the orbital period of $\sim 3.56$~hr, and the shape of the orbital light curve suggests heating of the dayside of the companion star at a rate of $L_{irr}\sim 10^{32-33}~{\rm erg~s^{-1}}$, most of which is converted into emission in the IR/optical/UV bands. The multiwavelength spectrum exhibits no features of emission from an accretion disk in the system, and the emission from the WD is fainter than the observed optical emission, so the WD cannot be the source of the heating of the companion star. It is therefore suggested that the magnetic field of the WD may be the energy source of the heating, interacting with the companion star or with outflowing matter from the companion \citep{2017ApJ...851..143T,2017ApJ...835..150K,2020arXiv200411474L}. AR~Scorpii may be classified as an IP in the sense that the spin period of the magnetic WD is shorter than the orbital period. However, the X-ray luminosity is of the order of $L_X\sim 4\times 10^{30}~{\rm erg~s^{-1}}$, which is two to three orders of magnitude lower than that of typical IPs, in which the X-ray emission originates from the accretion column on the WD. Attention has been paid to AR Scorpii to study the origin of the magnetic field of
the WD \citep{2021NatAs...5..648S,2021MNRAS.508..561W}.
An AR~Scorpii-type binary system would provide a new astrophysical laboratory for nonthermal processes, and such systems may be a source of cosmic-ray electrons. A second AR~Scorpii, however, has not yet been identified, and the
Galactic population of such systems is not yet understood. AE~Aquarii and LAMOST J024048.51+195226.9 \citep{2022MNRAS.509L..31P} are known magnetic WD systems in the propeller phase, and they are similar to AR Scorpii in the sense that (i) no accretion disk is formed and (ii) the system contains a fast-spinning WD \citep{1979ApJ...234..978P,1997MNRAS.286..436W, 2021MNRAS.503.3692P,2021ApJ...917...22G}. On the other hand, there is no evidence of heating of the companion star, and the existence of nonthermal emission extending over broad energy bands (radio to X-ray/gamma-ray) has not been established; for AE~Aquarii, although evidence of nonthermal emission in the X-ray and TeV energy bands has been reported \citep{1994ApJ...434..292M,2008PASJ...60..387T}, the results have not been confirmed by follow-up observations \citep{2014ApJ...782....3K,2014A&A...568A.109A, 2016ApJ...832...35L}. CTCV~J2056-3014 and V1460~Her also contain fast-spinning WDs, and they are classified as X-ray-faint IPs \citep{2020ApJ...898L..40L,2020MNRAS.499..149A}. Spectroscopic studies suggest that CTCV~J2056-3014 and V1460~Her contain accretion disks, and hence their WDs likely have weak magnetic fields.
AR Scorpii has faint X-ray emission and has not shown a dwarf-nova-type outburst. Moreover, AR Scorpii contains a
WD with a relatively cool surface temperature, and does not show a deep eclipsing feature in its light curve. Binary systems with such featureless emission properties may have been missed by previous surveys, and confirming new AR~Scorpii-type binary systems may require a different approach. In this study, we take the approach of finding new CVs among unidentified X-ray sources, since X-ray emission resulting from the interaction between the WD's magnetic field and the companion star is a characteristic property of AR Scorpii. The structure of this paper is as follows. Section~\ref{select} describes our strategy and method for searching for new CV candidates. Section~\ref{result} presents
eight candidates including six new identifications and two sources that have been recognized as CV candidates but have not been listed in the current catalogs.
Although none of our new candidates is likely to be categorized as an AR~Scorpii-type binary system,
searching among unidentified X-ray sources offers an alternative approach to identifying new CV systems. In section~\ref{discuss}, we compare
the UV and X-ray emission properties of our candidates with those of known CVs.
\section{Searching method}
\label{select}
\subsection{Candidate selection}
First, we select candidates of the WD/low-mass main-sequence star binary systems from
the GAIA DR2 source list \citep{2018A&A...616A..10G}.
In the GAIA color-magnitude diagram, AR~Scorpii, CTCV~J2056-3014, and typical CV systems are located between the main sequence and the WD cooling sequence (Figure~\ref{hr}).
In this work, therefore, we limit the range of the search to a color of $0.5<G_{BP}-G_{RP}<1.5$ ($G_{BP}$ and $G_{RP}$ are the blue and red magnitudes defined by the GAIA photometric system, respectively) and a magnitude of $9<M_G<12$. We do not limit the range of the parallax, but restrict the magnitude of the error with a cut \verb|parallax_over_error > 5|
in the query. To obtain a clean sample, we refer to \cite{2018A&A...616A...2L} and apply the conditions that
\begin{itemize}
\item \verb|phot_bp_mean_flux_over_error > 8|
\item \verb|phot_rp_mean_flux_over_error > 10|
\item \verb|astrometric_excess_noise < 1|
\item \verb|phot_bp_rp_excess_factor < 2.0+0.06*|
\verb|power(phot_bp_mean_mag-phot_rp_mean_mag,2)|
\item \verb|phot_bp_rp_excess_factor > 1.0+0.015*|
\verb|power(phot_bp_mean_mag-phot_rp_mean_mag,2)|
\item \verb|visibility_periods_used > 5|.
\end{itemize}
We downloaded a catalog of the GAIA sources using \verb|astroquery| \citep{2019AJ....157...98G}.
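For concreteness, the cuts above can be assembled into a single ADQL string. The sketch below only composes the query (the column list and the absolute-magnitude expression $M_G = G + 5\log_{10}(\varpi/100\,{\rm mas})$ are our assumptions about how the cuts were applied); with astroquery one would then submit it via \verb|Gaia.launch_job_async(query)|:

```python
# Hedged sketch of the Gaia archive selection; only the ADQL string is
# built here.  Column names follow the gaiadr2.gaia_source schema.
color_cut = "bp_rp BETWEEN 0.5 AND 1.5"
# Absolute magnitude from the parallax (mas): M_G = G + 5*log10(parallax/100)
absmag_cut = "phot_g_mean_mag + 5*log10(parallax/100.0) BETWEEN 9 AND 12"
quality_cuts = [
    "parallax_over_error > 5",
    "phot_bp_mean_flux_over_error > 8",
    "phot_rp_mean_flux_over_error > 10",
    "astrometric_excess_noise < 1",
    "phot_bp_rp_excess_factor < 2.0 + 0.06*power(phot_bp_mean_mag - phot_rp_mean_mag, 2)",
    "phot_bp_rp_excess_factor > 1.0 + 0.015*power(phot_bp_mean_mag - phot_rp_mean_mag, 2)",
    "visibility_periods_used > 5",
]
query = ("SELECT source_id, ra, dec, parallax, phot_g_mean_mag, bp_rp "
         "FROM gaiadr2.gaia_source WHERE "
         + " AND ".join([color_cut, absmag_cut] + quality_cuts))
```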
We search for a possible X-ray counterpart of the selected GAIA sources in (i) the ROSAT all-sky survey bright source catalog~\citep{1999A&A...349..389V}, (ii) the second Swift-XRT point-source catalog \citep{2020ApJS..247...54E}, and (iii) the XMM-Newton DR-10 source catalog~\citep{2020A&A...641A.136W}.
We select the GAIA sources that are located within $10''$ of the center of the X-ray source, and then remove the sources that have
already been identified as a CV or another type of object by checking the catalogs of CVs
\citep{2003A&A...404..301R,2016MNRAS.456.4441C,2020RNAAS...4..219J, 2021arXiv211113049S, 2021AJ....162...94S}, and the SIMBAD astronomical database\footnote{\url{http://simbad.u-strasbg.fr/simbad/}}.
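A minimal numpy sketch of the $10''$ association step (the haversine angular-separation formula is standard; all coordinates below are invented):

```python
import numpy as np

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Haversine angular separation in arcsec; inputs in degrees."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    dra, ddec = ra2 - ra1, dec2 - dec1
    h = np.sin(ddec/2)**2 + np.cos(dec1)*np.cos(dec2)*np.sin(dra/2)**2
    return np.degrees(2*np.arcsin(np.sqrt(h))) * 3600.0

# Invented (ra, dec) positions: two GAIA sources, one X-ray source
gaia = np.array([[262.4960, 52.4970], [100.0, -30.0]])
xray = np.array([262.4958, 52.4969])

sep = ang_sep_arcsec(gaia[:, 0], gaia[:, 1], xray[0], xray[1])
matched = gaia[sep < 10.0]   # keep GAIA sources within 10 arcsec
```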
After selecting the GAIA sources that are potentially associated with unidentified X-ray sources, we cross-match them
with objects in the Zwicky Transient
Facility DR-8 \citep[hereafter ZTF, ][]{2019PASP..131a8003M}. We select potential CV candidates based on (i) identification of a signature of an outburst and (ii) a period search with a Lomb-Scargle periodogram \citep[hereafter LS,][]{1976Ap&SS..39..447L}.
We download the light curves from the Infrared Science Archive\footnote{\url{https://irsa.ipac.caltech.edu}}, and use the $r$-band data to search for a periodic signal. We apply the barycentric correction to the photon arrival times using \verb|astropy|\footnote{\url{https://docs.astropy.org/en/stable/time/index.html}}. We find, on the other hand, that since the
ZTF observations of our candidates (section~\ref{result}) provide only $500-1000$ data points over the 2018-2021 observing seasons, the data quality may not be
sufficient to study the detailed properties of the light curves (e.g. identifying an eclipse feature, section~\ref{g141}).
The Transiting Exoplanet Survey Satellite~\citep[hereafter TESS,][]{2014SPIE.9143E..20R} provides light curves of the sources by monitoring each sector for about a month, and also provides photometric data for sources outside the field of view of ZTF. For our targets, TESS full-frame images (hereafter FFIs) provide data taken every 10~minutes or 30~minutes, for which the Nyquist frequencies are $F_{N}\sim 72~{\rm day^{-1}}$ and $\sim 24~{\rm day^{-1}}$, respectively. We extract the light curve of the pixels around the source region from the TESS-FFIs using the TESS analysis tools \verb|eleanor| \citep{2019PASP..131i4502F,2019ascl.soft05007B} and \verb|Lightkurve| \citep{2018ascl.soft12013L}. We note that since several GAIA sources
are usually located within one pixel, TESS data alone cannot tell us which source produces
the periodic signal in a light curve extracted from the TESS-FFIs. To complement the ZTF and TESS observations, we carry out photometric observations of some of our targets with the Lulin One-meter Telescope (hereafter LOT) in Taiwan.
\subsection{LS periodogram}
First, we produce an LS periodogram for each source, taking into account the Gaussian, uncorrelated errors that are usually provided
in the archival data (or by standard data processing). We search for a possible periodic signal in $1~{\rm day^{-1}}<f<50-150~{\rm day^{-1}}$, where the maximum frequency depends on the time resolution of the observation. We estimate the false alarm probability (hereafter FAP) of each signal with the method of \cite{2008MNRAS.385.1279B} and with a bootstrap~\citep{2018ApJS..236...16V}. For an accreting system, it has been proposed to investigate the effect of time-correlated noise on the LS periodogram~\citep{2018ApJS..236...16V}. Based on the correlated noise model of \cite{2020A&A...635A..83D}, we therefore produce LS periodograms with
different parameters of the noise model (Appendix~\ref{noise}). We find that
the LS periodogram of the TESS data is insensitive to the time-correlated noise model. For the ZTF data, although the noise model can change the shape of the LS periodogram, it has little effect on the periodic signals presented in this study (Table~1); but see section~\ref{g141} and Appendix~\ref{noise} for ZTF18aampffv,
in which one periodic signal may be related to time-correlated noise.
In section~\ref{result}, therefore, we present LS periodograms
created with time-uncorrelated noise. The correlated noise model and
some results of the LS periodogram are presented in Appendix~\ref{noise}.
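A minimal pure-numpy version of this period search (classical LS power plus a bootstrap FAP) is sketched below; in practice one would typically use an established implementation such as \verb|astropy.timeseries.LombScargle| with the archival errors, and the injected signal and noise parameters here are invented:

```python
import numpy as np

def ls_power(t, y, freqs):
    """Classical Lomb-Scargle power, normalized by the sample variance."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # Scargle's phase offset tau makes the power time-shift invariant
        tau = np.arctan2(np.sum(np.sin(2*w*t)), np.sum(np.cos(2*w*t))) / (2*w)
        c, s = np.cos(w*(t - tau)), np.sin(w*(t - tau))
        power[k] = 0.5 * ((y @ c)**2 / (c @ c) + (y @ s)**2 / (s @ s))
    return power / y.var()

def bootstrap_fap(t, y, freqs, peak, n_boot=50, seed=1):
    """Fraction of shuffled light curves whose highest peak exceeds `peak`."""
    rng = np.random.default_rng(seed)
    hits = sum(ls_power(t, rng.permutation(y), freqs).max() >= peak
               for _ in range(n_boot))
    return hits / n_boot

# Invented test light curve: a 22 day^-1 signal in 30 days of sparse sampling
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 30.0, 300))
y = np.sin(2*np.pi*22.0*t) + 0.3*rng.normal(size=300)
freqs = np.linspace(1.0, 50.0, 1500)
p = ls_power(t, y, freqs)
f_best = freqs[np.argmax(p)]
fap = bootstrap_fap(t, y, freqs, p.max())
```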
\section{Results}
\label{result}
\begin{deluxetable*}{ccccccc}
\tablecolumns{7}
\tabletypesize{\footnotesize}
\tablecaption{Basic information on the CV candidates in this study}
\tablehead{
\colhead{GAIA} &
\colhead{ZTF}&
\colhead{X-ray source} &
\colhead{Distance} &
\colhead{$f_{ZTF}/F_0$\tablenotemark{\rm a}} &
\colhead{Orbital} &
\colhead{Proposed type} \\
\colhead{DR2} &
\colhead{}&
\colhead{}&
\colhead{(pc)} &
\colhead{(${\rm day^{-1}}$)} &
\colhead{frequency} &
\colhead{}
}
\startdata
1415247906500831744 (G141) & 18aampffv &4XMM J172959.0+522948
& 532 & - /11.148(4) & $F_0$ & Eclipsing\\
4534129393091539712 (G453) & 18abikbmj &1RXS J185013.9+242222 & 322 & - / 39.11(4) & $F_0$/2 or $F_0$ & Dwarf-Nova/superhump \\
2072080137010352768 (G207) & 18abrxtii &2SXPS J195230.9+372016& 313 & 28.575(1)/28.57(4) & $F_0$/2 & Eclipsing \\
2056003803844470528 (G205) &18aayefwp &2SXPS J202600.8+333940& 595 & 24.595(1)/24.60(4) & $F_0$/2 or $F_0$ &\\
2162478993740496256 (G216) & 17aaapwae & 2SXPS J211129.4+445923& 429 & 22.175(1)/22.18(2) & $F_0$ & \\
4321588332240659584 (G432)& 18aazmehw & 2SXPS J192530.4+155424 & 582 & 18.5521(9)/18.553(1)& $F_0$/2 or $F_0$ & IP\\
4542123181914763648 (G454)& 18abttrrr & 1RXS J172728.8+132601 & 502 & 8.6499(8)/8.65(2) & $F_0$ & Polar
\enddata
\tablenotetext{\rm a}{$f_{ZTF}$ and $F_0$ correspond to photometric periodic signals seen
in the ZTF and TESS light curves, respectively.}
\end{deluxetable*}
\begin{deluxetable*}{cccccc}
\tablecolumns{6}
\tabletypesize{\footnotesize}
\tablecaption{Information on TESS and LOT observations}
\tablehead{
\colhead{GAIA} &
\colhead{TESS}&
\colhead{} &
\colhead{LOT} &
\colhead{} &
\colhead{Figures\tablenotemark{\rm a}}
\\
\colhead{DR2} &
\colhead{Date (MJD)}&
\colhead{Sector}&
\colhead{Date (MJD)} &
\colhead{Exposure (hrs)} &
\colhead{}
}
\startdata
G141 & 58764-58789 &17 & 59491/59492 & 5.2 & 2-5, A1 \\
& 58954-59034 & 24, 25, 26& & &\\
& 59579-59606 & 47 & && \\
& 59664-59690 & 50 & && \\
\hline
G453 & 59009-59034 & 26 & 59398/59399 & 4.9 & 6-9 \\
\hline
G207 & 58683-58736 & 14, 15& 59475/59476 &2.8 &10,11\\
& 59419-59445 & 41 & & &\\
\hline
G205 & 58683-58736& 14, 15& 59542 & 0.7& 10, A1 \\
& 59419-59445 & 41 & & &\\
\hline
G216 & 58710-58762& 15, 16 &59474 & 2.7 & 10,12 \\
\hline
G432& 58682-58709 & 14 & & & 13,14\\
& 59390-59418& 40 & & &\\
\hline
G454& 58984-59034 & 25, 26 & & & 15
\enddata
\tablenotetext{\rm a}{References for figures in this paper.}
\end{deluxetable*}
\begin{deluxetable*}{cccccc}
\tablecolumns{6}
\tabletypesize{\footnotesize}
\tablecaption{Summary of Swift X-ray observations}
\tablehead{
\colhead{GAIA}&
\colhead{Data} &
\colhead{Date/Exposure}&
\colhead{$N_H$} &
\colhead{Photon index} &
\colhead{Luminosity}
\\
\colhead{DR2} &
\colhead{}&
\colhead{(MJD)/(ks)}&
\colhead{$10^{22}(\rm{cm^2})$} &
\colhead{} &
\colhead{$10^{31}({\rm erg~s^{-1}})$}
}
\startdata
G141 & TOO & 59558/4.7 & $1.9^{+5.4}_{-1.9}$& $0.5^{+0.6}_{-0.5}$ & $4.5^{+1.9}_{-1.3}$ \\
G453 & TOO & 59529, 59534/2.0 &$1.4^{+2.5}_{-1.4}$ & $1.6^{+0.7}_{-0.6}$& $5.5^{+2.0}_{-1.5}$ \\
G207 & Archive & 57046, 57047/3.2 & 0.3 (fixed)\tablenotemark{\rm a} & $0.9^{+1.0}_{-1.0}$& $0.5^{+0.8}_{-0.3}$ \\
G205 & Archive & 57796-57806/9.1 &$4.0^{+5.2}_{-2.0}$ & $2.4^{+1.1}_{-0.9}$ & $1.4^{+3.4}_{-0.6}$ \\
G216 & Archive & 57210-57309/10 & 4.0 (fixed)\tablenotemark{\rm a} & $1.8^{+0.4}_{-0.4}$& $1.2^{+0.4}_{-0.3}$ \\
G432& Archive &56011-56074/1.1 & 13 (fixed)\tablenotemark{\rm a} & $0.3^{+1.1}_{-1.4}$ & $14^{+24}_{-8.0}$\\
G454& TOO & 59591/2.5 & 0.75 (fixed)\tablenotemark{\rm a}& $-0.2^{+1.0}_{-1.6}$ & $5.5^{+14}_{-3.2}$
\enddata
\tablenotetext{\rm a}{$N_H$ is estimated from the sky position using the hydrogen column density calculation tool ``${\rm N_H}$'' under HEASoft, and is fixed during the fitting process.}
\end{deluxetable*}
We downloaded the catalog of $\sim 2\times 10^5$ GAIA sources selected based on criteria described in section~\ref{select}, and
identify 29 sources that are potential counterparts of the unidentified X-ray sources. After searching for periodic
signals in the light curves based on ZTF, TESS, and LOT data,
we identify seven sources as CV candidates (Tables~1-3, sections~\ref{g141}-\ref{g452}), for which we obtain a periodic signal with at least two different facilities. Two of them have been recognized as candidates for CVs in previous studies, but they are not listed in the current catalogs. We also present four other sources
(Table~4, section~\ref{others}), for which no useful ZTF data are available, but the TESS-FFI data indicate a periodic signal; one of them is a promising candidate for a polar. LS periodograms and light curves obtained from
ZTF and TESS data for the 11 candidates are presented in Figures~\ref{ztf}-\ref{other}.
\subsection{GAIA DR2 1415247906500831744}
\label{g141}
4XMM J172959.0+522948 (hereafter, 4XMMJ1729) is an unidentified X-ray source, and its position is consistent with GAIA DR2 1415247906500831744 (hereafter G141, Figure~\ref{ztf18aam}). Based on the parallax measured by GAIA, the distance to this source is $\sim 532$~pc. There is a nearby source (GAIA DR2 1415247906499736448), whose position is separated from our target by $\sim2''$. Since the GAIA G-band mean magnitude of the nearby source ($\sim20$~mag) is fainter than that of G141 ($\sim 18.3$~mag), contamination will have little influence on the optical light curve observed for our target.
Figure~\ref{ztf18aam} (top right panel) shows the $r$-band light curve for G141 measured by ZTF (ZTF18aampffv). We find in the figure that the target frequently repeats small outbursts or transitions between high and low states on a timescale of months, which may indicate an accreting system. The amplitude is $\delta m > 2$, which is relatively large compared to those seen in the ZTF light curves of the other candidates presented in this study. We search for a potential periodic signal in the ZTF light curve with the LS periodogram (left panel of Figure~\ref{power}),
and find a significant signal at $f_{ZTF}\sim 46.9991(9)~{\rm day^{-1}}$ ($\sim 0.021$~days), where the uncertainty denotes the Fourier resolution of the observation (namely, the inverse of the time span covered by the observation).
The TESS observation covered the source region several times (Table~2).
The middle panel of Figure~\ref{power} shows an LS periodogram of the light curve from the TESS data of sector 50 (Figure~\ref{ztf18aam}). The time resolution of sector~50 is about 10 minutes, which is sufficient to search for a signal at $f_{ZTF}\sim 47~{\rm day^{-1}}$. We find,
however, that the LS periodogram is dominated by a signal at $\sim 22~{\rm day^{-1}}$ (indicated by $2F_0$ in the figure), its harmonics, and aliasing signals. We also check the data of sector 47, which likewise do not show a significant periodic signal at $f_{ZTF}$.
Since the data of sectors 17 and 24-26 were taken approximately every $30$~minutes, they cannot be used to search for a periodic signal at $f_{ZTF}$. The signal at $\sim 22~{\rm day^{-1}}$ is confirmed in all data (although sector~40 also covers the source, its data are contaminated by unusually high background emission or noise).
To collect evidence of a binary nature, we carried out a photometric observation with LOT (2021, October 4th and 5th, Table~2).
The light curve clearly shows that G141 is an eclipsing binary system (Figure~\ref{18aam_BJD0405}),
and the observation covered three eclipse events that repeat at a frequency of $F_0=11\pm 4~{\rm day^{-1}}$, suggesting that G141 is a binary system with an orbital period of $P_{orb}\sim 0.09$~day. With the time resolution of $\sim$2 minutes for the LOT data, we estimate that each eclipse lasts $\sim15$~minutes, and the eclipse profile does not change during the observation. With the LOT data, we are unable to confirm a periodic signal corresponding to $f_{ZTF}\sim 47~{\rm day^{-1}}$.
The top panel of Figure~\ref{18aam-orbit} shows the folded light curve of the TESS data with a frequency of
$F_0\sim 11~{\rm day^{-1}}$, and clearly shows an eclipsing feature at an orbital phase of $\sim 0.2$. We also find a secondary eclipse around orbital phase $\sim 0.7$, which is shallower than the primary eclipse. This explains why the signal of the second harmonic ($F_1=2F_0\sim 22.296~{\rm day^{-1}}$) is stronger than the fundamental signal in the LS periodogram.
4XMMJ1729 fell within the field of view when XMM-Newton observed two stars (HD 150798 and HD 159181) in 2002. We extract the data in the standard way using the most up-to-date instrumental calibration, with the tasks \verb|emproc| for MOS and \verb|epproc| for PN of the XMM-Newton Science Analysis Software (XMMSAS, version 19.0.1). Although the observation was carried out with a total exposure of about 20~ks, the quality of the data is insufficient to measure
the spectral properties, since (i) the target is located at the edge of the field of view in all EPIC data and (ii) the observation is significantly affected
by background flare-like events. We create
an LS periodogram for the light curve (Figure~\ref{ztf18aam}) of PN data;
we do not analyze the MOS data due to an insufficient count rate. As can be seen in Figure~\ref{power}, the window effect dominates the periodogram, and a significant periodic signal at $f_{ZTF}\sim 47~{\rm day^{-1}}$ is not identified, although a hint may be present. Figure~\ref{18aam-orbit} shows the
X-ray light curve folded with the orbital frequency $F_0=11.148~{\rm day^{-1}}$. Although a lower count rate is observed at orbital phases $\sim 0$--$0.3$, a deeper observation is required to investigate the eclipse feature in the X-ray band.
We obtain an $\sim 5$~ks Swift observation for 4XMMJ1729 (Table~3). We extract a cleaned event file with \verb|Xselect| under \verb|HEASoft ver.6-29|, and fit the spectrum with \verb|Xspec ver.12.12|. Since we could not constrain the spectral model because of insufficient photon counts, we fit the spectrum with
a single power-law function (Table~3). Using the distance measured by GAIA (Table~1), we estimate the luminosity in the 0.3-10~keV band as
$L_X=4.5^{+1.9}_{-1.3}\times 10^{31}(d/{\rm 532~pc})^{2}~\rm{erg~s^{-1}}$ (hydrogen column density of $N_H=1.9^{+5.4}_{-1.9}\times 10^{21}~{\rm cm^{-2}}$ and photon index of $\Gamma=0.5^{+0.6}_{-0.5}$).
We have not found a significant periodic signal with $f_{ZTF}\sim 47~{\rm day^{-1}}$ in TESS, LOT and XMM-Newton data.
In Appendix~A, therefore, we carry out a further investigation of the LS periodogram of the ZTF data and find that the signal may be caused by time-correlated noise.
\subsection{GAIA DR2 4534129393091539712}
GAIA DR2 4534129393091539712 (hereafter, G453) is selected as a possible counterpart of the ROSAT source, 1RXS~J185013.9+242222. We obtain an $\sim2$~ks Swift observation for 1RXS J185013.9+242222 in 2021 November, and we estimate the luminosity of $L_X\sim 5.5^{+2.0}_{-1.5}\times 10^{31}(d/322~\rm{pc})^2~{\rm erg~s^{-1}}$ in the 0.3-10~keV band by fitting the spectrum with a power-law model ($N_H=1.4^{+2.5}_{-1.4}\times 10^{21}~{\rm cm^{-2}}$ and $\Gamma=1.6^{+0.7}_{-0.6}$, Table~3).
As can be seen in the right panel of Figure~\ref{ztf18abi}, the ZTF light curve (ZTF18abikbmj) shows several outbursts with a recurrent time scale of a year. In fact, 1RXS~J185013.9+242222 has already been recognized as a dwarf nova by previous observation of the outburst \footnote{\url{http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/25434}}. Since no detailed investigation for this source has been carried out, we searched for possible orbital modulation in the photometric data.
First, we searched for periodic signals in the quiescent state in the ZTF data, but we did not find any obvious periodic signal (except for the window effects) in the periodogram (Figure~\ref{power-abi}). TESS observed this source in 2020 June during a small outburst (Figure~\ref{ztf18abi}), and the data were taken approximately every $30$~minutes. We extract the light curve with 4 pixels around the target (left panel of Figure~\ref{ztf18abi}).
In the LS-periodogram, we find two strong signals at 8.88(4) day$^{-1}$ ($\sim 0.11$~days)
and 39.11(4) day$^{-1}$ ($\sim 0.026$~days), one of which is likely an alias of the other with the sampling frequency. Since the light curve extracted from 4 pixels contains several sources (left panel of Figure~\ref{ztf18abi}), the TESS observation alone cannot confirm that the detected signal originates from G453.
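The alias interpretation can be made explicit: for a sampling frequency $F_s$, a true signal at $f$ also produces power at the first-order alias $F_s-f$. With the $\sim$30-minute FFI cadence ($F_s\sim 48~{\rm day^{-1}}$) and the two peak frequencies quoted above:

```python
def first_order_alias(f, f_s):
    """First-order alias of a signal at frequency f sampled at frequency f_s."""
    return f_s - f

f_s = 48.0                   # day^-1, ~30-minute TESS FFI cadence
f_low, f_high = 8.88, 39.11  # day^-1, the two detected peaks
residual = abs(first_order_alias(f_low, f_s) - f_high)  # ~0.01 day^-1
```

The two peaks sum to the sampling frequency to within $\sim 0.01~{\rm day^{-1}}$, consistent with an alias pair; photometry at a different cadence (here, LOT) is needed to decide which frequency is real.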
We confirm the binary nature of G453 with LOT. We carried out observations with the $r$-band (2021 July 3) and $g$-band (2021 July 4) filters, each covering the source with an exposure of $\sim 2-2.5$~hr (Table~2). As indicated in Figure~\ref{ztf18abi}, the LOT observation was also carried out during a small outburst, which could be a re-brightening after the large outburst that occurred around MJD~59300. In the LS-periodogram (Figure~\ref{power-abi}) and the
observed light curve (Figure~\ref{18abi-BJD0304}), we confirm evidence of periodic modulation at
$F_0=37.3\pm 1~{\rm day^{-1}}$, which is close to the TESS result.
Although we cannot firmly determine the origin of the periodic signal with the current photometric data, the signal $F_0$ is likely related to the orbital period. From the shape of the light curve taken by LOT, we expect that the orbital period is double the photometric period, namely $P_{orb}\sim 2\times 1/F_0\sim 0.05~{\rm days}$; e.g., the $r$-band light curve may be more consistent with the ellipsoidal modulation of a Roche-lobe-filling companion star, and the $g$-band light curve indicates a double-peaked structure with different peak magnitudes (Figure~\ref{abi-fold}).
Another possible origin of the signal $F_0$ is a superhump, a periodic variation observed in the emission from an eccentric disk after the outburst of a CV \citep{1995cvs..book.....W}. The period of the superhump is several percent longer (or sometimes shorter) than the orbital period. Although superhumps are usually observed after a super-outburst of a CV \citep{2005PJAB...81..291O, 2014PASJ...66...90K,2020AdSpR..66.1004H}, they have also been observed during a normal outburst or a re-brightening after a super-outburst \citep{2006PASJ...58..367Z,2012PASJ...64L...5I}. The ZTF light curve of G453 suggests that there were several re-brightenings after the large outburst that occurred around MJD~59300, and the LOT observation covered one of the re-brightening stages. Moreover, previous observations of other dwarf novae confirm a beat signal between the orbital modulation and the superhump \citep{2000PASP..112.1567P,2010PASJ...62.1525K}. In the periodogram of the TESS data for G453 (middle panel of Figure~\ref{power-abi}), we notice a periodic signal at $\sim 1.34~{\rm day^{-1}}$ ($\sim 0.75$~day), which is confirmed only at the source region, with a significance greater than the 99\% confidence level. This signal can be explained as the beat signal if the difference between the orbital period and the superhump period is $\epsilon \sim 3-4$\% (i.e., $1.34~{\rm day^{-1}}\sim \epsilon \times 39~{\rm day^{-1}}$). The TESS light curve folded with $F_0$ is presented in Figure~\ref{all-light}.
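The beat interpretation is simple arithmetic: the beat frequency equals the fractional period difference times the photometric frequency, $f_{\rm beat}=\epsilon f_0$. Using the values quoted above:

```python
f0 = 39.11         # day^-1, photometric (candidate superhump) frequency
f_beat = 1.34      # day^-1, low-frequency signal in the TESS periodogram
eps = f_beat / f0  # implied fractional superhump-orbital period difference
# eps is ~3.4%, inside the 3-4% range quoted in the text
```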
\subsection{GAIA DR2~2072080137010352768, 2056003803844470528, and 2162478993740496256}
\label{threes}
These three candidates do not show frequent outbursts; instead, their observed brightness drops suddenly and stays in a low-luminosity state on a time scale of months. Figure~\ref{ztf3} shows the light curves of GAIA DR2 2072080137010352768
(hereafter G207), 2056003803844470528 (G205), and 2162478993740496256 (G216) measured by ZTF, named ZTF~18abrxtii, 18aayefwp, and 17aaapwae, respectively. For G207 (top panel of Figure~\ref{ztf3}), for example, a sudden drop in brightness occurred around MJD~59050 and the source stayed in a low-luminosity state for $\sim 50$~days. These three GAIA sources are selected as the optical counterparts of Swift sources. Using the distances measured by GAIA, the X-ray luminosities in the 0.3-10~keV energy band are estimated to be $L_X\sim 0.5^{+0.8}_{-0.3}\times 10^{31}(d/313~{\rm pc})^2~{\rm erg~s^{-1}}$ for G207, $\sim 1.4^{+3.4}_{-0.6}\times 10^{31}(d/595~{\rm pc})^2~{\rm erg~s^{-1}}$ for G205, and $\sim 1.2^{+0.4}_{-0.3}\times 10^{31}(d/429~{\rm pc})^2~{\rm erg~s^{-1}}$ for G216, respectively (Table~3).
For these three targets, we detect significant periodic signals in the ZTF and TESS data (Tables~1~and~2). The photometric periodic signals in the ZTF light curves are $f_{ZTF}=28.575(1)~{\rm day^{-1}}$ ($\sim 0.035$~day) for G207,
$24.595(1)~{\rm day^{-1}}$ $(\sim 0.041$~day) for G205 and $22.175(1)~{\rm day^{-1}}$ ($\sim 0.045$~day) for G216, respectively. We also obtain consistent periodic signals in the TESS light curves.
We carried out photometric observations for the three sources with LOT.
For G207 (Figure~\ref{18abr_BJD}), although the data fluctuation is significant, we find an eclipsing feature in the light curve. Figure~\ref{18abr_BJD} shows that the source became fainter around MJD 0.034~(+59475.6) and MJD 0.094~(+59475.6), and the time interval between the two epochs is consistent with an integer multiple of the photometric period $1/f_{ZTF}\sim 0.035~{\rm day}$. The orbital period, however, is likely double the photometric period, since the observation in the left panel of Figure~\ref{18abr_BJD} should have covered another eclipse if the orbital period were $\sim 0.035$~day. The ZTF/TESS light curves may indicate a secondary eclipse with a shallower depth (Figure~\ref{all-light}).
For G216 (Figure~\ref{aalight}), the light curve indicates a periodic modulation; the time interval between two strong peaks is consistent with the photometric period $1/f_{ZTF}\sim 0.045$~day, and the LOT light curve is consistent with the TESS/ZTF light curves (Figure~\ref{all-light}). Hence, G216 is a binary system with an orbital period of $0.045$~day. For G205, we could carry out only a $\sim 40$~minute observation with LOT, and the data are therefore not sufficient to constrain the orbital period of the source.
\subsection{GAIA DR2 4321588332240659584}
\label{g432}
GAIA DR2 4321588332240659584 (hereafter G432) is selected as a candidate for a counterpart of the Swift point
source, 2SXPS~J192530.4+155424, and the X-ray luminosity is measured as
$L_X\sim 1.4^{+2.4}_{-0.8}\times 10^{32}(d/582~{\rm pc})^2~{\rm erg~s^{-1}}$ in the 0.3-10~keV band (Table~3). Compared with the ZTF light curves of the other candidates in this study, the optical emission from this source (ZTF18aazmehw) is steadier, with a variation amplitude of less than $\sim$1 mag (right panel of Figure~\ref{ztf18aaz}). In the LS-periodogram of the ZTF light curve, we confirm a periodic signal at a frequency of $f_{ZTF}=18.5521(9)~{\rm day^{-1}}$ ($\sim 0.054$~day).
TESS observed the region around G432 in 2019 July (sector 14) and 2021 June (sector 40), for which data were taken approximately every $30$~minutes and $\sim10$~minutes, respectively. Figure~\ref{18aaz-tess} shows the LS-periodograms for the data of
sectors~14 and 40. The LS-periodogram (left panel) clearly indicates a periodic signal at $F_0=18.553(1)~{\rm day^{-1}}$, which is consistent with the finding from ZTF. The folded light curve with $f_{ZTF}$ (right panel of Figure~\ref{18aaz-tess})
may be described by a main peak plus a small secondary peak, rather than a pure sinusoid, although the significance of the secondary peak is small.
In the LS-periodogram, we can also see the signal at the second harmonic ($F_1=2F_0$) and the effect of aliasing with the data sampling frequency of $F_s\sim 145~{\rm day^{-1}}$, namely $F_{b}=F_s-F_0\sim 126~{\rm day^{-1}}$ and $F_{b1}=F_s-F_1\sim 108~{\rm day^{-1}}$.
In addition to the periodic signals related to $F_0$ and $2F_0$, the periodogram shows other signals at
$F_u=32.17(4)~{\rm day^{-1}}$ ($\sim 0.031$~day) and its harmonics $F_u/2$. This signal clearly appears
in the TESS data of the sector~40. To check which pixel of the TESS-FFIs causes the signal,
we extract the light curve of each pixel around the sources for the observation of sector~40. We find that a significant signal with $F_u$
can be detected at one pixel where the power of the signal with $F_0$ becomes the maximum.
The periodic signal with $F_0$ can be seen in other pixels but the power of the signal is lower than that at the pixel where the $F_u$ signal is found. Hence, the signal, $F_u$, may be related to our target, although other optical sources located
in the same pixel (right panel of Figure~\ref{ztf18aaz}) cannot be ruled out as the origin of the signal.
We note that the LS periodogram of the TESS data is insensitive to the time-correlated noise model discussed in Appendix~\ref{noise}.
If the signal $F_u$ is related to G432, then the origin may be related to the harmonics of $F_0$.
Although the frequency $F_u=32.17(4)~{\rm day^{-1}}$ cannot be described by a simple relation with $F_0$, the relation $F_s-F_u\sim 6F_0$, where $F_s\sim 145~{\rm day^{-1}}$ is the data sampling frequency, indicates that the signal $F_u$ may be an alias of the 6th harmonic of the fundamental frequency. It is not obvious, however, why the modulation of the 6th harmonic would be more evident than those of the 2nd-5th harmonics.
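This harmonic identification can be checked numerically, using the frequencies quoted above (the residual of $\sim 1.5~{\rm day^{-1}}$ reflects the approximate value adopted for $F_s$):

```python
F_s = 145.0  # day^-1, approximate data sampling frequency (sector 40)
F_u = 32.17  # day^-1, the unexplained signal
F0 = 18.553  # day^-1, fundamental (orbital) frequency

alias = F_s - F_u              # ~112.8 day^-1, first-order alias of F_u
n = round(alias / F0)          # nearest harmonic number
residual = abs(alias - n * F0)
```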
If the frequency $F_u$ is independent of $F_0$,
the signal would be related to the spin of the WD. We cannot find a significant periodic signal at the frequency $F_u$ in the data of sector~14. This may be due to the large uncertainty of each data point of that observation, or because the true signal is at $F_s-F_u\sim 111~{\rm day^{-1}}$ ($\sim 13$~minutes), which the $\sim 30$-minute cadence of sector~14 cannot resolve. If $F_u$ and $F_0$ are related to the WD spin and the orbital period, respectively, G432 could be an IP candidate, given that its X-ray emission dominates the UV emission, a typical property of the emission of IPs (section~\ref{comp}). A data set taken with a higher cadence is required to identify the origin of $F_u$.
\subsection{GAIA DR2 4542123181914763648}
\label{g452}
We select GAIA DR2~4542123181914763648 (hereafter G452) as a candidate of the optical counterpart of
1RXS~J172728.8+132601, for which a 2.5~ks Swift observation measures an X-ray luminosity of $L_X=5.5^{+14}_{-3.2}\times 10^{31}(d/502~{\rm pc})^2~{\rm erg~s^{-1}}$ with $N_H=7.5\times 10^{21}~{\rm cm^{-2}}$ (Table~3). As the ZTF light curve (ZTF18abttrrr) shows (left panel of Figure~\ref{18abt}), the source repeats outbursts on a time scale of years; the outburst that occurred in 2021 was reported as a GAIA transient source, AT~2021aath \citep[GAIA21eni,][]{2021TNSTR3431....1H}, which was identified as a polar-type CV. Nevertheless, since there is no detailed binary information for the target in the literature, we search for a possible periodic signal in the ZTF light curve.
After removing the data of the outburst from the light curve, the LS-periodogram shows a significant periodic signal at $f_{ZTF}=8.6499(8)~{\rm day^{-1}}$ ($\sim 0.115$~day).
The TESS observations of the region around G452 were carried out in 2020 May and June (sectors 25 and 26), during the outburst around MJD~59000 (Figure~\ref{18abt}). We extracted the TESS-FFI light curve and removed the long-term trend caused by the outburst. We found a periodic signal at $F_0=8.65(2)~{\rm day^{-1}}$, which is consistent with the result of the ZTF observation. The right panel of Figure~\ref{18abt} shows the folded light curves of the ZTF and TESS data, which can be described by a sinusoidal modulation. As Figure~\ref{18abt} shows, we also observe a phase shift of the TESS light curve relative to the ZTF light curve, which may be due to the effect of the outburst during the TESS observation. We do not find another significant periodic signal in the TESS data, which may be consistent with the spin-orbit phase synchronization of a polar.
\subsection{Other candidates}
\label{others}
We searched for possible periodic signals of 29 GAIA DR2 sources that are potential counterparts of unidentified X-ray sources. Table~4 presents four sources for which the TESS-FFI data provide a periodic signal, but for which ZTF observations have too few or no data points. As mentioned above, there are two observational modes for the TESS-FFIs, with data sampling frequencies of $\sim48~{\rm day^{-1}}$ and $\sim144~{\rm day^{-1}}$, respectively. As shown in Table~4, GAIA DR2 5360633963010856448 and 5964753754945126528 were observed in both modes, and the periodic signal $F_0$ reported in Table~4 is identified in both data sets. For GAIA~DR2~2031371479242995584 and 1981213682883140864, on the other hand, data from only one observational mode are available, and we cannot discriminate between the true frequency and its alias. We note that, because one pixel of the TESS observations contains several GAIA sources, we cannot rule out that the periodic signal is related to another source.
Among the four sources in Table~4, GAIA DR2 5964753754945126528 (hereafter G596), which shows a periodic signal $F_0\sim 8.34~{\rm day^{-1}}$ $(\sim 0.120$~day) in the TESS data, is a promising candidate for a CV, and its location is consistent with
the X-ray source, AX~J1654.3-4337, which
was discovered by the ASCA Galactic plane survey~\citep{2001ApJS..134...77S}. Swift and NuSTAR observations for this source were carried out in 2020 July-August with a total exposure of 6.7~ks and 26~ks, respectively. We extract the events and spectra of the source region
with the command \verb|nupipeline| under \verb|HEASoft ver.6-29| for the NuSTAR data and the package \verb|Xselect| for the Swift data.
We group the spectral bins to contain a minimum of 20~counts in each bin, and fit the spectrum using \verb|Xspec ver.12.12| (Figure~\ref{axj16-spe}). We find that the spectrum in the 0.2-70~keV band is well fitted by the optically thin thermal plasma emission (\verb|tbabs*mekal| model in Xspec) with a temperature of $k_BT=8.9^{+2.0}_{-1.5}$~keV and absorption column density of $N_H=2.2^{+0.08}_{-0.07}\times 10^{21}~{\rm cm^{-2}}$ ($\chi^2=164$ for 188 D.O.F.). The luminosity in the 0.2-70~keV band is estimated to be $L_X=6.0^{+0.6}_{-0.5} \times 10^{31}(d/462~{\rm pc})^2~{\rm erg~s^{-1}}$. The X-ray emission with a plasma temperature of $\sim 10$~keV suggests the emission from an accretion column on the WD surface, indicating a magnetic CV system. Moreover, the observed X-ray luminosity of $<10^{32}~{\rm erg~s^{-1}}$ indicates that the source is not a typical IP, for which the X-ray luminosity is typically $>10^{32}~{\rm erg~s^{-1}}$.
Figure~\ref{axj-tess} presents the folded light curve
of $F_0=8.34~{\rm day^{-1}}$ extracted from the TESS-FFI data.
We can see that the shape of the light curve shows a double-peaked structure.
Combined with the X-ray spectral properties, this optical emission likely originated from two magnetic poles
heated by the accretion column, and the modulation is likely due to the WD spin. Since the TESS data do not indicate any other periodic signal longer than this spin period within 10~days,
G596/AX~J1654.3-4337 may be a polar, although we cannot rule out the possibility of an IP. We do not find any periodic signal in the NuSTAR data, for which the window effect of the observation dominates the LS-periodogram.
\begin{deluxetable*}{cccccc}
\tablecolumns{6}
\tabletypesize{\footnotesize}
\tablecaption{Other CV candidates}
\tablehead{
\colhead{GAIA} &
\colhead{X-ray source} &
\colhead{Distance} &
\colhead{TESS} &
\colhead{$F_0$} &
\colhead{Proposed type} \\
\colhead{DR2} &
\colhead{}&
\colhead{(pc)} &
\colhead{Sector} &
\colhead{(${\rm day^{-1}}$)} &
\colhead{}
}
\startdata
5360633963010856448 & 1RXS J104612.9-511819 & 496 & 10, 36, 37 & 15.46(2)\tablenotemark{\rm a} & \\
5964753754945126528 & 2SXPS J165423.6-433745 & 462 & 12, 39 & 8.34(4)\tablenotemark{\rm a} & Polar \\
& AX J1654.3-4337 (AX1654) & & & & \\
2031371479242995584 & 1RXS J194401.5+284456 & 416 & 40, 41 & 4.20/139.8\tablenotemark{\rm b} & \\
1981213682883140864 & 2SXPS J220344.5+525450 & 754 & 16, 17 &1.15/46.85\tablenotemark{\rm b} &
\enddata
\tablenotetext{\rm a}{The periodic signal is identified in both observational modes.}
\tablenotetext{\rm b}{Only data taken by one mode is available.}
\end{deluxetable*}
\section{Discussion and summary}
\label{discuss}
\subsection{Comparison with known CVs}
\label{comp}
Figures~\ref{dis} and~\ref{lxuvx} compare the properties of our CV candidates with those of known CVs. Figure~\ref{dis} shows the distributions of the orbital periods and GAIA G-bands magnitudes (upper panel) or the
X-ray luminosity in the 0.3-10~keV band (lower panel) of our candidates. In the figure, we can see the so-called period gap at $P_{orb}\sim 2-3$~hr, in which fewer CVs have been detected \citep{1983A&A...124..267S, 2010MmSAI..81..849R, 2003Ap.....46..114K,2018ApJ...868...60G}. The figure shows that the nonmagnetic systems (black filled circles) have been detected with periods in the range of 0.01-1~day. Known IPs (red filled circles) and polars (blue filled circles), on the other hand, concentrate at orbital periods longer and shorter than the period gap, respectively. We find in the figure that
the six CV candidates discussed in the current study are located below the period gap. This is likely a selection effect because (i) there is a correlation between the orbital period and the GAIA G-band magnitude, as seen in the top panel of Figure~\ref{dis}, and (ii) we have selected the candidates with the condition $9<M_G<12$ on the G-band absolute magnitude (Figure~\ref{hr}).
AX J1654.3-4337 (pink diamond), which is a polar candidate as discussed in section~\ref{others}, is located in or close to the period gap. As the bottom panel of Figure~\ref{dis} shows, the X-ray luminosities of our candidates, $L_X\sim 10^{31-32}~{\rm erg~s^{-1}}$, are relatively large among the CVs located below the period gap. This is because we searched for candidates in the X-ray source catalogs of previous surveys, which may have missed many faint X-ray sources.
We can see in the figure that four candidates (G453, G205, G216, and G432) have an orbital period close to or shorter than $\sim 0.05$~day, which is known as the minimum orbital period in the standard binary evolution of
CVs \citep{2010MmSAI..81..849R}. We note however that the orbital periods of G453, G205 and G432 may be double the values in the figure (sections~\ref{threes} and~\ref{g432}), and hence they may have an orbital period longer than the minimum period.
Figure~\ref{lxuvx} shows the distribution of the
flux ratio of the UV band to the X-ray band measured by Swift. As can be seen in the figure, the ratio is greater than unity for most of the nonmagnetic systems, which is understood as emission from the boundary layer of the accretion disk.
For magnetic CVs, on the other hand, the X-ray emission dominates the UV emission, which is a characteristic of emission from the accretion column~\citep{2017PASP..129f2001M}. Several sources, however, have a ratio greater than 10, and their emission is probably dominated by blackbody emission from the pole with a temperature of $<50$~eV.
We find that our candidates have relatively hard spectra ($F_{UV}/F_X\le 1$), and the two hardest sources (G432 and G596) are classified as candidates for magnetic CVs. For G432, we identify two possible periodic signals ($F_0=18.5~{\rm day^{-1}}$ and $F_u=32.2$ or $111~{\rm day^{-1}}$) in the TESS data. With a UV/X-ray flux ratio much smaller than unity, G432 may be a candidate X-ray faint IP. For G596, the properties of the optical light curve and the X-ray spectrum are consistent with a polar, as described in section~\ref{others}.
Among the unidentified X-ray sources, we searched for new binary candidates that show emission properties similar to those of AR~Scorpii, for which (i) probably no mass is transferred from the companion star to the WD, (ii) the UV emission dominates the X-ray emission, and (iii) a magnetic WD heats up the companion star. With the current photometric studies, however, we can conclude that these candidates do not belong to the AR Scorpii-type binary systems. For example, three candidates (G141, G453, and G452) clearly show dwarf-nova-type outbursts, suggesting the existence of mass transfer from the companion to the WD, and thus an accreting system.
With a large optical variation ($\delta m\sim 1.5$), AR Scorpii contains a WD heating up the companion star. The optical light curves of our candidates show a smaller magnitude variation ($\delta m< 1$; e.g., Figure~\ref{aalight} for G216 and Figure~\ref{ztf18aaz} for G432), and do not show any clear evidence of heating.
G596/AX~J1654.3-4337 is a promising candidate for a magnetic system, but it is likely a polar, whereas AR Scorpii is an IP in the sense that its spin period differs from its orbital period.
Finally, we note that six of our eight candidates have not experienced a large outburst in the last 4 years and have been missed in previous surveys. Our study, therefore, suggests that a large population of WD binary systems with inactive mass transfer could be discovered in future surveys.
In summary, we searched for new CV systems associated with the unidentified X-ray sources listed in the ROSAT, Swift, and XMM-Newton source catalogs. We selected GAIA sources with a G-band absolute magnitude of $9<M_G<12$ and a color of $0.5<G_{BP}-G_{RP}<1.5$,
and identified 29 sources that are potential counterparts of the unidentified X-ray sources.
We carried out a photometric study with
ZTF, TESS, and LOT observations, and we constrained the orbital periods of seven sources (sections~\ref{g141}-\ref{g452}). Among the seven candidates, G141 and G207 are eclipsing binary systems, and G141 shows a secondary eclipse with a shallower depth. We identified three candidates (G141, G453, and G452) in which mass transfer from the companion to the WD is active, and which exhibit repeated outbursts (Figures~\ref{ztf18aam}, \ref{ztf18abi} and \ref{18abt}). For the other three candidates (G207, G205, and G216), on the other hand, the source brightness suddenly drops and stays in a low-luminosity state (Figure~\ref{ztf3}), suggesting that the mass transfer may be inactive. Based on the detection of two periodic signals in the light curves and on the UV/X-ray emission properties, we can classify G141 and G432 as IP candidates, although a spectroscopic study is required for confirmation. In addition to the seven candidates, we confirmed that the unidentified ASCA source, AX J1654.3-4337 (G596), is a candidate polar.
With the current photometric studies, we could not find any candidate for an AR Scorpii-type WD binary, although we selected GAIA sources with a similar magnitude to AR Scorpii. AR Scorpii thus remains a unique WD binary system, and a more sophisticated search (e.g., a high-cadence optical survey of unidentified X-ray sources) will be required to find a new AR~Scorpii-type binary system.
\vspace{4mm}
We thank the referee for useful comments and suggestions.
We are grateful to Dr. Kato for providing the source list in the VSX catalog and for sending several references.
We also thank the Swift-TOO team for arranging the observations for our sources. J.T. and X.F.W are supported by
the National Key Research and Development Program of China (grant No. 2020YFC2201400) and the National Natural Science Foundation of China (grant No. 12173014). A.K.H.K. is supported by the Ministry of Science and Technology (MOST) of Taiwan through grant Nos. 108-2628-M-007- 005-RSP and 109-2628-M-007-005-RSP. J.M. is supported by the National
Natural Science Foundation of China (grant No. 11673062). C.-P.H. acknowledges support from the MOST of Taiwan through grant MOST 109-2112-M-018-009-MY3. L.C.-C.L. is supported by MOST through grant MOST 110-2811-M-006-515 and MOST 110-2112-M-006-006-MY3. K.-L. L. is supported by the MOST of Taiwan through grant 110-2636-M-006-013, and he is
a Yushan (Young) Scholar of the Ministry of Education of Taiwan. C. Y. H. is supported by the National Research Foundation of Korea through grant 2016R1A5A1013277 and 2019R1F1A1062071.
{\it Note added in proof:}
While this article was in press, we were informed that potential orbital periods of 4 sources have already been reported in the AAVSO catalog (for 2SXPS J202600.8+333940 and 2SXPS J192530.4+155424) or in vsnet-chat (for 4XMM J172959.0+522948 and 2SXPS J195230.9+372016). We list the references to the AAVSO catalogs and the vsnet-chat archives for the 7 sources listed in Table~1. In Appendix C (included in the arXiv version of the manuscript), we briefly summarize the information in the catalog and compare it with our results.
{\tt ZTF18aampffv=4XMM J172959.0+522948=MGAB-V705}
\url{https://www.aavso.org/vsx/index.php?view}\\
\url{=detail.top&oid=1499054} \\
\url{http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/}\\
\url{vsnet-chat/8923}
{\tt ZTF18abikbmj=1RXS J185013.9+242222=DDE163}
\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=686692}
{\tt ZTF18abrxtii=2SXPS J195230.9+372016} \\
\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=2224634} \\
\url{http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-chat/8866}
{\tt ZTF18aayefwp=2SXPS J202600.8+333940=BMAM-V634}
\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=1543125}\\
\url{http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-chat/8920}
{\tt ZTF17aaapwae=2SXPS J211129.4+445923} \\
\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=2223038}
{\tt ZTF18aazmehw=2SXPS J192530.4+155424=DDE182}
\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=1543028}
{\tt ZTF18abttrrr=1RXS J172728.8+132601=GAIA21eni}\\
\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=2224470}
\facility{{\it Swift}(XRT), {\it XMM-Newton}(EPIC), {\it NuSTAR}(FPM), {\it ZTF}, {\it TESS} and {\it LOT}}.
\software{\newline {\tt Science Analysis System} \\ (\url{https://www.cosmos.esa.int/web/xmm-newton/how-to-use-sas}; \citealt{SAS2004})
\newline {\tt HEASoft} \\ (\url{https://heasarc.gsfc.nasa.gov/docs/software/lheasoft/}; \citealt{HEAsoft2014})
\newline {\tt Xspec} \\ (\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/}; \citealt{Xspec96})
\newline {\tt Lightcurve} \\ (\url{https://heasarc.gsfc.nasa.gov/docs/tess/LightCurve-object-Tutorial.html}; \citealt{2018ascl.soft12013L})
\newline {\tt eleanor} \\ (\url{https://eleanor.readthedocs.io/en/latest/}; \citealt{2019PASP..131i4502F})
\newline {\tt IRAF} \\ (\url{https://iraf-community.github.io}; \citealt{1993ASPC...52..173T})
\newline {\tt astropy} \\ (\url{https://docs.astropy.org/en/stable/index.html}; \citealt{astropy:2013})
}
\bibliography{adssample}
\appendix
\restartappendixnumbering
\section{LS periodogram with time-correlated noise model}
\label{noise}
We apply the time-correlated noise model based on Equation (13) of \cite{2020A&A...635A..83D}, and assume the covariance matrix to follow
\begin{equation}
C_{i,j}=\delta_{i,j}\sigma_i^2+\sigma^2_{\rm exp}e^{-|t_i-t_j|/\tau_{\rm exp}},
\label{cmodel}
\end{equation}
where $\sigma_i$ are the observational errors (usually provided with the processed data) entering the diagonal term, and $\sigma_{\rm exp}$ is the amplitude of the correlated noise with a time scale of $\tau_{\rm exp}$. We set $\sigma_{\rm exp}$ to the amplitude of the fluctuation in the light curve, and create periodograms for different values of $\tau_{\rm exp}$.
For each frequency, the normalized power of the LS periodogram is calculated from \citep{2018ApJS..236...16V}
\begin{equation}
z_1(f)=1-\frac{\hat{\chi}^2(f)}{\hat{\chi}^2_0},
\end{equation}
where $\hat{\chi}^2$ is the minimum value of $\chi^2(f)=(\mathbf{y}-\mathbf{y}_{\rm model})^T \mathbf{C}^{-1}(\mathbf{y}-\mathbf{y}_{\rm model})$, and $\mathbf{y}$ and $\mathbf{y}_{\rm model}$ are the time series of the observation and model, respectively. In addition, $\hat{\chi}^2_0$ is a non-varying reference value. For each periodogram, we scan frequencies in the range $1~{\rm day^{-1}}<f<100~{\rm day^{-1}}$ and estimate the FAP using the method of \cite{2020A&A...635A..83D}. We find that the LS periodogram of the TESS data is insensitive to the time-correlated noise model within the current framework.
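The generalized least-squares power above can be sketched numerically as follows. This is a minimal numpy illustration, not the authors' pipeline: the function name, the sinusoid-plus-offset model, and the direct matrix inversion are our assumptions; the covariance kernel and the normalization $z_1(f)=1-\hat{\chi}^2(f)/\hat{\chi}^2_0$ follow Equation~(\ref{cmodel}) and the text.

```python
import numpy as np

def correlated_noise_power(t, y, yerr, freqs, sigma_exp, tau_exp):
    """Generalized LS periodogram power z1(f) = 1 - chi2(f)/chi2_0 with
    covariance C_ij = delta_ij sigma_i^2 + sigma_exp^2 exp(-|t_i-t_j|/tau_exp)."""
    dt = np.abs(t[:, None] - t[None, :])
    C = np.diag(yerr ** 2) + sigma_exp ** 2 * np.exp(-dt / tau_exp)
    Cinv = np.linalg.inv(C)

    # Reference chi^2 for the non-varying (constant) model, fitted by GLS.
    ones = np.ones_like(t)
    mean = (ones @ Cinv @ y) / (ones @ Cinv @ ones)
    r0 = y - mean
    chi2_0 = r0 @ Cinv @ r0

    power = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        # Design matrix: sinusoid at frequency f plus a constant offset.
        X = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t), ones])
        beta = np.linalg.solve(X.T @ Cinv @ X, X.T @ Cinv @ y)
        r = y - X @ beta
        power[k] = 1.0 - (r @ Cinv @ r) / chi2_0
    return power
```

Because the constant model is nested in the sinusoid-plus-offset model, $0 \le z_1(f) \le 1$, and a genuine periodicity stands out as a peak whose height can then be compared against the FAP threshold.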
Figure~\ref{corre} shows the LS periodogram of the ZTF data with the time-correlated noise model for G141 (left panel) and G205 (right panel). As we demonstrated in Section~\ref{g141}, the possible periodic signal of G141, $f_{\rm ZTF}\sim 47~{\rm day^{-1}}$, cannot be confirmed in the TESS/LOT data. From the left panel of Figure~\ref{corre}, we find that the signal at $f_{\rm ZTF}\sim 47~{\rm day^{-1}}$ disappears from the LS periodogram for $\tau_{\rm exp}>0.1$~day. For G205 (right panel), on the other hand, the periodic signal $f_{\rm ZTF}\sim 24.6~{\rm day^{-1}}$ is confirmed in the TESS data, and its presence in the periodogram is insensitive to the noise model.
Although the noise model affects the shape of the LS periodogram of the ZTF data, it has little effect on determining the existence of the periodic signals reported in Table~1. These results suggest that the periodic signal $f_{\rm ZTF}\sim 47~{\rm day^{-1}}$ of G141 is likely related to the time-correlated noise.
\section{LS periodograms and folded light curves of the ZTF and TESS data}
\restartappendixnumbering
Figures~\ref{tess} and \ref{all-light} show LS periodograms of the TESS data and folded light curves of the ZTF/TESS data for the seven candidates listed in Table~1. Figure~\ref{other} presents the LS periodograms and folded light curves of the TESS data for the four candidates listed in Table~4.
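The folded light curves in these figures are produced by phase folding at the candidate period. A minimal sketch of that step is given below; the function name, binning scheme, and epoch convention are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fold(t, y, period, t0=0.0, nbins=20):
    """Phase-fold a light curve at `period` and average it in phase bins."""
    phase = ((t - t0) / period) % 1.0
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, edges) - 1          # bin index of each point
    binned = np.array([y[idx == i].mean() for i in range(nbins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return phase, centers, binned
```

Folding at the correct period concentrates the modulation into a smooth curve in phase, whereas folding at a spurious period scatters it, which is what the visual comparison of the ZTF and TESS folded light curves tests.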
\section{Information from the International Variable Star Index and vsnet-chat}
While this article was in press, we learned that candidate orbital periods for four of the seven sources listed in Table~1 had been reported in
the International Variable Star Index\footnote{\url{https://www.aavso.org/vsx/index.php?view=search.top}} (VSX; for G205 and G432) and in vsnet-chat (for G141 and G207). We summarize the information in VSX and vsnet-chat for the seven sources below.
\paragraph{G141, 4XMM~J172959.0+522948} This source is named MGAB-V705 in VSX\footnote{\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=1499054}}
and a candidate orbital period of $\sim 0.0897008(1)$~day is reported in vsnet-chat~8923\footnote{\url{http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-chat/8923}}. This value is consistent with the value reported in Table~1.
\paragraph{G453, 1RXS J185013.9+242222} As noted in the main text, this source has been known as a dwarf nova from previous observations of its outbursts. This source is named DDE 163 in VSX\footnote{\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=686692}}.
\paragraph{G207, 2SXPS J195230.9+372016} This source is the counterpart of ZTF~18abrxtii, which is listed in VSX\footnote{\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=2224634}}. The orbital period of $\sim 0.0699896(1)$~day, which is consistent with the value reported in Table~1, is mentioned in vsnet-chat~8866\footnote{\url{http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-chat/8866}}.
\paragraph{G205, 2SXPS J202600.8+333940} This source is named BMAM-V634 in VSX\footnote{\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=1543125}}, and the candidate orbital period of $\sim$0.0813141~day obtained from the ZTF data \citep{2020ApJS..249...18C} is consistent with our result.
\paragraph{G216, 2SXPS J211129.4+445923} This source is the counterpart of ZTF~17aaapwae, which is listed in VSX\footnote{\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=2223038}} as a variable star.
\paragraph{G432, 2SXPS J192530.4+155424} This source is named DDE 182 in VSX\footnote{\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=1543028}}. The candidate orbital period of $\sim 0.053903$~day in the catalog is consistent with our result.
\paragraph{G454, 1RXS J172728.8+132601} As noted in the main text, the outburst that occurred in 2021 was alerted as a Gaia transient source, AT~2021. ZTF~18abttrrr is listed in VSX\footnote{\url{https://www.aavso.org/vsx/index.php?view=detail.top&oid=2224470}} as a variable star.
\title{Multiwavelength observations of Swift J0243.6+6124 \\from 2017 to 2022}
\subtitle{}
\author{Wei Liu\inst{1,2}
\and
Jingzhi Yan\inst{1}
\and
Pablo Reig\inst{3,4}
\and
Xiaofeng Wang\inst{5,6}
\and
Guangcheng Xiao\inst{1}
\and
Han Lin\inst{5}
\and
Xinhan Zhang\inst{5}
\and
Hanna Sai\inst{5}
\and
Zhihao Chen\inst{5}
\and
Shengyu Yan\inst{5}
\and
Qingzhong Liu\inst{1}
}
\institute{Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, \\Nanjing, 210023, China\\
\email{weiliu@pmo.ac.cn}, {jzyan@pmo.ac.cn}%
\and
School of Astronomy and Space Science, University of Science and Technology of China, Hefei, 230026, China
\and
Institute of Astrophysics, Foundation for Research and Technology-Hellas, 71110 Heraklion, Greece
\and
Physics Department, University of Crete, 71003 Heraklion, Greece
\and
Physics Department/Tsinghua Center for Astrophysics, Tsinghua University, Beijing, 100084, China
\and
Beijing Planetarium, Beijing Academy of Sciences and Technology, Beijing, 100044, China
}
\date{Received 27 May 2022 /
Accepted 12 July 2022}
\abstract
{Swift J0243.6+6124 is a high-mass X-ray binary that went into a giant X-ray outburst in 2017. During this event, the X-ray luminosity reached the highest value ever measured in a galactic Be/X-ray binary.}
{Our aim is to study the long-term variability of Swift J0243.6+6124 after the 2017 major X-ray outburst.}
{We have obtained optical spectroscopy and photometry data during four years after the event. The long-term photometric light curve and the equivalent widths of the H$\alpha$ and He I $\lambda$6678 lines were used to monitor the state of the Be star's circumstellar disk. The H$\alpha$ line profiles show evidence for V\,/\,R variability that was accounted for by fitting the H$\alpha$ spectral line profile with two Gaussian functions.
We divided our data into three phases according to the intensity of the X-ray, optical, and infrared emission.}
{Phase I covers the rise and decay of the giant X-ray outburst that took place in October--November 2017. We interpret phase II as the dissipation of the Be star's equatorial disk and phase III as its recovery. The timescale of a complete formation and dissipation process is about 1250 days.
The epoch when the dissipation process stopped and the reformation period began is estimated to be around MJD 58530. We find a delay of $\sim$\,100\,--\,200 days between the minimum of the optical or infrared intensity and the strength of the H$\alpha$ line after the X-ray outburst, which may indicate that the dissipation of the disk begins from the inner parts.
The motion of the density perturbation inside the disk is prograde, with a V\,/\,R quasi-period of about four years.
The source shows a positive correlation in the $(B-V)$ color index versus $V$-band magnitude diagram, which implies that the system is seen at a small or moderate inclination angle. }
{Despite the super-Eddington X-ray luminosity during the outburst, the subsequent pattern of long-term optical and IR variability of Swift J0243.6+6124 is typical of Be/X-ray binaries.}
\keywords{stars: emission-line, Be -- binaries: close -- X-rays: binaries -- stars: individual: Swift J0243.6+6124 -- stars: neutron
}
\section{Introduction}
According to the luminosity class of the optical companion, high-mass X-ray binaries (HMXBs) are divided into supergiant X-ray binaries and Be/X-ray binaries (BeXBs; Reig \citeyear{2011Ap&SS.332....1R}).
Most of the optically identified HMXBs (or HMXB candidates) are known or suspected BeXBs (Liu et al. \citeyear{2006A&A...455.1165L}).
The optical companion of a BeXB is a Be star: a fast-rotating, nonsupergiant B-type star (although the class may also include late O-type stars; Negueruela et al. \citeyear{2004AN....325..749N}) of luminosity class III--V that has shown emission lines at some point in its life (Rivinius et al. \citeyear{2013A&ARv..21...69R}).
There are two different disks in Be/X-ray binaries: circumstellar disks around the Be stars, and accretion disks around the neutron stars, which temporarily appear during X-ray outbursts (Ziolkowski \citeyear{2002MmSAI..73.1038Z}; Hayasaki \& Okazaki \citeyear{2004astro.ph.12203H}).
BeXBs are classified into persistent and transient sources according to their X-ray properties (Reig \& Roche \citeyear{1999MNRAS.306..100R}). Transient BeXBs display two types of X-ray outbursts when they are active: type I (or normal) outbursts, and type II (or giant) outbursts. The peak luminosity during type I outbursts is typically $L_\mathrm{X} \leq 10^{37}$ erg s$^{-1}$, while during type II outbursts, it may reach the Eddington limit, $L_\mathrm{X} \sim 10^{38}$ erg s$^{-1}$.
\object{Swift J0243.6+6124} was detected in X-rays for the first time by the \emph{Swift}/BAT on 3 October 2017 (Kennea et al. \citeyear{2017ATel10809....1K}). It is the first Be/X-ray binary in our Galaxy observed to emit at super-Eddington luminosity. \cite{2020A&A...640A..35R} estimated the spectral type and rotational velocity of the companion of Swift J0243.6+6124 to be O9.5Ve and $v \sin i = 210 \pm 20$ km s$^{-1}$. X-ray pulsations with a period of $\sim$\,9.86 s were detected by \emph{Swift}/XRT and \emph{Fermi}/GBM (Jenke \& Wilson-Hodge \citeyear{2017ATel10812....1J}). The orbital period is 28.3 days, and the eccentricity is 0.092 (Doroshenko et al. \citeyear{2018A&A...613A..19D}).
There are several different reported values for the distance to this source in the literature: 2.5 $\pm$ 0.5 kpc (Bikmaev et al. \citeyear{2017ATel10968....1B}) and $\sim$\,5 kpc (Reig et al. \citeyear{2020A&A...640A..35R}), both based on optical photometric observations; a lower limit of 5 kpc set by \cite{2018Natur.562..233V}; $\sim$\,6 kpc based on two accretion torque models (Zhang et al. \citeyear{2019ApJ...879...61Z}); and $5.5^{+0.4}_{-0.3}$ kpc given in the \textit{Gaia} DR3 catalog (\textit{Gaia} Collaboration et al. \citeyear{2016A&A...595A...1G,2022yCat.1355....0G}). If the distance to the source is assumed to be 5 kpc, its peak luminosity is estimated as $1 \times 10^{39}$ erg s$^{-1}$ (0.1--10 keV), which exceeds the Eddington limit for a neutron star during the giant outburst.
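Since the luminosity scales with the square of the assumed distance, $L = 4\pi d^2 F$, the super-Eddington conclusion can be checked against the full range of published distances. The short sketch below is ours (constants and the 1.4 $M_\odot$ Eddington limit are standard values, not taken from this paper); it back-solves the peak flux from the quoted $L \sim 10^{39}$ erg s$^{-1}$ at 5 kpc and rescales it.

```python
import numpy as np

KPC_CM = 3.0857e21            # centimeters per kiloparsec
L_EDD_NS = 1.26e38 * 1.4      # Eddington luminosity of a 1.4 Msun NS [erg/s]

def luminosity(flux_cgs, d_kpc):
    """Isotropic luminosity L = 4 pi d^2 F for a flux in erg/s/cm^2."""
    d = d_kpc * KPC_CM
    return 4.0 * np.pi * d ** 2 * flux_cgs

# Peak 0.1-10 keV flux implied by L ~ 1e39 erg/s at the 5 kpc distance.
flux_peak = 1e39 / (4.0 * np.pi * (5.0 * KPC_CM) ** 2)

# Rescale to the other distance estimates quoted in the literature.
for d in (2.5, 5.0, 6.0):
    L = luminosity(flux_peak, d)
    print(f"d = {d} kpc: L = {L:.1e} erg/s, L/L_Edd = {L / L_EDD_NS:.1f}")
```

Even at the smallest published distance of 2.5 kpc, the implied peak luminosity still exceeds the Eddington limit for a 1.4 $M_\odot$ neutron star, which is why the distance debate does not change the super-Eddington classification.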
The magnetic field strength of the neutron star in Swift J0243.6+6124 is estimated to be approximately $10^{13}$ G (Tsygankov et al. \citeyear{2018MNRAS.479L.134T}; Zhang et al. \citeyear{2019ApJ...879...61Z}), although \cite{2018A&A...613A..19D} advocated for a lower value, at the lower limit of the range $(3-9) \times10^{12}$ G. Based on the detection of electron-cyclotron resonance scattering features (CRSFs), \cite{2022arXiv220604283K} estimated a surface magnetic field of $\sim$\,1.6 $\times$ 10$^{13}$ G for Swift J0243.6+6124, which unambiguously proves the presence of multipole field components close to the surface of the neutron star. This measured surface magnetic field is the strongest of all known neutron stars with detected electron CRSFs, and it is also the strongest among all neutron star ultraluminous X-ray sources. All types of X-ray binaries have been observed to launch jets, with the exception of neutron stars with strong magnetic fields (stronger than $10^{12}$ G), which implies that a strong magnetic field restrains jet formation (van den Eijnden et al. \citeyear{2018Natur.562..233V}). Therefore, the detection of radio emission from Swift J0243.6+6124 during the X-ray outbursts is a surprising result. However, its radio luminosity is two orders of magnitude fainter than that of other accreting neutron stars with similar X-ray luminosities (van den Eijnden et al. \citeyear{2018Natur.562..233V}), which implies that the magnetic field of the neutron star still plays an important role in powering the jets.
In this work, we report new optical spectroscopic observations and photometric observations. These observations witnessed the partial dissipation after the giant outburst and subsequent reformation of the Be star's circumstellar disk.
\section{Observations}
\subsection{Optical spectroscopy}
Optical spectroscopic observations were mainly obtained with two telescopes at two different observatories: the observations from the Xinglong Station of the National Astronomical Observatories in Hebei province (China) were obtained with the OptoMechanics Research (OMR) spectrograph or the BAO Faint Object Spectrograph and Camera (BFOSC) on the 2.16 m telescope, and the observations from the Lijiang station of Yunnan Astronomical Observatory in Yunnan province (China) used the Yunnan Faint Object Spectrograph and Camera (YFOSC) on the 2.4 m telescope.
The OMR was equipped with a 1024 $\times$ 1024 (24 micron) pixel TK1024AB2 CCD. The OMR Grism 4 has 1200 lp $\rm mm^{-1}$, giving a nominal dispersion of 1.2 \AA\ pixel$^{-1}$ and covering the wavelength range 5500--6900 \AA. The BFOSC was equipped with a 2048 $\times$ 2048 (15 micron) pixel Loral Lick 3 CCD. The dispersions of BFOSC Grisms 4 and 8 are 2.97 and 1.20 \AA\ pixel$^{-1}$, covering the wavelength ranges 4000--8700 and 5800--8280 \AA, respectively. The YFOSC was equipped with a 2k $\times$ 4k (13.5 micron) pixel E2V 42-90 CCD. The dispersion of YFOSC Grism 8 is 1.47 \AA\ pixel$^{-1}$, covering the wavelength range 4970--9830 \AA. In addition, we analyzed new optical spectroscopic observations obtained from the 1.3 m telescope of the Skinakas observatory (SKO) in Crete (Greece). The 1.3 m telescope of the SKO was equipped with a 2048 $\times$ 2048 (13.5 micron) pixel ANDOR IKON CCD and a 1302 lines $\rm mm^{-1}$ grating, giving a nominal dispersion of $\sim$\,0.8 \AA\ pixel$^{-1}$.
We used the Image Reduction and Analysis Facility (IRAF)\footnote{IRAF is distributed by NOAO, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperation with the National Science Foundation.} software package to reduce and analyze all the spectra, performing bias subtraction and flat-field correction on the data and then removing cosmic rays. A helium-argon calibration lamp was employed to obtain the pixel-wavelength relation. In order to ensure the consistency of the spectral processing, all spectra were normalized to the adjacent continua. We measured the equivalent width of the H$\alpha$ line (hereafter EW(H$\alpha$) for short) five times, each measurement with a different selection of the continuum. The final EW(H$\alpha$) is the average of the five measurements, and the error is the standard deviation. The typical error of EW(H$\alpha$) is within 5\%; its value is determined by the quality of the continuum. The equivalent widths of the He I $\lambda$6678 line (hereafter EW(He I $\lambda$6678) for short) were obtained following the same method as for EW(H$\alpha$).
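The equivalent-width measurement described above amounts to integrating $1 - F/F_c$ across the line for each choice of continuum. The sketch below is a minimal numpy version of that step (the linear continuum fit and the window interface are our assumptions, not the IRAF task actually used); repeating it for several continuum window choices and averaging reproduces the error estimate described in the text.

```python
import numpy as np

def equivalent_width(wave, flux, line_lo, line_hi, cont_windows):
    """EW = integral of (1 - F/F_c) dlambda over [line_lo, line_hi].
    The continuum F_c is a straight line fitted through the pixels falling
    in `cont_windows`, a list of (lo, hi) wavelength pairs in Angstrom.
    By the usual convention the EW of an emission line is negative."""
    cont = np.zeros_like(wave, dtype=bool)
    for lo, hi in cont_windows:
        cont |= (wave >= lo) & (wave <= hi)
    slope, intercept = np.polyfit(wave[cont], flux[cont], 1)
    fc = slope * wave + intercept

    line = (wave >= line_lo) & (wave <= line_hi)
    d = 1.0 - flux[line] / fc[line]
    w = wave[line]
    # Trapezoidal integration over the line window.
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(w)))
```

For a Gaussian emission line of amplitude $A$ and width $\sigma$ on a unit continuum, this returns $-A\sigma\sqrt{2\pi}$, so an H$\alpha$ EW of $-11$~\AA\ corresponds, e.g., to $A \approx 2.2$ at $\sigma = 2$~\AA.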
The log of the spectroscopic observations is given in Table \ref{spec}. EW(He I $\lambda$6678) and EW(H$\alpha$) are plotted in the third and sixth panels of Fig.~\ref{EW}, respectively. The evolution of the H$\alpha$ line profiles is plotted in Fig.~\ref{profile}. The evolution of the $\log(V\,/\,R)$ and the peak separation of H$\alpha$ line are plotted in Fig.~\ref{V_R} and listed in Table \ref{V_R_table}.
\subsection{Optical photometry}
Optical photometric observations were obtained with five telescopes at three different observatories: from the Xinglong station of the National Astronomical Observatories, Chinese Academy of Sciences (NAOC), observations were obtained with the Tsinghua-NAOC Telescope (TNT, 80 cm), the 60 cm telescope, and the 2.16 m telescope; from the Lijiang station of Yunnan Observatories (YNAO), the data came from the 2.4 m telescope; and from the Yaoan astronomical observation station of Purple Mountain Observatory (PMO), the data came from the Yaoan High Precision Telescope (YAHPT, 80 cm).
The TNT (80 cm) is an equatorial-mounted Cassegrain system with a focal ratio of f/10, made by AstroOptik, funded by Tsinghua University in 2004 and jointly operated with NAOC, which is equipped with a PI VersArray 1300B LN 1340 $\times$ 1300 thin, back-illuminated CCD with a 20 $\mu$m pixel size \citep{2008ApJ...675..626W,2012RAA....12.1585H}. In this configuration, the plate scale is 0.52$''$ pixel$^{-1}$, giving a field of view of $11.4 \times 11.1$ $\rm arcmin^{2}$.
The 60 cm telescope is an equatorial-mounted system with a focal ratio of f/4.23, which is equipped with the Andor DU934P-BEX2-DD 1024 $\times$ 1024 CCD and provides a field of view of 18 $\times$ 18 $\rm arcmin^{2}$.
The 2.4 m telescope is an altazimuth-mounted Cassegrain system with a focal ratio of f/8, which is equipped with an E2V CCD42-90 2k $\times$ 2k thin, back-illuminated, deep-depletion CCD with a 13.5 $\mu$m pixel size. In this configuration, the plate scale is 0.28$''$ pixel$^{-1}$, giving a field of view of 9.6 $\times$ 9.6 $\rm arcmin^{2}$.
The YAHPT (80 cm) is an altazimuth-mounted RC optical system with a focal ratio of f/10, made by Astro Systeme Austria, which is equipped with a PIXIS 2048B back-illuminated CCD with a 13.5 $\mu$m pixel size. In this configuration, the plate scale is 0.347$''$ pixel$^{-1}$, providing a field of view of 11.8 $\times$ 11.8 $\rm arcmin^{2}$.
The 2.16 m telescope is an equatorial-mounted RC optical system with a focal ratio of f/9, made by NAOC, CAS Nanjing Astronomical Instruments Co., Ltd. (NAIRC), and the Institute of Automation of the Chinese Academy of Sciences (CASIA), which is equipped with an Andor-DZ936-BEX2-DD 2048 $\times$ 2048 CCD with a 13.5 $\mu$m pixel size. In this configuration, the plate scale is 0.274$''$ pixel$^{-1}$, providing a field of view of 9.36 $\times$ 9.36 $\rm arcmin^{2}$.
In all five telescopes, Swift J0243.6+6124 was observed through the standard Johnson-Cousins $B$, $V$, $R$, and $I$ filters. The photometric data reduction was performed using standard routines and aperture photometry packages (some from the zphot package) in IRAF, including bias subtraction and flat-field correction. In order to derive the variation in the optical brightness, we selected the reference star Gaia 465628266540345216 ($\alpha$: 02 43 38.23, and $\delta$: +61 26 40.7, J2000) \cite[according to ][ the average magnitudes of the reference star are B = 13.67 $\pm$ 0.01, V = 13.02 $\pm$ 0.01, R = 12.65 $\pm$ 0.01, and I = 12.25 $\pm$ 0.02]{2020A&A...640A..35R} in the field of view of Swift J0243.6+6124 to derive its differential magnitudes. The photometric magnitudes are given in Table \ref{phot}.
To study the long-term optical variability of the source, we used the public optical photometric data from the \emph{ASAS-SN}\footnote{https://asas-sn.osu.edu/variables/7306192e-fb93-58a1-98a8-1809\\e318a711} Variable Stars Database (Shappee et al. \citeyear{2014ApJ...788...48S}; Jayasinghe et al. \citeyear{2019MNRAS.486.1907J}). There is a slightly fainter star at 6.2 arcsec from Swift J0243.6+6124, within the full width at half maximum (FWHM). This star is resolved in our photometry. However, the pixel scale and the FWHM in ASAS-SN are 8 arcsec and $\sim$2 pixels, and hence in these images the neighboring star contributes to the measured flux from Swift J0243.6+6124. The calibrated $V$-band magnitude of the fainter close star is $V$ = 14.52 $\pm$ 0.01 mag (Reig et al. \citeyear{2020A&A...640A..35R}). We removed the brightness of the neighboring star from the total observed flux; the applied corrections are in the range $\Delta V$ = 0.20$^{+0.03}_{-0.04}$ mag. We also made use of the public optical photometric data from the international database of the American Association of Variable Star Observers (\emph{AAVSO}\footnote{https://app.aavso.org/webobs/results/?star=000-BML-322\&num\_\\results=200}). Finally, we also included the optical photometric data from \cite{2020A&A...640A..35R}.
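The neighbor-star correction follows from the fact that fluxes, not magnitudes, add linearly in a blended aperture. A minimal sketch of the deblending arithmetic (the function name is ours; the neighbor magnitude $V = 14.52$ is the value quoted above):

```python
import numpy as np

def deblend_mag(m_blend, m_neighbor):
    """Magnitude of the target alone, given the blended magnitude of
    target + neighbor and the neighbor's magnitude. Works because the
    fluxes f = 10**(-0.4 m) subtract linearly."""
    f_blend = 10.0 ** (-0.4 * m_blend)
    f_neighbor = 10.0 ** (-0.4 * m_neighbor)
    return -2.5 * np.log10(f_blend - f_neighbor)
```

For a blended ASAS-SN measurement near $V \approx 12.6$ with the $V = 14.52$ neighbor, this yields a correction of $\Delta V \approx 0.20$ mag, consistent with the range quoted in the text; the correction grows for fainter blend magnitudes, which explains the asymmetric $^{+0.03}_{-0.04}$ spread.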
The Johnson $V$-band light curve is plotted in the fourth panel of Fig.~\ref{EW}. A detailed view of the 2017 outburst, including pre- and post-outburst observations, is shown in Fig.~\ref{V_IR}. The evolution of the $(B-V)$ color index is plotted in the seventh panel of Fig.~\ref{EW}, and the variation of the $(B-V)$ color index versus $V$-band magnitude is plotted in Fig.~\ref{B-V_V}, where only the data from the 80 cm telescope, the 2.4 m telescope, and the 2.16 m telescope are shown.
\subsection{\emph{NEOWISE} photometry}
We made use of the light curves in the W1 (3.4 $\mu$m) and W2 (4.6 $\mu$m) bands provided by the \emph{NEOWISE} (Mainzer et al. \citeyear{2011ApJ...731...53M}) project through the IRSA viewer\footnote{https://irsa.ipac.caltech.edu/irsaviewer}. We plot them in the fifth panel of Fig.~\ref{EW} and in the bottom panel of Fig.~\ref{V_IR}.
\subsection{X-Ray observations}
The Burst Alert Telescope (BAT)\footnote{https://swift.gsfc.nasa.gov/results/transients/weak/SwiftJ0243.6p61\\24/} on board \emph{Swift} (Krimm et al. \citeyear{2013ApJS..209...14K}), \emph{MAXI}\footnote{http://maxi.riken.jp/star\_data/J0243+614/J0243+614.html}, and the Gamma-ray Burst Monitor (GBM)\footnote{https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/\\swiftj0243.html} on board \emph{Fermi} (Meegan et al. \citeyear{2009ApJ...702..791M}) have been monitoring Swift J0243.6+6124 in the hard X-ray energy band (15--50 keV with BAT, 2--20 keV with \emph{MAXI}, and 12--50 keV with GBM) since October 2017. One type II X-ray outburst and several type I outbursts were detected between October 2017 and January 2019. The X-ray light curves from BAT (15--50 keV) and \emph{MAXI} (2--20 keV) are plotted in the first panel of Fig.~\ref{EW}. The spin-frequency history measured with GBM is plotted in the second panel of Fig.~\ref{EW}.
\section{Results}
Figure~\ref{EW} shows the X-ray, optical, and IR long-term variability of Swift J0243.6+6124. The observations cover the October 2017 X-ray outburst and the changes experienced by the source during the following four-year period. After the X-ray outbursts, the optical brightness of the source decreased and reached a minimum at $\sim$\,MJD 58530. Since then, it has been recovering. There is a clear correlation between the optical and IR flux on long timescales (weeks or months). When the source is active in X-rays, the optical and IR emission also correlates with the overall X-ray flux, in the sense that the source is bright in the optical and IR at the time of the outbursts. The strength of the H$\alpha$ line follows the same general trend, although the minimum after the X-ray outbursts appears to be delayed by $\sim$\,100--200 days with respect to the optical or IR continuum flux.
After examining the long-term light curves and spectral evolution, we divided the observations into three different epochs or phases (Fig.~\ref{EW}). These phases reflect significant changes in the properties of the data. Each phase is characterized by a different pattern of X-ray, optical, or IR variability. Phase I corresponds to the giant 2017 X-ray outburst; during phase II, the source experiences a gradual fading of its brightness and a weakening of the spectral line parameters; and in phase III, the long-term trend is reversed and the source exhibits a gradual increase in the brightness and strength of the spectral lines, most significantly, in the H$\alpha$ line.
Phase I (MJD 58030--58180) covers the giant X-ray outburst. The X-ray luminosity changes by about two orders of magnitude ($10^{37}$--$10^{39}$ erg s$^{-1}$; Doroshenko et al. \citeyear{2020MNRAS.491.1857D}) and the $V$ band by 0.4 magnitudes. EW(H$\alpha$) and EW(He I $\lambda$6678) display erratic variability. The $(B-V)$ color displays the largest variation in the entire period of the observations, with a change of about 0.1 magnitude in 25 days.
In phase II (MJD 58180--58530), the X-ray variability is characterized by regular type I outbursts with changes in luminosity of about one order of magnitude, between $10^{36}$ and $10^{37}$ erg s$^{-1}$. The brightness in the $V$ band decreases by 0.1 magnitudes and in the near-infrared by 0.6 magnitudes. EW(H$\alpha$) decreases from a maximum of $-11$ \AA\ to a minimum of $-5$ \AA. EW(He I $\lambda$6678) presents large scatter, but it also decreases on average. The dispersion in the EW(He I $\lambda$6678) measurements is most likely due to the low signal-to-noise ratio (S/N). There are no suitable $B$-band observations during the first half of this phase; thus we do not have $(B-V)$ data. During the second half, the $(B-V)$ color is lower on average (i.e., the emission is bluer) than during phase I.
In phase III (MJD 58530--), the source is no longer detected in X-rays, while all the optical and infrared indicators increase gradually. At the end of this phase, EW(H$\alpha$) and the $V$-band magnitude recover to almost pre-outburst values. The overall optical emission becomes redder as the system evolves through this phase.
\subsection{H$\alpha$ line profiles}
Figure~\ref{profile} shows the profiles of the H$\alpha$ line. Although we are limited by the low spectral resolution, some general variability trends are visible. The H$\alpha$ emission lines present asymmetric blue-dominated profiles ($V>R$) between October 2017 and September 2018 (phases I--II). The red peak is only noticeable in the earliest spectrum, taken on 27 October 2017. The central rest wavelength (marked by the vertical dashed line in Fig.~\ref{profile}) lies systematically to the right of the peak, confirming the asymmetry. At the end of 2018 and during 2019 (phases II--III), the line flux gradually shifts toward the red. Approximately symmetric single-peak profiles are observed in the 2020--2021 spectra (phase III). At the end of 2021, the profiles are again blue-dominated.
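The V\,/\,R quantification described in the abstract, fitting the H$\alpha$ profile with two Gaussian components, can be sketched as below. This is a minimal scipy illustration under our own assumptions (function names, starting values, and the unit-continuum baseline are ours); the authors' actual fitting setup may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    """Two Gaussian emission components on a unit continuum."""
    return (1.0 + a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
                + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

def vr_ratio(wave, norm_flux, p0):
    """Fit a normalized Halpha profile with two Gaussians and return
    (log10(V/R), peak separation). `p0` lists the blue (V) component
    parameters first, then the red (R) component."""
    popt, _ = curve_fit(two_gauss, wave, norm_flux, p0=p0)
    a_v, mu_v, _, a_r, mu_r, _ = popt
    return np.log10(a_v / a_r), abs(mu_r - mu_v)
```

$\log(V/R) > 0$ then marks a blue-dominated profile like those seen in 2017--2018, and the fitted peak separation traces the quantity plotted in Fig.~\ref{V_R}.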
The He I line also exhibits a variable asymmetric double-peak profile, with phase and intensity variations similar to those of the H$\alpha$ line. Because the He I line forms much closer to the Be star than the H$\alpha$ line, the He I V\,/\,R variability implies that the density perturbation affects the inner parts of the disk as well as the outer parts where the H$\alpha$ line is formed. Unfortunately, the low S/N and low spectral resolution prevented a detailed analysis of this line.
\section{Discussion}
We have been monitoring the optical counterpart of the Be/X-ray binary Swift J0243.6+6124 since its discovery in October 2017 and obtained optical spectra on a regular basis. Using data from both the archives and the literature, we are able to study the long-term variability over at least four years after the outburst. We attribute this long-term variability to the evolution of the Be star's circumstellar disk.
The variability patterns of the X-ray, optical, and IR parameters allowed us to divide the observations into three different phases. Phase I covers the rise and decay of the X-ray outburst. The largest and fastest variability timescales are seen during this phase. This violent event must have affected the entire structure of the circumstellar disk. In particular, the H$\alpha$ line displayed several sudden changes of 10\%--20\% in its strength on timescales of about one month during this phase (see the sixth panel of Fig.~\ref{EW}). These fast changes could be attributed to an inhomogeneous warped disk (Reig et al. \citeyear{2020A&A...640A..35R}). Alternatively, because the fast changes of EW(H$\alpha$) appear to be modulated with the orbital period of the system ($P_{\rm orb}=28.3$ days), reprocessed emission might be at their origin. The high spin-up rate (as seen in the second panel of Fig.~\ref{EW}) suggests that a short-lived accretion disk also formed around the neutron star.
Phase II corresponds to a low optical state in which not only did the optical and IR continuum flux gradually decrease, but the H$\alpha$ emission line also became weaker. Type II outbursts involve the accretion of a large amount of material from the equatorial disk and almost always lead to the dissipation of all or part of the disk \citep{2016A&A...590A.122R}. We identify this phase with the dissipation of the disk.
The evolution of the X-ray and optical emission provides some clues about how this dissipation took place. The presence of type I outbursts indicates that the neutron star accretes material from the disk in subsequent periastron passages, while at the same time, the decrease in the $(B-V)$ color indicates that the emission becomes bluer, as expected from a smaller or more compact disk. These results can be understood if the outer parts of the disk cease to be bound, that is, are expelled, while the inner parts of the disk collapse toward the star once the disk formation mechanism stops. The higher X-ray intensity of the last type I outburst in this phase may be due to the neutron star encountering a higher-density part of this distorted disk. This phase ends when the magnitude and colors reach a minimum. The system did not lose the disk entirely, as the H$\alpha$ line retained an emission profile and the minimum EW(H$\alpha$) was still $\sim$\,$-5$ \AA. Even the He I $\lambda$6678 line did not revert to an absorption profile; its equivalent width remained $\sim$\,0 \AA\ during 2020 and 2021, indicating that the line was filled with emission.
Phase III represents a phase in which the disk grew again. The optical and IR flux increased, as did the strength of the H$\alpha$ line. The overall emission became redder (i.e., $(B-V)$ increased). The latest observations show that EW(H$\alpha$) and EW(He I $\lambda$6678) approached pre-outburst values. After an initial brightening period that lasted for about a year, the $V$-band magnitude stabilized at a level of 12.8$^{\rm m}$, at which it has remained since September 2020.
Photometric variability with a characteristic period of $\sim$\,1250 days (MJD 40000--41250) and an amplitude of $\sim$\,0.15 magnitudes in the $B$ band was reported by \cite{2017ATel10989....1N} based on archival data from the Asiago Observatory taken between 1967 and 1976. This timescale is similar to the one seen during MJD 57250 to 58530 (see Fig.~\ref{V_IR}). When the $V$-band flare is ignored, the underlying trend of the $V$-band observations correlates very well with the IR-band observations. Given the strength of this correlation, the lack of a similar flare in the IR light curve can be attributed to the low cadence of the IR observations. The disk starts to grow at $\sim$\,MJD 57250, reaches a maximum size at $\sim$\,MJD 58000, when it begins to decline, and reaches a minimum at $\sim$\,MJD 58530.
The variation in the $V$/IR-band can be interpreted in terms of the evolution of the Be star's disk. This long-term smooth change in brightness is due to the formation or growth of the disk and its subsequent dissipation.
We estimate that the overall timescale for the dissipation and subsequent reformation of the circumstellar disk is about 1500 days. The dissipation, at about 300 days, was significantly faster than the formation, which took about 1200 days.
Although the evolution of the optical and IR parameters is affected by observational gaps, which occur when the source is too close to the Sun, Fig.~\ref{EW} shows a delay between the minima of the optical and IR continuum (which mark the shift from phase II to phase III) and the minimum of EW(H$\alpha$). This delay may be understood by invoking the different emission sites of the continuum and the lines in the equatorial disk of a Be star. According to \cite{2011IAUS..272..325C}, the disk contribution to the $V$ band typically forms very close to the star, within about 2\,$R_{\rm star}$. In contrast, the H$\alpha$ emission line forms at larger radii (Slettebak et al. \citeyear{1992ApJS...81..335S}). If this interpretation is correct, then the fact that the minimum was detected first in the continuum would imply that the dissipation of the disk began in the inner parts. The regular type I X-ray outbursts during phase II indicate that accretion onto the neutron star persisted for about one year after the main outburst. In terms of the strength of the H$\alpha$ line, the giant X-ray outburst took place when EW(H$\alpha$) $\sim$\,$-11$ \AA. The growth rate of EW(H$\alpha$) during phase III implies that the source will reach this value again at $\sim$\,MJD 60250. If we take this value ($-11$ \AA) as the triggering value of the giant outburst, then we should expect another large event by the end of 2023.
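As a rough consistency check, the extrapolation can be reproduced with an unweighted linear least-squares fit to the phase III EW(H$\alpha$) measurements listed in Table \ref{spec}. The sketch below is illustrative only: the choice of points and the use of ordinary least squares are our assumptions, not necessarily the exact procedure behind the quoted $\sim$\,MJD 60250.

```python
# Linear extrapolation of the phase III EW(Halpha) growth to the value at
# which the 2017 giant outburst occurred (EW ~ -11 A). The (MJD, EW) pairs
# are the phase III entries of the spectroscopic table; the unweighted
# least-squares fit is an illustrative sketch.
data = [
    (59052.0, -6.8), (59082.1, -6.7), (59107.0, -7.3), (59109.0, -6.7),
    (59122.1, -5.7), (59132.7, -6.8), (59133.8, -7.1), (59134.7, -7.0),
    (59438.0, -8.0), (59459.1, -8.3), (59463.0, -8.2), (59497.0, -8.2),
    (59497.7, -8.4),
]
t0 = data[0][0]                       # center times for numerical stability
xs = [t - t0 for t, _ in data]
ys = [e for _, e in data]
n = len(data)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # A per day (negative: EW grows)
intercept = (sy - slope * sx) / n                   # EW at t0, in Angstrom
mjd_trigger = t0 + (-11.0 - intercept) / slope      # MJD at which EW = -11 A
```

With these points the fit lands in the second half of 2023, consistent with the expectation stated above.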
The He I $\lambda$6678 line follows the same long-term trend as the H$\alpha$ line: fast, large-amplitude changes during phase I, a weakening during phase II, and a slow recovery during phase III. In the latest spectrum, taken in October 2021, a small peak started to develop and EW(He I $\lambda$6678) $< -0.1$ \AA, marking the formation of a new emission line. We note that the giant X-ray outburst occurred when EW(He I $\lambda$6678) was $\sim$\,$-0.2$ to $-0.5$ \AA.
\subsection{H$\alpha$ line profile variability and V\,/\,R ratio}
The V\,/\,R variability refers to variations in the relative intensity of the two peaks (known as the violet and red peaks) in the split profile of a spectral line. In many Be stars, these variations are quasi-periodic when the stars are monitored over a long enough period of time (Okazaki \citeyear{1997A&A...318..548O}).
We define the V\,/\,R ratio of the H$\alpha$ line as V\,/\,R $= [I(\mathrm{V}) - I_\mathrm{c}]\,/\,[I(\mathrm{R}) - I_\mathrm{c}]$, where $I(\mathrm{V})$, $I(\mathrm{R})$, and $I_\mathrm{c}$ are the intensities of the violet peak, the red peak, and the continuum, respectively.
We also measured the separation of the violet and red peaks by fitting two Gaussian functions to the spectral line profile. If the disk velocity field is assumed to be Keplerian, the peak separation provides a measure of that velocity field.
There is no obvious trend in the peak separation among the different line profiles; most values fall in the range 175--250 km s$^{-1}$.
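The two measurements above can be illustrated on a synthetic profile. In this minimal sketch the line parameters (amplitudes, peak velocities, width) are invented for illustration, and the peaks are located on a sampled grid; the actual analysis fits two Gaussians to the observed spectra.

```python
import math

# Synthetic, continuum-normalized double-peaked line profile: continuum (=1)
# plus a violet and a red Gaussian component. Amplitudes, peak velocities,
# and widths are illustrative, not fitted to real spectra.
def profile(v, a_v=1.8, a_r=1.5, v_v=-110.0, v_r=110.0, sigma=60.0):
    return (1.0
            + a_v * math.exp(-0.5 * ((v - v_v) / sigma) ** 2)
            + a_r * math.exp(-0.5 * ((v - v_r) / sigma) ** 2))

# Sample the profile on a velocity grid (km/s) and locate the two peaks
grid = [0.5 * k for k in range(-1000, 1001)]        # -500 .. +500 km/s
v_blue = max((v for v in grid if v < 0), key=profile)
v_red = max((v for v in grid if v >= 0), key=profile)

i_c = 1.0                                            # continuum level
log_vr = math.log10((profile(v_blue) - i_c) / (profile(v_red) - i_c))
delta_v = v_red - v_blue                             # peak separation, km/s
```

For these parameters the recovered separation ($\sim$\,220 km s$^{-1}$) falls in the observed 175--250 km s$^{-1}$ range, and the stronger violet component yields $\log(V/R) > 0$, i.e., a blue-dominated profile.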
The V\,/\,R ratios and the peak separation of the H$\alpha$ line are listed in Table \ref{V_R_table} and plotted in Fig.~\ref{V_R}.
The V\,/\,R variability has been associated with density perturbations in the disk (Hanuschik et al. \citeyear{1995A&A...300..163H}). When this density perturbation moves around inside the disk, the profile changes.
We observe a blue-dominated profile ($V>R$) in 2017 that turned into an almost single-peaked profile ($V\sim R$) in 2018 (see Fig.~4 of Reig et al. \citeyear{2020A&A...640A..35R}), and a red-dominated profile ($V<R$) from the end of 2018 to 2019.
The spectra in 2021 return to blue-dominated profiles ($V>R$). Thus, we may have covered an entire V\,/\,R cycle. The implied V\,/\,R quasi-period is about four years, which is normal for BeXBs (Mennickent et al. \citeyear{1997A&A...326.1167M}).
In principle, the question of whether the motion of the perturbation occurs in the same sense (prograde rotation) or opposite sense (retrograde rotation) to the stellar rotation can be determined from the observations.
\cite{1994A&A...288..558T} realized that a prograde rotation implies that the line profile passes, in order, through (I) a $V>R$ phase, (II) a shell absorption profile, (III) a $V<R$ phase, and (IV) a weak central absorption profile. A retrograde rotation would give rise to the reversed sequence: (IV)$\to$(III)$\to$(II)$\to$(I). Because of the small disk inclination (Reig et al. \citeyear{2020A&A...640A..35R}), we cannot use these characteristic line shapes to distinguish between prograde and retrograde rotation. These line shapes can translate into noticeable photometric variations, however.
According to \cite{1997A&A...326.1167M}, a minimum in brightness is expected during the $V=R$ phase that precedes the $V<R$ ($V>R$) phase if the motion is prograde (retrograde).
In Swift J0243.6+6124, the minimum brightness in the photometric $V$ band occurred during the $V=R$ phase before the $V<R$ phase began, at $\sim$\,MJD 58450, confirming the prograde precession of the density perturbation inside the disk.
\subsection{Variation in $(B-V)$ color index and inclination of the Be star's disk}
Figure~\ref{B-V_V} shows the $(B-V)$ color index as a function of the $V$ magnitude. We mark the different variability phases defined above with different colors. It has been noted that this kind of plot can be used to constrain the inclination angle of the system \citep{1983HvaOB...7...55H}.
Systems that show a positive correlation, in which the optical intensity increases and the emission becomes redder (i.e., $(B-V)$ increases) as the disk forms (or equivalently, as EW(H$\alpha$) increases), are thought to be seen at small or moderate inclination angles. Systems that show a negative correlation, in which the optical intensity decreases even though the disk is growing (EW(H$\alpha$) and $(B-V)$ increase), are associated with large inclination angles.
\cite{1983HvaOB...7...55H} introduced the concept of a pseudophotosphere to explain this effect. At large inclination angles (for equator-on stars), the inner parts of the Be envelope partly block the stellar photosphere, and thus the optical brightness decreases. Meanwhile, the overall emission becomes redder because the contribution of the disk increases. At small or intermediate inclination, as the disk grows, an overall (star plus disk) increase in brightness is expected.
Figure~\ref{B-V_V} suggests that Swift J0243.6+6124 is viewed at small or intermediate angles. This result is consistent with the 30$^{\circ}$ angle estimated by \cite{2020A&A...640A..35R} from the emission-line profile.
\section{Conclusions}
We have conducted spectroscopic and photometric observations of Swift J0243.6+6124 to study the changes in the structure of the circumstellar disk of the Be star and the material transfer between the Be star's circumstellar disk and the neutron star during 2017--2022. This period covers a giant X-ray outburst (type II) and several orbit-modulated outbursts (type I).
We divided our data into three phases based on the intensity of the X-ray, optical, and infrared emission.
Phase I covers the 2017 X-ray outburst. In this phase, the source reaches its largest equivalent widths of the H$\alpha$ and He I $\lambda$6678 lines and its brightest X-ray, optical, and infrared intensities, with the light curves characterized by outbursts or flares.
In phase II, the source displays a long-term decrease in the optical and infrared intensities. The X-ray band is characterized by the occurrence of several minor orbit-modulated outbursts.
In phase III, the source exhibits a long-term brightening of the optical and infrared magnitudes, and the equivalent width of the H$\alpha$ and He I $\lambda$6678 lines also increases. During this phase, no X-ray activity is observed.
We focused on the correlated optical, IR, and X-ray long-term variability. During the period of our observations following the giant X-ray outburst, the optical and IR continuum flux and the strength of the H$\alpha$ line first decreased and then increased. We interpret this long-term variability in terms of the dissipation and reformation of the Be star's circumstellar disk.
Although the 2017 giant X-ray outburst was the most luminous ever recorded in a BeXB, it did not lead to the complete loss of the Be star's disk.
We estimate that, at the rate at which the disk is reforming, the equivalent width of the H$\alpha$ line will reach pre-outburst values in about 1--1.5 years.
\begin{acknowledgements}
We acknowledge the support of the staff of the Xinglong 2.16 m telescope, the Xinglong 80 cm telescope, and the Xinglong 60 cm telescope. This work was partially supported by the Open Project Program of the CAS Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences.
We acknowledge the support of the staff of the Lijiang 2.4 m telescope. Funding for the telescope has been provided by CAS and the People's Government of Yunnan Province.
Skinakas Observatory is run by the University of Crete and the Foundation for Research and Technology-Hellas.
This research has made use of data provided by the Yaoan High Precision Telescope.
We acknowledge with thanks the variable star observations from the \emph{AAVSO} International Database contributed by observers worldwide and used in this research.
\emph{Swift}/BAT transient monitor results provided by the \emph{Swift}/BAT team. \emph{Fermi}/GBM results provided by the Fermi Science Support Center.
This publication makes use of data products from \emph{NEOWISE}, which is a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the Planetary Science Division of the National Aeronautics and Space Administration.
This work is supported by the National Key R\&D Program of China (2021YFA0718500) and the National Natural Science Foundation of China (Grants No. U2031205, 11733009).
X. Wang is supported by the National Natural Science Foundation of China (NSFC grants 12033003 and 11633002), the Scholar Program of Beijing Academy of Science and Technology (DZ:BS202002), and the Tencent Xplorer Prize.
\end{acknowledgements}
\begin{appendix}
\section{Table of spectroscopic observations}
\begin{table*}
\caption{Spectroscopic observations of Swift J0243.6+6124 between 2017 and 2021.}
\label{spec}
\centering
\begin{tabular}{cclccc}
\hline\hline
Date & MJD & Telescope/ & Wavelength Range & EW(H$\alpha$) & EW(He I $\lambda$6678) \\
(DD-MM-YYYY) & & Instrument & (\AA) & (\AA) & (\AA) \\
\hline
04-10-2017 & 58030.8342 & 2.16 m/BFOSC & 4000-8700 & -10.6 $\pm$ 0.2 & ... \\
13-10-2017 & 58039.6745 & 2.16 m/BFOSC & 4000-8700 & -10.1 $\pm$ 0.1 & ... \\
27-10-2017 & 58053.6744 & 2.16 m/OMR & 5500-6900 & -8.5 $\pm$ 0.1 & -0.39 $\pm$ 0.08 \\
27-10-2017 & 58053.8025 & 2.16 m/OMR & 5500-6900 & -8.0 $\pm$ 0.4 & -0.41 $\pm$ 0.11 \\
28-10-2017 & 58054.6191 & 2.16 m/OMR & 5500-6900 & -8.7 $\pm$ 0.4 & -0.32 $\pm$ 0.12 \\
29-10-2017 & 58055.6853 & 2.16 m/OMR & 5500-6900 & -9.4 $\pm$ 0.3 & -0.31 $\pm$ 0.16 \\
11-11-2017 & 58068.6412 & 2.16 m/OMR & 5500-6900 & -10.3 $\pm$ 0.6 & -0.28 $\pm$ 0.23 \\
11-11-2017 & 58068.7547 & 2.16 m/OMR & 5500-6900 & -10.6 $\pm$ 0.2 & -0.52 $\pm$ 0.12 \\
12-11-2017 & 58069.7245 & 2.16 m/OMR & 5500-6900 & -11.0 $\pm$ 0.5 & -0.42 $\pm$ 0.09 \\
13-11-2017 & 58070.7783 & 2.16 m/BFOSC & 4000-8700 & -11.2 $\pm$ 0.2 & ... \\
23-11-2017 & 58080.5667 & 2.4 m/YFOSC & 4970-9830 & -10.0 $\pm$ 0.2 & -0.23 $\pm$ 0.06 \\
23-11-2017 & 58080.6792 & 2.4 m/YFOSC & 4970-9830 & -10.0 $\pm$ 0.2 & -0.22 $\pm$ 0.06 \\
24-11-2017 & 58081.5326 & 2.4 m/YFOSC & 4970-9830 & -9.6 $\pm$ 0.2 & -0.27 $\pm$ 0.06 \\
25-11-2017 & 58082.5549 & 2.4 m/YFOSC & 4970-9830 & -9.9 $\pm$ 0.1 & -0.22 $\pm$ 0.03 \\
07-01-2018 & 58125.6306 & 2.4 m/YFOSC & 4970-9830 & -10.7 $\pm$ 0.1 & -0.25 $\pm$ 0.09 \\
08-01-2018 & 58126.5597 & 2.4 m/YFOSC & 3700-7630 & -10.6 $\pm$ 0.2 & -0.30 $\pm$ 0.06 \\
10-01-2018 & 58128.5208 & 2.16 m/BFOSC & 4000-8700 & -10.3 $\pm$ 0.4 & ... \\
23-01-2018 & 58141.5189 & 2.16 m/BFOSC & 4000-8700 & -11.2 $\pm$ 0.1 & ... \\
24-01-2018 & 58142.5440 & 2.16 m/BFOSC & 4000-8700 & -11.7 $\pm$ 0.1 & ... \\
12-02-2018 & 58160.4380 & 2.16 m/BFOSC & 4000-8700 & -10.8 $\pm$ 0.1 & ... \\
19-02-2018 & 58168.4470 & 2.16 m/BFOSC & 4000-8700 & -10.8 $\pm$ 0.2 & ... \\
25-03-2018 & 58202.4619 & 2.16 m/BFOSC & 4000-8700 & -10.7 $\pm$ 0.3 & ... \\
17-09-2018 & 58378.7433 & 2.16 m/OMR & 5500-6900 & -8.2 $\pm$ 0.5 & -0.30 $\pm$ 0.03 \\
23-09-2018 & 58384.7466 & 2.16 m/BFOSC & 4000-8700 & -9.0 $\pm$ 0.1 & ... \\
03-10-2018 & 58394.8377 & 2.16 m/BFOSC & 4000-8700 & -9.4 $\pm$ 0.1 & ... \\
15-11-2018 & 58437.6707 & 2.16 m/BFOSC & 4000-8700 & -7.3 $\pm$ 0.6 & ... \\
17-11-2018 & 58439.6922 & 2.16 m/BFOSC & 4000-8700 & -7.5 $\pm$ 0.2 & ... \\
21-11-2018 & 58443.5873 & 2.16 m/BFOSC & 4000-8700 & -7.6 $\pm$ 0.0 & ... \\
22-11-2018 & 58444.6232 & 2.16 m/BFOSC & 4000-8700 & -7.4 $\pm$ 0.2 & ... \\
27-11-2018 & 58449.6591 & 2.16 m/BFOSC & 4000-8700 & -8.1 $\pm$ 0.1 & ... \\
04-12-2018 & 58455.7521 & 2.16 m/BFOSC & 4000-8700 & -7.4 $\pm$ 0.7 & ... \\
25-12-2018 & 58477.5579 & 2.4 m/YFOSC & 4970-9830 & -6.8 $\pm$ 0.1 & -0.12 $\pm$ 0.06 \\
26-12-2018 & 58478.5499 & 2.4 m/YFOSC & 4970-9830 & -6.9 $\pm$ 0.2 & -0.13 $\pm$ 0.06 \\
27-12-2018 & 58479.5959 & 2.4 m/YFOSC & 4970-9830 & -7.0 $\pm$ 0.2 & -0.13 $\pm$ 0.05 \\
28-12-2018 & 58480.5463 & 2.4 m/YFOSC & 4970-9830 & -7.2 $\pm$ 0.2 & -0.20 $\pm$ 0.08 \\
09-01-2019 & 58492.6229 & 2.16 m/BFOSC & 4000-8700 & -7.2 $\pm$ 0.2 & ... \\
27-01-2019 & 58510.5146 & 2.16 m/BFOSC & 5200-8200 & -7.3 $\pm$ 0.1 & ... \\
27-01-2019 & 58510.5481 & 2.4 m/YFOSC & 3500-8750 & -7.0 $\pm$ 0.1 & -0.18 $\pm$ 0.07 \\
09-02-2019 & 58523.4430 & 2.16 m/BFOSC & 4400-8700 & -6.6 $\pm$ 0.4 & ... \\
07-10-2019 & 58763.8718 & 2.16 m/BFOSC & 4000-8700 & -5.5 $\pm$ 0.3 & ... \\
03-11-2019 & 58790.7485 & 2.16 m/OMR & 5500-6900 & -5.2 $\pm$ 0.4 & -0.04 $\pm$ 0.03 \\
04-11-2019 & 58792.2656 & 2.16 m/OMR & 5500-6900 & -5.3 $\pm$ 0.3 & 0.01 $\pm$ 0.04 \\
26-12-2019 & 58843.6260 & 2.4 m/YFOSC & 4970-9830 & -6.0 $\pm$ 0.2 & 0.06 $\pm$ 0.03 \\
21-07-2020 & 59052.0035 & SKO/1.3 m & 5400-7300 & -6.8 $\pm$ 0.1 & -0.01 $\pm$ 0.02 \\
20-08-2020 & 59082.0787 & SKO/1.3 m & 5400-7300 & -6.7 $\pm$ 0.0 & -0.01 $\pm$ 0.06 \\
14-09-2020 & 59107.0080 & SKO/1.3 m & 5400-7300 & -7.3 $\pm$ 0.1 & ... \\
16-09-2020 & 59108.9829 & SKO/1.3 m & 5400-7300 & -6.7 $\pm$ 0.0 & -0.03 $\pm$ 0.05 \\
29-09-2020 & 59122.0820 & SKO/1.3 m & 5400-7300 & -5.7 $\pm$ 0.1 & 0.08 $\pm$ 0.04 \\
10-10-2020 & 59132.6655 & 2.16 m/BFOSC & 5800-8280 & -6.8 $\pm$ 0.2 & 0.01 $\pm$ 0.05 \\
11-10-2020 & 59133.8369 & 2.16 m/BFOSC & 5800-8280 & -7.1 $\pm$ 0.1 & 0.07 $\pm$ 0.02 \\
12-10-2020 & 59134.7450 & 2.16 m/BFOSC & 5800-8280 & -7.0 $\pm$ 0.1 & -0.01 $\pm$ 0.04 \\
11-08-2021 & 59437.9970 & SKO/1.3 m & 5400-7300 & -8.0 $\pm$ 0.1 & -0.07 $\pm$ 0.04 \\
01-09-2021 & 59459.0720 & SKO/1.3 m & 5400-7300 & -8.3 $\pm$ 0.0 & -0.06 $\pm$ 0.03 \\
05-09-2021 & 59463.0289 & SKO/1.3 m & 5400-7300 & -8.2 $\pm$ 0.1 & -0.09 $\pm$ 0.04 \\
09-10-2021 & 59496.9755 & SKO/1.3 m & 5400-7300 & -8.2 $\pm$ 0.1 & -0.07 $\pm$ 0.05 \\
10-10-2021 & 59497.6557 & 2.16 m/OMR & 6000-6850 & -8.4 $\pm$ 0.1 & -0.11 $\pm$ 0.06 \\
\hline
\end{tabular}
\end{table*}
\section{Table of V\,/\,R ratio and peak separation}
\begin{table*}
\caption{V\,/\,R ratio and peak separation of Swift J0243.6+6124 between 2017 and 2021.}
\label{V_R_table}
\centering
\begin{tabular}{cclccc}
\hline\hline
Date & MJD & Telescope/ & Wavelength Range & $\log(V\,/\,R)$(H$\alpha$) & $\Delta$V(H$\alpha$)\\
(DD-MM-YYYY) & & Instrument & (\AA) & & (km s$^{-1}$)\\
\hline
27-10-2017 & 58053.6744 & 2.16 m/OMR & 5500-6900 & 0.453 & 230\\
27-10-2017 & 58053.8025 & 2.16 m/OMR & 5500-6900 & 0.495 & 211\\
28-10-2017 & 58054.6191 & 2.16 m/OMR & 5500-6900 & 0.537 & 217\\
29-10-2017 & 58055.6853 & 2.16 m/OMR & 5500-6900 & 0.462 & 228\\
11-11-2017 & 58068.6412 & 2.16 m/OMR & 5500-6900 & 0.390 & 195\\
11-11-2017 & 58068.7547 & 2.16 m/OMR & 5500-6900 & 0.404 & 195\\
12-11-2017 & 58069.7245 & 2.16 m/OMR & 5500-6900 & 0.516 & 197\\
17-09-2018 & 58378.7433 & 2.16 m/OMR & 5500-6900 & 0.603 & 222\\
15-11-2018 & 58437.6707 & 2.16 m/BFOSC & 4000-8700 & -0.042 & ...\\
25-12-2018 & 58477.5579 & 2.4 m/YFOSC & 4970-9830 & -0.406 & 232\\
03-11-2019 & 58790.7485 & 2.16 m/OMR & 5500-6900 & -0.193 & 212\\
04-11-2019 & 58792.2656 & 2.16 m/OMR & 5500-6900 & -0.168 & 185\\
20-08-2020 & 59082.0787 & SKO/1.3 m & 5400-7300 & 0.022 & 140\\
09-10-2021 & 59496.9755 & SKO/1.3 m & 5400-7300 & 0.314 & 218\\
10-10-2021 & 59497.6557 & 2.16 m/OMR & 6000-6850 & 0.198 & 207\\
\hline
\end{tabular}
\end{table*}
\section{Table of photometric observations}
\longtab[1]{
\begin{longtable}{cccccc}
\caption{Photometric observations of Swift J0243.6+6124 between 2017 and 2022.}\\
\hline
\hline
MJD & Telescope & \textit{B} & \textit{V} & \textit{R} & \textit{I} \\
& & (mag) & (mag) & (mag) & (mag) \\
\hline
\endfirsthead
\caption{continued.} \\
\hline
\hline
MJD & Telescope & \textit{B} & \textit{V} & \textit{R} & \textit{I} \\
& & (mag) & (mag) & (mag) & (mag) \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
58038.62 & 60cm & 13.725 $\pm$ 0.010 & 12.777 $\pm$ 0.010 & 12.065 $\pm$ 0.010 & 11.241 $\pm$ 0.020 \\
58039.81 & 60cm & 13.704 $\pm$ 0.010 & 12.729 $\pm$ 0.010 & 12.030 $\pm$ 0.010 & 11.206 $\pm$ 0.020 \\
58041.76 & 60cm & 13.652 $\pm$ 0.010 & 12.720 $\pm$ 0.010 & 12.014 $\pm$ 0.010 & 11.176 $\pm$ 0.020 \\
58042.72 & 60cm & 13.700 $\pm$ 0.010 & 12.755 $\pm$ 0.010 & 12.046 $\pm$ 0.010 & 11.234 $\pm$ 0.020 \\
58045.72 & 60cm & 13.724 $\pm$ 0.010 & 12.768 $\pm$ 0.010 & 12.066 $\pm$ 0.010 & 11.216 $\pm$ 0.020 \\
58046.63 & 60cm & 13.710 $\pm$ 0.010 & 12.757 $\pm$ 0.010 & 12.036 $\pm$ 0.010 & 11.198 $\pm$ 0.020 \\
58049.86 & 60cm & 13.681 $\pm$ 0.010 & 12.680 $\pm$ 0.010 & 12.232 $\pm$ 0.010 & 11.152 $\pm$ 0.020 \\
58052.78 & 60cm & 13.582 $\pm$ 0.010 & 12.632 $\pm$ 0.010 & 11.963 $\pm$ 0.010 & 11.093 $\pm$ 0.020 \\
58053.70 & 80cm & 13.644 $\pm$ 0.010 & 12.702 $\pm$ 0.010 & 11.974 $\pm$ 0.010 & 11.156 $\pm$ 0.020 \\
58053.77 & 60cm & 13.636 $\pm$ 0.010 & 12.687 $\pm$ 0.010 & 11.944 $\pm$ 0.010 & 11.131 $\pm$ 0.020 \\
58054.63 & 60cm & 13.405 $\pm$ 0.010 & 12.476 $\pm$ 0.010 & 11.861 $\pm$ 0.010 & 11.058 $\pm$ 0.020 \\
58054.70 & 80cm & 13.543 $\pm$ 0.010 & 12.639 $\pm$ 0.011 & 11.942 $\pm$ 0.011 & 11.137 $\pm$ 0.021 \\
58055.64 & 60cm & 13.617 $\pm$ 0.010 & 12.671 $\pm$ 0.010 & 11.963 $\pm$ 0.010 & 11.155 $\pm$ 0.020 \\
58055.70 & 80cm & 13.631 $\pm$ 0.010 & 12.683 $\pm$ 0.010 & 11.961 $\pm$ 0.011 & 11.140 $\pm$ 0.020 \\
58056.62 & 60cm & 13.626 $\pm$ 0.010 & 12.680 $\pm$ 0.010 & 11.977 $\pm$ 0.010 & 11.160 $\pm$ 0.020 \\
58056.74 & 80cm & 13.629 $\pm$ 0.010 & 12.681 $\pm$ 0.011 & 11.952 $\pm$ 0.011 & 11.126 $\pm$ 0.020 \\
58059.78 & 60cm & 13.233 $\pm$ 0.010 & 12.361 $\pm$ 0.010 & 11.694 $\pm$ 0.010 & 10.949 $\pm$ 0.020 \\
58061.71 & 80cm & 13.465 $\pm$ 0.017 & 12.577 $\pm$ 0.015 & 11.874 $\pm$ 0.013 & 11.038 $\pm$ 0.022 \\
58061.80 & 60cm & 13.443 $\pm$ 0.010 & 12.491 $\pm$ 0.010 & 11.767 $\pm$ 0.010 & 10.956 $\pm$ 0.020 \\
58062.65 & 80cm & 13.547 $\pm$ 0.014 & 12.600 $\pm$ 0.013 & 11.857 $\pm$ 0.012 & 11.028 $\pm$ 0.021 \\
58063.56 & 80cm & 13.433 $\pm$ 0.027 & 12.537 $\pm$ 0.023 & 11.801 $\pm$ 0.019 & 10.992 $\pm$ 0.024 \\
58064.65 & 80cm & 13.483 $\pm$ 0.011 & 12.556 $\pm$ 0.011 & 11.838 $\pm$ 0.011 & 11.007 $\pm$ 0.020 \\
58065.69 & 80cm & 13.541 $\pm$ 0.012 & 12.574 $\pm$ 0.012 & 11.846 $\pm$ 0.012 & 11.019 $\pm$ 0.021 \\
58067.57 & 60cm & 13.398 $\pm$ 0.010 & 12.471 $\pm$ 0.010 & 11.804 $\pm$ 0.010 & 10.975 $\pm$ 0.020 \\
58069.57 & 60cm & 13.504 $\pm$ 0.010 & 12.554 $\pm$ 0.010 & 11.849 $\pm$ 0.010 & 11.008 $\pm$ 0.020 \\
58073.52 & 60cm & 13.571 $\pm$ 0.010 & 12.613 $\pm$ 0.010 & 11.890 $\pm$ 0.010 & 11.078 $\pm$ 0.020 \\
58074.65 & 80cm & 13.440 $\pm$ 0.010 & 12.518 $\pm$ 0.011 & 11.814 $\pm$ 0.011 & 10.990 $\pm$ 0.021 \\
58080.61 & 2.4m & 13.628 $\pm$ 0.010 & 12.663 $\pm$ 0.010 & 11.932 $\pm$ 0.010 & 11.102 $\pm$ 0.020 \\
58081.53 & 2.4m & 13.638 $\pm$ 0.010 & 12.677 $\pm$ 0.010 & 11.943 $\pm$ 0.010 & 11.103 $\pm$ 0.020 \\
58082.55 & 2.4m & 13.649 $\pm$ 0.010 & 12.681 $\pm$ 0.010 & 11.942 $\pm$ 0.010 & 11.122 $\pm$ 0.020 \\
58083.60 & 60cm & 13.540 $\pm$ 0.010 & 12.623 $\pm$ 0.010 & 11.913 $\pm$ 0.010 & 11.096 $\pm$ 0.020 \\
58086.62 & 60cm & 13.572 $\pm$ 0.010 & 12.630 $\pm$ 0.010 & 11.872 $\pm$ 0.010 & 11.142 $\pm$ 0.020 \\
58088.56 & 60cm & 13.591 $\pm$ 0.010 & 12.643 $\pm$ 0.010 & 11.931 $\pm$ 0.010 & 11.129 $\pm$ 0.020 \\
58095.62 & 60cm & 13.661 $\pm$ 0.010 & 12.716 $\pm$ 0.010 & 12.002 $\pm$ 0.010 & 11.187 $\pm$ 0.020 \\
58117.58 & 80cm & 13.700 $\pm$ 0.011 & 12.762 $\pm$ 0.011 & 12.103 $\pm$ 0.011 & 11.215 $\pm$ 0.020 \\
58118.57 & 80cm & 13.720 $\pm$ 0.011 & 12.774 $\pm$ 0.011 & 12.040 $\pm$ 0.011 & 11.218 $\pm$ 0.020 \\
58123.64 & 60cm & 13.733 $\pm$ 0.010 & 12.800 $\pm$ 0.010 & 12.099 $\pm$ 0.010 & 11.296 $\pm$ 0.020 \\
58125.64 & 2.4m & 13.735 $\pm$ 0.010 & 12.783 $\pm$ 0.010 & 12.047 $\pm$ 0.011 & 11.242 $\pm$ 0.020 \\
58126.56 & 2.4m & 13.699 $\pm$ 0.010 & 12.772 $\pm$ 0.010 & 12.068 $\pm$ 0.010 & 11.235 $\pm$ 0.020 \\
58135.60 & 60cm & 13.751 $\pm$ 0.010 & 12.805 $\pm$ 0.010 & 12.097 $\pm$ 0.010 & 11.285 $\pm$ 0.020 \\
58138.57 & 60cm & 13.488 $\pm$ 0.010 & 12.770 $\pm$ 0.010 & 12.064 $\pm$ 0.010 & 11.119 $\pm$ 0.020 \\
58148.53 & 60cm & 13.724 $\pm$ 0.010 & 12.794 $\pm$ 0.010 & 12.079 $\pm$ 0.010 & 11.298 $\pm$ 0.020 \\
58150.51 & 60cm & 13.739 $\pm$ 0.010 & 12.786 $\pm$ 0.010 & 12.088 $\pm$ 0.010 & 11.283 $\pm$ 0.020 \\
58152.52 & 60cm & 13.625 $\pm$ 0.010 & 12.825 $\pm$ 0.010 & 12.145 $\pm$ 0.010 & ... \\
58153.52 & 60cm & 13.732 $\pm$ 0.010 & 12.812 $\pm$ 0.010 & 12.128 $\pm$ 0.010 & 11.320 $\pm$ 0.020 \\
58156.43 & 60cm & 13.690 $\pm$ 0.010 & 12.803 $\pm$ 0.010 & 12.081 $\pm$ 0.010 & 11.277 $\pm$ 0.020 \\
58158.44 & 60cm & 13.733 $\pm$ 0.010 & 12.788 $\pm$ 0.010 & 12.085 $\pm$ 0.010 & 11.264 $\pm$ 0.020 \\
58161.47 & 60cm & 13.751 $\pm$ 0.010 & 12.758 $\pm$ 0.010 & 12.044 $\pm$ 0.010 & 11.219 $\pm$ 0.020 \\
58171.47 & 60cm & 13.792 $\pm$ 0.010 & 12.845 $\pm$ 0.010 & 12.116 $\pm$ 0.010 & 11.340 $\pm$ 0.020 \\
58173.49 & 60cm & 13.719 $\pm$ 0.010 & 12.792 $\pm$ 0.010 & 12.103 $\pm$ 0.010 & 11.290 $\pm$ 0.020 \\
58175.47 & 60cm & 13.756 $\pm$ 0.010 & 12.815 $\pm$ 0.010 & 12.128 $\pm$ 0.010 & 11.325 $\pm$ 0.020 \\
58178.47 & 60cm & 13.735 $\pm$ 0.010 & 12.847 $\pm$ 0.010 & 12.149 $\pm$ 0.010 & 11.337 $\pm$ 0.020 \\
58179.46 & 60cm & 13.698 $\pm$ 0.010 & 12.780 $\pm$ 0.010 & 12.111 $\pm$ 0.010 & 11.346 $\pm$ 0.020 \\
58185.46 & 60cm & 13.779 $\pm$ 0.010 & 12.843 $\pm$ 0.010 & 12.125 $\pm$ 0.010 & 11.323 $\pm$ 0.020 \\
58186.46 & 60cm & 13.626 $\pm$ 0.010 & 12.763 $\pm$ 0.010 & 12.052 $\pm$ 0.010 & 11.250 $\pm$ 0.020 \\
58187.47 & 60cm & 13.712 $\pm$ 0.010 & 12.769 $\pm$ 0.010 & 12.114 $\pm$ 0.010 & 11.312 $\pm$ 0.020 \\
58192.46 & 60cm & 13.727 $\pm$ 0.010 & 12.794 $\pm$ 0.010 & 12.067 $\pm$ 0.010 & 11.296 $\pm$ 0.020 \\
58200.46 & 60cm & 13.780 $\pm$ 0.010 & 12.844 $\pm$ 0.010 & 12.135 $\pm$ 0.010 & 11.315 $\pm$ 0.020 \\
58201.49 & 60cm & 13.805 $\pm$ 0.010 & 12.873 $\pm$ 0.010 & 12.184 $\pm$ 0.010 & 11.381 $\pm$ 0.020 \\
58202.48 & 60cm & 13.771 $\pm$ 0.010 & 12.836 $\pm$ 0.010 & 12.158 $\pm$ 0.010 & 11.372 $\pm$ 0.020 \\
58203.49 & 60cm & 13.675 $\pm$ 0.012 & 12.757 $\pm$ 0.011 & 12.081 $\pm$ 0.010 & 11.293 $\pm$ 0.020 \\
58216.47 & 60cm & 13.790 $\pm$ 0.011 & 12.851 $\pm$ 0.010 & 12.136 $\pm$ 0.010 & 11.355 $\pm$ 0.020 \\
58218.48 & 60cm & 13.738 $\pm$ 0.010 & 12.844 $\pm$ 0.010 & 12.088 $\pm$ 0.010 & 11.365 $\pm$ 0.020 \\
58219.48 & 60cm & 13.780 $\pm$ 0.010 & 12.860 $\pm$ 0.010 & 12.137 $\pm$ 0.010 & 11.376 $\pm$ 0.020 \\
58368.74 & 60cm & 13.863 $\pm$ 0.010 & 12.899 $\pm$ 0.010 & 12.275 $\pm$ 0.010 & 11.540 $\pm$ 0.020 \\
58369.76 & 80cm & 13.827 $\pm$ 0.012 & 12.920 $\pm$ 0.011 & 12.059 $\pm$ 0.010 & 11.504 $\pm$ 0.021 \\
58370.78 & 60cm & 13.806 $\pm$ 0.010 & 12.905 $\pm$ 0.010 & 12.238 $\pm$ 0.010 & 11.474 $\pm$ 0.020 \\
58377.75 & 80cm & 13.859 $\pm$ 0.011 & 12.940 $\pm$ 0.010 & 12.259 $\pm$ 0.011 & 11.498 $\pm$ 0.020 \\
58378.79 & 80cm & 13.854 $\pm$ 0.011 & 12.935 $\pm$ 0.010 & 12.273 $\pm$ 0.010 & 11.481 $\pm$ 0.020 \\
58382.83 & 60cm & 13.849 $\pm$ 0.010 & 12.926 $\pm$ 0.010 & 12.280 $\pm$ 0.010 & 11.496 $\pm$ 0.020 \\
58383.78 & 60cm & 13.853 $\pm$ 0.010 & 12.941 $\pm$ 0.010 & 12.295 $\pm$ 0.010 & 11.501 $\pm$ 0.020 \\
58385.75 & 60cm & 13.803 $\pm$ 0.010 & 12.873 $\pm$ 0.010 & 12.206 $\pm$ 0.010 & 11.510 $\pm$ 0.020 \\
58397.71 & 60cm & 13.860 $\pm$ 0.010 & 12.960 $\pm$ 0.010 & 12.310 $\pm$ 0.010 & 11.529 $\pm$ 0.020 \\
58398.80 & 60cm & 13.855 $\pm$ 0.010 & 12.930 $\pm$ 0.010 & 12.332 $\pm$ 0.010 & 11.552 $\pm$ 0.020 \\
58408.80 & 60cm & 13.887 $\pm$ 0.010 & 12.963 $\pm$ 0.010 & 12.285 $\pm$ 0.010 & 11.519 $\pm$ 0.020 \\
58409.79 & 60cm & 13.873 $\pm$ 0.010 & 12.945 $\pm$ 0.010 & 12.297 $\pm$ 0.010 & 11.527 $\pm$ 0.020 \\
58410.72 & 60cm & 13.886 $\pm$ 0.010 & 12.971 $\pm$ 0.010 & 12.282 $\pm$ 0.010 & 11.514 $\pm$ 0.020 \\
58411.67 & 60cm & 13.812 $\pm$ 0.010 & 12.922 $\pm$ 0.010 & 12.268 $\pm$ 0.010 & 11.494 $\pm$ 0.020 \\
58414.58 & 60cm & 13.850 $\pm$ 0.010 & 12.951 $\pm$ 0.010 & 12.293 $\pm$ 0.010 & 11.530 $\pm$ 0.020 \\
58421.69 & 60cm & 13.887 $\pm$ 0.010 & 12.969 $\pm$ 0.010 & 12.294 $\pm$ 0.010 & 11.520 $\pm$ 0.020 \\
58422.67 & 60cm & 13.919 $\pm$ 0.010 & 12.970 $\pm$ 0.010 & 12.300 $\pm$ 0.010 & 11.525 $\pm$ 0.020 \\
58425.63 & 60cm & 13.858 $\pm$ 0.010 & 12.977 $\pm$ 0.010 & 12.323 $\pm$ 0.010 & 11.549 $\pm$ 0.020 \\
58428.75 & 80cm & 13.855 $\pm$ 0.011 & 12.967 $\pm$ 0.010 & 12.268 $\pm$ 0.010 & 11.557 $\pm$ 0.020 \\
58430.68 & 80cm & 13.883 $\pm$ 0.011 & 12.978 $\pm$ 0.010 & 12.313 $\pm$ 0.010 & 11.564 $\pm$ 0.020 \\
58456.62 & 80cm & 13.896 $\pm$ 0.011 & 12.993 $\pm$ 0.010 & 12.324 $\pm$ 0.011 & 11.581 $\pm$ 0.021 \\
58468.53 & 60cm & 13.902 $\pm$ 0.010 & 12.996 $\pm$ 0.010 & 12.334 $\pm$ 0.010 & 11.570 $\pm$ 0.020 \\
58469.59 & 60cm & 13.896 $\pm$ 0.010 & 12.973 $\pm$ 0.010 & 12.342 $\pm$ 0.010 & 11.571 $\pm$ 0.020 \\
58476.54 & 60cm & 13.814 $\pm$ 0.010 & 12.921 $\pm$ 0.010 & 12.276 $\pm$ 0.010 & 11.512 $\pm$ 0.020 \\
58477.53 & 60cm & 13.835 $\pm$ 0.010 & 12.943 $\pm$ 0.010 & 12.285 $\pm$ 0.010 & 11.525 $\pm$ 0.020 \\
58477.55 & 2.4m & 13.864 $\pm$ 0.010 & 12.963 $\pm$ 0.010 & 12.339 $\pm$ 0.010 & 11.532 $\pm$ 0.020 \\
58478.54 & 60cm & 13.844 $\pm$ 0.010 & 12.938 $\pm$ 0.010 & 12.275 $\pm$ 0.010 & 11.523 $\pm$ 0.020 \\
58478.55 & 2.4m & 13.881 $\pm$ 0.010 & 12.971 $\pm$ 0.010 & 12.301 $\pm$ 0.010 & 11.543 $\pm$ 0.020 \\
58479.60 & 2.4m & 13.884 $\pm$ 0.010 & 12.975 $\pm$ 0.010 & 12.304 $\pm$ 0.010 & 11.548 $\pm$ 0.020 \\
58480.54 & 2.4m & 13.887 $\pm$ 0.010 & 12.970 $\pm$ 0.010 & 12.306 $\pm$ 0.010 & 11.541 $\pm$ 0.020 \\
58481.55 & 60cm & 13.806 $\pm$ 0.010 & 12.938 $\pm$ 0.010 & 12.316 $\pm$ 0.010 & 11.516 $\pm$ 0.020 \\
58482.55 & 60cm & 13.890 $\pm$ 0.010 & 12.982 $\pm$ 0.010 & 12.330 $\pm$ 0.010 & 11.554 $\pm$ 0.020 \\
58484.53 & 60cm & 13.885 $\pm$ 0.010 & 12.974 $\pm$ 0.010 & 12.320 $\pm$ 0.010 & 11.554 $\pm$ 0.020 \\
58485.54 & 60cm & 13.876 $\pm$ 0.010 & 12.960 $\pm$ 0.010 & 12.296 $\pm$ 0.010 & 11.538 $\pm$ 0.020 \\
58487.53 & 60cm & 13.806 $\pm$ 0.010 & 12.894 $\pm$ 0.010 & 12.296 $\pm$ 0.010 & 11.520 $\pm$ 0.020 \\
58489.58 & 60cm & 13.881 $\pm$ 0.010 & 12.965 $\pm$ 0.010 & 12.303 $\pm$ 0.010 & 11.506 $\pm$ 0.020 \\
58490.63 & 60cm & 13.797 $\pm$ 0.010 & 12.881 $\pm$ 0.010 & 12.278 $\pm$ 0.010 & 11.498 $\pm$ 0.020 \\
58491.52 & 60cm & 13.846 $\pm$ 0.010 & 12.965 $\pm$ 0.010 & 12.306 $\pm$ 0.010 & 11.533 $\pm$ 0.020 \\
58499.49 & 60cm & 13.822 $\pm$ 0.010 & 12.925 $\pm$ 0.010 & 12.287 $\pm$ 0.010 & 11.517 $\pm$ 0.020 \\
58500.46 & 60cm & 13.847 $\pm$ 0.010 & 12.977 $\pm$ 0.010 & 12.302 $\pm$ 0.010 & 11.557 $\pm$ 0.020 \\
58501.48 & 60cm & 13.852 $\pm$ 0.010 & 12.951 $\pm$ 0.010 & 12.261 $\pm$ 0.010 & 11.527 $\pm$ 0.020 \\
58504.47 & 60cm & 13.869 $\pm$ 0.010 & 12.955 $\pm$ 0.010 & 12.313 $\pm$ 0.010 & 11.556 $\pm$ 0.020 \\
58509.44 & 60cm & 13.899 $\pm$ 0.010 & 12.990 $\pm$ 0.010 & 12.318 $\pm$ 0.010 & 11.567 $\pm$ 0.020 \\
58510.45 & 60cm & 13.905 $\pm$ 0.010 & 13.006 $\pm$ 0.010 & 12.348 $\pm$ 0.010 & 11.561 $\pm$ 0.020 \\
58510.54 & 2.16m & 13.844 $\pm$ 0.010 & 12.945 $\pm$ 0.010 & 12.301 $\pm$ 0.010 & 11.534 $\pm$ 0.020 \\
58511.46 & 60cm & 13.904 $\pm$ 0.010 & 12.994 $\pm$ 0.010 & 12.360 $\pm$ 0.010 & 11.584 $\pm$ 0.020 \\
58514.47 & 60cm & 13.850 $\pm$ 0.010 & 12.957 $\pm$ 0.010 & 12.281 $\pm$ 0.010 & 11.545 $\pm$ 0.020 \\
58515.45 & 60cm & 13.908 $\pm$ 0.010 & 12.998 $\pm$ 0.010 & 12.370 $\pm$ 0.010 & 11.565 $\pm$ 0.020 \\
58525.45 & 60cm & 13.826 $\pm$ 0.010 & 12.909 $\pm$ 0.010 & 12.275 $\pm$ 0.010 & 11.538 $\pm$ 0.020 \\
58534.47 & 60cm & 13.886 $\pm$ 0.010 & 12.997 $\pm$ 0.010 & ... & 11.597 $\pm$ 0.020 \\
58537.45 & 60cm & 13.872 $\pm$ 0.010 & 12.963 $\pm$ 0.010 & 12.315 $\pm$ 0.010 & 11.564 $\pm$ 0.020 \\
58538.52 & 60cm & 13.899 $\pm$ 0.010 & 12.952 $\pm$ 0.010 & 12.318 $\pm$ 0.010 & 11.558 $\pm$ 0.020 \\
58728.83 & 60cm & 13.835 $\pm$ 0.010 & 12.893 $\pm$ 0.010 & 12.233 $\pm$ 0.010 & ... \\
58731.83 & 60cm & 13.846 $\pm$ 0.010 & 12.926 $\pm$ 0.010 & 12.263 $\pm$ 0.010 & 11.491 $\pm$ 0.020 \\
58732.77 & 60cm & 13.838 $\pm$ 0.010 & 12.911 $\pm$ 0.010 & 12.246 $\pm$ 0.010 & 11.481 $\pm$ 0.020 \\
58733.87 & 60cm & 13.765 $\pm$ 0.011 & 12.852 $\pm$ 0.010 & 12.253 $\pm$ 0.010 & 11.431 $\pm$ 0.020 \\
58743.73 & 60cm & 13.815 $\pm$ 0.010 & 12.915 $\pm$ 0.010 & 12.254 $\pm$ 0.010 & 11.471 $\pm$ 0.020 \\
58744.74 & 60cm & 13.780 $\pm$ 0.010 & 12.873 $\pm$ 0.010 & 12.193 $\pm$ 0.010 & ... \\
58749.67 & 60cm & 13.828 $\pm$ 0.010 & 12.911 $\pm$ 0.010 & 12.237 $\pm$ 0.010 & 11.445 $\pm$ 0.020 \\
58750.75 & 80cm & 13.812 $\pm$ 0.011 & 12.898 $\pm$ 0.010 & 12.312 $\pm$ 0.010 & 11.461 $\pm$ 0.020 \\
58751.76 & 80cm & 13.862 $\pm$ 0.011 & 12.937 $\pm$ 0.010 & 12.255 $\pm$ 0.010 & 11.495 $\pm$ 0.020 \\
58752.67 & 60cm & 13.829 $\pm$ 0.010 & 12.899 $\pm$ 0.010 & 12.247 $\pm$ 0.010 & 11.442 $\pm$ 0.020 \\
58757.61 & 60cm & 13.777 $\pm$ 0.010 & 12.870 $\pm$ 0.010 & 12.209 $\pm$ 0.010 & 11.424 $\pm$ 0.020 \\
58764.59 & 60cm & 13.776 $\pm$ 0.010 & 12.832 $\pm$ 0.010 & 12.216 $\pm$ 0.010 & 11.379 $\pm$ 0.020 \\
58776.64 & 60cm & 13.756 $\pm$ 0.010 & 12.824 $\pm$ 0.010 & 12.108 $\pm$ 0.010 & 11.418 $\pm$ 0.020 \\
58777.69 & 60cm & 13.820 $\pm$ 0.010 & 12.886 $\pm$ 0.010 & 12.229 $\pm$ 0.010 & 11.443 $\pm$ 0.020 \\
58782.70 & 60cm & 13.820 $\pm$ 0.010 & 12.868 $\pm$ 0.010 & 12.184 $\pm$ 0.010 & 11.421 $\pm$ 0.020 \\
58785.71 & 60cm & 13.742 $\pm$ 0.010 & 12.846 $\pm$ 0.010 & 12.187 $\pm$ 0.010 & 11.424 $\pm$ 0.020 \\
58786.68 & 60cm & 13.780 $\pm$ 0.010 & 12.884 $\pm$ 0.010 & 12.201 $\pm$ 0.010 & 11.453 $\pm$ 0.020 \\
58787.68 & 60cm & 13.835 $\pm$ 0.010 & 12.903 $\pm$ 0.010 & 12.224 $\pm$ 0.010 & 11.455 $\pm$ 0.020 \\
58790.59 & 60cm & 13.774 $\pm$ 0.010 & 12.858 $\pm$ 0.010 & 12.191 $\pm$ 0.010 & 11.388 $\pm$ 0.020 \\
58790.63 & 80cm & 13.798 $\pm$ 0.011 & 12.871 $\pm$ 0.011 & 12.122 $\pm$ 0.010 & 11.437 $\pm$ 0.020 \\
58791.66 & 80cm & 13.798 $\pm$ 0.011 & 12.876 $\pm$ 0.010 & 12.209 $\pm$ 0.011 & 11.441 $\pm$ 0.020 \\
58792.63 & 60cm & 13.748 $\pm$ 0.010 & 12.827 $\pm$ 0.010 & 12.170 $\pm$ 0.010 & 11.392 $\pm$ 0.020 \\
58793.63 & 60cm & 13.762 $\pm$ 0.010 & 12.872 $\pm$ 0.010 & 12.200 $\pm$ 0.010 & 11.465 $\pm$ 0.020 \\
58793.83 & 80cm & 13.731 $\pm$ 0.011 & 12.809 $\pm$ 0.011 & 12.215 $\pm$ 0.010 & 11.436 $\pm$ 0.021 \\
58794.63 & 60cm & 13.736 $\pm$ 0.010 & 12.842 $\pm$ 0.010 & 12.214 $\pm$ 0.010 & 11.394 $\pm$ 0.020 \\
58805.61 & 60cm & 13.724 $\pm$ 0.010 & 12.855 $\pm$ 0.010 & 12.171 $\pm$ 0.010 & 11.429 $\pm$ 0.020 \\
58806.61 & 60cm & 13.821 $\pm$ 0.010 & 12.899 $\pm$ 0.010 & 12.227 $\pm$ 0.010 & 11.446 $\pm$ 0.020 \\
58807.67 & 60cm & 13.782 $\pm$ 0.011 & 12.825 $\pm$ 0.010 & 12.185 $\pm$ 0.010 & 11.405 $\pm$ 0.020 \\
58843.64 & 2.4m & 13.800 $\pm$ 0.010 & 12.877 $\pm$ 0.010 & 12.171 $\pm$ 0.011 & 11.397 $\pm$ 0.020 \\
58852.50 & 60cm & 13.818 $\pm$ 0.010 & 12.902 $\pm$ 0.010 & 12.226 $\pm$ 0.010 & 11.429 $\pm$ 0.020 \\
59098.77 & 80cm & 13.771 $\pm$ 0.012 & 12.829 $\pm$ 0.011 & 12.190 $\pm$ 0.010 & 11.339 $\pm$ 0.021 \\
59099.79 & 80cm & 13.766 $\pm$ 0.011 & 12.828 $\pm$ 0.011 & 12.134 $\pm$ 0.011 & 11.339 $\pm$ 0.020 \\
59100.68 & 80cm & 13.767 $\pm$ 0.011 & 12.828 $\pm$ 0.011 & 12.125 $\pm$ 0.011 & 11.342 $\pm$ 0.020 \\
59101.70 & 80cm & 13.768 $\pm$ 0.011 & 12.822 $\pm$ 0.011 & 12.140 $\pm$ 0.011 & 11.336 $\pm$ 0.020 \\
59107.75 & 80cm & 13.735 $\pm$ 0.011 & 12.800 $\pm$ 0.011 & 12.134 $\pm$ 0.011 & 11.331 $\pm$ 0.020 \\
59108.74 & 80cm & 13.697 $\pm$ 0.011 & 12.772 $\pm$ 0.010 & 12.130 $\pm$ 0.011 & 11.317 $\pm$ 0.020 \\
59109.75 & 80cm & 13.756 $\pm$ 0.011 & 12.813 $\pm$ 0.010 & 12.125 $\pm$ 0.011 & 11.322 $\pm$ 0.020 \\
59131.71 & 80cm & 13.762 $\pm$ 0.014 & 12.814 $\pm$ 0.013 & 12.122 $\pm$ 0.012 & 11.325 $\pm$ 0.022 \\
59133.74 & 80cm & 13.730 $\pm$ 0.011 & 12.797 $\pm$ 0.010 & 12.117 $\pm$ 0.011 & 11.313 $\pm$ 0.020 \\
59134.76 & 80cm & 13.756 $\pm$ 0.011 & 12.816 $\pm$ 0.011 & 12.127 $\pm$ 0.011 & 11.324 $\pm$ 0.020 \\
59497.78 & 80cm & 13.743 $\pm$ 0.011 & 12.803 $\pm$ 0.011 & 12.104 $\pm$ 0.011 & 11.308 $\pm$ 0.020 \\
59564.53 & 80cm' & 13.801 $\pm$ 0.010 & 12.812 $\pm$ 0.010 & 12.117 $\pm$ 0.010 & 11.351 $\pm$ 0.020 \\
59565.51 & 80cm' & 13.797 $\pm$ 0.011 & 12.801 $\pm$ 0.010 & 12.114 $\pm$ 0.010 & 11.345 $\pm$ 0.020 \\
59567.51 & 80cm' & 13.800 $\pm$ 0.010 & 12.804 $\pm$ 0.010 & 12.111 $\pm$ 0.010 & 11.346 $\pm$ 0.020 \\
59568.53 & 80cm' & 13.791 $\pm$ 0.010 & 12.793 $\pm$ 0.010 & 12.100 $\pm$ 0.010 & 11.337 $\pm$ 0.020 \\
59569.57 & 80cm' & 13.780 $\pm$ 0.010 & 12.799 $\pm$ 0.010 & 12.103 $\pm$ 0.010 & 11.341 $\pm$ 0.020 \\
59570.54 & 80cm' & 13.797 $\pm$ 0.010 & 12.803 $\pm$ 0.010 & 12.113 $\pm$ 0.010 & 11.350 $\pm$ 0.020 \\
59571.50 & 80cm' & 13.796 $\pm$ 0.010 & 12.809 $\pm$ 0.011 & 12.098 $\pm$ 0.014 & 11.348 $\pm$ 0.020 \\
59572.51 & 80cm' & 13.799 $\pm$ 0.010 & 12.805 $\pm$ 0.010 & 12.107 $\pm$ 0.010 & 11.359 $\pm$ 0.020 \\
59573.71 & 80cm' & 13.771 $\pm$ 0.019 & 12.798 $\pm$ 0.013 & 12.106 $\pm$ 0.013 & ... \\
59576.64 & 80cm' & 13.803 $\pm$ 0.010 & 12.798 $\pm$ 0.010 & 12.104 $\pm$ 0.010 & 11.333 $\pm$ 0.020 \\
59577.63 & 80cm' & 13.794 $\pm$ 0.010 & 12.800 $\pm$ 0.010 & 12.106 $\pm$ 0.010 & 11.345 $\pm$ 0.020 \\
59578.60 & 80cm' & 13.799 $\pm$ 0.010 & 12.799 $\pm$ 0.010 & 12.103 $\pm$ 0.010 & 11.343 $\pm$ 0.020 \\
59579.63 & 80cm' & 13.797 $\pm$ 0.010 & 12.801 $\pm$ 0.010 & 12.114 $\pm$ 0.010 & 11.342 $\pm$ 0.020 \\
59580.67 & 80cm' & 13.797 $\pm$ 0.010 & 12.799 $\pm$ 0.010 & 12.103 $\pm$ 0.010 & 11.337 $\pm$ 0.020 \\
59581.67 & 80cm' & 13.784 $\pm$ 0.010 & 12.799 $\pm$ 0.010 & 12.106 $\pm$ 0.010 & 11.339 $\pm$ 0.020 \\
59582.62 & 80cm' & 13.797 $\pm$ 0.010 & 12.805 $\pm$ 0.010 & 12.113 $\pm$ 0.010 & 11.345 $\pm$ 0.020 \\
59583.63 & 80cm' & 13.794 $\pm$ 0.010 & 12.795 $\pm$ 0.010 & 12.107 $\pm$ 0.010 & 11.347 $\pm$ 0.020 \\
59584.62 & 80cm' & 13.795 $\pm$ 0.010 & 12.804 $\pm$ 0.010 & 12.110 $\pm$ 0.010 & 11.356 $\pm$ 0.020 \\
59586.61 & 80cm' & 13.799 $\pm$ 0.010 & 12.807 $\pm$ 0.010 & 12.116 $\pm$ 0.010 & 11.357 $\pm$ 0.020 \\
59587.60 & 80cm' & 13.798 $\pm$ 0.010 & 12.805 $\pm$ 0.010 & 12.118 $\pm$ 0.010 & 11.356 $\pm$ 0.020 \\
59588.60 & 80cm' & 13.797 $\pm$ 0.010 & 12.806 $\pm$ 0.010 & 12.113 $\pm$ 0.010 & 11.350 $\pm$ 0.020 \\
59589.61 & 80cm' & 13.793 $\pm$ 0.010 & 12.797 $\pm$ 0.010 & 12.118 $\pm$ 0.010 & 11.352 $\pm$ 0.020 \\
59590.59 & 80cm' & 13.806 $\pm$ 0.010 & 12.809 $\pm$ 0.010 & 12.119 $\pm$ 0.010 & 11.349 $\pm$ 0.020 \\
59591.63 & 80cm' & 13.804 $\pm$ 0.010 & 12.802 $\pm$ 0.010 & 12.109 $\pm$ 0.010 & 11.346 $\pm$ 0.020 \\
59608.67 & 80cm' & 13.773 $\pm$ 0.010 & 12.794 $\pm$ 0.010 & 12.113 $\pm$ 0.010 & 11.347 $\pm$ 0.020 \\
59612.66 & 80cm' & 13.736 $\pm$ 0.011 & 12.776 $\pm$ 0.010 & 12.094 $\pm$ 0.010 & 11.341 $\pm$ 0.020 \\
59617.61 & 80cm' & 13.809 $\pm$ 0.010 & 12.810 $\pm$ 0.010 & 12.121 $\pm$ 0.010 & 11.358 $\pm$ 0.020 \\
59621.64 & 80cm' & 13.788 $\pm$ 0.015 & ... & 12.089 $\pm$ 0.011 & 11.328 $\pm$ 0.054 \\
59625.63 & 80cm' & 13.787 $\pm$ 0.011 & 12.791 $\pm$ 0.010 & 12.095 $\pm$ 0.010 & 11.333 $\pm$ 0.020 \\
59634.60 & 80cm' & 13.786 $\pm$ 0.011 & 12.792 $\pm$ 0.010 & 12.091 $\pm$ 0.010 & 11.323 $\pm$ 0.020 \\
59638.59 & 80cm' & 13.789 $\pm$ 0.010 & 12.798 $\pm$ 0.010 & 12.116 $\pm$ 0.010 & 11.349 $\pm$ 0.020 \\
59642.57 & 80cm' & 13.793 $\pm$ 0.010 & 12.803 $\pm$ 0.010 & 12.104 $\pm$ 0.010 & 11.343 $\pm$ 0.020 \\
59646.57 & 80cm' & 13.784 $\pm$ 0.011 & 12.781 $\pm$ 0.010 & 12.104 $\pm$ 0.010 & 11.347 $\pm$ 0.020 \\
59650.56 & 80cm' & 13.784 $\pm$ 0.013 & 12.794 $\pm$ 0.011 & 12.109 $\pm$ 0.013 & 11.337 $\pm$ 0.022 \\
59654.55 & 80cm' & 13.778 $\pm$ 0.019 & 12.800 $\pm$ 0.012 & 12.086 $\pm$ 0.012 & 11.337 $\pm$ 0.021 \\
59658.54 & 80cm' & 13.788 $\pm$ 0.013 & 12.806 $\pm$ 0.011 & 12.103 $\pm$ 0.011 & 11.355 $\pm$ 0.020
\label{phot}
\end{longtable}
}%
\end{appendix}
|
Title:
Solar and stellar flares: frequency, active regions and stellar dynamo |
Abstract: We demonstrate that for weak flares the dependence on spottedness can be
rather weak. The fact is that such flares can occur both in small and large
active regions. At the same time, powerful large flares of classes M and X
occur much more often in large active regions. In energy estimates, the mean
magnetic field in starspots can also be assumed equal to the mean field in the
sunspot umbra. So the effective mean magnetic field is 900 Mx/cm$^2$ in
sunspots and 2000 Mx/cm$^2$ in starspots. Moreover, the height of the energy
storage cannot be strictly proportional to A$^{1/2}$. For stars, the fitting
factor is an order of magnitude smaller. The analysis of the occurrence rate of
powerful solar X-ray flares of class M and X and superflares on stars shows
that, with allowance for the difference in the spottedness and compactness of
active regions, both sets can be described by a single model. Thus, the problem
of superflares on stars and their absence on the Sun is reduced to the problem
of difference in the effectiveness of the dynamo mechanisms.
| https://export.arxiv.org/pdf/2208.03994 |
\title{Solar and stellar flares: frequency,
active regions and stellar dynamo}
\author{M.M.Katsova}
\affiliation{Sternberg State Astronomical Institute, M.V.Lomonosov Moscow State University \\
Universitetskij prosp. 13, 119991, Moscow, Russia}
\author{V.N.Obridko}
\affiliation{IZMIRAN, 4 Kaluzhskoe Shosse, Troitsk, Moscow, 142190}
\affiliation{Central Astronomical Observatory of the Russian Academy of Sciences at Pulkovo, St.Petersburg, Russia}
\author{D.D. Sokoloff}
\affiliation{Moscow State University, Moscow, 119991, Russia}
\affiliation{IZMIRAN, 4 Kaluzhskoe Shosse, Troitsk, Moscow, 142190, Russia}
\affiliation{Moscow Center of Fundamental and Applied Mathematics, Moscow,
119991, Russia}
\author{I.M.Livshits}
\affiliation{Sternberg State Astronomical Institute, M.V.Lomonosov Moscow State University \\
Universitetskij prosp. 13, 119991, Moscow, Russia}
\affiliation{Department of Geography and Environmental Development, Ben-Gurion University of the Negev \\
P.O.B. 653, 84105, Beer-Sheva, Israel}
\keywords{solar activity -- stellar activity -- solar flares -- stellar flares}
\section{Introduction} \label{sec:intro}
Solar flares are a spectacular phenomenon in the solar magnetic activity.
They can more or less directly affect the Earth, and the study of
solar flares is of both applied and academic interest. The
origin of solar flares is obviously associated with the solar magnetic
field and, in this sense, it is related to the action of the solar dynamo
responsible for the formation of the magnetic field. The study of solar
flares, which can be considered a traditional part of solar physics,
has made impressive progress \citep[among many others, see, e.g.][]{P63,
PF02, B08, BG10, K11, Eetal12, Setal12, S13, Aetal14}.
Of course, phenomena similar to solar flares, known as
stellar flares, occur on various stars. As is known, the total
energies of solar flares vary in a wide range of $10^{24}$-$10^{32}$ erg from the
weakest events to the strongest ones \citep[see, e.g.,][]{Zetal20}.
Stellar flares are
best studied on low-mass, red dwarf stars, and their total energies
exceed the maximum solar value by several orders of magnitude
\citep[see, e.g.,][and references therein]{Hetal21}.
Besides, the most powerful of these flare phenomena were mainly recorded on very young dwarfs, including T Tau stars, members of open clusters, fast-rotating subgiants and giants, as well as on chromospherically active components of RS CVn-type close binaries \citep[see, for instance,][]{GAetal03, Fetal04, Schmetal19}.
It is known that the most powerful flares on the Sun are rare phenomena characterized by a sudden rise in optical continuum emission; they are called white-light flares. Since the source of the flare optical continuum emission has a low contrast against the photosphere, occupies a small area, and lives a short time (a few minutes), it is difficult to recover the temporal profile of the flare radiation. Nevertheless, powerful Sun-as-a-star flares were detected in long-term data on Total Solar Irradiance (TSI) \citep{K11}.
Note that the same problems hinder the detection of optical flares on single main-sequence G stars, so it was thought that white-light flares had not been seen there until the Kepler mission. Only recently has definite information appeared on the flare activity of these stars \citep{Jaetal18, Ketal21, Betal21}. On the other hand, \cite{Kaetal21} showed that the time profiles of solar flares in UV-continuum emission are similar to the impulsive flare light curves registered on red dwarfs during the Kepler mission. This result supports the idea that optical flares on G stars can be identified in UV data. Indeed, such flares were recently found in GALEX NUV data for these stars \citep{Betal19}.
It seems natural to use the ideas gained from the study of solar flares to understand stellar flares as similar
phenomena in stellar physics. However, as progress in
observations of stellar activity shows, stars
reasonably similar to the Sun can produce flares substantially more energetic than the strongest solar ones.
The total energies of the strongest stellar flares can
exceed $10^{36}$ erg in the optical wavelength range \cite[see, e.g.,][]{Hetal21}. As for the flare
activity of the solar-type main sequence G stars, there was very little
information prior to the Kepler mission \cite[e.g.][]{Jaetal18, Koetal21, Betal21}. The superflare concept
applicable to powerful non-stationary stellar phenomena was
introduced when the first results of the Kepler mission, which operated in
2009-2018 and detected huge flares on G-type stars, were
published \citep{Metal12,Setal13, Metal15,Netal19, Oetal21}. These publications reported the detection of major stellar flares on solar-type stars with total energies
from $10^{33}$ to $10^{37}$ erg at optical
wavelengths. A more detailed analysis showed that most flares
had the total energy of $10^{33}$--$10^{34}$ erg \citep{Tetal21}, while only a
small fraction of the phenomena could be considered superflares
with energies $E=10^{35}$--$10^{36}$ erg. Now, it is
clear that the most powerful events with $E> 10^{36}$ erg occur
either on components of the close binary stellar systems, or on subgiants and giants, or on very young and/or fast rotating stars that have not reached the main
sequence \citep{B15, KN18, Tetal21}.
The results of the analysis of numerous multiwavelength observations of stellar flares and other non-steady phenomena on red dwarfs and solar-like stars, reviewed by \cite{G05}, provide evidence of their common physical nature with solar flares and confirm this idea, first expressed by \cite{GP72}.
The process can be approximately described as a deposit of the free energy
of the non-potential magnetic field in a certain volume, its impulsive
release during non-steady event, and the subsequent response of the
atmosphere to the resulting acceleration of particles and plasma heating.
At the same time, as early as \cite{Getal87}, attention was drawn to the inability
of the then-current models of solar flares to explain the strongest
stellar flares. In particular, it seems plausible that the dynamo underlying
the magnetic activity of superflare stars is not fully identical to the conventional
solar dynamo.
Indeed,
the maximum total flare energy on solar-type stars can be several times greater than $3 - 5\times10^{34}\;$erg.
This estimate is based on the magnetic virial theorem \citep{Letal15, KL15}.
A similar value is now discussed in the recent statistics of all the primary Kepler mission data \citep{Oetal21}.
It looks plausible that magnetic configurations able to accumulate the corresponding magnetic energy have to differ in size and/or morphology from conventional sunspots.
Nevertheless, we believe that
the problem of superflares is, perhaps, less dramatic than it seemed earlier; it nonetheless
still exists and awaits explanation.
We propose a corresponding revision in this paper.
To assess the similarity or difference between the solar and stellar flares correctly, it is necessary to take into account a number of circumstances.
First, a double selection of observational data must be taken into account. For natural reasons (the sensitivity of the equipment), only the most powerful white-light flares, which are extremely rare on the Sun (about 0.4\% of the total number of flares observed on the Sun over 15 years; see Section 2 below), are recorded on stars. In addition, the rotational modulation technique makes it possible to detect only the largest concentrated spots or spot groups on stars. We discuss these issues in Sections 1 and 2.
Another circumstance that must be taken into account is that flares of different energies depend in different ways on the area of the active region. The frequency and energy of weak X-ray flares of classes B and C are virtually independent of the area of the active region and, therefore, cannot be used to assess the similarity or dissimilarity with superflares on stars. This issue is discussed in Section 2.
And finally, when evaluating the total magnetic energy in the active region, we cannot use the extreme values of several kG, which are observed in sunspots. These values correspond to a very small part of the spot. To find the total energy, it is necessary to obtain the integral values, for which we have to know the distribution of the magnetic field over the active region. In this case, we cannot use the photometric values of the spot area, but have to introduce the concept of a magnetic boundary and, additionally, to determine the relative fraction of the umbral area. These estimates are given in Section 3.
These problems were discussed by various authors \citep[see, for example,][]{B08, BG10}. A particularly detailed and thorough analysis was carried out by \cite{B05}, who not only described methods for studying starspots, but also provided extensive observational material, which was subsequently used by other authors \citep{Aetal13, Setal13, Netal13, Netal19, Metal15, Hetal21, Oetal21}. It should be noted that the procedures used to determine the spot areas on the Sun and on stars are essentially different. In the former case, the observer directly calculates the area of each spot from the full image of the Sun and then sums up the values obtained. The penumbra is traditionally included in the spot area. The procedure for determining the total spottedness of stars is more complicated. First of all, one has to measure the variation in the star's brightness. The methods for determining this variation, such as light-curve modeling, Doppler imaging, Zeeman--Doppler imaging, and molecular band modeling, are described in detail by \cite{B05}. These methods are based on the use of different radiation characteristics of a star: the continuous spectrum in different ranges, different spectral lines, the Doppler effect, magnetic splitting, and the molecular spectrum. Generally speaking, these data can refer to different layers in the stellar atmosphere. A separate issue is the spot temperature. The large starspot areas and temperature contrasts found in active stars suggest that the photometric and spectroscopic variability of these stars is dominated by the starspot umbra. Our current knowledge of starspot temperatures is based on measurements obtained from simultaneous modeling of brightness and color variations, Doppler imaging, modeling of molecular bands, and atomic line-depth ratios, the latter being the most accurate method. A representative sample of starspot temperatures for active dwarfs, giants, and subgiants is provided in Table~5 and plotted in Fig.~7 of \cite{B05}.
Then, the spot area ($A_{\rm spot}$) of superflare stars is estimated from the normalized amplitude of light variations ($\Delta F/F$) by using the following equation
\begin{equation}
A_{\rm spot}=\frac{\Delta F}{F}A_{\rm star}\left[1-\left(\frac{T_{\rm spot}}{T_{\rm star}}\right)^4\right]^{-1}, \label{spot}
\end{equation}
where $A_{\rm star}$ is the apparent area of the star, and $T_{\rm spot}$ and $T_{\rm star}$ are the temperatures of the starspot and the stellar photosphere, respectively.
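As an illustrative numerical sketch (not from the paper), Eq.~(1) can be evaluated directly; the amplitude and temperatures below are assumed, typical values for a Sun-like star with an umbra-like spot temperature.

```python
# Sketch of Eq. (1): starspot area fraction from the normalized
# light-curve amplitude. All input values here are assumed examples.
def spot_area_fraction(dF_over_F, T_spot, T_star):
    """Return A_spot / A_star for a given normalized amplitude dF/F."""
    return dF_over_F / (1.0 - (T_spot / T_star) ** 4)

# A 1% brightness modulation on a Sun-like star (T_star = 5800 K)
# with an umbra-like spot temperature T_spot = 4000 K:
frac = spot_area_fraction(0.01, 4000.0, 5800.0)   # ~0.013 of the disk
```

The correction factor in brackets exceeds unity because a cooler spot blocks less than the full photospheric flux per unit area, so the spot must be somewhat larger than the raw amplitude suggests.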
No matter how the temperature values are obtained, it turns out that in solar-like stars they are close to
the temperature in the sunspot umbra \citep[see][Table~5]{B05}. This means that, in fact,
we find the total area of the umbra or, to be more precise, the starspot area can be
considered to coincide with the area of the umbra. Therefore, in energy estimates,
the mean magnetic field in starspots can also be assumed equal to the mean field in
the sunspot umbra.
Considering all of the above--mentioned circumstances, we arrived at a conclusion that the problem of superflares on stars and their absence on the Sun is reduced to the difference in the effectiveness of the dynamo mechanism. This conclusion and certain consequences for the problem of the generation of magnetic fields in the Sun and stars are discussed in Section 4.
\section{Comparing solar and stellar flares} \label{Comp}
The structure of our research depends substantially on the
particular difficulties of comparing the solar and stellar flares
listed in this section.
First of all, we have to emphasize that the task of comparing stellar and solar flares is far from straightforward. Indeed, it is often stated \cite[see, e.g.][]{Hetal21} that
solar flares do not fit the linear trend visible in the stellar data. The
point, however, is that, due to instrumental limitations, the observed
stellar flares are only white-light flares, which are the most
energetic ones, while weak flares are not observable and, therefore, absent from the plot. In contrast, the solar data
contain all flares with peak X-ray fluxes from $10^{-7}$ W/m$^2$ up to fluxes of the order of $10^{-3}$ W/m$^2$, i.e., from subflares up to
the major proton events. The most energetic flares belong to
classes M and X. This makes up the energy range from $10^{28}$ to $10^{32}$ erg and corresponds to a spottedness of no more than 3000 m.v.h. The minimum detectable spottedness on stars is approximately 1000 m.v.h. \citep{Oetal21, Oetal21a}, but, in general, stellar flares have energies from $10^{34}$ to $10^{36}$ erg, which corresponds to a spottedness range from 0.01 to 0.3 of the area of the visible solar hemisphere.
Another relevant problem is connected with the fact that the total spot area (spottedness) is not the only parameter that directly determines the flare energy. This point is discussed in Section 3.
Note also that the expressions ``spot area'' and ``spottedness'' are
understood quite differently when referring to the Sun and to stars. For
the Sun, we assume that the spottedness is simply the total area of
individual spots visible in white light relative to the
area of the solar hemisphere. In contrast, the stellar spottedness is
usually estimated from the rotational modulation of the stellar brightness without taking into account the spot distribution over the stellar surface. This method, however, gives substantially different estimates for a single large spot and for many spots of moderate size distributed more or less homogeneously over the stellar surface. In particular, when observing the Sun as a star by this method, we see almost no rotational modulation even in periods of very high activity.
Summarizing all said above, we expect a strong selective effect
in stellar observations, which gives preference to the contribution of
a single or a few very large spots. Therefore, the solar
data have to be properly selected to be comparable with the stellar
ones.
It can be expected that, for weak flares, the dependence on spottedness is rather weak, since such flares can occur in both small and large active regions.
At the same time, powerful large flares of classes M and X (the peak flux larger than $10^{-5}$ W/m$^2$) occur much more often in large active regions. It is known that there are positive correlations between
the sunspot coverage and the energy of the largest solar flares \cite[see, e.g.][]{Setal00}.
To test these considerations, we performed an additional analysis. The occurrence rates $N$ of flares of different classes were estimated for the period 1992--2016 using data from the catalogue \texttt{http://hec.helio-vo.eu/hec/hec\_gui.php}, which contains 44566 X-ray flares; the GOES Soft X-ray Flare List was used. The flare classes are determined by their peak fluxes as follows:
B stands for fluxes $(1-9) \times 10^{-7}$ W/m$^2$ (17747
flares), C stands for $(1-9) \times 10^{-6}$ W/m$^2$ (24190
flares), M corresponds to $(1-9) \times 10^{-5}$ W/m$^2$
(2449 flares), and X stands for $(1-9) \times 10^{-4}$ W/m$^2$
(180 flares). Based on these data, we calculated the monthly mean occurrence rates of flares of each class and compared them with the monthly spottedness data taken from
\texttt{https://solarscience.msfc.nasa.gov/greenwch/sunspot\_area.txt}. Then,
we gathered separately the data for classes B and C and for classes M and X and plotted them versus the spottedness (Fig.~1). For weak flares (B and C), the occurrence rate $N_{BC}$ is almost independent of spottedness (Fig.~1). The exponent is only 0.237, with a correlation coefficient of 0.514:
\begin{equation}
\log N_{BC}=1.55185 \pm 0.0845 + (0.23674 \pm 0.0286)\log A \, .
\label{eq2}
\end{equation}
A pronounced relationship between the occurrence rate $N_{MX}$ and the spottedness $A$ is seen only for strong flares of classes M and X (Fig.~1, bottom). The exponent of the power-law dependence for these
flares is much higher and amounts to 1.363, with a correlation coefficient of 0.728:
\begin{equation}
\log N_{MX}=-3.152 \pm 0.2749 +(1.36349 \pm 0.09318)\log A \, .
\label{eq3}
\end{equation}
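The contrast between the two regressions can be made concrete with a short numerical sketch (illustrative only; the sample spottedness value is assumed):

```python
import math

# Eqs. (2) and (3): monthly mean occurrence rates versus spottedness A
# (in millionths of the visible hemisphere, as in the Greenwich data).
def n_bc(A):
    """Weak-flare (B+C) rate from Eq. (2)."""
    return 10 ** (1.55185 + 0.23674 * math.log10(A))

def n_mx(A):
    """Strong-flare (M+X) rate from Eq. (3)."""
    return 10 ** (-3.152 + 1.36349 * math.log10(A))

# At an assumed spottedness of A = 1000 m.v.h., weak flares dominate
# in number, but the M/X rate grows far more steeply with A:
rate_bc, rate_mx = n_bc(1000.0), n_mx(1000.0)      # ~183 vs ~8.7 per month
exponent_ratio = 1.36349 / 0.23674                  # ~5.8
```

Tripling the spottedness raises the B+C rate by only $3^{0.237}\approx1.3$, but the M+X rate by $3^{1.363}\approx4.5$, which quantifies how strongly the powerful flares prefer large active regions.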
An important characteristic of flares is their occurrence rate. The simplest and physically meaningful
relationship between the flare energy and the occurrence rate is expressed by a power function. Fig.~2, based
on the figure from \cite{Getal87} and later reproduced by \cite{G05},
shows the
relationship between the energy of flares observed in the photometric $B$-band ($E_B$) and their occurrence rates for
different dwarf stars, indicated in the plot, as well as for stars of the Pleiades and Orion. These data are the result of thousands of hours of patrol photoelectric observations at various world observatories. By now, more than 3000 stars of the type under discussion have been registered. The data shown in the figure refer mainly to red dwarfs (spectral class M). The total energy ranges from 10$^{28}$ to 10$^{36}$ erg, which is much broader than the superflare range (10$^{33}$--10$^{36}$ erg). The energy range of superflares was determined from observations of the Kepler mission, which could not discriminate weak flares because of the background noise. More recent data, obtained with the Transiting Exoplanet Survey Satellite (TESS) and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), are described by \cite{Tetal21}. The authors developed their own technique, which allowed recording weaker events than those detected by Kepler. \cite{Coletal22} validated the procedure by comparing their results with those obtained with other techniques by \cite{Maretal21} for AU Mic and found a rate of $\approx5$ flares per day with energies $E_f > 5\times10^{31}$ erg. In addition, they studied the system DS Tuc A and found a rate of energetic events of $\approx2$ flares per day with energies greater than $2\times10^{32}$ erg. It is interesting to note that these data agree well with ground-based observations of red dwarfs (see Fig.~2).
It should
be noted that good agreement between the observed relation and the power function, which is represented
by a straight line in the logarithmic diagram, does not hold over the entire energy range. Sometimes there
is saturation for very strong flares and a sudden dip, due to observational selection, for very weak
flares \citep{G05}. Therefore, in general, the figure shows only the linear sections of the relation.
Significant deviations from linearity are observed for UV Ceti, AD Leo, and EQ Peg AB and are shown by
dotted lines. Additionally, the figure shows the relationship for solar flares in H$\alpha$. The total extent of the dynamic energy range of flares on each star does not exceed two orders of magnitude.
The range of the energies and occurrence rates on the diagram is rather broad (7--9 orders of magnitude),
while the angular coefficients of the dependencies $\beta=d\log\nu /d\log E$ lie in a narrow range from
--0.5 to --0.9. In the clusters, they are somewhat larger in the absolute value (from --0.8 to --1.0);
for solar flares in H$_{\alpha}$, $\beta=-0.8$ \citep{G05}. We have plotted a similar curve using
observations of X-ray flares mentioned above. The occurrence rates are 0.110 per hour for class C flares
(24190 events), 0.0112 events per hour for class M flares (2449 events), and $8.208 \times 10^{-4}$
per hour for class X flares (180 events). The dip for class B flares was ignored, as it was when plotting
the relations for stellar flares. The result is represented in Fig.~2 by a thick line and large squares.
One can see that, here, the value of $\beta$ (-1.06) is close to the values for superflares in the
clusters and is typical of stars of approximately the solar age \citep{G05}. The similarity of the
values of $\beta$ shows that despite the significant difference in energy, superflares on stars
and solar flares are apparently determined by the same processes \citep{G05}.
It should be noted that on the Sun, the linear part of the dynamic energy range also does not apparently exceed 2--3 orders of magnitude. If there were no saturation and with the observed value of $\beta$, at least one superflare with an energy in the range of $10^{34}$--$10^{35}$ erg would have to occur on the Sun every 550 years, which is not confirmed by the historical and archaeological data available.
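An order-of-magnitude version of this extrapolation can be sketched numerically. The representative X-class anchor energy below is an assumption (the text quotes only the class energy band), so the result is indicative only; it comes out at a few hundred years, broadly consistent with the waiting time quoted above.

```python
# Extrapolate the solar X-ray flare rate to superflare energies using
# the slope beta = -1.06 found above. The anchor energy E_X is an
# assumed representative value, not a number from the paper.
beta = -1.06                 # d(log nu)/d(log E) from the X-ray fit
nu_X = 8.208e-4              # class X occurrence rate, events per hour
E_X = 10**31.5               # assumed representative X-class energy, erg
E_super = 10**34.5           # midpoint of the 1e34-1e35 erg band

nu_super = nu_X * (E_super / E_X) ** beta    # extrapolated rate, per hour
wait_years = 1.0 / (nu_super * 24.0 * 365.25)  # mean waiting time, yr
```

Shifting the anchor within the X-class band moves the answer by a factor of a few, so only the order of magnitude (centuries between solar superflares, absent saturation) is meaningful.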
Note also that, as follows from Fig. 2, flares of the same energy on the Sun (and apparently in general on G dwarfs) are 30--100 times less frequent than on red dwarfs.
To avoid misunderstanding, we note that the value of $\nu$ used by us, calculated per hour, corresponds to the $f_\ast$ used by \cite{Tetal21}, which is calculated per year, but does not coincide with the index $f_n$ introduced there, in which an additional normalization is carried out for the number of observed stars and the energy range. This more physically sound parameter yields $\beta = -1.76$.
\section{Scaling stellar data}
Now, our task is to suggest a reasonable scaling for the energy $E$ of stellar flares. We start from the conventional assumption that the energy is determined by the magnetic energy density $B^2/8 \pi$ (where $B$ is the magnetic field) and the volume of energy accumulation. The latter is proportional to the spot area $A$ and the height $h$ of the volume.
We adopt the viewpoint that the initial flare energy release
originates in the magnetic energy stored in a volume. This energy is converted
into particle acceleration, optical and X-ray radiation, and plasma motions
\citep{Eetal04, Betal05, V12}. The immediate source of the energy release
in a flare is current dissipation, which is proportional, with the scaling
factor $f_r$, to the total magnetic field energy. Estimates of $f_r$ contain
various uncertainties \citep{Setal12} and range from 0.01 to 0.5.
We argue that solar and stellar flares are physically similar, so, fortunately,
we can assume that $f_r$ is the same for solar and stellar flares.
Giving the corresponding references below, we do not focus further on
this scaling factor.
Thus, the total energy released in a flare is described by the expression
\begin{equation}
E=f_r\int{\frac{B^2}{8\pi}}dV
\end{equation}
Direct calculations by this formula are difficult even for the Sun and
require observations with good spatial resolution and high-quality
full-vector maps of the magnetic field. Such calculations with some
additional assumptions have been performed by several authors
\citep[e.g., see][]{Letal15, Zetal20} and generally confirmed the
above concept with the parameter $f_r\approx0.1$. However,
such calculations are completely impossible today for stars and,
therefore, a simpler formula is used:
\begin{equation}
E=f_r\frac{\overline{B^2}}{8\pi}V
\end{equation}
Here, $\overline{B^2}$ is the mean squared magnetic field in the given volume $V$, which is estimated as follows:
\begin{equation}
V=\overline{A}\cdot\overline{H}
\end{equation}
The situation with the other scaling factors, $f_h$ and $f_s$, is more delicate,
since they may be substantially different for the Sun and for stars. The point
is that, in estimating $B$ for the Sun, we can use the magnetic field averaged over the whole
sunspot, while for stars we have to use the field averaged over
the umbra, since the magnetic field in the whole starspot is not accessible
to observation. The total sunspot area $A$ for the Sun is determined
photometrically and includes the sunspot umbra and penumbra. For stars,
the value of $A$ is determined using Eq.~(\ref{spot}), based on a spectral temperature
related to the umbra \citep{B05, Hetal21}. We introduce the scaling factor $f_s$
to account for this difference:
\begin{equation}
\overline{A}=f_s A_{\rm spot}
\end{equation}
The value of $\overline{B}$ changes accordingly. For the Sun, we have to use the mean field value over the entire sunspot, and for the stars, the mean field in the spot umbra.
And finally, to estimate $\overline{H}$, we use the expression connecting the height of the energy release region with the radius of a round spot:
\begin{equation}
H=f_h A^{1/2}_{\rm spot}
\end{equation}
Note that this approximation raises serious doubts. Here, we also need to introduce a model parameter $f_h$, since the region of primary energy release must contain free (nonpotential) energy. Deviations from potentiality arise when the pressure and energy of plasma motions are greater than or comparable to the potential energy of the magnetic field. In the majority of solar flare models, the height $H$ is 10--20 thousand kilometers, i.e., comparable to the radius of a very large sunspot, but much smaller than the radius of a stellar spot.
Without this additional parameter, very large values of $A$ will yield too large $H$; e.g., at $A=0.1$, we will get a height comparable to the radius of the star, which, obviously, cannot give any reasonable current density. This means that, at equal heights of the energy release region, the parameter $f_h$ on the stars is smaller.
The other drawback of this approximation is that $\overline{H}$ is not an additive parameter. If several large spots are observed on the star, their area is summed up, and the parameter $\overline{A}$ increases proportionally, while the value of $\overline{H}$ does not change.
Combining these formulas, we obtain our Eq.~(9):
\begin{equation}
E = f_h\cdot f_s\cdot f_r (B^2/8 \pi) A^{3/2}.
\label{eq4}
\end{equation}
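The combined scaling can be checked symbolically; below is a minimal Python/sympy sketch (symbol names are ours) verifying that the three component relations above reproduce Eq.~(9).

```python
import sympy as sp

# Symbols: scaling factors, mean field, spot area (all positive)
f_r, f_s, f_h, B, A = sp.symbols("f_r f_s f_h B A", positive=True)

# Component relations quoted in the text:
V = (f_s * A) * (f_h * sp.sqrt(A))   # V = Abar * Hbar, with Abar = f_s*A and Hbar = f_h*A^(1/2)
E = f_r * B**2 / (8 * sp.pi) * V     # E = f_r * B^2/(8*pi) * V

# Combined formula, Eq. (9):
E_combined = f_h * f_s * f_r * B**2 / (8 * sp.pi) * A**sp.Rational(3, 2)

assert sp.simplify(E - E_combined) == 0
```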
When comparing with observations, it is usually not taken into account that the magnetic field strength and the dimensionless fitting factors may change as the spot coverage changes \citep[e.g., see][]{Netal13, Metal12, Metal15, Oetal21}. Note that the above equation is nearly the same as Eq.~(9) of \cite{Netal13}; however, we introduce scaling factors, while \cite{Netal13} use observed quantities combined with a reasonable estimate of the ratio of the spot temperature to the stellar photospheric temperature.
We use the scaling factors to take into account that solar and stellar data are obtained using substantially different methods.
Since Eq.~(9) gives $E \propto A^{3/2}$ at fixed field strength, this dependence is represented as a straight line on diagrams in logarithmic coordinates, and comparison with observations is carried out by selecting a constant value of the magnetic field. The usual conclusion is that it is not possible to find a single field value at which Eq.~(9) would describe both solar and stellar flares. This conclusion, however, does not take into account that both the value of $B$ and the fitting factors can differ between the Sun and stars.
As mentioned above in the Introduction, in sunspot observations the photometric picture is used to calculate the entire area of a sunspot, including the penumbra. The sum of these values is defined as the spottedness and is contained in all solar catalogs. For stars, a different procedure is used, which is based on the temperature difference between the star and the observed spot. This difference corresponds to the temperature difference between the spot umbra and the star. This means that what we actually find is the total area of the umbra; more precisely, the area of a starspot can be considered to coincide with the area of its umbra. Therefore, in energy estimates, the mean magnetic field in starspots can also be taken equal to the mean field in the sunspot umbra.
Thus, to estimate the parameters included in Eq.~(9), it is
necessary to know the mean magnetic field in a spot. However, the sunspot boundaries are determined based on photometric properties. Unfortunately, there is still no generally
accepted definition of the magnetic boundary of a spot. In this
work, we used SDO/HMI observations to solve this problem.
We considered the daily SDO/HMI data on the line-of-sight magnetic field component for the period from May 1, 2010 to October 31, 2016 -- a total of 2375 days. The daily data on sunspot numbers were downloaded from the WDC-SILSO website of the Royal Observatory of Belgium, Brussels, http://sidc.oma.be/silso/datafiles (version 2). The total daily sunspot
areas were taken from the NASA website https://solarscience.msfc.nasa.gov/greenwch.shtml.
The daily values of the line-of-sight field component were
recalculated into the radial component by dividing by the cosine of
the position angle. The area of each pixel was also corrected.
Then, we calculated the relative fraction $S_B$ of the area occupied by fields above a
certain limit. This fraction was expressed in
millionths of the solar hemisphere, as is customary when studying
the total areas of sunspots.
These daily values of $S_B$ are expressed in m.v.h. (millionths of the visible hemisphere) and are
calculated for several thresholds ranging from 0 to 1800 G. As a first
approximation to finding the magnetic boundary of a spot, we calculated the regression between $S_B$ and the total sunspot area. It turned out that, at a magnetic spot boundary of 550 G, the
correlation between these values reaches 0.98. Moreover,
this correlation holds only in a very narrow range of thresholds: already at 500
and 575 G, the correspondence deteriorates.
The calculation procedure is described in more detail by
\cite{OS18}. Close values for the magnitude of the vertical component of the magnetic field at the outer boundary of the penumbra are also given by \cite{KM96, Setal06, Aetal13, BI11}.
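The threshold selection amounts to scanning boundary values and maximizing the correlation between $S_B$ and the total sunspot area. The scan logic can be sketched as follows (illustrative Python with synthetic data standing in for the SDO/HMI and Greenwich series; the 550 G optimum is built into the toy noise model, so this demonstrates only the selection procedure, not the measurement):

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 2375
# Synthetic daily total sunspot areas (stand-in for the Greenwich series), in m.v.h.
area = rng.lognormal(mean=4.0, sigma=1.0, size=n_days)

best_r, best_thr = -1.0, None
for thr in range(0, 1801, 25):  # candidate magnetic-boundary thresholds, G
    # Toy model: S_B tracks the spot area most tightly at the "true" boundary of 550 G
    noise = 0.05 + 0.5 * abs(thr - 550) / 550
    s_b = area * (1.0 + noise * rng.standard_normal(n_days))
    r = np.corrcoef(s_b, area)[0, 1]
    if r > best_r:
        best_r, best_thr = r, thr

print(best_thr)  # the scan selects the threshold with the tightest correlation
```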
Assuming that the boundary of the spot area responsible for a flare
corresponds to the magnetic field of 550 G and plotting
the mean magnetic field $B_s$ in the spot versus
the spottedness (Fig.~3a), we find that the spottedness $A=300$
m.v.h. (i.e. $3 \times 10^{-4}$ of the area of the visible solar hemisphere)
corresponds to $B=800$ G, while $A=900$ m.v.h. gives $B=900$ G. Taking
into account that the area of the solar hemisphere is $3.044 \times
10^{22}$ cm$^2$, we obtain from Eq.~(\ref{eq4}) the following lower
estimates for the total magnetic energies stored in sunspot regions with $A=300$ m.v.h. and $A=900$ m.v.h.: $E = 8 \times
10^{32}f_h f_s f_r$ erg and $E = 5.8 \times 10^{33}f_h
f_s f_r$ erg, respectively.
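These estimates can be reproduced directly; the sketch below evaluates $(B^2/8\pi)\,A^{3/2}$ with the scaling factors stripped off (the quoted numbers are slightly higher than the computed ones, presumably reflecting rounding of the input field strengths and areas).

```python
import math

HEMISPHERE_CM2 = 3.044e22       # area of the solar hemisphere, cm^2 (value from the text)
MVH = 1e-6 * HEMISPHERE_CM2     # one millionth of the visible hemisphere, cm^2

def flare_energy(B_gauss, area_mvh):
    """Magnetic energy (B^2/8*pi) * A^(3/2) in erg, with f_h*f_s*f_r factored out."""
    A_cm2 = area_mvh * MVH
    return B_gauss**2 / (8 * math.pi) * A_cm2**1.5

E300 = flare_energy(800, 300)   # ~7e32 erg
E900 = flare_energy(900, 900)   # ~4.6e33 erg
print(f"{E300:.1e} {E900:.1e}")
```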
When making similar calculations for stellar flares, we have to take into
account that the spottedness $A=3 \times 10^{-2}$ of the hemisphere area is rather high, and the starspot must be more compact than a sunspot to be distinguishable via the rotational modulation of the stellar
brightness.
As mentioned above, the method of determining the area from temperature data leads to the fact that the determined area of the spot is essentially the area of the umbra. Therefore, under the assumption that the average field in the umbra is the same in sunspots and starspots, it can be calculated based on the knowledge of the magnetic field at the umbra--penumbra boundary. Unfortunately, the method that was used above to determine the outer boundary of the spot could not be applied here due to the lack of a database on the total umbra area on the hemisphere. There are a number of works in which the umbra--penumbra boundary is discussed on the basis of direct observations \citep{J11, Jetal15, Jetal17, Jetal18, Setal18, Letal20}.
In this work, we have chosen the value of 1800 Mx/cm$^2$ as the umbra boundary, which is quite close to the results of \cite{J11, Jetal15, Jetal17, Jetal18}. We thus obtain an average umbral magnetic field of 2000 Mx/cm$^2$ (see Fig.~3b).
As mentioned above, the temperature in stellar spots corresponds to the temperature of the sunspot umbra. Therefore, Fig.~3b shows a diagram for starspot umbra.
After having estimated the mean magnetic field $B$ in sunspots and
in starspots, let us estimate the scaling factors $f_h$, $f_s$, and
$f_r$.
Assuming that the mechanism of solar and stellar flares is the same, the height of the energy-release region should be approximately conserved. To describe the dependence of this height on the spottedness, the factor $A^{1/2}$ appears in formula (9). It is easy to see that this factor alone gives too large values and reflects only the general trend: for areas of $3 \times 10^{-4}$, $10^{-3}$, $10^{-2}$, and $10^{-1}$ of the solar hemisphere, we get values of $A^{1/2}$ equal to $30$, $50$, $170$, and $550$ Mm, respectively. On the Sun, the estimated height of the energy-release domain is $10-20$ Mm \citep[see e.g.][]{Shetal18, Setal20, Zetal20}. Therefore, for the Sun, we must take the parameter $f_h = 0.3$, while for superflaring stars we obtain $f_h = 0.1$.
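The quoted length scales follow directly from the hemisphere area; the short check below reproduces them to within $\sim$10\% (the text rounds the values).

```python
import math

HEMISPHERE_CM2 = 3.044e22  # area of the solar hemisphere, cm^2 (value from the text)

fractions = (3e-4, 1e-3, 1e-2, 1e-1)                                   # spottedness A
lengths_Mm = [math.sqrt(f * HEMISPHERE_CM2) / 1e8 for f in fractions]  # A^(1/2); 1 Mm = 1e8 cm

for f, L in zip(fractions, lengths_Mm):
    print(f"A = {f:g} of hemisphere -> A^(1/2) = {L:.0f} Mm")
```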
$f_s$ is a dimensionless scaling factor determining the share of the
region occupied by the strongest magnetic field. This is actually
the relative area of the umbra in a sunspot.
We assume $f_s$=0.2 \citep[see][]{Betal14}.
When estimating $f_s$ for the stars that produce superflares, we have to take into
account that the spottedness $A$ is expected to be large. In
principle, a large spottedness can be achieved by increasing either the
number or the size of spots. In the former case, however,
the effect will be undetectable by observations based on the rotational modulation. Therefore, we have to assume that the observed stellar spottedness is determined by the relative area of the umbra of a single
large stellar spot. In other words, for stars with superflares we have to assume $f_s = 1.0$.
$f_r$ is the fraction of the magnetic energy converted to radiation during a
flare. When estimating $f_r$, one has to be more careful. \cite{Hetal21}, following \cite{Setal12} who, in turn, followed \cite{Metal05, Setal08}, obtained $f_r = 0.01-0.5$ and based the width of the fitting strip on this estimate. Below, we comment on this estimate in more detail.
Variations in the photospheric magnetic field in strong solar flares were investigated by \cite{SH05, PS10, Maetal12}. Recently, \cite{CDetal18} analysed 77 solar flares and found that most major flares (class above M1.6) were accompanied by abrupt and permanent variations in the photospheric magnetic field. They
considered 38 X-class flares and 39 M-class flares. For each flare, they isolated an area in the corresponding active region where the field variation lasted as long as 15 min. The amplitude of the field variation ranged
from the lower observational limit of about $10$ G to about
$450$ G (two cases). The mean amplitude was $69$ G.
In X-class flares, the variations were typically substantially larger than in M-class flares. The authors maintain that the above amplitude estimates are representative.
\cite{Letal15, Shetal18, Setal20, Zetal20, Aetal21} estimated the ratio of the free to total energy as $0.15-0.25$; the results depend on the extrapolation method. However, only part of the free energy (from a few percent to a few tens of percent) can be spent to create a flare. The volume of the flare occupies only part of the active region. During a flare, the energy can even grow somewhere inside the active region. The components of the photospheric magnetic field can grow stepwise during a flare (see, e.g., \citealt{P13, Setal17} for the horizontal component and \citealt{PS10} for the line-of-sight component). On the one hand, buoyancy can transport magnetic flux into the flaring region even during a flare; on the other hand, this flux can trigger the flare. There are flare models that take into account the energy income during the flare \citep[e.g.,][]{Moetal05, MS06}.
All these factors result in a difference of $2-3$ orders of magnitude in
the flaring power and, thus, account for C, M, and X flares. In principle,
one can estimate $f_r$ for individual flares or flares of particular
types. Here, however, we are interested in the general link between
the energy and the spottedness. Therefore, we did not use the value of
$f_r$ in further calculations, since $f_r$ and, to some extent,
$f_s$ cannot be reliably determined from observations and are
not general characteristics of the flare phenomenon: they determine
the energy of each individual flare, leading to the huge scatter
in the observed values.
\begin{table}
\large
\caption{Model parameters}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$\log A$ & $B$, Mx$\cdot$cm$^{-2}$ & $f_h$ & $f_s$ & $f_r$ & $\log E$, erg & $H$, km \\
\hline
-3.5 & 800 & 0.3 & 0.2 & 0.1 & 30.659 & 9302 \\
-3.0 & 900 & 0.3 & 0.2 & 0.1 & 31.511 & 16543 \\
-2.0 & 2000 & 0.1 & 1.0 & 0.1 & 33.926 & 17438 \\
-1.5 & 2000 & 0.1 & 1.0 & 0.1 & 34.676 & 31009 \\
-1.0 & 2000 & 0.1 & 1.0 & 0.1 & 35.426 & 55144 \\
\hline
\end{tabular}
\end{table}
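Table 1 can be reproduced directly from Eq.~(9) and $H = f_h A^{1/2}$; the sketch below does so, taking the hemisphere area of $3.044\times10^{22}$ cm$^2$ from the text.

```python
import math

HEMI = 3.044e22  # solar hemisphere area, cm^2

rows = [  # (log10 of hemisphere fraction A, B [G], f_h, f_s, f_r)
    (-3.5,  800, 0.3, 0.2, 0.1),
    (-3.0,  900, 0.3, 0.2, 0.1),
    (-2.0, 2000, 0.1, 1.0, 0.1),
    (-1.5, 2000, 0.1, 1.0, 0.1),
    (-1.0, 2000, 0.1, 1.0, 0.1),
]

log_E, H_km = [], []
for logA, B, f_h, f_s, f_r in rows:
    A_cm2 = 10.0**logA * HEMI
    E = f_h * f_s * f_r * B**2 / (8.0 * math.pi) * A_cm2**1.5  # Eq. (9), in erg
    log_E.append(math.log10(E))
    H_km.append(f_h * math.sqrt(A_cm2) / 1e5)                  # H = f_h * A^(1/2), in km
    print(f"logA = {logA:5.1f}: log E = {log_E[-1]:.3f}, H = {H_km[-1]:.0f} km")
```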
A summary of the adopted model parameters and the calculated values of the energy $E$ and flare height $H$ is given in Table 1 and Figure 4. A cloud of points from \cite{Metal15} is partially reproduced in the same figure for energies above $10^{30}$ erg and spottedness above $10^{-4}$ of the solar hemisphere. It can be seen that the values obtained by us generally agree with the observations. This figure is consistent with Figs.~4 and 5 of \cite{Oetal21} for Sun-like stars.
We conclude that solar and stellar flares can be considered within the framework of a unified approach, with specific governing parameters applied in the particular cases.
\section{Discussion and Conclusions}
Naturally, we (as well as all other authors we cite) assume that spots on the Sun and on stars are of the same nature. In this case, the transformation of the rotational modulation into the area of the spot (or, more precisely, of the spot umbra) using Eq.~(1) is quite natural and provides a basis for comparing the dependences of the number of flares on the spottedness. If we assume that sunspots and starspots have different structures, then the very comparison of flare activity with spottedness loses its meaning and needs to be fundamentally revised.
We have demonstrated that the mechanisms of solar flares and stellar superflares are basically identical and the corresponding data can be described by Eq.~(9) with realistic parameters. A compact solar active region with the umbral area of the order of 0.1 of the solar hemisphere and the magnetic field $B = 2$ kG (which gives the magnetic field strength of about $100$ G after averaging over the whole stellar surface) can produce a superflare with $E = (1-3) \times 10^{36}$ erg.
Thus, the flare generation mechanism can be the same on the Sun and stars.
The main difference is that the spottedness on the Sun is no more than a few thousand m.v.h., while on the stars it reaches tenths of the area of the disk. As a result, the variation in the solar optical radiation is less than 0.1\% \citep{Fr06, Fr12}, while on M dwarfs it reaches 10\%. At the same time, flares on the Sun are 1-2 orders of magnitude less frequent than on these stars, and their energies do not exceed $3\times10^{32}$ erg. The mean magnetic field in sunspots is lower by a factor of 2 than in the giant spots associated with superflares. Accordingly, the magnetic flux on M dwarfs is 3 orders of magnitude higher than on the Sun.
The problem is why the solar dynamo produces magnetic fields
associated with active regions of about $10^{-3}$ of the area of the
solar hemisphere, i.e., $2-3$ orders of magnitude smaller than
required to produce a superflare (about $A=0.1$ of the area of
the stellar hemisphere). Note that \cite{Ketal21} earlier admitted the possibility that superflares are governed by a physical mechanism basically different from the solar one.
A comprehensive quantitative model that incorporates the physical processes from the dynamo action in the stellar interior up to the flare formation is far beyond contemporary theoretical abilities, even for the Sun, let alone other stars. Here we discuss some hints, relevant to superflares, concerning this future theory.
Perhaps, to explain the very high spottedness of superflaring stars, one may assume that the dynamo-active domain on such stars is located just beneath the stellar surface, rather than somewhere near the bottom of the convection zone (e.g., in the overshoot layer), as is generally believed.
Indeed, helioseismology data indicate that the solar convection zone
contains two layers of substantial differential rotation. One is the so-called overshoot layer near the bottom of the convection
zone, while the other is located near the solar surface. There are
solar dynamo models with the dynamo action concentrated in the upper
layer of differential rotation \citep[say,][]{Br05}; however, models
with a deep location of the dynamo-active region are more popular.
In order to produce an area with a strong magnetic field on the solar
surface in the form of a sunspot, the dynamo-driven magnetic field has to propagate through a thick dynamo-inactive layer. In contrast, a magnetic field
generated in the upper dynamo-active layer has to propagate
through only a very thin layer. It is therefore reasonable to suggest that the spots produced in the latter case will be larger than those produced in the former case. Of course, this
suggestion needs to be confirmed by a dynamo model that
includes a more or less realistic description of spot formation.
There are various ideas concerning the particular mechanisms of stellar spot formation \citep[e.g.][]{P75, Betal2013, Jetal14, Getal16}, so
such confirmation requires extensive modeling, which is obviously beyond the scope of this paper.
Another helpful point is that the G dwarfs on which very strong flares occur are fast rotators, so one can expect the dynamo drivers there to be substantially stronger than in the Sun, which may also help to produce more magnetic energy. A further modification of the idea is to suppose that the distribution of the dynamo drivers on superflaring stars differs substantially from that on the Sun (e.g., the activity wave propagates mainly towards the surface rather than towards the equator), which produces even more magnetic energy \citep{KO16, KK18, Ketal18}.
Note that the occurrence rate of weak flares (class C and even weaker
subflares) is not related to the
spottedness. These weak flares occur on the Sun almost every day,
and their occurrence rate is almost independent of $A$ (Fig.~1).
We note that the points with error bars in Fig.~4 do not exactly reproduce the data for particular stellar flares. It looks possible to improve the fit by introducing additional parameters into the model; however, our intention is to show that the general shape of the flare distribution can be fitted by a quite simple model and that various flares are related to compatible physical processes. We appreciate, however, that a further complication of the model may also be interesting, since the morphology of stellar spots may differ from that of solar ones. In particular, we suppose here that the spots are more or less homogeneous. Supposing instead that very powerful solar flares are associated with sunspot groups containing many smaller sunspots, which looks plausible according to available observations \citep{M90, Tetal19}, one could obtain a lower estimate for the point with $\log A=-3.0$ in Fig.~4.
To summarize, our results are rather expected: it looks plausible that a larger spottedness gives more powerful stellar flares. Moreover, the very fact that stellar activity cycles can be observed by contemporary observational methods means that there are stars with a spottedness substantially larger than observed on the Sun. We note, however, that such expected results must be supported by detailed argumentation, which is presented in this paper.
Thus, the problem of the sharp difference in energy between solar and stellar flares does not require a revision of the flare models. This difference is due to the fact that the efficiency of spot formation decreases with the age of the star and with the increase of its rotation period. This issue was studied in detail from a theoretical point of view by \cite{Pip21} (see also the numerous references therein). Observational data also confirm this dependence for a large number of stars \citep{Tetal21}.
Here we concentrate mainly on G stars; however, M-dwarf data are obviously of interest in this respect, see \cite{Netal17} and subsequent papers.
\begin{acknowledgments}
DDS acknowledges the support of the BASIS foundation, grant 21-1-1-4-1.
The authors acknowledge the financial support of the Ministry of Science and Higher Education of the Russian Federation, programs 075-15-2020-780 (VO, MK) and 075-15-2022-284 (DS).
\end{acknowledgments}
\bibliography{sample631}{}
\bibliographystyle{aasjournal}
|
Title:
Signatures and Detection Prospects for sub-GeV Dark Matter with Superfluid Helium |
Abstract: We explore the possibility of using superfluid helium for direct detection of
sub-GeV dark matter (DM). We discuss the relevant phenomenology resulting from
the scattering of an incident dark matter particle on a Helium nucleus. Rather
than directly exciting quasi-particles, DM in this mass range will interact
with a single He atom, triggering an atomic cascade which eventually also
includes emission and thermalization of quasi-particles. We present in detail
the analytical framework needed for modeling these processes and determining
the resulting flux of quasi-particles. We propose a novel method for detecting
this flux with modern force-sensitive devices, such as nanoelectro-mechanical
system (NEMS) oscillators, and derive the sensitivity projections for a generic
sub-GeV DM detection experiment using such sensors.
| https://export.arxiv.org/pdf/2208.14474 |
\title{Signatures and Detection Prospects for sub-GeV Dark Matter with Superfluid Helium}
\author[a,b]{Yining You,}
\author[a]{Jordan Smolinsky,}
\author[a]{Wei Xue,}
\author[a]{Konstantin Matchev,}
\author[a]{Keegan Gunther,}
\author[a]{Yoonseok Lee,}
\author[a]{Tarek Saab}
\affiliation[a]{Department of Physics, University of Florida, Gainesville, FL 32611, USA}
\affiliation[b]{Bard High School Early College DC, Washington, DC 20019, USA }
\abstract{We explore the possibility of using superfluid helium for direct detection of sub-GeV dark matter (DM). We discuss the relevant phenomenology resulting from the scattering of an incident dark matter particle on a Helium nucleus. Rather than directly exciting quasi-particles, DM in this mass range will interact with a single He atom, triggering an atomic cascade which eventually also includes emission and thermalization of quasi-particles. We present in detail the analytical framework needed for modeling these processes and determining the resulting flux of quasi-particles. We propose a novel method for detecting this flux with modern force-sensitive devices, such as nanoelectro-mechanical system (NEMS) oscillators, and derive the sensitivity projections for a generic sub-GeV DM detection experiment using such sensors.
}
\emailAdd{youy@ufl.edu}
\emailAdd{jsmolinsky@ufl.edu}
\emailAdd{weixue@ufl.edu}
\emailAdd{matchev@ufl.edu}
\emailAdd{kgunther@ufl.edu}
\emailAdd{ysl@ufl.edu}
\emailAdd{tsaab@ufl.edu}
\section{Introduction}
Although the existence of dark matter (DM) has been definitively supported by gravitational evidence \cite{Zwicky:1933gu,Bertone:2016nfn,Arbey:2021gdg},
its nature is still one of the biggest mysteries in modern physics.
Recent theoretical developments \cite{Knapen:2017xzo,lin2019tasi} and tighter exclusion limits from both direct detection experiments \cite{Agnese_2019, Armengaud_2019, Aprile_2019, Aprile_2018, Alkhatib_2021, Agnes_2018, Abdelhameed_2019} and the Large Hadron Collider (LHC) at CERN \cite{Behr:2022tyz} are increasingly pointing towards DM being lighter than first thought, with mass less than $\sim$1 GeV. This motivates the design of new direct detection experiments which specifically target the sub-GeV and sub-MeV DM mass range \cite{alexander2016dark,battaglieri2017us,Essig:2022dfa}.
Direct detection experiments achieve the greatest sensitivity when the fundamental excitation of the target material corresponds to the recoil energy or momentum scale from the scattering dark matter. Therefore, superfluid $^4$He, with a spectrum of quasi-particle excitations with momenta $\lesssim {\rm keV}$ \cite{PhysRevB.103.104516,landau1987statistical}, becomes an excellent target material candidate for detecting very light dark matter \cite{Hertel:2018aal,Knapen:2016cue,Schutz:2016tid,Acanfora_2019,Baym:2020uos,Caputo:2019cyg,Caputo:2019xum,Caputo:2019ywq,Caputo:2020sys}.
Although superfluid $^4$He has been considered as a target for neutrino and dark matter studies since the 1980s \cite{Lanou:1987eq,Huang:2007jh,bandler1993projected,Bradley:1996cu,Winkelmann:2006pw,Winkelmann:2006rg,Lanou:1988iq,Bandler:1991ep,Bandler:1992zz,Adams:1996ge}, its experimental application developed swiftly in recent years as the particle physics community refocused its interest on the search for light dark matter candidates \cite{alexander2016dark,battaglieri2017us,lin2019tasi,Bottino:2002ry,Bottino:2003cz,Shelton:2010ta,Feng:2008ya,Foot:2008nw,CMS:2012lmn,Fermi-LAT:2011vow}. One effort in applying superfluid $^4$He is the search for light Weakly Interacting Massive Particles (WIMPs) of mass below 10 GeV \cite{Guo:2013dt,Ito:2013cqa,Osterman:2020xkb}. Due to the kinematics, the minimum velocity of light WIMPs required for elastic nuclear recoil is fairly low for Helium compared to heavier materials like Xenon and Germanium, and this fact could offer some additional sensitivity beyond that of the Xenon experiment. Other efforts have focused on the search for light dark matter candidates with masses below MeV \cite{Schutz:2016tid,Knapen:2016cue,Acanfora_2019,Caputo:2019cyg,Baym:2020uos}. They utilize the fact that the fundamental excitation (quasi-particle) spectrum of the superfluid $^4$He matches the recoil energy scale or momentum scale of the scattering dark matter. Therefore, each dark matter scattering event leads to the production of one or two phonon quasi-particles in the superfluid, for which the event rates and cross sections can be derived analytically. However, the practical detection of single excitations in the superfluid remains an experimental challenge.
Complementing these previous proposals, in earlier theoretical work \cite{Matchev:2021fuw} we studied the possibility of using superfluid $^4$He to detect DM in the {\it sub-GeV} mass range, i.e., for DM masses between 1 MeV and 1 GeV.\footnote{For a calculation of the multi-phonon production in a crystal target over the whole keV to GeV DM mass range, see \cite{Campbell-Deem:2022fqm}.}
The DM in the local halo of our Milky Way has a typical velocity of order $10^{-3}c$ \cite{Kavanagh:2016xfi,Lee:2012pf,Radick:2020qip,Necib:2018igl,Ibarra:2017mzt}. Therefore, a DM particle with mass above 1 MeV has a de Broglie wavelength smaller than $\sim {\cal O}(1) \textup{~\AA}$, the distance between helium atoms in the superfluid. As a result, the DM initially scatters with a single helium atom rather than producing multiple phonon quasi-particles. As we shall elaborate, this mass range conveniently produces a predominantly neutral atomic cascade, which has the advantages of (a) increasing the yield of quasi-particles and improving the sensitivity compared to that in the sub-MeV range; and (b) simplifying the modeling and projections of the experimental reach.
The basic physics processes are illustrated in the pentaptych of figure~\ref{fig:schematicplot}. In the first panel, the recoiling helium atom inherits the $\sim$ MeV momentum scale of the DM. It will proceed and scatter against other helium atoms, which themselves will scatter in turn, and so on, producing an avalanche of atoms (second panel). At the end of this atomic cascade, the initial recoil momentum has been distributed over a large number of daughter atoms moving isotropically with respect to the initial scattering location. These atoms have a de Broglie wavelength comparable to or larger than the inter-atomic distance of $\sim {\cal O}(1) \textup{~\AA}$. At this level, the $2\to2$ elastic scattering becomes subdominant, and each low-momentum atom interacts with the superfluid background and radiates quasi-particle excitations (including but not limited to phonons), as shown in the third panel. In our previous theoretical work, we focused on constructing an effective field theory (EFT) to describe this process \cite{Matchev:2021fuw}. Here we build a numerical simulation which explicitly models the (relevant processes in the) atomic cascade. We will find that the quasi-particles are produced in a small region $\sim {\cal O}(1)$ nm around the DM impact, and subsequently thermalize (fourth panel). Using the thermal distribution of quasi-particles, we can trace the transport of momentum through the superfluid and derive the momentum imparted to a generic nanoelectro-mechanical system (NEMS) sensor suspended within (fifth panel). This allows us to derive the sensitivity projections for a generic sub-GeV DM detection experiment using a NEMS sensor.
The organization of our paper is as follows. In \cref{sec:DMscatterHe}, we review the elastic scattering between a DM particle and an initial helium atom; the calculation produces the helium recoil rate profiles for DM of different masses. In \cref{sec:otherprocesses}, we review other scattering processes that may happen in the superfluid and explain why they are subdominant in the sub-GeV regime. In \cref{sec:HeCascade}, we review the well-known quantum mechanical treatment of the helium-helium atomic cascade. In \cref{sec:HeEmitsQP}, we review the result of our previous theory proposal on the Lagrangian construction for helium atoms emitting quasi-particles; the calculation provides a quasi-particle production rate for a helium atom of arbitrary momentum. In \cref{sec:IntofQPs}, we review the parameters of the quasi-particles and show that they are thermalized by the time all quasi-particles have been radiated from the slow helium atoms. In \cref{sec:Thermalization}, we determine the temperature of the thermalized quasi-particle system by two different methods --- an analytical approximation and a Monte Carlo simulation. In \cref{sec:flux&force}, we calculate the momentum signal on the oscillator sensor in a realistic spatial configuration and present the experimental constraints on the DM-nucleus coupling strength. These sections are chronologically ordered, with each section feeding a particle profile to the next (see \cref{fig:schematicplot}). In \cref{sec:conclude}, we summarize our findings.
\section{\bf Dark matter scattering off Helium atoms}
\label{sec:DMscatterHe}
\subsection{DM elastic scattering}
We begin by considering the DM scattering rate off helium atoms via elastic nuclear recoil, which is the dominant process in the sub-GeV DM mass range. As shown in \cref{fig:schematicplot}, the superfluid Helium response to a dark matter scattering event depends only on the momentum of the initial recoiling He atom and is agnostic to the microscopic physics of the dark matter sector. For the purposes of our analysis, in this section we shall introduce a simple toy model of dark matter interactions which will be used to derive experimental sensitivity limits in \cref{sec:flux&force}. Suppose that a fermionic dark matter particle $\chi$ couples to the nucleon $n$ (we assume equal couplings to neutrons and protons) via a heavy scalar mediator particle $\phi$,
\begin{equation}
\mathcal{L}_\text{int}=g_\chi \, \phi\, \bar{\chi}\chi+g_n\, \phi \, \bar{n}n.
\end{equation}
The differential DM-Helium cross section is
\begin{equation}
\frac{ {\rm d} \, \sigma_{\chi\rm{\,He}}}{ {\rm d} {\bf q}^2}
= \frac{ A^2 g_\chi^2 g_n^2} {4 \pi v^2} \frac{F_n^2(\textbf{q}^2) } {(\textbf{q}^2+m_\phi^2)^2}
\simeq
\frac{ A^2 g_\chi^2 g_n^2} { 4\pi v^2} \frac{1 } {m_\phi^4} \, ,
\label{eq:DM-He-Sigma}
\end{equation}
where $A=4$ is the mass number of helium. To arrive at the last result, we have taken the massive mediator limit $m_\phi\gg q_{\max}$ and approximated the nuclear form factor as $F_n(\textbf{q}^2)\to 1$, which is justified by the fact that the inverse momentum of a sub-GeV mass DM,
$\hbar/q$, is much larger than the nuclear size of $\sim$ fm.
The differential nuclear recoil rate ${\rm d}R / {\rm d} \mathbf{q}^2$
is linearly proportional to the number of target helium atoms $N_{\rm He}$ and the local DM density $\rho_\odot=0.3$ GeV/cm$^3$,
and depends on the DM velocity distribution $f_\chi(\textbf{v})$ and the differential cross section (\cref{eq:DM-He-Sigma}),
\begin{eqnarray}
\frac{dR}{d\textbf{q}^2} &=& \frac{\rho_\odot}{m_\chi} N_\text{He} \, \int d^3v f_\chi(v) v \, \frac{d\sigma_{\chi\text{He}}}{d\textbf{q}^2}
\nonumber \\
&=& \frac{\rho_\odot}{m_\chi} N_\text{He} \, \int d^3v f_\chi(v) \, \frac{A^2 \, \sigma_{\chi n} }{4 v \mu_{\chi n}^2 } \ ,
\label{eq:differentialrate}
\end{eqnarray}
where in the second line
we have introduced the DM-nucleon reduced mass, $ \mu_{\chi n}$, and the
DM-nucleon cross section $\sigma_{\chi n} = \frac{g_\chi^2 g_n^2 \mu_{\chi n}^2}{ \pi m_\phi^4}$.
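The substitution of $\sigma_{\chi n}$ into the first line of the rate formula can be verified symbolically (a minimal sympy sketch; symbol names are ours, and the heavy-mediator limit with $F_n\to1$ is assumed as in \cref{eq:DM-He-Sigma}):

```python
import sympy as sp

g_chi, g_n, m_phi, mu, v, A = sp.symbols("g_chi g_n m_phi mu v A", positive=True)

# Heavy-mediator differential cross section (first line of the rate integrand, times v):
dsigma = A**2 * g_chi**2 * g_n**2 / (4 * sp.pi * v**2 * m_phi**4)

# DM-nucleon cross section as defined in the text:
sigma_chi_n = g_chi**2 * g_n**2 * mu**2 / (sp.pi * m_phi**4)

# Second line of the rate: v * dsigma == A^2 * sigma_chi_n / (4 * v * mu^2)
assert sp.simplify(v * dsigma - A**2 * sigma_chi_n / (4 * v * mu**2)) == 0
```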
The DM velocity distribution $f_\chi(\textbf{v})$ in our local frame is taken as the Maxwell-Boltzmann distribution with an escape velocity cutoff $v_{\text{esc}}$,
\begin{equation}
f_{\chi}(\mathbf{v})=\frac{\Theta(v_{\text{esc}}-|\mathbf{v}+\mathbf{v}_{e}|)}{N(v_{0},v_{\text{esc}})}\exp\left(-\frac{(\mathbf{v}+\mathbf{v}_{e})^{2}}{v_{0}^{2}}\right),
\label{eq:MB distribution}
\end{equation}
where the normalization factor
$$ N(v_{0},v_{\text{esc}})=\pi^{3/2}v_{0}^{3}
\left[\text{erf}\left(\frac{v_{\text{esc}}}{v_{0}}\right)-\frac{2}{\sqrt{\pi}}\frac{v_{\text{esc}}}{v_{0}}\exp\left({-\frac{v_{\text{esc}}^2}{v_{0}^2}}\right)\right]$$
ensures that $\int d^3 v f(v) = 1$.
The Heaviside function in \cref{eq:MB distribution} constrains the DM speed in the galactic frame to be below the escape velocity. Here we choose
the velocity dispersion $v_{0} = 220\ \text{km/s}$,
the velocity of the Earth $v_{e} = 240\ \text{km/s}$, and the escape velocity $v_{\text{esc}} = 500\, \text{km/s}$.
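As a cross-check of the normalization, the integral $\int d^3v\, f_\chi(\mathbf{v})$ can be evaluated numerically. The sketch below uses standard \texttt{scipy} routines; the change of variables to the galactic-frame speed $u=|\mathbf{v}+\mathbf{v}_{e}|$, in which the distribution is isotropic, makes the integral one-dimensional:

```python
import numpy as np
from scipy import integrate
from scipy.special import erf

v0, vesc = 220.0, 500.0   # km/s, as chosen in the text

# Closed-form normalization N(v0, vesc); note the 2/sqrt(pi) in the
# second term, as in the standard truncated Maxwell-Boltzmann result.
x = vesc / v0
N = np.pi**1.5 * v0**3 * (erf(x) - 2 / np.sqrt(np.pi) * x * np.exp(-x**2))

# In the galactic frame u = |v + v_e| the distribution is isotropic,
# so the 3D integral reduces to a radial one (v_e drops out).
integral, _ = integrate.quad(
    lambda u: 4 * np.pi * u**2 * np.exp(-(u / v0)**2) / N, 0.0, vesc)
assert abs(integral - 1.0) < 1e-6
```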
The recoil momentum of the helium atom is the essential quantity governing the spectrum of the produced quasi-particles. Using eqns.~(\ref{eq:DM-He-Sigma})--(\ref{eq:MB distribution}), we obtain the differential helium recoil rate shown in \cref{fig:DM-Helium recoil}. We note that heavier dark matter particles imply harder recoil spectra.
\subsection{Other DM scattering signatures: direct production of quasi-particles and inelastic scattering}
\label{sec:otherprocesses}
In addition to elastic scattering off individual helium atoms, there are several other DM-induced processes, which are subdominant in the MeV--GeV DM mass range targeted in this paper, but may become dominant for DM masses either below an MeV or above a GeV. In the sub-MeV mass range, the DM de Broglie wavelength is larger than several~\AA, spanning several helium atoms in the liquid.
Such a light DM particle could directly produce quasi-particle excitations via
coherent scatterings.
At higher masses above a GeV, the exchange momentum is large enough to trigger excitation and ionization of helium.
\paragraph{Coherent quasi-particle production.}
The general scattering rate involving both coherent and incoherent quasi-particle production can be derived using the many-body quantization method \cite{Matchev:2021fuw,Knapen:2016cue}
\begin{equation}
\frac{dR}{d\textbf{q}^2\, d\omega} = \frac{\rho_\odot}{m_\chi} N_\text{He} \, \int d^3v f(v) \, \frac{A^2 \, \sigma_{\chi n} }{4 v \mu_{\chi n}^2 }\, S(\textbf{q},\omega) \ ,
\label{eq:directQP}
\end{equation}
where the form factor $S(\textbf{q},\omega)$ is the Dynamic Structure Function (DSF) of the superfluid. At low momentum, $q \sim {\rm keV}$, the DSF is measured from the energy loss of neutrons in neutron scattering experiments \cite{Silver:1989zz}. At high momentum, $q \gg {\rm keV}$, the scattering is
mainly incoherent: it is initiated by a nuclear recoil, and quasi-particles are produced by the subsequent helium radiation.
The incoherent part of the DSF integrates to unity, $\int d\omega\, S_{\rm inc}(\textbf{q},\omega) = 1$ \cite{griffin1993excitations},
so that the quasi-particle production rate is identical to the nuclear recoil rate in eq.~(\ref{eq:differentialrate}), as expected.
In principle, the DSF $S(\textbf{q},\omega)$ can be applied to study DM scattering at any momentum, but
at high momentum $S(\textbf{q},\omega)$ is unknown and it is practical to use the nuclear recoil formula eq.~(\ref{eq:differentialrate}). The coherent scattering becomes dominant at low momentum, where the measured or simulated DSF \cite{Silver:1989zz,campbell2015dynamic} could be employed
to evaluate the relevant quasi-particle production rate. The production rate of phonon quasi-particles can also be derived
using either the impurity method \cite{landau1941theory,Matchev:2021fuw} or effective field theory \cite{Acanfora_2019,Nicolis_2018,PhysRevLett.119.260402}.
To compare elastic scattering with coherent emission of quasi-particles, we show their respective event rates in \cref{fig:DMtoPhonon}. The nuclear recoil rate (blue dashed line) dominates for masses above $\sim 1$ MeV, while coherent emission (green solid line) dominates for sub-MeV mass DM. The nuclear scattering rate is derived from eq.~(\ref{eq:differentialrate}) by integrating over the momentum ${\bf q}^2$ starting from $q_{c} = 4.5 \, {\rm keV}$, assuming that below $q_{c}$ the DM scatters coherently with the superfluid.
The coherent scattering rate is derived from \cref{eq:directQP} by integrating over ${\bf q}^2$ up to $q_{c}=4.5$ keV, with the DSF data given in \cite{campbell2015dynamic}. The DSF $S(\textbf{q},\omega)$ peaks at $ \omega \sim {\rm meV}$ and decreases quickly at high energy, because producing a large
number of quasi-particles from a single coherent scattering is suppressed by couplings and phase space. The exchanged momentum in coherent scattering is restricted up to $q_c$, while the momentum of nuclear recoil is $q \simeq m_\chi v $.
Then for masses larger than MeV the coherent rate scales as $1/m_\chi^3$, while the incoherent rate goes only like $1/m_\chi$ and is therefore dominant at high mass. Furthermore, in each event, the coherent scattering generates ${\cal O}(1)$
quasi-particles, but the incoherent scattering could produce many more quasi-particles, depending on the DM mass.
\paragraph{Charge exchange, ionization and excitation processes.}
In the mass range above $\sim 0.1$ GeV, the DM kinetic energy and the helium nuclear recoil energy exceed $100$ eV, which is sufficient to excite or ionize a helium atom. Even accounting for the interactions among helium atoms,
elastic scatterings still predominate, and neutral helium atoms
constitute the majority of the final products after these processes.
This is illustrated in \cref{fig:Charge exchange + Ionization}, showing the leading cross sections of
charge exchange processes and ionization processes.
After the primary helium atom or ion is produced, it continues interacting with the helium atoms in the superfluid through the processes of charge exchange scattering \cite{PhysRev.109.355,PhysRev.38.1342,PhysRevA.9.2434,cramer1957elastic,Hegerberg_1978,PhysRevA.39.4440,pivovar1962electron,PhysRevA.32.829,PhysRevA.20.1816,Grozdanov_1980,Fulton_1966,hertel1964cross,Stich_1985,RevModPhys.30.1137,WU198857,Hvelplund_1976}, ionization \cite{doi:10.1246/bcsj.49.933,thomas_1927,PhysRev.124.128,PhysRev.135.A1575,PhysRev.109.355,PhysRev.178.271,PhysRevA.36.2585,PhysRevA.63.062717,Shah_1985}, and excitation \cite{osti_4196582,Heer1965ExcitationOH, PhysRev.124.128,Mei:2007jn}.
At the energy scale below $\sim$ keV, the cross sections for neutral helium producing helium ions are much smaller than the reverse processes.
As a result, neutral helium atoms are the dominant final products once these interactions reach equilibrium \cite{Guo:2013dt}.
The recoil energy is dissipated in the following channels:
(1) ionized atom conversion into neutral atoms,
(2) neutral atom cascade, (3) quasi-particle production by atoms, (4) decay of excited atoms into IR photons and the singlet and triplet dimer excimers $\text{A}^1 \Sigma_u^+$ and $\text{a}^3 \Sigma_u^+$. Existing efforts in the literature \cite{Guo:2013dt,Hertel:2018aal,Ito:2011cy,Ito:2013cqa,Adams:1995mk} have led to a clear estimate of the partition of the recoil energy among these channels. Essentially, quasi-particle production is the only relevant process at recoil energies below $20$ eV, and it accounts for the dominant fraction all the way up to $100$ keV.
\section{Helium atom cascade}
\label{sec:HeCascade}
In this section we will discuss the theoretical description and the details of our simulation of the neutral atomic cascade.
We simplify the DM-helium scattering model by considering only the neutral-atom portion of the cascade.
This is justified by the numerical analysis in \cite{Guo:2013dt} and the summary in \cref{sec:DMscatterHe},
which show that the primary recoiling helium atom or ion quickly produces a neutral atomic cascade.
\subsection{Elastic scattering of Helium atoms}
\label{sec:elastic}
A recoiling helium atom with momentum in the keV to MeV range has a de Broglie wavelength smaller than the inter-atomic distance in the superfluid. Therefore, the initially scattered atom triggers a cascade, i.e., a series of $2\to2$ scatterings among helium atoms within the fluid. Because the kinetic energy is comparable to the helium atomic potential \cite{bennewitz1972he,feltgen1973determination,bishop1977low}, the $2\to2$ atomic scattering at this energy scale is non-perturbative. Nonetheless, given a potential function, we may numerically solve the Schr\"odinger equation using a partial wave expansion. Consider the wavefunction $\Psi(\textbf{r})$ as a function of the displacement vector $\textbf{r}$ between two helium atoms. The initial and final asymptotic boundary conditions constrain the wavefunction as follows:
\begin{equation}
\Psi(\boldsymbol{r})|_{r\to\infty}=\exp(ikz)+f_\Psi(\theta,k)\frac{\exp(ikr)}{r}.
\label{eq:helium wave}
\end{equation}
The first term is an initial plane wave, the second term is an outgoing spherical wave weighted with a scattering amplitude $f_\Psi(\theta,k)$, and $k$ is the reduced momentum in the center-of-mass frame (equal to one half of the incoming atom momentum in the lab frame).
Expanding \cref{eq:helium wave} in terms of Legendre polynomials $P_l$, the Schr\"odinger equation of the system can be solved numerically; the scattering amplitude $f_\Psi(\theta,k)$ then depends on the phase shifts $\delta_l(k)$, where $l$ is the angular index of the Legendre expansion. In \cite{Matchev:2021fuw} we calculated the scattering cross section
\begin{equation}
\sigma_{\text{He-He}}(k)=\frac{8\pi}{k^{2}}\sum_{l\in\text{even}}(2l+1)\sin^{2}\delta_{l}(k),
\label{eq:HeCrossSection}
\end{equation}
using the variable phase method \cite{calogero1967variable} and the WKB approximation \cite{miller1969wkb,miller1971additional}. In the next section, we use the total scattering rate to estimate the helium cascade time, which is related to the final shower size and the quasi-particle number density. The differential cross section, which is incorporated in the simulation of the atomic cascade, has the form
\begin{equation}
\frac{d\sigma_{\text{He-He}}}{d\Omega}=\frac{1}{k^2}\left|\sum_{l\in\text{even}}(2l+1)\left[e^{2i\delta_l(k)}-1\right]P_l(\cos\theta)\right|^2.
\label{eq:HeCSDiff}
\end{equation}
For the convenience of comparison with the next section, in \cref{fig:HeRad} we plot the experimental total cross section using data from \cite{feltgen1973determination} (red dashed curve).
\subsection{Quasi-particle emission from Helium atoms}
\label{sec:HeEmitsQP}
The elastic scatterings distribute the energy among more and more helium atoms, decreasing their individual momenta.
When a helium atom's momentum drops below $10$ keV, the atom's de Broglie wavelength becomes comparable to the inter-atomic distance of the superfluid. The moving atom then scatters collectively against the surrounding atoms in the fluid and produces quasi-particles.
Unlike the case of sub-MeV dark matter directly producing quasi-particles \cite{Schutz:2016tid,Knapen:2016cue,Acanfora_2019,Caputo:2019cyg}, the helium emission
process is non-perturbative. In \cite{Matchev:2021fuw} we proposed an effective $U(1)$ current-current coupling between a helium atom and the superfluid:
\begin{equation}
\mathcal{L}_{JJ}=\lambda_{1}\frac{1}{m_{{\rm He}}\Lambda}J^{0}J_{{\rm He}}^{0}+\lambda_{2}\frac{m_{{\rm He}}}{\Lambda^{3}}J^{i}J_{{\rm He}}^{i} \, .\label{eq:HeJJ}
\end{equation}
$\Lambda$ is the cutoff scale of the superfluid EFT. We can estimate $\Lambda$ as the inverse of the inter-atomic spacing, $\Lambda \sim {\cal O}(1) \, {\rm keV}$.
For the phonon, the cutoff is related to
the energy density of the superfluid $\rho$ and the phonon sound speed $c_s$, $\Lambda = (\rho \, c_s)^{1/4} \simeq 0.83 \, {\rm keV}$. $J^0$ is the number density operator of the superfluid, and its normalized matrix element is the Dynamic Structure Function (DSF) \cite{Baym:2020uos,Knapen:2016cue,silver1988theory}. The general form of the total emission rate of multiple quasi-particles is as follows:
\vspace{2mm}
\begin{equation}
\Gamma_\text{inel} = \frac{2\pi\rho}{m_\text{He}} \int \frac{d^{3}k} {(2\pi)^{3}}\left(\frac{\lambda_1}{m_{\text{He}}\Lambda}
+ \frac{\lambda_2 m_{\text{He}}}{\Lambda^3} \frac{\boldsymbol{v}_{\text{He}}\cdot\boldsymbol{k}\omega}{k^2}\right)^{2}S(\boldsymbol{k},\omega),
\label{eq:multi-particle radiation}
\end{equation}
where $\rho$ is the superfluid mass density, $v_\text{He}$ is the initial helium velocity, and $S(\textbf{k},\omega)$ is the DSF.
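The quoted value of the phonon cutoff $\Lambda = (\rho\, c_s)^{1/4}$ can be checked by restoring units; the superfluid density $\rho \simeq 0.145\ \text{g/cm}^3$ below is our assumed input:

```python
# Unit check of Lambda = (rho * c_s)^(1/4) ~ 0.83 keV.
rho_g_cm3 = 0.145              # superfluid helium mass density (assumed)
g_to_eV = 5.61e32              # 1 gram in eV, via E = m c^2
cm_inv_to_eV = 1.973e-5        # hbar c = 1.973e-5 eV cm
c_s = 240.0 / 2.998e8          # sound speed in units of c

rho_eV4 = rho_g_cm3 * g_to_eV * cm_inv_to_eV**3   # energy density in eV^4
Lambda_keV = (rho_eV4 * c_s)**0.25 / 1e3
assert abs(Lambda_keV - 0.83) < 0.05
```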
For the purpose of comparing the elastic atomic scattering cross section with that for quasi-particle emission, in \cref{fig:HeRad} we show the emission rate for {\em single} quasi-particles. This is justified because our numerical integration of the phase space for multiple production, $\sim\int d^3k\,S(k,\omega)$, and for single production, $\sim\int d^3k\, S(k)\,\delta (\omega-\omega(k))$, with currently available data shows that single quasi-particle production is dominant. In \cref{fig:HeRad} we show results for several combinations of the coupling parameters: $\lambda_1=4\pi,\lambda_2=0$ (blue), $\lambda_1=0,\lambda_2=4\pi$ (orange), and $\lambda_1=4\pi,\lambda_2=4\pi$ (green).
Although the values of $\lambda_1$ and $\lambda_2$ are unknown, the system in this regime is strongly coupled, so we take them to be $4\pi$.
In the simulation we choose $\lambda_1=4\pi$, $\lambda_2=0$; once energy and momentum conservation and the subsequent thermalization of the produced quasi-particles are taken into account, the final quasi-particle flux is not sensitive to the specific couplings.
Physically, the behavior of the emission rate follows from two requirements: (1) helium atoms predominantly radiate quasi-particles rather than elastically scatter with each other in the superfluid; (2) quasi-particle modes only exist below $\sim 7$ keV. The first requirement implies that the vector coupling in \cref{eq:HeJJ} must be present. The second restricts the momentum integration to an upper cutoff of $\sim 7$ keV. Below the cutoff, the emission rate increases with the momentum of the atom because of the phase space enhancement. Beyond the cutoff, the emission rate decreases because the phase space integration in \cref{eq:multi-particle radiation} has reached its maximum, leaving powers of the helium atom momentum only in the denominator of the result. The curves in \cref{fig:HeRad} thus have a cusp at $7$ keV, reflecting this different behavior below and above the cutoff.
\subsection{Monte Carlo simulation of the Helium cascade and radiation of quasiparticles}
\label{sec:MCsimulation}
In this section, we present our simulation results for the development of the helium cascade and the radiation of quasi-particles from fast-moving helium atoms. We do not include the subsequent quasi-particle decays and quasi-particle self-interactions, whose treatment is postponed to the next section.
Because of the large number of daughter particles which must necessarily be produced to conserve both energy and momentum, little information about the overall final structure of the shower may be gleaned from a direct analytical treatment. However, because the individual interactions of the shower constituents are quantum mechanical and probabilistic, the problem is amenable to a Monte Carlo approach. In particular the distance that each helium atom travels between interactions, the type of interaction it experiences, and the subsequent evolution of its daughter particles are all properly described by probability distributions (all necessary results were derived and/or collected in \cite{Matchev:2021fuw}), which we exploit to generate ensembles of simulated events.
The momentum-dependent cross section $\sigma_\text{el}$ of {\em elastic} atomic helium scattering is known from experiment \cite{feltgen1973determination} (see red line in \cref{fig:HeRad}). On the other hand, the rate of {\em inelastic} emission of quasi-particles has been computed in \cite{Matchev:2021fuw} and is given by \cref{eq:multi-particle radiation}. This rate can be cast as an inelastic ``cross section'' according to the heuristic
\begin{equation}
\sigma_\text{inel} = \frac{\Gamma_\text{inel}}{n_\text{He} v} \ ,
\end{equation}
where $n_\text{He}=\rho/m_\text{He}$ denotes the number density of helium atoms in the superfluid.
The total cross section of these processes defines a mean free path
\begin{equation}
\ell_0 = \frac{1}{n_\text{He} (\sigma_\text{inel} + \sigma_\text{el})}
\end{equation}
and thus a probability distribution
\begin{equation}
P_\text{interaction}(\ell) \propto \exp\left[-\frac{\ell}{\ell_0}\right]
\end{equation}
that describes the distance $\ell$ an energetic helium atom is expected to travel before interacting with the superfluid to produce either another energetic helium atom or a quasi-particle. The respective probabilities for each type of daughter particle are given by
\begin{equation}
P_\text{el} = \frac{\sigma_\text{el}}{\sigma_\text{el} + \sigma_\text{inel}} \ , \hspace{0.5in} P_\text{inel} = \frac{\sigma_\text{inel}}{\sigma_\text{el} + \sigma_\text{inel}} \ .
\end{equation}
Note that the cross sections and consequently the length scale of the probability distribution are all functions of the helium momentum.
At this point we have all the ingredients needed to describe the structure of the Monte Carlo simulation: each simulated event begins as a single helium atom at the origin with its momentum oriented along the $z$-axis. Using this momentum we evaluate the cross sections of both processes and sample the resulting distribution to determine what type of interaction the helium atom experiences and where this interaction occurs. The momenta of daughter particles are sampled from the appropriate differential rates provided in sections~\ref{sec:elastic} and \ref{sec:HeEmitsQP} above, and the cross sections of those daughter particles are evaluated anew. In this way the simulation proceeds recursively generating new daughter particles and tracking their trajectories between collisions. This process is depicted schematically in the second and third panels of \cref{fig:schematicplot}.
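A minimal sketch of a single Monte Carlo step is shown below; the momentum-dependent cross sections and the helium number density are illustrative placeholders standing in for the experimental $\sigma_\text{el}$ and the computed $\sigma_\text{inel}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_He = 0.0218   # atoms per cubic Angstrom in liquid helium (approximate)

# Placeholder momentum-dependent cross sections in Angstrom^2:
def sigma_el(p_keV):
    return 20.0 / (1.0 + (p_keV / 100.0)**2)

def sigma_inel(p_keV):
    return 5.0 * np.exp(-p_keV / 20.0)

def step(p_keV):
    """Sample where the atom interacts and which process occurs."""
    s_el, s_in = sigma_el(p_keV), sigma_inel(p_keV)
    ell0 = 1.0 / (n_He * (s_el + s_in))        # mean free path
    ell = rng.exponential(ell0)                # P(ell) ~ exp(-ell / ell0)
    kind = 'elastic' if rng.random() < s_el / (s_el + s_in) else 'radiate'
    return ell, kind

ells, kinds = zip(*(step(50.0) for _ in range(20000)))
ell0_50 = 1.0 / (n_He * (sigma_el(50.0) + sigma_inel(50.0)))
assert abs(np.mean(ells) - ell0_50) < 0.05 * ell0_50
```

A full event applies this step recursively to every daughter particle, re-evaluating the cross sections at each new momentum.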
In \cref{fig:spectra} we show typical quasi-particle spectra from our simulation. In the first (second, third, fourth) row of panels we show distributions of quasi-particle momenta (energies, velocities, $\cos\theta$), for an initial He atom momentum of 50 keV (left column) and 500 keV (right column). The top panels show that the number of phonons is significantly lower (compared to the number of rotons) due to the phase space suppression. Nonetheless, quasi-particle decays to softer modes, not included in our simulation, will eventually populate that region. The panels in the second row reveal that the energy spectrum starts with a peak at the gap energy $\Delta=0.75$ meV which corresponds to the local minimum of the roton dispersion. This is also reflected in the velocity graphs on the third row, where the peak around zero velocity is composed of slow rotons and maxons, as well as slow quasi-particle modes above $\sim 5$ keV. The plots in the last row demonstrate that the resulting distribution is almost isotropic. The discussion in \cref{sec:IntofQPs} below will show that all these quasi-particles will become thermalized. Therefore, instead of using the generated quasi-particles from \cref{fig:spectra}, in the later sections we shall sample the quasi-particle spectrum from a Bose-Einstein distribution.
\section{Effects of interactions among quasi-particles}
In this section we investigate the effects of interactions between the quasi-particles produced at the end of the cascade. These interactions may thermalize the ensemble of quasi-particles. In order to determine whether this actually occurs, we first perform a back-of-the-envelope estimate comparing the quasi-particle interaction rate $n\sigma v$ and the quasi-particle production rate $\Gamma_\text{inel}$, for a given recoil energy or momentum. After that, we present the theoretical determination of the last scattering surface and the thermal temperature. More accurate results from our simulation are presented in \cref{sec:simulationT}.
\subsection{Theoretical overview}
\label{sec:IntofQPs}
We consider the two well-known types of excitations with analytic dispersion relations -- phonons and rotons. The ``phonon'' refers to a quasi-particle with momentum below $\sim 1.2$ keV; its dispersion is linear with a small cubic correction:
\begin{equation}
E_{\text{phonon}}\simeq c_{s}(p-\frac{\gamma}{\Lambda^{2}}p^{3}),\label{eq:PhononDis}
\end{equation}
where $c_{s}=240\,\text{m}/\text{s}$ is the sound speed, $\Lambda=(\rho c_{s})^{1/4}$
is the UV scale of the superfluid, and $\gamma\sim {\cal O}(1)$ is a dimensionless
parameter such that $\gamma/\Lambda^{2}=0.27\textup{~\AA}^2$. The ``roton'' is a quasi-particle of momentum $\sim p_\ast=3.84$ keV at the local minimum of the dispersion curve. Its energy is parameterized as
\begin{equation}
E_{\text{roton}}\simeq\Delta+\frac{(p-p_{*})^{2}}{2m_{*}},\label{eq:RotonDis}
\end{equation}
where $m_{\ast}\simeq0.16m_{\text{He}}$
is the effective roton mass.
Several relevant interactions are well studied in the literature \cite{landau1941theory,landau1949theory,landau1987statistical,Nicolis_2018,1965511,PhysRevLett.119.260402,Matchev:2021fuw}, including phonon decay, phonon 2-2 scattering, roton 2-2 scattering, and phonon-roton 2-2 scattering\footnote{Phonon-roton scattering is the most complicated among these processes. There is existing controversy over the leading orders of cross section in the scenario that the initial roton is not at the exact bottom of the dispersion curve. In a following theory project, we will elaborate on the novel development of the last process, and discuss the different results involving initial phonon and roton states.}. In Table~1 of \cite{Matchev:2021fuw}, we listed the main results for these interaction cross sections. Using the parameter values of eqns.~(\ref{eq:PhononDis}) and (\ref{eq:RotonDis}), we find that the cross sections are of similar magnitude, $\sigma\,v\sim 10^{-6} - 10^{-7}\ \text{keV}^{-2}$.
With those ingredients, we are now ready to check for thermalization. If, by the end of the atomic cascade, when
all quasi-particles have just been produced, the quasi-particles have already experienced multiple interactions, we can safely conclude that the quasi-particle system is thermalized. We perform a back-of-the-envelope estimate as follows.
From \cref{fig:DM-Helium recoil}, we know that the initial recoil momentum $P_\text{ini}$ of the helium atom triggering the cascade ranges from 1 to $10^3$ keV. According to \cref{fig:HeRad}, when the helium atom momentum drops to below $\sim 20$ keV,
the atom will predominantly start to radiate quasi-particles, thus no longer increasing the number of atoms in the cascade. Therefore, energy conservation implies that a recoil momentum $P_\text{ini}$ will be distributed among $\left(\frac{P_\text{ini}}{20\ \text{keV}}\right)^2$ slower helium atoms. Each of those slow atoms in turn radiates about 100 quasi-particles, assuming quasi-particle energies of $\sim 1$ meV. We take the typical scattering and radiation rate from \cref{fig:HeRad} to be $10^{13}$ s$^{-1}$. The radiation of all quasi-particles then takes $\Delta t \sim 100\times 10^{-13}\ \text{s}=10^{-11}$ s, which is longer than the preceding atomic cascade time because the number of atoms increases exponentially during a cascade. Therefore, we estimate the total time of cascade and radiation to be $10^{-11}$ s.
During this time, the quasi-particles (with velocity ${\cal O}(100)\, \text{m}/\text{s}$) may spread over a distance up to $\Delta R \sim 100\ \text{m}/\text{s} \times 10^{-11}\ \text{s} = 10^{-9} \ \text{m} = 1$ nm. We then estimate the interaction rate $n\sigma v$ for quasi-particle self-interactions as
$$
n\sigma v \sim\frac{\left(\frac{P_\text{ini}}{20\ \text{keV}}\right)^2 \times 100\times \sigma\,v}{\frac{4\pi}{3} (\Delta R)^3} \sim
\left(\frac{P_\text{ini}}{20\ \text{keV}}\right)^2 \times 10^{11}\ \text{s}^{-1}.
$$
We see that the corresponding timescale ranges from $10^{-14}$ s (for $P_\text{ini}\sim 1$ MeV) to $10^{-11}$ s (for $P_\text{ini}\sim 20$ keV). This is shorter than the $10^{-11}$ s needed to produce the quasi-particle shower. Therefore, quasi-particles originating from recoil momenta $P_\text{ini}\gg 20$ keV will interact with each other multiple times, i.e. become fully thermalized, by the time all quasi-particles are produced. For recoil momenta $\lesssim 20 \, {\rm keV}$, only a few quasi-particles are produced, and they interact with each other infrequently. Thus we treat these quasi-particles as free-streaming from the beginning.
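The arithmetic of this estimate is summarized in the short sketch below; the $1$ nm expansion radius and $\sigma v \sim 10^{-6}\ \text{keV}^{-2}$ are the order-of-magnitude inputs quoted above:

```python
import numpy as np

hbar_eVs = 6.582e-16         # hbar in eV s
nm_in_keVinv = 1.0 / 0.1973  # 1 nm in keV^-1 (hbar c = 0.1973 keV nm)

P_ini = 1000.0                              # keV recoil momentum
N_qp = (P_ini / 20.0)**2 * 100              # quasi-particles produced
dR = 1.0 * nm_in_keVinv                     # 1 nm expansion radius
n = N_qp / (4 * np.pi / 3 * dR**3)          # number density in keV^3
sigma_v = 1e-6                              # keV^-2, order of magnitude

rate_keV = n * sigma_v                      # interaction rate in keV
t_int = hbar_eVs / (rate_keV * 1e3)         # convert keV -> eV -> seconds
assert t_int < 1e-11   # much shorter than the shower production time
```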
\subsection{Quasi-particle thermalization}
\label{sec:Thermalization}
In the following, we present our procedure to derive the thermalized distribution of quasi-particles. First, we estimate the last scattering surface of the final quasi-particle \enquote{plasma}, assuming an isotropic distribution. Borrowing the analogous concept from cosmology \cite{Dodelson:2003ft,Weinberg:2008zzc}, at the last scattering surface the quasi-particle interaction rate equals the expansion rate. Modeling the isotropic quasi-particle \enquote{plasma} as a sphere, the last scattering radius $R_{\text{ls}}$ is determined as the radius at which the optical depth is unity:
\begin{equation}
\tau=\int_{R_{\text{ls}}}^{\infty}n\,\sigma\,dr=1,\quad n=\frac{N}{V},
\label{eq:expansion}
\end{equation}
where $N$ is the total number of quasi-particles, $\sigma$ is their interaction cross section, and $V=4\pi r^3/3$.
The solution of the previous equation gives the last scattering radius $R_\text{ls}=\sqrt{3N\sigma/8\pi}$.
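Explicitly, substituting $n = 3N/(4\pi r^3)$ into \cref{eq:expansion} gives
\begin{equation}
\tau=\frac{3N\sigma}{4\pi}\int_{R_\text{ls}}^{\infty}\frac{dr}{r^{3}}
=\frac{3N\sigma}{8\pi R_\text{ls}^{2}}=1
\quad\Longrightarrow\quad
R_\text{ls}=\sqrt{\frac{3N\sigma}{8\pi}}.
\end{equation}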
$N$ is estimated by assuming that all final quasi-particle excitations have energy 1 meV: $N=\frac{P_\text{ini}^2/2m_\text{He}}{1\ \text{meV}}$.
Plugging in the numerical values from the previous subsection, we find the last scattering radius $R_\text{ls}$ to be of order ${\cal O}(1)$ to ${\cal O}(10)$ nm, smaller than the distance between the sensors in realistic detectors. Therefore, the relevant distribution detected by the sensors is the thermalized spectrum of quasi-particles. The number of quasi-particles is not fixed, but can change due to a) decays of unstable phonons (with momenta below $1.2$ keV) to other phonons; b) decays of other quasi-particles to stable quasi-particles with momenta between $1.2$ and $4.6$ keV \cite{Hertel:2018aal,donnelly1981specific}; and c) multiple quasi-particle interactions which can change the number of quasi-particle modes. The chemical potential of the quasi-particles is therefore taken to be zero, as is standard in the literature. We can then express the Bose-Einstein distribution of the quasi-particles as
\begin{equation}
n_\text{B-E}(\vec{p})= \frac{1}{\exp\frac{\omega(p)}{k_B T} - 1}.
\end{equation}
Using energy conservation, the initial recoil energy equals the total energy of the Bose-Einstein ensemble:
\begin{equation}
\frac{P_\text{ini}^2}{2m_\text{He}}=\frac{4}{3}\pi R_\text{ls}^3 \int_0^{4.6\,\text{keV}}\frac{d^3p}{(2\pi)^3}\,\omega(p)\,n_\text{B-E}(p),
\label{eq:solveT}
\end{equation}
where $\frac{4}{3}\pi R_\text{ls}^3$ is the volume of the thermal system. The momentum integration runs from 0 to 4.6 keV because we assume only phonons and stable quasi-particles exist in the thermalized distribution.
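This procedure can be sketched numerically. The dispersion below is a crude piecewise model (linear phonon branch below $1.2$ keV and the roton parabola above it, with no maxon region), and $\sigma=8.64\ \text{keV}^{-2}$ is the simulation value used below; the resulting temperature is illustrative only:

```python
import numpy as np
from scipy import integrate, optimize

m_He = 3.727e6                  # keV, helium-4 mass
c_s = 240.0 / 2.998e8           # phonon sound speed in units of c
Delta, p_star = 0.75e-6, 3.84   # keV: roton gap and roton momentum
m_star = 0.16 * m_He
sigma = 8.64                    # keV^-2, from the simulation

def omega(p):
    # Crude dispersion: phonon branch below 1.2 keV, roton parabola above.
    return np.where(p < 1.2, c_s * p, Delta + (p - p_star)**2 / (2 * m_star))

def temperature(P_ini):
    E = P_ini**2 / (2 * m_He)                      # recoil energy, keV
    N = E / 1e-6                                   # ~1 meV per quasi-particle
    R_ls = np.sqrt(3 * N * sigma / (8 * np.pi))    # keV^-1
    V = 4 * np.pi / 3 * R_ls**3

    def energy(T):
        integrand = lambda p: p**2 * omega(p) / np.expm1(omega(p) / T)
        val, _ = integrate.quad(integrand, 1e-6, 4.6, limit=200)
        return V / (2 * np.pi**2) * val

    # Solve energy(T) = E for T; bracket spans 0.01 meV to 10 meV.
    return optimize.brentq(lambda T: energy(T) - E, 1e-8, 1e-5)

T = temperature(100.0)       # 100 keV recoil momentum
assert 1e-8 < T < 1e-5       # a sub-meV- to meV-scale temperature
```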
\subsection{Monte Carlo simulation and the thermal temperature}
\label{sec:simulationT}
We can use our MC simulations described in \cref{sec:MCsimulation} to provide a numerical estimate for $T$ from the previous subsection. The simulation provides averaged values of the different scattering cross sections between quasi-particles of all types. These cross sections can be used in \cref{eq:expansion} for the calculation of $R_\text{ls}$, which can then be substituted in \cref{eq:solveT} to solve for $T$. The result is shown as the blue solid line in \cref{fig:Thermal T}, for which we have used the value $\sigma=8.64\ \text{keV}^{-2}$ suggested by our simulations. As mentioned in \cref{sec:IntofQPs}, the region of $P_\text{ini}$ below $20$ keV is not covered by our simulation, since there the recoiling helium atom radiates quasi-particles without an atomic cascade, and the number of quasi-particles is insufficient for thermalization.
One potential problem with our simulations is that the expression for the phonon-roton scattering cross section is only valid for phonons that have a much smaller momentum than the roton \cite{Matchev:2021fuw,Nicolis_2018,PhysRevLett.119.260402}. Therefore, we cannot reliably sample the cross section between hard phonons and rotons.
In order to get an idea of the effect from this theoretical uncertainty, we introduce a $k$-factor for the phonon-roton cross section and in \cref{fig:Thermal T} show results for $k=10$ (red dashed line), $k=1$ (green dashed line) and $k=0.1$ (orange dashed line). The width of the shaded band enclosed between the orange and red dashed lines is indicative of the corresponding uncertainty on the derived thermal temperature $T$, which should be kept in mind when discussing projections for the experimental sensitivity below.
\section{Experimental signals}
\label{sec:flux&force}
The previous analysis and simulations are applicable to various types of dark matter searches in superfluid helium.
One such proposal is the HeRALD experiment \cite{Hertel:2018aal}, which tries to detect dark matter scattering events occurring in the bulk of the superfluid by sensing quantum evaporation of helium atoms from the superfluid surface.
Here we study the detection prospects of mechanical oscillators like NEMS, which instead sense the force (or momentum deposit) due to quasi-particles.
For a given momentum threshold and detector size, we deduce the potential to discover the dark matter signal using mechanical oscillators and
compare with the HeRALD experiment, leaving the detailed experimental design and study of the backgrounds for a future work \cite{MSXY}.
\paragraph{Detector setup.} In order to study the quasi-particle flux onto a generic device, we consider the following simple detector model. We assume a large supply of 2D square-shaped sensors of area $a \times a$, placed on a 3D grid inside a superfluid target, with an inter-sensor distance $d$. The distance between an event and the closest sensor is then at most half of a cell diagonal, $\sqrt{3}d/2$. Each sensor is closest to events occurring in the eight cube octants surrounding it. Therefore, we simplify the geometry and study events happening within a cube of volume $d^3$ centered on a sensor (see \cref{fig:IllustrationRad}).
We model detection as quasi-particles free-streaming to the sensor of area $a^2$ from a distance $r$. Assuming the equilibrium quasi-particle ensemble to be isotropic (which is confirmed by our simulations), the probability of a single given particle reaching the sensor surface is the ratio of its solid angle to the detector, $\Omega(\theta,\phi,r,a)$, to the total solid angle $4\pi$. The solid angle $\Omega(\theta,\phi,r,a)$ is that of a leaning pyramid and is evaluated numerically using Mathematica built-in commands. If $P_{\rm ini} > 20 \, {\rm keV}$, the quasi-particles reach thermalization. We then calculate the differential profile $dN/dp$,
the number of quasi-particles with given momentum $p$, according to a thermal distribution:
\begin{equation}
\frac{dN}{dp}=\frac{V}{2\pi^2}\frac{p^2}{\exp{\frac{\omega(p)}{T}}-1},
\label{eq:dn/dp}
\end{equation}
where $V$ is the last scattering volume estimated from \cref{eq:expansion}, while $T$ is the thermal temperature discussed in \cref{fig:Thermal T}. Results for a few representative values of $P_\text{ini}$ are shown in \cref{fig:Thermal Quasi-particle number profile}.
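As an illustration, \cref{eq:dn/dp} can be integrated above a sensor threshold. The dispersion is the same crude phonon-plus-roton parametrization used in our estimates, and the values of $V$ and $T$ below are illustrative placeholders:

```python
import numpy as np
from scipy import integrate

c_s = 240.0 / 2.998e8                     # phonon sound speed, units of c
Delta, p_star = 0.75e-6, 3.84             # keV: roton gap and momentum
m_star = 0.16 * 3.727e6                   # keV, effective roton mass
V, T = 2.0e5, 2.0e-7                      # keV^-3 and keV, illustrative

def omega(p):
    # Crude dispersion: phonon branch below 1.2 keV, roton parabola above.
    return np.where(p < 1.2, c_s * p, Delta + (p - p_star)**2 / (2 * m_star))

def dN_dp(p):
    """Thermal profile of eq. (dn/dp)."""
    return V / (2 * np.pi**2) * p**2 / np.expm1(omega(p) / T)

def N_above(p_min):
    """Quasi-particles with momentum above a sensor threshold p_min (keV)."""
    val, _ = integrate.quad(dN_dp, p_min, 4.6, limit=200)
    return val

# A 1 keV threshold captures only part of the thermal population:
assert 0 < N_above(1.0) < N_above(1e-3)
```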
For $P_{\rm ini} < 20 \, {\rm keV}$, the quasi-particles are free-streaming from their point of origin.
In this case we estimate the number of quasi-particles using energy conservation. Based on the simulation results in \cref{sec:MCsimulation}, we further assume for the flux estimate that most of the quasi-particles are rotons.
\paragraph{${\cal O}(1)$ keV threshold.} Sensors with a momentum threshold of $\sim {\cal O}(1)$ keV are capable of detecting single quasi-particles. Taking a 1 keV threshold as an example, an event counts as detectable as long as at least one quasi-particle reaches the sensor. Detection is more likely when the particles stream from a closer distance and at a near-normal angle to the sensor. The probability that at least one quasi-particle from an event reaches the nearest sensor is:
\begin{equation}
P(\theta,\phi,a,r)\,=\,1-\left(1-\frac{\Omega(\theta,\phi,a,r)}{4\pi}\right)^{N(p_\text{min})}.
\label{eq:PofO1keV}
\end{equation}
Here $p_\text{min}$ is the threshold momentum of the sensor, and $N(p_\text{min})$ is the total number of quasi-particles with momentum above threshold, obtained by integrating \cref{eq:dn/dp}. An example density plot of this probability function is shown in the left panel of \cref{fig:probabilitygraph}. The number of quasi-particles arriving at the sensor decreases as events happen farther from the sensor and away from its normal direction.
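A numerical stand-in for the Mathematica solid-angle evaluation, and the resulting single-hit probability of \cref{eq:PofO1keV}, can be sketched as follows. The sensor is modeled as an $a\times a$ plate in the $z=0$ plane; the grid resolution and function names here are illustrative:

```python
import numpy as np

def solid_angle_square(point, a, n=200):
    """Solid angle subtended by an a x a sensor in the z = 0 plane,
    centered at the origin, seen from `point` = (x0, y0, z0).
    Midpoint-rule version of the oblique-pyramid Omega(theta, phi, r, a)."""
    x0, y0, z0 = point
    u = (np.arange(n) + 0.5) / n * a - a / 2.0   # midpoints across the plate
    X, Y = np.meshgrid(u, u)
    r2 = (x0 - X)**2 + (y0 - Y)**2 + z0**2
    # dOmega = cos(theta) dA / r^2, with cos(theta) = |z0| / r
    return float(np.sum(np.abs(z0) / r2**1.5) * (a / n)**2)

def p_detect(point, a, N_thr):
    """Eq. (PofO1keV): probability that at least one of N_thr
    isotropically emitted quasi-particles hits the sensor."""
    omega = solid_angle_square(point, a)
    return 1.0 - (1.0 - omega / (4.0 * np.pi)) ** N_thr
```

For an on-axis observer the numerical solid angle can be checked against the closed-form result $\Omega = 4\arctan[a^2/(2d\sqrt{4d^2+2a^2})]$.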
Since the DM-Helium scattering rate, eq.~(\ref{eq:differentialrate}), is independent of the event's position, we average the probability $P(\theta,\phi,a,r)$ over the whole sensor cell and call the result the Spatial Efficiency (SE):
\begin{equation}
{\cal E}_S\,=\,\frac{1}{d^3}\int^{d/2}_{-d/2}dx\int^{d/2}_{-d/2}dy\int^{d/2}_{-d/2}dz\,P(\theta,\phi,a,r),
\label{eq:EFofO1keV}
\end{equation}
where the radial distance is $r=\sqrt{x^2+y^2+z^2}$, the zenith angle $\theta=\tan^{-1}(\sqrt{x^2+y^2}/z)$, and the azimuthal angle $\phi=\tan^{-1}(y/x)$. The spatial efficiency thus depends on the recoil momentum $P_\text{ini}$, the sensor distance $d$, and the sensor area $a^2$. Results for the spatial efficiency \cref{eq:EFofO1keV} in the ${\cal O}(1)$ keV threshold scenario are shown in \cref{fig:SpatialEff1}. Comparing the three curves, we notice that the spatial efficiency is generally proportional to the sensor area $a^2$: for a large particle number $N(p_{\min})\gg 1$, the probability (\ref{eq:PofO1keV}) approaches $N(p_{\min})\,\Omega(\theta,\phi,a,r)/4\pi \propto a^2$ whenever this combination is small.
In addition to the nearest sensor, the other sensors further away will also receive some signal, which we roughly estimate to be on the order of $40\%$ (for a threshold of 1 keV), but to be conservative, this correction will not be included in our sensitivity projections below.
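The triple integral in \cref{eq:EFofO1keV} is straightforward to estimate by Monte Carlo. The sketch below is a simplified stand-in (not the paper's code): it uses the far-field approximation $\Omega \approx a^2|z|/r^3$ in place of the exact oblique-pyramid solid angle, and the sample size and parameter values are illustrative:

```python
import numpy as np

def spatial_efficiency(d, a, N_thr, n_mc=200_000, seed=0):
    """Monte Carlo estimate of eq. (EFofO1keV): average the detection
    probability over events uniformly distributed in the d^3 cell.
    Uses the far-field approximation Omega ~ a^2 |z| / r^3 (valid for
    r >> a) instead of the exact solid angle."""
    rng = np.random.default_rng(seed)
    x, y, z = rng.uniform(-d / 2.0, d / 2.0, size=(3, n_mc))
    r = np.sqrt(x**2 + y**2 + z**2)
    # cap at a hemisphere for events very close to the sensor plane
    omega = np.minimum(a**2 * np.abs(z) / r**3, 2.0 * np.pi)
    p = 1.0 - (1.0 - omega / (4.0 * np.pi)) ** N_thr
    return float(p.mean())
```

Doubling the sensor side $a$ quadruples the typical solid angle, which is the origin of the $a^2$ scaling noted above.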
\paragraph{${\cal O}(10)$ keV threshold and beyond.} Sensors with a momentum threshold of $\sim {\cal O}(10)$ keV are incapable of detecting a single quasi-particle. Taking a 10 keV threshold as an example, we estimate that at least 3 quasi-particles must reach the sensor to trigger a detectable event. The probability (\ref{eq:PofO1keV}) thus generalizes to a cumulative binomial distribution:
\begin{equation}
P(\theta,\phi,a,r)\,=\,1-\sum_{m=0}^{2}C_{N}^{m}\left(1-\frac{\Omega(\theta,\phi,a,r)}{4\pi}\right)^{N-m}\left(\frac{\Omega(\theta,\phi,a,r)}{4\pi}\right)^m.
\label{eq:PofO10keV}
\end{equation}
Here $N$ is the total number of quasi-particles, with no cut at $p_\text{min}$. A similar formalism applies to an ${\cal O}(100)$ keV threshold: we estimate that at least 25 quasi-particles are required to trigger a 100 keV sensor, so the cumulative sum in \cref{eq:PofO10keV} runs up to $m=24$. The total number of quasi-particles $N$ must exceed the maximum value of $m$ in the sum, i.e., it must reach the minimum number required to trigger the threshold. For example, fewer than 3 quasi-particles are produced below a 5 keV recoil momentum (whether thermalized or not), so the probability and the spatial efficiency vanish.
Despite these changes, the spatial efficiency follows the same formalism (\ref{eq:EFofO1keV}). As shown in \cref{fig:SpatialEff10}, the spatial efficiency in this scenario is about three orders of magnitude smaller than in the previous case. Moreover, it no longer scales with the sensor area but approximately as $a^3$. It is therefore unnecessary to compute the contribution from sensors farther away, since they cannot compensate for the loss of sensitivity.
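The cumulative binomial probability of \cref{eq:PofO10keV}, including its higher-threshold generalization, can be sketched as follows (an illustrative implementation, not the paper's code):

```python
import math
from math import comb

def p_detect_multi(omega, N, m_need):
    """Eq. (PofO10keV) generalized: probability that at least m_need of
    N isotropically emitted quasi-particles land within solid angle
    omega of the sensor (m_need = 3 for a 10 keV threshold, ~25 for
    a 100 keV threshold)."""
    if N < m_need:               # too few quasi-particles to trip the sensor
        return 0.0
    q = omega / (4.0 * math.pi)  # single-particle hit probability
    return 1.0 - sum(comb(N, m) * q**m * (1.0 - q)**(N - m)
                     for m in range(m_need))
```

The explicit `N < m_need` branch implements the statement that the detectable rate vanishes whenever too few quasi-particles are produced to trip the threshold.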
\paragraph{Sensitivity projection.} DM of a given mass can generate a recoil momentum of any value, with the rate distribution shown in \cref{fig:DM-Helium recoil}. The momentum threshold and other detector parameters determine the probability that an event at location $(\theta, \phi, r)$ is detectable. Mathematically, the integral of $d\Gamma/dq^2$ from \cref{fig:DM-Helium recoil} is weighted by the curves in \cref{fig:DepEff} when we calculate the total event rate. Although the spatial efficiency is a continuous function, the total detectable event rate vanishes when the total number of quasi-particles is below the minimum required to trigger the threshold, as mentioned above.
In \cref{fig:Money plot}, we plot the estimated reach in DM-Nucleus recoil cross section $\sigma_{\chi n}$ for several detector configurations, based on a 90\% confidence level (2.3 events) for a given exposure time and target mass \cite{Schutz:2016tid,Feldman:1997qc,Lista:2016chp,Bhat:2010zqs,Sinervo:2002sa,Barlow:2003cx}, assuming zero background. For a one-year exposure with one kilogram of target, the constraint on the DM-Helium scattering rate is:
\begin{equation}
\frac{1}{M_\text{target}}\times\int_{q_{\min}}^{q_{\max}} {\cal E}_S (q) \, \frac{d {\rm R}}{d\textbf{q}^2}2q\,dq \, <0.728\times 10^{-7}\text{s}^{-1}\text{kg}^{-1},
\end{equation}
where $q_{\min}$ is the cutoff set by $N>1$ in \cref{eq:PofO1keV} or $N>m_{\max}$ in \cref{eq:PofO10keV}, and $q_{\max}$ is the upper limit on the recoil momentum from classical kinematics. We consider exposures of one kilogram-day, one kilogram-year, and ten kilogram-years, with the expected number of events scaling accordingly. A combination of smaller inter-sensor distance, larger sensor area, and longer exposure time allows probing smaller cross sections.
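The benchmark on the right-hand side is simply the 90\% C.L. zero-background count (2.3 events) divided by the exposure expressed in seconds and kilograms; a quick check:

```python
SECONDS_PER_YEAR = 3.156e7   # approximate

def rate_limit(n_events=2.3, exposure_kg=1.0, exposure_yr=1.0):
    """Zero-background 90% C.L. limit on the detectable event rate,
    in s^-1 kg^-1, for a given exposure."""
    return n_events / (exposure_kg * exposure_yr * SECONDS_PER_YEAR)
```

One kilogram-year reproduces the quoted $0.728\times 10^{-7}\,\text{s}^{-1}\text{kg}^{-1}$, and a ten kilogram-year exposure tightens it by a factor of ten.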
\section{\bf Conclusions}
\label{sec:conclude}
We have proposed a new method of detecting sub-GeV DM using the spectrum of the quasi-particle excitations in superfluid $^4$He generated as a result of a DM collision with a He nucleus. The key idea is to leverage modern force-sensitive devices, such as NEMS oscillators, to detect the momentum flux of the quasi-particles.
With a kg-year exposure, we have demonstrated that this superfluid experiment can strongly constrain the DM-Nucleus interaction within the sub-GeV mass range. Ours is also the first study to theoretically describe (1) the production of quasi-particles from the resulting helium atom cascade; (2) the thermalization and decoupling of quasi-particles; and (3) the temperature of the thermalized quasi-particle system in a superfluid DM detector. The findings include:
\begin{itemize}
\item
The DM collision initiates a cascade of helium atoms which gradually lose momentum by elastic scattering, and later, once they reach ${\cal O}(10)$ keV, by radiation of quasi-particles.
\item Quasi-particles generated by a recoil momentum $P_\text{ini}\gtrsim {\cal O}(10)$ keV will thermalize as a result of their self-interactions. For $P_\text{ini}\lesssim {\cal O}(10)$ keV, the quasi-particles are free streaming from the beginning.
\item The temperature of the thermalized quasi-particles system generated by sub-GeV DM is ${\cal O}(1)$ Kelvin.
\end{itemize}
These results lead to a prediction of the detectability of events in the vicinity of a NEMS sensor. With a 10 kg-yr exposure and a 1 keV momentum threshold, the ideal detector setup is able to push the DM-nucleon exclusion region down to about $10^{-41}$ to $10^{-43}\,\text{cm}^2$.
\acknowledgments
We are grateful to Rasul Gazizulin, Wei Guo, Tongyan Lin and Bin Xu for useful discussions. This work was supported in part by the United States Department of Energy under Grant No. DE-SC0022148, and the National Science Foundation under Grant No. PHY-2110766.
\vspace{5mm}
\bibliographystyle{JHEP}
\bibliography{ref}
Title: Design of SCALES: A 2-5 Micron Coronagraphic Integral Field Spectrograph for Keck Observatory

Abstract: We present the design of SCALES (Slicer Combined with Array of Lenslets for Exoplanet Spectroscopy), a new 2-5 micron coronagraphic integral field spectrograph under construction for Keck Observatory. SCALES enables low-resolution (R~50) spectroscopy, as well as medium-resolution (R~4,000) spectroscopy, with the goal of discovering and characterizing cold exoplanets that are brightest in the thermal infrared. Additionally, SCALES has a 12x12" field-of-view imager that will be used for general adaptive optics science at Keck. We present SCALES's specifications, its science case, its overall design, and simulations of its expected performance. Additionally, we present progress on procuring, fabricating and testing long lead-time components.

PDF: https://export.arxiv.org/pdf/2208.10721
\keywords{Exoplanets, Coronagraphy, Adaptive Optics, Infrared, Integral Field Spectroscopy}
\section{INTRODUCTION}
\label{sec:intro} %
Thanks to progress in adaptive optics and instrumentation, we have now directly detected a small number of exoplanets, with $\sim20-25$ imaged and a handful characterized with spectroscopy\cite{2022arXiv220505696C}.
In the last decade, coronagraphic integral field spectrographs, including GPI\cite{2014PNAS..11112661M}, SPHERE\cite{2008SPIE.7014E..18B}, and CHARIS\cite{2015SPIE.9605E..1CG}, have taken advantage of spatial and spectroscopic differences between planets and speckles to discover new exoplanets that were previously hidden in the glare of their host stars.
All of these integral field spectrographs are coupled with extreme adaptive optics systems that are designed to provide the high-contrasts ($\sim10^6$) necessary to detect a faint exoplanet next to its bright host star.
Despite all of this progress, even with extreme adaptive optics systems, we can currently only access the brightest planets with the widest angular separations from their stars: young giant planets that are significantly hotter, more massive, and on wider orbits than Jupiter.
Expanding the planet characterization parameter space requires novel instrumentation.
Here we present the design of a new instrument, the Slicer Combined with Array of Lenslets for Exoplanets Spectroscopy (SCALES)\footnote{Previously, the instrument has been called Santa Cruz Array of Lenslets for Exoplanet Spectroscopy \cite{2018SPIE10702E..A5S,2020SPIE11447E..64S}. However, given the broad partnership that has coalesced around this instrument concept, we have changed the name to Slicer Combined with Array of Lenslets for Exoplanet Spectroscopy, which has the same acronym: SCALES. This also follows from Arizona Lenslets for Exoplanet Spectroscopy (ALES)\cite{2015SPIE.9605E..1DS,2018SPIE10702E..3FS}, which prototyped many of the technologies used in SCALES.}, which adopts many of the same technologies as this previous generation of high-contrast instruments (e.g., integral field spectroscopy, coronagraphy, extreme adaptive optics).
However, while GPI, SPHERE and CHARIS all operate in the near-infrared (1-2 $\mu$m), SCALES is designed to operate in the thermal infrared (2-5 $\mu$m), where self-luminous exoplanets are brighter and easier to detect in the glare of their host star. As a result, SCALES will be able to detect and characterize colder and lower-mass planets than were previously accessible to high-contrast exoplanet-imaging instruments.
A summary of SCALES's three main observational modes is listed in Table \ref{SCALES specs}. Spectral resolutions are shown in Figure \ref{fig:resolution}.
\begin{table}[ht]
\caption{SCALES Top-Level Specifications}
\label{SCALES specs}
\begin{center}
\begin{tabular}{|l|ll|ll|l|}
\hline
\multicolumn{1}{|c|}{\textbf{}} & \multicolumn{2}{c|}{\textbf{Low-Resolution IFS}} & \multicolumn{2}{c|}{\textbf{Medium-Resolution IFS}} & \multicolumn{1}{c|}{\textbf{Imager}} \\ \hline
\multirow{6}{*}{\textbf{Wavelength}} & \multicolumn{1}{l|}{2.0-2.4$\mu$m} & R$\sim$150 & \multicolumn{1}{l|}{\multirow{2}{*}{2.0-2.4$\mu$m}} & \multirow{2}{*}{R$\sim$6,000} & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}Up to 16 filters \\ spanning 1-5$\mu$m\end{tabular}} \\ \cline{2-3}
& \multicolumn{1}{l|}{2.0-4.0$\mu$m} & R$\sim$50 & \multicolumn{1}{l|}{} & & \\ \cline{2-5}
& \multicolumn{1}{l|}{2.0-5.0$\mu$m} & R$\sim$35 & \multicolumn{1}{l|}{\multirow{2}{*}{2.9-4.15$\mu$m}} & \multirow{2}{*}{R$\sim$3,000} & \\ \cline{2-3}
& \multicolumn{1}{l|}{2.9-4.15$\mu$m} & R$\sim$80 & \multicolumn{1}{l|}{} & & \\ \cline{2-5}
& \multicolumn{1}{l|}{3.1-3.5$\mu$m} & R$\sim$200 & \multicolumn{1}{l|}{\multirow{2}{*}{4.5-5.2$\mu$m}} & \multirow{2}{*}{R$\sim$7,000} & \\ \cline{2-3}
& \multicolumn{1}{l|}{4.5-5.2$\mu$m} & R$\sim$200 & \multicolumn{1}{l|}{} & & \\ \hline
\textbf{Field of View} & \multicolumn{2}{l|}{2.15$\times$2.15"} & \multicolumn{2}{l|}{0.36$\times$0.34"} & 12.3$\times$12.3" \\ \hline
\textbf{Spatial Sampling} & \multicolumn{2}{l|}{0.02"} & \multicolumn{2}{l|}{0.02"} & 0.006" \\ \hline
\textbf{Coronagraphy} & \multicolumn{2}{l|}{Vector-Vortex} & \multicolumn{2}{l|}{Vector-Vortex} & TBD \\ \hline
\end{tabular}
\end{center}
\end{table}
For more information on SCALES, please see the following proceedings in this conference:
\noindent\textbf{Optical Design}: Paper No. 12184-159 (Renate Kupke et al.)\cite{Reni2022}\\
\textbf{Imaging Channel}: Paper No. 12188-65 (Ravinder Banyal et al.)\cite{Banyal2022}\\
\textbf{Slicer for Med-Res IFS}: Paper No. 12184-154 (Stelter et al.)\cite{DenoSlenslit2022}\\
\textbf{Cold-Stop / Lyot Stop}: Paper No. 12185-332 (Li et al.)\cite{Jialin2022}\\
\textbf{Aperture Masks}: Paper No. 12183-89 (Lach et al.)\cite{Lach2022}\\
\textbf{Keck Instrument Development}: Paper No. 12184-4 (Kassis et al.)\cite{Kassis2022}\\
\newpage
\section{SCIENCE OVERVIEW}
SCALES combines the two most powerful methods for imaging exoplanets: (1) thermal infrared ($2-5 \mu$m) imaging, which detects exoplanets at wavelengths where they are bright (Figure \ref{fig:contrast}), and (2) integral-field spectroscopy, which distinguishes exoplanets from residual starlight based on the shapes of their spectral energy distributions. For a 300 K planet, this combination creates a $\sim4-5$ magnitude boost in sensitivity compared to an H-band IFS, and a $\sim$2.2 magnitude boost in sensitivity compared to an L-band imager\footnote{For a given planet temperature, we estimate colors in common bandpasses (imaging) or optimal bandpasses (IFS), using model atmospheres\cite{2014ApJ...787...78M}, and calculate contrasts with respect to a Rayleigh-Jeans tail. The optimal filter provides an effective contrast boost for IFS data\cite{2018SPIE10702E..A5S}. For IFS data, we also independently adopt a 1 magnitude gain consistent with empirically-demonstrated IFS-based starlight speckle suppression \cite{2014SPIE.9148E..0UM,2015MNRAS.454..129V,SPHERE_manual}.}. By operating at longer wavelengths than other high-contrast integral-field spectrographs, SCALES will extend the wavelength range we use to characterize planets, and also discover new planets (in particular, cold planets) that are not detectable with near-infrared instruments. Despite the competitiveness of the exoplanet imaging field, SCALES’ unique parameter space ensures that it will lead a broad range of new science. End-to-end simulations of an exoplanet imaging observation and a Solar System synoptic program are shown in Figure \ref{fig:sci-sim}.
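For orientation, magnitude boosts convert to flux (contrast) factors through the standard relation $\Delta m = 2.5\log_{10}(F_1/F_2)$, so a 5-magnitude gain corresponds to a factor of 100 in flux. A minimal sketch of this bookkeeping:

```python
import math

def mag_gain(flux_ratio):
    """Flux (contrast) ratio -> gain in astronomical magnitudes."""
    return 2.5 * math.log10(flux_ratio)

def flux_ratio(gain_mag):
    """Inverse: magnitudes of sensitivity gain -> flux factor."""
    return 10.0 ** (gain_mag / 2.5)
```

By this conversion, the quoted $\sim$2.2 magnitude boost over an L-band imager corresponds to a factor of $\sim$7.6 in flux sensitivity.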
\subsection{SCALES Exoplanet Discovery}
SCALES will detect a wide range of exoplanets, including many beyond the limits of current instruments. Here, we discuss SCALES’ anticipated impact on exoplanet searches, focusing on detections not previously possible in large numbers (planets with known masses, cold-start planets, accreting protoplanets). These predictions were made using an end-to-end simulator accounting for SCALES’ optical design, as well as instrumental and atmospheric transmission, emission, and noise sources\cite{2020SPIE11447E..4ZB}.
\subsubsection{Old, Cold Gaia Exoplanets}
Roughly when SCALES is completed, Gaia’s extended mission will be releasing a catalog of $\sim$70,000 exoplanets \cite{2014ApJ...797...14P}. Some of these will orbit nearby stars widely enough to be directly imaged. Using the SCALES predicted contrast curve and following \cite{2019AJ....158..140B}, we predict 23 planets ($<$13 M$_{j}$) and 110 brown dwarfs ($>$13 M$_{j}$) will be detectable by SCALES at this time. Adding astrometry from \textit{Roman} in 2030, this increases to 30 planets and 176 brown dwarfs (Figure \ref{fig:Gaia}). This is particularly exciting, since in general direct imaging cannot measure planet mass, and atmospheric properties are degenerate without masses. Furthermore, many of the new Gaia planets amenable to direct imaging will be $\sim$300 K (cold enough for water clouds and different atmospheric chemistry – e.g. NH$_{3}$, more CH$_{4}$ – to be visible in M-band spectra; \cite{2016ApJ...826L..17S}). These cannot be imaged with today’s near-infrared high-contrast IFSs, but can be imaged at longer wavelengths (Figure \ref{fig:contrast}). These yields assume a Keck AO upgrade (a high-order deformable mirror) expected before SCALES commissioning. There is a risk that this upgrade may be delayed; however, even with current Keck AO, SCALES would still detect 6 Gaia planets in 2025 and 19 in 2030, nearly all of which would be colder than the current coldest directly imaged exoplanet ($\sim$700 K; \cite{2015Sci...350...64M}). Despite smaller yields in this worst-case scenario, each detection would be a high-impact result. More near-future AO upgrades are under consideration, including predictive control (a software upgrade progressing with promising results; \cite{2019SPIE11117E..0WJ,2022JATIS...8b9006V}), and an adaptive secondary mirror\cite{Hinz2022}.
\subsubsection{Directly Detecting Accreting Protoplanets in Gapped Protoplanetary Disks}
By observing young planets in their nascent disks, it is possible to observe mass accretion directly. To date, only a couple of accreting exoplanets and candidates have been discovered \cite{2012ApJ...745....5K,2018A&A...617A..44K}. Part of the difficulty of imaging protoplanets is that they are usually embedded in protoplanetary disks, whose scattered light can resemble exoplanets \cite{2016SPIE.9907E..0DS}. Integral-field spectroscopy, specifically at the wavelengths where these planets peak in brightness (4 microns\cite{2015Natur.527..342S}), will distinguish between circumstellar disk scattered light and protoplanet emission by constraining spectral slopes and excesses in IR Hydrogen lines such as Br-$\gamma$ and Br-$\alpha$ (Figure \ref{fig:PDS 70}). Recent Keck AO upgrades are also aimed at imaging planets around young stars\cite{2018SPIE10703E..1ZB}.
\subsection{SCALES Exoplanet Characterization}
Studying exoplanets beyond their locations, masses, and radii requires photometry and spectroscopy of the planets themselves. Spectroscopy can reveal a planet’s thermal structure, chemical composition, cloud properties, spatial inhomogeneity, and more\cite{2011ApJ...733...65B,2012ApJ...754..135M}. The observational signatures of these properties are often degenerate and require broad wavelength coverage to disentangle \cite{2014ApJ...792...17S,2016ApJ...817..166S}. Clouds, which are ubiquitous on exoplanets \cite{2013cctp.book..367M}, have optical properties that vary slowly with wavelength, and can be confused with thermal structure effects over a narrow bandpass \cite{2016ApJ...820...78L}. Molecules can probe chemical reactions, vertical mixing, and atomic abundances such as C/O ratios \cite{2013Sci...339.1398K}, but only if a relatively complete set are measured over the broad wavelength range where they are individually detectable \cite{2015ApJ...804...61B}.
SCALES will enable these studies by complementing existing near-infrared IFSs. In Figure \ref{fig:characterization}, we summarize its large range of exoplanet characterization opportunities. SCALES will break degeneracies that plague atmospheric characterization with imaging alone, resulting in detailed measurements of molecular abundances (including previously-unobserved species), metallicities, and surface gravities, all for a new planet population. High-contrast medium-resolution spectroscopy is a new capability that SCALES will offer for the first time at any wavelength, allowing line-by-line identification of molecules, while simultaneously providing broad bandpass continuum measurements, all within the environment of a coronagraphic IFS.
\subsection{Additional Science Opportunities}
Thermal infrared imaging and spectroscopy are used for a wide range of Solar System, galactic, and extragalactic observations. Keck’s current thermal infrared imager, NIRC2, enables $\sim$70 papers per year. SCALES will give users additional flexibility by allowing them to use a NIRC2-like imager alongside an integral field spectrograph. A particular goal for SCALES is to enable a coordinated program of synoptic observations of Solar System objects, taking advantage of morning twilight, when regular observations are complete but the sky brightness has not yet changed in the thermal infrared.
\newpage
\section{INTEGRAL FIELD SPECTROGRAPH ARCHITECTURE}
SCALES features a lenslet-based integral field spectrograph as well as a hybrid lenslet/slicer integral field spectrograph (dubbed ``slenslit"), which share a collimator, camera and detector. The lenslet-based integral field spectrograph produces low spectral resolutions, while the slenslit produces medium spectral resolutions over a smaller field-of-view.
The SCALES low-resolution IFS uses the lenslet-array architecture\cite{1995A&AS..113..347B}. In this concept, each lenslet serves as a spatial pixel (or ``spaxel''), which samples the field and is dispersed into a spectrum by a downstream spectrograph. The spectra are interleaved by rotating the disperser with respect to the lenslet array. For exoplanet imaging, lenslet-based IFSs are preferable to slicer IFSs because the lenslet array samples the field before any optical aberrations are imparted by downstream spectroscopic optics.
Image slicers are an alternative IFS architecture that produce higher-resolution spectra than lenslet arrays. However, their optics impart aberrations that are detrimental to high-contrast imaging. For SCALES, we are proposing a new approach, which we call a slenslit, that combines a lenslet array with a slicer to achieve the best of both worlds: the lenslet array samples the field before the slicer and spectrograph impart optical aberrations, and the slicer re-formats the lenslet spots into a pseudo-slit that can be dispersed into longer spectra.
An illustration of the concept is shown in Figure~\ref{fig-sp:IFS-types}.
Instrument stability is important for exoplanet imaging, so the SCALES fore-optics do not employ swappable magnifiers or lenslet arrays.
Because of this, and the shared requirements for wavelength, sampling, and crosstalk, the medium-resolution IFS uses the same lenslet pitch as the low-resolution IFS, and the spectra are separated by the same number of pixels at the detector.
Spectra are dispersed along the length of the detector to maximize spectral resolution and to allow the use of 1st-order gratings, which avoid spectroscopic order overlap.
For an H2RG, we can fit 340 spectra across the detector when each spectrum is separated from its neighbor by 6 pixels (or 108$\mu$m).
This corresponds to an 18$\times$18 lenslet field-of-view (in the final optical design, we are only able to fit 17$\times$18 lenslets).
The lenslet pitch is 341$\mu$m, which is close to 3 times 108$\mu$m (the separation of the low-resolution spectra).
Therefore, a slicer that interleaves by 3 will achieve the same 6-pixel spectrum separation for the medium-resolution mode as the low-resolution mode.
An illustration of the interleaving is shown in Figure~\ref{fig-sp:IFS-types}.
A summary of the IFS properties is given in Table~\ref{tab-sp:ifs-summary}.
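The spectrum-packing numbers above follow from the H2RG format (2048$\times$2048 pixels; the 18 $\mu$m pitch is implied by 6 pixels $=$ 108 $\mu$m). A quick arithmetic check, treating the 340-spectrum count as approximate:

```python
H2RG_PIXELS = 2048
PIXEL_PITCH_UM = 18.0                          # implied by 6 px = 108 um

SEP_PIXELS = 6                                 # spacing between adjacent spectra
SEP_UM = SEP_PIXELS * PIXEL_PITCH_UM           # physical spectrum separation
N_SPECTRA = H2RG_PIXELS // SEP_PIXELS          # ~340 spectra across the detector

LENSLET_PITCH_UM = 341.0
INTERLEAVE = round(LENSLET_PITCH_UM / SEP_UM)  # slicer interleave factor
```

The lenslet pitch of 341 $\mu$m is close to $3\times108$ $\mu$m, which is why a slicer that interleaves by 3 recovers the same 6-pixel spectrum separation in the medium-resolution mode.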
\begin{table}[]
\begin{center}
\caption{Summary of IFS Layout}
\label{tab-sp:ifs-summary}
\begin{tabular}{|l|ll|ll|}
\hline
\multicolumn{1}{|c|}{\textbf{}} & \multicolumn{2}{c|}{\textbf{Low-Resolution IFS}} & \multicolumn{2}{c|}{\textbf{Medium-Resolution IFS}} \\ \hline
\textbf{IFS Type} & \multicolumn{2}{l|}{Lenslet} & \multicolumn{2}{l|}{Slenslit} \\ \hline
\textbf{Number of Spaxels} & \multicolumn{2}{l|}{108x108} & \multicolumn{2}{l|}{17x18} \\ \hline
\textbf{Plate Scale} & \multicolumn{2}{l|}{0.02"} & \multicolumn{2}{l|}{0.02"} \\ \hline
\textbf{Field-of-View} & \multicolumn{2}{l|}{2.14"$\times$2.14"} & \multicolumn{2}{l|}{0.34"$\times$0.36"} \\ \hline
\textbf{Length of Spectra} & \multicolumn{2}{l|}{54 pixels} & \multicolumn{2}{l|}{$\sim$1900 pixels} \\ \hline
\textbf{Separation Between Spectra} & \multicolumn{2}{l|}{6 pixels} & \multicolumn{2}{l|}{5-7 pixels} \\ \hline
\end{tabular}
\end{center}
\end{table}
\newpage
\section{OPTICAL DESIGN OVERVIEW}
The SCALES instrument is optimized for exoplanet imaging and must do the following:
\begin{itemize}
\item Use Keck’s best adaptive optics configuration for high-contrast imaging
\item Preserve the excellent image quality produced by Keck adaptive optics
\item Use coronagraphy to suppress starlight
\item Provide maximum sensitivity by following best practices for IR instrumentation
\item Operate in the desired wavelength range and with the appropriate spectral resolutions for exoplanet characterization
\item Provide a versatile system for 1-5$\mu$m adaptive optics science, without compromising the exoplanet imaging goals
\end{itemize}
These driving principles have led us to a top-level optical design that includes:
\begin{itemize}
\item \textbf{Fore-optics}
Two relays which, in order, make (1) a pupil plane used for a rotating cold-stop, (2) a focal plane used for insertable coronagraphic masks, (3) a pupil plane used for insertable Lyot stops, and (4) a focal plane sampled by a lenslet array for integral field spectroscopy. The optics are all-reflective, except for the coronagraph and lenslet array. Immediately after the first pupil plane are a pair of filter wheels that are primarily used by the imager.
\item \textbf{Imager}
An insertable mirror at the Lyot stop plane diverts light away from the lenslet array and towards the imaging channel. The imaging channel consists of an off-axis hyperboloid and a flat mirror, which image onto an H2RG detector.
\item \textbf{Spectrograph}
A collimator and camera that reimage the lenslet array spots onto an H2RG detector.
Between the collimator and camera are selectable reflective dispersers (prisms for low resolution and gratings for medium resolution). Between the camera and detector are selectable filters matched to each disperser. The optics are all reflective, except for the prisms and filters.
\item \textbf{Slenslit slicer}
A set of optics can divert and re-insert light from the spectrograph into an image slicer (a `scenic bypass'). While the regular spectrograph is a typical lenslet-based low-resolution spectrograph, the combination of a lenslet + slicer (which we call a ``slenslit'') is a new type of integral field spectrograph that uses a lenslet array to sample the field (setting image quality in advance of the spectrograph), followed by a slicer that re-positions the 2D array of lenslet spots into a 1D pseudo-slit that can be dispersed over many more pixels than a standard lenslet-based IFS. The optics are all reflective.
\end{itemize}
A diagram showing the full optical layout, and denoting the above four subsystems, is shown in Figure~\ref{figures:zemax-optical-design}. The SCALES optical design is described elsewhere in these proceedings \cite{Reni2022}, but notably, the design provides diffraction-limited performance at the lenslet array focal plane (21 nm maximum wavefront error), the imaging detector (56 nm maximum wavefront error) and the IFS detector (83 nm maximum wavefront error). The throughput of the imager ranges from $\sim$55\%-70\%. The throughput of the low-res IFS ranges from $\sim$50\%-60\%. The throughput of the med-res IFS ranges from $\sim$35\%-45\%.
\section{INSTRUMENT LAYOUT}
All of the SCALES optics are mounted on a single optics bench in a cryostat that is installed behind the Keck adaptive optics system. In Figure~\ref{figures:swx-opto-mechanical-design}, we present a CAD rendering of the optics bench, and describe its different elements below. In Figure \ref{figures:cryostat}, we present a CAD rendering of the full instrument.
Light path (mechanisms are bolded):
\begin{center}
\textbf{Fore-Optics}
\end{center}
\begin{itemize}
\item F1) Cryostat entrance window: AR-coated CaF$_2$
\item F2) Fold-flat 1: This optic and all other reflective optics are diamond-turned RSA aluminum on an RSA aluminum mount
\item F3) Off-axis-parabola 1: Collimates and makes a pupil plane
\item F4) \textbf{Cold-stop}: Rotating optic at a pupil plane that baffles light coming from outside of the Keck pupil. The optic rotates to keep a fixed position angle, although it is generally left fixed for exoplanet imaging (Marois et al. 2007)
\item F5) \textbf{Filter Wheels}: 2 wheels containing filters that are used with the imaging channel.
\item F6) Off-axis-parabola 2: Focuses light and makes a focal plane
\item F7) Fold 2
\item F8) \textbf{Coronagraph Stage}: Linear stage at a focal plane holding selectable coronagraphs and an open position.
\item F9) Off-axis-ellipse: Creates a pupil plane and a focal plane at the desired magnification for the lenslet array
\item F10) \textbf{Lyot Stop Wheel}: Rotary mechanism at a pupil plane that holds selectable Lyot stops, neutral density filters, an open position, and a mirror to divert light to the imaging channel.
\item F11) \textbf{Flat 3/tip-tilt Stage}: Precision steering mirror for placing and holding exoplanets and other objects at a desired location on the lenslet array.
\item F12) Folds 4/5
\item F13) Lenslet Array: Silicon lenslet array mated to a pinhole grid for diffraction suppression.
The lenslet array has two regions: one for the low-resolution IFS and one for the medium-resolution IFS.
On the input side, the lenslets are at a focal plane.
The pupil images made by the lenslet array become the focal plane for the spectrograph.
\end{itemize}
\begin{center}
\vspace{2cm}
\textbf{Imager}
\end{center} \begin{itemize}
\item I1) Off-axis-hyperboloid: Creates a telecentric image at the detector focal plane at the appropriate magnification.
\item I2) Imager Fold
\item I3) Detector: $0.6-5.3 \mu$m-sensitive H2RG detector
\end{itemize}
\begin{center}
\textbf{Spectrograph}
\end{center}
\begin{itemize}
\item S1) \textbf{Mode Selector:} Linear stage that either lets the low-resolution lenslet light through to the spectrograph, or diverts and re-inserts the medium-resolution light to and from the slicer
\item S2) Collimator 1/2: Two-mirror system to collimate the lenslet array light and make a pupil plane
\item S3) \textbf{Disperser Carousel}: Rotary mechanism at a pupil plane with reflective dispersers on its outer edge.
The dispersers are LiF prisms for the low-resolution mode and gratings for the medium-resolution mode.
\item S4) Camera 1/2: Two-mirror system to focus the collimated light to a focal plane
\item S5) \textbf{IFS Filter Wheels:}
Two rotary mechanisms that contain bandpass filters to keep the IFS spectra from overlapping, as well as blank and open positions
\item S6) Detector: $0.6-5.3\,\mu$m-sensitive H2RG detector
\end{itemize}
\begin{center}
\textbf{Slicer Optics}
\end{center}
\begin{itemize}
\item SL1) Input-relay: 3-element relay to make a focal plane at the image slicer
\item SL2) Image slicer: divides the lenslet spots into rows
\item SL3) Pupil mirrors: re-images the slices onto the field mirrors
\item SL4) Field mirrors: steers the lenslet spots into a staggered pseudoslit
\item SL5) Output-relay: 3-element relay to insert the slicer pseudoslit back into the spectrograph
\end{itemize}
\section{PROJECT STATUS}
The SCALES project passed its preliminary design review for the baseline instrument in November 2021. The imaging channel, which was considered an upgrade, held a separate preliminary design review in June 2022. Subsequent final design reviews are held on a subsystem-by-subsystem basis. Some components have been built or acquired early as risk reduction (described below). The bulk of the purchasing is expected to begin in Fall 2022, with Integration and Testing beginning in late 2023 and shipping in 2025.
\subsection{INTEGRATION AND TESTING FACILITIES}
SCALES will be integrated in a clean room at the UC Observatories Instrument shop in Santa Cruz. The clean room will hold the SCALES cryostat when it arrives. In the meantime, it is being set up to hold a liquid nitrogen cryostat for testing cryogenic mechanisms (Section \ref{mechanism testing}), a liquid nitrogen cryostat for testing optics (Section \ref{optic testing}), and a closed-cycle cryostat for testing detectors (Section \ref{detector testing}). Additional cryogenic testing facilities are available at UC Irvine, UCLA, and the Indian Institute of Astrophysics.
\subsection{EARLY DEVELOPMENT OF A CRYOGENIC MECHANISM}\label{mechanism testing}
SCALES has 10 cryogenic mechanisms, each of which involves a broad team of engineers (optical, mechanical, electrical, software) as it progresses through design, fabrication, testing, and operations. For SCALES, we decided to build one full cryogenic mechanism, the coronagraph slide, early in the project, which allows our full team to iterate on the design. Initial testing of the coronagraph slide was discussed in a previous SPIE Proceedings \cite{2021SPIE11823E..1WG}. Since then, we have added a heat sink to the motor, and are testing new Hall effect sensors to replace mechanical switches. A photograph of the coronagraph slide is shown in Figure \ref{fig:coronagraph slide}.
\subsection{EARLY TESTING OF A DIAMOND-TURNED OPTIC}\label{optic testing}
With the exception of the entrance window, filters, dispersers, and the lenslet array, the SCALES optics are all diamond-turned aluminum. Requirements for wavefront error and surface roughness are quite stringent, although multiple companies have the capability to meet them.
The SCALES team is developing test setups for the small diamond-turned mirrors. In particular, we are measuring the wavefront error of each optic with a Zygo interferometer at both room temperature and operating temperature. The test setup for measuring the optics at operating temperature is shown in Figure \ref{fig:Son-X}.
\subsection{EARLY DEVELOPMENT OF DETECTOR SUBSYSTEMS}\label{detector testing}
SCALES requires two infrared detectors: one for the integral field spectrograph and one for the imager. The state of the art at SCALES' wavelengths are Teledyne H2RG detectors. In July 2021, we received two detectors from the \textit{James Webb Space Telescope} project--SCA 17168 and SCA 17195, which were originally developed and characterized by the Goddard NIRSpec team\cite{2014PASP..126..739R}.
Teledyne’s ground-based H2RGs feature 32 readout channels and the flexibility to run in slow mode (typically 100 kpix/sec/output) or fast mode (typically 2 Mpix/sec/output).
This allows full-frame readouts as fast as 0.065 seconds.
For JWST NIRSpec, the cable that connects the detector wire-bonds to a pinout is hardwired for 4 readout channels and slow mode.
The minimum readout for JWST NIRSpec is 10.5 seconds.
For ground-based infrared astronomy, the night sky is orders-of-magnitude brighter than it is in space and so faster readouts are generally required to avoid saturating on the sky background.
The SCALES team is contracting with Astroblank Scientific to speed up the detector readouts by using a hybrid buffered mode, where Teledyne's SIDECAR electronics use the fast-mode A2Ds to record slow-mode frames from the detector. We expect to be able to read full frames in 1 second or less, which is sufficient to not saturate on sky background for the imager and integral field spectrograph. Further development of subframes, to allow imaging of bright sources, is expected at a later date.
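As a cross-check of the readout figures quoted above, the full-frame times follow from simple arithmetic. The sketch below assumes the standard 2048$\times$2048 H2RG pixel format, which the text does not state explicitly:

```python
# Sketch (not from the paper): full-frame H2RG readout times implied by the
# quoted channel counts and per-output pixel rates, for an assumed
# 2048 x 2048 pixel array.
N_PIX = 2048 * 2048  # pixels per full frame

def full_frame_time(n_outputs, pix_rate):
    """Seconds to read one full frame with n_outputs parallel channels,
    each running at pix_rate pixels per second."""
    return N_PIX / (n_outputs * pix_rate)

fast = full_frame_time(32, 2e6)  # ground-based fast mode -> ~0.065 s
slow = full_frame_time(4, 1e5)   # JWST NIRSpec wiring    -> ~10.5 s
```

Both numbers reproduce the readout times stated in the text, which is why faster readout electronics are needed against the bright ground-based sky background.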
\acknowledgments %
We are grateful to the Heising-Simons Foundation, the Alfred P. Sloan Foundation, and the Mt. Cuba Astronomical Foundation for their generous support of our efforts. This project also benefited from work conducted under NSF Grant 1608834 and the NSF Graduate Research Fellowship Program. In addition, we thank the Robinson family and other private supporters, without whom this work would not be possible. This work benefited from the 2022 Exoplanet Summer Program in the Other Worlds Laboratory (OWL) at the University of California, Santa Cruz, a program funded by the Heising-Simons Foundation.
\bibliography{main} %
\bibliographystyle{spiebib} %
Title:
IceCube search for neutrinos coincident with gravitational wave events from LIGO/Virgo run O3

Abstract: Using data from the IceCube Neutrino Observatory, we searched for high-energy
neutrino emission from the gravitational-wave events detected by the advanced LIGO
and Virgo detectors during their third observing run. We performed a low-latency
follow-up on the public candidate events released during the detectors' third
observing run and an archival search on the 80 confident events reported in the
GWTC-2.1 and GWTC-3 catalogs. An extended search was also conducted for
neutrino emission on longer timescales from neutron star containing mergers.
Follow-up searches on the candidate optical counterpart of GW190521 were also
conducted. We used two methods: an unbinned maximum likelihood analysis and a
Bayesian analysis using astrophysical priors, both of which were previously
used to search for high-energy neutrino emission from gravitational-wave
events. No significant neutrino emission was observed by any analysis, and upper
limits were placed on the time-integrated neutrino flux as well as the total
isotropic equivalent energy emitted in high-energy neutrinos.

https://export.arxiv.org/pdf/2208.09532
\title{IceCube search for neutrinos coincident with gravitational wave events from LIGO/Virgo run O3}
\correspondingauthor{The IceCube Collaboration}
\email{analysis@icecube.wisc.edu}
\input{authors.tex}
\date{\today}
\collaboration{1000}{The IceCube Collaboration}
\noaffiliation
\keywords{high-energy astrophysics, neutrino astronomy, multi-messenger astrophysics}
\section{Introduction} \label{sec:intro}
Since the initial discoveries of astrophysical high-energy neutrinos in 2013 \citep{IceCube:2013low, PhysRevLett.113.101101} and gravitational waves (GWs) in 2015 \citep{LIGOScientific:2016aoc}, we have entered an exciting era of multi-messenger astronomy. We now have over 10~years of IceCube neutrino data from the full detector configuration and 90 GW events reported with high astrophysical probability by the LIGO Scientific, Virgo, and KAGRA Collaborations (LVK). This abundance of multi-messenger data allows for statistically robust searches for common sources of GWs and high-energy neutrinos. Searches dating back to before the individual confident discoveries of astrophysical GWs and high-energy neutrinos did not find a significant joint emission \citep{2008CQGra..25k4039A, 2009IJMPD..18.1655V, 2011PhRvL.107y1101B, 2012PhRvD..85j3004B, 2013JCAP...06..008A, 2014PhRvD..90j2002A, 2016PhRvD..93l2010A}. Following the first confident GW observation, several attempts by IceCube and ANTARES have not found significant emission of coincident high-energy neutrinos \citep{2017ApJ...850L..35A, 2017PhRvD..96b2005A, Albert_2020, Aartsen:2020mla, Veske:2021Q6}. Searches for neutrinos in the low-energy regime have also been conducted by IceCube \citep{IceCube:2021ddq}, Super-Kamiokande \citep{2021ApJ...918...78A}, KamLAND \citep{2021ApJ...909..116A}, and Borexino \citep{2017ApJ...850...21A}.
The discovery of such a joint emission would provide important information about the physics of the source and improve our understanding of the sources of the individual messengers. Currently, joint emission is expected to come from jets formed during the merger, which accelerate charged particles; these charged particles would produce mesons, and high-energy neutrino emission is expected from their decays and the decays of their secondaries \citep{Fang:2017tla,PhysRevD.98.043020}. Moreover, including neutrino information in the gravitational-wave observation would help constrain the location of the source more precisely on the sky, enabling further exploration with telescopes that have narrow fields of view. These motivations keep the search efforts vibrant despite the estimated low chance of joint detections with the current detectors \citep{2011PhRvL.107y1101B, 2012PhRvD..85j3004B, Fang:2017tla}.
In this article, we present our low-latency follow-up searches and archival searches for high-energy neutrino emission from the GW events detected during the complete third observing run of the advanced LIGO and Virgo detectors (O3). In Section \ref{sec:detectors}, we describe the IceCube detector, the neutrino data used for this analysis, and the GW detector runs followed up in this paper. In Section \ref{sec:Methods}, we provide relevant details about the searches done by the two main analysis methods: unbinned maximum likelihood (UML) and the Low Latency Algorithm for Multi-messenger Astrophysics (LLAMA). More detailed discussions of the methods can be found in our previous publication \citep{Aartsen:2020mla}. Section \ref{sec:pipelines} describes the low-latency operation of the pipelines for following up the candidate GW event alerts reported during the O3 run at the Gravitational-wave Candidate Event Database (GraceDB)\footnote{\url{https://gracedb.ligo.org/}}, and summarizes the results. In Section \ref{sec:results}, we present the results of our archival searches using both the LLAMA and UML methods. These archival searches were performed on the 44 confident GW events from GWTC-2.1 \citep{LIGOScientific:2021usb}\footnote{Most of the events in GWTC-2.1 were already reported in the catalog GWTC-2 \citep{LIGOScientific:2020ibl}. These were previously analyzed by both the LLAMA and UML searches \citep{Veske:2021Q6}. The LLAMA pipeline reanalyzed them with a refined background distribution, while the UML results remained the same.} and the 36 GW events from GWTC-3 \citep{LIGOScientific:2021djp}. These analyses include a search within a time window of $\pm$500~s around the GW events, a dedicated follow-up on the candidate optical counterpart of GW190521 \citep{PhysRevLett.125.101102,Graham:2020gwr}, and an extended two-week search on the neutron star containing events by the UML pipeline.
\section{The Neutrino and Gravitational Wave Observations}
\label{sec:detectors}
\subsection{The IceCube Detector}
\label{sec:icecube}
The IceCube Neutrino Observatory is a cubic-kilometer detector array located at the geographic South Pole \citep{IceCube:2016zyt}.
The detector consists of 86 strings drilled deep into the ice. Each string holds 60 Digital Optical Modules (DOMs) between depths of 1.5~km and 2.5~km in the Antarctic ice. The main component of each DOM is a photomultiplier tube used to detect the Cherenkov light emitted by charged particles produced when neutrinos interact in the ice.
There are two main event topologies seen in IceCube data: tracks and cascades. Tracks are produced when muon neutrinos undergo charged-current interactions and produce muons that travel along straight lines, depositing Cherenkov light along their paths. Cascades, which mainly consist of electromagnetic showers, are generated via charged-current interactions of electron neutrinos and neutral-current interactions of neutrinos of all flavors within the ice. Tracks are excellent for pointing towards astrophysical sources since they have an angular resolution of $\lesssim 1^\circ$, much better than the pointing resolution of cascades ($\gtrsim 10^\circ$) \citep{2014PhRvD..89j2004A, 2014JInst...9P3009A}.
The analyses presented here use neutrino data from a low-latency data stream known as the Gamma-ray Follow-Up (GFU) Online event stream. The GFU Online event selection is able to rapidly reconstruct neutrino events observed in the IceCube detector and the data is made available within roughly 30~s, allowing for rapid neutrino follow-ups. The GFU dataset uses track events detected with IceCube, since their pointing resolutions are well suited for follow-up analyses. The details of the selection can be found in \cite{Aartsen:2016qbu} and the online version of the dataset, which we use in this article, is described further in \cite{Kintscher:2016uqh}.
The dataset consists of through-going muon tracks originating primarily from atmospheric cosmic-ray backgrounds. In the southern sky, the sample is dominated by atmospheric muons, while in the northern sky it is dominated by atmospheric neutrinos; atmospheric muons do not contribute to the rate in the northern sky because they are absorbed by the Earth. The all-sky neutrino event rate ranges from 6 to 7~mHz, depending on the seasonal variation of atmospheric neutrinos \citep{Heix:2019jib}. Overall, the rate of astrophysical neutrinos is roughly three orders of magnitude lower than that of the atmospheric backgrounds \citep{IceCube:2016xci}.
\subsection{The third observing run of ground-based gravitational-wave detectors}
\label{sec:lvc}
On April 1$^{\rm st}$ 2019 at 15:00 UTC, the LIGO and Virgo detector network \citep{TheLIGOScientific:2014jea,TheVirgo:2014hva} started its third observing run with an increased sensitivity, enabling the detection of gravitational waves from compact binary coalescences at a rate greater than one merger per week \citep{LIGOScientific:2021usb, LIGOScientific:2021djp}. During the period of October 1$^{\rm st}$ 15:00~UTC to November 1$^{\rm st}$ 15:00~UTC, the detectors were not collecting data, splitting the observing run into two segments: O3a, followed by O3b, which ended on March 27$^{\rm th}$ 2020 at 17:00~UTC. The near-realtime analysis of LIGO-Virgo data by the LIGO Scientific and Virgo Collaborations (LVC) allows for the broadcasting of open public alerts, while an in-depth offline analysis provides updates to the catalog of GW events.
In this paper, since a combination of the neutrino events from IceCube and the GW events from O3 is used, the analyses depend on the localizations of both the neutrino and the GW events. Figure \ref{fig:localizations} compares the sky localizations of the candidate GW events published in the GW catalogs (O1 to O3) with those of the neutrino events detected by IceCube within the GFU dataset. The 90\% localizations of both are used for the comparison. We are mainly limited by the localization uncertainties of the GW skymaps; these uncertainties are expected to shrink in future runs of the ground-based gravitational-wave detectors.
\section{Methods} \label{sec:Methods}
There are two main searches that we employed: the UML and LLAMA searches. Both the UML and LLAMA analyses performed short time scale follow-ups for each reported GW event. The analyses searched for neutrino emission within a $\pm$500~s time window centered around the GW merger time. This time window was used both in the realtime and archival searches. The time window is a conservative empirical estimate of the delay between the GW and neutrino emission for a model based on gamma-ray bursts \citep{Baret:2011tk}.
Additionally, the UML analysis performed a long time scale analysis on all binary neutron star (BNS) and neutron star-black hole (NSBH) candidates. This search, called the 2-week follow-up, is motivated by models which predict neutrino emission on longer time scales from binaries with at least one neutron star \citep{Fang:2017tla,Decoene:2019eux}. We searched within a time period of [-0.1,+14]~days around the GW merger times.
Both analyses also performed a neutrino follow-up search on the candidate optical counterpart to the binary black hole (BBH) merger GW190521 observed by the Zwicky Transient Facility (ZTF) \citep{Graham:2020gwr}. ZTF observed a flaring active galactic nucleus (AGN), J124942.3+344929, which coincided with the 90\% credible region of the GW event's sky localization. This flare can be explained by the accretion of gas in the AGN disk onto the kicked final black hole of the merger \citep{McKernan_2019}. The motivation for the neutrino follow-up was the expected formation of a particle-accelerating jet due to the chaotic accretion dynamics around the kicked black hole travelling through the AGN disk.
\subsection{Unbinned Maximum Likelihood} \label{sec:UML}
The unbinned maximum likelihood (UML) method tests for a point-like neutrino source coincident with the GW localization region. The likelihood takes into account the direction, angular error, and reconstructed energy of each neutrino on the sky.
The sky is divided into equal area bins using the Healpix pixelization scheme \citep{2005ApJ...622..759G}. We then perform a likelihood ratio test where the test statistic (TS) is the log-likelihood ratio. The TS is computed at each pixel in the sky by maximizing the log-likelihood ratio and weighting the result by the GW localization probability in the given pixel. The pixel with the largest TS value is taken to be the best-fit location for a joint GW-neutrino source and the associated TS is considered the final observed TS for the analysis. For a full detailed description of the likelihood and TS used here, see \cite{Hussain:2019xzb}.
To compute the significance of each GW follow-up, we perform 30,000 pseudo-experiments with scrambled neutrino data to generate a background TS distribution. The scrambling is carried out by randomly assigning a time to each neutrino, which is equivalent to a scramble in right ascension, while maintaining the declination dependence of the data. The final observed TS for a given GW event is then compared to its background distribution to compute a $p$-value.
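The pseudo-experiment procedure above can be sketched in a few lines; the TS values below are toy numbers standing in for real analysis output:

```python
import numpy as np

def p_value(observed_ts, background_ts):
    """Fraction of background pseudo-experiments whose TS is at least as
    large as the observed one (one-sided p-value)."""
    background_ts = np.asarray(background_ts)
    return float((background_ts >= observed_ts).mean())

# Toy background: 30,000 scrambled pseudo-experiments with a chi2-like TS
# (df=1 chosen only for illustration, not the true TS distribution).
rng = np.random.default_rng(1)
bg = rng.chisquare(df=1, size=30_000)
p = p_value(4.0, bg)  # TS = 4 against a df=1 chi2 background -> p ~ 0.05
```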
In the case where the observed TS is consistent with background, we place 90\% upper limits (ULs) on the time-integrated neutrino flux, $E^2F$, assuming an $E^{-2}$ spectrum, where $F = dN/(dE\,dA)$. The limits are computed by injecting simulated signal neutrinos into the sky according to the GW localization probability. We then follow the all-sky scan procedure described above to compute a TS for a given value of injected neutrino flux. We run 500 trials for each injected neutrino flux, with a random injection location chosen for each trial. The 90\% UL on the neutrino flux is then defined as the flux for which 90\% of trials produce a TS value greater than the observed TS value for the GW event.
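A minimal sketch of the upper-limit scan, with a toy Gaussian TS model standing in for the real injection-and-scan machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

def flux_upper_limit(observed_ts, ts_one_trial, trial_fluxes,
                     frac=0.9, n_trials=500):
    """Return the smallest scanned flux for which at least `frac` of the
    injection trials yield a TS greater than the observed TS."""
    for flux in trial_fluxes:
        n_exceed = sum(ts_one_trial(flux) > observed_ts
                       for _ in range(n_trials))
        if n_exceed / n_trials >= frac:
            return flux
    return None  # scan did not reach the 90% level

# Toy TS model (our assumption, for illustration): the injected flux simply
# shifts a unit-variance Gaussian TS.
toy_ts = lambda flux: flux + rng.normal()
ul = flux_upper_limit(observed_ts=0.0, ts_one_trial=toy_ts,
                      trial_fluxes=[0.5, 2.0, 3.0])
```

In the real analysis each trial is a full injection of simulated neutrinos followed by the all-sky scan, which this toy replaces with a single random draw.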
Upper limits on the isotropic equivalent energy ($E_{\rm iso}$ ULs) are computed in a similar manner. Once again we assume an $E^{-2}$ spectrum and convert the injected $E_{\rm iso}$ into a flux at Earth by sampling a location on the sky as well as a distance to the GW source according to the 3D localization probability provided by LIGO/Virgo. The flux is then converted into an expected number of events observed at IceCube using the dataset's declination dependence and effective area.
Note that all reported ULs are only valid within a certain range of energies, as the energy range of our data sample depends strongly on declination. The central 68\% energy range is roughly $5 \times 10^{5}$--$10^{7}$~GeV in the southern hemisphere and roughly $5 \times 10^{3}$--$10^{5}$~GeV in the northern hemisphere.
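For reference, under an $E^{-2}$ spectrum the conversion from $E_{\rm iso}$ to the time-integrated flux normalization reduces to a distance and log-bandwidth factor. The sketch below uses this textbook relation, ignoring the per-trial sky and distance sampling described above (a simplification on our part):

```python
import math

MPC_TO_CM = 3.0857e24   # centimeters per megaparsec
ERG_TO_GEV = 624.15     # GeV per erg

def e2f_from_eiso(eiso_erg, distance_mpc, e_min_gev, e_max_gev):
    """Time-integrated flux normalization E^2 F [GeV cm^-2] for an E^-2
    spectrum: the energy fluence at Earth, E_iso / (4 pi d^2), divided by
    the logarithmic bandwidth ln(E_max / E_min)."""
    d_cm = distance_mpc * MPC_TO_CM
    fluence = eiso_erg * ERG_TO_GEV / (4 * math.pi * d_cm ** 2)
    return fluence / math.log(e_max_gev / e_min_gev)

# Example: E_iso = 1e51 erg at 400 Mpc over the northern-hemisphere
# energy range (5e3 to 1e5 GeV) -> E^2 F of about 0.011 GeV cm^-2.
e2f = e2f_from_eiso(1e51, 400.0, 5e3, 1e5)
```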
For the follow-up of the potential optical counterpart, AGN J124942.3+344929, we do not include any of the GW spatial information because we are testing for neutrino emission from the precise location of the AGN rather than the full GW contour. This method is equivalent to the full all-sky scan method described above except the localization skymap is a delta function at the single pixel containing the AGN.
\subsection{Low Latency Algorithm for Multi-messenger Astrophysics (LLAMA)} \label{sec:LLAMA}
The LLAMA analysis is based on the calculation of Bayesian probabilities for the observed coincidences of GWs and high-energy neutrinos \citep{PhysRevD.100.083017}. The test statistic is the odds ratio between the hypothesis that the coincidence arises from a joint astrophysical emission of GWs and neutrinos and the hypothesis that the two are unrelated, also allowing for the possibility that either messenger is not astrophysical. For the analysis of the confirmed GW detections followed up in this study, the GW events are assumed to be certainly astrophysical. The probabilities of the astrophysical and background scenarios for the neutrinos are quantified using the effective area of IceCube, past triggers of the GFU stream (which are predominantly of atmospheric origin), and the reconstructed energies of the neutrinos and their sky localizations. In addition, an $E^{-2}$ astrophysical spectrum is assumed. The relation between the GW and the neutrinos is quantified via the difference between their detection times, their respective sky localizations, and the mean distance reconstruction of the GW event. Together with the astrophysical emission energy $E_{\mathrm{iso}}$, which is assigned a log-uniform prior between $10^{46}$ and $10^{51}$~erg, the distance reconstruction of the GW event accounts for the propagation of the neutrinos through space.
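As an illustration only, a counts-only toy version of this odds ratio can be written as a Poisson Bayes factor; the real LLAMA statistic also folds in neutrino energies, sky localizations, and the GW distance:

```python
import math

def poisson_pmf(n, mu):
    """Probability of observing n events given a Poisson mean mu."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

def counts_bayes_factor(n_obs, mu_bg, mu_sig):
    """Toy Bayes factor: 'background + signal' vs 'background only' for the
    neutrino count in the search window (counts only; not the full
    LLAMA statistic)."""
    return poisson_pmf(n_obs, mu_bg + mu_sig) / poisson_pmf(n_obs, mu_bg)

# The ~6.5 mHz all-sky GFU rate over a 1000 s window gives mu_bg ~ 6.5.
bf_excess = counts_bayes_factor(10, 6.5, 3.5)   # mild excess favors signal
bf_deficit = counts_bayes_factor(2, 6.5, 3.5)   # deficit favors background
```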
Precomputed background distributions are used for calculating the $p$-values. In order to include the distance information of the GW events appropriately, different background distributions are constructed for different source types (BNS, NSBH, BBH coalescences). For this purpose, GW events are simulated for each source category and they are randomly matched with scrambled past GFU detections. The number of neutrinos matched with each GW event is drawn according to a Poisson distribution with a mean corresponding to the average GFU trigger count in 1000 s. The 90\% upper limits on the time-integrated neutrino flux are calculated as described in the appendix of \cite{Aartsen:2020mla}.
The neutrino follow-up on the candidate optical counterpart of GW190521 assumes the emission model described in \cite{Graham:2020gwr}. The model assumes a linearly decreasing emission intensity. The start and end times of the emission were found from the observed light curves by following the same model, which also includes an optical diffusion delay obeying a Maxwell-Boltzmann distribution. The least-squares estimates for the start and end times of the emission were 23 and 80 days after the merger, respectively, the same as those found in \cite{Graham:2020gwr}. For the search, the neutrino emission is assumed to be linearly decreasing in this time window, as in \cite{Graham:2020gwr}, free of any diffusion effect, and coming from a point source located at the AGN's position.
\section{Low-Latency Operation} \label{sec:pipelines}
\label{sec:low-latency}
Both UML \citep{Aartsen:2020mla} and LLAMA \citep{2019arXiv190105486C,PhysRevD.100.083017} analyses deployed low-latency pipelines designed to perform automated neutrino follow-up searches after receiving notices from LVC through the Gamma-ray Coordinates Network (GCN) \citep{llama:2019icrc,Hussain:2019xzb}.
These pipelines allow for rapid neutrino follow-ups and the dissemination of results to the astronomical community via GCN circulars. Low-latency neutrino information can help inform the observing strategies of electromagnetic observatories searching for electromagnetic counterparts to GW events. For example, observatories such as $Swift$-XRT were able to use IceCube's neutrino follow-up results to narrow the search region for several GW events \citep{Keivani:2020utg}. While no electromagnetic counterparts were found during the O3 observing run, these pipelines show the discovery potential of low-latency multi-messenger astronomy in identifying joint sources of photons, GWs, and neutrinos.
Both analyses take advantage of the GCN notices to receive information about a given GW event and trigger a dedicated neutrino follow-up search. The pipelines use a Python package, PyGCN \citep{pygcn}, to continuously monitor the GCN system for notices sent by LVC. Due to the low latency of the GFU Online stream ($\sim$30~s) and the speed of the follow-up analyses ($\sim$56~min), IceCube was able to rapidly circulate results from neutrino follow-ups to the astronomical community via subsequent GCN circulars. Figure \ref{fig:latency} shows the distribution of response times between the IceCube GCN circulars and the GW merger times. The latency shown in the figure accounts for the time taken by LVC to send the initial GCN notice, as well as the final vetting of the IceCube results by the collaboration's Realtime Oversight Committee (ROC) before the follow-up results were sent via GCN circulars. Follow-ups with an observed $p$-value $\leq$ 1\% in either pipeline, or any follow-ups deemed interesting to the astronomical community by the ROC, resulted in the release of the directional information of the potentially significant neutrino candidate via GCN circulars.
During O3, there were a total of 56 non-retracted candidate GW events that were publicly shared. We ran follow-ups on these events and 4 of them resulted in the release of the directional information of a neutrino to the astronomical community. These released coincidences were further followed-up by different telescopes and observatories, e.g. $Swift$-XRT. For each of these events, the LVC GCN notices and the GCN circular archives are linked. The archives show all follow-ups performed by each observatory, including the follow-ups that use IceCube information. These events were the following:
\begin{itemize}
\item S190517h\footnote{GW event GCN notice \url{https://gcn.gsfc.nasa.gov/notices_l/S190517h.lvc}}\textsuperscript{,\,}\footnote{GCN circular archive \url{https://gcn.gsfc.nasa.gov/other/GW190517h.gcn3}}: This candidate BBH merger event had one neutrino located in the 90\% credible sky region of the GW localization. Due to this spatial coincidence the neutrino's localization was shared with the community \footnote{\url{https://gcn.gsfc.nasa.gov/gcn/gcn3/24573.gcn3}}, despite its low statistical significance.
\item S190728q\footnote{GW event GCN notice \url{https://gcn.gsfc.nasa.gov/notices_l/S190728q.lvc}}\textsuperscript{,\,}\footnote{GCN circular archive \url{https://gcn.gsfc.nasa.gov/other/GW190728q.gcn3}}: This candidate BBH merger event originally had a two-detector localization which did not yield any significant neutrino coincidence. The localization was later improved by the incorporation of the Virgo data, which increased the significance of one of the neutrinos. With the final online skymap the coincidence had the $p$-values 1.0\% and 1.6\% for the LLAMA and UML searches, respectively \footnote{\url{https://gcn.gsfc.nasa.gov/gcn/gcn3/25210.gcn3}}. Figure \ref{fig:S190728q_updates} shows the various localization skymaps sent by LVC and the associated results from each pipeline, which were reported in low-latency via GCN circulars. The skymaps were refined over a period of roughly 14 hours following the initial GCN notice sent by LVC. It is seen that the $p$-values from both pipelines become more significant as the localization is refined, since the neutrino candidate 3 remains within the high probability region of the skymap as the GW localization shrinks. Figure \ref{fig:S190728q} shows the zoomed in updated skymap of GW190728\_064510 with the coincident neutrino overlaid.
\item S191216ap\footnote{GW event GCN notice \url{https://gcn.gsfc.nasa.gov/notices_l/S191216ap.lvc}}\textsuperscript{,\,}\footnote{GCN circular archive \url{https://gcn.gsfc.nasa.gov/other/GW191216ap.gcn3}}: This candidate BBH merger event was one of the events for which the results of the two analyses disagreed. It was located relatively close, at $\sim400$ Mpc. Due to this atypically close distance for a BBH merger, the neutrino-GW coincidence was favored by the LLAMA search, which assigned a $p$-value of 0.6\%, whereas the UML search obtained a $p$-value of 22\% \footnote{\url{https://gcn.gsfc.nasa.gov/gcn/gcn3/26460.gcn3}}. The most interesting response to our GCN notices came after the release of the neutrino coinciding with this event: the HAWC observatory sent out another notice reporting that their most significant \emph{subthreshold} gamma-ray trigger coincided with both the neutrino and GW localizations \footnote{\url{https://gcn.gsfc.nasa.gov/gcn/gcn3/26472.gcn3}}. No further counterpart was found in the region, and due to the uncertain nature of the gamma-ray trigger, the status of the triple coincidence remained inconclusive.
\item S200213t\footnote{GW event GCN notice \url{https://gcn.gsfc.nasa.gov/notices_l/S200213t.lvc}}\textsuperscript{,\,}\footnote{GCN circular archive \url{https://gcn.gsfc.nasa.gov/other/GW200213t.gcn3}}: This event was the only candidate BNS merger for which a coincident neutrino was released. However, unlike the three events above, it was not included in the published GW catalogs. The UML and LLAMA searches obtained $p$-values of 0.3\% and 1.7\%, respectively, for the neutrino coincidence\footnote{\url{https://gcn.gsfc.nasa.gov/gcn/gcn3/27043.gcn3}}.
\end{itemize}
Both of these low-latency pipelines are being prepared to continue neutrino follow-ups during the fourth observing run of the LIGO, Virgo, and KAGRA detectors, planned to start in 2023.
\section{Archival Searches on Catalogs} \label{sec:results}
Once the catalogs containing the confident GW detections were published by LVC, we performed archival searches on these events. Several GW events were added or removed in the catalogs compared to the candidate events shared with the community by LVC during the O3 run. Initially, LVC released the catalog GWTC-2 \citep{LIGOScientific:2020ibl}, which contained 39 events from the first half of O3. These events were analyzed using both the UML and LLAMA methods, and no significant neutrino emission was found \citep{Veske:2021Q6}. Later, this catalog was updated by LVC, resulting in the publication of GWTC-2.1 \citep{LIGOScientific:2021usb}, which uses an updated statistic for the classification of events as confident detections. This updated catalog has 44 GW events, of which 8 were new compared to GWTC-2; three events from GWTC-2 were retracted in the updated catalog. Here, we present the results for the 44 confident events in GWTC-2.1. The 36 common events were reanalyzed by the LLAMA search with a renewed background distribution, generated with the latest population estimates for binary black holes; no appreciable change was found with respect to the previous analysis. The results of the UML analysis for the common events stayed the same. Finally, LVC also published GWTC-3 \citep{LIGOScientific:2021djp}, a catalog containing the confident GW events observed during the second half of the O3 run. These events were also analyzed as part of the archival search.
First, we present the results of the searches for neutrino emission within a time window of $\pm$500~s around the 80 mergers in GWTC-2.1 and GWTC-3. We did not observe significant neutrino emission from any GW event in any analysis. ULs were placed on the time-integrated, energy-scaled neutrino flux, $E^2F$, as well as on the $E_{\rm iso}$ emitted in high-energy muon neutrinos. Table \ref{tab:results} summarizes the results for each follow-up of GW events in GWTC-2.1 performed by both analyses. Similarly, Table \ref{tab:results2} shows the results for the GW events in GWTC-3. Figure \ref{fig:pvals} shows the histogram of the $p$-values for the collection of GW events from GWTC-1 \citep{Abbott_2019}, GWTC-2.1 \citep{LIGOScientific:2021usb}, and GWTC-3 \citep{LIGOScientific:2021djp} from both analyses, together with the background expectations. The set of events did not show any significant sign of emission. The background expectation shown for the UML analysis was derived from the background TS distributions of each GW. The LLAMA analysis' background $p$-value distribution is uniform for all kinds of events. The different results for the LLAMA and UML analyses arise from the inherent differences in the statistical approaches of the two --- one being a Bayesian approach including priors on the GW source and the other being a purely frequentist approach. This is also true for the $p$-values obtained in the low-latency search described in Section \ref{sec:low-latency}.
Figure \ref{fig:eiso} shows the $E_{\rm iso}$ ULs for all GW events in GWTC-1, GWTC-2.1 and GWTC-3, along with the total rest-mass energy of the initial compact objects and the total energy radiated by the system post-merger. The total radiated energy is computed as the difference between the total rest-mass energy of the two progenitors and that of the final remnant object.
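As a rough illustration of how a time-integrated flux limit translates into an isotropic-equivalent energy limit, the sketch below assumes an $E^{-2}$ spectrum over a fixed energy band and neglects redshift corrections; the energy range, the example distance, and the flux value are assumptions for illustration, not the exact procedure used in the analyses above.

```python
import math

def e_iso_upper_limit(e2f_gev_cm2, d_l_mpc, e_min_gev=5e3, e_max_gev=1e5):
    """Convert a time-integrated flux UL (E^2 F for an assumed E^-2
    spectrum) into an isotropic-equivalent energy UL in erg.

    Illustrative only: the energy band, the spectral index, and the
    neglect of redshift corrections are simplifying assumptions.
    """
    MPC_CM = 3.086e24          # cm per Mpc
    GEV_ERG = 1.602e-3         # erg per GeV
    # total energy fluence of an E^-2 spectrum: integral of E*(A/E^2) dE
    fluence_gev_cm2 = e2f_gev_cm2 * math.log(e_max_gev / e_min_gev)
    area_cm2 = 4.0 * math.pi * (d_l_mpc * MPC_CM) ** 2
    return fluence_gev_cm2 * area_cm2 * GEV_ERG

# e.g. a 0.1 GeV cm^-2 limit on a source at 1000 Mpc (hypothetical numbers)
print(f"{e_iso_upper_limit(0.1, 1000.0):.2e} erg")   # ~6e52 erg
```

This makes explicit why the $E_{\rm iso}$ limits in the tables span orders of magnitude even at comparable flux limits: the distance enters squared.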
No significant neutrino emission was observed in the second archival search presented here, which is the 2-week follow-up.
There are only 3 GW events in GWTC-2.1 that may have at least one neutron star in the binary system: GW190425, GW190814, and GW190917\_114630. In addition, 4 NSBH events were published in the GWTC-3 catalog: GW191219\_163120, GW200105\_162426 (a marginal event), GW200115\_042309, and GW200210\_092254. All of these events have at least one progenitor object with a mass estimate below 3~M$_{\odot}$ \citep{LIGOScientific:2021usb,LIGOScientific:2021djp}. The 2-week follow-up was performed on these 7 GW events. Once again, we place 90\% ULs on the time-integrated neutrino emission from each of the 7 GWs tested here. Table \ref{tab:2week_results} shows the $p$-values and ULs for these events, and Fig. \ref{fig:2week_ts_maps_gwtc2} shows the final test statistic maps. There was no difference between the neutron-star-containing events in GWTC-2 and GWTC-2.1.
Finally, no significant neutrino emission was found by either analysis method in the follow-ups of the candidate optical counterpart of GW190521. The modelled LLAMA search yielded a $p$-value of 0.79, a 90\% upper limit on $E^2F$ of 0.05 GeV cm$^{-2}$, and a 90\% upper limit on $E_{\rm iso}$ of 8$\times10^{53}$ erg. The UML analysis found a $p$-value of 0.25, with a 90\% upper limit on the time-integrated flux of $E^2F$=0.081 GeV cm$^{-2}$.
\section{Conclusion} \label{sec:conclusion}
Finding joint sources of GWs and high-energy neutrinos can help shed light on the sources of the highest energy neutrinos and cosmic rays \citep{doi:10.1146/annurev-nucl-101918-023510}. Studying these joint sources will also further expand our understanding of energetic outflows from the mergers of compact objects. The completion of the O3 realtime observing run and the release of the update to the second GW catalog, GWTC-2.1, followed by the release of GWTC-3 have provided a substantial increase in the number of reported GW candidates available for follow-up searches.
We developed low-latency pipelines which ran automated neutrino follow-ups for all GW events reported by LVC during the O3 observing run. Two different analyses, UML and LLAMA, both ran in low latency and followed up each of the 56 candidate events reported during the O3 run. Four of the follow-up searches resulted in the release of the neutrino candidate's direction to the public via GCN circulars. This information prompted follow-up searches by electromagnetic observatories such as $Swift$-XRT, demonstrating the power of low-latency multi-messenger observations in informing the observing strategies of other observatories. The unresolved triple coincidence for GW191216, involving a subthreshold gamma-ray trigger from the HAWC observatory, prompted the development of general multi-messenger search methods for many messengers \citep{Veske_2021}.
In addition to the low-latency follow-ups, we performed three offline analyses of the GW events reported in GWTC-2.1 and GWTC-3. The first analysis searched for neutrino emission within a $\pm$500~s time window centered around the GW merger time. Both the UML and LLAMA methods performed this search and no significant neutrino emission was observed in either search.
The second analysis was a 2-week follow-up of all BNS and NSBH candidate events with the UML search. All the GW events followed up in this analysis had at least one progenitor object with a mass estimate of $<$3~M$_{\odot}$. No significant neutrino emission was observed and 90\% ULs were placed on the time-integrated neutrino flux from each source.
The third analysis searched for neutrino emission from the potential optical counterpart of the BBH merger GW190521 reported by ZTF. The UML analysis tested a time window of 112~days following the GW merger time, which covers the entire flare in the optical light curve, and assumed uniform neutrino emission within the time window. The LLAMA analysis assumed linearly decreasing neutrino emission in a 57 day time window, according to the proposed emission scenario for the optical flare. No significant neutrino emission was observed in either analysis method, and we derived 90\% ULs on the time-integrated flux and on $E_{\mathrm{iso}}$ from the AGN J124942.3+344929.
Apart from the analyses presented here, there also exists a gravitational-wave follow-up analysis with neutrinos of a few tens to hundreds of GeV in energy detected by IceCube \citep{2022icrc.confE.939B}. This upcoming analysis will provide additional information, complementary to the analyses with high-energy neutrinos presented here. Additionally, a search for low-energy neutrinos, with energies of 0.5--5 GeV, was conducted with IceCube and found no significant emission of neutrinos \citep{IceCube:2021ddq}.
The low-latency and archival searches will continue to operate during the upcoming O4 run of LVK. The enhanced detector performance expected in O4 will increase the rate of detected mergers, providing more opportunities for multi-messenger studies that may lead to a discovery of correlated neutrino and gravitational-wave emission. Additionally, the inclusion of more LVK detectors will shrink the sky localizations of the GW skymaps, which is also expected to yield higher significances in the case of coincident detections.
\begin{table*}
\begin{tabular}{|ccc|cc|ccc|}
\hline
\multicolumn{3}{|c}{GWTC-2.1}&\multicolumn{2}{|c}{ LLAMA } &\multicolumn{3}{|c|}{UML} \\ \hline
& & Area & & $E{^2F}$ UL & & $E{^2F}$ UL & \\
\multirow{-2}{*}{Event} & \multirow{-2}{*}{Type} & {[deg$^2$]} & \multirow{-2}{*}{$p$-value} & [GeVcm$^{-2}$] & \multirow{-2}{*}{$p$-value} & [GeVcm$^{-2}$] & \multirow{-2}{*}{$E_{\rm iso}$ UL [erg]}\\ \hline
GW190403\_051519 & BBH & 5589.4& 0.51 & 0.14 & 0.46 & 0.101 & 1.86 $\times$ 10$^{55}$ \\ \hline
GW190408\_181802 & BBH & 148.8 & 0.22 & 0.048 & 0.17 & 0.0512 & 4.85 $\times$ 10$^{53}$ \\ \hline
GW190412 & BBH & 20.9 & 0.27 & 0.041 & 0.13 & 0.0459 & 8.31 $\times$ 10$^{52}$ \\ \hline
GW190413\_052954 & BBH & 1484.5 & 0.30 & 0.087 & 0.28 & 0.133 & 7.01 $\times$ 10$^{54}$ \\ \hline
GW190413\_134308 & BBH &730.6 & 0.27 & 0.34 & 0.34 & 0.270 & 2.84 $\times$ 10$^{55}$ \\ \hline
GW190421\_213856 & BBH & 1211.5& 0.81 & 0.46 & 0.56 & 0.393 & 1.40 $\times$ 10$^{55}$ \\ \hline
GW190425 & BNS &9958.2 & 0.16 & 0.22 & 0.94 & 0.176 & 1.66 $\times$ 10$^{52}$ \\ \hline
GW190426\_190642 & BBH & 8214.5& 0.42 & 0.17 & 0.18 & 0.282 & 1.25 $\times$ 10$^{55}$ \\ \hline
GW190503\_185404 & BBH & 94.4& 0.94 & 0.54 & 0.34 & 0.584 & 4.99 $\times$ 10$^{54}$ \\ \hline
GW190512\_180714 & BBH & 218.0& 0.81 & 0.23 & 0.85 & 0.199 & 1.74 $\times$ 10$^{54}$ \\ \hline
GW190513\_205428 & BBH &518.4 & 0.99 & 0.043 & 0.94 & 0.0514 & 6.73 $\times$ 10$^{53}$ \\ \hline
GW190514\_065416 & BBH & 3009.7& 0.25 & 0.089 & 0.44 & 0.0453 & 3.96 $\times$ 10$^{54}$ \\ \hline
GW190517\_055101 & BBH & 473.3& 0.21 & 0.48 & 0.26 & 0.366 & 6.05 $\times$ 10$^{54}$ \\ \hline
GW190519\_153544 & BBH & 857.1& 0.067 & 0.15 & 0.21 & 0.0914 & 3.20 $\times$ 10$^{54}$ \\ \hline
GW190521 & BBH & 1008.2& 0.62 & 0.37 & 0.63 & 0.359 & 1.90 $\times$ 10$^{55}$ \\ \hline
GW190521\_074359 & BBH &546.5 & 0.11 & 0.049 & 0.15 & 0.0451 & 2.36 $\times$ 10$^{53}$ \\ \hline
GW190527\_092055 & BBH &3662.4 & 0.65 & 0.41 & 0.88 & 0.326 & 1.01 $\times$ 10$^{55}$ \\ \hline
GW190602\_175927 & BBH & 694.5& 0.31 & 0.34 & 0.17 & 0.370 & 9.73 $\times$ 10$^{54}$ \\ \hline
GW190620\_030421 & BBH &7202.1 & 0.20 & 0.36 & 0.23 & 0.121 & 4.13 $\times$ 10$^{54}$ \\ \hline
GW190630\_185205 & BBH & 1216.9& 0.64 & 0.15 & 0.81 & 0.427 & 5.31 $\times$ 10$^{53}$ \\ \hline
GW190701\_203306 & BBH & 46.1& 1.0 & 0.039 & 0.87 & 0.0385 & 7.65 $\times$ 10$^{53}$ \\ \hline
GW190706\_222641 & BBH & 653.8& 0.99 & 0.036 & 0.92 & 0.0356 & 3.17 $\times$ 10$^{54}$ \\ \hline
GW190707\_093326 & BBH & 1346.& 0.43 & 0.24 & 0.63 & 0.202 & 4.74 $\times$ 10$^{53}$ \\ \hline
GW190708\_232457 & BBH & 13675.4& 0.11 & 0.11 & 0.56 & 0.0720 & 1.62 $\times$ 10$^{53}$ \\ \hline
GW190719\_215514 & BBH &2890.1 & 0.83 & 0.054 & 0.91 & 0.0512 & 4.90 $\times$ 10$^{54}$ \\ \hline
GW190720\_000836 & BBH &463.4 & 0.99 & 0.13 & 0.94 & 0.0872 & 5.34 $\times$ 10$^{53}$ \\ \hline
GW190725\_174728 & BBH & 2292.5& 0.048 & 0.19 & 0.59 & 0.0918 & 4.04 $\times$ 10$^{53}$ \\ \hline
GW190727\_060333 & BBH &833.8 & 0.89 & 0.38 & 0.74 & 0.324 & 1.53 $\times$ 10$^{55}$ \\ \hline
GW190728\_064510 & BBH & 395.5& 0.0084 & 0.89 & 0.04 & 0.315 & 6.36 $\times$ 10$^{53}$ \\ \hline
GW190731\_140936 & BBH & 3387.3& 0.25 & 0.93 & 0.61 & 0.385 & 1.81 $\times$ 10$^{55}$ \\ \hline
GW190803\_022701 & BBH & 1519.5& 0.31 & 0.037 & 0.64 & 0.0354 & 1.69 $\times$ 10$^{54}$ \\ \hline
GW190805\_211137 & BBH & 3949.1& 0.74 & 0.20 & 0.93 & 0.180 & 2.56 $\times$ 10$^{55}$ \\ \hline
GW190814 & BBH* & 19.3& 1.0 & 0.24 & 1.0 & 0.259 & 5.68 $\times$ 10$^{52}$ \\ \hline
GW190828\_063405 & BBH &520.1 & 0.93 & 0.21 & 0.98 & 0.178 & 2.74 $\times$ 10$^{54}$ \\ \hline
GW190828\_065509 & BBH & 664.0& 0.84 & 0.38 & 0.84 & 0.368 & 3.73 $\times$ 10$^{54}$ \\ \hline
GW190910\_112807 & BBH & 10880.3& 0.22 & 0.45 & 0.77 & 0.177 & 1.90 $\times$ 10$^{54}$ \\ \hline
GW190915\_235702 & BBH & 396.9& 0.56 & 0.036 & 0.44 & 0.0354 & 3.61 $\times$ 10$^{53}$ \\ \hline
GW190916\_200658 & BBH & 4499.2& 0.52 & 0.16 & 0.85 & 0.108 & 1.22 $\times$ 10$^{55}$ \\ \hline
GW190917\_114630 & NSBH* & 2050.6& 0.20 & 0.19 & 0.72 & 0.203 & 6.37 $\times$ 10$^{53}$ \\ \hline
GW190924\_021846 & BBH & 357.9& 0.031 & 0.037 & 0.23 & 0.0346 & 4.46 $\times$ 10$^{52}$ \\ \hline
GW190925\_232845 & BBH & 1233.5& 0.39 & 0.11 & 0.59 & 0.0908 & 3.41 $\times$ 10$^{53}$ \\ \hline
GW190926\_050336 & BBH & 2505.9& 0.13 & 0.78 & 0.33 & 0.280 & 2.30 $\times$ 10$^{55}$ \\ \hline
GW190929\_012149 & BBH & 2219.3& 0.11 & 0.34 & 0.22 & 0.276 & 1.85 $\times$ 10$^{55}$ \\ \hline
GW190930\_133541 & BBH & 1679.6& 0.14 & 0.038 & 0.31 & 0.0427 & 1.05 $\times$ 10$^{53}$ \\ \hline
\end{tabular}
\caption{Results for the events in GWTC-2.1 \citep{LIGOScientific:2021usb} for the 1000~s follow-up. GW190814 is labelled as a BBH merger here although the type of the lighter object with $\sim2.6$ M$_\odot$ is unknown \citep{Abbott:2020khf}. GW190917\_114630 is labelled as NSBH since its estimated source properties are more like that of an NSBH event although the event was found to be significant by a BBH template. The table also shows the area on the sky containing 90\% probabilities from the GW skymap.}
\label{tab:results}
\end{table*}
\begin{table*}
\begin{tabular}{|ccc|cc|ccc|}
\hline
\multicolumn{3}{|c}{GWTC-3}&\multicolumn{2}{|c}{ LLAMA } &\multicolumn{3}{|c|}{UML} \\ \hline
& & Area & & $E{^2F}$ UL & & $E{^2F}$ UL & \\
\multirow{-2}{*}{Event} & \multirow{-2}{*}{Type} & {[deg$^2$]} & \multirow{-2}{*}{$p$-value} & [GeVcm$^{-2}$] & \multirow{-2}{*}{$p$-value} & [GeVcm$^{-2}$] & \multirow{-2}{*}{$E_{\rm iso}$ UL [erg]}\\ \hline
GW191103\_012549 & BBH & 2519.6 & 0.53 & 0.049 & 0.71 &0.049 & $1.96\,\times \,10^{53}$\\ \hline
GW191105\_143521 & BBH & 728.7 & 0.27 & 0.28 & 0.54 & 0.267 & $1.28\,\times \,10^{54}$\\ \hline
GW191109\_010717& BBH & 1784.3 & 0.14 & 0.48 & 0.05 & 0.508 & $5.03\,\times\, 10^{54}$\\ \hline
GW191113\_071753& BBH & 2993.3 & 0.076 & 0.52 & 0.19 & 0.441 & $3.12\,\times \,10^{54}$\\ \hline
GW191126\_115259& BBH & 1514.5 & 0.77 & 0.13 & 1.00 & 0.138 & $1.42 \, \times \, 10^{54}$\\ \hline
GW191127\_050227& BBH & 1499.2 & 0.38 & 0.078 & 0.83 & 0.081 & $2.96\,\times\,10^{54}$\\ \hline
GW191129\_134029& BBH & 848.3 & 0.25 & 0.35 & 0.30 & 0.425 & $8.95\,\times\,10^{53}$\\ \hline
GW191204\_110529& BBH & 4747.7 & 0.16 & 0.36 & 0.49 & 0.085 & $1.46\,\times\,10^{54}$\\ \hline
GW191204\_171526& BBH & 344.9 & 0.97 & 0.26 & 1.00 & 0.280 & $3.96\,\times\,10^{53}$\\ \hline
GW191215\_223052& BBH & 595.8 & 0.98 & 0.26 & 1.00 & 0.211 & $2.98\,\times\,10^{54}$\\ \hline
GW191216\_213338& BBH & 480.1 & 0.0049 & 0.093 & 0.10 & 0.071 & $2.57\,\times\,10^{52}$ \\ \hline
GW191219\_163120& NSBH & 2232.1 & 0.09 & 0.26 & 0.71 & 0.219 & $2.80\,\times\,10^{53}$\\ \hline
GW191222\_033537& BBH & 2299.2 & 0.95 & 0.36 & 1.00 & 0.375 & $1.1\,\times\,10^{55}$ \\ \hline
GW191230\_180458& BBH & 1012.2 & 0.37 & 0.36 & 0.28 & 0.488 & $3.18\,\times\,10^{55}$ \\ \hline
GW200105\_162426& NSBH & 7881.8 & 0.20 & 0.13 & 0.81 & 0.095 & $2.98\,\times\,10^{52}$ \\ \hline
GW200112\_155838& BBH & 4250.4 & 0.58 & 0.18 & 0.79 & 0.133 & $8.43\,\times\,10^{53}$ \\ \hline
GW200115\_042309&NSBH & 511.9 & 0.34 & 0.038 & 0.45 & 0.045 & $2.12\,\times\,10^{52}$ \\ \hline
GW200128\_022011& BBH & 2677.5 & 0.46 & 0.25 & 0.47 & 0.243 & $9.31\,\times\,10^{54}$\\ \hline
GW200129\_065458& BBH & 81.8 & 0.033 & 0.041 & 0.05 & 0.406 & $1.73\,\times\,10^{53}$ \\ \hline
GW200202\_154313& BBH & 159.3 & 0.0057 & 0.039 & 0.06 & 0.038 & $2.43\,\times\,10^{52}$ \\ \hline
GW200208\_130117& BBH & 38.0 & 0.94 & 0.33 & 1.00 & 0.518 & $9.25\,\times\,10^{54}$ \\ \hline
GW200208\_222617& BBH & 1889.2 & 0.41 & 0.045 & 0.90 & 0.043 & $4.98\,\times\,10^{54}$ \\ \hline
GW200209\_085452& BBH & 924.5 & 0.84 & 0.50 & 1.00 & 0.041 & $1.81\,\times\,10^{54}$\\ \hline
GW200210\_092254&BBH & 1830.7 & 0.28 & 0.071 & 0.79 & 0.081 & $2.51\,\times\,10^{53}$\\ \hline
GW200216\_220804&BBH & 3009.5 & 0.065 & 0.066 & 0.46 & 0.236 & $2.82\,\times\,10^{54}$\\ \hline
GW200219\_094415&BBH & 702.1 & 0.98 & 0.23 & 1.00 & 0.035 & $9.57\,\times\,10^{54}$\\ \hline
GW200220\_061928&BBH & 3484.7 & 0.23 & 0.22 & 0.05 & 0.357 & $4.23\,\times\,10^{55}$ \\ \hline
GW200220\_124850& BBH & 3168.9 & 0.42 & 0.13 & 0.53 & 0.118 & $6.31\,\times\,10^{54}$ \\ \hline
GW200224\_222234& BBH& 49.9 & 0.90 & 0.068 & 1.00 & 0.079 & $9.33\,\times\,10^{53}$ \\ \hline
GW200225\_060421& BBH& 509.0 & 0.0048 & 0.10 & 0.20 & 0.055 & $3.03\,\times\,10^{53}$ \\ \hline
GW200302\_015811& BBH& 7010.8 & 0.16 & 0.67 & 0.21 & 0.531 & $4.34\,\times\,10^{54}$ \\ \hline
GW200306\_093714 & BBH & 4371.2 & 0.15 & 0.074 & 0.57 & 0.046 & $9.99\,\times\,10^{53}$ \\ \hline
GW200308\_173609 & BBH & 18705.7 & 0.24 & 0.38 & 0.29 & 0.326 & $7.18\,\times\,10^{55}$ \\ \hline
GW200311\_115853& BBH & 35.6 & 1.0 & 0.047 & 1.00 & 0.076 & $4.38\,\times\,10^{53}$ \\ \hline
GW200316\_215756& BBH & 410.4 & 0.17 & 0.066& 0.04 & 0.110 & $5.19\,\times\,10^{53}$ \\ \hline
GW200322\_091133 & BBH & 31571.1 & 0.23 & 0.18 & 0.87 & 0.148 & $4.39\,\times\,10^{55}$ \\ \hline
\end{tabular}
\caption{Results for the events in GWTC-3 \citep{LIGOScientific:2021djp} for the 1000~s follow-up. The central 68\% energy range of the events contributing to the limits shown here ranges from $5\,\times$ 10$^5$~GeV~-~10$^7$~GeV in the southern hemisphere and $5\,\times$ 10$^3$~GeV~-~10$^5$~GeV in the northern hemisphere.}
\label{tab:results2}
\end{table*}
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Event & Type & $p$-value & $E{^2F}$ UL [GeVcm$^{-2}$] \\ \hline
GW190425 & BNS & 0.43 & 0.661 \\ \hline
GW190917\_114630 & NSBH & 0.84 & 0.442 \\ \hline
GW190814 & BBH & 0.59 & 0.309 \\ \hline
GW191219\_163120 & NSBH & 0.67 & 0.347 \\ \hline
GW200105\_162426 & NSBH & 0.47 & 0.382 \\ \hline
GW200115\_042309 & NSBH & 0.68 & 0.078 \\ \hline
GW200210\_092254 & NSBH & 0.13 & 0.303 \\ \hline
\end{tabular}
\caption{Results for the 2 week follow-up analysis using the UML method. 3 events from GWTC-2.1 \citep{LIGOScientific:2021usb} and 4 events from GWTC-3 \citep{LIGOScientific:2021djp} were followed up as they were the only potential BNS/NSBH candidates.}
\label{tab:2week_results}
\end{table*}
\section*{Acknowledgements}
The IceCube collaboration acknowledges the significant contributions to this manuscript from Aswathi Balagopal V., Raamis Hussain and Do\u{g}a Veske.
The authors gratefully acknowledge the support from the following agencies and institutions: USA – U.S. National Science Foundation-Office of Polar Programs, U.S. National Science Foundation-Physics Division, U.S. National Science Foundation-EPSCoR, Wisconsin Alumni Research Foundation, Center for High Throughput Computing (CHTC) at the University of Wisconsin–Madison, Open Science Grid (OSG), Extreme Science and Engineering Discovery Environment (XSEDE), Frontera computing project at the Texas Advanced Computing Center, U.S. Department of Energy-National Energy Research Scientific Computing Center, Particle astrophysics research computing center at the University of Maryland, Institute for Cyber-Enabled Research at Michigan State University, and Astroparticle physics computational facility at Marquette University; Belgium – Funds for Scientific Research (FRS-FNRS and FWO), FWO Odysseus and Big Science programmes, and Belgian Federal Science Policy Office (Belspo); Germany – Bundesministerium für Bildung und Forschung (BMBF), Deutsche Forschungsgemeinschaft (DFG), Helmholtz Alliance for Astroparticle Physics (HAP), Initiative and Networking Fund of the Helmholtz Association, Deutsches Elektronen Synchrotron (DESY), and High Performance Computing cluster of the RWTH Aachen; Sweden – Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg Foundation; Australia – Australian Research Council; Canada – Natural Sciences and Engineering Research Council of Canada, Calcul Québec, Compute Ontario, Canada Foundation for Innovation, WestGrid, and Compute Canada; Denmark – Villum Fonden and Carlsberg Foundation; New Zealand – Marsden Fund; Japan – Japan Society for Promotion of Science (JSPS) and Institute for Global Prominent Research (IGPR) of Chiba University; Korea – National Research Foundation of Korea (NRF); Switzerland – Swiss National Science Foundation (SNSF); United Kingdom – 
Department of Physics, University of Oxford.
This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org) \citep{RICHABBOTT2021100658}, a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. The construction and operation of KAGRA are funded by Ministry of Education, Culture, Sports, Science and Technology (MEXT), and Japan Society for the Promotion of Science (JSPS), National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea, Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan.
\bibliography{main}{}
\bibliographystyle{aasjournal}
\appendix
\vspace{-0.36cm}
\section{Skymaps}
This appendix includes the skymaps obtained in the context of this analysis. Figure \ref{fig:2week_ts_maps_gwtc2} shows the TS maps for the two-week follow-up analysis. Figures \ref{fig:1000s_skymaps} and \ref{fig:1000s_skymaps_gwtc3} show the skymaps with the GW probabilities and the observed neutrinos within the 1000~s time window in the archival search.
Title: Effects of Reheating on Moduli Stabilization
Abstract: The moduli potential loses its minima in the presence of external energy
sources, such as the inflaton energy density or the radiation produced at the
end of inflation. However, the non-existence of minima does not necessarily
imply destabilization of the moduli: destabilization always depends on the
initial field values. In this work, we study carefully how the effects of
reheating ease the problem of moduli destabilization. The time scale
associated with producing the thermal bath allows a larger range of initial
field values to stabilize the field. Contrary to the usual notion, the allowed
initial field range is larger for higher temperatures when the effective
potential is of a run-away nature. This eases the moduli destabilization
problem for heavy-mass moduli. For low-mass moduli ($\lesssim$ 30 TeV), the
allowed field range still causes the cosmological moduli problem by violating
the BBN constraints unless the initial abundance is suppressed.
https://export.arxiv.org/pdf/2208.00427
\flushbottom
\section{Introduction}
\label{sec:intro}
In supersymmetric theories beyond the Standard Model, there exist several massless scalar fields. To avoid stringent fifth-force constraints, these fields must be massive, and their masses are usually related to the effects of supersymmetry breaking. In the context of supergravity, these fields are gravitationally coupled to other fields, and their decay widths are Planck suppressed. For our considerations, we will call these fields collectively `moduli', represented by $\sigma$ (or its canonically normalized version $\phi$). In the context of String Theory, these fields characterize either the values of the low-energy gauge/Yukawa coupling constants or the volumes of the compact internal manifolds. Due to phenomenological constraints related to time variations of the coupling constants, and to avoid decompactification of the internal spaces, it is crucial that the vevs of these fields are fixed at finite values. This requires a clear understanding of the sources of the potential in some fundamental theory, as well as of the evolution of these fields in a cosmological background \cite{Binetruy:2006ad}.
The typical decay widths of these moduli fields are given by $\Gamma_{\phi} \sim m_\phi^3/M_{\rm pl}^2$, where $m_\phi$ is the mass of the field. Therefore, if the field is lighter than about $20$ MeV, its decay time exceeds the age of the Universe. On the other hand, unless the field is lighter than $10^{-26}$ eV, an initial amplitude of $\sim M_{\rm pl}$ carries enough energy to overclose the Universe. These scalar fields typically sit away from their global minimum, and when their masses become of the order of the Hubble rate, they start to oscillate \cite{Dine:1995uk}, \cite{Cicoli:2016olq}. The energy densities carried by these fields redshift more slowly than any preexisting radiation, and they may soon come to dominate the energy density of the Universe. However, as long as a field is heavier than $\sim 30$ TeV, it decays well before Big Bang Nucleosynthesis (BBN), and it is cosmologically consistent with all observations. In several models in supergravity and String Theory, moduli have masses in the range of a few GeV to TeV, related to the supersymmetry-breaking scale. In this case, unless the moduli abundance is highly suppressed, i.e., $n_\phi/s \lesssim 10^{-12}$, the moduli will decay during or after BBN, and their decay products spoil the light-element abundances. Here, $n_{\phi}$ is the moduli number density and $s$ is the entropy density. In the literature, the problems created by moduli with masses of a few GeV to TeV are dubbed the cosmological moduli problem \cite{Coughlan:1983ci}, \cite{Banks:1993en}, \cite{deCarlos:1993wie}.
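The mass thresholds quoted above follow directly from $\Gamma_\phi \sim m_\phi^3/M_{\rm pl}^2$. A minimal numerical check, using the reduced Planck mass and dropping the model-dependent $O(1)$ prefactor (both assumptions):

```python
HBAR_GEV_S = 6.582e-25       # hbar in GeV*s
M_PL_GEV = 2.435e18          # reduced Planck mass (an assumption; the
                             # unreduced Planck mass changes numbers by O(10))
AGE_UNIVERSE_S = 4.35e17     # ~13.8 Gyr in seconds

def moduli_lifetime_s(m_gev, c=1.0):
    """Lifetime from Gamma ~ c * m^3 / M_pl^2, order of magnitude only;
    the O(1) prefactor c is model dependent."""
    gamma_gev = c * m_gev ** 3 / M_PL_GEV ** 2
    return HBAR_GEV_S / gamma_gev

# A ~20 MeV modulus lives about as long as the Universe ...
print(moduli_lifetime_s(0.02) / AGE_UNIVERSE_S)   # ~1.1
# ... while a ~30 TeV modulus decays within a second, before BBN
print(moduli_lifetime_s(3.0e4))                   # ~0.14 s
```

The two printed numbers reproduce the 20 MeV and 30 TeV benchmarks in the text at the order-of-magnitude level.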
As mentioned before, the moduli fields must be massive at their present minimum. Moreover, the value of the potential at the minimum needs to match the present value of the cosmological constant required to explain the recent cosmic acceleration. In several String Theory constructions, the potential also has another minimum at infinite field value, and the two minima are separated by a barrier of finite height \cite{Kachru:2003aw}. In some cases, there might additionally be an AdS global minimum; in that case, the typical tunneling time must be larger than the age of the Universe \cite{Kallosh:2004yh}. A schematic moduli potential (blue line) is shown in Fig.~\ref{fig:KKLT_potential}, where the de Sitter minimum at a finite field value is at $\sigma_{\rm min}$, and the barrier is at $\sigma_{\rm max}$ with potential value $V_{\rm max}$. In the plot, we show the potential in terms of the canonically normalized field $\phi$.
Even if the modulus potential has the desired properties, it is important that the modulus field actually remains stabilized at the finite field value $\sigma_{\rm min}$. This is not guaranteed in a cosmological setup. For example, even if the potential has the required minimum at a finite field value, the field may start to evolve from high up the potential on the left-hand side of the minimum, where the potential is very steep. In this case, even with cosmological damping, the field might acquire enough kinetic energy to overshoot the finite barrier \cite{Brustein:1992nk}. Even worse, if the initial field value is greater than $\sigma_{\rm max}$, the field is guaranteed to eventually run away to infinite field values. The issue of overshooting the barrier due to the initial field configuration is called the `Brustein--Steinhardt' problem in the literature \cite{Brustein:1992nk}. Several solutions have been proposed \cite{Barreiro:1998aj}, \cite{Brustein:2004jp}, \cite{Kaloper:2004yj}, \cite{Barreiro:2005ua}, all related to the idea of adding additional background energy densities. Therefore, a dynamical analysis sensitive to the initial conditions is necessary to understand the issue of moduli stabilization.
Let us call the moduli potential with the above-mentioned properties $V_0 (\sigma)$. In any realistic theory, however, this potential does not arise in isolation. In fact, in the context of inflation in supergravity and in String Theory, the moduli fields are generically coupled to the inflaton $\varphi$. The nature of the coupling is dictated by supergravity, and in its simplest form the total potential during inflation roughly looks like
\begin{equation} \label{v_total_inf}
V_{\rm total} = V_0 (\sigma) + V_{\rm inf} (\varphi, \sigma)~,
\end{equation}
where the coupling term is typically of the form $V_{\rm inf} (\varphi, \sigma) = V(\varphi)/ \sigma^n$ with $ n > 0$. Here $V(\varphi)$ drives inflation. For any explicit model, $V_{\rm total}$ can be much more complicated than the separable form of Eq.~\eqref{v_total_inf}. The crucial point is that if $V(\varphi)$ is large enough, $V_{\rm total}$ develops a run-away direction along the modulus field \cite{Kallosh:2004yh}. In particular, when $V(\varphi) \sim V_{\rm max}$, the potential completely loses its minimum at a finite field value. In this case, it was argued that the modulus field will eventually roll towards large vevs. This in fact leads to the famous KL bound on the Hubble scale during inflation, $H_{\rm inf} \lesssim m_{3/2}$, for the KKLT model \cite{Kallosh:2004yh}. It is important to note that the bound arises due to the finite barrier height and the coupling between the inflaton and the moduli sector.
There is another source of destabilization of the modulus potential, and it is the main topic of this work. At the end of inflation, the energy density stored in the inflaton decays into lighter species, eventually producing a hot thermal plasma. The thermal plasma induces a temperature-dependent potential for the modulus field of the form $V_{\rm thermal} \sim T^4/ \sigma$, so that
\begin{equation} \label{v_total_thermal}
V_{\rm total} = V_0 (\sigma) + V_{\rm thermal} (T, \sigma)~.
\end{equation}
As with the inflaton-dependent term above, it is clear that for high enough temperatures, $T \gtrsim T_{\rm crit} \sim V_{\rm max}^{1/4}$, the minimum disappears and the full potential exhibits run-away behavior \cite{Buchmuller:2004xr}, \cite{Buchmuller:2004tz}. This is in fact the original argument for a maximum reheating temperature, above which the moduli-stabilization structure of the zero-temperature potential $V_0 (\sigma)$ is spoiled. In this case, it was assumed that as soon as the potential loses its local minimum, the field destabilizes, reaching infinite vev in the far future. The thermal corrections to the moduli potential always induce some initial misalignment of the field, which leads to a certain amount of coherent modulus oscillations \cite{Nakayama:2008ks}. For a lighter modulus, this resurrects the usual cosmological moduli problem. In the context of LARGE Volume type IIB flux compactifications, the finite-temperature corrections to the modulus potential have been calculated explicitly in \cite{Anguelova:2009ht}, and some cosmological implications have been discussed in \cite{Anguelova:2009ht}, \cite{Gallego:2020vbe}.
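Parametrically, the barrier survives only while $T^4 \lesssim V_{\rm max}$, giving $T_{\rm crit} \sim V_{\rm max}^{1/4}$. A quick numerical sketch follows; the identification $V_{\rm max} \sim m_{3/2}^2 M_{\rm pl}^2$ (natural in KKLT-type setups) and the 100 TeV gravitino mass are illustrative assumptions, and all $O(1)$ factors are dropped.

```python
M_PL = 2.435e18  # reduced Planck mass in GeV (assumption for illustration)

def t_crit_gev(v_max_gev4):
    """Critical temperature at which the thermal term ~T^4/sigma lifts the
    barrier: T_crit ~ V_max^(1/4). Parametric estimate only; O(1) factors
    and the sigma-dependence of the thermal term are dropped."""
    return v_max_gev4 ** 0.25

# Hypothetical example: barrier height set by a 100 TeV gravitino mass
m32 = 1.0e5  # GeV
print(f"T_crit ~ {t_crit_gev(m32**2 * M_PL**2):.1e} GeV")   # ~5e11 GeV
```

With these assumptions, $T_{\rm crit} \sim \sqrt{m_{3/2} M_{\rm pl}}$, which is why naively even moderate reheating temperatures appear to threaten stabilization.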
Note that even though in Eq.~\eqref{v_total_inf} and Eq.~\eqref{v_total_thermal} we have written the total potential during and after inflation separately, in reality both $V_{\rm inf}$ and $V_{\rm thermal}$ exist simultaneously. In fact, the former is converted into the thermal bath through the process of reheating, which sources the temperature-dependent contributions to the potential. In our work, we will treat this conversion of energy consistently and study how the process affects the dynamics of moduli stabilization. We emphasize that the condition for $V_{\rm total}$ not to have a minimum, as discussed in \cite{Buchmuller:2004xr}, \cite{Buchmuller:2004tz}, is not the same as the condition for destabilization of the modulus field. In other words, the effective potential acquiring a run-away form does not by itself imply destabilization of the modulus field; the dynamics of the field must be taken into account \cite{Barreiro_2008}, \cite{Papineau:2009tv}. In fact, the question of destabilization always depends on the initial field value, and our analysis shows that the corrections due to the radiation bath do not make things worse in any way.
For any value of the temperature, there is always a range of initial field values for which the field does not overshoot. When $T \gtrsim T_{\rm crit}$, the local minimum is no longer present and the effective potential is of a run-away nature. The issue then involves the dynamics of the field as well as the changing form of the potential as time passes. The field moves under Hubble damping, which is set by the energy density of the Universe, including either the inflaton energy density or the radiation produced through reheating. The Hubble damping helps the field settle at its minimum for some range of initial field values. If the initial temperature is $T \lesssim T_{\rm crit}$, the potential minimum always exists. In this case, if the initial field value is larger than $\sigma_{\rm max}$, the field overshoots. In addition, there is another field value smaller than $\sigma_{\rm max}$ beyond which the field overshoots, because it attains a high kinetic energy on the steep part of the potential.
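The initial-value dependence described above can be seen in a toy model: a field evolving under Hubble friction in a cartoon potential with a local minimum, a barrier, and a run-away tail. The potential chosen here, the fixed radiation-era expansion $H = 1/(2t)$, and all numbers are illustrative assumptions, not the potentials studied in this paper.

```python
import math

def evolves_to_minimum(phi_i, t0=1.0, t_end=100.0, dt=1e-3):
    """Toy overshoot test: evolve phi'' + 3H phi' + V'(phi) = 0 in a fixed
    radiation background H = 1/(2t), with the cartoon potential
    V(phi) = (phi^2 - 0.1) exp(-phi), which has a local minimum near
    phi ~ -0.05 and a barrier near phi ~ 2.05 before the run-away tail.
    """
    phi_max = 1.0 + math.sqrt(1.1)       # barrier location for this V
    phi, dphi, t = phi_i, 0.0, t0        # start at rest
    while t < t_end:
        dV = math.exp(-phi) * (2.0 * phi - phi * phi + 0.1)
        h = 1.0 / (2.0 * t)
        dphi += dt * (-3.0 * h * dphi - dV)   # semi-implicit Euler step
        phi += dt * dphi
        t += dt
        if phi > phi_max + 5.0:          # clearly past the barrier
            return False
    return phi < phi_max                  # still on the stabilized side

print(evolves_to_minimum(0.5))   # True: starts between minimum and barrier
print(evolves_to_minimum(3.0))   # False: starts beyond the barrier
```

The second call illustrates the guaranteed runaway for initial values beyond the barrier; starting closer to the steep left side of the minimum would similarly probe the kinetic-overshoot case.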
If we consider a radiation energy density as an initial condition \cite{Barreiro_2008}, its energy density falls continuously, thereby changing the shape of the potential. Therefore, it is possible that before the field crosses the barrier, an instantaneous minimum is created again and the field gets trapped without destabilizing. Moreover, in this case, the positions of the maximum and the minimum change with time. Clearly, the question of overshooting depends on the initial position of the modulus field. We will analyze how the allowed initial field range varies with the initial temperature. Contrary to the usual notion, we will find that the allowed field range is larger for temperatures above $T_{\rm crit}$.
Furthermore, reheating is not an instantaneous phenomenon; the temperature builds up gradually. This allows the field to relax toward its instantaneous minimum more easily than when the radiation bath is imposed as an initial condition. In this case, the relaxation of the modulus field to its minimum depends on the total decay width of the inflaton $\varphi$. Before appreciable decay happens, the modulus feels only the zero-temperature potential; the smaller the decay width, the longer it takes to produce the thermal bath that distorts the potential. Broadly, the effects of reheating allow the field to stabilize for a larger range of initial conditions. Our analysis will focus on this issue in detail. We will also discuss how the allowed initial field range is constrained by the cosmological moduli problem, i.e., by the requirement of not spoiling the BBN predictions.
This paper is organized as follows. In the next section, we discuss how the thermal bath corrects the zero-temperature potential and, without considering the dynamics, calculate the critical temperature at which the potential becomes runaway. In Sec.~\ref{Dynamics}, we analyze the dynamics of the field when a radiation bath is taken as an initial condition; our focus is to determine the allowed range of initial field values for temperatures ranging from below $T_{\rm crit}$ to above $T_{\rm crit}$. In Sec.~\ref{Effect of Continuous reheating and Moduli Dynamics}, we turn to the generation of the radiation bath from the decay of the inflaton and show how it further relaxes the allowed field range. In Sec.~\ref{moduli_abundance}, we discuss moduli abundance constraints on the initial field values from BBN and contrast them with the constraints coming from overshooting. In Sec.~\ref{Discussions and Conclusion}, we discuss and conclude.
\section{Thermal corrections and critical temperature}
\label{thermal_corrections}
At the end of inflation, a radiation bath of gauge and matter fields is produced. In turn, the zero-temperature modulus potential $V_0 (\sigma)$ receives temperature-dependent corrections, which depend on the masses and couplings of the particles present in the thermal bath. The moduli themselves typically do not take part in the thermal bath because of their Planck-suppressed couplings\footnote{But, see also \cite{Anguelova:2009ht}.}. Gauge and matter fields, however, contribute to the modulus potential through loops.
In thermal field theory, the free energy $F(g, T)$ contributes to the effective potential, where $g$ is the gauge coupling constant. At high temperature, the free energy has a perturbative expansion in $g$, which to leading order reads $F(g, T)=(a_0+a_2 g^2)T^4$ \cite{Kapusta:2006pm}. The parameters $a_0 (<0)$ and $a_{2} (>0)$ depend on the underlying gauge theory and matter content; for our purposes, we treat $a_{0}$ and $a_{2}$ as free parameters. The first term originates from the one-loop thermal correction and represents an ideal gas of non-interacting particles, while the second term encodes the interactions among the particles in the thermal bath and appears at the two-loop level.
The temperature-dependent effective potential is then \begin{equation} \label{v_total}V_{\rm total} = V_0 (\sigma) + (a_0+a_2 g^2)T^4~. \end{equation}In string theory the gauge coupling constant is related to the modulus field: $g^2 = \kappa/\sigma$, where $\kappa$ is a constant of $\mathcal O(1)$ \cite{Buchmuller:2004xr},\cite{Buchmuller:2004tz}. As a result, the temperature-dependent part of the potential becomes modulus-dependent with a runaway form, whose strength is governed by the temperature. The zero-temperature potential typically has a local minimum separated from the minimum at infinite vev by a finite barrier \cite{Kachru:2003aw},\cite{Kallosh:2004yh}. At sufficiently high temperature, the potential becomes completely runaway, and the field rolls asymptotically to large vev, where the gauge coupling becomes small. Therefore, in analyzing the dynamics of the modulus field at the end of inflation, the temperature-dependent part of the potential is crucial in addition to $V_0 (\sigma)$, and in particular how the temperature is generated.
The zero-temperature potential $V_0(\sigma)$ originates from some moduli stabilization mechanism in string theory. The potential at its minimum is positive only after the addition of an uplifting term to an otherwise supersymmetric anti-de Sitter minimum. The uplifting breaks supersymmetry and creates a finite barrier whose height is almost equal to the depth of the anti-de Sitter minimum. Therefore, in this kind of set-up $V_{\rm max} \sim m_{3/2}^2 M_{\rm pl}^2$, where $m_{3/2}$ is the gravitino mass parameterizing the supersymmetry-breaking scale. The critical temperature is the temperature at which the finite minimum of the moduli potential disappears. Mathematically, the critical temperature $T_{\rm crit}$ is defined by the appearance of a saddle point at some field value $\sigma_{\rm crit}$:
\begin{equation}
V_{\rm total}'(\sigma_{\rm crit},T_{\rm crit}) = 0 \label{1deri}, ~~ V_{\rm total}''(\sigma_{\rm crit},T_{\rm crit}) = 0~.
\end{equation}
Empirically, the above conditions for the disappearance of the potential minimum translate to $V_{\rm total} \gtrsim \mathcal{O}(1) V_{\rm max}$, which in terms of the temperature reads $T_{\rm crit}\sim V_{\rm max}^{1/4}$. Using the relation between $V_{\rm max}$ and the gravitino mass, the approximate expression for the critical temperature is \cite{Buchmuller:2004tz}\begin{equation}\label{crit temp} T_{\rm crit} \sim c \sqrt{m_{3/2}M_p}~,\end{equation}where $c \sim \mathcal{O}(1)$ depends on the explicit model parameters \cite{Buchmuller:2004xr},\cite{Buchmuller:2004tz}.
As an example, we briefly review the KKLT moduli stabilization potential and the effects of thermal corrections on it; we will need the form of this potential in our later analysis. In the KKLT set-up, the dilaton and the complex structure moduli are stabilized at a higher energy scale by suitable choices of fluxes. The low-energy effective potential is governed by one K\"ahler modulus that corresponds to the overall volume of the internal space. The superpotential and the K\"ahler potential for the complex volume modulus $\rho=\sigma+i\alpha$ are given respectively by \cite{Kachru:2003aw}\begin{equation} W=W_{0}+Ae^{-a\rho}, ~~~~ K=-3\ln(\rho+\overline{\rho})~.\end{equation}By appropriately choosing $W_0$ and $A$, we set $\alpha = 0$ (stabilized) in our analysis. The $F$-term scalar potential in $\mathcal{N}=1$ supergravity then reads\begin{equation}\label{pot} V_0^{\rm KKLT}(\sigma) = \frac{aAe^{-a\sigma}}{2\sigma^2}\left(\frac{1}{3}aA\sigma e^{-a\sigma}+W_{0}+Ae^{-a\sigma}\right) + \frac{D}{\sigma^3}\end{equation}where the last term is added to lift the AdS minimum to a de Sitter minimum. This uplifting term breaks supersymmetry, and the value of $D$ is chosen so that the potential at its local minimum is almost zero, close to the value of the present cosmological constant.
We first choose the parameter values $A=1, a=0.1, W_0 = -10^{-8}, D= 3.3\times10^{-17}$, for which $\sigma_{\rm min} \sim 211$. Due to the uplifting term, the potential has a finite barrier at $\sigma_{\rm max} \sim 241$, whose height is related to the gravitino mass $m_{3/2}$. For this case, $m_{\phi}$ is around $2.6\times10^{6}$ GeV, and following Eq.~\eqref{crit temp}, the corresponding critical temperature is around $7.3\times10^{12}$ GeV. The temperature-dependent potential for these choices of parameters is shown in Fig.~\ref{fig:KKLT_potential}. For several well-motivated models of racetrack and K\"ahler stabilization, the critical temperature has been calculated, and the results are closely tied to supersymmetry breaking \cite{Buchmuller:2004xr}. The effects of finite temperature have also been discussed in several string models \cite{Anguelova:2007ex},\cite{Papineau:2008xf}.
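The quoted positions of the minimum and the barrier can be cross-checked with a short numerical scan. The following sketch (our own illustration, not part of the original analysis) evaluates Eq.~\eqref{pot} in reduced Planck units for the first parameter set; the scan window is an ad hoc choice:

```python
import numpy as np

# KKLT zero-temperature potential, Eq. (pot), in reduced Planck units (M_p = 1).
A, a, W0, D = 1.0, 0.1, -1e-8, 3.3e-17

def V0_kklt(s):
    e = A * np.exp(-a * s)
    return (a * e / (2 * s**2)) * (a * s * e / 3 + W0 + e) + D / s**3

# Locate extrema through sign changes of the numerical derivative V0'.
s = np.linspace(180.0, 300.0, 120001)
V = V0_kklt(s)
dV = np.gradient(V, s)
sign_flips = np.where(np.diff(np.sign(dV)) != 0)[0]
extrema = s[sign_flips]
s_min, s_max = extrema[0], extrema[1]
print(s_min, s_max)  # close to the quoted sigma_min ~ 211 and sigma_max ~ 241
```

The scan confirms a near-zero uplifted minimum followed by a finite barrier, as described in the text.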
We will also consider parameter values corresponding to the moduli mass $m_{\phi} \sim 40$ GeV, realized with the choice $A=1, a=0.1, W_0 = -2.96\times10^{-13}, D= 3\times10^{-26}$. In this case, the minimum and the maximum of the potential are at $\sigma_{\rm min}=320$ and $\sigma_{\rm max}=353$, respectively, and the corresponding critical temperature is around $10^{10}$ GeV. The above choices of parameters, leading to moduli masses of $m_{\phi} \sim 10^6$ GeV and $\sim 40$ GeV, are made deliberately. Irrespective of the moduli abundance, a particle with mass $10^6$ GeV decays well before BBN, so in this case there is no cosmological moduli problem, as mentioned before. On the other hand, for $m_\phi \sim 40$ GeV, the field decays after BBN, and to avoid spoiling the BBN predictions its abundance must be adequately suppressed. As we will see in Sec.~\ref{moduli_abundance}, the severity of moduli destabilization crucially depends on the moduli mass.
In all the above examples, the potential has one minimum that is uplifted to a supersymmetry-breaking Minkowski minimum. In certain models, however, the modulus can be stabilized at a supersymmetric Minkowski minimum whose barrier height is unrelated to $m_{3/2}$, while $T_{\rm crit}$ is still set by the barrier height. In this kind of set-up, therefore, $T_{\rm crit}$ cannot be related to $m_{3/2}$. One example of this type is the superpotential of the KL model \cite{Kallosh:2004yh},\begin{equation} W=W_{0}+Ae^{-a\rho}+Be^{-b\rho},\end{equation}with the same K\"ahler potential as in the KKLT case. If we consider only the real part of the field, the effective scalar potential of the KL model is\begin{equation}\label{pot1}V_{\rm KL}(\sigma)=\frac{e^{-2(a+b)\sigma}}{6\sigma^2}\left(bBe^{a\sigma}+aAe^{b\sigma}\right)\times\left[Be^{a\sigma}(3+b\sigma)+e^{b\sigma}\left(A(3+a\sigma)+3e^{a\sigma}W_{0}\right)\right].\end{equation}For the choice of parameters $A=1, B=-1.03, a= \frac{2\pi}{100}, b= \frac{2\pi}{99}, W_{0}=-2\times10^{-4}$, the potential has a minimum at a finite position with vanishing potential value \cite{Kallosh:2004yh}. In this case, we do not need any supersymmetry breaking to obtain a vanishing potential at the minimum. Hence, $m_{3/2}$ is equal to zero, Eq.~\eqref{crit temp} cannot be used to find the critical temperature, and $T_{\rm crit}$ must be evaluated numerically.
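That this parameter set indeed gives a near-Minkowski minimum at finite $\sigma$ can be verified with a quick scan of Eq.~\eqref{pot1} (a sketch of our own; the scan window is chosen by hand around the expected minimum):

```python
import numpy as np

# KL racetrack potential, Eq. (pot1), with the quoted parameter set.
A, B = 1.0, -1.03
a, b = 2 * np.pi / 100, 2 * np.pi / 99
W0 = -2e-4

def V_kl(s):
    f1 = b * B * np.exp(a * s) + a * A * np.exp(b * s)
    f2 = (B * np.exp(a * s) * (3 + b * s)
          + np.exp(b * s) * (A * (3 + a * s) + 3 * np.exp(a * s) * W0))
    return np.exp(-2 * (a + b) * s) / (6 * s**2) * f1 * f2

# Scan a window around the expected minimum.
s = np.linspace(50.0, 80.0, 300001)
V = V_kl(s)
i0 = np.argmin(V)
print(s[i0], V[i0])  # a finite-sigma minimum with (nearly) vanishing potential
```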
We now apply the definition of the critical temperature, Eq.~\eqref{1deri}, to a general moduli potential. Using Eq.~\eqref{1deri} for the potential of Eq.~\eqref{v_total}, we obtain\begin{equation}V'_{0}(\sigma_{\rm crit}) = \frac{a_{2}}{\sigma^2_{\rm crit}}T^4_{\rm crit}, ~~V''_{0}(\sigma_{\rm crit})=-\frac{2a_{2}}{\sigma^3_{\rm crit}}T^4_{\rm crit}\label{dri}~,\end{equation}which leads to\begin{equation}\label{trans}\frac{V'_{0}(\sigma_{\rm crit})}{V''_{0}(\sigma_{\rm crit})}=-\frac{\sigma_{\rm crit}}{2}~.\end{equation}Here, $V_{0}(\sigma)$ may be the KKLT or the KL potential at zero temperature. The value of $\sigma_{\rm crit}$ can be found by solving Eq.~\eqref{trans} numerically or graphically; for the KKLT and KL potentials we show the results in Fig.~\ref{crit_field_value}. The value $\sigma_{\rm crit}$ is given by the intersection of the curves $V'_{0}(\sigma)$ and $-\left(\sigma/2\right)V''_{0}(\sigma)$.
In Fig.~\ref{crit_field_value}, we plot in terms of the canonically normalized modulus field $\phi=\sqrt{3/2}\ln(\sigma)$. In the left panel of Fig.~\ref{crit_field_value}, we observe two intersection points for the KL potential, but only one of them is valid in the sense that $T_{\rm crit}$ is a real positive number. This happens for $\phi_{\rm crit}=5.13$, i.e., $\sigma_{\rm crit}\simeq 65.93$; from Eq.~\eqref{dri} we then get\begin{equation}T^{\rm KL}_{\rm crit}\simeq 5\times 10^{15}~\text{GeV},\end{equation}where $a_2$ is set equal to 1. In the right panel of Fig.~\ref{crit_field_value}, for the KKLT potential with parameter values $A=1, a=0.1, W_{0}=-10^{-8}, D=3.3\times 10^{-17}$, we observe only one intersection point, at $\phi_{\rm crit}=6.60$, i.e., $\sigma_{\rm crit}=218.96$; from Eq.~\eqref{dri} we then get\begin{equation}T^{\rm KKLT}_{\rm crit}\simeq 2\times 10^{13}~\text{GeV}~,\end{equation}which is almost equal to the critical temperature calculated using Eq.~\eqref{crit temp}.
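The KKLT numbers can be reproduced with a short root-finding sketch (our own check: the finite-difference step, the bracketing interval, and the reduced Planck mass $M_p = 2.4\times10^{18}$ GeV are ad hoc choices):

```python
import numpy as np
from scipy.optimize import brentq

# KKLT potential, Eq. (pot), in reduced Planck units (M_p = 1).
A, a, W0, D = 1.0, 0.1, -1e-8, 3.3e-17

def V0(s):
    e = A * np.exp(-a * s)
    return (a * e / (2 * s**2)) * (a * s * e / 3 + W0 + e) + D / s**3

def dV(s, h=1e-3):
    return (V0(s + h) - V0(s - h)) / (2 * h)

def d2V(s, h=1e-3):
    return (V0(s + h) - 2 * V0(s) + V0(s - h)) / h**2

# Eq. (trans): V0'/V0'' = -sigma/2  <=>  2 V0' + sigma V0'' = 0,
# bracketed between the local minimum (~211) and the barrier (~241).
g = lambda s: 2 * dV(s) + s * d2V(s)
s_crit = brentq(g, 212.0, 240.0)

# Eq. (dri): a2 T^4 = sigma^2 V0'(sigma_crit); take a2 = 1.
T_crit = (s_crit**2 * dV(s_crit)) ** 0.25   # in units of M_p
T_crit_GeV = T_crit * 2.4e18                # assumed reduced Planck mass in GeV
print(s_crit, T_crit_GeV)
```

The root lands near the quoted $\sigma_{\rm crit}\simeq 219$ with $T_{\rm crit}$ of order $10^{13}$ GeV.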
The above analysis of the critical temperature has the following drawbacks. First, the analysis only locates the inflection point of the temperature-dependent effective potential: reaching the associated temperature does not necessarily mean that the modulus is destabilized. As mentioned earlier, the modulus is destabilized only when the field crosses the top of the barrier, and this is a dynamical question that necessarily depends on the initial field value. The dynamical analysis involves Hubble damping, which includes any background energy density present in the system. Moreover, due to the non-canonical nature of the kinetic term, an analysis of the effective potential alone is not sufficient.
Second, in any realistic set-up, the radiation bath has to be produced at the end of inflation. In the standard cold inflation scenario, the Universe is solely dominated by the inflaton energy density at the end of inflation, and the build-up of temperature is a continuous process with its own time scale. It is crucial to understand how the field evolves while the thermal bath is being produced. The best-understood source of the thermal bath is the decay of the inflaton via the process of reheating. Therefore, one must analyze the dynamics of the field in the expanding background with the production of the thermal bath taken into account. In the next sections, we discuss these effects and see how they affect the range of initial conditions for successful moduli stabilization.
\section{Dynamics with initial radiation bath}
\label{Dynamics}
In this section, we will discuss the dynamics of a modulus field in the presence of a thermal bath. For now, we will not worry about how this thermal bath is created, and therefore the radiation energy density will be considered as an initial condition \cite{Barreiro_2008}. To simplify our discussions, we will consider the dynamics of the real part ($\sigma$) of the complex field $\rho=\sigma+i\alpha$. A more general two-field analysis with two initial temperatures can be found in \cite{Barreiro_2008}. In contrast to the previous analysis, we will study how the allowed field space is changed when the initial temperature is varied in a suitable range.
The dynamics of the modulus field, in this case, is governed by the following equations,
\begin{align}
&\ddot{\sigma}+3H\dot{\sigma}-\frac{\dot{\sigma}^2}{\sigma}+\frac{2\sigma^2}{3}V'_{\rm total}(\sigma,\rho_{r})=0 \label{dyn1}\,, \\
&\dot{\rho_{r}}+4H\rho_{r}=0 \label{radiation_1}\,,\\
&3M^2_{p}H^2=\frac{3}{4}\left(\frac{\dot{\sigma}}{\sigma}\right)^2+V_{0}(\sigma)+\rho_{r} \label{energy1}~.
\end{align}
Eq.~\eqref{dyn1} governs the evolution of the modulus field, where the third and the fourth terms receive contributions from the non-canonical K\"ahler potential. Here $\rho_{r}$ is the background radiation energy density, which evolves in the expanding spacetime via Eq.~\eqref{radiation_1}, and Eq.~\eqref{energy1} is the Friedmann equation. The pressure and the energy density of the thermal fluid follow from the free energy as $P_{r}=-F(g,T)$ and $\rho_{r}=-P_{r}+T\frac{dP_{r}}{dT}$, and hence,
\begin{equation}
\rho_{r}=-3a_{0}(1+r g^2)T^4 , \label{energy density}
\end{equation}
where $r=a_2/a_0$, $g$ is the gauge coupling constant, and $a_2$ and $a_0$ depend on the micro-physics of the thermal bath. Eq.~\eqref{energy density} relates the temperature to the radiation energy density, and we use the two quantities interchangeably. The temperature-corrected potential of Eq.~\eqref{v_total}, written in terms of $\rho_{r}$, becomes
\begin{equation}\label{eff_pot}
V_{\rm total}=V_{0}(\sigma)-\frac{1}{3}\frac{r\rho_{r}g^2}{1+r g^2} +a_{0}T^{4}~.
\end{equation}
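For completeness, the middle term of Eq.~\eqref{eff_pot} follows from eliminating $T^4$ between Eq.~\eqref{energy density} and Eq.~\eqref{v_total}:

```latex
% Solving Eq. (energy density) for T^4 gives T^4 = -\rho_r/[3 a_0 (1 + r g^2)],
% so the two-loop piece of Eq. (v_total) becomes, with r = a_2/a_0,
\begin{equation}
a_2 g^2 T^4
 = a_0 r g^2 \left(\frac{-\rho_{r}}{3a_{0}(1+r g^2)}\right)
 = -\frac{1}{3}\frac{r\rho_{r}g^2}{1+r g^2}\,,
\end{equation}
```

while the one-loop piece $a_0 T^4$ is kept explicit in Eq.~\eqref{eff_pot}.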
For our consideration $V_0 (\sigma)$ would be the KKLT potential for two specific choices of parameter sets given in Sec.~\ref{thermal_corrections}, and $g^2 = \kappa/\sigma$ with $\kappa$ being $4\pi$.
It has been noted earlier that when the initial temperature $T_{\rm init}$ of the radiation bath is larger than the critical temperature $T_{\rm crit}$, the effective potential has no minimum. But as time progresses, the temperature-dependent part of the potential weakens, and the local minimum and the maximum (the barrier) reappear, eventually approaching their zero-temperature positions. In this situation with $T_{\rm init} > T_{\rm crit}$, the field does not necessarily overshoot; the outcome depends on the initial field value. If the field starts too far to the left, it gains enough kinetic energy to cross the barrier even if the minimum is created before the would-be overshooting time. We denote by $\phi^{\rm L}_{\rm init}$ the critical value on the left side at which the field just overshoots.
Similarly, there is another field value, larger than $\phi^{\rm L}_{\rm init}$, at which the field again overshoots, and we denote it by $\phi^{\rm R}_{\rm init}$. We will see shortly that the instantaneous position of the maximum (minimum) of the potential moves toward larger (smaller) values as the temperature decreases. Therefore, $\phi^{\rm R}_{\rm init}$ is always smaller than the field value at which the zero-temperature potential has its maximum. When the initial temperature is smaller than the critical temperature, the minimum of the potential exists from the very beginning. In summary, for field values between $\phi^{\rm L}_{\rm init}$ and $\phi^{\rm R}_{\rm init}$, the modulus field does not overshoot, and we can have a consistent cosmology as long as the modulus field satisfies the other cosmological bounds related to its abundance.
To solve the dynamics of the field, we always take the initial field velocity to be zero. If for some reason the initial field velocity were large, the allowed field range for not overshooting the barrier would shrink. We take $r = -1.3$ for our initial analysis and discuss the effects of varying $r$ at the end.
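A minimal integration sketch of Eqs.~\eqref{dyn1}--\eqref{energy1} (our own illustration, not a reproduction of the figures) is given below, in reduced Planck units with the first KKLT parameter set, $g^2=4\pi/\sigma$, $r=-1.3$, and a subcritical initial radiation density; the solver tolerances and the initial data are ad hoc choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# KKLT potential, Eq. (pot), in reduced Planck units (M_p = 1).
A, a, W0, D = 1.0, 0.1, -1e-8, 3.3e-17
kappa, r = 4 * np.pi, -1.3          # g^2 = kappa/sigma, r = a2/a0

def V0(s):
    e = A * np.exp(-a * s)
    return (a * e / (2 * s**2)) * (a * s * e / 3 + W0 + e) + D / s**3

def V_force(s, rho_r):
    # sigma-dependent part of Eq. (eff_pot); the a0*T^4 piece exerts no force.
    g2 = kappa / s
    return V0(s) - (r * rho_r / 3) * g2 / (1 + r * g2)

def rhs(t, y):
    s, sdot, rho_r = y
    h = 1e-3
    dV = (V_force(s + h, rho_r) - V_force(s - h, rho_r)) / (2 * h)
    # Eq. (energy1); note it involves the zero-temperature potential V0.
    H = np.sqrt((0.75 * (sdot / s) ** 2 + V0(s) + rho_r) / 3)
    sddot = -3 * H * sdot + sdot**2 / s - (2 * s**2 / 3) * dV   # Eq. (dyn1)
    return [sdot, sddot, -4 * H * rho_r]                        # Eq. (radiation_1)

# Start at rest between the minimum (~211) and the barrier (~241),
# with rho_r well below the critical energy density.
sol = solve_ivp(rhs, (0, 3e13), [220.0, 0.0, 7e-26],
                rtol=1e-9, atol=[1e-8, 1e-20, 1e-40], max_step=5e10)
sigma = sol.y[0]
print(sigma[-1])  # the field stays trapped and oscillates about the minimum
```

For this subcritical initial temperature the field never reaches the barrier and Hubble damping drives it toward $\sigma_{\rm min}$, illustrating the trapped case discussed below.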
We first discuss the dynamics of the modulus field for the case of critical temperature around $7.3\times10^{12}$ GeV; see Fig.~\ref{fig:KKLT_potential} for the form of the effective potential. When the initial temperature is below, but close to the critical temperature, the evolution of the field is shown in the panels of Fig. \ref{dynamics1}. %
In these plots, the dashed red line corresponds to the instantaneous position of the maximum, and the dot-dashed brown line represents the instantaneous position of the minimum. Note that the positions of the minimum and the maximum are closer together while the temperature is large; as the temperature drops, the maximum moves to higher field values and the minimum to lower field values, with asymptotic values given by the zero-temperature potential. The blue line corresponds to an initial field value for which the field does not overshoot and in the end oscillates around the minimum. In the case of the black line, by contrast, the field just overshoots and eventually runs to infinite vev, representing destabilization of the field.
The left panel of Fig.~\ref{dynamics1} shows the dynamics when the field starts with $\phi_{\rm init} < \phi_{\rm min}$; in this case, as soon as the field crosses the barrier with finite velocity, it overshoots. The right panel shows the case $\phi_{\rm init} > \phi_{\rm min}$, where the initial field value is even greater than the instantaneous maximum of the potential, i.e., the field starts on the right side of the barrier. The strong Hubble damping at high initial temperature, together with the vanishing initial velocity, holds the field in place. With time, the instantaneous maximum moves toward larger field values, and eventually the field finds itself on the left side of the barrier. Thus, the field crossing the instantaneous barrier does not necessarily imply destabilization. In summary, the field does not overshoot the barrier if the initial field value lies within the range $6.5 < \phi_{\rm init}< 6.7$; this range obviously varies with the initial temperature, and at the end of this section we show how. If the initial temperature is much lower than the critical temperature, the changes in the instantaneous minimum and maximum are not appreciable. In that case, the value of $\phi^{\rm R}_{\rm init}$ is governed by the position of the barrier, i.e., if the initial field value is greater than $\phi_{\rm max}$, the field immediately overshoots, while the value of $\phi^{\rm L}_{\rm init}$ is fully determined by the slope of the effective potential.
In Fig.~\ref{dynamics3}, the dynamics of the field are plotted when the initial temperature is well above $T_{\rm crit}$. In this case, the effective potential is runaway to start with, and the field does not necessarily overshoot; the outcome depends on the initial conditions. Indeed, as seen from the plot, the dashed red line (instantaneous maximum) and the dot-dashed brown line (instantaneous minimum) exist only after a certain time. It is evident that as soon as the field crosses the instantaneous maximum, it overshoots to larger values. For this particular choice of initial temperature, $\phi^{\rm L}_{\rm init}$ and $\phi^{\rm R}_{\rm init}$ lie on the two sides of the instantaneous minimum vev; for larger initial temperatures, it turns out that both initial field values are smaller than $\phi_{\rm min}$.
Finally, in the left panel of Fig.~\ref{dynamics4}, we show how the allowed initial field range changes with the temperature of the radiation bath. When the temperature is much smaller than $T_{\rm crit}$ (vertical orange dotted line), the instantaneous minimum and maximum nearly overlap with their zero-temperature values. In this case, $\phi^{\rm R}_{\rm init}$ is determined by the position of the maximum, which does not change appreciably with temperature as long as $T_{\rm init} \ll T_{\rm crit}$; this makes $\phi^{\rm R}_{\rm init}$ nearly independent of temperature below $T_{\rm crit}$. On the other hand, for $T_{\rm init} \ll T_{\rm crit}$ the potential is very steep on the left side of the minimum, and $\phi^{\rm L}_{\rm init}$ is fully determined by whether the kinetic energy gained is enough to overshoot the barrier; this too becomes effectively independent of temperature. This explains why the field range is insensitive to the initial temperature as long as it is sufficiently smaller than $T_{\rm crit}$.
On the other hand, when $T_{\rm init}$ becomes comparable to or larger than $T_{\rm crit}$, the dynamics of the field are fully governed by the temperature-dependent part of the potential. In this case, both $\phi^{\rm L}_{\rm init}$ and $\phi^{\rm R}_{\rm init}$ become smaller as $T_{\rm init}$ becomes larger, but the allowed field range $\Delta \phi = \phi^{\rm R}_{\rm init} - \phi^{\rm L}_{\rm init}$ becomes larger compared to its value for $T_{\rm init} \ll T_{\rm crit}$. In fact, both $\phi^{\rm L}_{\rm init}$ and $\phi^{\rm R}_{\rm init}$ eventually become smaller than the minimum of the zero-temperature potential, shown by the horizontal dot-dashed blue line. In summary, even though for $T_{\rm init} \gg T_{\rm crit}$ the potential becomes runaway, the allowed field space for which no overshooting happens is large. We emphasize that the overshooting issue exists even for the zero-temperature potential; when we include the effects of the radiation bath, the issue does not become worse. In fact, the allowed field range becomes larger, so the effects of the temperature do not worsen the situation in any sense. This is one of our main conclusions.
In the right panel of Fig.~\ref{dynamics4}, we again show the allowed field range for a smaller critical temperature, $T_{\rm crit} = 3\times 10^{10}\,\text{GeV}$. We broadly conclude that the allowed field range remains roughly the same. Obviously, in this case the allowed range is centered around the zero-temperature minimum, which sits at a larger field value. Also, for $T_{\rm init} \ll T_{\rm crit}$, the allowed range is slightly smaller due to the lower barrier.
\section{Reheating and moduli dynamics}
\label{Effect of Continuous reheating and Moduli Dynamics}
At the end of inflation, the radiation bath is created from the decay of the inflaton; the process might be complicated and will depend on the details of the model. For our analysis, we consider the inflaton to decay via perturbative processes with total decay width $\Gamma_{\varphi}$; the details of the decay process or the decay products do not affect our discussion. The crucial point is that at the end of inflation the moduli potential is still dictated by the zero-temperature potential, and only as the energy density of the thermal bath grows does the field start to feel the temperature-corrected potential. The creation of the thermal bath has an associated time scale $\Gamma^{-1}_{\varphi}$, which gives the modulus field time to settle around its minimum. This effect relaxes the overshooting problem. In this section, we analyze the process in detail and compare it with the case in which the radiation bath is assumed to exist from the beginning of the modulus evolution.
Near the bottom of the inflaton potential, where the field oscillates while decaying, the potential can be approximated as
\begin{equation}
V(\varphi)=\lambda\frac{\varphi^k}{M_{p}^{k-4}},
\label{inf_pot}
\end{equation}
where $\varphi$ is the inflaton field and $\lambda$ is a dimensionless coupling constant. We consider the cases $k = 2$ and $4$. The equation of motion of the inflaton field, including the effects of inflaton decay, can be written as
\begin{equation}
\ddot{\varphi}+(3H+\Gamma_\varphi)\dot{\varphi}+V'(\varphi)=0,
\end{equation}
where $\Gamma_\varphi$ is the total inflaton decay width. If we assume that the decay of the inflaton is relatively slow, i.e. the oscillation time-scale is much shorter than $\Gamma_\varphi^{-1}$ and $H^{-1}$, then the governing equation for the energy density of the inflaton can be written as \cite{mukhanov:2005sc}
\begin{equation}
\dot{\rho_{\varphi}}+3H(1+\omega_{\varphi})\rho_{\varphi} \simeq -\Gamma_{\varphi}(1+\omega_{\varphi})\rho_{\varphi}, \label{inflaton}
\end{equation}
where the equation of state parameter $\omega_{\varphi} = (k-2)/(k+2)$.
The evolution of the radiation energy density produced by inflaton decay is governed by
\begin{equation}
\dot{\rho_{r}}+4H\rho_{r}\simeq (1+\omega_{\varphi})\Gamma_{\varphi}\rho_{\varphi}~. \label{radiation1}
\end{equation}
We have assumed that the system thermalizes instantaneously. If the modulus field is present within this thermal bath, its dynamics are dictated by
\begin{align}
&\ddot{\sigma}+3H\dot{\sigma}-\frac{\dot{\sigma}^2}{\sigma}+\frac{2\sigma^2}{3}V'_{\rm total}(\sigma,\rho_{r})=0 \label{dyn2}\,, \\
&3M^2_{p}H^2=\frac{3}{4}\left(\frac{\dot{\sigma}}{\sigma}\right)^2+V_{0}(\sigma)+\rho_{r}+\rho_{\varphi} \label{energy2}~.
\end{align}
We solve Eqs.~\eqref{inflaton}, \eqref{radiation1}, \eqref{dyn2} and \eqref{energy2} simultaneously and find the initial field range for which the modulus field does not overshoot the barrier.
The decay process is nearly complete by the time $\Gamma^{-1}_{\varphi}$, when the decay products are thermalized at what we call the reheating temperature $T_{\rm R}$. During the decay, however, the decay products reach a maximal temperature $T_{\max} \simeq \left (\Gamma_{\varphi} H_{\rm inf} M_{p}^2 \right)^{1/4}$, which is larger than the final thermalized temperature $T_{\rm R}$ \cite{Kolb:1990vq}. Here, $H_{\rm inf}$ is the scale of inflation. Without a dynamical analysis, one would conclude that the potential is destabilized as soon as $T_{\rm max} > T_{\rm crit}$ \cite{Buchmuller:2004xr},\cite{Buchmuller:2004tz}. That is not the case, however, as the field at the end of inflation feels only the zero-temperature potential. Once the decay starts, the thermal bath builds up, distorting the potential through the temperature-dependent corrections; the correction is largest at the temperature $T_{\rm max}$. In contrast to the analysis of the previous section, where the radiation bath is taken as an initial condition \cite{Barreiro_2008}, here the radiation bath is created over an associated time scale. The field thus experiences temperature-dependent corrections that are initially zero, reach their maximum effect at $T_{\rm max}$, and eventually vanish again. Moreover, the produced temperature depends on the scale of inflation $H_{\rm inf}$ and the decay width of the inflaton $\Gamma_\varphi$. In addition, the effects depend on the parameter $k$ of Eq.~\eqref{inf_pot} and on $r$, which parameterizes the effects of the gauge coupling, see Eq.~\eqref{energy density}. In the following, we explore the dependence on all these parameters.
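The scaling $T_{\rm max} \simeq (\Gamma_{\varphi} H_{\rm inf} M_p^2)^{1/4}$ can be illustrated by integrating Eqs.~\eqref{inflaton} and \eqref{radiation1} alone, ignoring the modulus (a sketch of our own, in ad hoc units with $M_p = 1$, initial Hubble rate $H = 1$, and an illustrative decay width):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Perturbative reheating for k = 2 (w_phi = 0): Eqs. (inflaton) and (radiation1).
Gamma = 1e-3   # illustrative decay width, << initial H

def rhs(t, y):
    rho_phi, rho_r = y
    H = np.sqrt((rho_phi + rho_r) / 3)
    return [-3 * H * rho_phi - Gamma * rho_phi,   # Eq. (inflaton), w_phi = 0
            -4 * H * rho_r + Gamma * rho_phi]     # Eq. (radiation1)

y0 = [3.0, 0.0]   # rho_phi = 3 M_p^2 H^2: inflaton-dominated start, no radiation
t_eval = np.linspace(0.0, 30 / Gamma, 200001)
sol = solve_ivp(rhs, (0.0, 30 / Gamma), y0, t_eval=t_eval, rtol=1e-8, atol=1e-16)
rho_r_max = sol.y[1].max()

# Peak radiation density vs. the T_max^4-scale estimate Gamma * H * M_p^2:
print(rho_r_max / Gamma)   # an O(0.1) coefficient
```

The peak radiation density comes out as an $\mathcal O(0.1)$ multiple of $\Gamma_{\varphi} H M_p^2$, consistent with the quoted $T_{\rm max}$ estimate, after which the bath redshifts and eventually dominates over the decaying inflaton.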
To understand the effects of temperature generation via reheating, we first show the dynamics for a fixed value of $T_{\rm max}$ and contrast it with the case of the last section, in which the same temperature was taken as an initial condition. As an example, in Fig.~\ref{dynamics5} we show the dynamics of the field for $T_{\rm max} = 5.5\times 10^{12}$ GeV. Here we have taken the maximum value of $H_{\rm inf}$ consistent with the upper limit on the tensor-to-scalar ratio produced during inflation \cite{Planck:2018jri}. In this case, $T_{\rm max}$ is just below $T_{\rm crit} = 7.3\times 10^{12}$ GeV. From the plot, it is clear how the positions of the maximum and the minimum approach each other as the temperature rises close to $T_{\rm crit}$ and then settle back to their zero-temperature values once the radiation energy density redshifts away. This plot should be directly contrasted with Fig.~\ref{dynamics1}, which has the same initial temperature $T_{\rm init} = 5.5\times 10^{12}$ GeV. Comparing the two, we see that the value of $\phi^{\rm R}_{\rm init}$ does not change much, but the value of $\phi^{\rm L}_{\rm init}$ changes appreciably due to the time scale of temperature generation. We also note that the field remains frozen at its position for some time even though it sits on the runaway slope. To be specific, for the field range $4.4 \leq \phi_{\rm init} \leq 6.72$ the field does not overshoot, whereas this range was $ 6.49 \leq \phi_{\rm init} \leq 6.7$ when the production of the radiation bath was ignored. The value of $\phi^{\rm R}_{\rm init}$ remains the same, as it is nearly fixed by the position of the zero-temperature maximum. When $T_{\rm max}$ is well below $T_{\rm crit}$, the positions of the maximum and the minimum do not change much over the evolution, and as soon as the field crosses the local maximum, it overshoots to large field values.
The overall allowed initial range is always larger than the case when the radiation density was considered as an initial condition.
Similarly, Fig.~\ref{dynamics6} shows the dynamics of the field when the maximum temperature produced is larger than the critical temperature. In this case too, the allowed initial field range is larger than in the initial-radiation-bath case. Fig.~\ref{dynamics6} should be contrasted with Fig.~\ref{dynamics3} to see the effects of continuous reheating.
Once the value of $H_{\rm inf}$ is specified, achieving a specific value of $T_{\rm max}$ fixes the value of $\Gamma_{\varphi}$. As noted above, the relaxation of the initial condition arises from the time scale associated with producing the temperature. This is illustrated in Fig.~\ref{init_vs_H_inf}, where we show the allowed initial field values as a function of $H_{\rm inf}$ for a fixed value of $T_{\rm max}$. Note that larger values of $H_{\rm inf}$ correspond to smaller values of $\Gamma_\varphi$. For smaller $\Gamma_\varphi$, the inflaton takes longer to decay into the radiation bath, which allows a larger initial field space. For all values of $H_{\rm inf}$, the allowed field range is larger than when the production of the radiation bath is neglected. Only in the large-$\Gamma_\varphi$ (i.e.\ small-$H_{\rm inf}$) limit is the radiation bath produced effectively instantaneously, and we recover the results of the previous section. In Fig.~\ref{init_vs_H_inf}, we show the field range for both $k = 2$ and $k = 4$; for $k = 4$, the allowed range decreases slightly.
So far, our discussion has been for a fixed value of $r = -1.3$. To understand the effects of $r$, we show results in Table~\ref{VAFRDVR}, where $\Delta \phi^{\rm IR}$ corresponds to instantaneous reheating with an initial radiation bath, i.e.\ the results of Sec.~\ref{Dynamics}, and $\Delta \phi^{\rm CR}$ corresponds to the field range when radiation is produced by continuous decay. We find no appreciable effect when the initial temperature or the maximum temperature is well below $T_{\rm crit} = 7.3\times 10^{12}$ GeV. On the other hand, when the temperature is large, the effect is slightly more prominent for larger negative values of $r$: a large negative $r$ reduces the barrier height and thereby shrinks the allowed range.
\begin{center}
\begin{table}
\begin{tabular}{ |p{1cm}|p{4.2cm}|p{4.2cm}|p{4.2cm}| }
\hline
Value of r & Allowed field range for initial radiation bath:\newline ($\Delta\phi_{\rm init}^{\rm IR}=\phi^{R}_{\rm init}-\phi^{L}_{\rm init}$) & Allowed field range for continuous reheating: \newline ($\Delta\phi_{\rm init}^{\rm CR}=\phi^{R}_{\rm init}-\phi^{L}_{\rm init}$) & Difference between two ranges: \newline $\Delta\phi=\Delta\phi^{\rm CR}_{\rm init}-\Delta\phi_{\rm init}^{\rm IR}$\\
\hline
{}& $T_{\rm init}=2.2\times 10^{14}~{\rm GeV}$ & $T_{\rm max}=2.2\times 10^{14}~{\rm GeV}$, \newline $H_{\rm inf}=2.4\times10^{14}~{\rm GeV}$ & {} \\
\hline
-0.7& 0.61 & 2.30 & 1.69\\
\hline
-1.3& 0.36 & 2.27 & 1.91\\
\hline
-1.7& 0.09 & 2.25 & 2.16\\
\hline
{}& $T_{\rm init}=3.1\times 10^{10}~{\rm GeV}$ & $T_{\rm max}=3.1\times 10^{10}~{\rm GeV}$, \newline $H_{\rm inf}=2.4 \times 10^{14}~{\rm GeV}$ & {}\\
\hline
-0.7 & 0.19 & 2.33 & 2.14\\
\hline
-1.3 & 0.19 & 2.33 & 2.14\\
\hline
-1.7 & 0.19 & 2.33 & 2.14\\
\hline
\end{tabular}
\caption{Variations of allowed field ranges for different values of $r$.}
\label{VAFRDVR}
\end{table}
\end{center}
\vspace{-0.7cm}
Finally, in Fig.~\ref{field_Tmax}, we show the allowed initial field values as $T_{\rm max}$ is varied at fixed $H_{\rm inf}$. The first thing to note is that the allowed field range does not change over a wide range of temperatures, including temperatures above $T_{\rm crit}$. Not only does the modulus field avoid overshooting above $T_{\rm crit}$, but the allowed field range also remains roughly the same. In this range of $T_{\rm max}$, the dynamics of the field are governed by the energy density of the inflaton, $\rho_{\varphi}$. When $T_{\rm max}$ is very large, the allowed field range starts to decrease, as a larger $\Gamma_{\varphi}$ lets the inflaton decay quickly. Note that for $T_{\rm max} \sim 10^{14}$ GeV (above $T_{\rm crit}$), the allowed field range remains almost the same as for lower values of $T_{\rm max}$. In this case, the time separation between $T_{\rm max}$ and $T_{\rm R}$ is large, and the energy density is dominated by $\rho_{\varphi}$, which redshifts more slowly than the radiation energy density; in effect, the Hubble damping term persists for a longer time.
In summary, we conclude that the effects of reheating give the field time to relax to its minimum without overshooting. The allowed initial field space therefore increases compared to the case where the radiation density is assumed to be present from the beginning. This effect improves the allowed initial field range by roughly one order of magnitude; see Table~\ref{VAFRDVR}, and compare the left panels of Fig.~\ref{dynamics4} and Fig.~\ref{field_Tmax}, or Fig.~\ref{dynamics1} and Fig.~\ref{dynamics5}.
\section{Moduli abundance and initial conditions}
\label{moduli_abundance}
As noted in the introduction, if the mass of the modulus field $m_{\phi}$ is in the range $10^2$--$10^3$ GeV, it decays just after nucleosynthesis.
The most stringent bound comes from the resulting overproduction of ${\rm D} + {}^{3}{\rm He}$, which requires that the moduli abundance relative to the entropy density $s$ at the time of reheating after inflation satisfy \cite{10.1143/ptp/93.5.879,Kawasaki:2004qu}%
\begin{equation}
\frac{\rho_{\phi}}{s} \lesssim 10^{-14} ~{\rm GeV} . \label{n/s_bound}
\end{equation}
If $m_{\phi} \lesssim H_{\rm inf}$, the modulus is not expected to sit at its zero-temperature minimum; it is in general shifted to a large field value $\phi_{\rm init}$ during inflation. The field begins to oscillate around its zero-temperature minimum when the Hubble parameter $H$ becomes comparable to $m_{\phi}$. The moduli energy density $\rho_{\phi}$ divided by the entropy density $s$ is estimated as \cite{Hagihara_2019},%
\begin{equation}\label{rho_s_abundance}
\frac{\rho_{\phi}}{s}=
\begin{cases}
\frac{1}{8}T_{R}\left(\frac{\phi_{\rm init}}{M_{p}}\right)^2& \text{for } t_{\rm osc} < t_{R}\\
\frac{1}{8}T_{\rm osc}\left(\frac{\phi_{\rm init}}{M_{p}}\right)^2 & \text{for } t_{\rm osc} > t_{R}
\end{cases}
\end{equation}
where $t_{R}$ ($T_{R}$) is the time (temperature) at the end of reheating and $t_{\rm osc}$ ($T_{\rm osc}$) is the time (temperature) at the beginning of the moduli oscillation. Here, it has been assumed that the equation of state of the universe is that of non-relativistic matter before the completion of reheating by the inflaton. To satisfy the bound of Eq.~\eqref{n/s_bound}, the initial field value of the modulus, when it starts to oscillate, must obey the upper bound \cite{Linde_1996}%
\begin{equation}\label{abs}
\phi_{\rm init} \lesssim
\begin{cases}
10^{-6} M_{p}& \text{for } t_{\rm osc} < t_{R}~,\\
10^{-10} M_{p} & \text{for } t_{\rm osc} > t_{R}~.
\end{cases}
\end{equation}
In our analysis, we have seen that $\phi_{\rm init} \lesssim {\mathcal O} (0.1 - 0.01)$ is required to avoid the overshooting problem. The requirement of Eq.~\eqref{abs} is much more stringent, but it applies only to moduli masses that are problematic for the BBN light-element abundances.
Typical moduli potentials in string theory have local minima separated from the global minimum by a finite barrier height. For the phenomenological reasons discussed in the introduction, the field must be stabilized at the local minimum. Overshooting has a typical time scale that is much smaller than the decay time, so preventing overshooting is necessary for all relevant moduli masses. Both for the zero-temperature potential and for the thermally corrected potential, the issue of overshooting depends on the initial conditions. For a given potential, the constraint on the initial conditions relaxes further when the effects of reheating are considered. A modulus field heavier than $\mathcal{O}(100)$ TeV decays well before BBN; the bound of Eq.~\eqref{abs} then does not apply, and only the overshooting constraints on the initial conditions remain. For lighter moduli masses, on the other hand, as long as the bound of Eq.~\eqref{abs} is satisfied, the overshooting constraints are automatically satisfied.
\section{Conclusions}
\label{Discussions and Conclusion}
In this work, we have studied the issue of moduli stabilization at the end of inflation. It is well known that constructing a suitable moduli-stabilizing potential is not enough to ensure that the modulus is stabilized at a finite vev; one must also understand the cosmological evolution of the field. Moreover, the zero-temperature potential is distorted by the radiation bath produced from the inflaton energy density. In earlier work, the dynamical analysis assumed a radiation bath as an initial condition with a fixed initial temperature \cite{Barreiro_2008}. Here, we have focused on how the allowed initial field space changes as the initial temperature is varied, and we have discussed in detail the effects of generating the radiation bath through the decay of the inflaton.
In \cite{Buchmuller:2004xr,Buchmuller:2004tz}, it was first noted that sufficiently large thermal corrections to the potential wash away its local minimum. It was assumed that the field would immediately destabilize by running to a large vev, which puts an upper limit on the reheating temperature $T_{\rm R}$ or on the maximum temperature $T_{\rm max}$ produced during reheating. As we have shown, this is not necessarily the case. The issue of destabilization is always dynamical and therefore depends on the initial field value; even for the zero-temperature potential, the field destabilizes for certain initial values \cite{Brustein:1992nk}. We find that the temperature-dependent corrections do not make things worse. In fact, the allowed field space increases when the temperature exceeds the critical temperature at which the potential loses its minimum; see Fig.~\ref{dynamics4}. When the effects of temperature generation via reheating are included, this constraint relaxes further: the creation of a radiation bath introduces a time scale that allows the modulus field to settle more easily at its minimum, enlarging the field range available for stabilization by roughly one order of magnitude.
Typical moduli potentials in string theory have a finite barrier height, and the field is therefore prone to overshoot the barrier if the initial conditions are not suitable. The typical overshooting time is much smaller than the decay time of the modulus for all relevant masses. Heavier moduli ($\gtrsim 30$ TeV) decay before BBN, and in this case it is essential that the initial field values lie in the suitable range. For lighter moduli masses, even though overshooting must still be avoided, the constraints coming from BBN are much more stringent; they can be satisfied by tracking the field to its minimum with nearly no oscillations around it \cite{Linde_1996}.
The present work can be extended in several directions. First, we considered temperature generation only via perturbative decays of the inflaton. The process can be much more complicated once non-perturbative effects such as preheating are included, and those effects could be incorporated systematically. For our purposes, however, the only relevant quantity is the time scale of thermal-bath generation, which we simply parameterize by $\Gamma_{\varphi}^{-1}$.
We noted in the introduction that a large value of the inflationary potential washes away the local minimum, leading to the KL bound \cite{Kallosh:2004yh}. Here too, the underlying assumption is that the field runs away to large vevs as soon as the minimum is lost, and the present analysis shows that this is not necessarily the case. In a more complete analysis, the evolution of the modulus field should be followed from the time of inflation; we leave this for future work.
In summary, we conclude that the radiation bath present at the end of inflation does not make the moduli stabilisation problem worse. In fact, for temperatures larger than $T_{\rm crit}$, the allowed initial field space is similar to that of the zero-temperature potential. Moreover, if the inflaton decays slowly into the bath, the field gets extra time to relax.
We can therefore relax the upper bound on the initial temperature of the universe, or on the maximum reheating temperature, for a certain range of initial field values.\\\\
\noindent
{\bf {\large Acknowledgements:}} KA is supported by an IISER Kolkata doctoral fellowship. KD is partially supported by the grant MTR/2019/000395 and the Indo-Russian project grant DST/INT/RUS/RSF/P-21, both funded by the DST, Govt.\ of India.
\bibliographystyle{unsrt}
\bibliography{references.bib}
|
Title:
A new constraint on primordial lepton flavour asymmetries |
Abstract: A chiral chemical potential present in the early universe can source helical
hypermagnetic fields through the chiral plasma instability. If these
hypermagnetic fields survive until the electroweak phase transition, they
source a contribution to the baryon asymmetry of the universe. In this letter,
we demonstrate that lepton flavour asymmetries above $|\mu|/T \sim 4 \times
10^{-3}$ trigger this mechanism even for vanishing total lepton number. This
excludes the possibility of such large lepton flavour asymmetries present at
temperatures above $10^6$ GeV, setting a constraint which is about two orders
of magnitude stronger than the current CMB and BBN limits.
| https://export.arxiv.org/pdf/2208.03237 |
\preprint{CERN-TH-2022-134}
\preprint{RESCEU-13/22}
\preprint{KEK-TH-2441}
\preprint{MS-TP-22-23}
\preprint{TU-1164}
\title{A new constraint on primordial lepton flavour asymmetries}
\author{Valerie Domcke}
\email{valerie.domcke@cern.ch}
\affiliation{Theoretical Physics Department, CERN, 1211 Geneva 23, Switzerland}
\affiliation{Institute of Physics, Laboratory for Particle Physics and Cosmology, EPFL, 1015 Lausanne, Switzerland}
\author{Kohei Kamada}
\email{kohei.kamada@resceu.s.u-tokyo.ac.jp}
\affiliation{Research Center for the Early Universe, The University of Tokyo, Hongo 7-3-1 Bunkyo-ku, Tokyo 113-0033, Japan}
\author{Kyohei Mukaida}
\email{kyohei.mukaida@kek.jp}
\affiliation{KEK Theory Center, Tsukuba 305-0801, Japan}
\affiliation{Graduate University for Advanced Studies (Sokendai), Tsukuba 305-0801, Japan}
\author{Kai Schmitz}
\email{kai.schmitz@uni-muenster.de}
\affiliation{University of M\"unster, Institute for Theoretical Physics, 48149 M\"unster, Germany}
\author{Masaki Yamada}
\email{m.yamada@tohoku.ac.jp}
\affiliation{FRIS, Tohoku University, Sendai, Miyagi 980-8578, Japan}
\affiliation{Department of Physics, Tohoku University, Sendai, Miyagi 980-8578, Japan}
\date{\today}
\noindent\textbf{Introduction\,---\,}%
The observed baryon-to-photon ratio $\eta_B^{\rm obs} = n_{\rm b}/n_\gamma = \left(6.12 \pm 0.04\right) \times 10^{-10}$~\cite{Planck:2018vyg,ParticleDataGroup:2020ssz}, together with the baryon-plus-lepton number ($B+L$) violating sphaleron processes in the Standard Model (SM), constrains the baryon and lepton number asymmetries in the thermal plasma of the early universe at temperatures above the electroweak phase transition (EWPT) to $|\mu_{B-L}|/T \lesssim 10^{-9}$. The \emph{lepton flavour asymmetries (LFAs)}, carrying charge $\Delta_\alpha \equiv B/3 - L_\alpha$
with $\alpha = e, \mu, \tau$, could however be much larger as long as an (approximate) $B-L$ symmetry ensures $|\sum_\alpha \mu_{\Delta_\alpha}/T| \lesssim 10^{-9}$.
Taking into account neutrino oscillations which become efficient just before the onset of Big Bang Nucleosynthesis (BBN), the constraint on the asymmetry in the electron neutrinos at the time of BBN,
$\mu_{\Delta_e} / T_\nu |_\text{BBN} = - 0.001 \pm 0.016$~\cite{Pitrou:2018cgg}, merely limits such primordial LFAs to $ g_{*,s}^\text{BBN}/g_{*,s}(T) \, |\mu_{\Delta_\alpha}|/T_\nu \lesssim 0.12 \, (1.0)$
for the two values for the neutrino mixing angle $\sin^2\theta_{13} = 0$ and $0.04$ considered in Refs.~\cite{Pastor:2008ti,Mangano:2010ei,Castorina:2012md}.
Here $g_{*,s}$ accounts for the number of relativistic degrees of freedom at different temperatures.
The resulting contribution to extra radiation is at most around $\Delta N_\text{eff} \simeq 5\%$.
These bounds are considerably weaker than in the case of significant $B-L$ violation, $\mu_{B-L} \sim \mu_{\Delta_\alpha}$, for which the bounds on the electron-flavour asymmetry at BBN apply to all primordial LFAs
~\cite{Mangano:2010ei,Barenboim:2016lxv,Oldengott:2017tzj,Burns:2022hkq} (see~\cite{Iocco:2008va,Pitrou:2018cgg} for a review).
The possibility of such large LFAs has recently received renewed attention, in particular as a way to explain the baryon asymmetry of our universe through leptoflavourgenesis~\cite{Mukaida:2021sgv} (see also Refs.~\cite{Kuzmin:1987wn,Khlebnikov:1988sr,March-Russell:1999hpw,Laine:1999wv,Shu:2006mm,Gu:2010dg} for related works) and as a possible explanation of the recently observed helium anomaly~\cite{Matsumoto:2022tlr,Burns:2022hkq}, which indicates a smaller primordial helium-4 abundance than the standard BBN prediction (see e.g.~\cite{Kohri:1996ke,March-Russell:1999hpw,Pastor:2008ti} for earlier works).
Lepton (flavour) asymmetries have moreover been considered to ameliorate the Hubble tension~\cite{Seto:2021tad} and improve the overall fit to cosmological data~\cite{Yeung:2020zde}.
See e.g.~\cite{Dreiner:1992vm,Casas:1997gx,McDonald:1999in,March-Russell:1999hpw,Kawasaki:2002hq,Yamaguchi:2002vw,Takahashi:2003db,Asaka:2005pn,Shu:2006mm,Gu:2010dg,Harigaya:2019uhf,Gelmini:2020ekg,Mukaida:2021sgv,Kawasaki:2022hvx} for models generating large lepton (flavour) asymmetries and their implications for baryogenesis.
In this letter we derive a new constraint on LFAs present in the early universe above a temperature of $10^6$~GeV, which is significantly stronger than existing constraints except in the special case of an (approximate) $\mu + \tau$ symmetry. This new constraint rules out primordial LFAs as a possible explanation of the helium anomaly, and equally rules out tauphobic leptoflavourgenesis from a $\mu$ asymmetry. The essence of the constraint is the observation that LFAs can trigger a chiral plasma instability (CPI) which sources helical hypermagnetic fields~\cite{Joyce:1997uy} (see also \cite{Brandenburg:2017rcb,Schober:2017cdw,Schober:2018ojn}). These helical magnetic fields survive until the EWPT, at which point their conversion into electromagnetic fields sources a contribution to the baryon asymmetry of the universe~\cite{Giovannini:1997gp,Giovannini:1997eg,Kamada:2016cnb}. Avoiding overproduction of the baryon asymmetry places an upper bound on the LFAs. Thus, much as non-perturbative $SU(2)_L$ processes (sphalerons) constrain lepton number violation, non-perturbative $U(1)_Y$ processes (the CPI) constrain lepton flavour violation.
\smallskip\noindent\textbf{Chiral plasma instability\,---\,}%
Hypermagnetic fields in the thermal plasma of the early universe can be described by chiral magnetohydrodynamics (MHD)~\cite{Durrer:2013pga},
\begin{align}
0 = \frac{\partial \bm{B}_Y}{\partial \eta} + \bm{\nabla} \times \bm{E}_Y \,, \quad 0 = \bm{\nabla} \times \bm{B}_Y - \bm{J}_Y \,,
\label{eq:MHD}
\end{align}
where $\eta$ denotes conformal time and
\begin{align}
\bm{J}_Y = \sigma_Y (\bm{E}_Y + \bm{v} \times \bm{B}_Y) + \frac{2 \alpha_Y}{\pi} \mu_{Y,5} \bm{B}_Y \,.
\label{eq:current}
\end{align}
Here $\sigma_Y \simeq 10^2 T$ denotes the conductivity of the thermal plasma, $\bm{v}$ is the fluid velocity, $\alpha_Y$ is the hypercharge fine structure constant of the hypercharge gauge group $U(1)_Y$ and $\mu_{Y,5}$ is the chiral chemical potential associated with $U(1)_Y$,
\begin{align}
\mu_{Y,5} = \sum_{i} \varepsilon_i g_i Y_i^2 \mu_i \,,
\label{eq:mu5def}
\end{align}
where $\varepsilon_i = \pm 1$ denotes right/left-handed particles, $g_i$ is the multiplicity and $Y_i$ is the hypercharge of the SM particle species $i$. The second term in Eq.~\eqref{eq:current}, referred to as the chiral magnetic effect~\cite{Vilenkin:1980fu,Alekseev:1998ds,Son:2004tq,Fukushima:2008xe}, is the origin of the chiral plasma instability~\cite{Joyce:1997uy}. It will prove convenient to express Eq.~\eqref{eq:MHD} in terms of the helicity stored in the hypermagnetic fields and the chiral chemical potential~\cite{Boyarsky:2011uy,Domcke:2019mnd},
\begin{align}
\partial_\eta h_k & = - \frac{2 k^2}{\sigma_Y} h_k + \frac{8 \alpha_Y}{\pi} \frac{\mu_{Y,5}}{\sigma_Y} \rho_{B,k} \,, \\
\partial_\eta \rho_{B,k} & = - \frac{2 k^2}{\sigma_Y} \rho_{B,k} + \frac{2 \alpha_Y}{\pi} \frac{\mu_{Y,5}}{\sigma_Y} k^2 h_{k} \,,
\end{align}
where $h_k$ and $\rho_{B,k}$ are the Fourier components of the hypermagnetic helicity and energy density, respectively, and the fluid velocity has been neglected. Combining these two equations shows that all modes $k < k_\text{CPI} \equiv 2 \alpha_Y |\mu_{Y,5}|/\pi$ become tachyonically unstable, leading to the generation of helical hypermagnetic fields, seeded by thermal fluctuations, with a typical length scale of $1/k_\text{CPI}$. The fastest growing mode is $k \sim k_\text{CPI}/2$ and the time scale of its growth can be estimated as $\eta_\text{CPI} \sim 2 \sigma_Y/k_\text{CPI}^2$, indicating that the CPI becomes effective at~\cite{Kamada:2018tcs}
\begin{align}
T_\text{CPI} \sim 10^5~\text{GeV} \, \left(\frac{10^2}{g_*}\right)^{\tfrac{1}{2}} \left( \frac{\alpha_Y}{0.01} \right)^2 \left( \frac{10^2 T}{\sigma_Y}\right) \left( \frac{\mu_{Y,5}/T}{10^{-3}} \right)^2\bigg|_{T_\text{CPI}} .
\label{eq:TCPI}
\end{align}
This analytical estimate is in good agreement with the numerical MHD simulations presented in~\cite{Schober:2017cdw}.
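For orientation, the estimate of Eq.~\eqref{eq:TCPI} is straightforward to evaluate numerically; the sketch below uses as defaults the parameter values quoted later in this letter ($g_* = 106.75$, $\alpha_Y = 0.011$, $\sigma_Y = 50\,T$):

```python
def t_cpi(mu5_over_t, g_star=106.75, alpha_y=0.011, sigma_over_t=50.0):
    """Estimated temperature at which the CPI becomes effective,
    Eq. (TCPI), in GeV."""
    return (1e5 * (1e2 / g_star) ** 0.5 * (alpha_y / 0.01) ** 2
            * (1e2 / sigma_over_t) * (mu5_over_t / 1e-3) ** 2)


# t_cpi(1e-3) is a few times 1e5 GeV; a chiral chemical potential of a
# few times 1e-3 pushes T_CPI above 1e6 GeV.
```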
We expect that thermal fluctuations provide initial seeds of hypermagnetic helicity of order $h_k \sim T^4 (k/T)^4/ k$ for $k \ll T$, where the factor $(k/T)^4$ represents the suppression in the tail of the Boltzmann distribution. This seed must be amplified to $\mathcal{O}(T^2 |\mu_{Y,5}^{\rm ini}| / \alpha_Y)$ to complete the CPI, as we will see shortly,
where $\mu_{Y,5}^\text{ini}$ denotes the value of the chiral chemical potential at the onset of the CPI.
Focusing on the fastest growing mode, we estimate the time scale for the completion of the CPI to be $\eta_{\rm CPI} \ln\!\left[\alpha_Y^{-4} \left(T/\mu_{Y,5}^{\rm ini}\right)^2\right] \sim \mathcal{O}(10)\, \eta_{\rm CPI}$.
The chiral plasma instability ends once $\mu_{Y,5} \simeq 0$, i.e.\ when the chiral asymmetry in the plasma has been converted to helical magnetic fields.%
\footnote{
In practice, it suffices that $|\mu_{Y,5}|/T \lesssim 10^{-3}$ to end the CPI, since this pushes $T_\text{CPI}$ below the equilibration temperature of the electron Yukawa coupling, which efficiently completes the erasure of $\mu_{Y,5}$, as discussed below.
}
In the final stages of the CPI, the effect of the velocity fields can no longer be neglected, but the main conclusions drawn above remain valid~\cite{Schober:2017cdw}.
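The characteristic scales discussed above can be collected in one place (a sketch in units of the temperature $T$; the fiducial values $\alpha_Y \simeq 0.01$, $\sigma_Y \simeq 10^2\,T$ and $\mu_{Y,5}/T = 10^{-3}$ are assumptions for illustration):

```python
import math


def cpi_scales(mu5_over_t, alpha_y=0.01, sigma_over_t=100.0):
    """Instability scale k_CPI/T, growth time eta_CPI*T, and the logarithmic
    number of growth times needed to amplify thermal seeds, as estimated above."""
    k_cpi = 2 * alpha_y * mu5_over_t / math.pi       # k_CPI / T
    eta_cpi = 2 * sigma_over_t / k_cpi**2            # eta_CPI * T
    n_growth = math.log(alpha_y**-4 / mu5_over_t**2)  # ln[alpha^-4 (T/mu5)^2]
    return k_cpi, eta_cpi, n_growth


k, eta, n = cpi_scales(1e-3)  # n is of order a few tens, i.e. O(10)
```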
\smallskip\noindent\textbf{Conserved charges in the SM plasma\,---\,}%
Besides the four well-known conserved charges of the SM above the electroweak phase transition (hypercharge and the three flavoured $B-L$ charges $\Delta_\alpha$), the SM plasma in the early universe also features approximately conserved charges whenever Yukawa couplings or non-perturbative sphaleron processes are not efficient enough to keep up with the expansion rate of the universe. At any given temperature, approximating the SM interactions as either inefficient or equilibrated,
the chiral chemical potential~\eqref{eq:mu5def} can be expressed as a linear combination of the respective conserved charges, with all other SM chemical potentials entering Eq.~\eqref{eq:mu5def} expressed in terms of these conserved charges~\cite{Domcke:2020kcp}.
Our main focus in this letter is the temperature regime $10^9~\text{GeV} \gtrsim T \gtrsim 10^6$~GeV, where the weak and strong sphaleron processes as well as all Yukawa couplings of the second and third generations are efficient. The Yukawa couplings of the first-generation quarks, the electron Yukawa coupling, and the off-diagonal down-strange quark Yukawa coupling remain inefficient, conserving the charges associated with $\mu_{u-d}$, $\mu_e$ and $\mu_{2B_1 - B_2 - B_3}$. Solving the linear system for all chemical potentials, with the equilibrated SM interactions imposed as constraint equations (see~\cite{Domcke:2020kcp,NewPaper} for details), yields
\begin{align}
\frac{\mu_{Y,5}}{T} = \frac{513}{358} \frac{\mu_e}{T} + \frac{173}{1074} \frac{\bar \mu_{u-d}}{T} + \frac{151}{358} \frac{\mu_{\Delta_e}}{T} - \frac{10}{179} \frac{\mu_{\Delta_{\mu + \tau}} }{T} \,,
\label{eq:mu5_1}
\end{align}
for $10^9~\text{GeV} \gtrsim T \gtrsim 10^6$~GeV.
Here the bar indicates that we have summed over the three color degrees of freedom of the $u-d$ charge and $\mu_{\Delta_{\mu + \tau}} \equiv \mu_{\Delta_\mu} + \mu_{\Delta_\tau}$.
In the remainder of this letter we will, for simplicity, assume initial conditions with $\mu_e^\text{ini} = \bar \mu_{u-d}^\text{ini} = 0$ and $\sum_\alpha \mu_{\Delta_\alpha} = 0$. Eq.~\eqref{eq:mu5_1} demonstrates that a $B-L$ flavour asymmetry generically generates a non-vanishing chiral chemical potential $\mu_{Y,5}$ at $10^9~\text{GeV} \gtrsim T \gtrsim 10^6$~GeV.
As described above, such a non-zero $\mu_{Y,5}$ can trigger a CPI which drives $\mu_{Y,5}$ to zero, at the cost of generating a fermion asymmetry as well as generating helical hypermagnetic fields. The equations for the individual fermion currents $J_i^\mu$ are dictated by the chiral anomaly,
\begin{align}
\partial_\mu J^\mu_i = \varepsilon_i g_i Y_i^2 \frac{\alpha_Y}{ \pi} \bm{E}_Y \cdot \bm{B}_Y + \dots \,,
\end{align}
where the dots indicate the SM Yukawa interactions and sphaleron processes, and the zero-component of each current is determined by the corresponding chemical potential, $q_i = \bar \mu_i T^2/6$. Since in the temperature range of interest these do not affect the $e$ and $u-d$ currents, the charge associated with the linear combination $\mu_e - \bar \mu_{u-d}$ is preserved throughout the CPI and remains zero for our initial conditions. Together with setting $\mu_{Y,5} = 0$ at the completion of the CPI in Eq.~\eqref{eq:mu5_1}, we obtain
\begin{align}
\frac{856}{537} \frac{\mu_e}{T} = - \frac{151}{358} \frac{\mu_{\Delta_e}}{T} + \frac{10}{179} \frac{\mu_{\Delta_{\mu+\tau}}}{T}
= - \frac{\mu_{Y,5}^\text{ini}}{T}\,,
\label{eq:mu5_2}
\end{align}
right after the CPI has completed.
The conservation law for the total helicity density, derived from the chiral anomaly equation, then dictates the generation of helicity density
\begin{align}
h = - \frac{\pi T^2}{3 \alpha_Y} \mu_e = - \frac{\pi T^2}{3 \alpha_Y} \bar\mu_{u-d}
= \frac{\pi T^2}{\alpha_Y} \,
\frac{179}{856} \, \mu_{Y,5}^\text{ini} \,,
\label{eq:helicity}
\end{align}
where $\mu_e$ and $\bar \mu_{u-d} = \mu_e$ denote the asymmetries in the right-handed electrons and first-generation quarks after the CPI, and we have assumed zero initial net helicity.
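The rational coefficients in Eqs.~\eqref{eq:mu5_2} and \eqref{eq:helicity} follow from Eq.~\eqref{eq:mu5_1}; a quick exact-arithmetic check (illustrative only):

```python
from fractions import Fraction as F

# With mu_e = mu_{u-d}-bar, the mu_e and mu_{u-d}-bar terms of Eq. (mu5_1)
# combine into the 856/537 prefactor of Eq. (mu5_2):
assert F(513, 358) + F(173, 1074) == F(856, 537)

# h = -(pi T^2 / 3 alpha_Y) mu_e with mu_e/T = -(537/856) mu_{Y,5}^ini/T
# reproduces the 179/856 coefficient of Eq. (helicity):
assert F(1, 3) * F(537, 856) == F(179, 856)
```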
When the temperature drops below $10^6$~GeV, the first generation quark Yukawa couplings equilibrate and $\bar \mu_{u-d}$ is no longer associated with a conserved charge. Eq.~\eqref{eq:mu5_1} is replaced by
\begin{align}
\frac{\mu_{Y,5}}{T} = \frac{711}{481} \frac{\mu_e}{T} + \frac{5}{13} \frac{\mu_{\Delta_e}}{T} - \frac{4}{37} \frac{\mu_{\Delta_{\mu + \tau}} }{T} \,,
\label{eq:mu5_3}
\end{align}
which, when compared to Eq.~\eqref{eq:mu5_2} and taking into account $\mu_{\Delta_{\mu+\tau}} = - \mu_{\Delta_e}$, only marginally modifies the final value of $\mu_e$, and hence the helicity, if the CPI occurs in this temperature range.%
\footnote{For completeness, we note that in the temperature regime $10^{11}~\text{GeV} > T > 10^9$~GeV, when the muon Yukawa and some of the second and third generation quark Yukawa couplings are not equilibrated, the analogue of Eq.~\eqref{eq:mu5_2} reads
\begin{align}
\frac{\mu_{Y,5}}{T} = \frac{1765}{589} \frac{\mu_e}{T} + \frac{188}{589} \frac{\mu_{\Delta_{e+\mu}}}{T} - \frac{88}{589} \frac{\mu_{\Delta_{\tau}} }{T} \,, \nonumber
\end{align}
with coefficients which are numerically again quite similar to Eq.~\eqref{eq:mu5_2}. Note however that since only the third generation lepton Yukawa coupling is in equilibrium, $\mu+\tau$ symmetric LFAs yield a non-vanishing $\mu_{Y,5}$ whereas the $e + \mu$ symmetric case does not.}
At $T \sim 10^5$~GeV the electron Yukawa interaction equilibrates~\cite{Bodeker:2019ajh}, $\mu_e$ becomes a function of the $\mu_{\Delta_\alpha}$, and $\mu_{Y,5}$ vanishes independently of the initial values of $\mu_{\Delta_\alpha}$. Hence the CPI can only be triggered above the electron Yukawa equilibration temperature of about $10^5$~GeV. Taking into account the discussion below Eq.~\eqref{eq:TCPI}, this means the CPI must become effective at a temperature above $\mathcal{O}(10^6) \GeV$ in order to complete by $T = \mathcal{O}(10^{5}) \GeV$.
\smallskip\noindent\textbf{Baryogenesis from decaying helical magnetic fields\,---\,}%
If this helicity survives until the EWPT, then the conversion of hypermagnetic field to electromagnetic field generates a baryon asymmetry~\cite{Kamada:2016cnb},
\begin{align}
\eta_B^0 = c_B^\text{dec} \frac{\alpha_Y}{2 \pi} \frac{h}{n_\gamma} \left(\frac{g_{*,s}^0}{g_{*,s}^\mathrm{ewpt}} \right) \,.
\label{eq:would-be_AS}
\end{align}
Here $g^0_{*,s}/g_{*,s}^\mathrm{ewpt} \simeq 0.04$ denotes the ratio of the degrees of freedom in the thermal plasma today and at the EWPT, $n_\gamma = 2 \, \zeta(3) \, T^3/\pi^2$ is the photon number density, and $c_B^\text{dec} \simeq 0.05$ parametrizes the efficiency of baryogenesis from decaying hypermagnetic fields at the EWPT~\cite{Kamada:2020bmb,NewPaper}. Given current uncertainties on the dynamics of the EWPT, $c_B^\text{dec}$ may vary by almost three orders of magnitude~\cite{Kamada:2016cnb,Jimenez:2017cdr}. This does not, however, change the conclusion that any value $|\mu_{Y,5}^\text{ini}|/T \gtrsim 10^{-3}$ sufficient to trigger (and complete) the CPI before the equilibration of the electron Yukawa interaction, see Eq.~\eqref{eq:TCPI}, leads to a baryon asymmetry orders of magnitude above the observed value $\eta_B^\text{obs} \sim 10^{-9}$. This can be seen immediately by inserting Eq.~\eqref{eq:helicity} into Eq.~\eqref{eq:would-be_AS}.\footnote{
LFAs can also directly generate a baryon asymmetry during sphaleron decoupling, see e.g.~\cite{Kuzmin:1987wn,Khlebnikov:1988sr,March-Russell:1999hpw,Laine:1999wv,Mukaida:2021sgv}. This contribution is expected to be significantly smaller than the one obtained from Eq.~\eqref{eq:would-be_AS} and does not change our conclusions.
}
Moreover, our conclusions hold even if the electroweak phase transition is first order due to beyond-the-SM physics, in which case the efficiency factor $c_B^{\rm dec}$ would be {\it larger}~\cite{Giovannini:1997eg}.
Such large values of the chiral chemical potential, and hence of the helicity density, also ensure that the turbulent regime of MHD is reached, triggering an inverse cascade that pushes the helicity to larger length scales and thus protects it from magnetic diffusion at small scales~\cite{Durrer:2013pga,Kahniashvili:2012uj,Banerjee:2004df}. An estimate of the kinetic and magnetic Reynolds numbers gives values much larger than unity, indicating that helicity generated at $10^9~\text{GeV} \gtrsim T \gtrsim 10^5$~GeV should indeed survive until the EWPT.
\smallskip\noindent\textbf{Constraints on LFAs\,---\,}%
From the discussion above we conclude that lepton flavour asymmetries $\mu_{\Delta_\alpha}$ large enough to generate a chiral chemical potential $\mu_{Y,5}$ that triggers and completes the CPI before the equilibration of the electron Yukawa coupling are excluded, since they would overproduce the matter--antimatter asymmetry of the universe. Accounting for uncertainties in the determination of the onset of the CPI, we consider the parameter space in which our estimate~\eqref{eq:TCPI} of the CPI temperature lies above $10^6$~GeV to be excluded. Combining Eqs.~\eqref{eq:TCPI} and \eqref{eq:mu5_2} then yields
\begin{align}
\left| \frac{151}{358} \frac{\mu_{\Delta_e}}{T} - \frac{10}{179} \left( \frac{\mu_{\Delta_\mu}+\mu_{\Delta_\tau}}{T} \right) \right| <
2.1 \cdot 10^{-3} \,,
\end{align}
where we have set $g_* = 106.75$, $\alpha_Y = 0.011$ and $\sigma_Y =
50~T$ at $10^6$~GeV~\cite{Baym:1997gq,Arnold:2000dr}.
Imposing $B-L$ conservation, this translates to
\begin{align}
\left| \frac{\mu_{\Delta_e}}{T} \right| = \left| \frac{\mu_{\Delta_\mu} + \mu_{\Delta_\tau}}{T} \right| <
4.3 \cdot 10^{-3} \,,
\label{eq:bound}
\end{align}
which is the main observation of this letter.
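As a quick cross-check of the arithmetic (a sketch of ours, not part of the original analysis), the second bound follows from the first once $B$$-$$L$ conservation with purely leptonic asymmetries enforces $\mu_{\Delta_e} = -(\mu_{\Delta_\mu} + \mu_{\Delta_\tau})$:

```python
from fractions import Fraction

# Coefficients of the CPI bound quoted above:
# |(151/358) mu_e/T - (10/179)(mu_mu + mu_tau)/T| < 2.1e-3.
c_e, c_mt = Fraction(151, 358), Fraction(10, 179)

# Substituting mu_e = -(mu_mu + mu_tau) collapses the left-hand side
# to a single coefficient multiplying |mu_e/T|.
c_combined = c_e + c_mt            # 151/358 + 20/358 = 171/358
bound_e = 2.1e-3 / float(c_combined)
print(c_combined, bound_e)
```

which reproduces the quoted $4.3 \cdot 10^{-3}$ up to the rounding of the input bound.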
To compare our result with the existing bounds in the literature, we have to account for the entropy injection by the decoupling of relativistic particles. Noting that $ T^2 \mu_{\Delta_\alpha} / s$ is preserved in an adiabatically expanding universe, with $s$ denoting the entropy density,
we obtain
\begin{align}
\left. \frac{\mu_{\Delta_\alpha}}{T} \right\vert_{T = T_1} = \left( \frac{g_{*,s}(T_1)}{g_{*,s}(T_2)} \right) \left. \frac{\mu_{\Delta_\alpha}}{T} \right\vert_{T = T_2} \,,
\end{align}
where in particular $g_{*,s}^{\rm BBN}/g_{*,s}^{\rm ewpt} \simeq 0.1$.
This provides a bound on the LFAs which is about two orders of magnitude stronger than existing bounds on primordial lepton flavour asymmetries~\cite{Pastor:2008ti,Mangano:2010ei}. In fact, inserting $\mu_{\Delta_e}/T_\nu = - (\mu_{\Delta_\mu} + \mu_{\Delta_\tau})/T_\nu$ with $ |\mu_{\Delta_e}|/T_\nu \lesssim 1$
into Eq.~\eqref{eq:mu5_1} yields $\mu_{Y,5} \lesssim 0.5$ and thus $T_\text{CPI} \lesssim 10^{10}$~GeV, justifying our focus on the temperature range of $10^9~\text{GeV} \gtrsim T \gtrsim 10^6$~GeV for the onset of the CPI.
Moreover, our constraint excludes tauphobic leptoflavourgenesis, which considers $\mu_{\Delta_\mu}/T = - \mu_{\Delta_e}/T \simeq 0.4$ and $\mu_{\Delta_\tau}/T = 0$~\cite{March-Russell:1999hpw,Mukaida:2021sgv},
if the
asymmetries are generated above $10^6$~GeV.
On the other hand, leptoflavourgenesis with a sizable tau flavour component,
$\mu_{\Delta_\tau}/T \simeq 8 \cdot 10^{-3}$~\cite{Shu:2006mm,Gu:2010dg,Mukaida:2021sgv}, is marginally consistent with our bound, within the uncertainty arising from the rough estimate of the time scale for the completion of the CPI ($=\mathcal{O}(10) \, \eta_{\rm CPI}$).
A large asymmetry in the electron flavour, $- \mu_{\Delta_e}/T_\nu = \mu_{\nu_e}/T_\nu \simeq 0.04$, has been proposed e.g.\ in~\cite{Burns:2022hkq} to address the helium anomaly. One possible implementation of this is a significant violation of $B$$-$$L$ after the EWPT but before BBN, resulting in $\mu_{\nu_\mu}/T_\nu \simeq \mu_{\nu_\tau}/T_\nu \sim \mu_{\nu_e}/T_\nu \simeq 0.04$ at BBN, see e.g.~\cite{Borah:2022uos}. Alternatively, if the LFAs are created before the EWPT, $|\mu_\text{B-L}|/T \lesssim 10^{-9}$ together with the equilibration of LFAs through neutrino oscillations just before the onset of BBN leads to a significant suppression of the impact of LFAs on BBN and CMB observations~\cite{Froustey:2021azz}. This is particularly relevant given the relatively large neutrino mixing angle $\sin^2 \theta_{13} = 0.022$~\cite{ParticleDataGroup:2020ssz}, which leads to an onset of the electron neutrino oscillations before BBN. As demonstrated in~\cite{Pastor:2008ti,Mangano:2010ei}, the neutrino distributions do not, however, reach full kinetic equilibrium before decoupling, and the resulting deviation from a Fermi--Dirac distribution leads to non-vanishing effective values of $\mu_{\nu_\alpha}^\text{eff}/T_\nu$ which impact both the light element abundances produced during BBN and the surviving neutrino radiation $\Delta N_\text{eff}$. Obtaining $\mu_{\nu_e}^\text{eff}/T_\nu \simeq 0.04$ to address the helium anomaly requires a primordial value of $-\mu_{\Delta_e}/T_\nu = (\mu_{\Delta_\mu} + \mu_{\Delta_\tau})/T_\nu = {\cal O}(1)$ at $T \sim 10 \MeV$~\cite{Pastor:2008ti,Mangano:2010ei}, which is firmly ruled out by our new constraint~\eqref{eq:bound}. Our constraint moreover excludes the possibility that the helium anomaly is addressed by a more moderate LFA, $-\mu_{\Delta_e}/T_\nu \simeq 0.04$, with the onset of neutrino oscillations delayed by non-standard neutrino interactions~\cite{Dolgov:2004jw}.
We conclude that our new constraint~\eqref{eq:bound} rules out the possibility of explaining the helium anomaly with primordial LFAs, independent of the precise equilibration temperature of the neutrino oscillations.
Two obvious caveats to this constraint deserve to be mentioned. First, if the LFAs are generated only at temperatures below $10^5$~GeV, the constraints derived here do not apply.
Scenarios considered in Refs.~\cite{Dreiner:1992vm,Casas:1997gx,McDonald:1999in,Kawasaki:2002hq,Yamaguchi:2002vw,Takahashi:2003db,Asaka:2005pn,Harigaya:2019uhf,Gelmini:2020ekg,Kawasaki:2022hvx} fall into this category because they generate large lepton (flavour) asymmetries after the electroweak phase transition. Second, in models with a $\mu + \tau$ symmetry (in addition to the total $B$$-$$L$ symmetry), the chiral chemical potential $\mu_{Y,5}$ vanishes below $10^9$~GeV and the constraints derived here are evaded. Note that in this latter case the LFAs are erased once $\mu - \tau$ neutrino oscillations begin, making a solution to the helium anomaly based on this construction challenging.
\smallskip\noindent\textbf{Conclusions\,---\,}%
In this letter we point out that non-perturbative SM processes associated with the chiral magnetic effect in the hypercharge gauge group can be used to set constraints on large lepton flavour asymmetries present in the early universe at temperatures above $10^6$~GeV. In the absence of a $\mu + \tau$ symmetry, we constrain the flavoured $B$$-$$L$ asymmetries to $|\mu_{\Delta_\alpha}|/T <
0.004$. These constraints are currently limited not by experimental accuracy but by theory uncertainties. A more accurate simulation of the dynamics of the chiral plasma instability close to the equilibration temperature of the electron Yukawa interaction could potentially improve this bound by a factor of $\sqrt{10}$, by resolving the regime where the CPI becomes relevant but is not completed before the electron Yukawa interaction equilibrates.
In this regime, it may moreover be possible to obtain the observed baryon asymmetry, as discussed in Ref.~\cite{Kamada:2018tcs}.
Further progress may be made by dropping the approximation of instant equilibration of the various Yukawa couplings and instead solving the Boltzmann equations for the Yukawa interactions once they become marginally relevant. We hope that our work sparks future research in these directions.
While the focus of this letter is on constraining lepton flavour asymmetries, the mechanism considered here also constrains scenarios where any of the fermion asymmetries is large, even if the asymmetry is washed out at lower temperatures (see e.g.~\cite{Co:2019wyp,Co:2020xlh,Co:2020jtv,Co:2019jts}). This also includes, e.g., scenarios of leptoflavourgenesis that rely on large fermionic input charges generated at very high energies. The transport equations of the SM will redistribute the asymmetries according to the conserved charges in the different temperature regimes, but generically at temperatures above $10^5$~GeV, $\mu_{Y,5}$ is of the same order as the largest initial fermion asymmetry (see, e.g., Ref.~\cite{Domcke:2020kcp}). As discussed in this letter, this can trigger the CPI, generating helical magnetic fields which can lead to an overproduction of the baryon asymmetry.
\medskip\noindent\textit{Acknowledgments\,---\,}%
We thank Miguel Escudero for helpful discussions on the helium anomaly as well as Keisuke Harigaya and Mikhail Shaposhnikov for comments on the draft.
K.~K. was supported by JSPS KAKENHI, Grant-in-Aid for Scientific Research (C) JP19K03842.
K.\,M.\, was supported by MEXT Leading Initiative for Excellent Young Researchers Grant No.\ JPMXS0320200430,
and by JSPS KAKENHI Grant No.\ JP22K14044.
M.\,Y.\ was supported by the Leading Initiative for Excellent Young Researchers,
MEXT, Japan, and by JSPS KAKENHI Grant No.\ JP20H05851 and JP21K13910.
\bibliographystyle{JHEP}
\bibliography{refs}
\newpage
\onecolumngrid
\newpage
\renewcommand{\thesection}{S\arabic{section}}
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\thetable}{S\arabic{table}}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\setcounter{page}{1}
|
Title:
Galactic satellite systems in CDM, WDM and SIDM |
Abstract: We investigate the population of bright satellites ($M_{*} \geq 10^{5}
\mathrm{M}_{\odot}$) of haloes of mass comparable to that of the Milky Way in
cosmological simulations in which the dark matter (DM) is either cold, warm or
self-interacting (CDM, WDM and SIDM respectively). The nature of the DM gives
rise to differences in the abundance and structural properties of field halos.
In WDM, the main feature is a reduction in the total number of galaxies that
form, reflecting a suppression of low-mass DM haloes and lower galaxy formation
efficiency compared to CDM. For SIDM, the changes are structural, restricted to
the central regions of haloes and dependent on the assumed self-interaction
cross-section. We also consider different baryonic subgrid physics models for
galaxy formation, in which supernova gas blowouts can or cannot induce the
formation of a core in dwarf galaxies. Overall, the inclusion of baryons lessen
the differences in the halo properties in the different DM models compared to
DM-only simulations. This affects the satellite properties at infall and
therefore their subsequent tidal stripping and survival rates. Nonetheless, we
find slightly less concentrated satellite radial distributions as the SIDM
cross-section increases. Unfortunately, we also find that the satellite
populations in simulations with baryon-induced cores in CDM and WDM can mimic
the results found in SIDM, making the satellite stellar mass and maximum
circular velocity functions heavily degenerate on the assumed nature of the DM
and the adopted subgrid modelling. These degeneracies preclude using the
brightest satellites of the Milky Way to constrain the nature of DM.
| https://export.arxiv.org/pdf/2208.07376 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
dark matter -- galaxies: haloes
\end{keywords}
\section{Introduction}
The precise nature of the dark matter (DM) is as-yet unknown, despite making up the largest fraction of the universal matter energy-density budget \citep{2020.Planck}. This is because its existence has only been inferred through astrophysical tests relying on gravitational probes, such as the rotation curves of galaxies \citep{Rubin.1970}, strong gravitational lensing \citep{Wambsganss.2004} or X-ray emission from galaxy clusters \citep{Voigt.2006}. Despite ongoing searches for a particle counterpart that could account for most of the dark matter, none have yet made a conclusive detection, directly \citep{Marrodan.2016} or indirectly \citep{Gaskins.2016}.
Nonetheless, assuming dark matter is a heavy particle whose distribution on large scales is solely dictated by gravity results in a remarkable agreement between predictions and observations on large cosmological scales \citep{Davis.1985}. These range from the distribution of galaxies at the present-day \citep{Cole.2005,Springel.2006, Rodriguez.2016}, to the anisotropies imprinted in the Cosmic Microwave Background, back when the Universe was only 300,000 years old \citep{2020.Planck}.
A natural particle candidate satisfying these criteria is the weakly-interacting massive particle (WIMP; \citealt{Ellis.1984}). WIMPs are hypothetical particles which arise on electroweak scales -- $\mathcal{O}(\mathrm{GeV}-\mathrm{TeV})$ -- and whose predicted relic abundance is similar to the inferred DM density. Within the WIMP landscape, an exciting prospect is the lightest neutralino, a particle predicted by well-motivated minimal supersymmetric extensions of the Standard Model. These considerations make cold dark matter (CDM) the \textit{de facto} DM model. However, no direct evidence for supersymmetry \citep{Canepa.2019} or WIMP-like dark matter candidates \citep{Aprile.2018} has been detected yet. As more of the plausible parameter space is excluded, we may need to re-visit our expectations of what the particle nature of the DM is.
Nonetheless, there are other well-motivated models which have not yet been ruled out. One such example is warm dark matter (WDM), a lighter particle than CDM with masses in the keV range. A promising WDM particle is the sterile neutrino, a hypothetical right-handed counterpart of the Standard Model (SM) neutrino. These arise naturally in many Grand Unified Theories (e.g. \citealt{Pati.1974}) and could provide a natural explanation for the small mass of SM neutrinos via the see-saw mechanism \citep{King.2015}. Cosmologically, its lighter nature entails that its free-streaming length -- the spatial scale over which primordial density perturbations are erased -- is larger than in CDM. Consequently, its power spectrum is suppressed at small spatial scales relative to CDM. This has a number of interesting consequences, from a decrease in the number of low mass haloes to a delay in their formation time. The latter effect also results in structural changes in dark matter haloes, such as lower concentrations. Thus, WDM is able to reproduce the successes of CDM on large scales, whilst modifying the predictions on smaller scales.
Another alternative is a particle that is able to scatter elastically with itself: self-interacting dark matter (SIDM). Although initially proposed to solve the so-called missing satellites and cusp \textit{versus} core `problems' \citep{Spergel.2000}, there are several particle physics models that naturally result in self-interactions between DM particles (e.g. \citealt{McDonald.2002,Buckley.2010}). Self-interactions lead to changes in the velocity and density profiles of the central regions of haloes, turning their cuspy NFW-like distributions into cored isothermal ones. Moreover, if the cross-section is large enough, core-collapse can be triggered, reverting the flat density core into a super-cusp. Although the largest velocity-independent cross-sections are likely ruled out by cluster-mass constraints \citep{Peter.2013,Rocha.2013}, large cross-sections at low masses remain possible via velocity-dependent cross-sections. Nonetheless, it is worth noting that many of the previous constraints have been overstated to some degree, because of simplifying assumptions, limited cluster statistics, or a lack of baryons in the simulations from which the constraints are derived \citep{Robertson.2018}.
The above changes to the DM model thus primarily alter predictions on small scales, either in the abundance of low mass structure or in the distribution of DM in the centres of haloes. Consequently, we need to test these models in an environment where these changes are observationally accessible. An excellent test bench for this is the Local Group. This is because surveys such as SDSS, DES and ATLAS have made possible the discovery of low-surface brightness objects that probe the edge of galaxy formation \citep{Torrealba.2016}. Moreover, Gaia offers a unique view into the kinematics of some of these objects, leading to the discovery of the `feeble giant' Antlia II \citep{Torrealba.2019}, whose properties are difficult to explain in a Universe dominated by collisionless dark matter \citep{Caldwell.2017,Fu.2019, Borukhovetskaya.2022}.
Objects orbiting around larger, more massive ones are subject to gravitational tides, which strip dark matter from their haloes. At fixed orbital parameters, the efficiency of this process depends sensitively on the internal structure of the DM haloes \citep{Penarubia.2010}. Differences in the satellites' underlying inner dark matter distributions are therefore amplified, leading to very different satellite populations based on their survivability. This suggests that, in principle, we may indirectly probe the nature of dark matter by comparing the properties of the present-day population of satellites around the Milky Way (MW) to the results of hydrodynamical simulations.
For the purposes of this study, the inclusion of baryons is paramount for reliable predictions. Firstly, it allows a more meaningful comparison to observations, since not all DM haloes host galaxies. Secondly, the processes associated with galaxy formation and evolution can alter the global properties of haloes and how dark matter is distributed within them. These effects are mass dependent and could, in principle, be degenerate with changes to the DM model \citep{Khimey.2021, Burger.2022}, e.g. core formation via supernova-driven gas blowouts \citep{Navarro.1996b,Read.2005} \textit{vs} self-interacting dark matter. Moreover, the presence of a disk, and the subsequent contraction of the DM halo, can greatly enhance the destruction of subhaloes \citep{Sawala.2017,Garrison-Kimmel.2017, Richings.2020}.
Limits on available computational power mean we need to resort to subgrid implementations to model baryonic physics when simulating galaxy formation in a cosmological setting. Although these models are able to make realistic predictions once calibrated \citep{Genel.2014, Schaller.2015, Ludlow.2017, Hopkins.2018}, there are different parametrisation choices and many of the free parameters can be degenerate with others. This can result in different predictions for as-yet unconstrained relations, such as the properties of the IGM \citep{Kelly.2021}.
One such example, particularly relevant to stripping, is whether supernova-driven gas blowouts are able to form cores in dwarf galaxies. Depending on the choice of subgrid parameters, simulations produce dwarfs with central density cores (FIRE, \citealt{Onorbe.2015}; NIHAO, \citealt{Tollet.2016}) or without them (EAGLE and AURIGA, \citealt{Bose.2019}). The observational evidence for the existence of cores in dwarf galaxies is hotly debated, with some attributing their inferred presence to difficulties in the kinematic modelling \citep{Oman.2019,Roper.2022}. Nonetheless, it is important to consider both possibilities, especially from the point of view of disentangling baryonic effects from different DM models.
Given all of the above, this paper sets out to study how the properties of the satellite systems of haloes with masses similar to that of our MW -- within a factor of two -- change when the DM is neither cold nor collisionless. Given the importance of baryons, and that they may affect the inner dark matter distribution in satellites, we also consider different values of the subgrid parameters to explore the associated variations in the satellite population. To this end, we simulate cosmic structure formation in CDM, WDM and a range of SIDM cross-sections in the same $(12~\mathrm{Mpc})^{3}$ periodic volume. This allows us to focus on the same haloes in this suite of thirteen different simulations and study how the properties of their satellite systems change.
This paper is structured as follows. Section 2 introduces the different models we use to simulate structure formation, from N-body to full-hydrodynamical realisations. Section 3 presents the methods used to measure and compare the properties of interest and the sample selection. This is followed by an overview of the changes in the properties of field haloes driven by the different models. Subsequently, we shift our analysis to our sample of mass-selected haloes to investigate how their satellite populations are affected under the different models. Finally, we investigate the causes of the resulting differences in satellite stripping and survivability.
\section{Simulations}
In this section we give an overview of the EAGLE subgrid physics used in this work and describe how we model the changes in the dark matter and baryon models.
\subsection{The code}
The EAGLE project \citep{Schaye.2015,Crain.2015} is a suite of
hydrodynamical cosmological simulations that follow the formation and
evolution of cosmic structure from $\Lambda$CDM initial conditions
assuming the cosmological parameter values from
\citet{Planck_Collaboration.2014}. They were performed using a
modified version of the P-Gadget3 code \citep{Volker.2005} that
incorporates subgrid prescriptions for the physics relevant to galaxy
formation and evolution: radiative cooling and photoheating \citep{Wiersma.2009}, star formation and evolution
\citep{Schaye.2004,Schaye.2008}, stellar feedback
\citep{Dalla_Vecchia.2012}, black hole seeding
\citep{Springel.2005,Booth.2009}, its subsequent growth and
stochastic, thermal AGN feedback.
The values of the parameters used in
modelling these processes were set by requiring a good match to the
observed $z = 0.1$ galaxy stellar mass function, the distribution of
galaxy sizes and the amplitude of the central black hole mass {\em vs}
stellar mass relation. Once calibrated in this way, EAGLE reproduces a
number of population statistics \citep{Schaller.2015,Ludlow.2017}.
We use the calibration made for the higher mass resolution version of EAGLE to simulate structure formation in a periodic volume of $(12~\mathrm{Mpc})^{3}$. We populate it with $2 \times 512^{3}$ particles, half of which are dark matter and the rest gas particles. This corresponds to a particle mass resolution of $\sim 4\times 10^{5}$ and $\sim 8\times 10^{4} \, \Msun$, respectively. The initial conditions were generated using MUSIC \citep{Hahn.2011}.
\subsection{Baryonic physics}
An important parameter determining whether gas blowouts can flatten the density profiles of dark matter haloes in hydrodynamical simulations is the star formation density threshold \citep{Benitez-Llambay.2019}. This parameter sets the minimum density required for a gas particle to be eligible to become a star particle. The EAGLE subgrid physics uses a metallicity ($Z$) dependent term given by \citep{Schaye.2004}:
\begin{equation}
\rho_{\rm th} = n_{\rm th, 0}\Big(\dfrac{Z}{0.04}\Big)^{\alpha} \, ,
\label{density_threshold_equation}
\end{equation}
where $n_{\rm th, 0} = 10^{-1} \, \mathrm{cm}^{-3}$ and $\alpha = 0.64$. These values result in thresholds that are comparatively lower than those in other hydrodynamical simulations, e.g. $10^{2}\,\mathrm{cm}^{-3}$ in GASOLINE \citep{Zolotov.2012} or $10^{3}\,\mathrm{cm}^{-3}$ in FIRE-2 \citep{Fitts.2017}. Consequently, gas cannot accumulate in sufficient quantities at the centres of haloes to become gravitationally relevant before being blown out via supernova feedback resulting from star formation. As a result, the EAGLE model cannot form cores through baryonic blowouts \citep{Navarro.1996a}.
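As a minimal numerical sketch, the metallicity-dependent threshold above can be evaluated directly (the metallicity value used here is illustrative, not taken from the simulations):

```python
def rho_threshold(Z, n_th0=0.1, alpha=0.64):
    """Star formation density threshold in cm^-3, following the
    metallicity-dependent expression above; Z is the metal mass fraction."""
    return n_th0 * (Z / 0.04) ** alpha

# At an illustrative metallicity Z = 0.012 the threshold lies below
# n_th0 = 0.1 cm^-3, far under the constant 10 cm^-3 used in the HT runs.
print(rho_threshold(0.012))
```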
Nonetheless, $\rho_{\rm th}$ is a free parameter of the subgrid physics. Indeed, star forming gas clouds in the real universe reach gas densities in excess of $10^{4} \, \mathrm{cm}^{-3}$ \citep{Lada.2009}. It is thus possible that internal structural changes that occur in the real Universe are not captured by the low values of the star formation threshold used in the fiducial subgrid parameters of EAGLE. We therefore explore how baryon-induced cores affect the satellite populations of objects with masses similar to that of our Milky Way by running models with a higher density threshold, setting $\rho_{\rm th}$ to a constant value of $10 \, \mathrm{cm}^{-3}$. Although this is still low compared with other simulations, it is large enough for gas blowouts to turn cusps into cores at the dwarf galaxy scale in EAGLE \citep{Benitez-Llambay.2019}. We refrain from using larger density thresholds as this would drastically reduce the efficiency of the thermal supernova feedback implemented in our simulations. This would make dwarf galaxies unrealistically baryon-dominated in their centres at all times \citep{Benitez-Llambay.2019}, unless other subgrid model parameters are re-calibrated. We have checked that basic galaxy properties, such as the stellar-to-halo-mass relation, do not change significantly across the models used in this work.
To distinguish between the two baryonic physics models, we henceforth refer to the fiducial, low density threshold model as LT and the higher threshold model as HT. Simulations without baryons are referred to as dark matter only (DMO).
\subsection{Warm dark matter}
We obtain the power spectrum of WDM, $P_{\rm WDM}(k)= T^{2}(k)P_{\rm CDM}(k)$, using the transfer function of \citet{Bode.2001}:
\begin{equation}
T^{2}(k) = [1 + (\alpha k)^{2\nu}]^{-5/\nu} \, .
\end{equation}
Here, $\nu$ is a fitting constant equal to 1.2 and the parameter $\alpha$ depends on the assumed mass of the WDM particle:
\begin{equation}
\alpha = 0.049\Big[\dfrac{m_{\rm th}}{\mathrm{keV}} \Big]^{-1.11}\Big[ \dfrac{\Omega_{\rm WDM}}{0.25}\Big]^{0.11} \Big[ \dfrac{h}{0.7} \Big]^{1.22} \, h^{-1} \, \mathrm{Mpc}\, .
\end{equation}
For this work we assume $m_{\rm th} = 2.5\, \mathrm{keV}$. This is lighter than the equivalent thermal relic mass of the 7~keV sterile neutrino model associated with the unidentified 3.5~keV X-ray line \citep{Boyarsky.2014}. Nonetheless, we choose this value to enhance the differences with respect to CDM and thus allow for an easier comparison. We can estimate the mass scale below which the differences with respect to CDM become noticeable, $m_{1/2}$. It corresponds to the Jeans mass of a perturbation with a wavelength equal to the one at which the WDM power spectrum is half of the CDM one. For the values used in this work, $m_{1/2} = 1.4 \times 10^{9} \Msun$.
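The transfer function and the associated suppression scale are straightforward to evaluate. The sketch below assumes illustrative values $\Omega_{\rm WDM} = 0.272$ and $h = 0.704$ (our assumptions, not quoted in the text) and computes the half-mode wavenumber where the WDM power is half the CDM power:

```python
def wdm_alpha(m_th=2.5, omega_wdm=0.272, h=0.704):
    """Cutoff scale alpha in Mpc/h; m_th is the thermal relic mass in keV.
    omega_wdm and h are illustrative assumptions."""
    return 0.049 * m_th ** -1.11 * (omega_wdm / 0.25) ** 0.11 * (h / 0.7) ** 1.22

def wdm_transfer_sq(k, nu=1.2):
    """Squared transfer function T^2(k), k in h/Mpc, so that
    P_WDM(k) = T^2(k) P_CDM(k)."""
    return (1.0 + (wdm_alpha() * k) ** (2 * nu)) ** (-5.0 / nu)

# Half-mode wavenumber: T^2(k_half) = 1/2 by construction.
nu = 1.2
k_half = (2.0 ** (nu / 5.0) - 1.0) ** (1.0 / (2.0 * nu)) / wdm_alpha()
```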
\subsection{Self-interacting dark matter}
Self-interactions are modelled using the Monte-Carlo implementation described in \citet{Robertson.2017}. Dark matter particles can scatter off each other when they are closer than the Plummer-equivalent softening length of the simulations. The probability of any two neighbouring particles scattering is a function of their relative velocity and the assumed cross-section.
In this study, we use three different cross-sections; two velocity-independent, isotropic cross-sections of 1 and 10~$\mathrm{cm}^{2} \mathrm{g}^{-1}$ and an anisotropic, velocity-dependent one given by:
\begin{equation}
\dv{\sigma }{\Omega} = \dfrac{\sigma_{T,0}}{4 \pi \Big(1 + \dfrac{v^{2}}{w^{2}}\mathrm{sin}^{2}\dfrac{\theta}{2}\Big)^{2}}\; ,
\end{equation}
where $v$ is the magnitude of the relative velocity between particles in their centre of mass frame and $\theta$ the scattering angle relative to their incoming direction. The above expression results from assuming that the particles scatter in a Yukawa potential under the Born approximation \citep{Ibe.2010}.
The parameters $w$ and $\sigma_{T,0}$ correspond to the velocity scale below which the cross-section is roughly constant and its asymptotic, low-velocity value, respectively. We use $w = 560\,\mathrm{km}\mathrm{s}^{-1}$ and $\sigma_{T,0} = 3.04\,\mathrm{cm}^{2}\,\mathrm{g}^{-1}$ to reproduce the best-fitting mass-dependent cross-section of \citet{Kaplinghat.2016}, which is derived from constraints on the inferred cross-section from dwarf to cluster scale haloes. In practice, these values yield an approximately constant cross-section of $\sim 3 ~\mathrm{cm}^{2} \mathrm{g}^{-1}$ on dwarf galaxy scales.
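These statements are easy to verify numerically. The sketch below (variable names are ours) evaluates the differential cross-section above and its near-constancy at dwarf-scale velocities, where $v \ll w$:

```python
import math

def dsigma_domega(v, theta, sigma_T0=3.04, w=560.0):
    """Differential cross-section (cm^2 g^-1 sr^-1) of the velocity-dependent
    model above; v and w in km/s, theta the centre-of-mass scattering angle."""
    return sigma_T0 / (4.0 * math.pi
                       * (1.0 + (v / w) ** 2 * math.sin(theta / 2.0) ** 2) ** 2)

# On dwarf scales (v ~ 30 km/s << w) the angular dependence is negligible,
# so integrating over solid angle returns roughly sigma_T0 ~ 3 cm^2/g.
sigma_dwarf = 4.0 * math.pi * dsigma_domega(30.0, math.pi / 2.0)

# At cluster-scale velocities (v ~ w) scattering becomes forward-peaked.
forward = dsigma_domega(560.0, 0.0)
backward = dsigma_domega(560.0, math.pi)
```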
\section{Methods}
Here we discuss how we find cosmic structure and link subhaloes across snapshots to build their merger trees. We also show how we remove spurious WDM groups, select our sample of haloes and their satellites, and correct for orphan galaxies. The latter are satellite galaxies whose host dark matter halo has been lost from the halo catalogues. Their omission leads to underestimates of the satellite radial distributions in the central regions of haloes, where they are the dominant population.
\subsection{Structure finding and merger trees}
To identify cosmic structures, we assign particles into distinct
groups according to the friends-of-friends (FoF) percolation algorithm
\citep{Davis.1985}. Each group is made up of particles that are within
0.2 times the mean interparticle separation from one
another. Gravitationally bound substructures are found with the SUBFIND algorithm \citep{Springel.2001}, which, using particle velocity and position information, identifies self-bound structures within a larger FoF group.
We follow the time evolution of all SUBFIND groups using their merger trees, which are built by cross-matching a subset of the most bound particles between consecutive time outputs \citep{Jiang.2014}. This implementation is able to link SUBFIND groups that have temporarily disappeared from the catalogues (e.g. due to insufficient density contrast near centres of more massive haloes) for five consecutive data outputs or less. The main progenitor branch is then found by identifying the progenitor branch with the largest integrated mass \citep{DeLucia.2007}. This reduces the influence that halo switching, prone to occur during major mergers, has on the identification of the main progenitor at high redshifts.
\subsection{WDM spurious group removal}
Particle-based simulations starting from a density perturbation power spectrum with a resolved cut-off produce spurious structure along filaments. This is a consequence of the discrete representation of the underlying density field \citep{Wang.2007}, and it results in an artificially high number of objects below the mass scale where no structure is expected to form.
In this study, we remove them from the WDM simulations using the two criteria of \citet{Lovell.2014}. Firstly, we remove all groups whose peak bound mass is below the mass scale at which the number of spurious groups is equal to genuine ones, $ M_{\rm lim}$. This is related to the mass resolution of the simulation and the assumed power spectrum via:
\begin{equation}
M_{\rm lim} = 5.05 \, \bar{\rho} d k^{-2}_{\rm peak},
\end{equation}
where $d$ is the mean interparticle separation, $k_{\rm peak}$ the wavenumber at which the dimensionless power spectrum, $\Delta^{2}(k)$, peaks and $\bar{\rho}$ the mean density of the universe. For the simulations and WDM model used in this study, $M_{\rm lim} = 1.4 \times 10^{8} \, \Msun$.
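A minimal sketch of this mass limit follows; the example inputs (mean matter density and a peak wavenumber) are our illustrative assumptions, not values quoted in the paper:

```python
def m_lim(rho_bar, d, k_peak):
    """Spurious-group mass limit M_lim = 5.05 * rho_bar * d / k_peak^2,
    as in the equation above. Units must be mutually consistent,
    e.g. Msun/Mpc^3, Mpc and 1/Mpc."""
    return 5.05 * rho_bar * d / k_peak ** 2

# Illustrative inputs (assumed): a mean matter density of ~4e10 Msun/Mpc^3
# and the mean interparticle separation of a 12 Mpc box with 512^3 particles.
example = m_lim(4.0e10, 12.0 / 512.0, 6.0)
```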
Finally, we select the particles bound to the group at the time when it first reached half of its peak bound mass. We then compute the inertia tensor of those particles in the initial conditions and define the sphericity as the ratio of the smallest to the largest eigenvalue of that tensor, $s \equiv c/a$. All groups with $s \leq 0.16$ are removed, since the Lagrangian regions associated with spurious groups are significantly more flattened than those in which genuine haloes form.
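The flattening criterion can be sketched as follows. We assume the simple quadrupole form $I_{ij} = \sum_k m_k \, x_i x_j$ of the mass-weighted tensor, which is all the eigenvalue ratio requires:

```python
import numpy as np

def sphericity(pos, mass=None):
    """s = c/a: ratio of the smallest to the largest eigenvalue of the
    mass tensor I_ij = sum_k m_k x_i x_j about the centre of mass.
    pos has shape (N, 3)."""
    pos = np.asarray(pos, dtype=float)
    mass = np.ones(len(pos)) if mass is None else np.asarray(mass, dtype=float)
    x = pos - np.average(pos, axis=0, weights=mass)
    tensor = np.einsum('k,ki,kj->ij', mass, x, x)
    eig = np.linalg.eigvalsh(tensor)   # sorted ascending
    return eig[0] / eig[-1]

# A strongly flattened (pancake-like) particle configuration:
flat = [[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 0.1], [0, 0, -0.1]]
```

Groups with $s \leq 0.16$ in the initial conditions would then be flagged as spurious.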
\subsection{Halo and subhalo matching across simulations}
We match the main SUBFIND group of each FoF group across simulations by selecting its 100 most bound particles as identified by SUBFIND. We then select a candidate match by identifying the group to which the majority of those particles belong in the other simulation. The process is then repeated in reverse, and if this bijective check is successful, we confirm the match.
Matching substructure is less straightforward, since the same object may have followed different paths and been stripped to varying degrees once it entered the virial region of a larger object. To minimise the effect of these differences, we perform the bijective match at the time when the bound mass of the subhalo peaked.
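The bijective procedure can be sketched with dictionaries mapping particle ids to halo membership. All names here are hypothetical; the real pipeline operates on SUBFIND catalogues:

```python
from collections import Counter

def plurality_match(ids, halo_of):
    """Halo owning the plurality of the given most-bound particle ids,
    where halo_of maps particle id -> halo id in the target catalogue."""
    counts = Counter(halo_of[i] for i in ids if i in halo_of)
    return counts.most_common(1)[0][0] if counts else None

def bijective_match(mostbound_a, mostbound_b, halo_of_a, halo_of_b):
    """Keep a pair (A, B) only if the forward match A -> B and the
    reverse match B -> A agree."""
    matches = {}
    for halo_a, ids in mostbound_a.items():
        halo_b = plurality_match(ids, halo_of_b)
        if halo_b is not None and \
           plurality_match(mostbound_b.get(halo_b, ()), halo_of_a) == halo_a:
            matches[halo_a] = halo_b
    return matches
```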
\subsection{Sample Selection}
As we are interested in studying the satellite system of haloes similar to that of our own Milky Way, we restrict our analysis to haloes of mass
$M_{200}$\footnote{$M_{200}$ is defined as the mass contained within a sphere of mean density 200 times the critical density of the universe.} at $z = 0$ in the range $(0.5\text{--}2.5)\times 10^{12} \, \Msun$. This is within a factor of two of recent observational estimates of the Milky
Way's halo mass \citep{Callingham.2019,Cautun.2020}. Eight
haloes satisfying this criterion were identified in each version of the simulations. However, one is undergoing a merger at $z=0$, which we remove from further consideration.
Their resolved satellite systems are defined by identifying all SUBFIND groups that are within 300~kpc from the centre of their host halo and have one or more bound stellar particle at $z = 0$. We also enforce that the identified structures are heavily dark matter dominated, namely, $M^{\rm DM}_{\rm SUB}/M^{\rm tot}_{\rm SUB} > 0.8$. This additional condition stems from the fact that dense clumps of gas in the HT versions are identified as self-bound structures by SUBFIND. Their inclusion in the satellite population would lead to biased radial distribution functions, as they form in the inner few kiloparsecs of the dark matter halo, where the gaseous disk is located. Some gas clumps are also present in the low threshold versions, but are far less common than in the higher density threshold counterparts.
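The selection criteria above can be expressed schematically; the dictionary field names are assumptions for illustration:

```python
import math

# Minimal sketch of the satellite selection: within 300 kpc of the host
# centre, at least one bound stellar particle, and DM-dominated
# (M_DM / M_tot > 0.8) to exclude dense self-bound gas clumps.

def select_satellites(subs, host_centre, r_max_kpc=300.0, dm_frac_min=0.8):
    sats = []
    for s in subs:
        r = math.dist(s["pos_kpc"], host_centre)
        dm_dominated = s["m_dm"] / s["m_tot"] > dm_frac_min
        if r < r_max_kpc and s["n_star"] >= 1 and dm_dominated:
            sats.append(s)
    return sats
```

The DM-fraction cut is what removes gas clumps misidentified as satellites in the HT runs, without affecting genuine dark-matter-dominated dwarfs.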
\subsection{Orphan galaxies}
In simulations of structure formation with limited resolution, substructure can be artificially disrupted. Substructure is lost whenever its mass drops below the 20-particle limit imposed by SUBFIND on bound structures. The decrease in the number of bound particles can occur, for example, when a subhalo has been tidally stripped or when the density contrast is insufficient for it to be detected near the central regions of a more massive neighbour. This does not necessarily imply that the subhalo has been disrupted: increasing the particle mass resolution would allow it to survive for longer, both because objects can be tracked down to lower masses, as $m_{\rm limit} \sim 20 m_{\rm dm}$, and because smaller artificial cores reduce the effect of tides.
Thus, accounting for these `disrupted' objects improves the convergence of the predicted radial distribution function of satellites around Milky Ways \citep{Newton.2018}. Moreover, they are required to correctly predict the satellite luminosity functions at stellar masses below $10^{5} \, \Msun$, even in high resolution simulations \citep{Bose.2020}.
In this study, we tag as orphans all dark matter haloes that had at least one bound stellar particle before being lost from the merger trees. We then use their most bound DM particle -- identified during the last data output when they were resolved -- as a proxy for the position and velocity of the orphan galaxy. A small subset of orphans end up sharing the same tracer particle ID. In such cases, we discard the higher redshift counterparts and keep the one orphaned at a later time.
Once the orphan population is identified, we track their positions until one of the two conditions given in \citet{Simha.2017} is fulfilled. The first is that the orphan has existed for longer than the time its orbit takes to decay due to dynamical friction:
\begin{equation}
\dfrac{T_{\rm df}}{\tau_{\rm dyn}} = \Big(\dfrac{r}{r_{\rm circ}}\Big)^{-1.8}\Big(\dfrac{J}{J_{\rm circ}}\Big)^{0.85}\dfrac{M_{\rm FoF}(<r)/M_{\rm sub}}{2B(1)\ln \Lambda} \, ,
\end{equation}
where $r$ and $J$ are the orbital radius and angular momentum of the orphan, and the corresponding values for a circular orbit of the same binding energy are $r_{\rm circ}$ and $J_{\rm circ}$, respectively. The Coulomb logarithm is taken to be $\ln \Lambda = \ln (M_{\rm vir}/M_{\rm sub})$ and $B(x) \equiv \mathrm{erf}(x) - 2x e^{-x^2}/\sqrt{\pi}$. The dynamical timescale of the halo, $\tau_{\rm dyn}$, is estimated as:
\begin{equation}
\tau_{\rm dyn}(z) = \dfrac{1}{\sqrt{4\pi G \Delta_{\rm vir}(z)\rho_{\rm crit}(z)}}\, ,
\end{equation}
where $\rho_{\rm crit}$ is the critical density of the universe and $\Delta_{\rm vir}(z)$ is the overdensity of a just-collapsed spherical top-hat density perturbation \citep{Bryan.1998, Eke.1996}:
\begin{equation}
\Delta_{\rm vir}(z) = 18\pi^{2} + 82[\Omega_{\rm m}(z)-1] + 39[\Omega_{\rm m}(z)-1]^2 \, .
\end{equation}
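The quantities entering this tracking condition can be sketched as follows, assuming $G$ and $\rho_{\rm crit}$ are supplied in mutually consistent units (function names are our own):

```python
import math

# Sketch of the orphan-tracking timescales: the virial overdensity of
# Bryan & Norman (1998), the halo dynamical time, and the dynamical
# friction condition quoted from Simha & Cole (2017).

def delta_vir(omega_m_z):
    x = omega_m_z - 1.0
    return 18.0 * math.pi**2 + 82.0 * x + 39.0 * x**2

def tau_dyn(delta, rho_crit, G):
    return 1.0 / math.sqrt(4.0 * math.pi * G * delta * rho_crit)

def b_func(x):
    return math.erf(x) - 2.0 * x * math.exp(-x * x) / math.sqrt(math.pi)

def t_df_over_tau(r, r_circ, J, J_circ, M_enc, M_sub, M_vir):
    """T_df / tau_dyn for an orphan on an orbit (r, J)."""
    ln_lambda = math.log(M_vir / M_sub)
    return ((r / r_circ) ** -1.8 * (J / J_circ) ** 0.85
            * (M_enc / M_sub) / (2.0 * b_func(1.0) * ln_lambda))
```

For an Einstein--de Sitter background ($\Omega_{\rm m}=1$) the overdensity reduces to the familiar $18\pi^{2} \approx 178$.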
The dynamical friction timescale is first calculated immediately after the galaxies are orphaned. If the orphan subsequently enters the virial region of a more massive FoF group, we re-calculate and update its value.
The second condition is to stop tracking orphans once they come within a radius that encloses a mean density equal to the mean density of the orphan, $\bar{\rho}_{\rm FoF}(<R_{\rm tid}) = \bar{\rho}_{\rm sub} (<R_{\rm sub})$. For the spatial scale of the subhalo, $R_{\rm sub}$, one may choose $R_{\rm max}$ or the half-light radius of the galaxy it hosts, $R_{50}$. Here we use the latter, since we are interested in modelling when the luminous component of the galaxy is affected by tides. A subset of orphans have no associated $R_{50}$, e.g.\ those with only one bound stellar particle. In such cases, we compute the median of $\rho(<R_{50})/\rho(<R_{\rm max})$ for orphans with known $R_{50}$ and multiply $\rho(<R_{\rm max})$ by this correction factor to estimate $\rho(<R_{\rm 50})$.
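The fallback estimate for orphans without a measured $R_{50}$ amounts to a median rescaling; a minimal sketch with plain lists standing in for the catalogue:

```python
from statistics import median

# Orphans with no measurable half-light radius get rho(<R_50) estimated
# by scaling rho(<R_max) with the median ratio taken from orphans where
# both quantities are known.

def estimate_rho_r50(rho_rmax, known_pairs):
    """known_pairs: list of (rho(<R_50), rho(<R_max)) for orphans with R_50."""
    factor = median(r50 / rmax for r50, rmax in known_pairs)
    return factor * rho_rmax
```

Using the median rather than the mean keeps the correction factor robust against the few orphans with extreme density ratios.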
\subsection{Orbit integration}
The typical time resolution between consecutive data outputs for our simulations ($\sim 300~\mathrm{Myr}$) is much larger than the dynamical timescales of the central regions of the haloes in the mass range we study here. This means that outputs are unlikely to `catch' satellites near pericentre, potentially leading to an underestimate of their numbers in the central regions. This can affect estimates for the central radial distribution of satellites, as well as whether the tidal disruption criterion is fulfilled.
We interpolate the orbits of satellites between consecutive data outputs. Here we use the method described in \citet{Richings.2020}, with a few notable differences. Firstly, we use the AGAMA package \citep{Vasiliev.2019} instead of GALPY \citep{Bovy.2015}. Secondly, we align the z-axis of the coordinate system with the $z=0$ angular momentum of the galaxy's stellar component, if present. Finally, we use an axisymmetric multipole expansion for the potential sourced by the DM and a cylindrical one \citep{Cohl.1999} for that of the baryons. The latter choice is made to model more accurately a flattened potential.
\section{Field haloes}
Here we discuss how the global and internal properties of field haloes differ among DM models, as well as with the choice of subgrid physics. We begin by comparing the abundance and global properties of all haloes, luminous or dark, across our simulations. We then discuss changes in the galaxy formation efficiency, namely, the fraction of luminous haloes in a given mass range. Finally, we study how the dark matter distribution differs between matched pairs of DM haloes across all simulations.
\subsection{Halo mass functions}
In Fig.~\ref{halo_mass_function} we show the halo mass function as measured in all simulations available for this volume. We have defined the virial mass as the mass contained within a sphere whose mean density is 200 times $\rho_{\rm crit}$. Focusing on the CDM DMO version, we show the expected power law dependence on $M_{200}$ in the mass range $10^{8} - 10^{11} \Msun$. At higher masses, we observe a deviation from this behaviour. This is driven by Poisson fluctuations that arise as a consequence of the small number of massive objects in our simulations. Indeed, within a volume of $(12\,\mathrm{Mpc})^3$, we expect less than 10 MW-mass haloes to form.
The corresponding SIDM DMO simulations show no appreciable differences relative to the CDM versions in the sampled mass range, regardless of the cross-section value. This is because the primordial density fluctuation power spectrum was assumed to be the same across these two models. The addition of self-interactions primarily affects the central regions of DM haloes, where higher densities allow for more frequent interactions between particles. There are no significant differences in the distribution of dark matter near the virial radius nor in the number of objects that form, and hence there are no changes in the halo mass functions relative to CDM.
On the other hand, the WDM DMO simulation shows large differences with respect to the CDM and SIDM models. Although at higher masses these are negligible, they become significant close to and below the half-mode mass of our WDM model. This is evident as a reduction in the number of haloes at fixed $M_{200}$ on those mass scales. This is due to the suppression of density perturbations on small spatial scales, which results in fewer low-mass objects forming compared to the CDM and SIDM models. However, we point out that the systems that do form are less massive than their CDM counterparts, as shown in Fig.~\ref{mass_ratio}.
In this mass range, the hydrodynamical versions of all models exhibit a systematic suppression with respect to their DMO counterparts. This is a consequence of the loss of baryons within the virial region of haloes at early times, which induces a shift in the halo mass functions towards lower masses. As shown in Fig.~\ref{mass_ratio}, all models in the largest mass bins have ratios close to the universal dark matter mass fraction. The mass loss is entirely explained by the removal of a large fraction of the baryons by feedback at early times. Focusing on lower masses, we see that the ratio for the CDM and SIDM models approaches a constant fraction that is lower than $\Omega_{\rm DM}/\Omega_{\rm m}$. This is because the loss of baryons at early times hinders subsequent mass growth due to the resulting shallower gravitational potential well, leading to overall less massive haloes \citep{Sawala.2013}. The case of WDM is a combination of the above together with a mass decrease arising from the cut-off in the power spectrum.
\subsection{Halo formation times}
The epoch at which haloes form determines the shape and normalisation of their DM density profiles. This is because the formation time reflects the density of the Universe when the density perturbation decoupled from the Hubble flow. As discussed above, the early loss of baryons can slow down the mass growth of DM haloes. Moreover, a cut-off in the power spectrum can also delay the formation of haloes.
To explore how the differences made to our models alter the formation time of haloes, we compare how formation times vary across matched pairs, relative to their CDM DMO counterparts. For this purpose, we identify the formation time with the redshift at which the main progenitor first reached half of its $z=0$ virial mass, $z_{1/2}$. We compare how the median and scatter of this ratio varies as a function of mass in Fig.~\ref{formation_time}.
We first note the similarities between the DMO versions of SIDM and CDM across the mass range studied here. Again, this is because the power spectra of initial density perturbations were the same in both models. On the other hand, the WDM DMO counterparts exhibit a formation delay that increases towards lower masses. This means that they form when the Universe is less dense compared to haloes that collapse earlier. Thus, their concentrations are expected to be lower in WDM than in their CDM and SIDM counterparts.
The hydrodynamical versions of all simulations have equal formation times for larger mass haloes, but begin to form slightly later at lower masses. This is caused by the loss of baryons at early times, which results in a measurable slow down in the growth rate of the halo due to the shallower potential well. This leads to lower virial masses at $z=0$ relative to their DMO counterparts, as discussed previously.
The above changes in the formation times of haloes have important implications on the fraction that host galaxies. This is because the interplay between their mass accretion histories and the mass required to trigger the gravitational collapse of gas largely determines whether a halo is luminous or not at $z=0$. Thus, delayed formation times and slower growth -- indirectly probed by our $z_{1/2}$ metric -- can reduce the amount of luminous haloes in a given mass range.
We explore this in Fig.~\ref{occupation_fraction}, which shows how the halo occupation fraction (HOF) varies across different models. First, focusing on the CDM LT version, we observe three distinct regimes. At masses below $\sim 10^{9}\,\Msun$, no haloes host luminous components, whereas all haloes are luminous above $\sim 10^{10}\,\Msun$. The mass range between both limits is populated by both luminous and starless haloes.
The shape of the HOF is well understood from simple assumptions about when galaxy formation is triggered \citep{Benitez-Llambay.2020}. Essentially, any halo more massive than a redshift-dependent mass threshold, defined by the scale at which gas is unstable to gravitational collapse, will host a galaxy by $z=0$. Before reionisation, this threshold is determined by atomic hydrogen cooling; after reionisation it is determined by the thermal state of the intergalactic gas. At high masses, all haloes have crossed this threshold, hence all are luminous. Those at intermediate masses will cross it (or not) depending on their mass assembly histories, which vary across haloes. At lower masses, objects have not been able to trigger the gravitational collapse of gas and thus remain starless.
The predicted (CDM) HOF of \citet{Benitez-Llambay.2020} is shown in Fig.~\ref{occupation_fraction}; its midpoint agrees well with our simulations. Nonetheless, there are some differences at the high and low mass ends. These are largely driven by the binning scheme we require to measure the HOF in our simulations, which is not fine enough to capture the sharp transition. However, all haloes above $6\times10^{9} \, \Msun$ should host a galaxy, yet we find some that remain starless in our simulations. We attribute this to resolution effects: the limited resolution of our simulations (a factor of 8 coarser than that in the simulations of \citealt{Benitez-Llambay.2020}) is not enough to follow accurately the rate at which the gas becomes denser as it approaches the threshold for star formation.
Turning back to the HOF measured in our simulations, we can understand the differences between all models. For the SIDM and CDM cases, regardless of the hydrodynamical model, no significant differences exist in the assembly of matched counterparts. Thus, haloes that form galaxies in one simulation always do so in the alternative models. On the other hand, the WDM simulations show a clear difference with respect to the latter two. This is connected to their delayed formation.
To understand why this is the case, consider a CDM halo that only just crosses the mass threshold for galaxy formation. Its WDM counterpart will, at a fixed redshift,
be less massive due to its delayed formation. Consequently, it will not be massive enough to trigger the gravitational collapse of gas and will remain starless.
Evidently, the details of this simplified explanation change once a more realistic picture is considered. For example, lower concentrations of low mass haloes alter the hydrostatic equilibrium profile of gas, although this effect is minor. Moreover, the properties of reionisation also change as a reflection of the suppression of low-mass structure \citep{Yue.2012, Dayal.2017}. Finally, we note that the properties of the subset of starless haloes which retain their gas content after reionisation (reionisation-limited HI clouds, \citealt{Benitez.Llambay.2017}) remain, as yet, unexplored. It would be interesting to contrast how their properties and abundance compare to those formed in CDM, potentially yielding additional constraints on the nature of DM.
\subsection{Density profiles}
An important prediction of simulations of cosmic structure formation is the spherically-averaged radial density profile of DM haloes. Their profiles in CDM DMO simulations are quasi-universal over 20 orders of magnitude in halo mass \citep{Wang.2020} and well described by the NFW \citep{Navarro.1996b} and Einasto \citep{Einasto.1965} formulas, which predict centrally divergent cusps. However, we expect significant changes to the internal structure of DM haloes in the different models we study. Examining how they differ is an important step in understanding differences in the predicted $z=0$ satellite system, since it influences how strongly they are tidally stripped \citep{Penarubia.2010}.
Firstly, low-mass WDM haloes are likely to be less concentrated as a result of the delay in their formation relative to CDM. Secondly, scattering due to self-interactions will drive the centre of an initially cuspy profile to an isothermal, constant density core \citep{Rocha.2013,Robertson.2021}. Finally, the inclusion of baryons and different subgrid prescriptions may cause additional differences such as cores in CDM and WDM haloes, contraction of high mass haloes and an overall reduction in the DM density due to delayed growth.
We study in Fig.~\ref{density_ratios} how different choices for the DM model and baryonic physics alter the density profiles in three different mass bins, $\mathrm{M}_{200} \in [0.5,2.5]\times 10^{12}\Msun$, $\mathrm{M}_{200} \in [0.5,2.5]\times 10^{11}\Msun$, and $\mathrm{M}_{200} \in [0.5,2.5]\times 10^{10}\Msun$. The latter corresponds roughly to the least massive haloes still able to form galaxies (see Fig. \ref{occupation_fraction}).
We have matched all central haloes in these bins to their CDM DMO counterparts. We then estimate their densities using logarithmically spaced spherical shells in physical distance and express them relative to the density of their CDM DMO counterparts. Finally, we average across all haloes in these mass bins that satisfy all three relaxation criteria of \citet{Neto.2007}:
\begin{itemize}
\item The virial ratio $|2K/U|$ should be less than 1.35.
\item The centre of mass, measured using all DM particles within the virial region of the halo, should be within 0.07$R_{\rm vir}$ from the centre of potential.
\item The substructure mass fraction should be less than 10\%.
\end{itemize}
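Applied to a halo catalogue, the three relaxation cuts reduce to a simple filter; the dictionary keys below are illustrative, assuming the diagnostic quantities have been precomputed:

```python
# Hedged sketch of the Neto et al. (2007) relaxation cuts used before
# stacking density profiles.

def is_relaxed(halo):
    return (abs(halo["virial_ratio"]) < 1.35      # |2K/U|
            and halo["com_offset"] < 0.07         # centre offset, units of R_vir
            and halo["sub_mass_frac"] < 0.10)     # substructure mass fraction

def relaxed_sample(haloes):
    return [h for h in haloes if is_relaxed(h)]
```

Haloes failing any one of the three cuts are excluded from the stacked profiles.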
Focusing first on the high mass haloes, we see large differences across different DM and baryonic physics models. For CDM, the addition of baryons has no effect on the DM density at large radii. At smaller radii, there is a $\sim 80\%$ enhancement in the DM density in the LT version. The origin of this is the contraction of the halo in response to the formation of the galaxy at its centre. The HT simulation shows differences in the central parts, most notably a lower median density ratio. Nonetheless, it is still consistent with the LT one beyond $\sim 2~\mathrm{kpc}$ within the 1$\sigma$ scatter. We have examined the profiles individually and note that some of the lowest mass HT haloes within this mass bin have cores, whereas more massive ones have similar (or greater) contractions relative to their LT counterparts. These differences arise because the properties of the stellar component have changed across simulations, e.g.\ their masses, sizes and, if they form a bar, its dipole moment strength. This affects how much the bar torques the surrounding dark matter \citep{Forouhar.2022}.
The differences in SIDM relative to CDM depend on the assumed cross-section, although all have lower central densities. Identifying the core radius with the radius at which the density ratio first crosses unity, we see it increases monotonically with the particle cross-section. This occurs because the radius at which the profiles become approximately isothermal depends on the scattering rate of particles, and thus on their cross-sections. The removal of DM from the centre to intermediate radii causes a localised enhancement whose magnitude and location are sensitive to the assumed SIDM cross-section. Thus, parametric density profiles fitted to the central density, such as the generalised NFW \citep{Zhao.1996}, do not fit these haloes well.
The addition of baryons in SIDM reduces the differences in the central regions of haloes compared to CDM DMO. For example, the median core radius decreases from 8 and 13~kpc to 1.5 and 4~kpc, for SIDM1 and SIDM10, respectively. The enhancement in density at intermediate radii becomes more similar across SIDM models. Both are a consequence of the interplay between two competing effects: self-interactions driving a decrease in density and halo contraction counteracting it.
Focusing on the lower masses, the CDM hydrodynamical simulations produce profiles that are consistently less dense than their DMO counterparts. Although the offset relative to the DMO counterparts is roughly constant throughout the radial range shown, there is a slight dependence on radius. We attribute both differences to the early loss of baryons and subsequent delay in formation time, which leads to lower densities and concentrations. The similarity between the LT and HT simulations is due to the baryonic component of these galaxies being small. Thus, baryonic blowouts are not able to perturb the inner dark matter distribution.
Both the DMO and hydrodynamical WDM simulations show a much stronger radial offset relative to CDM. This is because they are significantly less concentrated than their CDM counterparts. Thus, their shapes are very different.
Finally, low-mass SIDM DMO haloes show differences relative to CDM DMO similar to their more massive counterparts: a decrease in the central density and an enhancement at intermediate radii. Although it might seem as if the density suppression is less severe than for the high mass haloes, this is because the radial scale (relative to the virial radius) is not the same in the bottom and top panels. Once again, the inclusion of baryons reduces the differences relative to CDM. In fact, the lowest cross-section of $1~\mathrm{cm}^{2}\mathrm{g}^{-1}$ shows no significant changes within the radial range shown. At large radii, we recover the same constant density ratio observed in CDM and WDM.
\section{Satellite systems}
As discussed in the previous section, changes to the DM model and baryonic physics lead to differences in the overall number of haloes that form, their internal structure and the fraction that host galaxies. We now focus on how these changes propagate to the satellite population of haloes.
We begin with a comparison of the $z=0$ properties, followed by a detailed analysis of the main causes for the differences. Finally, we also consider corrections to account for orphaned galaxies, which are the dominant population in the central tens of kiloparsecs and belong to the ultra-faint regime.
\subsection{A first look at the effect of tides}
The left panel of Fig.~\ref{Mstar_Vmax_relation} shows the stellar mass to $V_{\rm max}$ relation measured for all central galaxies at $z=0$. We see no systematic differences between models within the scatter, although the stellar components in HT can be slightly less massive than those in LT. Nonetheless, the best fit power law model with an exponential truncation, $M_{*} \propto V^{\gamma}_{\rm max}\exp(-V^{\nu}_{\rm max})$, is similar in all of them. This fit was done using galaxies with $V_{\rm max} > 30~\mathrm{km} \, \mathrm{s}^{-1}$ and $M_{*} > 10^{6}\,\Msun$ to exclude heavily-stripped backsplash haloes, which are significantly offset from the mean relation, and galaxies with less than ten stellar particles.
The equivalent relation for all the $z=0$ satellites is shown in the right panel of the same figure. The observed offset at fixed stellar mass with respect to the relation for the centrals reflects the effects of tidal stripping. These remove mass as the satellites orbit more massive objects, decreasing their $V_{\rm max}$ over time. This primarily affects the DM, which occupies the less bound outskirts of the halo. The stellar component remains undisturbed for much longer than the DM, since it is more centrally concentrated.
\subsection{Stellar mass functions}
In Fig.~\ref{mstar_function} we show the cumulative distribution of stellar mass for our sample of haloes. These were measured by selecting all satellites, using their SUBFIND bound stellar mass and averaging across all haloes in each simulation. We only show the mass regime resolved by our simulations, which corresponds to masses larger than those of the ultra-faint satellite population. Nonetheless, it is clear that the number of $z=0$ satellites above $M_{*} = 10^{5} \Msun$ is strongly dependent on the assumed DM and baryonic physics model.
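The construction of a host-averaged cumulative stellar mass function can be sketched as follows (the array shapes and names are our own, not the paper's pipeline):

```python
import numpy as np

# Illustrative construction of N(> M_*) averaged over hosts;
# masses_per_host is an assumed list of arrays of satellite stellar
# masses, one array per host halo.

def mean_cumulative_smf(masses_per_host, mass_grid):
    counts = [np.array([(m > M).sum() for M in mass_grid], float)
              for m in masses_per_host]
    return np.mean(counts, axis=0)   # average satellite count per host
```

Averaging the per-host counts rather than pooling all satellites keeps the normalisation as satellites per host, which is what the figure compares against the Milky Way.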
The most numerous populations occur in the CDM LT simulation, as expected. This is because their haloes are cuspy and more concentrated than in all of the other hydrodynamical models. Thus, they are more resilient to tides. When the density threshold for star formation increases (HT) -- and gas blowouts are able to carve cores -- the number of satellites decreases by about a third. This is evidence for increased stripping in the haloes with cored profiles.
The SIDM simulations also show a reduction in numbers that increases monotonically with the cross-section. As we saw in the previous section, SIDM models form the same number of haloes and galaxies as CDM; the only difference is the number of satellites that survive to $z=0$. Therefore, the lack of satellites relative to CDM indicates stronger stripping and destruction due to the central cores driven by the DM self-interactions, which become larger when the cross-section is larger.
Finally, the WDM satellite population is less numerous than in CDM. Contrary to SIDM, a simple interpretation on what causes the suppression is less trivial. WDM forms fewer haloes and galaxies, but lower concentrations may also play a role in exacerbating the suppression \citep{Bose.2017}. We discuss this in more detail in the following section, where we estimate how important each of these effects are. As in CDM, the increase in the density threshold for star formation leads to enhanced suppression relative to their LT counterparts.
The shapes of the stellar mass functions are similar in all models at the higher stellar mass end, but there are large differences at lower stellar masses. Nonetheless, the models most similar to CDM LT (SIDM1 and SIDMvD) only show significant differences at $\sim 10^{6}\,\Msun$, whereas the other models already exhibit them at $\sim 10^{7}$ to $10^{8}\,\Msun$.
We show the observed stellar mass functions for the MW satellite population as a grey stepped line in Fig.~\ref{mstar_function}. It is worth noting that the MW satellite population beyond the eleven classical satellites is incomplete. This is because the surveys used for most discoveries below the classical satellite mass regime -- SDSS \citep{2015.ApJS} and DES \citep{Bechtol.2015, Drlica-Wagner.2015} -- are limited to certain regions of the sky and are flux-limited. Thus, the observational data should be considered a lower bound to the number of satellites. Nonetheless, the correction for incompleteness, based on the assumption of a CDM-like radial distribution, amounts to just a few satellites in the range shown here \citep{Newton.2018}.
When compared to observations, the \textit{average} satellite stellar mass functions of our haloes predict more or fewer low-mass satellites above $M_{*} = 10^{5}\Msun$ than observed, depending on the model. However, the total number of satellites depends on the mass of the host halo, generally increasing in more massive haloes. Thus, one may in principle choose a more massive one to increase the total number of satellites. This scatter driven by the variation in the host halo mass in our sample limits the constraining power of our comparison to the real MW. However, even if we had chosen haloes in a narrower mass bin, there would still be an intrinsic scatter due to different assembly histories. As shown by the error bars in Fig.~\ref{mstar_function}, which are representative of the scatter across all models, we find no significant inconsistencies with observations in the studied stellar mass range. Hence, we cannot rule out any based on stellar mass functions alone. Finally, we are not able to find LMC and SMC analogues around any of the studied haloes. This is likely because they are uncommon in isolated systems \citep{Santos.2021}.
\subsection{Maximum circular velocities}
An alternative way to examine how strongly the satellites have been stripped is through their $V_{\rm max}$ distributions. $V_{\rm max}$ decreases faster than the stellar mass of satellites, because the latter is more concentrated than the DM, which is stripped from the outskirts. We show the $V_{\rm max}$ distributions of the resolved $z=0$ satellites in Fig.~\ref{averaged_vmax_distribution}, averaged across all haloes in a given simulation.
We do not compare to observations since the maximum circular velocity of the halo is uncertain. Other quantities accessible to observations, such as the circular velocity at the half light radius, cannot be reliably measured in these simulations given their spatial resolution.
Similarly to the stellar mass functions, the CDM LT case represents an upper bound for all models, as it is the most resilient to tides. Although most models are similar at $\nu \geq 0.2 $, noticeable differences start to appear below that. Interestingly, the distributions for CDM HT and SIDMvD are almost exactly the same below that scale, illustrating the potentially degenerate effects that baryons and different dark matter models can have.
\subsection{Radial distributions}
Another important prediction of our simulations is the radial distribution of satellites. We explore this in Fig.~\ref{averaged_radial_distribution}, where we show this distribution for different stellar mass bins. They were measured by integrating the satellite orbits for each halo during the last 300~Myr of the simulation, as described in Section 3. We then computed the time average and the average across the seven haloes of our sample in each simulation. The observed MW satellite radial distribution is also shown for comparison.
We see that the shape of the radial distributions depends strongly on the mass range. At the high-mass end, satellites occupy the outer regions of the halo, whereas lower mass satellites are closer to the centre. In the former, CDM LT is the most concentrated of all the models, although the distributions are noisy because of the low number of objects of these masses.
Once we start considering all the resolved satellite population (middle panel), we note that the radius enclosing half the total population -- a measure of how concentrated these systems are -- is smallest for the WDM models. This is closely followed by the CDM variations. The least concentrated satellite systems are the SIDM ones, with their concentration decreasing with increasing cross-section. In other words, even though WDM might have fewer satellites above a certain mass compared to SIDM, they are more concentrated. Increasing the density threshold for star formation leads to resolved satellite distributions that are somewhat less concentrated than their LT counterparts. The relative increase is similar for CDM, WDM and SIDM10, about $6\%$.
Correcting for orphans yields radial distribution functions that are much more centrally concentrated than the resolved satellite population. This is unsurprising, since orphans correspond to ultra-faints and populate the central regions. Generally, the inclusion of orphans makes the shapes of the radial distributions more alike across simulations and similar to that of the MW at large radii. Nonetheless, there is evidence that the MW satellite system may be more concentrated than in the simulations, perhaps due to the presence of Magellanic systems in the real MW \citep{Santos.2021}. We note that the orphan population only includes galaxies whose resolved progenitors had peak stellar masses above the baryonic particle mass of the simulations. Haloes which would have had a total stellar mass less than one particle are not counted. Thus, the orphan populations here are only a partial census of the low-mass satellites. Finally, the fraction of orphans relative to the total satellite population increases with the extent of tidal disruption of satellites.
\section{The reason behind the suppression of satellite numbers}
We have given a broad overview of how the overall population properties of the $z=0$ surviving satellites differ across models. In summary, these comprise variations in the number of satellites and differences in the radial and $V_{\rm max}$ distributions. To investigate the underlying causes for these changes, we now turn to a more detailed comparison of how differences in the dark matter and baryon physics affect stripping, and thus satellite survivability.
For this, we select all satellites in the CDM LT model that are resolved at $z=0$ and identify their counterparts in the other simulations. We base our selection on this model because it has the largest surviving satellite population at $z=0$. The matching is done bijectively, as described in Section 3. In short, we identify the time at which the satellite progenitors attained their largest bound mass and cross-match the 100 most bound DM particles at that time. This minimises the effects of tidal stripping and potentially diverging evolutionary paths. Moreover, this method is also able to identify counterparts that have been disrupted before $z=0$.
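The particle-based cross-matching described above can be sketched as follows. This is a minimal illustration, not the exact pipeline of the paper: the function names and the minimum-overlap threshold are hypothetical, and each satellite is assumed to be represented by the set of IDs of its 100 most bound DM particles at the time of peak bound mass.

```python
# Sketch of bijective cross-matching between two simulations using the IDs
# of the most bound DM particles at the time of peak bound mass.
# All names and the min_shared threshold are illustrative assumptions.

def best_match(ids_a, candidates):
    """Return the candidate key sharing the most particle IDs with ids_a."""
    overlaps = {key: len(ids_a & ids_b) for key, ids_b in candidates.items()}
    best = max(overlaps, key=overlaps.get)
    return best, overlaps[best]

def bijective_match(sats_a, sats_b, min_shared=50):
    """Match satellites in A to B, keeping only mutual (bijective) pairs."""
    matches = {}
    for key_a, ids_a in sats_a.items():
        key_b, n_shared = best_match(ids_a, sats_b)
        # Require the match to hold in both directions and share enough IDs.
        key_a_back, _ = best_match(sats_b[key_b], sats_a)
        if key_a_back == key_a and n_shared >= min_shared:
            matches[key_a] = key_b
    return matches
```

Because matching is performed at the peak-mass time rather than at $z=0$, disrupted counterparts can still be identified, as noted above.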
We are able to find counterparts in the CDM HT and SIDM simulations for $\sim 99\%$ of the $z=0$ surviving CDM LT satellites. The number of identified counterparts in the WDM simulation is $\sim 88\%$, because the population size is smaller due to the cut-off in the power spectrum.
\subsection{Different fates for the same satellite}
We start by considering the evolution of a single example of a satellite identified in the CDM LT simulation, whose matched counterparts retain similar orbital parameters throughout their existence. This is an important condition when comparing the evolution of a single object across simulations, as small differences in position and velocity near pericentre may lead to very different subsequent orbits and thus the tides they experience. We aim to exclude differences in stripping that are caused by changes to the orbits.
The evolution in galactocentric distance of the chosen satellite is shown in the bottom panel of Fig. \ref{example_bound_mass_evolution}. As expected, we see no differences prior to the first pericentric passage. Afterwards, we observe some minor changes to the orbital phase, but all the counterparts that survived up to $z=0$ have experienced four pericentric passages since they first entered the virial region of the halo.
Focusing on the evolution of total bound mass, there are very few differences prior to infall. At early times there are transient decreases associated with ongoing mergers, during which SUBFIND switches the subhalo it identifies as the most massive within a FoF group. We see that the peak bound mass for a fixed DM model changes between the hydrodynamical and DMO counterparts. As explained in Section 4, this is caused by the early loss of baryons and the subsequent decrease in halo growth due to this reduction in mass. Finally, we note the significant delay in the formation of the WDM counterparts, which lowers their peak bound mass relative to their CDM and SIDM equivalents.
The bound mass of satellites decreases continuously after infall into the virial region, with periods of intense stripping occurring near pericentre. These are often accompanied by a peak-trough-peak pattern, caused by a decrease in the tidal radius of these objects near pericentre (and thus the bound mass assigned by SUBFIND). The resulting bound mass is lowered between consecutive apocentres. A measure of how stripped these objects were by any given pericentric passage can be estimated by taking the bound mass ratio of the peaks immediately before and after a pericentric passage.
We do this for the first pericentre, which is when the orbits are most similar across simulations. The CDM DMO, LT and HT versions lost 28\%, 41\% and 54\% of their total mass, respectively. For the WDM we note a similar ordering -- but differing magnitude -- of stripping: 56\%, 63\% and 67\%. The SIDM counterparts are stripped to varying degrees depending on the cross-section. The lowest value, $1\,\mathrm{cm}^{2}\,\mathrm{g}^{-1}$, exhibits little difference from CDM, as expected, since the structural changes at this mass scale are minimal (see the bottom panel of Fig.~\ref{density_ratios}). All of the SIDM10 versions lose a large fraction of mass, ranging from 60\% to 70\%. As expected, the stripped mass fraction in SIDMvD lies between the cases for the lowest and highest cross-sections.
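The per-pericentre stripping measure used here -- the bound-mass ratio of the peaks immediately bracketing a pericentric passage -- can be sketched as follows. The peak finding is deliberately simple and all names are illustrative, not those of our analysis code.

```python
# Estimate the mass fraction stripped at a pericentric passage from a bound-
# mass time series: take the ratio of the local maxima (apocentre peaks)
# immediately before and after the passage. Illustrative sketch only.

def local_maxima(mass):
    """Indices of interior local maxima of a 1D sequence."""
    return [i for i in range(1, len(mass) - 1)
            if mass[i] >= mass[i - 1] and mass[i] > mass[i + 1]]

def stripped_fraction(mass, i_peri):
    """Fraction of bound mass lost across the pericentre at index i_peri."""
    peaks = local_maxima(mass)
    before = [i for i in peaks if i < i_peri]
    after = [i for i in peaks if i > i_peri]
    # Fall back to the endpoints if no peak exists on one side.
    m_before = mass[before[-1]] if before else mass[0]
    m_after = mass[after[0]] if after else mass[-1]
    return 1.0 - m_after / m_before
```

Using the peaks, rather than the instantaneous bound mass, avoids the transient peak-trough-peak pattern near pericentre caused by the shrinking tidal radius.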
The cumulative effect of subsequent pericentres and continuous stripping leads to different subhalo masses at $z=0$. In some cases, like WDM HT or all of the SIDM10 counterparts, the mass loss causes the subhalo to be disrupted before $z=0$. For those that survive, we see a clear separation between different cross-section values, and between runs in which baryons are present or in which cores form due to a high density threshold for star formation.
To check whether the differences in $z=0$ mass between DMO and hydrodynamical counterparts are due to enhanced stripping or simply caused by a lower peak bound mass, we compute the relative mass loss, $1 - M(z=0)/M_{\rm peak}$. For CDM, we measure $84\%$, $91\%$ and $95\%$ for the DMO, LT and HT versions, respectively. There are no differences for the WDM cases, with both the DMO and LT versions losing $94\%$ of their mass. The SIDM cases do show some differences, but less pronounced than in CDM: an increase of about 3 percentage points in the mass lost in the hydrodynamical simulation. Note that this comparison does not attribute the increase in stripping in the hydrodynamical simulations to any single origin; it can be caused by a combination of the presence of a massive stellar disc, the contraction of the host halo and changes in the satellite density profiles.
\subsection{Disruption rates}
Based on the previous example, as well as on the decrease in satellite numbers in some models even when the number of progenitors is the same, we expect many more satellites to be disrupted before $z = 0$ in the non-CDM LT counterparts. We explore this in Fig.~\ref{disruption_redshift_comparison}, where we show the cumulative fraction of CDM LT counterparts in the other simulations that are disrupted before $z = 0$, as a function of the redshift at which they were last resolved. This is only computed for the luminous subset of the matched populations; this does not significantly alter the SIDM and CDM HT numbers, since $\sim 96\%$ and $\sim 91\%$ of matched satellites are luminous, respectively. The difference in the SIDM case is likely caused by slight differences in the evolutionary histories, since whether or not a halo contains a single bound star particle becomes a stochastic process. In the case of CDM HT the difference stems from a combination of this and the fact that the onset of star formation occurs at later times due to the increase in the density threshold for star formation. Finally, the WDM luminous matched fractions in the LT and HT versions are $66\%$ and $55\%$, respectively. This results from the delay in the formation time of the satellite progenitors, which decreases the number of would-be satellites that cross the mass threshold to trigger the gravitational collapse of gas (as shown by the halo occupation fractions in Fig.~\ref{occupation_fraction}).
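The cumulative disrupted-fraction statistic plotted in Fig.~\ref{disruption_redshift_comparison} can be sketched as follows, assuming one has the last-resolved redshifts of the disrupted counterparts and the size of the matched luminous sample; the function name and inputs are illustrative.

```python
# Cumulative fraction of matched satellites disrupted before z = 0, as a
# function of the redshift at which they were last resolved. Illustrative
# sketch: inputs and names are assumptions, not the paper's code.

def cumulative_disrupted_fraction(z_last_resolved, n_matched):
    """Return (z_grid, fraction): the cumulative fraction of the matched
    sample disrupted by each last-resolved redshift, ordered high z -> 0."""
    z_sorted = sorted(z_last_resolved, reverse=True)  # from high z towards z = 0
    frac = [(i + 1) / n_matched for i in range(len(z_sorted))]
    return z_sorted, frac
```

Normalising by the full matched luminous sample, rather than by the number of disrupted objects, makes the curve's endpoint directly comparable across models as the total disrupted fraction.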
Focusing on the total fraction of disrupted satellites, we observe that more than half of all satellite progenitors in the LT and HT SIDM10 simulations are disrupted before $z=0$. The lack of a significant difference between the two is likely because the gas density threshold for star formation does not alter the internal structure of these SIDM haloes significantly, unlike in models where it is able to turn a cusp into a core. As the cross-section value is lowered, so is the fraction of disrupted satellites: $31\%$ and $18\%$ for the SIDMvD and SIDM1 models, respectively. About a third of all satellites in CDM HT are disrupted before $z=0$, likely due to the structural changes caused by gas blowouts. Finally, the low threshold version of WDM loses only a small fraction of the luminous population ($\sim 13 \%$). The high threshold version, as in the other cases, exhibits a slight enhancement in disruption rates. We conclude that the decrease in the number of satellites in the WDM cosmologies, compared to SIDM, is largely due to the suppression in the number of galaxies that are able to form.
\section{Discussion}
There are clear differences in the internal structure of dark matter haloes amongst CDM, WDM and SIDM models in DM-only simulations. However, these differences are greatly reduced by the effects of baryons. Depending on the halo mass range, choices regarding the subgrid physics may lead to comparable halo density profiles, as shown in the middle row of Fig.~\ref{density_ratios}. The similarity in the density profiles, in turn, leads to similar stripping histories. This results in degeneracies in the way in which baryon effects and the nature of the DM affect the properties of the satellite population. For example, the satellite $V_{\rm max}$ functions in SIDM with velocity-dependent cross-section and in CDM with a high density threshold for star formation are very similar (see Fig.~\ref{averaged_vmax_distribution}).
Our analysis thus indicates that the current freedom in the modelling of star formation and feedback in simulations makes it difficult to disentangle their effects from those arising from the nature of the dark matter. Although we have considered only a subset of all the possible model variations, our work suffices to highlight the problem and the current limitations on interpreting observational results. We thus conclude that it will be challenging to constrain the nature of dark matter based solely on the properties of $M_{*} \geq 10^{5} \Msun$ satellite systems.
A promising avenue to explore further is the population of ultra-faint satellites. As shown in the lower row of Fig.~\ref{density_ratios}, the effects of baryons become increasingly less important in lower mass haloes, a direct consequence of their low baryonic content \citep{DiCintio.2014, Tollet.2016}. The internal structural differences in the haloes of ultra-faint satellites, driven by the nature of dark matter, are largely preserved in the presence of baryons and affect their resilience to tidal stripping. Therefore, the properties of the ultra-faint population may retain an imprint of the nature of the dark matter. The study of these galaxies requires very high-resolution simulations that can resolve not only the formation of the faintest systems but also track their evolution as they fall into the halo and undergo tidal stripping.
Indeed, adequate numerical resolution is essential to model tidal stripping correctly. Work based on idealised collisionless simulations suggests that cuspy dark matter haloes are resilient to tides and always leave a small bound remnant behind \citep{Errani.2021}. However, the limited resolution of cosmological simulations makes this regime difficult to follow. Furthermore, when the subhaloes are not sufficiently well resolved, the rate at which they are stripped becomes artificially high \citep{vandenBosch.2018}. Studying ultra-faint satellites and unveiling their constraining power on the nature of the dark matter will thus require very high-resolution simulations.
Our analysis relies on several assumptions, and there are limitations inherent to our simulations. Firstly, our sample selection is based solely on the virial mass of FoF groups at $z = 0$. This criterion does not account for other factors relevant to tidal stripping, such as the mass of the host's stellar component. The reference EAGLE subgrid model underpredicts the stellar mass of MW-mass haloes by a factor of $\sim 2$ \citep{Schaye.2015}. Changes to the subgrid physics alter, to some degree, the resulting stellar masses of individual haloes. However, the average stellar-to-halo-mass relation is insensitive to the models considered here. Thus, we are confident that the relative differences we observe across the average satellite populations are due to structural changes in the subhaloes rather than differences in the central galaxy properties.
Additionally, the modelling of orphans is derived from convergence studies based on the Millennium I and II simulations. These are collisionless CDM simulations, which means that the effects of baryons, self-interactions and the presence of central density cores are not considered. Their dark matter resolution is several orders of magnitude lower ($10^{9}$ and $9\times 10^{6} \Msun$, respectively) than in the simulations used in this work. Our study extrapolates their findings to different DM models and higher mass resolutions, and this framework will need to be extended to alternative DM models.
\section{Conclusions}
We have simulated the assembly of haloes with masses within a factor of two of the MW, and their satellite systems, in a cosmological setting using different dark matter models. Each halo was run both as a DMO simulation and as a hydrodynamical simulation including baryonic physics via the EAGLE subgrid model, which, under certain parameter choices, can lead to the formation of baryon-driven cores. This was done with the aim of studying how these changes affect the satellite populations between pairs of matched haloes and of identifying systematic differences.
Firstly, we saw significant differences at the field halo level across different simulations:
\begin{itemize}
\item Low mass haloes in hydrodynamical simulations lose their baryons at early times, leading to delays in their formation time and, correspondingly, lower $z = 0$ virial masses.
\item The cut-off in the power spectrum of WDM leads to a smaller number of low mass haloes forming. It also leads to a formation time delay much larger than that caused by baryons. This further suppresses the galaxy population, since the halo occupation fractions change and thus galaxy formation becomes less efficient.
\item The density profiles of CDM and WDM are virtually indistinguishable at high masses. They are significantly different at low masses, resulting from different concentrations that reflect their delayed formation times.
\item All SIDM haloes show significant differences with respect to the CDM density profiles, due to the formation of cores whose size scales with cross-section value. However, the inclusion of baryons makes the differences less apparent. At high halo masses, this is due to an interplay between self-scatterings and a contraction caused by the central galaxy. At low masses, it is caused by decreases in the overall DM density due to the delay in formation time resulting from slower growth triggered by the loss of baryons. This consequently affects the scattering rate between particles and thus the radius at which haloes are considered to have been thermalised.
\end{itemize}
All of these changes propagate to the infall properties of satellites, either via a reduction in the accreted number or structural changes that alter their subsequent evolution under the influence of tides.
\begin{itemize}
\item The delay in formation time of haloes, either via the loss of baryons, a cut-off in the power spectrum or both, leads to lower bound masses at infall.
\item Structural differences across models lead to different stripping rates, which are noticeable even after just one pericentric passage.
\item In SIDM, increasing cross-sections lead to larger cores and thus more efficient stripping. At this resolution level, the lowest cross-section value used in this study, $1 \, \mathrm{cm}^2\,\mathrm{g}^{-1}$, yields predictions with little to no difference compared to CDM.
\item Increasing the density threshold for star formation allows the gas to accumulate in larger quantities before being blown out via supernova feedback. This results in greater gravitational coupling to the DM, allowing the gas to flatten the inner DM density profile as it is removed. This leads to more efficient stripping relative to their cuspy counterparts. The effect is minor in SIDM models, since haloes already have flat inner density profiles due to DM self-scattering.
\end{itemize}
The above changes lead to a suppression in the number of satellites at $z=0$ and lower $V_{\rm max}$ values for those surviving. In SIDM, this is solely caused by the enhanced stripping that results from their flat inner DM density profiles. The lack of satellites in WDM is almost entirely attributable to fewer haloes (and galaxies) forming in the first place. Models in which gas blowouts are able to flatten the density profiles of haloes also show a suppression in satellite numbers, even in CDM and WDM. In some cases, they lead to entirely degenerate satellite system properties, such as the stellar mass distribution in CDM with baryon-driven cores and velocity-dependent SIDM.
In summary, despite differences amongst dark matter models in DM-only simulations, the presence of baryons can erase the differences arising from the nature of the dark matter. Our analysis demonstrates that the study of the satellite population in the mass range $M_{*} \geq 10^{5}\, \Msun$ is unlikely to set informative constraints on the nature of the DM. The lack of constraining power of massive satellites, however, does not rule out the possibility that less massive systems, particularly ultra-faint dwarfs, could be sensitive to the properties of the DM. Understanding and quantifying these constraints will require the development of dedicated, extremely high-resolution cosmological simulations, an endeavour worth pursuing.
\section*{Acknowledgements}
ABL acknowledges support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (GA 101026328) and UNIMIB's Fondo di Ateneo Quota Competitiva (project 2020-CONT-0139). CSF, SMC and VJFM acknowledge support by the European Research Council (ERC)
through Advanced Investigator grant DMIDAS (GA 786910) and Consolidated Grant ST/T000244/1. This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.
\section*{Data Availability}
The data used in this study can be made available upon reasonable request to the corresponding author.
\bibliographystyle{mnras}
\bibliography{references}
\bsp %
\label{lastpage}
Title:
JWST Imaging of Earendel, the Extremely Magnified Star at Redshift $z=6.2$
Abstract: The gravitationally lensed star WHL0137-LS, nicknamed Earendel, was
identified with a photometric redshift $z_{phot} = 6.2 \pm 0.1$ based on images
taken with the Hubble Space Telescope. Here we present James Webb Space
Telescope (JWST) Near Infrared Camera (NIRCam) images of Earendel in 8 filters
spanning 0.8--5.0$\mu$m. In these higher resolution images, Earendel remains a
single unresolved point source on the lensing critical curve, increasing the
lower limit on the lensing magnification to $\mu > 4000$ and restricting the
source plane radius further to $r < 0.02$ pc, or $\sim 4000$ AU. These new
observations strengthen the conclusion that Earendel is best explained by an
individual star or multiple star system, and support the previous photometric
redshift estimate. Fitting grids of stellar spectra to our photometry yields a
stellar temperature of $T_{\mathrm{eff}} \simeq 13000$--16000 K assuming the
light is dominated by a single star. The delensed bolometric luminosity in this
case ranges from $\log(L) = 5.8$--6.6 $L_{\odot}$, which is in the range where
one expects luminous blue variable stars. Follow-up observations, including
JWST NIRSpec scheduled for late 2022, are needed to further unravel the nature
of this object, which presents a unique opportunity to study massive stars in
the first billion years of the universe.
https://export.arxiv.org/pdf/2208.09007
\title{JWST Imaging of Earendel, the Extremely Magnified Star at Redshift $z=6.2$}
\correspondingauthor{Brian Welch}
\email{bwelch7@jhu.edu}
\author[0000-0003-1815-0114]{Brian Welch}
\affiliation{Center for Astrophysical Sciences, Department of Physics and Astronomy, The Johns Hopkins University,
3400 N Charles St. Baltimore, MD 21218, USA}
\author[0000-0001-7410-7669]{Dan Coe}%
\affiliation{Space Telescope Science Institute (STScI), 3700 San Martin Drive, Baltimore, MD 21218, USA}
\affiliation{Association of Universities for Research in Astronomy (AURA) for the European Space Agency (ESA), STScI, Baltimore, MD, USA}
\affiliation{Center for Astrophysical Sciences, Department of Physics and Astronomy, The Johns Hopkins University,
3400 N Charles St. Baltimore, MD 21218, USA}
\author[0000-0003-1096-2636]{Erik Zackrisson}%
\affiliation{Observational Astrophysics, Department of Physics and Astronomy, Uppsala University, Box 516, SE-751 20 Uppsala, Sweden}
\author[0000-0001-9336-2825]{S.~E.~de~Mink}
\affiliation{Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Straße 1, 85741 Garching, Germany}
\affiliation{Anton Pannekoek Institute for Astronomy and GRAPPA, University of Amsterdam, NL-1090 GE Amsterdam, The Netherlands}
\author[0000-0002-5269-6527]{Swara Ravindranath}
\affiliation{Space Telescope Science Institute (STScI), 3700 San Martin Drive, Baltimore, MD 21218, USA}
\author[0000-0003-2861-3995]{Jay Anderson}
\affiliation{Space Telescope Science Institute (STScI), 3700 San Martin Drive, Baltimore, MD 21218, USA}
\author[0000-0003-2680-005X]{Gabriel Brammer}
\affiliation{Cosmic Dawn Center (DAWN), Copenhagen, Denmark}
\affiliation{Niels Bohr Institute, University of Copenhagen, Jagtvej 128, Copenhagen, Denmark}
\author[0000-0002-7908-9284]{Larry Bradley}
\affiliation{Space Telescope Science Institute (STScI), 3700 San Martin Drive, Baltimore, MD 21218, USA}
\author[0000-0002-4168-239X]{Jinmi Yoon}
\affiliation{Space Telescope Science Institute (STScI), 3700 San Martin Drive, Baltimore, MD 21218, USA}
\affiliation{Joint Institute for Nuclear Astrophysics - Center for the Evolution of the Elements, USA}
\author[0000-0003-3142-997X]{Patrick Kelly}
\affiliation{School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455, USA}
\author[0000-0001-9065-3926]{Jose M. Diego}%
\affiliation{Instituto de F\'isica de Cantabria (CSIC-UC). Avda. Los Castros s/n. 39005 Santander, Spain}
\author[0000-0001-8156-6281]{Rogier Windhorst}%
\affiliation{School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA}
\author[0000-0002-0350-4488]{Adi Zitrin}
\affiliation{Physics Department, Ben-Gurion University of the Negev, P.O. Box 653, Be'er-Sheva 84105, Israel}
\author[0000-0001-7399-2854]{Paola Dimauro}
\affiliation{INAF - Osservatorio Astronomico di Roma, via di Frascati 33, 00078 Monte Porzio Catone, Italy}
\author[0000-0002-6090-2853]{Yolanda Jim\'enez-Teja}
\affiliation{Instituto de Astrof\'isica de Andaluc\'ia, Glorieta de la Astronom\'ia s/n, 18008 Granada, Spain}
\affiliation{Observatório Nacional - MCTI (ON), Rua Gal. José Cristino 77, São Cristóvão, 20921-400, Rio de Janeiro, Brazil}
\author[0000-0002-5258-8761]{Abdurro'uf}
\affiliation{Institute of Astronomy and Astrophysics, Academia Sinica, 11F of AS/NTU Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan, R.O.C.}
\affiliation{Center for Astrophysical Sciences, Department of Physics and Astronomy, The Johns Hopkins University,
3400 N Charles St. Baltimore, MD 21218, USA}
\affiliation{Space Telescope Science Institute (STScI),
3700 San Martin Drive,
Baltimore, MD 21218, USA}
\author[0000-0001-6342-9662]{Mario Nonino}
\affiliation{INAF-Trieste Astronomical Observatory, Via Bazzoni 2, 34124, Trieste, Italy}
\author[0000-0003-3108-9039]{Ana Acebron}
\affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, Via Celoria 16, I-20133 Milano, Italy}
\affiliation{INAF - IASF Milano, via A. Corti 12, I-20133 Milano, Italy}
\author[0000-0002-8144-9285]{Felipe Andrade-Santos}
\affiliation{Department of Liberal Arts and Sciences, Berklee College of Music, 7 Haviland Street, Boston, MA 02215, USA}
\affiliation{Center for Astrophysics \textbar\ Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA}
\author[0000-0001-9364-5577]{Roberto J. Avila}
\affiliation{Space Telescope Science Institute (STScI), 3700 San Martin Drive, Baltimore, MD 21218, USA}
\author[0000-0003-1074-4807]{Matthew B. Bayliss}
\affiliation{Department of Physics, University of Cincinnati, Cincinnati, OH 45221, USA}
\author{Alex Ben\'itez}
\affiliation{King's College London, University of London, Strand, London WC2R 2LS, UK}
\author[0000-0002-8785-8979]{Tom Broadhurst}
\affiliation{Department of Theoretical Physics, University of the Basque Country UPV/EHU, Bilbao, Spain}
\affiliation{Donostia International Physics Center (DIPC), 20018 Donostia, Spain}
\affiliation{IKERBASQUE, Basque Foundation for Science, Bilbao, Spain}
\author[0000-0003-0883-2226]{Rachana Bhatawdekar}
\affiliation{European Space Agency, ESA/ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, NL}
\author[0000-0001-5984-0395]{Maru{\v s}a Brada{\v c}}
\affiliation{University of Ljubljana, Department of Mathematics and Physics, Jadranska ulica 19, SI-1000 Ljubljana, Slovenia}
\affiliation{Department of Physics and Astronomy, University of California Davis, 1 Shields Avenue, Davis, CA 95616, USA}
\author[0000-0001-6052-3274]{Gabriel B. Caminha}
\affiliation{Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Straße 1, 85741 Garching, Germany}
\author[0000-0003-1060-0723]{Wenlei Chen}
\affiliation{School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455, USA}
\author[0000-0002-1722-6343]{Jan Eldridge}
\affiliation{Department of Physics, University of Auckland, Private Bag 92019, Auckland, New Zealand}
\author[0000-0002-5794-4286]{Ebraheem Farag}
\affiliation{Arizona State University, Tempe, AZ 85287, USA}
\author[0000-0001-5097-6755]{Michael Florian}
\affiliation{Department of Astronomy, Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA}
\author[0000-0003-1625-8009]{Brenda Frye}
\affiliation{Department of Astronomy, Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA}
\author[0000-0001-7201-5066]{Seiji Fujimoto}
\affiliation{Cosmic Dawn Center (DAWN), Copenhagen, Denmark}
\affiliation{Niels Bohr Institute, University of Copenhagen, Jagtvej 128, Copenhagen, Denmark}
\author[0000-0001-6395-6702]{Sebastian Gomez}
\affiliation{Space Telescope Science Institute (STScI),
3700 San Martin Drive,
Baltimore, MD 21218, USA}
\author[0000-0002-6586-4446]{Alaina Henry}
\affiliation{Space Telescope Science Institute (STScI),
3700 San Martin Drive,
Baltimore, MD 21218, USA}
\affiliation{Center for Astrophysical Sciences, Department of Physics and Astronomy, The Johns Hopkins University,
3400 N Charles St.
Baltimore, MD 21218, USA}
\author[0000-0003-4512-8705]{Tiger Y.-Y Hsiao}
\affiliation{Center for Astrophysical Sciences, Department of Physics and Astronomy, The Johns Hopkins University,
3400 N Charles St.
Baltimore, MD 21218, USA}
\author[0000-0001-6251-4988]{Taylor A. Hutchison}
\affiliation{Department of Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA}
\affiliation{George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A\&M University, College Station, TX, 77843-4242 USA}
\author[0000-0003-4372-2006]{Bethan L. James}
\affiliation{Space Telescope Science Institute (STScI),
3700 San Martin Drive,
Baltimore, MD 21218, USA}
\author[0000-0002-8717-127X]{Meridith Joyce}
\affiliation{Space Telescope Science Institute (STScI),
3700 San Martin Drive,
Baltimore, MD 21218, USA}
\author[0000-0003-1187-4240]{Intae Jung}
\affiliation{Astrophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA}
\affiliation{Department of Physics, The Catholic University of America, Washington, DC 20064, USA}
\affiliation{Center for Research and Exploration in Space Science and Technology, NASA/GSFC, Greenbelt, MD 20771}
\author[0000-0002-3475-7648]{Gourav Khullar}
\affiliation{Department of Astronomy and Astrophysics, University of
Chicago, 5640 South Ellis Avenue, Chicago, IL 60637}
\affiliation{Kavli Institute for Cosmological Physics, University of
Chicago, 5640 South Ellis Avenue, Chicago, IL 60637}
\author[0000-0003-2366-8858]{Rebecca L. Larson}
\altaffiliation{NSF Graduate Fellow}
\affiliation{The University of Texas at Austin, Department of Astronomy, Austin, TX, United States}
\author[0000-0003-3266-2001]{Guillaume Mahler}
\affiliation{Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE, UK}
\affiliation{Centre for Extragalactic Astronomy, Durham University, South Road, Durham DH1 3LE, UK}
\author[0000-0001-8057-5880]{Nir Mandelker}
\affiliation{Centre for Astrophysics and Planetary Science, Racah Institute of Physics, The Hebrew University, Jerusalem, 91904, Israel}
\author[0000-0003-0503-4667]{Stephan McCandliss}
\affiliation{Center for Astrophysical Sciences, Department of Physics and Astronomy, The Johns Hopkins University,
3400 N Charles St.
Baltimore, MD 21218, USA}
\author[0000-0002-8512-1404]{Takahiro Morishita}
\affiliation{IPAC, California Institute of Technology, MC 314-6, 1200 E. California Boulevard, Pasadena, CA 91125, USA}
\author{Rosa Newshore}
\affiliation{Department of Physics, Clark University, Worcester, MA 01610-1477, USA}
\author[0000-0002-5222-5717]{Colin Norman}
\affiliation{Center for Astrophysical Sciences, Department of Physics and Astronomy, The Johns Hopkins University,
3400 N Charles St.
Baltimore, MD 21218, USA}
\affiliation{Space Telescope Science Institute (STScI),
3700 San Martin Drive,
Baltimore, MD 21218, USA}
\author{Kyle O'Connor}
\affiliation{University of South Carolina, 712 Main St., Columbia, SC 29208, USA}
\author[0000-0001-5851-6649]{Pascal A. Oesch}
\affiliation{Department of Astronomy, University of Geneva, Chemin Pegasi 51, 1290 Versoix, Switzerland}
\affiliation{Cosmic Dawn Center (DAWN), Copenhagen, Denmark}
\affiliation{Niels Bohr Institute, University of Copenhagen, Jagtvej 128, Copenhagen, Denmark}
\author[0000-0003-3484-399X]{Masamune Oguri}
\affiliation{Center for Frontier Science, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan}
\affiliation{Department of Physics, Graduate School of Science, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba 263-8522, Japan}
\author[0000-0002-1049-6658]{Masami Ouchi}
\affiliation{National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan}
\affiliation{Institute for Cosmic Ray Research, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8582, Japan}
\affiliation{Kavli Institute for the Physics and Mathematics of the Universe (WPI), University of Tokyo, Kashiwa, Chiba 277-8583, Japan}
\author[0000-0002-9365-7989]{Marc Postman}
\affiliation{Space Telescope Science Institute (STScI),
3700 San Martin Drive,
Baltimore, MD 21218, USA}
\author[0000-0002-7627-6551]{Jane R.~Rigby}
\affiliation{Observational Cosmology Lab, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA}
\author[0000-0003-0894-1588]{Russell E. Ryan Jr}
\affiliation{Space Telescope Science Institute (STScI),
3700 San Martin Drive,
Baltimore, MD 21218, USA}
\author[0000-0001-9851-8753]{Soniya Sharma}
\affiliation{Observational Cosmology Lab, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA}
\author[0000-0002-7559-0864]{Keren Sharon}
\affiliation{Department of Astronomy, University of Michigan, 1085 S. University Ave, Ann Arbor, MI 48109, USA}
\author[0000-0002-6338-7295]{Victoria Strait}
\affiliation{Cosmic Dawn Center (DAWN), Copenhagen, Denmark}
\affiliation{Niels Bohr Institute, University of Copenhagen, Jagtvej 128, Copenhagen, Denmark}
\author[0000-0002-7756-4440]{Louis-Gregory Strolger}
\affiliation{Space Telescope Science Institute (STScI),
3700 San Martin Drive,
Baltimore, MD 21218, USA}
\author[0000-0002-0474-159X]{F.X.~Timmes}
\affiliation{School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA}
\affiliation{Joint Institute for Nuclear Astrophysics - Center for the Evolution of the Elements, USA}
\author[0000-0003-3631-7176]{Sune Toft}
\affiliation{Cosmic Dawn Center (DAWN), Copenhagen, Denmark}
\affiliation{Niels Bohr Institute, University of Copenhagen, Jagtvej 128, Copenhagen, Denmark}
\author[0000-0001-9391-305X]{Michele Trenti}
\affiliation{School of Physics, University of Melbourne, Parkville VIC 3010, Australia}
\affiliation{ARC Centre of Excellence for All-Sky Astrophysics in 3 Dimensions, University of Melbourne, Parkville VIC 3010, Australia}
\author[0000-0002-5057-135X]{Eros Vanzella}
\affiliation{INAF -- OAS, Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, via Gobetti 93/3, I-40129 Bologna, Italy}
\author[0000-0002-4853-1076]{Anton Vikaeus}
\affiliation{Observational Astrophysics, Department of Physics and Astronomy, Uppsala University, Box 516, SE-751 20 Uppsala, Sweden}
\keywords{gravitational lensing, massive stars}
\section{Introduction}
Massive galaxy clusters magnify the distant universe through strong gravitational lensing.
These cosmic telescopes provide improved spatial resolution over what cutting-edge telescopes can provide alone, allowing the identification of small-scale structures in high redshift galaxies \citep[e.g.,][]{Welch22_clumps,Vanzella22_sunburst,Mestric22}.
In certain cases of precise alignment, galaxy clusters can magnify the light from individual stars by factors of thousands, allowing these stars to be seen above the light of their host galaxies.
The first of these were discovered as transients in images from the Hubble Space Telescope (\HST), at redshifts ranging from $z \sim 1 - 1.5$ \citep[][]{Kelly18,Rodney18,Chen19,Kaurov19_lensedstar}.
Recent discoveries have pushed lensed star observations to greater distances, including recent discoveries at $z = 2.37$ \citep{Diego22_godzilla}, another at $z = 2.65$ with the James Webb Space Telescope (\JWST) \citep[][]{Chen22_lensedstar}, and a star at $z = 6.2$ discovered in \HST\ imaging \citep[][]{Welch2022_earendel}.
\JWST\ \citep{Gardner06_JWST}, which has recently completed commissioning and begun science operations \citep{Rigby22_JWSTcommissioning}, will continue improving our ability to study distant lensed stars in detail.
Besides already discovering new lensed stars \citep[][]{Chen22_lensedstar}, \JWST\ will enable more detailed study of the highest redshift lensed stars.
The combination of this powerful new observatory and gravitational lensing could also be our best chance at observing Population III stars directly \citep{Windhorst18}.
In this paper, we present new \JWST\ imaging of the lensed star Earendel \citep[RA = 01:37:23.25, Dec = $-$8:27:52.27;][]{Welch2022_earendel}.
Earendel is in a $\zphot = 6.2 \pm 0.1$ galaxy dubbed the Sunrise Arc,
the most highly magnified galaxy at $z \sim 6$ \citep{Salmon2020}.
It is lensed by the massive $z = 0.566$ galaxy cluster WHL J013719.8--082841 (henceforth \WHL; RA = 01:37:25.0, Dec = $-$08:27:23, J2000;
\citealt{WHL12,WenHan15}).
We describe the \JWST\ imaging data in Section \ref{sec:data}.
The photometric redshift estimates for the Sunrise Arc are presented in Section \ref{sec:photoz}, and the updated magnification and size constraints are described in Section \ref{sec:magnif}.
Our photometric temperature estimate is discussed in Section \ref{sec:temp}.
We investigate possible variability of the source in Section \ref{sec:variable}.
Our results are presented and contextualized in Section \ref{sec:results}.
Finally, we end with our conclusions in Section \ref{sec:conclusions}.
Throughout, we assume a flat $\Lambda$CDM cosmological model, with $\Omega_{\rm M} = 0.3$, $\Omega_{\Lambda} = 0.7$, and the Hubble constant $H_0 = 70~ \text{km s}^{-1}\text{Mpc}^{-1}$.
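For reference, the luminosity distance to Earendel implied by these cosmological parameters can be checked with a short numerical integration. This is an illustrative pure-Python sketch (the paper does not specify which distance code it uses):

```python
import numpy as np

# Adopted flat LCDM cosmology
H0, Om, OL, c = 70.0, 0.3, 0.7, 299792.458  # km/s/Mpc, -, -, km/s

def luminosity_distance(z, n=100000):
    """Luminosity distance in Mpc, via midpoint-rule integration of 1/E(z)."""
    dz = z / n
    zs = (np.arange(n) + 0.5) * dz              # midpoints of each step
    Ez = np.sqrt(Om * (1 + zs)**3 + OL)         # dimensionless Hubble rate
    d_c = (c / H0) * np.sum(dz / Ez)            # comoving distance, Mpc
    return (1 + z) * d_c

d_l = luminosity_distance(6.2)                  # ~60 Gpc at Earendel's redshift
```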
Data products, reduced images, lens models, and analysis code are available via our website.\footnote{\url{https://cosmic-spring.github.io}}
\section{Data}
\label{sec:data}
\begin{table}[]
\centering
\begin{tabular}{c c c c c}
Filter & $\lambda$ & Exp.\ Time & Flux Density & AB mag \\
 & ($\mu$m) & (s) & (nJy) & \\
\hline
F090W & 0.8--1.0 & ~\,2104 & $32 \pm 5$ & $27.77 \pm 0.19$ \\
F115W & 1.0--1.3 & ~\,2104 & $57 \pm 7$ & $27.05 \pm 0.14 $ \\
F150W & 1.3--1.7 & ~\,2104 & $50 \pm 6$ & $27.20 \pm 0.14$ \\
F200W & 1.7--2.2 & ~\,2104 & $43 \pm 3$ & $27.46 \pm 0.09$ \\
F277W & 2.4--3.1 & ~\,2104 & $63 \pm 5$ & $26.90 \pm 0.08$ \\
F356W & 3.1--4.0 & ~\,2104 & $66 \pm 4$ & $26.86 \pm 0.06$ \\
F410M & 3.8--4.3 & ~\,2104 & $64 \pm 5$ & $26.89 \pm 0.08$ \\
F444W & 3.8--5.0 & ~\,2104 & $62 \pm 6$ & $26.83 \pm 0.11$
\end{tabular}
\caption{\JWST\ NIRCam photometry of Earendel in 8 filters, measured as described in Appendix \ref{phot_appendix}.}
\label{tab:photometry}
\end{table}
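The flux density and magnitude columns are related by the standard AB zero point of 3631 Jy. A one-line conversion illustrates this; note the tabulated magnitudes are consensus averages over independent methods (Appendix \ref{phot_appendix}), so they need not match a direct conversion of the averaged fluxes exactly:

```python
import math

def nJy_to_ab(f_njy):
    """AB magnitude from a flux density in nJy (AB zero point: 3631 Jy)."""
    return -2.5 * math.log10(f_njy * 1e-9 / 3631.0)

m = nJy_to_ab(32)   # F090W's 32 nJy corresponds to AB ~ 27.6
```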
\subsection{\JWST\ NIRCam}
Earendel was first identified in \HST\ imaging taken as part of the Reionization Lensing Cluster Survey \citep[RELICS; GO 14096;][]{Coe19_relics} and a follow-up program (GO 15842, PI Coe), as described in \cite{Welch2022_earendel}.
We recently obtained additional imaging from the newly commissioned \JWST\ NIRCam instrument as part of Cycle 1 GO program 2282 (PI Coe).
These images span a wavelength range of 0.8--5 $\mu$m in eight filters, presented in Table \ref{tab:photometry}.
A color image of the Sunrise Arc hosting Earendel is shown in Figure \ref{fig:color}, and image stamps of Earendel in each filter are shown in Figure \ref{fig:stamps}.
Each filter was observed for a total of 2104 seconds of exposure time.
We utilized four dithers to cover the 5\arcsec\ gaps between the short wavelength (SW; $\lambda < 2.4 \mu$m) detectors, as well as to improve the resolution of our final drizzled images and minimize the impact of image artifacts and bad pixels.
Additional imaging in four filters (F090W, F115W, F277W, F356W) and NIRSpec spectroscopy for GO 2282 are expected in December 2022.
We retrieved \JWST\ pipeline\footnote{\url{https://jwst-pipeline.readthedocs.io}}
Level 2b data products ({\tt cal.fits}) from
MAST\footnote{\url{https://mast.stsci.edu}}.
They included updated zeropoints based on in-flight data,
delivered to CRDS version 11.16.3
with {\tt jwst\_0942.pmap} reference files
on 2022-07-29, the day before these observations.
We processed the \JWST\ Level 2 data products using the \texttt{grizli} pipeline\footnote{\url{https://github.com/gbrammer/grizli}} \citep{Brammer21_grizli}.
This processing pipeline reduces striping from 1/f noise and masks ``snowballs'' in the images, and includes a zeropoint correction based on observations of the LMC\footnote{\url{https://github.com/gbrammer/grizli/pull/107}}. These corrections match the later {\tt jwst\_0989.pmap} reference file zeropoints to within a few percent.
All images are then aligned to a common WCS registered to GAIA DR3 \citep{Gaia_EDR3}.
The pipeline next drizzles the images to a common pixel grid of 0\farcs04 per pixel using the \texttt{astrodrizzle} software \citep{MultiDrizzle,AstroDrizzle,DrizzlePac}.
The short wavelength NIRCam images are drizzled to a higher resolution grid of 0\farcs02 per pixel, aligned to the lower resolution grid (with each low-resolution pixel corresponding to $2\times2$ high-resolution pixels).
These reprocessed images are publicly available via our website.
Sources are then detected in a weighted sum of the drizzled NIRCam images in all filters using a Python implementation of SourceExtractor called SEP \citep{Barbary2016,sextractor}.
Fluxes are then calculated for each source in three circular apertures, 0\farcs36, 0\farcs5, and 0\farcs7.
The 0\farcs5 aperture fluxes are used for the photo-$z$ calculations with \texttt{EAZY}\footnote{\url{https://github.com/gbrammer/eazy-photoz}} \citep{Brammer08_eazy}, described below.
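The aperture measurement itself reduces to summing pixels inside a circle. Below is a minimal numpy sketch on a synthetic point source (the pipeline uses SEP; here we take 0\farcs36 to be the aperture diameter, i.e.\ a 4.5-pixel radius at the 0\farcs04 drizzled scale, which is an assumption for illustration):

```python
import numpy as np

def aperture_flux(img, x0, y0, r):
    """Sum of pixel values whose centers lie inside a circle of radius r (pixels)."""
    yy, xx = np.indices(img.shape)
    mask = (xx - x0)**2 + (yy - y0)**2 <= r**2
    return img[mask].sum()

# Synthetic point source: 2D Gaussian of total flux 100 centered on the grid
n, sigma = 101, 2.0
yy, xx = np.indices((n, n))
img = np.exp(-((xx - 50)**2 + (yy - 50)**2) / (2 * sigma**2))
img *= 100.0 / img.sum()

# A 0.36" diameter aperture at 0.04"/pix is r = 4.5 pixels
f = aperture_flux(img, 50, 50, 4.5)   # recovers most of the flux; the rest is PSF wings
```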
The source extraction parameters utilized in the \texttt{grizli} pipeline blend multiple features of the Sunrise Arc together.
In order to extract reliable, uncontaminated fluxes for Earendel, we therefore perform additional photometric measurements on the object directly, using a variety of methods.
Distinguishing between flux originating from Earendel and flux originating from the host arc is non-trivial.
While photometry of isolated point sources is traditionally well measured with packages such as \texttt{photutils} \citep{photutils} or Source Extractor \citep{sextractor}, this case is somewhat more complex given the point source is embedded within the Sunrise Arc.
Slight differences in how backgrounds are subtracted and how the PSF is determined can produce non-negligible differences in the resulting fluxes.
We thus choose to measure the fluxes from a variety of methods, and produce final values from the average of each method.
We utilize both aperture photometry and PSF-matched photometry, with various aperture sizes and PSF models, measured by ten independent observers as described in Appendix \ref{phot_appendix}.
The resultant fluxes are presented in Table \ref{tab:photometry}.
This consensus photometry approach allows us to understand the range of possible fluxes given different assumptions, thus incorporating systematic uncertainties into our final measurement.
\subsection{\HST\ WFC3/IR}
We additionally study \HST\ data taken as part of both the original follow-up program (GO 15842) and an ongoing monitoring program (GO 16668, PI Coe).
The goal of the monitoring program is to assess variability in the lensed star, thus it observes in the WFC3/IR F110W bandpass, matching the strongest detection of the GO 15842 program.
In total this monitoring program will obtain four additional epochs; however, only two have been observed thus far.
The observations of GO 15842 occurred on November 4, 2019, and November 27, 2019.
The first two epochs of GO 16668 observations occurred on November 28, 2021, and January 29, 2022.
These observations currently span just over two years.
\section{Photometric Redshift Estimate} \label{sec:photoz}
We perform initial photometric redshift estimation using \texttt{EAZY}, utilizing a set of galaxy spectral templates generated with FSPS \citep{Conroy09,Conroy10,ConroyGunn10}.
Redshifts are allowed to vary over the range $0.01 \leq \zphot \leq 18$, in steps of 0.01.
A redshift prior is applied based on previously observed galaxy count rates as a function of redshift and magnitude in the \HST\ F160W filter.
The \texttt{EAZY} calculations are performed as part of the image reduction pipeline, and thus are run on each segment identified by Source Extractor within said pipeline.
This splits the Sunrise Arc up into a total of 4 components, and gives photometric redshifts of $\zphot = 6.00^{+0.09}_{-0.11}$, $\zphot = 6.31^{+0.08}_{-0.15}$, $\zphot = 6.04^{+0.20}_{-0.23}$, and $\zphot = 6.25^{+0.12}_{-0.16}$ for each component.
Combining these redshifts yields a total redshift estimate for the full arc of $\zphot = 6.15^{+0.27}_{-0.34}$.
Previous works \citep{Salmon2020,Welch2022_earendel} have used other SED fitting methods to estimate the photometric redshift of the Sunrise Arc using only the \HST\ imaging.
These results consistently found redshifts $\zphot = 6.2 \pm 0.1$, consistent with the present fit.
We thus adopt a fiducial redshift of $z = 6.2$ for the Sunrise Arc, and by extension Earendel, which we use for all further calculations in this paper.
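As an illustrative consistency check on this redshift (not a calculation from the paper): at $z = 6.2$ the Lyman-$\alpha$ break redshifts into the F090W bandpass, consistent with the relatively suppressed F090W flux in Table \ref{tab:photometry}:

```python
# Observed wavelength of the redshifted Lyman-alpha break at z = 6.2
lam_lya_rest = 0.121567            # um (1215.67 Angstroms)
z = 6.2
lam_obs = lam_lya_rest * (1 + z)   # ~0.875 um, inside F090W (0.8--1.0 um)
```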
\section{Magnification and Size Constraints} \label{sec:magnif}
For the present analysis, we utilize the five previously published lens models presented in \cite{Welch2022_earendel}.
These models were made using four lens modeling software packages, Light-Traces-Mass \citep[LTM,][]{Zitrin09,Zitrin15,Broadhurst05}, Glafic \citep{Oguri2010}, WSLAP+ \citep{Diego05wslap,Diego07wslap2}, and Lenstool \citep{JulloLenstool07,JulloLenstool09}.
The slope of the lensing potential varies by a factor of 6 among these models, adding considerable uncertainty to our magnification and maximum size constraints.
We constrain the magnification of the lensed star following a similar procedure to that described in \cite{Welch2022_earendel}.
We first observe that Earendel is consistent with being a point source in each of the \JWST\ filters, as shown in Figure \ref{fig:stamps}.
We model the object as a point source using the four individual exposures (\texttt{cal} files) for each filter.
The point source is convolved with an empirically derived PSF model measured from individual \texttt{cal} exposures of the Large Magellanic Cloud (LMC) calibration field (Anderson et al. 2022, in prep).
These exposure-level PSF models appear to be more accurate than empirical PSF models based on the final drizzled images.
The point source model is then subtracted from the individual exposure, creating a total of four residual images.
These residuals are then summed, centering on Earendel's centermost pixel in each exposure, to create the full residuals presented in Figure \ref{fig:stamps}.
The residuals are consistent with noise in each filter, indicating that Earendel is indeed a point-like source.
Briefly, we measure the maximum separation of two point sources that would remain unresolved, then calculate the minimum magnification using the relation $\mu = \mu_0 / D$, where $D$ is the distance to the critical curve in arcseconds and $\mu_0$ is a constant that can be fit for each lens model \citep{Diego19}.
We find that the distance between two resolved simulated point sources ($2\xi$ in their notation) is about one native pixel, which in the case of \JWST\ is 0.031\arcsec.
We then use this magnification, along with the measured image plane size, to calculate the maximum possible size of the object in the source plane, again following the method of \cite{Welch2022_earendel}.
Using this technique, we are able to improve our constraints on the magnification and maximum radius of Earendel.
We find best-fit magnification values ranging from 6000--35000 depending on the lens model, while the lower limit including uncertainties is $\mu \geq 4000$ (see Table \ref{tab:mag_radius}).
The updated magnifications along with the higher spatial resolution images allow us to set tighter constraints on the radius as well, with maximum radii ranging from $r < 0.005$--0.02 pc (1000--4000 AU), depending on the lens model.
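The $\mu = \mu_0 / D$ scaling can be evaluated directly from the $(\mu_0, D_{\text{crit}})$ pairs in Table \ref{tab:mag_radius}. This is a sketch: we assume the tabulated $2\mu$ counts both merging images, and since the tabulated best-fit values come from the full lens models, only approximate agreement is expected:

```python
# (mu_0, D_crit [arcsec]) per lens model, from the magnification table in this work
models = {
    "LTM": (113, 0.006),
    "Glafic (c=1)": (69, 0.005),
    "Glafic (c=7)": (23, 0.005),
    "WSLAP": (28, 0.009),
    "Lenstool": (18, 0.006),
}

# Total magnification 2*mu, assuming a pair of merging images each at mu = mu0/D
two_mu = {name: 2 * mu0 / d for name, (mu0, d) in models.items()}
```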
\section{Temperature Estimate} \label{sec:temp}
The JWST/NIRCam photometry of Earendel, presented in Table~\ref{tab:photometry}, is not easily fitted by local low-mass stars, brown dwarfs or giant exoplanets in the Milky Way, based on the theoretical spectral energy distributions (SEDs) of \citet{Baraffe15} and \citet{Phillips20} for such objects. If we instead assume that the light from Earendel comes from a single, highly magnified star at high redshift, then a broad scan of SED fits based on the stellar atmosphere set of \citet{Lejeune97}\footnote{The part of the \citet{Lejeune97} compilation most relevant for the Earendel analysis is that based on Kurucz ATLAS stellar atmosphere models, modified to fit empirical color-temperature calibrations.}, assuming no dust reddening and zero transmission of flux through the IGM shortward of the redshifted Lyman-$\alpha$ line, favours $z\approx 5.7$--6.5 and B-type stars with $T_\mathrm{eff}\approx 13000$--16000 K. Metallicity only has a minor impact on the quality of the fit and cannot be constrained by the data.
Assuming $z=6.2$ and refining the fit using the more realistic B-star TLUSTY stellar atmosphere grid by \citet{Lanz07} or the Potsdam Wolf-Rayet (PoWR) model atmosphere grid for OB stars with various levels of mass loss \citep{Hainich19} results in a best-fitting temperature of $T_\mathrm{eff}= 15000$ K (at the lower $T_\mathrm{eff}$ limit of either grid).
In Fig.~\ref{fig:stellarSED}a, we present the best fit to the SED of Earendel allowed by the PoWR stellar atmosphere model at SMC metallicity \citep[$\sim 1/7$ solar, broadly consistent with the fiducial metallicity adopted by][]{Welch2022_earendel}. With both this model and similar fits produced using TLUSTY or the \citet{Lejeune97} set, the shift in flux between F200W and the longer-wavelength filters is interpreted as due to the Balmer break. However, even the best-fitting models are unable to reproduce the size of this break (as evident from the significant offset between model and observational data in F200W) or the observed flux in the longest-wavelength filters, and the resulting $\chi^2$ is very high ($\chi^2 \approx 38$ for the plotted fit).
The primary reason why these SED models struggle to provide a convincing fit to the photometric data of Earendel is that the observed SED appears to feature both a relatively strong Balmer break (typical of $T_\mathrm{eff}\lesssim 13000$ K stars) and a steep ultraviolet continuum slope (typical of $T_\mathrm{eff}\gtrsim 20000$ K stars). Adding the effects of dust at the redshift of Earendel would not significantly improve the fit, since dust reddening would preferentially affect the ultraviolet continuum and require an intrinsically bluer (hotter) star to match the observed data, thereby increasing the tension with the Balmer break strength. Since surface gravity ($\log(g)$) modifies the slope of the ultraviolet continuum at fixed $T_\mathrm{eff}$, the SED of Earendel makes the $\chi^2$-minimization procedure favour higher $\log(g)$ than would be expected for a single very massive star. For example, the PoWR fit presented in Fig.~\ref{fig:stellarSED}a has $\log(g)=2.8$, which would correspond to stars with ZAMS mass $\approx 10$\ $M_\odot$ based on the stellar evolutionary models of \citet{Szecsi22}. Assuming the LTM magnification of $2\mu=35000$, the model presented in Fig.~\ref{fig:stellarSED}a corresponds to a bolometric luminosity of $\log(L/L_\odot)\approx 5.8$, which at this temperature would be more typical of an evolved star with ZAMS mass $\approx 40\ M_\odot$. Rejecting high-$\log(g)$ models only acts to further degrade the fit, since such models exhibit less steep ultraviolet continuum slopes. Lens models with lower magnifications would not solve the problem either, since these require even higher bolometric luminosities, and thus higher ZAMS masses.
However, the $\log(g)$ tension can be reduced somewhat by assuming that Earendel is composed of several stars with approximately the same temperature, in which case the inferred luminosity could be explained by $\approx 2$ stars of ZAMS mass $30\ M_\odot$, or $\approx 4$ stars of ZAMS mass $20\ M_\odot$.
While photometric uncertainties remain large, it is tempting to consider the possibility that at least two stars of different temperatures are contributing significantly to the observed SED of Earendel, since this could potentially explain the presence of both a steep ultraviolet continuum slope and a strong Balmer break. In Fig.~\ref{fig:stellarSED}b, we provide an example of such a double-star fit, in which the summed contributions from one star with $T_\mathrm{eff}=9000$ K and one with $T_\mathrm{eff}=34000$ K provide a good fit ($\chi^2 \approx 5$) to the observed SED of Earendel. However, given the limited number of photometric data points available and the number of free parameters involved in such double-star fits, the parameters of the two stars are not well constrained. If two stars are involved, their magnifications may also differ, which further complicates the analysis. We therefore defer a detailed analysis of such scenarios until spectroscopic data is available.
Another possible explanation for the somewhat puzzling SED of Earendel is that some of the observed filter fluxes are affected by emission lines (e.g. CIV 1549 in F115W, [OIII]5007 and H$\beta$ in F277W, H$\alpha$ in F444W) from either a wind surrounding this object or a more extended and less magnified HII region produced either by Earendel itself or by other massive stars in its surroundings. Upcoming JWST/NIRSpec spectroscopy of Earendel should make it possible either to detect or to set strong upper limits on the contribution of such lines.
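The single-temperature tension can be illustrated with a much cruder model than the atmosphere grids used above: a bare blackbody fit to the Table \ref{tab:photometry} photometry, with the amplitude solved analytically at each trial temperature. This is purely illustrative; filter pivot wavelengths are approximated by bandpass midpoints, and IGM absorption is modeled as total blueward of rest-frame Lyman-$\alpha$:

```python
import numpy as np

# Table 1 photometry; pivot wavelengths approximated by filter midpoints (um)
lam = np.array([0.9, 1.15, 1.5, 1.95, 2.75, 3.55, 4.05, 4.4])
f   = np.array([32, 57, 50, 43, 63, 66, 64, 62], float)   # nJy
df  = np.array([5, 7, 6, 3, 5, 4, 5, 6], float)           # nJy

def bb_fnu(lam_um, T, z=6.2):
    """Blackbody f_nu (arbitrary normalization) in the observed frame,
    zeroed blueward of redshifted Lyman-alpha (complete IGM absorption)."""
    lam_rest = lam_um / (1 + z)           # rest-frame wavelength, um
    nu = 2.998e14 / lam_rest              # rest-frame frequency, Hz
    h_over_k = 4.799e-11                  # h/k_B in K*s
    b = nu**3 / np.expm1(h_over_k * nu / T)
    b[lam_rest < 0.121567] = 0.0
    return b

def chi2(T):
    m = bb_fnu(lam, T)
    a = np.sum(m * f / df**2) / np.sum(m**2 / df**2)  # best-fit amplitude
    return np.sum(((f - a * m) / df)**2)

Ts = np.arange(5000, 40000, 250)
best_T = Ts[np.argmin([chi2(T) for T in Ts])]
```

As in the grid-based fits, no single temperature reproduces both the UV slope and the break strength well; the residual $\chi^2$ stays large.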
\section{Variability} \label{sec:variable}
Gravitationally lensed stars typically experience fluctuations in their overall magnifications due to microlensing \citep[e.g.,][]{Kelly18,Chen19,Rodney18,Chen22_lensedstar,Diego22_godzilla}.
Earendel was found to be somewhat unique, as its microlensing configuration lends itself to modest variation over time \citep{Welch2022_earendel}.
To further monitor the variability of Earendel's magnification, we have been repeating observations using \HST\ in the same filter (WFC3 F110W).
These repeat observations offer the best chance to look for variability, as using different filters introduces additional uncertainty that can obscure true changes.
To ensure consistency between flux measurements in each epoch, we measure the brightness within a common circular aperture with radius 0\farcs3 = 5 pixels in each drizzled \HST\ image.
We utilize a common circular annulus immediately outside the central aperture to measure a local background, which is then subtracted from the central aperture flux.
The resulting flux measurements are shown in Figure \ref{fig:variable}, and the flux values are given in Table \ref{tab:variable}.
As a comparison, we repeat this flux measurement for each of the mirror-imaged clumps that bracket Earendel \citep[1.1a/1.1b in the notation of][]{Welch2022_earendel}.
These are also plotted and tabulated in Figure \ref{fig:variable} and Table \ref{tab:variable}.
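The aperture-plus-annulus measurement can be sketched in a few lines of numpy on synthetic data. This is an illustrative reimplementation, not the actual measurement code; we use the 5-pixel (0\farcs3) aperture radius quoted above, while the annulus radii and the synthetic source are assumptions:

```python
import numpy as np

def bg_subtracted_flux(img, x0, y0, r_ap, r_in, r_out):
    """Aperture flux minus a local background estimated as the median of a
    surrounding annulus, scaled by the aperture area."""
    yy, xx = np.indices(img.shape)
    r2 = (xx - x0)**2 + (yy - y0)**2
    ap = r2 <= r_ap**2
    ann = (r2 > r_in**2) & (r2 <= r_out**2)
    bg = np.median(img[ann])              # per-pixel local background
    return img[ap].sum() - bg * ap.sum()

# Synthetic test: point source of flux 80 on a flat background of 0.5/pixel
n = 61
yy, xx = np.indices((n, n))
img = np.full((n, n), 0.5)
psf = np.exp(-((xx - 30)**2 + (yy - 30)**2) / (2 * 1.5**2))
img = img + 80.0 * psf / psf.sum()

f = bg_subtracted_flux(img, 30, 30, 5, 5, 9)   # recovers ~80 after subtraction
```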
\begin{table}[]
\centering
\begin{tabular}{c c c c}
& Earendel & clump 1.1a & clump 1.1b \\
Obs. Date & Flux (nJy) & Flux (nJy) & Flux (nJy) \\
\hline
Nov.~4 2019 & $45 \pm 4$ & $68 \pm 4$ & $95 \pm 4$ \\
Nov.~27 2019 & $51 \pm 4$ & $81 \pm 4$ & $88 \pm 4$\\
Nov.~28 2021 & $62 \pm 4$ & $89 \pm 4$ & $99 \pm 4$\\
Jan.~29 2022 & $59 \pm 5$ & $76 \pm 5$ & $86 \pm 5$
\end{tabular}
\caption{\HST\ F110W flux values measured in four epochs over a two year period for Earendel and mirror images of a nearby lensed star cluster, as plotted in Figure \ref{fig:variable}.}
\label{tab:variable}
\end{table}
We find that the highest and lowest flux values for Earendel differ by $\sim 3 \sigma$.
We also find that the largest deviation from the median of all measurements is $2.7 \sigma$.
While this is enough of a difference to warrant further investigation, we cannot conclude that these values differ in a statistically significant way.
Additional epochs and deeper imaging would better constrain the time variability of this object.
The current lack of clear variability is consistent with the microlensing analysis of \citet{Welch2022_earendel}, which predicts that the magnification should generally stay consistent within a factor of 2.
Our present variation is at most a factor of $\sim 1.4$.
The lack of clear variation may also be indicative of more than one star being present.
Each star would then cross microcaustics at different times, minimizing the effect on the total flux of such microcaustic crossing events.
Ultimately, additional epochs of observation with greater signal to noise will be required to fully understand the variability of Earendel.
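The significance and variation factor quoted above can be reproduced directly from the Table \ref{tab:variable} fluxes (a minimal check; errors on the extreme epochs are added in quadrature):

```python
import numpy as np

# Earendel F110W fluxes and errors from the four monitoring epochs (nJy)
flux = np.array([45.0, 51.0, 62.0, 59.0])
err  = np.array([4.0, 4.0, 4.0, 5.0])

# Significance of the difference between the brightest and faintest epochs
i, j = flux.argmin(), flux.argmax()
diff_sigma = (flux[j] - flux[i]) / np.hypot(err[i], err[j])   # ~3 sigma

ratio = flux[j] / flux[i]   # maximum variation factor, ~1.4
```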
\section{Discussion}
\label{sec:results}
\begin{table*}[]
\centering
\begin{tabular}{c c c c c c c}
Lens Model & $\mu_0$ & $D_{\text{crit}}$ & Magnification & Radius & Luminosity & $M_V$ \\
 & & (\arcsec) & $2\mu (\times 10^3)$ & (pc/AU) & ($\log(L/L_\odot)$) & (AB, rest frame) \\
\hline
LTM & 113 & 0.006 & $35^{+140}_{-10}$ & $<0.005$ (1020 AU) & $5.8^{+0.1}_{-0.7}$ & $-8.5^{+1.6}_{-0.3}$ \\
Glafic (c=1) & 69 & 0.005 & $28^{+113}_{-8}$ & $<0.008$ (1600 AU) & $5.9^{+0.1}_{-0.7}$ & $-8.8^{+1.7}_{-0.3}$ \\
Glafic (c=7) & 23 & 0.005 & $9^{+38}_{-3}$ & $<0.012$ (2520 AU) & $6.4^{+0.1}_{-0.7}$ & $-10.0^{+1.7}_{-0.3}$ \\
WSLAP & 28 & 0.009 & $6^{+26}_{-2}$ & $<0.019$ (3890 AU) & $6.5^{+0.2}_{-0.7}$ & $-10.4^{+1.7}_{-0.3}$ \\
Lenstool & 18 & 0.006 & $6^{+23}_{-2}$ & $<0.020$ (4160 AU) & $6.6^{+0.1}_{-0.7}$ & $-10.5^{+1.7}_{-0.3}$
\end{tabular}
\caption{Magnification and delensed parameter measurements for each lens model. Luminosities and V-band absolute magnitudes are calculated assuming a single star dominates the observed flux.}
\label{tab:mag_radius}
\end{table*}
The \JWST\ imaging observations presented herein support the conclusion of \cite{Welch2022_earendel} that the object nicknamed Earendel is an extremely magnified star at redshift $\zphot = 6.2$.
The increased spatial resolution of \JWST\ allows us to improve our constraints on the total magnification of the star, resulting in best-fit values ranging from 6000 to 35000 depending on the lens model (see Table \ref{tab:mag_radius}).
The lower limit on the magnification including all uncertainties has increased by a factor of 4, from 1000 to 4000, thanks to improved constraints on the distance to the critical curve.
The increased spatial resolution also improves the constraints on the maximum size of Earendel in the source plane.
Whereas \cite{Welch2022_earendel} found upper limits on source plane radius ranging from 0.09--0.36 pc, we now find maximum radii $r < 0.005$--0.02 pc (1000--4000 AU), depending on the lens model.
This further distinguishes Earendel from known young massive star clusters, which have typical radii of $\sim 1$ pc \citep{Portegies-zwart2010_YMCs}.
Even small central cores observed in nearby star clusters such as R136 do not reach below tenths of a parsec \citep[e.g.,][]{crowther16_r136}, which is still far larger than our measured radii.
This strengthens our conclusion that Earendel is most likely an individual star system.
While our new radius constraints conclusively rule out a star cluster, the tightest constraint of $<1000$ AU leaves room for multiple companion stars.
Massive stars in the local universe often have companions, and frequently have more than one companion.
Secondary companions are typically located at a median distance of less than 2 AU, while tertiary companions are found at $\sim20$ AU (\citealt{sana12,Sana14}, and Fig~3 in the review by \citealt{offner22}).
A multiple star system would thus remain unresolved in our imaging.
For our analysis of the stellar properties of Earendel, we first assume that the light is either coming from a single star, or it is dominated by the brightest star of a compact group of stars.
This gives a best-fit temperature range of $T_\mathrm{eff} = 13000$--16000 K.
We calculate intrinsic bolometric luminosities based on the magnifications given by our various lens models, finding a best-fit range of $\log(L/L_\odot) = 5.8$--$6.6$ (see Table \ref{tab:mag_radius}).
We overplot these temperature and luminosity constraints on the Hertzsprung--Russell (HR) diagram in Figure \ref{fig:HRD}, alongside stellar evolution models of varying metallicities from \cite{Szecsi22}. The dots represent evenly spaced timesteps of 10,000 years, giving an indication of how fast the star is evolving through the diagram. These models account for stellar wind mass loss scaled down with metallicity following \citet{vink01} and assume that the stars are born with a modest rotation of 100 km\,s$^{-1}$. Models with much faster rotation (not shown here) typically stay too hot to explain the temperature derived for Earendel.
Taking our range of magnification estimates at face value, this gives us a range of single star ZAMS masses of 20--200 $M_{\odot}$. The higher metallicity models prefer solutions where the star is at the end of its main sequence stage. The lowest metallicity models also allow for central helium-burning solutions.
We note that the high ZAMS mass and high luminosity range is in line with observational biases expected for lensed stars, which favor observations of more luminous O- and B-type stars over fainter ones \citep{Meena22}.
There are several important caveats that effectively shrink this range. First, the microlensing analysis presented in \cite{Welch2022_earendel} indicates that the highest achievable magnification is likely around $\mu = 100000$, while our smooth lens models alone allow for magnifications up to $\mu = 175000$.
This makes the lowest-mass range somewhat dubious, as microlensing could limit the magnification enough to make observations of stars this faint unlikely.
Furthermore, the probability of an object achieving at least a given magnification falls as the inverse square of that magnification, $P(>\mu) \propto \mu^{-2}$.
Our high-end magnification estimate of 35000 is therefore 25 times less likely to occur than a magnification of 7000.
While this does not rule out such high magnifications, it favors more luminous stars at lower magnifications.
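Explicitly, the factor of 25 follows from this scaling:
\[
\frac{P(>35000)}{P(>7000)} = \left(\frac{7000}{35000}\right)^{2} = \frac{1}{25}\,.
\]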
The high mass, high luminosity end comes with a caveat as well. Theoretical models of the evolution of such high-mass stars are still plagued by substantial uncertainties, which particularly affect how long a star may spend in which part of the HR diagram. The models shown here account for substantial mixing beyond the convective core \citep[as detailed in][]{brott11}, but even larger amounts of mixing may be needed \citep[e.g.\ ][]{vink01}, which would extend the end of the main sequence to cooler temperatures.
At a temperature of 15000 K, luminosities above $\sim 10^6 L_{\odot}$ would exceed the Humphreys-Davidson limit, an empirical limit above which almost no stars are found to exist at least for stellar populations in the local universe \citep{HumphreysDavidson78_I,HumphreysDavidson79_III}.
Stars living in this regime tend to be Luminous Blue Variables \citep[LBVs, e.g.\ ][]{Smith04}, bright stars that experience irregular eruptive episodes of mass loss.
The physical mechanism for these eruptions is not fully understood \citep[see, however,][]{Jiang18}.
In particular, the question of whether the LBV phenomenon still occurs at low metallicity is a matter of debate \citep[e.g.\ ][]{Davies18, Kalari18}.
If indeed the star has been experiencing strong mass loss, one may expect a dense outflow. The photosphere may then not be located at the hydrostatic layer but further out in the dense stellar wind. This may mean that the actual star is hotter than the temperature we have inferred here.
We note that the \cite{Szecsi22} stellar models do not include episodic eruptions of mass as observed for LBVs, so these stellar tracks may overpopulate the region in the HR diagram above the Humphreys-Davidson limit.
A further interesting question related to the LBV possibility is whether Earendel shows signs of variability, for example from possible eruptions.
Current data hints at possible variations, but no statistically significant variability has yet been observed.
Follow-up observations are ongoing to further explore this possibility.
If instead Earendel is made up of a tightly bound group of stars with similar temperatures, the intersection with the Humphreys-Davidson limit could be avoided.
For example, as mentioned in Section \ref{sec:temp}, two stars with ZAMS mass $\sim 30 M_{\odot}$ and roughly equal temperatures could produce a similar result to our single star SED fit.
However, there are several discrepancies between our best-fit single temperature model and the measured photometry.
In particular, the model spectra used do not fully reproduce the F200W--F277W color (0.56 mag), which we interpret as the 4000~\AA\ Balmer break.
The cooler ($T_{\mathrm{eff}} \lesssim 13000$ K) stars that best fit the Balmer break then struggle to reproduce the apparently blue UV slope, which would favor stars with $T_{\mathrm{eff}} \gtrsim 20000$ K.
We therefore consider possible two star solutions, and present one such solution in Figure \ref{fig:stellarSED}b.
We find that this can better reproduce both the blue UV slope and the Balmer break, leading to a better overall fit.
We note however that several of our individual photometry measurements yield flatter UV slopes (see Appendix \ref{phot_appendix}), which would favor the single cool star model.
The uncertainties on the photometry ultimately leave room for both fits to be plausible.
Interestingly, our best fit combination of a luminous cool star paired with a hot, similarly luminous companion would be a somewhat unusual evolutionary scenario.
Typically, one might expect the more evolved, cooler star to be the more massive and more luminous object.
We note two important caveats here.
First, the stellar parameters are not fully constrained in this fit because the number of free parameters is high relative to the number of photometric data points.
Additionally, the two-star fit presented here assumes that both stars are at the same magnification.
In a true lensed two-star system, each component of the binary would travel on a slightly different path through the lens, resulting in different magnifications.
The magnification is directly tied to the inferred luminosity, so accounting for variable magnifications could alter the relative bolometric luminosities of the two stars.
This introduces further degeneracies with the already broad stellar parameter space.
We therefore defer extensive simulations of this scenario for future work.
Spectroscopic observations with \JWST\ NIRSpec, expected in December 2022, can help to address these discrepancies and further constrain the temperature and luminosity of the star/stars.
\section{Conclusions}
\label{sec:conclusions}
We present recently obtained \JWST\ imaging of the $\zphot = 6.2$ gravitationally lensed stellar source Earendel.
The increased depth and wavelength coverage of these images, combined with the higher angular resolution of \JWST\ compared to \HST, allow us to improve constraints on the magnification and radius of Earendel, further supporting the interpretation of it as a distant lensed star.
We further conclude that, if the light of Earendel is dominated by a single star, that star likely has an effective temperature of 13000--16000 K, indicating that it is likely a B-type giant similar to other lensed stars, or perhaps an LBV star.
The apparent discrepancies between our best-fit single star model and the observed photometric data allow room for the consideration of multi-star models.
In particular, we note that a two-star system with one hot ($T_{{\rm eff}} \sim 34000$ K) and one cooler ($T_{{\rm eff}} \sim 9000$ K) star could produce a better fit to the observed data, though the wide parameter space in this case allows for many similarly well-matched solutions.
These initial photometric constraints, while themselves inconclusive, provide an important guide for our upcoming spectroscopic observations.
\JWST\ NIRSpec observations are due to be carried out in December 2022 under GO 2282, which will shed additional light on this remarkable object.
\section*{Acknowledgements}
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope (JWST). The data were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under NASA contract NAS 5-03127 for \JWST. These observations are associated with program JWST GO 2282.
EZ acknowledges support from the Swedish National Space Board.
MB acknowledges support from the Slovenian national research agency ARRS through grant N1-0238.
AZ acknowledges support by Grant No. 2020750 from the United States-Israel Binational Science Foundation (BSF) and Grant No. 2109066 from the United States National Science Foundation (NSF), and by the Ministry of Science \& Technology, Israel.
MT acknowledges support by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140.
J.M.D. acknowledges the support of projects PGC2018-101814-B-100 and MDM-2017-0765.
Y.J-T acknowledges financial support from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 898633, the MSCA IF Extensions Program of the Spanish National Research Council (CSIC), and the State Agency for Research of the Spanish MCIU through the Center of Excellence Severo Ochoa award to the Instituto de Astrof\'isica de Andaluc\'ia (SEV-2017-0709).
PAO acknowledges support by the Swiss National Science Foundation through project grant 200020\_207349.
The data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via \dataset[10.17909/2x4w-pd04]{https://doi.org/10.17909/2x4w-pd04}.
\facilities{JWST(NIRCam), HST(WFC3)}
\software{
grizli \citep{Brammer21_grizli},
astropy \citep{astropy:2013,astropy:2018},
photutils \citep{photutils_1.5},
Source Extractor \citep{sextractor},
EAZY \citep{Brammer08_eazy}
}
\appendix
\section{Earendel photometry} \label{phot_appendix}
Measurements of Earendel's photometry are complicated by its faint magnitude and location within the curved Sunrise Arc, which makes it difficult to measure and subtract the ``background''.
To mitigate the systematic uncertainties,
we performed 14 different analyses by 10 co-authors using various methods
described below (Table \ref{tab:photometry_results}).
We then averaged these results to arrive at a
concordance photometry for Earendel in the 8 \JWST\ filters
(Figure \ref{fig:photometry_average}).
A similar approach of averaging photometric redshift results from various methods was shown to be most accurate by \cite{Dahlen13}.
A diversity of perspectives and approaches can improve performance in many fields, an effect known as the ``wisdom of crowds'' (Surowiecki 2004).
Most of the 14 analyses adopted either aperture photometry or PSF fitting.
In both cases, corrections were made to total flux by accounting for the encircled energy within a given aperture size and filter.\footnote{\href{https://jwst-docs.stsci.edu/jwst-near-infrared-camera/nircam-performance/nircam-point-spread-functions}{JDox: NIRCam Point Spread Functions}}
We found that the encircled energies reported in JDox (based on pre-launch estimates)
were consistent with an empirical PSF derived %
from the grizli image reductions based on 4 isolated unsaturated stars.
We used this empirical PSF for most of the PSF fitting analyses.
Analyst A performed PSF-fitting analyses both with this empirical PSF and with WebbPSF models,
finding very similar results both ways
using the piXedfit software \citep{Abdurrouf21,piXedfit}.
JA independently derived empirical spatially variable PSFs from individual {\tt cal} exposures of the LMC calibration field taken in the various filters (see Anderson et al. 2022, in prep) and used these PSF models to fit Earendel as a point source in the individual {\tt cal} exposures of GO-2282.
SR measured PSF photometry using the DAOPHOT software \citep{Stetson1987_daophot}.
The residual images were inspected and were found to be consistent with the noise level, supporting the interpretation that Earendel is unresolved.
SR also performed aperture photometry using an aperture of $r=0.2''$ and applied aperture corrections appropriate for each filter.
MN performed photometry in two ways.
The first, labeled ``aperture" in Table \ref{tab:photometry_results}, uses \texttt{photutils} \citep{photutils} to measure background-subtracted flux in a variable aperture large enough to encircle 90\% of the empirical PSF.
The second method, labeled ``PSF" in Table \ref{tab:photometry_results}, utilizes \texttt{imfit} \citep{Erwin15_imfit} to make a model of Earendel in each filter.
The total flux is then measured in the same apertures as the ``aperture" method.
BW fit point source models convolved with the grizli empirical PSFs to each filter using a custom code similar to the forward model described in \citet{Welch22_clumps}.
DC performed a hybrid analysis, cloning the empirical PSF at two locations on the Sunrise Arc, one on either side of Earendel and $0.6''$ away. They derived the photometry by scaling the cloned PSF within an $r = 0.2''$ (SW) or $0.4''$ (LW) aperture to best match Earendel's photometry in the same aperture. The two locations sampled different background levels, and the results were averaged.
DC also measured aperture photometry in $r = 0.2''$ apertures
(finding consistent results for $0.2''$ and $0.3''$
after applying encircled energy aperture corrections).
They found flux measurements varied 10--20\% depending on where they measured the background: on either side of Earendel $0.6''$ away along the Sunrise Arc.
They averaged measurements from the NE and SW sides.
YJ \& PD identified the pixels associated with Earendel using the NoiseChisel \citep{akh15} clumps map (with the kernel option disabled) and interpolated the background in this region based on the flux from all surrounding pixels. This ``background" naturally folds in the contributions from the sky, the Sunrise Arc, the wings of the nearby galaxies, and the intracluster light. Earendel's flux is obtained as the difference between the original and interpolated images, thus minimizing the impact of the sky uncertainty on the final measurement. An aperture of $r = 0.3''$ was used.
DC extracted results from photutils analysis of the full {\tt i2d} images
(all aligned to the F200W image pixels),
with objects detected in the F200W image,
and F200W PSF-matched to each LW filter to measure the LW colors
(without PSF corrections for the SW colors).
Photometry was measured within both round and isophotal apertures,
subtracting backgrounds measured in annuli around Earendel.
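As an illustration of the aperture photometry with encircled-energy corrections used by several of the analyses above, the following sketch uses a synthetic Gaussian point source standing in for the real PSF and data; the pixel values, radii, and flux are hypothetical, not the measured Earendel quantities.

```python
import numpy as np

# Synthetic stand-in for a point source: a Gaussian PSF on a flat background.
# All numbers here are illustrative, not the measured Earendel values.
n, x0, sigma = 101, 50, 3.0
yy, xx = np.indices((n, n))
r2 = (xx - x0) ** 2 + (yy - x0) ** 2
psf = np.exp(-r2 / (2 * sigma ** 2))
true_flux = 60.0                               # nJy, arbitrary
img = true_flux * psf / psf.sum() + 0.01       # source + uniform background

# Local background per pixel, estimated from an annulus around the source.
annulus = (r2 > 8 ** 2) & (r2 <= 12 ** 2)
bkg = np.median(img[annulus])

# Background-subtracted aperture sum within radius r_ap (pixels).
r_ap = 5.0
in_ap = r2 <= r_ap ** 2
raw = img[in_ap].sum() - bkg * in_ap.sum()

# Correct to total flux using the encircled energy of the PSF model
# (for a Gaussian, EE(r) = 1 - exp(-r^2 / 2 sigma^2)).
ee = 1.0 - np.exp(-r_ap ** 2 / (2 * sigma ** 2))
total = raw / ee
```

In the actual analyses the encircled-energy fraction comes from an empirical or WebbPSF model for each filter rather than an analytic Gaussian.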
Finally, for each analysis, we calculate the average magnitude across all filters.
This varied by 0.9 mag across all methods,
reflecting variations in the total flux normalization.
We took the average of these normalizations, AB mag 27.1,
and renormalized all SEDs to this average across filters.
This corrects for variations in total flux measurements without altering the SED derived by each method.
We then calculate the average and scatter (RMS) across all methods
as the final magnitude and uncertainty for Earendel in each filter.
These results are plotted in Figure \ref{fig:photometry_average}.
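The renormalize-and-average procedure can be sketched as follows; this minimal NumPy version uses three of the PSF-photometry rows of Table \ref{tab:photometry_results} (BW, SR, LB) for illustration, whereas the real analysis uses all 14.

```python
import numpy as np

def flux_to_ab(f_njy):
    """AB magnitude from flux in nJy (AB zero point: 3631 Jy)."""
    return -2.5 * np.log10(f_njy * 1e-9 / 3631.0)

# Three example analyses (BW, SR, LB rows of the table), fluxes in nJy.
seds = np.array([
    [23., 35., 32., 28., 53., 56., 52., 58.],   # BW
    [34., 54., 45., 38., 66., 67., 69., 76.],   # SR
    [32., 43., 39., 33., 60., 65., 65., 72.],   # LB
])
mags = flux_to_ab(seds)

# Renormalize each analysis so its mean magnitude across filters equals the
# average normalization, preserving each method's SED shape (colors).
method_means = mags.mean(axis=1)
target = method_means.mean()
renorm = mags + (target - method_means)[:, None]

# Concordance SED: per-filter average; uncertainty: method-to-method RMS.
final_mag = renorm.mean(axis=0)
final_err = renorm.std(axis=0)
```

This corrects for differences in total-flux normalization between methods without altering the SED shape each method derived.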
We note that
all analyses derive a red F200W$-$F277W color
(Balmer excess indicative of a cooler star $T \sim 10000$ K)
and almost all derive a blue F115W$-$F200W color
(rest-UV slope indicative of a hotter star $T \sim 30000$ K).
The JA photometry yields a flat rest-UV slope
that could be well fit by a single star
(see e.g., Figure \ref{fig:stellarSED}).
We also tried restricting our analysis to the PSF photometry analyses. The resulting SED is consistent within the uncertainties with that shown in Figure \ref{fig:photometry_average}, with a similar Balmer break and a slightly shallower rest-UV slope.
\begin{table}[]
\centering
\begin{tabular}{c c c c c c c c c c c c}
Analyst & images & method & software & F090W & F115W & F150W & F200W & F277W & F356W & F410M & F444W\\
initials & & & & nJy & nJy & nJy & nJy & nJy & nJy & nJy & nJy\\
\hline
BW & {\tt grizli} & PSF & & 23 & 35 & 32 & 28 & 53 & 56 & 52 & 58 \\
SR & {\tt grizli} & PSF & DAOPHOT & $34 \pm 9$ & $54 \pm 9$ & $45 \pm 8$ & $38 \pm 7$ & $66 \pm 11$ & $67 \pm 8$ & $69 \pm 10$ & $76 \pm 7$ \\
LB & {\tt grizli} & PSF & photutils & $32 \pm 2$ & $43 \pm 2$ & $39 \pm 2$ & $33 \pm 1$ & $60 \pm 3$ & $65 \pm 2$ & $65 \pm 3$ & $72 \pm 2$ \\
MN & {\tt grizli} & PSF & imfit & 21 & 35 & 30 & 28 & 52 & 54 & \nodata & 67 \\
A & {\tt grizli} & PSF & piXedfit & 25 & 68 & 55 & 45 & 80 & 80 & 75 & 75 \\
A & {\tt grizli} & WebbPSF & piXedfit & 23 & 65 & 57 & 45 & 73 & 71 & 63 & 75 \\
JA & {\tt cal} & PSF & & $27 \pm 1$ & $35 \pm 4$ & $34 \pm 3$ & $31 \pm 3$ & $52 \pm 9$ & $53 \pm 4$ & $60 \pm 2$ & $56 \pm 2$ \\
DC & {\tt grizli} & PSF-aperture & photutils & $33 \pm 2$ & $54 \pm 2$ & $49 \pm 5$ & $38 \pm 3$ & $57 \pm 3$ & $61 \pm 9$ & $65 \pm 0$ & $65 \pm 14$ \\
SR & {\tt grizli} & aperture & & $38 \pm 32$ & $74 \pm 39$ & $56 \pm 34$ & $44 \pm 32$ & $92 \pm 39$ & $95 \pm 40$ & $84 \pm 39$ & $107 \pm 41$ \\
YJ,PD & {\tt grizli} & aperture & NoiseChisel & $21 \pm 7$ & $39 \pm 7$ & $33 \pm 5$ & $29 \pm 6$ & $59 \pm 7$ & \nodata & \nodata & $75 \pm 10$ \\
MN & {\tt grizli} & aperture & photutils & 20 & 31 & 25 & 24 & 56 & 52 & \nodata & 56 \\
DC & {\tt grizli} & aperture & photutils & 35 & 73 & 49 & 38 & 58 & 63 & 69 & 61 \\
DC & {\tt i2d} & aperture & photutils & 23 & 64 & 63 & 40 & 59 & 66 & 68 & 60 \\
DC & {\tt i2d} & isophotal & photutils & 42 & 81 & 81 & 64 & 93 & 93 & 79 & 90 \\
\hline
\end{tabular}
\caption{\JWST\ NIRCam photometry of Earendel from 14 analyses.
Most performed photometry using the grizli image reductions.
Some used the pipeline products directly:
either the {\tt cal} or {\tt i2d} images.
}
\label{tab:photometry_results}
\end{table}
\bibliography{masterbib.bib}
Title: Early results from GLASS-JWST XV: the faintest red sources in the NIRCAM deep fields are intrinsically blue
Abstract: We present a first look at the reddest 2-5$\mu$m sources found in deep NIRCAM
images from the James Webb Space Telescope (JWST) GLASS Early Release Science
program. We undertake a general search, i.e. not looking for any particular
spectral signatures, for sources detected only in bands redder than reachable
with the Hubble Space Telescope, and which are only marginally or not detected
in bluer bands, corresponding to potential populations that may not have been
identified before. We search for sources down to AB $\sim 27$ (corresponding to
$>10\sigma$ detection threshold) in any of the F200W, F277W, F356W or F444W
filters, and demand a one magnitude excess with respect to all of the bluer
bands (F090, F115W, F150W). Fainter than F444W$>25$ we find 48 such sources. We
fit photometric redshifts and spectral energy distributions to our 7-band
photometry and identify the majority of this population ($\sim$ 70%) as $2<z<6$
galaxies that are faint at rest-frame ultraviolet-optical wavelengths, have
stellar masses $10^8$--$10^9$M$_\odot$, and have observed fluxes at $>2$ $\mu$m
boosted by a combination of the Balmer break and strong emission lines. Implied
rest equivalent widths are $>400$~\AA. This is in contrast with
brighter magnitudes where the red sources tend to be $z<3$ quiescent galaxies
and dusty star forming objects. The space density of $z\sim 4$ faint blue
galaxies with high equivalent widths is an order of magnitude higher than found
in pre-JWST surveys. Our general selection criteria allow us to independently
identify other phenomena as diverse as the robust $z\sim12$ Lyman Break Galaxy
reported in paper III, and a very cool brown dwarf reported in XIII. In
addition we discover an extremely low mass ($8\times 10^8$ M$_\odot$) quiescent
galaxy at $z\sim2$, which is new uncharted territory for understanding the
regulation of star formation.
https://export.arxiv.org/pdf/2208.03468
\title{Early results from GLASS-JWST XV: the faintest red sources in the NIRCAM deep fields are intrinsically blue.}
\correspondingauthor{Karl Glazebrook}
\email{kglazebrook@swin.edu.au}
\author[0000-0002-3254-9044]{K. Glazebrook}
\affiliation{Centre for Astrophysics and Supercomputing, Swinburne University of Technology, PO Box 218, Hawthorn, VIC 3122, Australia}
\author[0000-0003-2804-0648 ]{T.~Nanayakkara}
\affiliation{Centre for Astrophysics and Supercomputing, Swinburne University of Technology, PO Box 218, Hawthorn, VIC 3122, Australia}
\author[0000-0003-4239-4055]{C. Jacobs}
\affiliation{Centre for Astrophysics and Supercomputing, Swinburne University of Technology, PO Box 218, Hawthorn, VIC 3122, Australia}
\affiliation{ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia}
\author[0000-0003-4570-3159]{N. Leethochawalit}
\affiliation{School of Physics, University of Melbourne, Parkville 3010, VIC, Australia}
\affiliation{ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia}
\affiliation{National Astronomical Research Institute of Thailand (NARIT), Mae Rim, Chiang Mai, 50180, Thailand}
\author[0000-0003-2536-1614]{A. Calabr\`o}
\affiliation{INAF Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monteporzio Catone, Rome, Italy}
\author{A.~Bonchi}
\affiliation{INAF Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monteporzio Catone, Rome, Italy}
\affiliation{ASI-Space Science Data Center, Via del Politecnico, I-00133 Roma, Italy}
\author[0000-0001-9875-8263]{M.~Castellano}
\affiliation{INAF Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monteporzio Catone, Rome, Italy}
\author[0000-0003-3820-2823]{A. Fontana}
\affiliation{INAF Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monteporzio Catone, Rome, Italy}
\author[0000-0002-3407-1785]{C. Mason}
\affiliation{Cosmic Dawn Center (DAWN), Denmark}
\affiliation{Niels Bohr Institute, University of Copenhagen, Jagtvej 128, DK-2200 Copenhagen N, Denmark}
\author[0000-0001-6870-8900]{E.~Merlin}
\affiliation{INAF Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monteporzio Catone, Rome, Italy}
\author[0000-0002-8512-1404]{T. Morishita}
\affiliation{Infrared Processing and Analysis Center, Caltech, 1200 E. California Blvd., Pasadena, CA 91125, USA}
\author[0000-0002-7409-8114]{D.~Paris}
\affiliation{INAF Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monteporzio Catone, Rome, Italy}
\author[0000-0001-9391-305X]{M. Trenti}
\affiliation{School of Physics, University of Melbourne, Parkville 3010, VIC, Australia}
\affiliation{ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia}
\author[0000-0002-8460-0390]{T. Treu}
\affiliation{Department of Physics and Astronomy, University of California, Los Angeles, 430 Portola Plaza, Los Angeles, CA 90095, USA}
\author[0000-0002-9334-8705]{P. Santini}
\affiliation{INAF Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monteporzio Catone, Rome, Italy}
\author[0000-0002-9373-3865]{X. Wang}
\affil{Infrared Processing and Analysis Center, Caltech, 1200 E. California Blvd., Pasadena, CA 91125, USA}
\author[0000-0003-4109-304X]{K.~Boyett}
\affiliation{School of Physics, University of Melbourne, Parkville 3010, VIC, Australia}
\affiliation{ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia}
\author[0000-0001-5984-0395]{Marusa Bradac}
\affiliation{University of Ljubljana, Department of Mathematics and Physics, Jadranska ulica 19, SI-1000 Ljubljana, Slovenia}
\affiliation{Department of Physics and Astronomy, University of California Davis, 1 Shields Avenue, Davis, CA 95616, USA}
\author[0000-0003-2680-005X]{G. Brammer}
\affiliation{Cosmic Dawn Center (DAWN), Denmark}
\affiliation{Niels Bohr Institute, University of Copenhagen, Jagtvej 128, DK-2200 Copenhagen N, Denmark}
\author[0000-0001-5860-3419]{T. Jones}
\affiliation{Department of Physics and Astronomy, University of California Davis, 1 Shields Avenue, Davis, CA 95616, USA}
\author[0000-0001-9002-3502]{D. Marchesini}
\affiliation{Department of Physics and Astronomy, Tufts University, 574 Boston Ave., Medford, MA 02155, USA}
\author[0000-0001-6342-9662]{M. Nonino }
\affiliation{INAF-Trieste Astronomical Observatory, Via Bazzoni 2, 34124, Trieste, Italy}
\author[0000-0003-0980-1499]{B. Vulcani}
\affiliation{INAF- Osservatorio astronomico di Padova, Vicolo Osservatorio 5, I-35122 Padova, Italy}
\keywords{editorials, notices ---
miscellaneous --- catalogs --- surveys}
\section{Introduction}
The development of sensitive near-infrared array detectors for astronomy led to the
first sky surveys \citep{gratuitous} and the uncovering of new populations of high-redshift sources.
The first large area imaging surveys discovered new
populations of red objects, referred to early on as
`Extremely Red Objects' or `Distant Red Galaxies' \citep{Pat2004,Franx2003}; contrasting
with the dominant population of `Faint Blue Galaxies' \citep{FBGs}.
These redder objects were bright in the near-infrared but dim or undetected in the optical bands, and were later spectroscopically confirmed as a mixture of $z\sim 2$ early-type massive quiescent galaxies \citep{GDDS,K20,GNIRS} and massive dusty star-forming galaxies \citep{Wuyts2009}. These populations have now been photometrically and spectroscopically tracked to $z\sim 4$ \citep{Marchesini2010,Spitler2014,S14,Marsan2015,G17,S18,Forrest2020}. The effects of quiescence, dust, and redshift all combine to make spectral energy distributions (SEDs) progressively redder in the optical to near-infrared bandpasses. In recent years,
surveys have detected red $H-K$ and $H-3.6\micron$ sources that are likely even higher redshift quiescent and/or dusty sources \citep{Merlin2019,Fudamoto2021,Marsan2022}.
In the near-infrared, the deepest surveys today come from the Hubble Space Telescope; however, these are limited to wavelengths $<1.6$\micron.
The state of the art at longer wavelengths had been provided by the 85\,cm Spitzer Space Telescope, which was retired in 2020. This is now surpassed by new data from the James Webb Space Telescope \citep[JWST;][]{Rigby2022}, which has unprecedented capability at 2--5\micron\ with the NIRCAM \citep{NIRCAM} camera and 5--28\micron\ with the MIRI camera \citep{MIRI}. Thus a first look at the sources that emerge at the longer wavelengths of JWST is a compelling prospect. In this paper we do this, utilising data from the GLASS Early Release Science program \citep{Treu2022}, where parallel imaging with NIRCAM provides extremely deep data at 2--5\micron; our aim is to characterise the spectral energy distributions and the possible nature and redshifts of these sources. In particular we adopt an approach complementary to other
early JWST papers
(\citealt{Castellano2022}, Paper III; \citealt{Leethochawalit2022}, Paper X; \citealt{Fink2022,Atek2022,Donnan2022,Naidu2022,Yan2022});
instead of searching for known classes of sources with particular color signatures we use a more general method which is sensitive to a wide variety of sources, and characterise what is revealed by the redder NIRCAM
bands.
The plan of this paper is as follows: in Section \ref{data} we describe the data and introduce the general method we use to select red sources. In Section \ref{method} we outline our analysis methodology, including the determination of redshifts, spectral types, and stellar masses.
In Section \ref{population} we discuss the nature of the population, their spectral energy distributions, and likely redshifts. In Section \ref{conclusion} we present our conclusions. Throughout this paper we adopt AB magnitudes and a standard cosmology with $\Omega_{\rm m}=0.3$, $\Omega_{\Lambda}=0.7$, and H$_0=70$ km s$^{-1}$ Mpc$^{-1}$.
\section{Data and sample selection}
\label{data}
GLASS-JWST is one of 13 Early Release Science programs. It obtained NIRISS and NIRSpec spectroscopy in the center of the
massive $z=0.31$ galaxy cluster A2744 on 28--29$^{\rm th}$ June 2022, while obtaining NIRCAM images of two parallel fields 3--8 arcmin away from the
cluster center. GLASS-JWST consists of the deepest extragalactic data amongst the ERS programs. Details can be found in the survey paper \citep{Treu2022}.
For this paper we consider the NIRCAM parallel fields which are sufficiently distant from the cluster that
only modest lensing magnification is expected \citep{Medezinski2016}. In this paper we neglect this effect, which does not affect colors; the issue will be revisited after the completion of the campaign. The reduction of the images and construction of photometric catalogs are described in \citet[][Paper II]{Merlin2022}. In summary, we have seven filters covering 0.9--4.4\micron\ over an area of 9.7 arcmin$^2$, with exposures of 1.6--6.5 hours, the F444W filter being the deepest.
Our catalogue is F444W selected; the F444W image is the detection image and forced photometry is done in the other bands on images PSF-matched to F444W. We correct all bands to total based on the ratio of total to aperture flux in F444W. For this paper's flux
and color measurements we use an aperture of 0.45 arcsec (3$\times$ the full width at half maximum of the point spread function, PSF, in F444W). The 5$\sigma$ limiting magnitude in F444W for this aperture is 28.5, while the other six bands range from 28.1 to 27.6.
We aim to develop a general method to identify sources whose fluxes rise in the redder bands. First we define the latter:
for `red bands' we utilise the F200W, F277W, F356W and F444W filters.
Technically F200W is in the NIRCAM `short wavelength' channel but for our purposes we include it in the `red band' category as it represents a wavelength not accessible to HST and which is limited in depth by considerable thermal emission in ground-based observations. Then the `blue bands' are F090W, F115W and F150W. We require a red selection that picks up a wide variety of
sources and that at the faint end will pick up
objects that are only marginally or not detected in the blue bands, but which at brighter magnitudes can be compared
with previous HST$+$Spitzer work. After some experimentation we settled on the following:
\begin{enumerate}
\item We require that the photometry of a source be good in all 7 bands, i.e. no artefacts or chip boundaries affecting it which we determined by checking for flagged pixels near the source center. This
removes 22\% of all sources in the input sample.
\item We define a magnitude we call RED\_BRIGHT, which is the brightest magnitude of a source in {\it any} of the
red bands.
\item Next we similarly define BLUE\_BRIGHT for the brightest of the blue bands.
\item We select RED\_BRIGHT $-$ BLUE\_BRIGHT $>1.0$
\item We examine the results as a function of the RED\_BRIGHT magnitude limit.
\end{enumerate}
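A minimal NumPy implementation of this cut might look as follows; the three-object catalog of AB magnitudes is invented for illustration, with 99.0 standing in for a non-detection.

```python
import numpy as np

# Columns: F090W, F115W, F150W | F200W, F277W, F356W, F444W (AB magnitudes);
# 99.0 stands in for a non-detection. These rows are invented examples.
cat = np.array([
    [28.9, 28.5, 28.3, 26.1, 25.9, 26.0, 25.8],   # red break: selected
    [26.0, 25.9, 25.8, 25.7, 25.6, 25.7, 25.5],   # flat SED: rejected
    [99.0, 99.0, 28.6, 27.2, 26.4, 26.3, 26.2],   # blue-undetected: selected
])

blue_bright = cat[:, :3].min(axis=1)   # brightest blue band (smallest mag)
red_bright = cat[:, 3:].min(axis=1)    # brightest red band

# "At least one red band a magnitude brighter than every blue band", plus the
# RED_BRIGHT < 27 depth cut; note that brighter means a smaller AB magnitude.
selected = (blue_bright - red_bright > 1.0) & (red_bright < 27.0)
```

Taking the minimum over each group of bands is what makes the cut sensitive both to single-band excesses (e.g. emission lines) and to red continua.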
This results in galaxies where at least one of the red bands is one magnitude brighter than {\em all of the blue bands}.
This selection has several advantages: first it can pick up sources that are bright in only one red band (such as might be
due to emission lines contributing at certain wavelengths) as well as continuum sources that are bright in many red bands. The RED\_BRIGHT $-$ BLUE\_BRIGHT $>1.0$
selection is defined in AB magnitudes, which is convenient as blue continuum sources such as star-forming galaxies have
$\sim$ constant AB magnitudes with wavelength, and our survey sensitivity is also $\sim$ constant
between bands
(within a factor of two) in Janskies. Secondly
by utilising a one-magnitude break, the red color selection is similar to previous methods that have been used to
find high-redshift quiescent galaxies (e.g. \citealt{S14}), dusty galaxies \citep{Marchesini2010,Spitler2014,Franx2003} and Lyman break galaxies \citep{Steidel2003}. Finally at the faint
magnitudes it picks up sources undetected in the blue bands while at bright magnitudes it picks up previously known
red populations.
We consider sources down to RED\_BRIGHT $< 27.0$. At this magnitude limit the peak red fluxes in our aperture are detected at $>10\sigma$, making them robust sources. Also critically, the blue limit for the faintest sources then corresponds to a $>3 \sigma$ detection, so we can be confident that the sources are reliably at RED\_BRIGHT $-$ BLUE\_BRIGHT $\gtrsim$ 1 even if
not detected in the blue bands. One caveat to note is
that by construction our catalog is F444W selected, with a point source completeness limit of 28.9 (Paper II). This translates
to $\simeq$ 27.3 for our aperture. Thus although a candidate may be bright in another red band it will always have
some significant F444W flux. An advantage of F444W selection is that it probes
out to $z=7$ the rest frame optical where
stellar mass-to-light ratios have smaller variation than in the rest-frame ultraviolet. We ran a set of simple simulations (following the methodology of \citealp{KG2004} but with $z_{\rm form}=30$) using PEGASE.2 models \citep{PEGASE.2} and determined that, for maximally old galaxies in the absence of significant dust obscuration, this corresponds to
a strict stellar mass completeness limit of $6\times 10^8$ M$_\odot$
at $z=3$ and $2\times 10^9$ M$_\odot$ at $z=7$. Younger galaxies will be selected below
these mass limits as they have lower mass-to-light ratios.
\section{Methodology}
\label{method}
We use v3.1 of the GLASS-ERS NIRCAM images and catalogues from Paper II, which contains 6590
sources. After selecting 7 bands of good photometry as noted above we have 5130 sources.
Applying our RED\_BRIGHT$-$BLUE\_BRIGHT$>1$ selection we have 162 sources\footnote{Listed in the Supplementary Information online.} with RED\_BRIGHT$<27$.
For these we fit the photometric redshifts and SEDs using the EAZY software \citep{EAZY},
specifically {\tt eazy-py} version 0.5.2. Our EAZY fits use the `empirical' template set {\tt eazy\_v1.3.spectra} which include high
equivalent width emission line components which have proved important for fitting high-redshift sources \citep{S16}.
EAZY is a robust and accurate photometric redshift
and multi-component SED fitting tool
that has been utilised and validated in many deep surveys (e.g. \citealt{S16,Whit11,3DHST}). However with only 7 near-infrared bands and a new telescope and instrument we approach it with caution and have inspected
all of the SED fits in our faint sample. Inspection of this sample (RED\_BRIGHT$>$25; 67 objects) led to the
further removal of 19 sources associated with image artefacts, blending with
bright neighbours or chip edges.
Of the remainder, we find about 20\% are bad fits; these are generally at the fainter end and have broad or multiply peaked redshift probability distributions ($P(z)$), though there are a handful of higher signal-to-noise objects that simply cannot be fit by EAZY. The majority, 80\%, are excellent fits, with the break between the blue and red bands accurately recovered and smoothly peaked redshift probability distributions. We interpret this high fraction as arising because
breaks in photometry in general provide good photometric redshift constraints, and our sample includes
these by construction. We caution that the reliability of photometric redshifts for sources outside our selection may be much less.
In Figure~1 we plot RED\_BRIGHT vs photometric redshift for our sources and
mark the typical limits of HST and Spitzer surveys. As a reference for this we take the Hubble Frontier Fields (HFF) depth from \cite{Shipley2018} which is
the deepest near-infrared survey with HST. Their HST F160W point-source completeness limit, when corrected for our aperture, corresponds to BLUE\_BRIGHT$=26.0$; we mark objects fainter than this in the blue channels with open circles.
The Spitzer 3.6$+$4.5\micron\ bands are similar to our F356W and F444W bands. In the HFF their
depth was AB$=$25 (an
aperture correction is inapplicable as Spitzer's broad PSF makes faint objects
effectively point sources). While there are significantly deeper Spitzer surveys they become
seriously confusion limited and incomplete for AB$>25$ (see Figure~14 of \citealt{S-CANDELS}). This issue is normally addressed by modelling Spitzer fluxes using HST images as priors on source location, which introduces a dependence on detection in the bluer bands.
Therefore we mark RED\_BRIGHT$=$25 as the approximate limit for sources found with Spitzer,
noting that Spitzer photometry of HST detected sources can go considerably deeper.
We use Prospector \citep{Johnson2021a} to derive stellar masses, star formation histories, and dust attenuation for the galaxies in our sample, because Prospector includes a physical treatment of the effect of emission lines on the photometry.
We use a non-parametric {\tt continuity\_flex\_sfh} star formation history with 4 SFH bins.
We use a \citet{Kroupa2001} IMF and fix the redshift of the galaxies at the best fit EAZY values.
We use a \citet{Calzetti2000} dust law and let the dust optical depth vary between 0--2.0.
We vary the stellar metallicity between $\log_{10}(\mathrm{Z}/\mathrm{Z}_\odot)=-2$ and 0.19.
We further fix the gas-phase metallicity to be the same as the stellar metallicity and allow the ionisation parameter of the galaxies to vary between U=$-1$ and $-4$. We have inspected the Prospector SED
fits and find them to agree well with the EAZY SED fits.
We define `quiescent galaxies' as those with specific star-formation rates
$\log_{10}(\mathrm{sSFR}/\mathrm{yr}^{-1})<-9.4$. This is a factor of 4
below the main sequence at $3<z<4$ from \cite{S18}.
We estimate dust attenuation $A_V$ from the {\tt dust2} parameter of the Prospector SED fits and encode this in 3 bins in Figure~1. By inspecting the SEDs by eye we have verified that these attenuation classifications
accord well with the shape and steepness of the best fit SEDs.
\section{Discussion of Sources}
\label{population}
Several trends are apparent in the source population. First, it can be seen
in Figure~1 that at bright magnitudes the sample is dominated by
quiescent galaxies and dusty star forming galaxies at $z\sim 2$. This is a well-known result, as discussed
in the introduction, and one might expect to see more such objects at fainter magnitudes. However, the nature of the population shifts: at RED\_BRIGHT$>$25 the population is dominated ($\sim$ 70\%) by low attenuation ($A_V<1$)
star forming galaxies at $2<z<6$.
We also see candidate star-forming galaxies at $z>11$ appearing, which we will discuss in detail later.
We present examples showing the ranges of sources at the faint end in Figure 2. ID numbers refer
to the catalog of Paper II.
To start with, ID582 and ID1284 show examples of the blue $z\sim 4$ star forming galaxies that are the dominant population selected by our criteria.
It can be seen that the increased flux $>2$\micron\ comes from the Balmer break together with a strong contribution from H$\beta$ and [OIII] emission lines. These SEDs cannot be fit well without strong
line flux contributions: if one removes the high equivalent
width template from EAZY, the median $\chi^2$ SED residual of the faint sample increases significantly from 2.2 to 7.7. The contribution of such
emission lines can be seen, for example, in the F277W boost of ID582
and ID1284 in Figure~1; it is even evident by eye in the F277W images.
The number of such
sources was notable, so we investigated what level of emission-line equivalent width
was needed to give such boosts to the photometry. To do this we measured the summed
H$\beta$ + [OIII] 4959,5007\AA\ equivalent widths of the best fit Prospector models. This confirms strong emission lines are needed
and gives an indication of the level of line strengths required; the distribution is shown
in Figure~3. It can be seen that very high equivalent widths are indicated, with a rest-frame
range of 200--1400\AA. This makes
sense as the NIRCAM filters are quite broad; for example, boosting F277W by
50--70\% (as shown by the SEDs of ID582 and ID1284) requires an {\em observed} frame equivalent width of
3000--5000\AA. The majority of the faint sample are low mass: we find the
typical range of stellar mass is $10^8$--$10^9$ M$_\odot$ and the range of star formation rate from SED fitting (Prospector age$<100$ Myr bin)
is 1--4 M$_\odot$ yr$^{-1}$. SED fitting is an unreliable method for measuring
star formation rates; these values are likely underestimated given the equivalent widths and
should be revisited by measuring emission line fluxes from spectra.
The high equivalent widths are in contrast with sources having bright magnitudes, where the median is $\sim 80$\AA\ for RED\_BRIGHT$<$25.
The implied equivalent width distribution is very similar to that identified
at $1<z<3.4$ in our companion paper (Paper VI; \citealp{Boyett2022}) using NIRISS Wide Field Slitless Spectroscopy. We are sensitive
to higher redshifts because NIRISS is limited to wavelengths $<2$\micron\ whereas our
selection by construction picks out excess emission at $>2$\micron.
Objects with these kinds of extreme equivalent widths have been seen before, similarly selected
via filter boosts (e.g., \citealp{Malkan2017}). For example at $z\sim 3$ \cite{Forrest2017} identified galaxies
with equivalent widths of $\sim$ 800\AA\ by virtue of their excess in medium band
filters. At higher redshift ($z>7$) Spitzer photometry has indicated very high line
emission boosts comparable to what we find \citep[e.g.][]{Borsani2020,3DHST}. Such objects
are interpreted as very young galaxies with high star formation rates but little
stellar mass yet formed and are important to study due to their
potential role in cosmic reionization at $z>7$.
The space density we find at $2<z<6$ is high,
$\sim 2\times 10^{-4}$ Mpc$^{-3}$ for equivalent widths $>400$\AA\ and RED\_BRIGHT$>$25,
a factor of $\sim 7$ higher than \cite{Forrest2017}, although
we note we are probing to considerably fainter magnitudes, hence lower stellar masses, and higher redshifts. Compared
to all galaxies in NIRCAM at the same magnitudes they are $\sim$ 1\% of the population.
Compared to all $2<z<6$ galaxies in the entire NIRCAM sample they are $\sim 2$\%, though we caution we have not validated the photometric redshifts for the broader
sample.
ID1826 shows an example of a dustier star forming galaxy at $z\sim 4$ with $A_V=1.3$; such objects are less common in the faint sample.
Examples of even rarer selected sources are shown on the lower panels. ID5029 is well fit by
a $z=2.0$ quiescent
galaxy with an extremely low stellar mass of $8\times 10^8$ M$_\odot$,
star formation rate $\lesssim$ 0.3 M$_\odot$ yr$^{-1}$
and with moderate
dust attenuation ($A_V=1.3$). We expect such low mass
quiescent galaxies to be significantly rarer than their massive cousins (of which many
examples can be seen at brighter magnitudes in Figure~1) because the
quiescent $z\sim 2$ galaxy stellar mass function of \cite{Tomczak2014} declines at low masses. However
we note that this mass function is significantly incomplete below $3\times 10^9$ M$_\odot$.
The source is resolved with an estimated size of 0.2--0.4 arcsec; this is similar
to those of the lowest mass ($\sim 10^{10}$ M$_{\odot}$) quiescent
galaxies of \cite{z2QGs}.
ID5029 is likely the lowest
mass quiescent galaxy yet identified at $z\sim 2$. We note it has F200W $=$ 27.7, considerably fainter than the limit of ground based $K$-band surveys \citep{S16}.
In a companion paper (Paper IX; \citealt{QGs}) we present
the first spectra from JWST of two low mass ($\sim 10^{10}$ M$_\odot$) quiescent galaxies. These results
augur well for the future prospects of JWST
to measure the properties of quiescent galaxies at low masses.
ID2034 is a point source and has an unusual SED with a strong rise between F356W and F444W; the residual flux in F115W strongly rules
out a $z>10$ solution. It is much better fit by a cool star: using the Phoenix stellar templates
built into EAZY we find a 400\,K Y dwarf is an excellent fit. This
demonstrates how important it is to consider cool star templates
when evaluating very high redshift solutions.
We explore this object in more
detail in our companion Paper XIII \citep{T-dwarf} -- which describes the independent discovery -- with a more sophisticated set of
stellar templates, and conclude it is a star on the T/Y boundary. It is the first ultracool
dwarf to be discovered by JWST; its faint magnitude places it well outside the Milky Way thin disk.
ID4387 is a high confidence $z=12.6$ Lyman break galaxy candidate with a pronounced Lyman dropout between F150W and F200W; it was presented in detail in our
companion Paper III \citep{Castellano2022}, where it was discovered by classical Lyman break color
selection. We note the other bright galaxy in that paper, at $z=10.6$, is at too low a redshift to be selected by our method here; it has too much flux in F150W. Our method is not sensitive
to Lyman break galaxies with redshifts $7<z<11$ as they have strong rest-ultraviolet
continuum in the blue bands.
ID679 is a possible $z=15.9$ Lyman break galaxy candidate with F444W=26.9;
it has a candidate Lyman break between F200W
and F277W, a blue continuum shape at longer wavelengths ($>2$\micron),
and no flux detected in the bluer channels.
It is quite faint;
because of this the F115W$-$F200W color is not constrained well enough to
have put it in the color selection window for these redshifts of Paper III.
This also means the $z=15.9$ solution is not robust; the SED and $P(z)$
of this object show that there is significant probability of a low
redshift ($z\sim 1$) solution. We show this alternate solution in the figure. Given low
redshift is a priori more likely, we cannot regard
this as a strong candidate. The other $z=16.4$ object in Figure~1, ID2060,
is similar and also has low-$z$ solutions; furthermore, it lies
very close to the diffraction
spike of a nearby bright galaxy that may contaminate the photometry.
The discovery of $z\sim 16$ F150W dropouts has attracted a lot of recent attention \citep{Fink2022,Donnan2022,Atek2022} and is scientifically
important for our understanding of early galaxy formation; however, as seen here,
SEDs at these redshifts may be ambiguous unless they have very high signal-to-noise
\citep[e.g.,][]{Zavala2022}. Confirming the two objects
in our sample would require significantly deeper imaging data shortward of F200W.
These results do however indicate that our technique is a promising
alternative to traditional methods to discover more of the very high-redshift objects.
\section{Conclusions}
\label{conclusion}
We make a first exploration of the deep sky considering the faintest
very red sources that emerge at wavelengths $>2$\micron\ in JWST NIRCAM bands. Such sources would not have been seen by previous surveys. We utilise a novel, general search method
that does not depend on any particular choice of SED class.
We find 48 sources ($\sim 5$ arcmin$^{-2}$) that are detected in one or more bands beyond $2\micron$ but are absent or only marginally detected in bluer bands. Our primary conclusions are:
\begin{enumerate}
\item Our novel selection method picks out a diversity of different classes of interesting sources.
\item Contrary, perhaps, to a naive intuition, the population is dominated by low mass faint blue galaxies
at $z\sim 4$, where the Balmer break and high equivalent width H$\beta+$[OIII] emission lines are redshifted into
the red bands. Such objects are more numerous than those uncovered
by pre-JWST surveys.
\item We find a few exotica such as a cool and distant T dwarf star and a very low mass quiescent galaxy at $z=2$.
\item We recover a robust $z=12.6$ Lyman break galaxy found by earlier color selection and
identify two additional, weaker, candidates at $z\sim 16$. However, these two are not robust due to their
very faint magnitudes. Nevertheless, this shows that our method
has the potential to be a useful alternative to classical techniques in such searches.
\end{enumerate}
The uncovering of new populations of galaxies in the red channels shows the promise of JWST data for fully characterising the population of high-redshift galaxies and of stars in our galaxy. This
analysis is only a preliminary first look at what is revealed by the red NIRCAM channels. Future work can greatly improve the statistics utilising improved NIRCAM calibrations and deeper and wider JWST surveys. NIRCAM slitless spectroscopy ought to be able to quickly
confirm the existence of an abundant $z\sim 4$ population of strong line emitters.
Finally, it would be valuable to add mid-infrared data from MIRI to better characterise the full SED shapes of the reddest objects that JWST/NIRCAM will find.
\acknowledgments
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program JWST-ERS-1324. We acknowledge financial support from NASA through grants JWST-ERS-1342.
KG, TN and CJ acknowledge support from Australian Research Council Laureate Fellowship FL180100060. NL and MT acknowledge support by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. CM acknowledges support by the VILLUM FONDEN under grant 37459. The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant DNRF140. MB acknowledges support from the Slovenian national research agency ARRS through grant N1-0238.
\bibliographystyle{aasjournal}
\bibliography{myreferences}{}
|
Title:
Gravity Tests with Radio Pulsars in Perturbative and Nonperturbative Regimes |
Abstract: Searches for empirical clues beyond Einstein's general relativity (GR) are
crucial to understand gravitation and spacetime. Radio pulsars have been
playing an important role in testing gravity theories since 1970s. Because
radio timing of binary pulsars is very sensitive to changes in the orbital
dynamics, small deviations from what GR predicts can be captured or
constrained. In this sense, the gravity sector in the standard-model extension
was constrained tightly with a set of pulsar systems. Moreover, compact objects
like pulsars can develop nonperturbative deviations from GR in some
specific alternative gravity theories, so radio pulsars also provide rather
unique testbeds in the strong-gravity regime.
| https://export.arxiv.org/pdf/2208.00142 |
\newcommand{\refeq}[1]{(\ref{#1})}
\def\etal {{\it et al.}}
\title{Gravity Tests with Radio Pulsars in Perturbative and Nonperturbative
Regimes}
\author{Lijing Shao$^{1,2}$}
\address{$^1$Kavli Institute for Astronomy and Astrophysics, Peking University,
Beijing 100871, China}
\address{$^2$National Astronomical Observatories, Chinese Academy of Sciences,
Beijing 100012, China}
\bodymatter
\section{Introduction}
Among the four fundamental forces in Nature, gravity is rather unique as it
is described in the language of differential geometry, while the other three
forces are understood in terms of quantum field theory. Therefore, to go beyond
the current paradigm of modern physics, which consists of Einstein's general
relativity (GR) and the standard model of particle physics, gravity might hold
the key. Empirical studies of gravitation and spacetime are important to
provide clues to a deeper fundamental theory, probably quantum
gravity.\cite{review,datatables} In testing gravity theories, radio pulsars have
been playing an important and unique role since the discovery of the
Hulse-Taylor pulsar in the 1970s. In this short contribution, we will briefly review
some interesting bounds from pulsar observations in a perturbative framework,
called the standard-model extension (SME),\cite{sme} as well as in some specific
scalar-tensor gravity theories where nonperturbative strong-field phenomena
might develop inside neutron stars. Pulsar timing puts remarkable limits in both
perturbative and nonperturbative gravity regimes.
\section{Perturbative weak-field expansion of gravity}
As GR has been confronted with various kinds of experiments and observations for
a century, passing all tests with flying colors,\cite{review,datatables}
one might expect only small deviations from it, at least in the weak-field
limit. The gravity sector of the SME is designed in the spirit of effective field
theory, and it categorizes all kinds of operators beyond GR by introducing SME
coefficients for Lorentz/CPT violation.\cite{sme} In the pure gravity sector,
the most generic Lagrangian for linearized gravity reads,
\begin{equation}
\label{eq:sme}
\mathcal{L}_{\mathcal{K}^{(d)}}=\frac{1}{4} h_{\mu \nu}
{\hat{\mathcal{K}}}^{(d) \mu \nu \rho \sigma} h_{\rho \sigma} \,,
\end{equation}
where $\hat{\mathcal{K}}^{(d) \mu \nu \rho \sigma}=\mathcal{K}^{(d) \mu \nu \rho
\sigma i_{1} i_{2} \cdots i_{d-2}} \partial_{i_{1}} \partial_{i_{2}} \cdots
\partial_{i_{d-2}}$ is a complicated operator with derivatives contracted with
SME coefficients $\mathcal{K}^{(d) \mu \nu \rho \sigma i_{1} i_{2} \cdots
i_{d-2}}$. The complete Lagrangian~(\ref{eq:sme}) can be very cumbersome and
contains an infinite number of field operators. However, in the sense of
effective field theory, it is likely that terms of the lowest mass dimensions
dominate in certain low-energy experiments.
In a modified gravity, a binary orbit is generally altered. This results in
characteristic changes in the times of arrival, the main observables, for binary
pulsars. In turn, dedicated long-term observations of radio pulsars can provide
stringent limits on various types of modifications of the gravitational
interaction. An updated list of gravity tests in the SME framework provided by
pulsars includes tests of:\cite{pulsarsme}
\begin{itemize}
\item the minimal gravity sector with operators of mass dimension four,
\item CPT-violating operators of mass dimension five,
\item nonlinear operators of mass dimension eight which violate the
gravitational weak equivalence principle,
\item matter-gravity couplings with operators of mass dimensions three and
four, and
\item abnormal spin behaviours caused by the Lorentz-violating neutron star
structure, or due to gravity and matter-gravity couplings.
\end{itemize}
A summary of limits from pulsar timing experiments can be found in the {\it Data
Tables for Lorentz and CPT Violation},\cite{datatables} and for details readers
are referred to the original publications.
\section{Nonperturbative strong-field gravity}
The treatment in the SME assumes the smallness of any kind of deviation
from GR. However, neutron stars are strongly self-gravitating objects. As
discovered by Damour and Esposito-Far\`ese in the 1990s, a nonperturbative phenomenon
called ``spontaneous scalarization'' can occur for neutron stars in a class
of scalar-tensor gravity theories.\cite{def} This behaviour introduces an extra
dipolar channel for gravitational radiation in a binary and can be constrained
by pulsar timing, via the orbital decay rate parameter, $\dot P_{\rm b}$. There
are a few variants of scalar-tensor gravity theories, including those with a
massive scalar field\cite{massiveSTG} and with a topological Gauss-Bonnet
term.\cite{gb} Scalarized neutron stars are illustrated in
Fig.~\ref{fig:massiveSTG} for three representative massive scalar-tensor
theories, including Damour-Esposito-Far\`ese theory, Mendes-Ortiz theory, and a
$\xi$ theory motivated by cosmology. As we can see, scalar hair grows
for neutron stars in certain mass ranges because it is energetically favored. Current
pulsar-timing observations of a handful of neutron-star white-dwarf binaries and
asymmetric double neutron star binaries are able to put stringent constraints on
theory parameters.\cite{def, gb} Recently, gravitational waves have also started to
provide useful limits,\cite{gw} and in many cases, depending on the specifics of
theories under investigation, limits from pulsar timing and gravitational waves
are complementary to each other.
\section{Discussion}
Neutron stars are superb testbeds for gravitation and spacetime. Thanks to the
precision timing ability of large-area radio telescopes, gravity tests are
versatile with radio pulsars. A number of changes in the orbital dynamics of
different types can be probed. In particular, it was demonstrated for a couple
of times that, a set of carefully chosen binary pulsars are able to break
degeneracy of theory parameters and put combined limits on the SME coefficients
for Lorentz/CPT violations. These limits usually are very tight and provide
important experimental results for the SME community. On the other hand, in some
specific alternative theories of gravity, the perturbative treatment fails, and
nonperturbative hairs grow for certain neutron stars. In such cases, pulsar
timing is especially advantageous for empirical gravity tests, and provides
remarkable constraints on gravity in the strong-field regime, complementing
the new tests brought by observations of gravitational waves and black hole
shadows.
In summary, both perturbative and nonperturbative probes of the
gravitational interaction are useful and might lead to clues for quantum
gravity. Radio pulsars, whose timing results are extremely precise and improve
over time, stand as a unique testbed for gravity. In the upcoming years, we can
certainly expect improved tests from existing pulsar systems, as well as new
tests from yet-to-be-discovered pulsars, for example, possibly from pulsars in
binaries with black holes.
\section*{Acknowledgments}
I am grateful to Quentin Bailey, Alan Kosteleck\'y, and Norbert Wex for stimulating
discussions in the past few years. This work was supported by the National SKA
Program of China (2020SKA0120300), the National Natural Science Foundation of
China (11975027, 11991053, 11721303), and the Max Planck Partner Group Program
funded by the Max Planck Society.
|
Title:
Hundreds of Low-Mass Active Galaxies in the Galaxy And Mass Assembly (GAMA) Survey |
Abstract: We present an entirely new sample of 388 low-mass galaxies ($M_\star \leq
10^{10} M_\odot$) that have spectroscopic signatures indicating the presence of
massive black holes (BHs) in the form of active galactic nuclei (AGNs) or tidal
disruption events (TDEs). Of these, 70 have stellar masses in the dwarf galaxy
regime with $10^8 \lesssim M_\star/M_\odot \lesssim 10^{9.5}$. We identify the
active galaxies by analyzing optical spectra of a parent sample of $\sim$23,000
low-mass emission-line galaxies in the Galaxy and Mass Assembly (GAMA) Survey
Data Release 4, and employing four different diagnostics based on narrow
emission line ratios and the detection of high-ionization coronal lines. We
find that 47 of the 388 low-mass active galaxies exhibit broad H$\alpha$ in
their spectra, corresponding to virial BH masses in the range $M_{\rm BH} \sim
10^{5.0-7.7} M_\odot$ with a median BH mass of $\langle M_{\rm BH}\rangle \sim
10^{6.2} M_\odot$. Our sample extends to higher redshifts ($z \le 0.3; \langle
z \rangle=0.13$) than previous samples of AGNs in low-mass/dwarf galaxies based
on Sloan Digital Sky Survey spectroscopy, which can be attributed to the
spectroscopic limit of GAMA being $\sim 2$ magnitudes deeper. Moreover, our
multi-diagnostic approach has revealed low-mass active galaxies spanning a wide
range of properties, from blue star-forming dwarfs to luminous "miniquasars"
powered by low-mass BHs. As such, this work has implications for BH seeding and
AGN feedback at low masses.
| https://export.arxiv.org/pdf/2208.04960 |
\title{Hundreds of Low-Mass Active Galaxies in the Galaxy And Mass Assembly (GAMA) Survey}
\author[0000-0002-4587-1905]{Sheyda Salehirad}
\affiliation{eXtreme Gravity Institute, Department of Physics, Montana State University, Bozeman, MT 59717, USA }
\author[0000-0001-7158-614X]{Amy E.\ Reines}
\affiliation{eXtreme Gravity Institute, Department of Physics, Montana State University, Bozeman, MT 59717, USA }
\author[0000-0001-8440-3613]{Mallory Molina}
\affiliation{eXtreme Gravity Institute, Department of Physics, Montana State University, Bozeman, MT 59717, USA }
\affiliation{Department of Physics and Astronomy, University of Utah, 115 South 1400 East, Salt Lake City, UT 84112, USA}
\keywords{Active galaxies -- Active galactic nuclei -- Low-mass galaxies -- Dwarf galaxies -- Black holes -- Low-luminosity Active galactic nuclei}
\section{Introduction}\label{sec:intro}
Supermassive black holes (BHs) are found in the nuclei of almost all massive galaxies \citep[e.g.][]{Kormendy:1995,Kormendy:2013}; however, the ``memory'' of BH seeding is erased during merger-driven growth over cosmic time \citep[e.g.][]{Volonteri:2010,Natarajan:2014}. The current proposed seeding models include remnants of Population III stars \citep[e.g.,][]{Bromm:2011}, direct collapse scenarios \citep[e.g.,][]{Loeb:1994,Begelman:2006,Lodato:2006}, and runaway collisions in dense star clusters \citep[e.g.,][]{Portegies:2004,Devecchi:2009,Miller:2012}. These models result in different BH seed masses: the remnants of Population III stars would create seeds with $M_{\rm BH}\sim100~M_\odot$, while stellar collisions and direct collapse would create BHs with $M_{\rm BH}\sim10^{3}\mbox{--}10^{5}~M_\odot$.
While the early BH seeds at high redshift are too faint to be detected with current facilities \citep[e.g.][]{Volonteri:2016,Vito:2018,Schleicher:2018}, lower-mass galaxies, especially nearby dwarf galaxies, that harbor massive BHs can constrain BH seed models \cite[see][for reviews]{Greene:2020,Reines:2022}. The relatively quiet merger history of dwarf galaxies \citep{Bellovary:2011} as well as supernova feedback that may stunt BH growth \citep{Habouzit:2017,Angl:2017} can leave their BH masses close to their initial seed mass.
Finding and studying BHs in dwarf galaxies is also important for understanding the role of both negative \citep{Manzano:2019} and positive \citep{Schutte:2022} AGN feedback in the low-mass regime.
There are multiple ways to search for BHs in the form of active galactic nuclei \citep[AGNs; see][for a review]{Ho:2008,Kewley:2019}. In the optical regime, narrow-line ratio diagnostic diagrams \citep[e.g.,][]{Baldwin:1981,Shirazi:2012} that differentiate between star forming (SF) and AGN ionizing spectral energy distributions (SEDs) have been employed to identify AGN activity in lower-mass and dwarf galaxies \citep[e.g.,][]{Reines:2013,Moran:2014,Sartori:2015,Baldassare:2016}. Moreover, detection of broad H$\alpha$ emission \citep{Greene:2004,Greene:2007,Dong:2012,Reines:2013,Chilingarian:2018} can be indicative of the presence of dense gas in the broad line region (BLR) around a BH, thus suggestive of AGN activity in galaxies. High-ionization coronal emission lines, such as [\ion{Fe}{10}]$\lambda6374$ and [\ion{Ne}{5}]$\lambda3426$, can also be produced in the presence of massive BHs, thus an indicator of AGN activity \citep[e.g.,][]{penston:1984,prieto2002,satyapal:2008,goulding:2009,cerquiera:2021,Molina:2021,Molinafex:2021,Schmidt:1998ne5,Gilli:2010}.
There are selection biases associated with each AGN diagnostic, which results in the selection of different populations of galaxies. The narrow-line diagnostic diagrams typically observe high-accretion rate AGNs \citep{Greene:2020} and struggle with identifying low ionization nuclear emission regions (LINERs), low-luminosity AGNs (LLAGNs), and shock activity \citep{Ho:2008,Molina:2018,Kewley:2019}. This leaves a significant portion of lower-mass galaxies with lower accretion rates unexplored. Moreover, the radiation from the host galaxy can obscure AGN activity in SF galaxies \citep{Moran:2002,Groves:2006, Stasinska:2006,cann:2019}.
Thus, it is crucial to conduct searches for AGN activity that can minimize these effects and probe different populations of lower-mass galaxies.
In this paper, we present a spectroscopic search for BH activity in low-mass galaxies utilizing data from the Galaxy And Mass Assembly (GAMA) survey Data Release 4 \citep[DR4;][]{Liske:2015,Driver:2022}. We analyze the spectra and search for AGN signatures in galaxies with stellar masses $M_\star\leq10^{10}M_\odot$ and redshifts $z\leq0.3$. Given that the GAMA spectroscopic survey covers different sky regions and is approximately two magnitudes deeper than the Sloan Digital Sky Survey (SDSS) spectroscopic survey \citep{york:2000}, where most previous optical searches have been conducted \citep[e.g.,][]{Greene:2007,Reines:2013,Moran:2014},
we aim to find novel AGN candidates in this stellar mass range.
We proceed by employing four AGN diagnostics, including
two narrow-line diagnostic diagrams ([\ion{O}{3}]/H$\beta$ vs.\ [\ion{N}{2}]/H$\alpha$ and \ion{He}{2}/H$\beta$ vs.\ [\ion{N}{2}]/H$\alpha$), as well as searching for the [\ion{Fe}{10}]$\lambda6374$ and [\ion{Ne}{5}]$\lambda3426$ high-ionization coronal emission lines. This multiple diagnostic approach allows us to perform a more comprehensive search for AGN activity in this low-mass range, and thus potentially identify massive BHs from different populations of galaxies (e.g. in terms of their masses and colors).
We explain the data and our sample selection process in section \ref{sec:data} and the analysis of the GAMA spectra in section \ref{sec:analysis}. The results of each emission-line diagnostic and the host galaxy properties are included in sections \ref{sec:results} and \ref{sec:host_properties}, respectively. A summary and conclusions are presented in
section \ref{sec:discussion_summary}. Here we assume a $\Lambda$CDM cosmology with $\Omega_m=0.3$, $\Omega_\Lambda=0.7$ and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$.
\section{Data and Parent Sample of Low-Mass Galaxies}\label{sec:data}
\subsection{The GAMA Survey}\label{sec:GAMA_survey}
The GAMA Survey includes optical spectroscopy taken with the AAOmega multi-object spectrograph on the 3.9 m Anglo-Australian Telescope \citep[AAT;][]{Saunders:2004,Smith:2004,Sharp:2006}. The spectrograph is equipped with a dual-beam setup that covers the wavelength range of 3730--8850 \AA\ with a dichroic split at 5700 \AA. The spectral resolutions of the blue and red arms are 3.5 and 5.3 \AA, respectively, and the spectroscopic fibers are 2\arcsec\ in diameter.
In this work, we utilize spectra and stellar masses released in GAMA DR4 covering three equatorial 60 deg$^2$ regions (G09, G12 and G15) and two southern $\sim50$ deg$^2$ regions (G02 and G23).
The combined limiting magnitude for the main survey objects in the equatorial and G23 regions is $r<19.65$ mag and the G02 region has a limiting magnitude of $r<19.8$ mag \citep{Baldry:2018,Driver:2022}.
\subsection{Parent Sample}\label{sec:sample_selection}
The GAMA database is stored in tables organized into data management units (DMUs)\footnote{\url{http://www.gama-survey.org/dr4/schema/}}.
The current GAMA spectra are provided in the \texttt{AATSpecAll v27} table in the \texttt{SpecCat} DMU \citep{Liske:2015}. We only use spectra whose redshift estimates are correct with a probability of at least 95\%. Additionally, if multiple GAMA spectra are matched to a single GAMA object, we use the spectrum that provides the best redshift for that object. We also exclude problematic spectra, such as those affected by fringing and bad splicing.
To select our parent sample of low-mass galaxies, we impose a stellar mass cut of $M_\star \leq 10^{10} M_\odot$ using galaxy stellar masses provided by GAMA,
which are stored in the \texttt{StellarMasses} DMU \citep{Taylor:2011}. Stellar masses are obtained from stellar population fits to multiband SEDs. We utilize the mass estimates in the \texttt{StellarMassesGKV v24} table \citep{Driver:2022} for the equatorial and G23 survey regions, which uses matched-segment photometry across all bands derived
from the Kilo-Degree Survey \citep[KiDS;][]{Kuijken:2019} and the Visible and Infrared Survey Telescope for Astronomy Kilo-degree Infrared Galaxy Public Survey \citep[VIKING;][]{Edge:2013}.
Stellar masses for galaxies in the G02 survey region are provided in the \texttt{StellarMassesG02CFHTLS v24} and \texttt{StellarMassesG02SDSS v24} tables, which are based on multi-band SED fitting to Canada-France-Hawaii Telescope Lensing \citep[CFHTLenS;][]{Heymans:2012} and SDSS photometry, respectively. We utilize the mass estimates given in the \texttt{StellarMassesG02CFHTLS v24} table, but use the \texttt{StellarMassesG02SDSS v24} table to remove galaxies with masses that are different by at least 0.3 dex in both tables. Finally, we apply the mass constraint of $10^{5} \leq M_* \leq 10^{10} M_\odot$, which results in 52,782 objects.
In addition to the stellar mass constraint, we also employ signal-to-noise (S/N) cuts using emission line measurements provided by GAMA. In particular, we use the Gaussian-fit, emission-line fluxes and equivalent widths (EWs) from the \texttt{GaussFitSimple v05} table from the \texttt{SpecLineSFR} DMU \citep{Gordon:2017}. Following the \citet{Reines:2013} methodology, we impose the following requirements: the H$\alpha$, [\ion{O}{3}]~$\lambda5007$ and [\ion{N}{2}]~$\lambda6583$ lines must have ${\rm S/N}\geq3$ and ${\rm EW}>1$~\AA, and H$\beta$ must have ${\rm S/N}\geq2$. We also only include the objects with redshifts $z\leq0.3$ to ensure the [\ion{S}{2}] doublet is in the observed wavelength range. This leaves us with a parent sample consisting of 23,460 galaxies.
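The stellar-mass, redshift, S/N, and EW requirements above amount to a sequence of simple cuts; a minimal sketch (illustrative function and key names, not the paper's code):

```python
def passes_parent_cuts(lines, z, mstar):
    """Apply the parent-sample cuts described in the text.

    `lines` maps a line name to a (snr, ew_angstrom) pair; the names below
    are illustrative labels, not GAMA column names.
    """
    if not (1e5 <= mstar <= 1e10):   # stellar-mass window (Msun)
        return False
    if z > 0.3:                       # keep the [S II] doublet in the observed range
        return False
    for name in ("Halpha", "OIII5007", "NII6583"):
        snr, ew = lines[name]
        if snr < 3 or ew <= 1.0:      # require S/N >= 3 and EW > 1 A
            return False
    return lines["Hbeta"][0] >= 2     # H-beta only needs S/N >= 2
```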
\section{Analysis of the GAMA Spectra}\label{sec:analysis}
In this work, we use
a variety of optical emission line diagnostics
to search for AGN activity in our parent sample of low-mass emission-line galaxies. While we use the GAMA flux measurements to help define our parent sample, we create custom code to carry out our spectral analysis and search for AGN signatures. This includes fitting and subtracting the stellar continuum, separating broad and narrow H$\alpha$ and H$\beta$ components, and fitting various other emission lines. All of our custom code is written in the \texttt{Python} programming language\footnote{\url{https://www.python.org/}}.
\subsection{Stellar Continuum Subtraction} \label{sec:continuum}
The stellar continuum, which significantly contributes to the observed spectra of the galaxies in our parent sample, needs to be removed before we can search for emission-line signatures of AGNs. Stellar light will generally contain absorption features and it is especially important to model and remove Balmer absorption lines when searching for broad H$\alpha$ or H$\beta$ emission that could signify dense gas orbiting a massive BH.
We use the publicly available package \texttt{pPXF} \citep{Cappellari:2017} to find the best-fit stellar continuum model for each spectrum.
We use the \citet{Bruzual:2003} SSP models in the wavelength range of 3350 to 8850 \AA\ with spectral resolution of 3 \AA, which are calculated for 3 different metallicities ($Z=$ 0.008, 0.02, 0.05) and 10 different ages ($t = $ 0.005, 0.025, 0.1, 0.29, 0.64, 0.9, 1.4, 2.5, 5, and 11 Gyr). We model each spectrum with a combination of single-metallicity SSP models, modified by a low-order multiplicative polynomial to account for reddening by dust. This method yields acceptable continuum models, as well as plausible \texttt{pPXF} velocity dispersions, for the majority of the objects in our sample. However, if the velocity dispersion is unrealistically large (200--1000 km s$^{-1}$), we refit the continuum including additive polynomials, which can change absorption line strengths and thereby help minimize template mismatch \citep{Cappellari:2017}. This was the case for 95 objects. We select the model metallicity with the smallest $\chi^2$ value. The majority of the galaxies in our sample ($\sim$72\%) are best fitted by the sub-solar metallicity model $(Z=0.008)$. This is consistent with previous studies showing that low-mass galaxies generally have low metallicities \citep[e.g.,][]{Tremonti:2004}. Since our primary goal is to measure the emission lines, we aim for good fits to the stellar continua but do not fully explore the parameter space. An example of a fitted galaxy spectrum is shown in the top panel of Figure \ref{fig:spec_line_sample}. In the end, we subtract the best-fit model from the data to obtain a pure emission-line spectrum.
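The multiplicative-polynomial correction can be illustrated with a toy example: for a fixed template, the low-order scaling reduces to a least-squares polynomial fit to the ratio of observed spectrum to template. The sketch below (plain \texttt{Python}, degree 1, synthetic data) shows only this idea; it is not \texttt{pPXF} itself.

```python
import math

def fit_linear(x, y):
    """Closed-form ordinary least squares for y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Synthetic check: a wiggly "template" times a known linear trend.
lam = [4000.0 + 10.0 * i for i in range(100)]             # wavelength [A]
template = [1.0 + 0.1 * math.sin(l / 500.0) for l in lam]
observed = [t * (2.4 - 1e-4 * l) for t, l in zip(template, lam)]

# Recover the multiplicative correction from the observed/template ratio.
a, b = fit_linear(lam, [o / t for o, t in zip(observed, template)])
continuum_model = [t * (a + b * l) for t, l in zip(template, lam)]
```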
\subsection{Emission Line Measurements} \label{sec:lines}
We use the \texttt{LMFIT} package in python \citep{lmfit} to model the emission lines with Gaussians. For each spectral region that we fit, we also include a linear component in the model to account for uncertainties associated with the initial stellar continuum fit. Examples of fitted emission lines are shown in the bottom panel of Figure \ref{fig:spec_line_sample}.
Following the methodology in \citet{Reines:2013} and references therein, we first fit the [\ion{S}{2}]$\lambda\lambda$6716,6731 doublet with single Gaussian models for each line in the doublet. We assume equal widths for the lines (in velocity space) and hold their relative laboratory wavelengths fixed. We then fit each line in the [\ion{S}{2}] doublet with a two-component Gaussian model. In this case, we additionally constrain the relative heights, widths and positions of the two components to be the same for both lines. We adopt the two-component Gaussian model if the reduced $\chi^2$ is at least 20\% lower than that of the single Gaussian model. Only 15 galaxies meet this criterion and require a two-component Gaussian model for the narrow line profile.
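The tied-parameter structure of the [\ion{S}{2}] fit (equal widths in velocity space, laboratory line separation held fixed) can be sketched as follows. Rest wavelengths here are approximate, and the actual fits use \texttt{LMFIT}'s parameter constraints rather than this toy model.

```python
import math

C_KMS = 299792.458                   # speed of light [km/s]
SII_A, SII_B = 6716.44, 6730.82      # approximate rest wavelengths [A]

def gaussian(lam, amp, center, sigma):
    return amp * math.exp(-0.5 * ((lam - center) / sigma) ** 2)

def sii_doublet(lam, amp_a, amp_b, dv_kms, sigma_kms):
    """Tied [S II] doublet: both lines share one velocity shift (dv_kms)
    and one velocity width (sigma_kms), so their separation stays fixed
    at the laboratory value while only the amplitudes vary freely."""
    total = 0.0
    for amp, rest in ((amp_a, SII_A), (amp_b, SII_B)):
        center = rest * (1.0 + dv_kms / C_KMS)
        sigma = rest * sigma_kms / C_KMS   # equal widths in velocity space
        total += gaussian(lam, amp, center, sigma)
    return total
```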
We then fit the [\ion{N}{2}]$\lambda\lambda$6548,6583 doublet and narrow H$\alpha$ line based on the parameters from the [\ion{S}{2}] emission-line model, as the [\ion{N}{2}] and narrow H$\alpha$ line profiles are well-matched to the [\ion{S}{2}] lines \citep{Filippenko:1988,Filippenko:1989,Ho:1997,Greene:2004}. The relative separation between the [\ion{N}{2}] lines is held fixed using their laboratory wavelengths and the flux ratio of [\ion{N}{2}]$\lambda$6583/[\ion{N}{2}]$\lambda6548$ is set to the theoretical value of 2.96. We fix the width of the lines in the [\ion{N}{2}] doublet (in velocity space) to that of the [\ion{S}{2}] lines, but let the width of the narrow H$\alpha$ line increase by as much as 25\%. We scale the two-component [\ion{S}{2}] parameters for the 15 galaxies with two-component [\ion{S}{2}] models to fit the narrow-line emission of the [\ion{N}{2}] and H$\alpha$ group.
The [\ion{N}{2}]+H$\alpha$ complex is then fitted a second time with an additional broad H$\alpha$ component. If the computed reduced $\chi^2$ value is at least 20\% less than that of the narrow-line model, and the full-width at half maximum (FWHM) of the broad H$\alpha$ line is at least 500~km~s$^{-1}$ after correcting for the fiber-dependent instrumental resolution, we select the model with the broad H$\alpha$ component. We fit the H$\beta$ line using the same method as the H$\alpha$ line.
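The model-selection rule for the broad component can be written compactly; the thresholds are those quoted above.

```python
def accept_broad_halpha(red_chi2_narrow, red_chi2_broad, fwhm_broad_kms):
    """Adopt the broad-component model only if it lowers the reduced
    chi^2 by at least 20% AND the broad line has FWHM >= 500 km/s
    (after correcting for instrumental resolution)."""
    improves = red_chi2_broad <= 0.8 * red_chi2_narrow
    return improves and fwhm_broad_kms >= 500.0
```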
We also model the [\ion{O}{3}]$\lambda$5007 and [\ion{O}{1}]$\lambda$6300 emission lines. Since the [\ion{O}{3}] line normally shows a broad, blue shoulder \citep[e.g.,][]{Heckman:1981,Whittle:1985} and does not match the other line profiles \citep{Greene:2005}, we use an independent Gaussian model for the fitting process. We also need an independent [\ion{O}{1}] model to accurately model the [\ion{Fe}{10}] line, which is discussed below. We fit the [\ion{O}{3}] and [\ion{O}{1}] lines with one- and two-Gaussian models, and accept the two-component model if the measured reduced $\chi^2$ is lowered by at least 20\%.
We follow the methodology described in \citet{Molinafex:2021} to fit the [\ion{Fe}{10}]$\lambda$6374 line, which allows us to detect [\ion{Fe}{10}] even if it is blended with the [\ion{O}{1}]$\lambda$6363 line. We use the model parameters of the fitted [\ion{O}{1}]$\lambda$6300 line to describe the [\ion{O}{1}]$\lambda$6363 line.
Specifically, we shift the model using the laboratory line wavelengths, assume the same width in velocity, and keep the flux ratios of [\ion{O}{1}]$\lambda$6300/[\ion{O}{1}]$\lambda$6363 $=$ 3. We also add a linear fit to the continuum in this spectral region.
Finally, we subtract the [\ion{O}{1}]$\lambda$6363 Gaussian component and the linear fit so we are left only with a potential
[\ion{Fe}{10}] line, which we fit with a single Gaussian model.
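A minimal sketch of this deblending step, using approximate rest wavelengths and synthetic data: the [\ion{O}{1}]$\lambda$6363 profile is predicted from the fitted $\lambda$6300 parameters and subtracted, leaving any [\ion{Fe}{10}] emission behind.

```python
import math

OI_A, OI_B, FEX = 6300.30, 6363.78, 6374.51  # approx. rest wavelengths [A]

def gaussian(lam, amp, center, sigma):
    return amp * math.exp(-0.5 * ((lam - center) / sigma) ** 2)

def oi6363_model(lam, amp6300, center6300, sigma6300):
    """[O I]6363 predicted from the fitted [O I]6300 parameters: center
    shifted by the laboratory wavelength ratio, same velocity width, and
    an integrated flux ratio [O I]6300/[O I]6363 of 3."""
    shift = OI_B / OI_A
    return gaussian(lam, amp6300 / (3.0 * shift), center6300 * shift,
                    sigma6300 * shift)

# Synthetic spectrum: [O I]6363 blended with a weak [Fe X]6374 line.
grid = [6350.0 + 0.2 * i for i in range(200)]
spectrum = [oi6363_model(l, 3.0, OI_A, 2.0) + gaussian(l, 1.0, FEX, 2.0)
            for l in grid]

# Subtract the predicted [O I]6363 (the linear continuum fit is omitted
# here); the residual is the potential [Fe X] line.
fex_only = [s - oi6363_model(l, 3.0, OI_A, 2.0)
            for s, l in zip(spectrum, grid)]
```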
We also search for \ion{He}{2} $\lambda$4686 and [\ion{Ne}{5}]$\lambda3426$ lines and fit a single Gaussian model to each line. Given the observed wavelength range of the GAMA survey, we only search for [\ion{Ne}{5}] emission in galaxies with redshift $z\geq0.15$.
We use the parameters from the Gaussian models to calculate the emission-line fluxes.
We consider a line detected if the flux has a S/N $\geq3$.
In addition to the flux requirement, we require the line peak to be at least 3$\sigma$ above the noise for the relatively weak \ion{He}{2}$\lambda4686$, [\ion{O}{1}]$\lambda6300$, [\ion{Fe}{10}]$\lambda6374$, and [\ion{Ne}{5}]$\lambda3426$ lines, where the noise is determined as the root mean square (rms) of the continuum windows around the lines.
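These detection criteria can be summarized in a single check; the extra peak test applies only to the weak \ion{He}{2}, [\ion{O}{1}], [\ion{Fe}{10}], and [\ion{Ne}{5}] lines.

```python
def line_detected(flux, flux_err, peak, continuum_rms, weak_line=False):
    """A line is detected if its flux S/N >= 3; for the weak lines
    (He II, [O I], [Fe X], [Ne V]) the peak must additionally exceed
    3x the rms of the continuum windows around the line."""
    if flux_err <= 0 or flux / flux_err < 3.0:
        return False
    if weak_line and peak < 3.0 * continuum_rms:
        return False
    return True
```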
Finally, we visually inspect the AGN candidates that are flagged by our automated code and remove those that have spectra with missing pixel values within the emission lines, those affected by bad splicing or fringing, and those with bad fits to emission lines (e.g., noise or broad fits to the continuum).
Given that the [\ion{N}{2}] and H$\alpha$ lines are fitted based on the [\ion{S}{2}] model parameters, a good fit to the [\ion{S}{2}] lines is needed. However, if the flagged AGN candidates have strong [\ion{N}{2}] and H$\alpha$ detections, despite an unreliable [\ion{S}{2}] detection, we keep them as potential AGN candidates in our final sample.
\input{tables/all}
\input{tables/all_flux}
\section{AGN Selection}
\label{sec:results}
In this work, we search for various optical spectroscopic indicators of AGN activity using the emission line measurements described above.
In order to provide a comprehensive search for AGN activity, we employ four different AGN diagnostics that we consider to be relatively robust in the low-mass regime. These include the [\ion{O}{3}]/H$\beta$ vs.\ [\ion{N}{2}]/H$\alpha$ and \ion{He}{2}/H$\beta$ vs.\ [\ion{N}{2}]/H$\alpha$ 2D narrow emission line ratio diagrams
\citep{Baldwin:1981,Shirazi:2012}, as well as searching for [\ion{Fe}{10}] and [\ion{Ne}{5}] coronal-line emission \citep[][]{Molinafex:2021,Schmidt:1998ne5,Gilli:2010}.
We also search for broad H$\alpha$ emission \citep[e.g.,][]{Greene:2005,Reines:2013,Chilingarian:2018} in our parent sample, but only include
the broad-line AGN candidates that overlap with other AGN diagnostics in this work, since broad H$\alpha$ can also be produced by transient stellar phenomena in low-mass star-forming galaxies (e.g., Type II supernovae; \citealt{Baldassare:2016}).
We describe each of the four AGN diagnostics below, and present the results of applying each diagnostic to our parent sample of low-mass emission-line galaxies (also see Figure \ref{fig:AGNsamp}). The galaxy properties of the AGN candidates and their respective emission-line flux measurements are listed in Tables \ref{tab:gal_prop} and \ref{tab:flux}, respectively.
\subsection{[\texorpdfstring{\ion{O}{3}}{TEXT}]/H\texorpdfstring{$\beta$}{TEXT} vs. [\texorpdfstring{\ion{N}{2}}{TEXT}]/H\texorpdfstring{$\alpha$}{TEXT}}\label{sec:nii}
The photoionizing continuum from an AGN contains a larger fraction of high-energy photons relative to hot stars, which results in extended partially ionized regions in AGNs. In these regions, lines such as [\ion{N}{2}]$\lambda$6583, [\ion{S}{2}]$\lambda\lambda$6716,6731, and [\ion{O}{1}]$\lambda$6300 are produced by collisional excitation. This results in larger intensities of these lines with respect to H$\alpha$ in the narrow-line emission from AGNs than in \ion{H}{2} regions, which allows them to be separated in emission-line diagnostic diagrams.
The [\ion{O}{3}]/H$\beta$ vs.\ [\ion{N}{2}]/H$\alpha$ diagnostic diagram \citep{Baldwin:1981} has been widely used to separate SF galaxies from AGN-dominated ones. This diagram is metallicity sensitive, with SF galaxies varying in abundance from low metallicity (low [\ion{N}{2}]/H$\alpha$ ratio, high [\ion{O}{3}]/H$\beta$ ratio) to high metallicity (high [\ion{N}{2}]/H$\alpha$ ratio, low [\ion{O}{3}]/H$\beta$ ratio), while shocks and AGN-dominated galaxies generally have higher ratios of [\ion{O}{3}]/H$\beta$ and [\ion{N}{2}]/H$\alpha$. This results in a clear separation between SF galaxies and those with an AGN contribution in the general population of galaxies \citep[e.g., ][]{Kewley:2019}. However, this diagnostic diagram can struggle with identifying AGNs in low-mass galaxies, which tend to have lower metallicities than more massive ones. In other words, low-metallicity AGNs overlap with low-metallicity starbursts in this diagram \citep{Groves:2006} and so these AGNs may be missed. Nevertheless, this diagram appears to be robust at identifying bona-fide AGNs in the low-mass regime \citep{Reines:2013,Baldassare:2017}.
We employ this diagram as our first AGN indicator as shown in the left panel of Figure \ref{fig:bpt_nii}. We use the classification scheme outlined in \citet{Kewley:2006}, where star-forming/\ion{H}{2} galaxies fall below the empirical composite line from \citet{Kauffmann:2003}, AGN-dominated galaxies fall above the theoretical extreme starburst line from \citet{Kewley:2001}, and composite galaxies fall in between the two lines. We identify 71 AGNs and 238 composite galaxies in our parent sample by using this diagram.
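This classification scheme can be sketched as follows, using the standard published forms of the \citet{Kauffmann:2003} and \citet{Kewley:2001} demarcation curves.

```python
def kauffmann03(log_nii_ha):
    """Empirical SF/composite demarcation of Kauffmann et al. (2003)."""
    return 0.61 / (log_nii_ha - 0.05) + 1.3

def kewley01(log_nii_ha):
    """Theoretical extreme-starburst line of Kewley et al. (2001)."""
    return 0.61 / (log_nii_ha - 0.47) + 1.19

def classify_bpt(log_nii_ha, log_oiii_hb):
    """SF / composite / AGN classification in the [N II] BPT diagram."""
    x, y = log_nii_ha, log_oiii_hb
    if x >= 0.47 or y > kewley01(x):       # above the extreme-starburst line
        return "AGN"
    if x >= 0.05 or y > kauffmann03(x):    # between the two curves
        return "composite"
    return "SF"
```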
We also plot these AGN and composite galaxies in the [\ion{O}{3}]/H$\beta$ vs.\ [\ion{S}{2}]/H$\alpha$ and [\ion{O}{1}]/H$\alpha$ diagrams \citep{Veilleux:1987} as shown in the middle and right panels of Figure \ref{fig:bpt_nii}. In these diagrams, we use the classification scheme in \citet{Kewley:2006}, where the star-forming galaxies and the AGN candidates are separated by the theoretical extreme starburst line from \citet{Kewley:2001}, and the Seyfert-like and LINER-like galaxies by the Seyfert-LINER line. We find that 298/309 of the AGNs and composites have reliable [\ion{S}{2}] detections and 89/309 have reliable [\ion{O}{1}] detections (see section \ref{sec:lines}); of these, 39\% fall in the AGN region of the [\ion{S}{2}]/H$\alpha$ diagram and 74\% are AGN-like in the [\ion{O}{1}]/H$\alpha$ diagram. There are also 47 objects that show AGN activity in all three diagrams. Moreover, some of the AGNs/Composites selected by this diagnostic have additional AGN indicators (see Figure \ref{fig:AGNsamp} and the following subsections).
\subsection{\texorpdfstring{\ion{He}{2}}{TEXT}/H\texorpdfstring{$\beta$}{TEXT} vs. [\texorpdfstring{\ion{N}{2}}{TEXT}]/H\texorpdfstring{$\alpha$}{TEXT}}
\label{sec:he2}
Nebular \ion{He}{2} emission requires a relatively high ionization potential (54.4 eV) and is therefore produced by a hard ionizing spectrum, which may indicate AGN activity. The \ion{He}{2}/H$\beta$ vs. [\ion{N}{2}]/H$\alpha$ diagram proposed by \citet{Shirazi:2012} has been used to separate SF galaxies from AGN-dominated ones in dwarf galaxies \citep[][]{Sartori:2015}. While \ion{He}{2} emission can originate from AGN activity, stellar processes can also produce this line; thus care is needed when using this diagnostic.
We search for \ion{He}{2} emission in our parent sample of low-mass galaxies and identify 44 galaxies with detected emission, out of which 12 overlap with the [\ion{N}{2}]/H$\alpha$-selected AGNs and composites. We select the \ion{He}{2}/H$\beta$ AGN candidates in our sample by employing the criterion proposed in \citet{Molinafex:2021}, log(\ion{He}{2}/H$\beta$)$>-1$, as shown in Figure \ref{fig:he2_diag}. This limit is expected to be higher than that produced by X-ray binaries (XRBs) or Wolf-Rayet (WR) stars \citep{Schaerer:2019} and is slightly stricter than the criteria presented in \citet{Shirazi:2012}. We find that 36 of the \ion{He}{2}-emitting galaxies meet this criterion, out of which 10 are also [\ion{N}{2}]/H$\alpha$ AGNs and 1 is a composite object. The remaining \ion{He}{2}-emitting galaxy among the [\ion{N}{2}]/H$\alpha$-selected composite galaxies, while strictly in the \ion{H}{2} part of the diagram, is consistent with a \ion{He}{2}-selected AGN within the measurement uncertainties (see Figure \ref{fig:he2_diag}). One of the \ion{He}{2}/H$\beta$ AGNs has [\ion{Fe}{10}] emission and 2 have [\ion{Ne}{5}] emission (see sections \ref{sec:fex} and \ref{sec:ne5}), all three of which are also [\ion{N}{2}]/H$\alpha$ AGNs. In appendix \ref{appendix:lines}, we show the observed spectra for a selection of these AGN candidates in Figure \ref{fig:heii_spectra}, and the \ion{He}{2} emission line fits for all 36 \ion{He}{2}/H$\beta$-selected AGNs in Figure \ref{fig:spec_he2}.
Given that the majority (25/36) of the \ion{He}{2}/H$\beta$ AGN candidates are SF in the [\ion{N}{2}]/H$\alpha$ diagram, we further investigate these systems.
First, we visually search for WR features in the spectra \citep{Conti:1991,Schaerer:1999} such as the blue and red bumps that appear around 4650 \AA\ and 5808 \AA. We do not find WR signatures in these galaxies, and thus conclude that either WR stars are not responsible for the observed \ion{He}{2} emission or any potential WR signatures are not detectable in the GAMA spectra.
Next, we investigate whether it is possible to have a \ion{He}{2}/H$\beta$-selected AGN that is also SF in the [\ion{N}{2}]/H$\alpha$ diagram by combining a variety of AGN spectra with SF spectra. We begin by linearly adding emission-line fluxes from a well-known AGN in a dwarf galaxy, NGC 4395 \citep{Filippenko:1989,Filippenko:2003}, to those of $\sim 3000$ [\ion{N}{2}]/H$\alpha$ SF galaxies. We select these objects from our parent sample of low-mass galaxies described in section \ref{sec:sample_selection}, requiring a S/N $>3$ for all the emission lines of interest (H$\beta$, [\ion{O}{3}], [\ion{N}{2}], H$\alpha$); none of these SF objects has detectable \ion{He}{2} emission. Each line of interest is scaled by the ratio of the [\ion{O}{3}]$\lambda5007$ line flux of NGC 4395 to that of each SF galaxy, multiplied by factors of 0.5, 1, and 2 (i.e., a scale factor of 0.5 indicates a lower star formation contribution to the synthesized line ratios). We then linearly add the scaled emission-line fluxes to those of NGC 4395 and plot the resulting emission-line ratios in the \ion{He}{2}/H$\beta$ and [\ion{O}{3}]/H$\beta$ vs.\ [\ion{N}{2}]/H$\alpha$ diagrams, as shown in the first two columns of Figure \ref{fig:spec_he2_sample}. None of the constructed line ratios in this test are simultaneously \ion{He}{2}/H$\beta$ AGNs and [\ion{N}{2}]/H$\alpha$ SF galaxies. However, if the \ion{He}{2} line fluxes were stronger than that of NGC 4395 by at least a factor of 1.2, 1.4, and 2.1 for the scale factors of 0.5, 1, and 2, respectively, there would be galaxies that are both \ion{He}{2}/H$\beta$ AGNs and SF in the [\ion{N}{2}]/H$\alpha$ diagram.
In the next test, we employ the same methodology described above, but instead of using the emission-line fluxes from NGC 4395, we use 10 galaxies in our sample that are both [\ion{N}{2}]/H$\alpha$ and \ion{He}{2}/H$\beta$-selected AGNs. In 4/10 of these case studies, we find objects with emission-line ratios that are simultaneously AGN-like in the \ion{He}{2}/H$\beta$ diagram and SF in the [\ion{N}{2}]/H$\alpha$ diagram. We show an example in the two middle columns of Figure \ref{fig:spec_he2_sample}. The AGN in this Figure (CATAID 1787285) has
an \ion{He}{2}/[\ion{O}{3}] ratio $\sim 10$ times higher than that of NGC 4395. The fraction of objects that are \ion{He}{2}/H$\beta$ AGNs and SF in the [\ion{N}{2}]/H$\alpha$ diagram ranges from 15\% to 50\%, depending on the SF contribution scale factor. However, we note that there is a continuum of objects reaching up into the composite region of the [\ion{N}{2}]/H$\alpha$ diagram, which is not seen in our sample of \ion{He}{2}-selected AGNs (Figure \ref{fig:AGNsamp}). The majority of the \ion{He}{2}-selected AGNs in our sample fall in the SF region of the [\ion{N}{2}]/H$\alpha$ diagram, with only 1 composite object and a handful of AGNs. Motivated by this, we next investigate the impact of using a low-metallicity AGN on the simulated line ratios.
We carry out the final test with a mock low-metallicity AGN, setting log([\ion{N}{2}]/H$\alpha)=-1.3$ and log([\ion{O}{3}]/H$\beta)=1.1$. We use the same value of log \ion{He}{2}/H$\beta = -0.1$ as the GAMA object in Figure \ref{fig:spec_he2_sample}. The simulated AGN+SF line ratios are shown in the last two columns of Figure \ref{fig:spec_he2_sample}.
In this case we again find \ion{He}{2}/H$\beta$ AGNs that are SF in the [\ion{N}{2}]/H$\alpha$ diagram for all the SF contribution scale factors.
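The mixing procedure used in these tests can be sketched as follows; the flux values below are arbitrary illustrative numbers, not measurements of NGC 4395 or of any GAMA object.

```python
import math

LINES = ("hbeta", "oiii", "nii", "halpha", "heii")

def mix(agn, sf, scale):
    """Add SF line fluxes to an AGN's, after normalizing the SF spectrum
    so that its [O III] flux equals scale x the AGN [O III] flux."""
    norm = scale * agn["oiii"] / sf["oiii"]
    return {line: agn[line] + norm * sf.get(line, 0.0) for line in LINES}

def log_ratio(fluxes, a, b):
    return math.log10(fluxes[a] / fluxes[b])

# Purely illustrative flux sets (arbitrary units), NOT real measurements.
agn = {"hbeta": 1.0, "oiii": 12.0, "nii": 0.5, "halpha": 3.0, "heii": 0.8}
sf = {"hbeta": 2.0, "oiii": 4.0, "nii": 0.4, "halpha": 6.0, "heii": 0.0}

# Diagnostic ratios for the three SF-contribution scale factors.
for scale in (0.5, 1.0, 2.0):
    tot = mix(agn, sf, scale)
    nii_ha = log_ratio(tot, "nii", "halpha")
    heii_hb = log_ratio(tot, "heii", "hbeta")
```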
The results given above indicate that factors such as metallicity, star formation versus AGN contribution, and the \ion{He}{2}/[\ion{O}{3}] ratio can impact where objects fall in the diagnostic diagrams. While this exercise has demonstrated that it is certainly possible, and perhaps likely, that the detected \ion{He}{2} emission in this work is driven by AGN activity, follow-up observations would be useful to confirm these \ion{He}{2}-selected AGNs that are SF in the [\ion{N}{2}]/H$\alpha$ diagram.
\subsection{[\texorpdfstring{\ion{Fe}{10}}{TEXT}]\texorpdfstring{$\lambda$}{TEXT}6374 Coronal Line Emission}
\label{sec:fex}
The presence of [\ion{Fe}{10}]$\lambda$6374 emission with high ionization potential \citep[262.1 eV;][]{Oetken:1977} can be indicative of AGN activity in galaxies \citep[e.g.,][]{penston:1984,prieto2002,satyapal:2008,goulding:2009,cerquiera:2021}. Recent studies presented in \citet{Kimbro:2021} and \citet{Molina:2021} also confirmed the existence of
the [\ion{Fe}{10}]$\lambda$6374 line from accreting BHs in dwarf galaxies. We note, however, that this line is usually weak and thus hard to detect.
We search for the [\ion{Fe}{10}]$\lambda$6374 line in our parent sample of galaxies and identify 56 reliable detections, out of which 1 overlaps with the [\ion{N}{2}]/H$\alpha$-selected AGNs and 1 is a composite object. Moreover, the [\ion{N}{2}]/H$\alpha$-selected AGN is also among the \ion{He}{2}/H$\beta$ AGNs.
We show the observed spectra for a selection of these objects in Figure \ref{fig:fex_spectra} and the [\ion{Fe}{10}] and [\ion{O}{1}] doublet emission-line fits for all 56 galaxies in Figure~\ref{fig:oi_fex} in appendix \ref{appendix:lines}.
The luminosity of the [\ion{Fe}{10}] lines in our sample spans a range of $\sim 10^{38}-$10$^{41}$ erg s$^{-1}$, with a median of $10^{39.6}$ erg s$^{-1}$. Given these luminosities, there are two main sources that could explain the observed [\ion{Fe}{10}] emission: AGNs or tidal disruption events (TDEs), in which a massive BH tidally disrupts a star. AGN activity can produce the [\ion{Fe}{10}] line as a result of gas photoionized by the AGN continuum \citep[e.g., ][]{Nussbaumer:1970,Pier:1995,Negus:2021}, or radiative shock waves emitted by radio jets from the AGN \citep[e.g., ][]{wilson:1999,Molina:2021}. A class of TDEs called extreme coronal line emitters (ECLEs) also produces coronal-line emission with L$_{[\rm FeX]}\sim$ 10$^{38-40}$ erg s$^{-1}$ \citep{Komossa:2008,Wang:2011,Wang:2012}.
Other potential origins of [\ion{Fe}{10}] emission are discussed in \citet{Molinafex:2021}, but these all fail to explain the high luminosities observed here. For example, supernovae rarely produce coronal lines and their luminosities are generally orders of magnitude lower than those observed in our sample. Even one of the most extreme examples, SN 2005ip, had a peak [\ion{Fe}{10}]$\lambda$6374 luminosity of just 2$\times 10^{37}$ erg s$^{-1}$ \citep{smith:2009}.
Therefore, we conclude that the observed [\ion{Fe}{10}] emission in our sample of low-mass galaxies is indicative of AGN activity or TDEs, both of which require the presence of a massive BH.
\subsection{[\texorpdfstring{\ion{Ne}{5}}{TEXT}]\texorpdfstring{$\lambda$}{TEXT}3426 Coronal Line Emission}
\label{sec:ne5}
The presence of coronal lines with high ionization energies, such as [\ion{Ne}{5}]$\lambda$3426 ($\sim97$ eV), is generally considered a strong indicator of AGN activity \citep{Schmidt:1998ne5,Gilli:2010}. However, this line has also been found in star-forming galaxies \citep{Izotov:2004}, and it is generally weak and hard to detect.
We search for [\ion{Ne}{5}] emission in our parent sample of low-mass galaxies and identify 5 galaxies with such emission. However, we cut 2 of the objects with marginal [\ion{Ne}{5}] detections and spectra that do not show any other AGN signatures. The remaining 3 [\ion{Ne}{5}]-emitting galaxies are [\ion{N}{2}]/H$\alpha$ AGNs, 2 of which are also \ion{He}{2}/H$\beta$-selected AGN candidates. We show the observed spectra as well as the [\ion{Ne}{5}] emission-line fits for these galaxies in Figure \ref{fig:spec_ne5}. The luminosities of the [\ion{Ne}{5}] lines are in the range of $10^{40.9-41.4}$ erg s$^{-1}$.
\subsection{Broad H\texorpdfstring{$\alpha$}{TEXT} Emission and Black Hole Masses}
\label{sec:broadha}
Dense gas orbiting in the vicinity of a BH can produce broad-line emission, such as broad H$\alpha$, which can be used to estimate the mass of the central BH \citep{Greene:2005}. However, in low-mass galaxies, broad H$\alpha$ emission from stellar processes such as supernovae can mimic that of an AGN. Thus, transient broad H$\alpha$ emission that disappears over time likely indicates a supernova origin, whereas persistent broad H$\alpha$ favors an AGN origin
\citep[e.g.,][]{Baldassare:2016}.
We search for broad H$\alpha$ emission in our parent sample of low-mass galaxies and identify 103 galaxies with such emission. As shown in Figure \ref{fig:broad_bpt}, 47 of these galaxies are in our [\ion{N}{2}]/H$\alpha$-selected AGN and composite sub-sample. Additionally, 7 of these 47 objects show additional AGN-like signatures: 6 are also \ion{He}{2}-selected AGNs, 1 has observed [\ion{Ne}{5}] emission, and 1 is both a \ion{He}{2}-selected AGN and has detectable [\ion{Fe}{10}] emission, while 1 of the 47 galaxies is SF in the \ion{He}{2}/H$\beta$ diagram. There is also one broad-line AGN candidate that is consistent with SF in the [\ion{N}{2}]/H$\alpha$ and \ion{He}{2}/H$\beta$ diagrams.
The remaining galaxies do not overlap with any of the diagnostics we employ in this work.
The broad H$\alpha$ luminosities of the broad-line [\ion{N}{2}]/H$\alpha$-selected AGN candidates range from 10$^{39.7}$ to $10^{42.6}$ erg s$^{-1}$, with a median luminosity of $10^{40.9}$ erg s$^{-1}$. The SF galaxies have broad H$\alpha$ components with lower luminosities that span a range of 10$^{39.1-41.6}$ erg s$^{-1}$, with a median luminosity of $10^{40.3}$ erg s$^{-1}$. Moreover, the widths (FWHMs) of all the broad H$\alpha$ components span a range of $\sim$500--3664 km s$^{-1}$, with a median FWHM of 1490 km s$^{-1}$ for the [\ion{N}{2}]/H$\alpha$ AGNs/Composites and 895 km s$^{-1}$ for the SF galaxies.
The distributions of FWHM and luminosity of the broad H$\alpha$ components are plotted in panels (a) and (b) of Figure \ref{fig:ha_dist}. Given that the luminosities and FWHMs of the broad H$\alpha$ lines in the SF galaxies tend to be significantly lower than those of the [\ion{N}{2}]/H$\alpha$ AGNs/Composites, and that many star-forming galaxies with broad H$\alpha$ are not in fact AGNs \citep{Baldassare:2016}, we consider these objects suspect and do not include them in our final sample of AGNs.
We estimate virial BH masses for the 47 broad-line AGNs/Composites using equation 5 in \citet{Reines:2013} and our measurements of the luminosity and FWHM of the broad H$\alpha$ line. The resulting BH masses vary from $10^{5}$ to $10^{7.7}\,M_\odot$, with a median BH mass of $10^{6.2}\,M_\odot$. We plot the distribution of BH masses in panel (c) of Figure \ref{fig:ha_dist}. A list of luminosities and FWHMs of the broad H$\alpha$ components, and the corresponding BH masses, for the AGNs/Composites are given in Table \ref{tab:bh_masses}. For the sake of completeness, we also estimate BH masses for the SF galaxies with broad H$\alpha$. These are in the range of $10^{4.9-7.3}\,M_\odot$, with a median of $10^{5.8}\,M_\odot$.
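For reference, the H$\alpha$-based virial scaling has the form sketched below. The numerical coefficients are those commonly quoted for equation 5 of \citet{Reines:2013} (built on the \citealt{Greene:2005} relations) and should be treated as assumed here, with the virial-factor choice absorbed into \texttt{log\_eps}.

```python
import math

def virial_logmbh(l_halpha_broad, fwhm_kms, log_eps=0.0):
    """Virial BH mass estimate from the broad-Halpha luminosity [erg/s]
    and FWHM [km/s].  Coefficients are the commonly quoted values for
    the Halpha-based scaling of Reines et al. (2013); log_eps absorbs
    the choice of virial factor (assumed, not verified here)."""
    return (log_eps + 6.57
            + 0.47 * math.log10(l_halpha_broad / 1e42)
            + 2.06 * math.log10(fwhm_kms / 1e3))
```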
The BH masses for the rest of the AGN candidates in this work are unknown. However, if we assume the BH mass-to-total stellar mass relation for AGNs derived in \citet{Reines:2015}, the BH masses for all of the AGN candidates span a range of 10$^{4.3}-$10$^{6.4}$, with a median BH mass of 10$^{6.2}$ M$_\odot$.
\input{tables/bh_masses}
\section{Sample Properties}
\label{sec:host_properties}
\subsection{Newly-Identified AGNs and Active Fractions}
\label{sec:syn_res}
In this work, we identify 388 unique AGN candidates from our parent sample of low-mass galaxies by utilizing two narrow-line diagnostic diagrams (sections \ref{sec:nii} and \ref{sec:he2}) as well as searching for [\ion{Fe}{10}]$\lambda$6374 and [\ion{Ne}{5}]$\lambda3426$ coronal-line emission (sections \ref{sec:fex} and \ref{sec:ne5}).
We do not find any matches between our parent sample of galaxies and the AGNs reported in \citet{Greene:2007}, \citet{Reines:2013}, \citet{Moran:2014}, \citet{Chilingarian:2018}, and \citet{Molinafex:2021}. In fact, only 2164/23460 galaxies in our parent sample have been observed by other surveys, out of which only 301 have SDSS spectra. Thus, we conclude that this work presents an entirely new sample of AGNs in low-mass galaxies.
Overall we find an active fraction among our parent sample of low-mass emission-line galaxies of $388/23460 \approx 1.7\%$. Accounting for all of the low-mass galaxies in GAMA (including those that were cut from our parent sample due to weak/no lines, see \S\ref{sec:sample_selection}), the active fraction drops to $388/52782 \approx 0.7\%$. The majority of the active galaxies were found as AGNs/composites in the [\ion{N}{2}]/H$\alpha$ diagnostic diagram. These alone give an active fraction of $\sim$ 1.3\% among our parent sample of low-mass emission-line galaxies. The active fractions based on the \ion{He}{2}/H$\beta$ ratio and on [\ion{Fe}{10}] emission are each $\sim$ 0.2\%, and the fraction of detectable [\ion{Ne}{5}]-emitting galaxies is just $\sim$ 0.01\%.
While accurate comparisons to other spectroscopic searches for AGNs in the low-mass regime are complicated by various selection criteria and the differing survey characteristics, the values we find are in approximate agreement with prior work \citep{Reines:2013,Moran:2014,Sartori:2015,Molinafex:2021,Polimera:2022}.
\subsection{Host Galaxies}
\label{sec:host_galaxies}
The host galaxies of the AGNs in our sample have an upper mass limit of $10^{10}$ M$_\odot$ by design, and the lowest-mass galaxies with AGNs in our sample
have stellar masses of log$(M_*/M_\odot) \sim 8$ (see Figure \ref{fig:distributions}). A summary of the host galaxy properties for each sub-sample can be found in Table \ref{tab:other_bh_mass} (also see Figure \ref{fig:distributions} and Table \ref{tab:gal_prop} for individual values). Consistent with previous studies \citep[e.g.,][]{Reines:2013}, the [\ion{N}{2}]/H$\alpha$-selected AGNs are predominantly among the higher-mass galaxies, although the minimum galaxy mass in this sub-sample has log$(M_*/M_\odot) \sim 8.3$. The \ion{He}{2}-selected AGNs show a similar trend.
In contrast, the [\ion{Fe}{10}]-emitting galaxies are more evenly spread out in terms of their stellar mass and tend to be more reflective of the parent sample of low-mass galaxies. The rare [\ion{Ne}{5}]-emitting galaxies are exclusively found among higher mass objects with luminous AGNs.
The median total absolute $g$-band magnitude of our sample is
$\langle M_g \rangle=-19.5$ mag.
This is very similar to that of the \citet{Greene:2007} sample of broad-line AGNs with BH masses $M_{\rm BH} \lesssim 2 \times 10^6~M_\odot$.
Our median $g$-band magnitude is also $\sim$1 mag more luminous than what \citet{Reines:2013} found for their AGN sample and that of the LMC \citep[M$_g^{LMC} \sim -18.2$ mag; ][]{Tollerud:2011}. This is not surprising given that our upper mass limit is more than 3 times larger than that in the \citet{Reines:2013} sample.
A color-mass diagram is
shown in panel (e) of Figure \ref{fig:distributions}. The host galaxies of the [\ion{N}{2}]/H$\alpha$-selected AGN candidates tend to be redder and relatively massive overall, consistent with the findings in \citet{Reines:2013}. The bias towards redder galaxies selected using the [\ion{N}{2}]/H$\alpha$ diagnostic may be a selection effect, since this diagnostic is metallicity sensitive and struggles with detecting AGNs in low-metallicity star-forming galaxies \citep[e.g., ][]{Groves:2006,Reines:2013,Kewley:2019}. On the other hand, the galaxies among the \ion{He}{2}/H$\beta$ and [\ion{Fe}{10}] sub-samples
extend to less massive, bluer, and thus more star-forming galaxies. \citet{Molinafex:2021} found a similar trend for [\ion{Fe}{10}]-emitting dwarf galaxies in the SDSS. The [\ion{Ne}{5}]-emitting sub-sample comprises galaxies that are among the more massive and bluer in our parent sample, characteristic of quasars with strong UV emission from the accretion disk. Given that these objects are powered by relatively low-mass BHs, they may be akin to ``miniquasars" that have been proposed as potential contributors to cosmic reionization \citep{Haiman:1998,Madau:2004}.
The redshift distributions of the active galaxies in this work, along with that of our parent sample of low-mass galaxies, are shown in panel (b) of Figure \ref{fig:distributions}. The maximum redshift of $z=0.3$ comes from our requirement of detecting and modeling the narrow-line profile using the [\ion{S}{2}]$\lambda\lambda$6716,6731 doublet (\S\ref{sec:lines}).
Overall, the median redshift of the active galaxies is $z = 0.13$. The [\ion{Ne}{5}] line is only within the observable wavelength range for $z\geq0.15$ and therefore the three [\ion{Ne}{5}]-AGNs are at higher redshifts than the other sub-samples (see Table \ref{tab:other_bh_mass}). The [\ion{Fe}{10}]-selected AGNs/TDEs are at slightly lower redshifts compared to the [\ion{N}{2}]/H$\alpha$ and \ion{He}{2}-selected objects, likely owing to the weakness of the [\ion{Fe}{10}] line.
\input{tables/prop}
Our sample of active galaxies extends to higher redshifts than previous samples in the low-mass regime based on SDSS spectroscopy. For example, the \citet{Reines:2013} dwarf galaxies all have $z\lesssim0.055$, the \citet{Moran:2014} sample has $z\lesssim0.018$, \citet{Sartori:2015} finds a median redshift of ${z}\sim0.03$, and the [\ion{Fe}{10}]-selected objects in \citet{Molinafex:2021} have a median redshift of $z\sim 0.03$.
The closest comparisons are to that of the Type 1 AGN sample of \citet{Greene:2007}, which has a median redshift of 0.08, and the Type 2 AGN counterparts in \cite{Barth:2008} that have $z\lesssim0.08$.
The higher redshifts probed by our study are likely due to the fact that the GAMA spectroscopic limiting magnitude is $\sim 2$ magnitudes deeper than that of the SDSS.
\subsection{The Dwarf Galaxy Sample}
\label{sec:dwarfs}
Searches for AGNs in the low-mass regime often use different criteria for selecting their samples. In some cases, low BH masses are used \citep[e.g.,][]{Greene:2007,Chilingarian:2018} and in others, absolute magnitude \citep{Barth:2008} or stellar mass limits \citep[e.g.,][]{Reines:2013} are used. As described above, our main sample of low-mass active galaxies has an upper stellar mass limit of $10^{10} M_\odot$, which has also been used by \citet{Moran:2014} and \citet{Baldassare:2018}. Here we focus on AGNs in the dwarf galaxy mass range, which is usually taken to be $M_\star \leq 3\times10^9 M_\odot$ \citep{Reines:2013}.
As discussed in Section \ref{sec:data}, our parent sample of low-mass emission-line galaxies with $M_\star \leq 10^{10} M_\odot$ consists of 23,460 objects, of which 9,094 are dwarf galaxies with $M_\star \leq 3\times10^9 M_\odot$. In total, we identify 70 unique dwarf galaxies hosting AGNs based on our diagnostics described in Section \ref{sec:results}. We find 9 AGNs and 25 composites using the [\ion{N}{2}]/H$\alpha$ diagram. Two of the dwarf composites also have broad H$\alpha$ emission and virial BH masses of $\sim 10^5 M_\odot$ and $\sim 7 \times 10^6 M_\odot$.
There are 13 dwarf galaxies with detectable \ion{He}{2} emission, 9 of which are AGN candidates with high \ion{He}{2}/H$\beta$ ratios.
We find that 27 of the [\ion{Fe}{10}]-emitting galaxies are dwarf galaxies, while none of the [\ion{Ne}{5}]-emitting galaxies are in this mass range. We show $grz-$band images of most of the dwarf galaxies in our sample in Figure \ref{fig:dwarf_images}, which we obtained from the DESI Legacy Imaging Survey SkyViewer \citep{decals}.
\section{Summary and Conclusions}
\label{sec:discussion_summary}
In this work, we have systematically searched for optical signatures of active massive BHs in $\sim$23,000 galaxies with stellar masses $M_\star\leq10^{10} M_\odot$ and redshifts $z\leq 0.3$ by analyzing spectroscopic data from GAMA DR4.
We employed four optical emission-line diagnostics and identified 388 unique active galaxies, 70 of which are in the dwarf galaxy regime with $10^8 \lesssim M_\star/M_\odot \lesssim 10^{9.5}$. Our main results are summarized in Figures \ref{fig:AGNsamp} and \ref{fig:distributions}.
We used the ratio of [\ion{O}{3}]/H$\beta$ vs.\ [\ion{N}{2}]/H$\alpha$ as our first diagnostic. This diagnostic diagram has previously been used to identify AGNs in low-mass/dwarf galaxies \citep[e.g., ][]{Reines:2013,Moran:2014,Sartori:2015}, and follow-up observations with X-rays have confirmed the existence of massive BHs in some of these sources independently \citep[e.g., ][]{Baldassare:2017}.
Moreover, the clean separation among the AGNs, the composite galaxies, and those consistent with star formation makes these AGN candidates easily distinguishable. For these reasons, we consider the 71 AGNs identified by this diagnostic as secure and the 238 composite galaxies as strong AGN candidates. While this diagnostic provides a relatively clean sample, it can miss weakly accreting BHs and/or those residing in actively star-forming galaxies (particularly those with low metallicities).
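As an aside, the demarcation curves underlying this diagnostic are simple analytic functions, so the three-way classification can be sketched in a few lines. The snippet below uses the standard Kauffmann et al. (2003) and Kewley et al. (2001) curves as an illustration; the exact boundaries adopted in this work may differ.

```python
def classify_bpt(log_nii_ha, log_oiii_hb):
    """Classify a galaxy on the [O III]/Hbeta vs. [N II]/Halpha (BPT) diagram.

    Uses the Kewley et al. (2001) maximum-starburst line and the
    Kauffmann et al. (2003) empirical star-forming boundary (illustrative).
    """
    x, y = log_nii_ha, log_oiii_hb
    # Right of the Kewley asymptote, or above the maximum-starburst curve: AGN.
    if x >= 0.47 or y > 0.61 / (x - 0.47) + 1.19:
        return "AGN"
    # Between the Kauffmann and Kewley curves: composite.
    if x >= 0.05 or y > 0.61 / (x - 0.05) + 1.30:
        return "Composite"
    return "Star-forming"
```

The short-circuit checks also keep the hyperbolic formulas from being evaluated at their vertical asymptotes ($x = 0.05$ and $x = 0.47$).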
Next, we searched for low-mass galaxies with relatively high \ion{He}{2}/H$\beta$ ratios.
We employed a stricter AGN selection criterion than previous works \citep[e.g., ][]{Shirazi:2012,Sartori:2015}, namely log(\ion{He}{2}/H$\beta) > -1$, with the goal of providing a clean sample. This ratio is expected to be higher than what can be produced by stellar-mass X-ray binaries and Wolf-Rayet stars
\citep{Schaerer:2019}. We find 36 galaxies that meet this criterion. Of these, 10 are also [\ion{N}{2}]/H$\alpha$ AGNs and 1 is a composite. Given that the majority of the AGN candidates identified by this diagnostic are star-forming in the [\ion{N}{2}]/H$\alpha$ diagram, further observations are needed to confirm our results independently.
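The \ion{He}{2} criterion itself reduces to a one-line cut on measured line fluxes; a minimal sketch, with hypothetical flux values standing in for a real catalog:

```python
import numpy as np

# Line fluxes in arbitrary (common) units for three hypothetical galaxies.
heii_4686 = np.array([0.8, 0.05, 0.30])
hbeta = np.array([5.0, 5.0, 5.0])

log_ratio = np.log10(heii_4686 / hbeta)
# The strict cut used in this work: log(He II / Hbeta) > -1.
is_heii_agn = log_ratio > -1.0
```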
We also systematically searched for two high-ionization coronal lines ([\ion{Fe}{10}]$\lambda6374$ and [\ion{Ne}{5}]$\lambda3426$) in the spectra of our parent sample of low-mass galaxies. The [\ion{Fe}{10}]$\lambda6374$ coronal line is detectable in 56 galaxies, only 2 of which have additional AGN indicators. As discussed in detail in \citet{Molinafex:2021}, [\ion{Fe}{10}]$\lambda6374$ can be produced by certain types of supernovae. However, one of the most extreme known examples, the supernova SN 2005ip, had a peak luminosity of $2 \times 10^{37}$ erg s$^{-1}$ \citep{smith:2009}, an order of magnitude less than the minimum [\ion{Fe}{10}] luminosity of $10^{38}$ erg s$^{-1}$ in our sample. Thus, we are optimistic that these [\ion{Fe}{10}] lines are produced by AGN activity or extreme coronal-line emitting TDEs, both of which require massive BHs. We found three galaxies with strong [\ion{Ne}{5}] emission that are also [\ion{N}{2}]/H$\alpha$ AGNs. Two of these objects were also selected as AGNs using our \ion{He}{2}/H$\beta$ criterion.
In total we have found 388 unique low-mass galaxies exhibiting narrow-line signatures of active massive BHs, 47 of which have detectable broad H$\alpha$ emission in their spectra. Using standard virial techniques, we estimated BH masses for these objects and find a range of $M_{\rm BH} \sim 10^{5.0-7.7} M_\odot$. The median BH mass is $10^{6.2} M_\odot$, consistent with expectations given the host galaxy stellar masses \citep{Reines:2015}. We found an additional 56 star-forming galaxies with broad H$\alpha$ emission in their spectra, with no narrow-line signatures indicating the presence of AGNs. Given that broad H$\alpha$ in many star-forming dwarf galaxies can be produced by transient stellar processes such as supernovae \citep{Baldassare:2016},
we are suspicious of the broad-line objects without narrow-line signatures of AGNs and do not include them in our final sample of low-mass active galaxies.
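A single-epoch virial estimate of this kind can be sketched as follows; the coefficients correspond to a commonly used broad-H$\alpha$ calibration in the style of Reines et al. (2013) and are illustrative rather than the exact relation adopted here:

```python
import math

def virial_bh_mass(l_halpha_erg_s, fwhm_km_s):
    """Single-epoch virial BH mass (in solar masses) from broad Halpha.

    Illustrative coefficients in the style of Reines et al. (2013);
    such calibrations carry ~0.5 dex of systematic uncertainty.
    """
    return 10 ** (6.57
                  + 0.47 * math.log10(l_halpha_erg_s / 1e42)
                  + 2.06 * math.log10(fwhm_km_s / 1e3))
```

For example, a broad H$\alpha$ luminosity of $10^{41}$ erg s$^{-1}$ with FWHM $= 1000$ km s$^{-1}$ yields $\log(M_{\rm BH}/M_\odot) \simeq 6.1$, in the middle of the mass range quoted above.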
As seen in previous works \citep{Reines:2013,Molinafex:2021}, the various emission-line AGN diagnostics that we have used tend to probe different parts of the parameter space spanned by our parent sample of low-mass/dwarf galaxies (see Figure \ref{fig:distributions}). For example, the [\ion{N}{2}]/H$\alpha$ AGNs/Composites are biased towards redder and more massive galaxies within our parent sample, and the [\ion{Fe}{10}]-selected AGNs tend to be in bluer star-forming galaxies with a color and mass distribution more representative of our parent sample. Thus, using a multi-diagnostic approach can provide a more complete census of AGNs in low-mass/dwarf galaxies. While we have strived to strike a balance between assembling a clean yet comprehensive sample of low-mass/dwarf active galaxies in GAMA, large-scale follow-up campaigns would be useful to check the robustness of the AGN diagnostics we (and others) have applied in the low-mass regime.
Ultimately this work has provided an entirely new sample of hundreds of low-mass/dwarf active galaxies, which extends to southern sky regions and higher redshifts than previous searches in the low-mass regime. We find an AGN fraction of $\sim 1\%$, which is similar to other spectroscopic searches in this mass range. This active fraction provides a lower limit on the BH occupation fraction in low-mass galaxies with implications for the origin of the first BH seeds.
\acknowledgements
We thank the anonymous reviewer for their helpful comments and suggestions that improved this work. AER acknowledges support for this work provided by Montana State University and NASA through EPSCoR grant number 80NSSC20M0231.
MM is supported by funding from Ford Foundation Postdoctoral Fellowship, administered by the National Academies of Sciences, Engineering, and Medicine, awarded to MM in 2021-2022.
GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is http://www.gama-survey.org/. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 179.A-2004. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 177.A-3016.
The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID 2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID 2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID 2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF’s NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation.
NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. 
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l’Espai (IEEC/CSIC), the Institut de Fisica d’Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, NSF’s NOIRLab, the University of Nottingham, the Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A\&M University.
BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program “The Emergence of Cosmological Structures” Grant No. XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant No. 114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant No. 11433005).
The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration.
The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO.
\software{
Astropy \citep{astropy2013,astropy2018},
Matplotlib \citep{matplotlib},
LMFIT \citep{lmfit}}
\clearpage
\appendix
\section{Observed Spectra and Emission-Line Fits for the \ion{He}{2} AGNs and [\ion{Fe}{10}] AGNs/TDEs}
\label{appendix:lines}
\clearpage
\bibliographystyle{aasjournal}
\bibliography{papers}
|
Title:
Exploiting the Einstein Telescope to solve the Hubble tension |
Abstract: We probe four cosmological models which, potentially, can solve the Hubble
tension according to the dark energy equation of state. In this context, we
demonstrate that the Einstein Telescope is capable of achieving a relative
accuracy below $1\%$ on the Hubble constant independently of the specific dark
energy model. We firstly build mock catalogs containing gravitational wave
events for one, five and ten years of observations, and above Signal-to-Noise
Ratio equal to nine. From these catalogs, we extract the events which are most
likely associated with possible electromagnetic counterpart detected by
THESEUS. Finally, we select four dark energy models, namely a non-flat
$\omega$CDM, an interacting dark energy, an emergent dark energy, and a time
varying gravitational constant model, to forecast the precision down to which
the Einstein Telescope can bound the corresponding cosmological parameters. We
foresee that the Hubble constant is always constrained with less than $1\%$
uncertainty, thereby offering a potential solution to the Hubble tension. The
accuracy on the other cosmological parameters is at most comparable with the
one currently obtained using multiple probes, except for the emergent dark
energy model for which the Einstein Telescope alone will be able to improve the
current limits by more than one order of magnitude.
| https://export.arxiv.org/pdf/2208.13999 |
\title{%
Exploiting the Einstein Telescope to solve the Hubble tension}
\author{Matteo Califano}
\email{matteo.califano@unina.it}
\affiliation{Scuola Superiore Meridionale, Largo San Marcellino 10, I-80138, Naples, Italy}
\affiliation{INFN Sezione di Napoli, Compl. Univ. di
Monte S. Angelo, Edificio G, Via Cinthia, I-80126, Napoli, Italy}
\author{Ivan de Martino}
\email{ivan.demartino@usal.es}
\affiliation{Universidad de Salamanca, Departamento de Fisica Fundamental, P. de la Merced S/N, Salamanca, ES}
\author{Daniele Vernieri}
\email{daniele.vernieri@unina.it}
\affiliation{Dipartimento di Fisica, Universit\`a di Napoli ``Federico II'', Compl. Univ. di Monte S. Angelo, Edificio G, Via Cinthia, I-80126, Napoli, Italy}
\affiliation{Scuola Superiore Meridionale, Largo San Marcellino 10, I-80138, Naples, Italy}
\affiliation{INFN Sezione di Napoli, Compl. Univ. di
Monte S. Angelo, Edificio G, Via Cinthia, I-80126, Napoli, Italy}
\author{Salvatore Capozziello}
\email{capozziello@unina.it}
\affiliation{Scuola Superiore Meridionale, Largo San Marcellino 10, I-80138, Naples, Italy}
\affiliation{Dipartimento di Fisica, Universit\`a
di Napoli ``Federico II'', Compl. Univ. di
Monte S. Angelo, Edificio G, Via Cinthia, I-80126, Napoli, Italy}
\affiliation{INFN Sezione di Napoli, Compl. Univ. di
Monte S. Angelo, Edificio G, Via Cinthia, I-80126, Napoli, Italy}
\date{\today}
\preprint{ET-0188A-22}
\section{Introduction}\label{sec:intro}
The detection of Gravitational Waves (GWs) from the coalescence of merging Binary Black Holes (BBH) and Binary Neutron Stars (BNS)\til\cite{Abbott2016,GW170817} opened a new window to test General Relativity, relativistic astrophysics, and cosmology\til\cite{GW170817,Abbott2016b,Ezquiaga2017}. As is well known, GWs carry direct information on the luminosity distance of their sources and, therefore, can be used as rulers to measure distances in the Universe. Indeed, they are usually called {\em standard sirens}\til\cite{Schutz1986,Holz2005}, and are fully complementary to {\em standard candles}, such as Cepheids and Supernovae Type Ia (SNeIa) among others, which are instead based on the detection of electromagnetic emission and need to be calibrated on closer sources in order to yield a measure of the luminosity distance.
Although GWs offer an alternative method to obtain distances in cosmology and are not affected by calibration problems, they are not free of issues. Indeed, the GW waveform encodes both information on the system, such as the masses, spins, and inclination angle among others, and information on the cosmology, such as the distance. Furthermore, it encodes information on a given theory of gravity and, potentially, can probe
it\til\cite{Bogdanos:2009tn,Capozziello:2019klx,Capozziello:2021bki,Oikonomou:2022xoq,Odintsov:2022cbm}.
However, information on the masses and the redshift is completely degenerate, and the only way to break such a degeneracy is to have prior information on the redshift from an electromagnetic counterpart.
There are several ways to get accurate information on the redshift. For instance, one can assign to the GW source the redshift of the host galaxy\til\cite{Schutz1986,Holz2005,Chen2018} or, alternatively, look at the electromagnetic emission following the GW event, such as a
short Gamma-Ray Burst (GRB)\til\cite{Capozziello2011,GW170817} or a kilonova\til\cite{GW170817}. However, the host galaxy can be accurately identified only at redshifts below one\til\cite{Gray2020}, and kilonovae will be detected only up to redshift $z\sim 1$\til\cite{Chase2022}. On the contrary, short GRBs may be detected using forthcoming satellites, such as the Transient High Energy Sources and Early Universe Surveyor (THESEUS), up to redshift $z\sim 8$\til\cite{THESEUS:2017wvz,Amati2021,Stratta2022}. Such high-redshift detections also allow one to deeply test the cosmological evolution\til\cite{Rosati2021,Tanvir2021,Dainotti2021,Dainotti2022}. Furthermore, a complementary avenue to obtain the redshift information is represented by the observation of tidal deformations in BNS mergers\til\cite{messenger:Read,Chatterjee2021}. Indeed, they may supply a redshift estimate with an accuracy ranging from $8\%$ to $40\%$ depending on the choice of the Equation of State (EoS).
Nowadays, the LIGO/Virgo/KAGRA collaboration has explored several ways to constrain cosmological models from GW events. A turning point was the event GW170817, {\em i.e.} the first merger of a BNS with the simultaneous detection of the GRB 170817A\til\cite{GW170817}, yielding the first estimate of the Hubble constant with GWs, $H_0= 70_{-8}^{+12}$ km s$^{-1}$ Mpc$^{-1}$ at 68\% confidence level\til\cite{LIGO_H0_2017}. Afterwards, the LIGO/Virgo/KAGRA collaboration explored the possibility of constraining $H_0$ by analyzing the population distribution of BBH mergers and searching for host galaxy identifications\til\cite{LVK_H0_2021}. However, these new measurements of the Hubble constant are in agreement with both the late-time and the early-time measurements of $H_0$ and, therefore, do not help to solve the so-called Hubble tension\til\cite{DiValentino2021,Abdalla2022}, which could point out whether some ``new physics'' exists\til\cite{Capozziello:2020nyq,Spallicci:2021kye} or whether some self-consistent method to fit the cosmic history at any redshift is lacking\til\cite{Benetti:2019gmo}.
Nevertheless, the next generation of GW detectors, {\em e.g.} the Einstein Telescope (ET), can strongly improve the accuracy on the Hubble constant, reducing it below 1\%\til\cite{Maggiore2020}, and promises to offer a solution to such a tension by pointing out its correct value. Therefore, there is an important need to study also the theoretical framework related to the Hubble tension. Let us recall that the Hubble tension is a $4.2\sigma$ discrepancy between the measurements of $H_0$ obtained by fitting the CMB power spectra\til\cite{Planck2020} and using {\em standard candles} such as Cepheids\til\cite{Riess2019}, both in the framework of the {\em concordance} cosmological model, also known as the $\Lambda$ Cold Dark Matter ($\Lambda$CDM) model.
Since the nature of Dark Energy (DE) is still a puzzle, there are many attempts to explain the $H_0$ tension by modifying the DE EoS\til\cite{Zhang2019,Belgacem:2019tbw,DiValentino2021,Abdalla2022,Jin2022a,Jin2022b,Yu2021} or the underlying theory of gravity\til\cite{Belgacem2019,Belgacem2019b,Abdalla2022,Ferreira2022}. Here, we will focus on a set of models which modify the DE EoS in view of solving the Hubble tension\til\cite{DiValentino2021,Abdalla2022}. For each model we will predict the accuracy down to which ET will be able to detect departures from $\Lambda$CDM, thereby offering a theoretical framework of DE capable of solving the Hubble tension. In Sect.\til\ref{sec:models}, we will briefly introduce the DE models. In Sect.\til\ref{sec:mockdata}, we will summarize the procedure adopted to build mock data that mimic the ET observations of the luminosity distance. In Sect.\til\ref{sec:stats}, we will give details of our statistical analysis, while in Sect.\til\ref{sec:results} we will show our results. Finally, in Sect.\til\ref{sec:conclusion} we will give our final discussion and conclusions.
\section{Dark Energy models}\label{sec:models}
We focus on four DE models which may help in solving the Hubble tension\til\cite{DiValentino2021,Abdalla2022,Dagostino2019,Capozziello:2021xjw}, and differ from each other in the way they affect the DE EoS leading to a modification of the luminosity distance.
In the context of General Relativity, and of the Friedman-Lema\^{i}tre-Robertson-Walker (FLRW) cosmology, the luminosity distance is defined as
\begin{equation}\label{luminosty_distance}
d_{L}(z) =
\begin{cases}
\frac{c(1+z)}{H_0}\frac{1}{\sqrt{\Omega_k}}\sinh{\left[ \sqrt{\Omega_k} \int_{0}^{z}\frac{dz'}{E(z')}\right]}\quad &\mbox{for}\ \Omega_k >0,\\
\frac{c(1+z)}{H_0} \int_{0}^{z}\frac{dz'}{E(z')}\qquad &\mbox{for}\ \Omega_k =0,\\
\frac{c(1+z)}{H_0}\frac{1}{\sqrt{|\Omega_k|}}\sin{\left[ \sqrt{|\Omega_k|} \int_{0}^{z}\frac{dz'}{E(z')}\right]}\quad &\mbox{for}\ \Omega_k <0,
\end{cases}
\end{equation}
where $z$ is the redshift, $c$ is the speed of light, $H_0$ is the Hubble constant, $\Omega_k \equiv \Omega_{k,0}$ is the value of the curvature parameter at $z=0$, and
\begin{equation}\label{E_z}
E^{2}(z)=\Omega_{m,0}(1+z)^{3} + \Omega_{k,0}(1+z)^{2} + \Omega_{DE}(z)\,,
\end{equation}
where $\Omega_{m,0}$ is the value of the matter density parameter at $z=0$, and $\Omega_{DE}(z)$ is the DE density parameter as a function of redshift. In the $\Lambda$CDM model, $\Omega_{DE}(z)=\Omega_{\Lambda,0}$.
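Eq.\til\eqref{luminosty_distance}, together with the $\Lambda$CDM form of $E(z)$ in Eq.\til\eqref{E_z}, can be evaluated numerically; a minimal sketch (function and parameter names are ours):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def luminosity_distance(z, h0=70.0, om=0.3, ok=0.0, ode=0.7):
    """d_L in Mpc, covering the three curvature cases of Eq. (1)."""
    def E(zp):  # LCDM E(z), Eq. (2) with Omega_DE(z) = Omega_Lambda,0
        return np.sqrt(om * (1 + zp) ** 3 + ok * (1 + zp) ** 2 + ode)

    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    dh = C_KM_S / h0  # Hubble distance [Mpc]
    if ok > 0:        # open universe: sinh
        comoving = dh / np.sqrt(ok) * np.sinh(np.sqrt(ok) * integral)
    elif ok < 0:      # closed universe: sin
        comoving = dh / np.sqrt(abs(ok)) * np.sin(np.sqrt(abs(ok)) * integral)
    else:             # flat universe
        comoving = dh * integral
    return (1 + z) * comoving
```

At low redshift this reduces to the Hubble law, $d_L \simeq cz/H_0$, which provides a quick sanity check.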
Since Eq.\til\eqref{E_z} is strictly related to the Friedmann equations and to the DE EoS, changing the DE model leads to different expressions of the function $E(z)$\til\cite{Capozziello:2019cav}. To study the properties of the DE component in a general framework, one can consider the ratio of the DE pressure to its energy density as a function of redshift\til\cite{Chevallier2001,Linder2003}:
\begin{equation}
\omega_{DE} (z) = \frac{p_{DE}(z)}{\rho_{DE}(z)}\,.
\end{equation}
Again, the $\Lambda$CDM model is recovered for $\omega_{DE} (z)=-1$.
In the next subsections, we will discuss four DE models whose modifications of the Eq.\til\eqref{E_z} may serve to solve the Hubble tension. Moreover, we will also discuss the limits in which such models recover the $\Lambda$CDM cosmology.
\subsection{Non-flat $\omega$CDM}\label{subsec:omCDM}
We focus on the simplest extension of the $\Lambda$CDM model in which $\omega_{DE} (z)$ is a constant, but it can assume values different from $\omega_{DE} = -1$, {\em i.e.} the cosmological constant. Hence, the modification to Eq.\til\eqref{E_z} appears as\til\cite{Copeland2006}
\begin{equation}\label{eq:Ez}
E^2 (z) = \Omega_{m,0}(1+z)^{3} + \Omega_{k,0}(1+z)^{2} + \Omega_{\Lambda,0}(1+z)^{3(1+\omega_{DE})}\,.
\end{equation}
In\til\cite{Gao:2021}, it was shown, using CMB + BAO + SN + $H_0$ observations, that the aforementioned model may solve the Hubble tension at 95\% Confidence Level (CL). The best-fit values are: $H_0 = 69.88_{-0.76}^{+0.77}$ km s$^{-1}$ Mpc$^{-1}$ and $\omega_{DE} = -1.08\pm 0.03$.
In\til\cite{Belgacem:2019tbw}, generating a mock dataset of events jointly detected by ET and THESEUS, the authors obtained the following accuracy on the cosmological parameter $\omega_{DE}$: $\sigma_{\omega_{DE}} = 0.3$.
Since we are considering a non-flat $\omega$CDM model, we can recast $\Omega_{m,0}$ as $1- \Omega_{k,0}-\Omega_{\Lambda,0}$, and when $\omega_{DE}$ assumes values different from $-1$ the model departs from the standard cosmological constant. For instance, the case $\omega_{DE} > -1$ is usually referred to as “quintessence”\til\cite{Copeland2006}, while the case with $\omega_{DE} < -1$ as “phantom”\til\cite{Bamba2012}.
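The non-flat $\omega$CDM expansion rate can be coded directly; the sketch below uses the closure $\Omega_{m,0} = 1 - \Omega_{k,0} - \Omega_{\Lambda,0}$ discussed above and the standard DE scaling $(1+z)^{3(1+\omega_{DE})}$:

```python
def E2_wcdm(z, om=None, ok=0.0, ol=0.7, w=-1.0):
    """E^2(z) for non-flat wCDM; DE scales as (1+z)^{3(1+w)}.

    If om is not given, it is fixed by Omega_m,0 = 1 - Omega_k,0 - Omega_L,0.
    """
    if om is None:
        om = 1.0 - ok - ol
    zp1 = 1.0 + z
    return om * zp1 ** 3 + ok * zp1 ** 2 + ol * zp1 ** (3.0 * (1.0 + w))
```

For $\omega_{DE} = -1$ and $\Omega_{k,0} = 0$ this reduces to the flat $\Lambda$CDM expression, $E^2(z) = \Omega_{m,0}(1+z)^3 + \Omega_{\Lambda,0}$.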
\subsection{Interacting Dark Energy}\label{subsec:Interacting Dark Energy}
Another scenario capable of solving the Hubble tension considers that Dark Matter
(DM) and DE interact not only gravitationally. This is the so-called {\em Interacting Dark Energy} (IDE) model.
Following\til\cite{Valiviita2008,Gavela2009}, one can parameterize the interaction between DM and DE as follows
\begin{align}
\nabla_{\mu} T^{(DM)\mu}_{\phantom{(DM)\mu}\nu} &= Q u^{(DM)}_{\nu}/a\,,\\
\nabla_{\mu} T^{(DE)\mu}_{\phantom{(DE)\mu} \nu} &=- Q u^{(DM)}_{\nu}/a\,,
\end{align}
where $T^{(DM)\mu}_{\phantom{(DM)\mu}\nu}$ and $T^{(DE)\mu}_{\phantom{(DE)\mu} \nu}$ are the energy-momentum tensors for DM and DE, respectively. The coefficient $Q$ encodes the coupling between the two dark components. See also\til\cite{Piedipalumbo:2019snr}. Although different functional forms of $Q$ have been explored\til\cite{Gavela2009,Wang2016,Yang2019}, we select the coupling $Q= \xi H(z) \rho_{DE}$, because a generic interaction might suffer from several instabilities, while this choice avoids them under suitable conditions on $\xi$ and $\omega_{DE}$\til\cite{Gavela2009,Wang2016}. Hence, the DM and DE backgrounds evolve with cosmic time as\til\cite{Gavela2009}
\begin{align}
\label{IDE_DM}
\dot{\rho}_{DM} +3H(z)\rho_{DM} &= \xi H(z)\rho_{DE},\\
\label{IDE_DE}
\dot{\rho}_{DE} +3H(z)\rho_{DE}(1 + \omega_{DE}) &= -\xi H(z)\rho_{DE}.
\end{align}
Since the DM density must be positive along the cosmic evolution, if $\omega_{DE} < 0$ and $\xi > 0$, we need to impose the following condition: $\xi < - \omega_{DE}$\til\cite{Gavela2009}.
Solving Eqs.\til\eqref{IDE_DM} and \eqref{IDE_DE}, one can rewrite Eq.\til\eqref{E_z} for the case of an IDE model as
\begin{equation}\label{eq:Ez:interacting}
\begin{aligned}
E^2 (z) = \Omega_{m,0}(1+z)^{3} + \Omega_{\Lambda,0}\left[(1+z)^{3 \left(1+\omega_{DE}^{eff}\right)}\ \right.\\
\left.+ \frac{\xi}{3\omega_{DE}^{eff}}\left(1-(1+z)^{3 \omega_{DE}^{eff}}\right)(1+z)^{3}\right]\,,
\end{aligned}
\end{equation}
where $\omega_{DE}^{eff}= \omega_{DE} +\frac{\xi}{3}$. In order to avoid the early-time instability, the quantities $(1 + \omega_{DE})$ and $\xi$ must have opposite signs\til\cite{Gavela2009}. It is worth noticing that $\Lambda$CDM is recovered by setting $\omega_{DE} = -1$ and $\xi = 0$.
In our analysis, we will consider two cases: (i) $\omega_{DE}$ is fixed to $-1$ and $\xi$ is a free parameter, (ii) $\omega_{DE}$ and $\xi$ are both free parameters.
Using the CMB dataset, it has been shown for case (i) that the IDE model is capable of solving the Hubble tension, making early- and late-time measurements of $H_0$ agree at 68\% CL\til\cite{DiValentino2020a,Divalentino2020b,Pan2019}. The best-fit values are: $H_0 = 72.8_{-1.6}^{+3.0}$ km s$^{-1}$ Mpc$^{-1}$ and $\xi = -0.51_{-0.29}^{+0.12}$. In case (ii), using CMB+Cepheids, the best-fit values are: $H_0 = 73.3_{-1.0}^{+1.2}$ km s$^{-1}$ Mpc$^{-1}$, $\omega_{DE} = -0.95_{-0.05}^{+0.01}$ and $\xi = -0.73_{-0.10}^{+0.05}$.
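The IDE expansion rate of Eq.\til\eqref{eq:Ez:interacting} can be sketched in code, with the $\Lambda$CDM-like limit ($\xi = 0$) as a sanity check; parameter defaults are ours:

```python
def E2_ide(z, om=0.3, ol=0.7, w=-1.0, xi=-0.3):
    """E^2(z) for the interacting DE model, Eq. (12).

    Stability requires (1 + w) and xi to have opposite signs;
    the formula is singular if w_eff = w + xi/3 vanishes.
    """
    w_eff = w + xi / 3.0  # effective DE equation of state
    zp1 = 1.0 + z
    de = ol * (zp1 ** (3.0 * (1.0 + w_eff))
               + xi / (3.0 * w_eff) * (1.0 - zp1 ** (3.0 * w_eff)) * zp1 ** 3)
    return om * zp1 ** 3 + de
```

At $z = 0$ the bracket reduces to unity, so $E^2(0) = \Omega_{m,0} + \Omega_{\Lambda,0} = 1$ for the default parameters, and for $\xi = 0$ the expression collapses to the wCDM form.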
\subsection{Emergent Dark Energy}\label{subsec:Emergent Dark Energy}
Another solution to the Hubble tension posits that DE contributes to the total energy density budget of the Universe only at late times\til\cite{Li2019,Pan2020}. In such a case, Eq.\til\eqref{E_z} can be rewritten as follows
\begin{equation}\label{eq:Ez:emergent}
E^2 (z) = \Omega_{m,0}(1+z)^{3} + \tilde{\Omega}_{DE}(z)\,.
\end{equation}
In the simplest parameterization, the DE evolves as\til\cite{Li2019}
\begin{equation}
\tilde{\Omega}_{DE}(z) = \Omega_{\Lambda,0}\left[ 1- \tanh{\left( \log_{10}\left(1+z \right) \right) } \right]\,.
\end{equation}
In this parameterization, there are the same degrees of freedom as in the $\Lambda$CDM model. Indeed, there is only one free parameter, namely $\Omega_{\Lambda,0}$. Although this is not a severe modification of the parameter space, the statistical analysis of the temperature fluctuations of the CMB data\til\cite{Planck2020} provides a higher value of the Hubble constant, $H_0 = 72.35_{-0.79}^{+0.78}\ \mbox{km}\ \mbox{s}^{-1}\ \mbox{Mpc}^{-1}$\til\cite{Yang2020}, with respect to the $\Lambda$CDM cosmology, which turns out to agree with the late-time measurements of $H_0$ at 68\% CL.
We will focus on a generalization of the aforementioned model where the DE contribution arises at a specific transition redshift $z_t$. In such a model the DE critical density can be written as\til\cite{Li:2020}
\begin{equation} \label{eq:EME}
\tilde{\Omega}_{DE}(z) = \Omega_{\Lambda,0}\left[ \frac{1- \tanh{\left(\Delta \log_{10}\left(\frac{1+z}{1+z_t} \right) \right)} }{1+ \tanh{\left(\Delta \log_{10}\left(1+z_t\right)\right)}} \right]\,,
\end{equation}
where $\Delta$ is a free parameter and $z_t$ is the epoch where the matter energy density and the DE density are equal. More precisely, $z_t$ is defined by the following equality:
\begin{equation}
\Omega_{m,0}(1+z_t)^{3}\ =\ \frac{\Omega_{\Lambda,0}}{1+ \tanh{\left(\Delta \log_{10}\left(1+z_t\right)\right)}} \ .
\end{equation}
In this case, there is only one extra free parameter, $\Delta$, which discriminates between the $\Lambda$CDM model, recovered for $\Delta = 0$, and the emergent DE parameterization given in Eqs.~\eqref{eq:Ez:emergent} and \eqref{eq:EME} with $\Delta \neq 0$. Under the parameterization in Eq.~\eqref{eq:EME}, it has been shown using CMB+BAO+Cepheids that the $H_0$ tension reduces to $1.8\,\sigma$, with best-fit values of $H_0 = 71.0_{-1.3}^{+1.4}\ \mbox{km}\ \mbox{s}^{-1}\ \mbox{Mpc}^{-1}$ and $\Delta = 0.85_{-0.41}^{+0.44}$~\cite{Yang2021}.
Finally, we focus on the second parameterization, Eq.~\eqref{eq:EME}, because it admits a direct limit to the $\Lambda$CDM cosmological model and, therefore, allows us to predict the accuracy down to which departures from the $\Lambda$CDM model may be detected by future experiments.
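Since the equality defining $z_t$ has no closed-form solution, it must be found numerically. A minimal sketch (the fiducial density parameters are assumed inputs, and the bisection solver is our own illustration, not the authors' code) is:

```python
import math

# Assumed fiducial density parameters (the MCMC analysis varies these).
OMEGA_M, OMEGA_L = 0.3111, 0.6889

def zt_residual(z_t, delta):
    """Difference between the two sides of the z_t defining equality."""
    lhs = OMEGA_M * (1.0 + z_t) ** 3
    rhs = OMEGA_L / (1.0 + math.tanh(delta * math.log10(1.0 + z_t)))
    return lhs - rhs

def transition_redshift(delta, lo=1e-6, hi=10.0, tol=1e-10):
    """Solve for z_t by bisection: the residual is negative at low z
    (matter subdominant) and positive at high z."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if zt_residual(mid, delta) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For delta = 0 the equality reduces to the usual LCDM matter-DE equality.
z_eq = transition_redshift(0.0)
```

For $\Delta = 0$ the solver recovers the standard matter--dark-energy equality redshift, $z_t = (\Omega_{\Lambda,0}/\Omega_{m,0})^{1/3} - 1 \approx 0.30$.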
\subsection{Time-Varying Gravitational Constant}\label{subsec:time-Varying}
As an alternative to the previous models, one can investigate the case in which the gravitational coupling is a function of redshift through some scalar field~\cite{Mota:2011iw}. Starting from an effective quantum theory of gravity that is asymptotically safe, one obtains $G_N (z) = G_{N,0}(1 + z)^{-\delta_{G}}$~\cite{Weinberg1976,Weinberg2010}. The term $G_{N,0}$ refers to the value of the gravitational constant at $z=0$, and $\delta_{G}$ parameterizes its evolution with redshift: $\delta_{G}=0$ means that the gravitational constant is the Newtonian one, with no redshift evolution.
Since the gravitational coupling is no longer a constant, the cosmological constant is also redshift-dependent: $\Lambda(z) = \Lambda_{0} (1 + z)^{\delta_{\Lambda}}$~\cite{Xue2015}. The densities of matter and of DE evolve according to the following equations:
\begin{align}
\left(\frac{G_N}{G_{N,0}}\right) \rho_{m} &= \rho_{m,0}(1+z)^{(3-\delta_{G})}\,,\\
\left(\frac{G_N}{G_{N,0}}\right) \rho_{\Lambda} &= \rho_{\Lambda,0}(1+z)^{\delta_{\Lambda}}.
\end{align}
Therefore, Eq.~\eqref{E_z} can be recast in the following form
\begin{equation}\label{eq:Ez:Gvar}
E^2 (z) = \Omega_{m,0}(1+z)^{(3-\delta_{G})} + \Omega_{\Lambda,0}(1+z)^{\delta_{\Lambda}}\,.
\end{equation}
The requirement for a flat Universe leads to the relation~\cite{Xue2015}
\begin{equation}\label{delta_relation}
\delta_{\Lambda}=\delta_{G}\frac{\Omega_{m,0}}{\Omega_{\Lambda,0}}.
\end{equation}
Let us notice that the $\Lambda$CDM cosmology is recovered by setting $\delta_{G}=0$.
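Putting the modified expansion rate and the flatness relation together, the model's dimensionless Hubble rate can be sketched as follows (a minimal illustration; the fiducial $\Omega_{m,0}$ is an assumed input):

```python
def E_varying_G(z, omega_m0=0.3111, delta_g=0.0):
    """Dimensionless Hubble rate E(z) = H(z)/H0 in the varying-G model.
    Flatness fixes delta_Lambda = delta_G * Omega_m0 / Omega_Lambda0,
    so E(0) = 1 for any delta_G."""
    omega_l0 = 1.0 - omega_m0
    delta_l = delta_g * omega_m0 / omega_l0
    e2 = (omega_m0 * (1.0 + z) ** (3.0 - delta_g)
          + omega_l0 * (1.0 + z) ** delta_l)
    return e2 ** 0.5
```

Setting `delta_g = 0` reduces the function to the flat $\Lambda$CDM expansion rate, as noted above.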
Using the CMB + BAO + SN + $H_0$ dataset, it was shown that the model mitigates the Hubble constant tension, reducing it to $2\sigma$~\cite{Gao:2021}. In that analysis, the best-fit values are $H_0 = 70.69_{-1.08}^{+1.06}\ \mbox{km}\ \mbox{s}^{-1}\ \mbox{Mpc}^{-1}$ and $\delta_{G} = -0.0062_{-0.0023}^{+0.0025}$.
\subsection{Comparing the dark energy models with the $\Lambda$CDM cosmology}
In Fig.~\ref{fig:models}, we illustrate the impact of the DE parameters on the luminosity distance for each model. In the upper left panel, we depict the non-flat $\omega$CDM model for the values $\omega=\{-2, 0\}$ (red and green solid lines, respectively). In the upper right panel, we show the IDE model for $\xi=\{-1, -2\}$; in the lower left panel, the emergent DE model for $\Delta=\{-2, 2\}$; and in the lower right panel, the time-varying gravitational constant model for $\delta_G=\{-1, 1\}$, with the same color coding. In all panels, we also report, for comparison,
our {\em fiducial} cosmological model as a blue solid line, which is a flat $\Lambda$CDM model with the following values of the cosmological parameters~\cite{Planck2020}:
\begin{equation}\label{baseline_model}
\begin{aligned}
H_0 = 67.66\ \mbox{km}\ \mbox{s}^{-1}\mbox{Mpc}^{-1} ,\ \Omega_{m,0} = 0.3111,\\
\Omega_{\Lambda,0} = 0.6889\ \mbox{and}\ \Omega_{k,0} = 0.00\,.\end{aligned}
\end{equation}
Below each panel, we report the residuals to illustrate the level of departure expected from the $\Lambda$CDM model. The maximum departure in the case of the non-flat $\omega$CDM model is $\sim 7 \%$, while it reaches $\sim 18 \%$ at $z\approx 4$ for the IDE model. The emergent DE model reaches its maximum departure at $z\sim 0$, as the model aims to solve the Hubble tension with a modification of the DE contribution at late times. Finally, the departure of the time-varying gravitational constant model from the {\em fiducial} model reaches $\sim 15 \%$ at $z\approx 4$.
\section{Mock Data}\label{sec:mockdata}
Here we briefly summarize the procedure adopted to build the mock catalogs. We closely follow the recipe given in~\cite{Califano2022}, and assign the redshift of each GW source by drawing from the following redshift probability distribution~\cite{Regimbau:2012,Cai:2017,Belgacem:2019tbw}
\begin{equation}\label{rate:unit_of_redshift}
p(z) = \mathcal{N}\frac{R_m (z)}{1+z}\frac{dV(z)}{dz},
\end{equation}
where $\mathcal{N}$ is a normalization factor, $dV(z)/dz$ is the comoving volume element, and $R_m (z)$ is the merger rate per unit volume in the source frame. The latter takes the form~\cite{RegimbauHughes:2009,Meacher:2016,Regimbau:2017}
\begin{equation}\label{merger_rate}
R_{m} (z) = R_{m,0} \int_{t_{min}}^{t_{max}} R_f[t(z)-t_d] P(t_d) d t_d \ ,
\end{equation}
where $R_f[t(z)-t_d]$ is the Star Formation Rate (SFR) and $P(t_d)$ is the time-delay distribution. For the SFR, we assume the model proposed in~\cite{Vangioni:2014}, and for the time-delay distribution a power-law functional form, $P(t_d)\propto t_{d}^{-1}$, as suggested by population-synthesis models~\cite{Tutukov:1994,Lipunov:1995,Pacheco:2006,Belczynski:2006,Shaughnessy:2008}. Nevertheless, it is worth noticing that setting the SFR and the time-delay distribution to other models does not affect the accuracy of the final results (for more details, we refer to Sect.~5.3 in~\cite{Califano2022}).
We integrate Eq.~\eqref{merger_rate} between a minimum time delay of $20$ Myr and a maximum fixed to the Hubble time. Furthermore, the quantity $R_{m,0}$ is the normalization of the merger rate at $z=0$; we set it to the best-fit value obtained by the LIGO/Virgo/KAGRA collaboration, $R_{m}(z=0)=105.5^{+190.2}_{-83.9}\ \mbox{Gpc}^{-3} \mbox{yr}^{-1}$~\cite{LIGO2021:population}. Once the redshifts are extracted from the probability distribution in Eq.~\eqref{rate:unit_of_redshift}, we can assign a {\em fiducial} luminosity distance, $d_L^{fid}(z)$, based on
our {\em fiducial} cosmological model in Eq.~\eqref{baseline_model}.
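Drawing redshifts from a tabulated distribution like Eq.~\eqref{rate:unit_of_redshift} amounts to inverse-transform sampling. A self-contained sketch, with a toy unnormalised density standing in for $R_m(z)\,(dV/dz)/(1+z)$ (the real one requires the merger-rate convolution above), is:

```python
import math
import random

def sample_redshifts(p_unnorm, n, z_max=10.0, n_grid=2000, seed=1):
    """Draw n redshifts by inverse-transform sampling of a tabulated,
    unnormalised density p_unnorm(z) on [0, z_max]."""
    rng = random.Random(seed)
    grid = [z_max * i / (n_grid - 1) for i in range(n_grid)]
    # Cumulative trapezoid -> unnormalised CDF, then normalise to [0, 1].
    cdf = [0.0]
    for i in range(1, n_grid):
        step = 0.5 * (p_unnorm(grid[i]) + p_unnorm(grid[i - 1])) * (grid[i] - grid[i - 1])
        cdf.append(cdf[-1] + step)
    cdf = [c / cdf[-1] for c in cdf]
    draws = []
    for _ in range(n):
        u = rng.random()
        lo, hi = 0, n_grid - 1  # binary search for the CDF bin containing u
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if cdf[mid] < u:
                lo = mid
            else:
                hi = mid
        frac = (u - cdf[lo]) / (cdf[hi] - cdf[lo])
        draws.append(grid[lo] + frac * (grid[hi] - grid[lo]))
    return draws

# Toy density peaking near z ~ 2, qualitatively mimicking the BNS weighting.
z_samples = sample_redshifts(lambda z: z * z * math.exp(-z), 1000)
```

The same routine works for any tabulated $p(z)$; only the integrand changes.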
The ET will have three independent interferometers and, hence, the combined SNR is $\rho=\left(\sum\limits_{i=1}^{3}(\rho_{(i)})^2\right)^{1/2}$. The SNR of the single interferometer, $\rho_{(i)}$, in the ideal case of Gaussian noise, is:
\begin{equation}\label{eq:SNR}
\rho^{2}_{(i)}=4 \int_{f_{\rm lower}}^{f_{\rm upper}} \frac{|F_{+,i}\tilde{h}_{+}(f)+F_{\times,i}\tilde{h}_{\times}(f)|^2}{S_{h,i}(f)} df.
\end{equation}
In the previous definition, $S_{h,i}(f)$ is the one-sided noise power spectral density of the $i$-th interferometer, $\tilde{h}_{+}$ and $\tilde{h}_{\times}$ are the GW strain amplitudes of the $+$ and $\times$ polarizations, and $F_{+,i}(\psi,\theta,\phi)$ and $F_{\times,i}(\psi,\theta,\phi)$ are the so-called beam pattern functions~\cite{FinnChernoff:1993}. The whole sensitivity function\footnote{The latest power spectral density $S_h(f)$ can be downloaded at \url{https://apps.et-gw.eu/tds/?content=3&r=14065}.} $S_h(f)$ is depicted in Fig.~\ref{fig:sems_curv}. To integrate Eq.~\eqref{eq:SNR}, we set the lower cutoff to $f_{lower}=1$ Hz~\cite{Sensitivity:2011} and the upper one to $f_{upper}=\frac{c^3}{6\sqrt{6}\,\pi G M_{obs}}$.
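Combining the three single-interferometer SNRs in quadrature, and applying the detection cut used later when building the catalog, can be sketched as:

```python
def combined_snr(rho_single):
    """Network SNR of ET's three independent interferometers:
    rho = sqrt(sum_i rho_i^2)."""
    return sum(r * r for r in rho_single) ** 0.5

def is_detected(rho_single, threshold=9.0):
    """Detection cut applied when building the mock catalog (SNR >= 9)."""
    return combined_snr(rho_single) >= threshold
```

For example, three equal single-detector SNRs of 6 give a network SNR of $6\sqrt{3}\approx 10.4$, above the threshold, while three SNRs of 3 do not pass the cut.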
We can compute the total number of observable BNS mergers, $N$, from the equation
\begin{equation}
N = T_{obs}\ \Theta \int_{0}^{10} \frac{R_m (z)}{1+z}\frac{dV(z)}{dz}dz\,,
\end{equation}
where $\Theta$ is the duty cycle and $T_{obs}$ is the observation time. In order to generate the catalog and select the events above a signal-to-noise ratio (SNR) threshold of $9$, we assume an isotropic distribution for the sky angles $\theta$ and $\phi$, and a uniform distribution for the cosine of the inclination angle, $\cos i$, and the polarization angle $\psi$. Moreover, to generate the synthetic signal self-consistently with our choice of $R_{m,0}$, we follow the LIGO/Virgo/KAGRA collaboration and set a uniform NS mass distribution in the interval $[1, 2.5]\ M_\odot$.
Thus, we obtain a rate of $\sim 3\times 10^4$ events per year, assuming a duty cycle of 80\%. In our analysis, we will consider 1, 5, and 10 years of observations.
Once the {\em fiducial} luminosity distances are generated, we add a Gaussian noise component, $\mathcal{N}(d_{L}^{fid},\sigma_{ d_{L}})$, to them in order to generate our mock observations.
The variance $\sigma^2_{ d_{L}}$ includes the contributions due to the instrument, $\sigma^2_{inst}$, the lensing, $\sigma^2_{lens}$, and the peculiar velocity of the host galaxy, $\sigma^2_{pec}$. Therefore, the total variance will be:
\begin{equation}\label{sigma_dl}
\sigma_{d_L}^2={\sigma_{inst}^2+\sigma_{lens}^2 +\sigma_{pec}^2}\,.
\end{equation}
The contribution due to the instrumental noise, $\sigma_{inst}$, is~\cite{Cutler1994,Dalal:2006}
\begin{equation}
\sigma_{inst}=\frac{2}{\rho}d_L(z),
\end{equation}
where the factor of two accounts for the degeneracy between $\rho$ and the inclination angle, which may differ for each event.
The contribution due to weak lensing distortions, $\sigma_{lens}$, is given by~\cite{Hirata:2010,Tamanini:2016}
\begin{equation}
\sigma_{lens}=0.066\left(\frac{1-(1+z)^{-0.25}}{0.25}\right)^{1.8}d_L(z)F_{delens}(z),
\end{equation}
where $F_{delens}(z)= 1- \frac{0.3}{\pi /2}\arctan{\frac{z}{z_*}}$, with $z_*=0.073$~\cite{Tamanini:2016}. The latter factor accounts for the possibility of reducing the uncertainty due to weak lensing with future facilities such as the Extremely Large Telescope~\cite{Speri:2021}.
Finally, $\sigma_{pec}$ is related to the peculiar velocities and can be approximated with the following fitting formula~\cite{Kocsis:2006}
\begin{equation}
\sigma_{pec}=\left[ 1+\frac{c(1+z)^2}{H(z)d_L (z)}\right]\frac{\sqrt{\langle v^2\rangle}}{c}d_L (z)\,,
\end{equation}
where we set the root-mean-square peculiar velocity $\sqrt{\langle v^2\rangle}$ to $500$ km/s, in agreement with the values observed in galaxy catalogs~\cite{Cen2000}.
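Putting the three error terms together, the total luminosity-distance uncertainty of Eq.~\eqref{sigma_dl} can be sketched as follows; the fitting formulas follow the expressions above, while the argument values in the example call are purely illustrative:

```python
import math

C_KMS = 299792.458  # speed of light [km/s]

def sigma_dl(z, d_l, h_z, rho, v_pec=500.0):
    """Total 1-sigma luminosity-distance error, combining the instrumental,
    weak-lensing, and peculiar-velocity terms in quadrature.
    d_l in Mpc, h_z = H(z) in km/s/Mpc, v_pec = sqrt(<v^2>) in km/s."""
    s_inst = (2.0 / rho) * d_l
    f_delens = 1.0 - (0.3 / (math.pi / 2.0)) * math.atan(z / 0.073)
    s_lens = 0.066 * ((1.0 - (1.0 + z) ** -0.25) / 0.25) ** 1.8 * d_l * f_delens
    s_pec = (1.0 + C_KMS * (1.0 + z) ** 2 / (h_z * d_l)) * (v_pec / C_KMS) * d_l
    return math.sqrt(s_inst ** 2 + s_lens ** 2 + s_pec ** 2)

# Illustrative call: z = 1, d_L = 6800 Mpc, H(1) ~ 120 km/s/Mpc, network SNR 20.
sigma_example = sigma_dl(1.0, 6800.0, 120.0, 20.0)
```

At these (assumed) values the instrumental term dominates, as is typical for moderate-SNR events.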
\subsection{Electromagnetic counterpart}
The predicted outcomes for a BNS merger are: a relativistic outflow, which is highly anisotropic and can produce an observable high-energy transient; a thermal and radioactive source emitting most of its energy at ultraviolet, optical, and near-infrared wavelengths; and a burst of MeV neutrinos~\cite{Pian2021}.
The neutrino burst is hard to detect: with current instruments, such as the IceCube Neutrino Observatory~\cite{IceCube2017}, we can detect a neutrino counterpart only for events located at redshifts below $0.1$~\cite{Aartsen2020}.
The thermal sources, {\em i.e.} kilonovae, produced by the radioactive decay of unstable heavy elements synthesized during the coalescence, can be detected up to $z\sim 1$ with current and forthcoming telescopes such as the Roman Space Telescope~\cite{Spergel2015,Hounsell2018,Chase2022,Alfradique2022}. Since we are interested in studying the accuracy on the cosmological parameters and, more specifically, in constraining DE models, we focus only on the first kind of outcome: the relativistic outflows. In particular, we study the case of short GRBs because forthcoming gamma-ray and X-ray satellites will detect electromagnetic counterparts up to $z\sim 8$~\cite{THESEUS:2017wvz}.
In particular, we consider the THESEUS satellite, which could overlap with ET and provide the electromagnetic counterparts of the GW events~\cite{THESEUS:2017qvx,THESEUS:2017wvz,Amati2021,Ciolfi2021,Ghirlanda2021,Rosati2021}. Again, we closely follow Ref.~\cite{Califano2022}, and simulate the observed photon flux of the GRB events associated with GW events through the luminosity distance by sampling the luminosity probability distribution $\phi(L)$~\cite{Yang:2021qge,Califano2022}. We assume $\phi(L)$ to be a standard broken power-law distribution~\cite{Wanderman:2014eza} and the jet profile to be Gaussian~\cite{Resmi2018,Howell:2019}.
Once we have extracted the flux from the flux--luminosity relation, we select only the events above the flux threshold of $0.2\ \mbox{photon}\ \mbox{cm}^{-2} \ \mbox{s}^{-1}$. To obtain the number of combined events, we set the duty cycle of the THESEUS satellite to 80\%~\cite{THESEUS:2017wvz}, mainly due to the reduction of observing time owing to passages through the South Atlantic Anomaly, and its sky coverage fraction to $0.5$. Moreover, since the THESEUS satellite can localize a source only within $5$ arcmin in its central field of view, we record only $1/3$ of the total number of combined events~\cite{Belgacem:2019tbw}. We estimate a rate of $\sim 11$ events per year. To show the effectiveness of the procedure, in Fig.~\ref{fig:catalog} we depict all GW events recorded after 10 years of observations with the corresponding error bars (green points), the ones with an electromagnetic counterpart (red points), and the {\em fiducial} cosmological model (blue solid line).
\section{Statistical Analysis}\label{sec:stats}
We carry out a Monte Carlo Markov Chain (MCMC) analysis to estimate the accuracy down to which each DE model parameter (which depends on the specific choice of the DE EoS) can be constrained with future observations from ET. Our mock data are built using the flat $\Lambda$CDM cosmology in Eqs.~\eqref{E_z} and \eqref{baseline_model} as our {\em fiducial} model. Then,
we expect the posterior distributions of the parameters of the DE models introduced in Sect.~\ref{sec:models} to be centered around the {\em fiducial} model. Therefore, the error on the model parameters will indicate the accuracy that we will be able to reach with ET. To this aim, we will run our MCMC pipeline on both bright and dark sirens, {\em i.e.} events whose electromagnetic counterpart has and has not been detected, respectively, and we will point out the main differences in the results.
Our MCMC is based on the {\texttt{emcee}} package~\cite{emcee}, and employs the likelihood of all GW events,
defined as the product of the single-event likelihoods, $ p(\textbf{d}|\bm{\lambda}) = \prod_{i=1}^{N} p(d_i|\bm{\lambda})$. Here, $\bm{\lambda}$ are the cosmological parameters of interest for the specific model, and \textbf{d}$\equiv\lbrace d_i \rbrace_{i=1}^{N}$ is the mock dataset, with $N$ equal to the number of observations. In order to write down the single-event likelihood, one must distinguish between the runs with bright and dark sirens~\cite{Califano2022}. When using bright sirens, the redshift is assumed to be known from the detection of an electromagnetic counterpart, which is, in our case, a short GRB. In such a case, the single-event likelihood can be written as~\cite{Mandel:2018mve,Ye:2021klk}
\begin{equation}\label{likelihood_bright}
p(d_i | \bm{\lambda})= \frac{\int p(d_i|D_L)p_{pop}(D_L |z_i , \bm{\lambda})d D_L}{\int p_{det}(D_L)p_{pop}(D_L|z_i , \bm{\lambda}) dD_L}\,,
\end{equation}
where $p_{pop}(D_L |z_i , \bm{\lambda})=\delta(D_L - d_{L}^{th}(z_i,\bm{\lambda}))$~\cite{DelPozzo:2011vcw}.
In Eq.~\eqref{likelihood_bright}, the denominator is a normalization factor that takes into account the selection effects~\cite{Mandel:2018mve,Vitale:2020aaz}.
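With the delta-function population term, the numerator integral collapses to a Gaussian in the measured distance, evaluated at the model prediction. A sketch of the resulting single-event log-likelihood (ignoring the selection-effect normalisation, and with `d_l_theory` a hypothetical model-distance function) is:

```python
import math

def log_like_bright(d_obs, sigma_obs, z_obs, d_l_theory, params):
    """Single bright-siren log-likelihood: the delta-function p_pop collapses
    the numerator integral to a Gaussian in the measured luminosity distance,
    centred on the model prediction d_L^th(z_i, params).
    The selection-effect denominator is omitted in this sketch."""
    mu = d_l_theory(z_obs, params)
    return (-0.5 * ((d_obs - mu) / sigma_obs) ** 2
            - math.log(sigma_obs * math.sqrt(2.0 * math.pi)))
```

The log-likelihood is maximised when the model distance matches the observed one, where it equals $-\ln(\sigma\sqrt{2\pi})$.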
Instead, when using {dark sirens}, we assume that the redshift distribution of the BNS population is known, and marginalize over the redshift~\cite{Ding:2018zrk,Ye:2021klk}. In such a case, the probability of detecting an event $d_i$ in a specified cosmological model is given by
\begin{equation}
\begin{aligned}
p(d_i|\bm{\lambda}) &= \int_{0}^{z_{max}} p(d_i , z_i |\bm{\lambda}) dz_i \\
&= \int_{0}^{z_{max}} p(d_i|d_L^{th}(z_i,\bm{\lambda})) p_{obs}(z_i|\bm{\lambda})dz_i\,,
\end{aligned}
\end{equation}
where the probability prior distribution of the redshift, $p_{obs}(z_i|\bm{\lambda})$, is obtained from the observed events and already includes detector selection effects~\cite{Ding:2018zrk}.
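The redshift marginalisation for a dark siren can be sketched as a simple quadrature over the observed redshift prior (here `p_obs` and `d_l_theory` are hypothetical callables, and a Gaussian distance likelihood is assumed, as for the mock data):

```python
import math

def like_dark(d_obs, sigma_obs, d_l_theory, p_obs, params, z_max=10.0, n=500):
    """Single dark-siren likelihood: marginalise the Gaussian distance
    likelihood over the observed redshift prior p_obs(z), using a
    trapezoidal rule on a uniform grid in z."""
    zs = [z_max * i / (n - 1) for i in range(1, n)]  # grid, skipping z = 0
    vals = []
    for z in zs:
        mu = d_l_theory(z, params)
        gauss = (math.exp(-0.5 * ((d_obs - mu) / sigma_obs) ** 2)
                 / (sigma_obs * math.sqrt(2.0 * math.pi)))
        vals.append(gauss * p_obs(z))
    dz = zs[1] - zs[0]
    return dz * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
```

With a linear toy distance--redshift relation and a uniform prior, the integral reduces analytically to (prior density)/(distance slope), which the quadrature reproduces.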
Finally, in our analysis, we neglect the contribution of the spin of the source to the amplitude of the signal~\cite{Poisson:1995ef,Baird:2012cu} and assume uniform priors on the cosmological parameters of interest, as reported in Table~\ref{tab:prior}.
\begin{table}[!ht]
\centering
\begin{tabular}{cc||cc}
\hline
\hline
Parameters & Prior & Parameters & Prior\\
\hline
$H_0$ & $\mathcal{U}(35,85)$ & $\omega_{DE}$ & $\mathcal{U}(-3,0)$\\
$\Omega_{m,0}$ & $\mathcal{U}(0,1)$ & $\xi$ & $\mathcal{U}(-3,3)$ \\
$\Omega_{\Lambda,0}$ & $\mathcal{U}(0,1)$ & $\Delta$ & $\mathcal{U}(-2,2)$\\
$\Omega_{k,0}$ & $\mathcal{U}(-1,1)$ & $\delta_{G}$ & $\mathcal{U}(-3,3)$ \\
\hline
\hline
\end{tabular}
\caption{Uniform priors on the cosmological parameters involved in the DE models presented in Sect.~\ref{sec:models}.}
\label{tab:prior}
\end{table}
\begin{table*}[!ht]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\hline
\hline
\multicolumn{9}{c}{{\bf $\omega$CDM}}\\
\hline
\multirow{2}{2.5em}{{\bf years}}&\multicolumn{4}{c|}{{\bf Bright Sirens}}&\multicolumn{4}{c}{{\bf Dark Sirens}} \\
\cline{2-9}
& ${\bf H_0}$ & ${\bf \Omega_{k,0}}$ & ${\bf \Omega_{\Lambda,0}}$ & $ {\bf \omega_{DE}}$ & ${\bf H_0}$ & ${\bf \Omega_{k,0}}$ & ${\bf \Omega_{\Lambda,0}}$ & $ {\bf \omega_{DE}}$ \\
\hline
1 & $66.70_{-2.24}^{+2.50}$ &$-0.08_{-0.28}^{+0.38}$ &$0.70_{-0.34}^{+0.21}$ &$-1.56_{-0.97}^{+1.39}$ &$67.52_{-0.17}^{+0.19}$ &$0.02_{-0.03}^{+0.04}$ &$0.68_{-0.04}^{+0.04}$ &$-1.20_{-0.36}^{+0.28}$ \\
5 & $67.80_{-1.02}^{+0.99}$ &$0.13_{-0.22}^{+0.22}$ &$0.57_{-0.18}^{+0.22}$ &$-1.63_{-0.97}^{+1.08}$ &$67.70_{-0.07}^{+0.07}$ &$-0.02_{-0.02}^{+0.02}$ &$0.67_{-0.03}^{+0.04}$ &$-0.97_{-0.15}^{+0.15}$ \\
10 & $67.55_{-1.03}^{+1.02}$ &$-0.05_{-0.17}^{+0.19}$ &$0.66_{-0.16}^{+0.20}$ &$-1.35_{-0.98}^{+0.84}$ & $67.68_{-0.05}^{+0.06}$ &$-0.01_{-0.02}^{+0.02}$ &$0.68_{-0.03}^{+0.03}$ &$-0.95_{-0.11}^{+0.09}$\\
\hline
\hline
\multicolumn{9}{c}{{\bf Interacting Dark Energy ($\omega_{DE}$-fixed)}}\\
\hline
\multirow{2}{2.5em}{{\bf years}}&\multicolumn{4}{c|}{{\bf Bright Sirens}}&\multicolumn{4}{c}{{\bf Dark Sirens}} \\
\cline{2-9}
& ${\bf H_0}$ & ${\bf \Omega_{m,0}}$ & ${\bf \omega_{DE}}$ & $ {\bf \xi}$ & ${\bf H_0}$ & ${\bf \Omega_{m,0}}$ & $ {\bf \omega_{DE}}$ & $ {\bf \xi}$ \\
\hline
1 & $66.71_{-1.66}^{+1.35}$ &$0.25_{-0.17}^{+0.25}$ & -- &$-0.98_{-1.11}^{+1.50}$ & $67.72_{-0.24}^{+0.27}$ &$0.30_{-0.03}^{+0.03}$ & --&$0.13_{-0.20}^{+0.20}$ \\
5 & $68.20_{-0.99}^{+0.91}$ &$0.22_{-0.13}^{+0.16}$ & --&$-0.65_{-0.84}^{+1.10}$& $67.68_{-0.12}^{+0.14}$ &$0.33_{-0.02}^{+0.02}$ & -- &$0.01_{-0.11}^{+0.12}$ \\
10 & $67.81_{-0.93}^{+0.97}$ &$0.24_{-0.14}^{+0.13}$ & -- &$-0.76_{-0.92}^{+0.83}$ &$67.70_{-0.05}^{+0.05}$ &$0.32_{-0.01}^{+0.01}$ & -- &$-0.02_{-0.06}^{+0.06}$ \\
\hline
\hline
\multicolumn{9}{c}{{\bf Interacting Dark Energy ($\omega_{DE}$-variable)}}\\
\hline
1 & $67.86_{-2.15}^{+2.50}$ &$0.51_{-0.15}^{+0.11}$ &$-2.08_{-0.66}^{+1.05}$ &$0.96_{-0.95}^{+1.18}$& $67.51_{-0.19}^{+0.19}$ &$0.37_{-0.05}^{+0.04}$ &$-1.14_{-0.21}^{+1.16}$ &$0.79_{-0.50}^{+0.59}$\\
5 & $68.64_{-1.17}^{+1.22}$ &$0.42_{-0.25}^{+0.16}$ &$-1.51_{-0.63}^{+0.52}$ &$0.37_{-1.22}^{+1.72}$ &$67.70_{-0.10}^{+0.11}$ &$0.28_{-0.18}^{+0.17}$ &$-0.97_{-0.09}^{+0.09}$ &$-0.25_{-0.26}^{+0.27}$\\
10 & $67.99_{-1.04}^{+1.35}$ &$0.43_{-0.26}^{+0.16}$ &$-1.38_{-0.64}^{+0.43}$ &$0.15_{-1.33}^{+1.68}$& $67.63_{-0.04}^{+0.04}$ &$0.28_{-0.16}^{+0.17}$ &$-0.92_{-0.08}^{+0.08}$ &$-0.07_{-0.18}^{+0.16}$ \\
\hline
\hline
\multicolumn{9}{c}{{\bf Emergent Dark Energy}}\\
\hline
\multirow{2}{2.5em}{{\bf years}}&\multicolumn{4}{c|}{{\bf Bright Sirens}}&\multicolumn{4}{c}{{\bf Dark Sirens}} \\
\cline{2-9}
& ${\bf H_0}$ & ${\bf \Omega_{m,0}}$ & ${\bf \Delta}$ & -- & ${\bf H_0}$ & ${\bf \Omega_{m,0}}$ & $ {\bf \Delta}$ & -- \\
\hline
1 & $66.46_{-1.35}^{+4.16}$ &$0.35_{-0.10}^{+0.09}$ &$-0.06_{-0.91}^{+1.09}$& --& $67.86_{-0.24}^{+0.34}$ &$0.31_{-0.01}^{+0.01}$ &$0.21_{-0.34}^{+0.28}$& --\\
5 & $67.30_{-0.82}^{+2.71}$ &$0.32_{-0.06}^{+0.05}$ &$0.26_{-0.77}^{+0.80}$& --&$67.60_{-0.08}^{+0.09}$ &$0.31_{-0.01}^{+0.01}$ &$-0.02_{-0.06}^{+0.06}$& --\\
10 & $66.92_{-0.68}^{+2.17}$ &$0.36_{-0.06}^{+0.05}$ &$0.21_{-0.83}^{+0.89}$& -- & $67.66_{-0.03}^{+0.03}$ &$0.310_{-0.002}^{+0.002}$ &$0.00_{-0.01}^{+0.01}$& -- \\
\hline
\hline
\multicolumn{9}{c}{{\bf Time-Varying Gravitational Constant}}\\
\hline
\multirow{2}{2.5em}{{\bf years}}&\multicolumn{4}{c|}{{\bf Bright Sirens}}&\multicolumn{4}{c}{{\bf Dark Sirens}} \\
\cline{2-9}
& ${\bf H_0}$ & ${\bf \Omega_{m,0}}$ & ${\bf \delta_{G}}$ & -- & ${\bf H_0}$ & ${\bf \Omega_{m,0}}$ & $ {\bf \delta_{G}}$ & -- \\
\hline
1 &$66.92_{-1.70}^{+1.30}$ &$0.26_{-0.17}^{+0.26}$ &$-0.49_{-1.52}^{+1.63}$ & -- &$67.64_{-0.08}^{+0.07}$ &$0.31_{-0.01}^{+0.01}$ &$-0.03_{-0.04}^{+0.05}$& -- \\
5 &$67.49_{-0.89}^{+0.87}$ &$0.35_{-0.12}^{+0.12}$ &$0.22_{-0.60}^{+1.21}$&--&$67.68_{-0.04}^{+0.05}$ &$0.32_{-0.01}^{+0.01}$ &$-0.01_{-0.03}^{+0.02}$& -- \\
10 & $67.51_{-0.92}^{+0.81}$ &$0.29_{-0.07}^{+0.10}$ &$-0.26_{-0.46}^{+0.42}$& -- &$67.65_{-0.04}^{+0.04}$ &$0.31_{-0.01}^{+0.01}$ &$-0.02_{-0.02}^{+0.02}$ & -- \\
\hline
\hline
\end{tabular}
\caption{Best-fit values and $1\sigma$ uncertainties on the cosmological parameters of interest for each DE model presented in Sect.~\ref{sec:models}.}\label{tab:results}
\end{table*}
\section{Results}\label{sec:results}
We carried out an MCMC run for each mock catalog and each DE model. We built six mock catalogs: three of them report all the GW events after one, five, and ten years of observations but without prior redshift information (dark sirens); the other three contain those GW events with a detected electromagnetic counterpart and, therefore, with prior redshift information (bright sirens) from an X-ray telescope such as THESEUS.
We use these mock catalogs to constrain the four DE models mentioned in Sect.~\ref{sec:models}. All results of our runs are reported in Table~\ref{tab:results}. For each DE model, we also show the corner plot of the posterior distributions of the cosmological parameters of interest; see Figs.~\ref{fig:contour_interacting_wcdm}, \ref{fig:contour_interacting_1p}, \ref{fig:contour_interacting_2p}, \ref{fig:contour_emergent}, and \ref{fig:time_varying}.
In each figure, we report on the left and right sides the posterior distributions obtained from the bright and dark sirens, respectively. In each contour plot, we depict with green, orange, and blue histograms and filled areas the results from one, five, and ten years of observations, respectively. The transparency levels of the contours for a given number of years of observations depict the 68\%, 95\%, and 99\% CL, from the darkest to the lightest color. Finally, the vertical dashed red line indicates the value of the {\em fiducial} cosmological parameters. Let us now discuss in detail the results for each DE model, comparing them with current observational results. In the discussion, we will always refer to the results obtained after ten years of observations.
{\bf \em $\omega$CDM}: we report the results in Fig.~\ref{fig:contour_interacting_wcdm}.
As reported in Table~\ref{tab:results}, we may constrain the cosmological parameters with an accuracy of $[\sigma_{H_0}, \sigma_{\Omega_{k,0}}, \sigma_{\Omega_{\Lambda,0}}, \sigma_{\omega_{DE}}]= [1.02, 0.18, 0.18, 0.93]$ and $[0.06, 0.02, 0.03, 0.10]$ in the case of bright and dark sirens, respectively. These results correspond to a relative accuracy on the Hubble constant and on $\omega_{DE}$ of $[\sim1.5\%,\sim 69\%]$ and $[\sim 0.1\%,\sim 10\%]$, respectively. For reference, the current accuracy on $H_0$ is at the level of 1\%, and on $\omega_{DE}$ at the level of 3\%~\cite{Gao:2021}. However, it is important to remark that those bounds were obtained by using not only luminosity distances but also the distance-prior data obtained by the {\em Planck} satellite~\cite{Gao:2021}. Therefore, we show that, using bright sirens, ET will bound the Hubble constant with the same level of accuracy, and will be capable of improving on it by one order of magnitude using dark sirens. Nevertheless, ET alone will not be capable of improving the accuracy on the $\omega_{DE}$ parameter, not even with the dark-siren catalog.
{ \bf \em Interacting Dark Energy ($\omega_{DE}$-fixed)}: we report the posterior distributions in Fig.~\ref{fig:contour_interacting_1p}, and the best-fit values of the cosmological parameters in Table~\ref{tab:results}. The 68\% uncertainties are $[\sigma_{H_0}, \sigma_{\Omega_{m,0}}, \sigma_{\xi}]= [0.95, 0.13, 0.88]$ and $[0.05, 0.01, 0.06]$ in the case of bright and dark sirens, respectively. These results translate into an accuracy on the Hubble constant of $\sim 1.4\%$ and $\sim 0.1\%$, which is better than the current constraints shown in~\cite{DiValentino2020a,Divalentino2020b,Pan2019} by factors of $\sim 2.4$ and $\sim 46$, respectively. On the contrary, the accuracy on the parameter $\xi$ is improved only with dark sirens, by a factor of $\sim 3.3$.
{\bf \em Interacting Dark Energy ($\omega_{DE}$-variable)}: the results of the MCMC algorithm are shown in Fig.~\ref{fig:contour_interacting_2p} and listed in Table~\ref{tab:results}. The cosmological parameters are constrained with an accuracy of $[\sigma_{H_0}, \sigma_{\Omega_{m,0}}, \sigma_{\omega_{DE}},\sigma_{\xi}]= [1.19, 0.21, 0.53, 1.5]$ and $[0.04, 0.16, 0.08, 0.17]$, which corresponds to a relative accuracy on the Hubble constant and on $\omega_{DE}$ of $[\sim1.7\%,\sim 38\%]$ and $[\sim 0.1\%,\sim 9\%]$, in the case of bright and dark sirens, respectively.
Using only bright sirens, the uncertainties and relative accuracies on $H_0$, $\omega_{DE}$, and $\xi$ are not competitive with the ones obtained in~\cite{DiValentino2020a,Divalentino2020b,Pan2019}. Nevertheless, when we use dark sirens, the accuracy on $H_0$ improves by a factor of $\sim 27.5$, while the constraints on $\omega_{DE}$ and $\xi$ still do not improve on the results of~\cite{DiValentino2020a,Divalentino2020b,Pan2019}. It is worth noticing that the previous analyses are based on multiple datasets, such as CMB and Cepheids, while we are only focusing on studying the capability of ET.
{\bf \em Emergent Dark Energy}: the results are reported in Fig.~\ref{fig:contour_emergent} and in Table~\ref{tab:results}. We constrain the cosmological parameters with the following accuracy: $[\sigma_{H_0}, \sigma_{\Omega_{m,0}}, \sigma_{\Delta}]= [1.43, 0.06, 0.86]$ and $[0.03, 0.002, 0.01]$ in the case of bright and dark sirens, respectively. These results provide us with a relative accuracy on $H_0$ of $\sim 2.1\%$ and $\sim 0.04\%$. Using bright sirens allows us to obtain bounds on the Hubble constant comparable with the current constraints shown in~\cite{Yang2021}. Instead, using dark sirens, we improve such a constraint by a factor of $\sim 46$. The parameter $\Delta$ is constrained with a better accuracy only when dark sirens are taken into account, improving the bounds in~\cite{Yang2021} by a factor of $\sim 40$.
{\bf \em Time-Varying Gravitational Constant}: we report the posterior distributions in Fig.~\ref{fig:time_varying}, while the constraints on the cosmological parameters are reported in Table~\ref{tab:results}. We show that, in the framework of a time-varying gravitational constant, ET will be capable of bounding the cosmological parameters with an accuracy of $[\sigma_{H_0}, \sigma_{\Omega_{m,0}}, \sigma_{\delta_G}]= [0.86, 0.08, 0.44]$ and $[0.04, 0.01, 0.02]$, in the case of bright and dark sirens, respectively. Hence, the predicted relative accuracy on the Hubble constant is $\sim 1.2\%$ and $\sim 0.06\%$. Using the CMB + BAO + SN + $H_0$ dataset, the bounds on the Hubble constant are currently at the level of $\sim 1.5\%$, while the accuracy on $\delta_G$ is of the order of $0.002$~\cite{Gao:2021}. Therefore, while using dark sirens ET will be capable of improving the relative error on $H_0$ by more than one order of magnitude, it will not be capable, alone, of improving the bounds on $\delta_G$.
\section{Discussion and Conclusions}\label{sec:conclusion}
The Hubble tension is one of the most important issues of modern cosmology~\cite{DiValentino2021,Abdalla2022}. It is still not clear whether the solution of this tension lies more in the observational and statistical sector than in the theoretical one, with the possibility of some ``{\it new physics}''. Most of the solutions proposed to date focus on extending the $\Lambda$CDM model~\cite{Belgacem:2019tbw,DiValentino2021,Abdalla2022}, or on changing the underlying theory of gravity~\cite{Belgacem2019,Belgacem2019b,Abdalla2022,Ferreira2022,Capozziello:2019cav}. Nowadays, this tension is established at the $4.2\sigma$ level and arises from a discrepancy between the value of the Hubble constant obtained from late-time observations, such as Cepheids, SNeIa, and BAO among others, and the one inferred from the CMB power spectrum at early times. A dataset complementary to the usual late-time observations is the estimation of luminosity distances from GWs. Since the latter are not based on measurements of a photon flux, they need not be calibrated against closer electromagnetic sources, such as Cepheids and SNeIa. Therefore, they represent a potential way to solve the Hubble tension, and may help identify whether its cause is related to observations at late or early times, or to a theoretical limitation of the $\Lambda$CDM model. To this aim, the LIGO/Virgo/KAGRA collaboration has constrained the Hubble constant with GWs to be $H_0= 70_{-8}^{+12}$ km s$^{-1}$ Mpc$^{-1}$ at 68\% CL~\cite{LIGO_H0_2017}. However, the accuracy reached is not enough to provide a definitive answer.
The 3G GW detector ET promises to constrain the Hubble constant with sub-percent accuracy~\cite{Maggiore2020}, offering a possible solution to the Hubble tension. Therefore, we have forecast the accuracy down to which ET may bound the cosmological parameters of four DE models which have the potential to solve the Hubble tension~\cite{Abdalla2022}: the non-flat $\omega$CDM, interacting dark energy, emergent dark energy, and time-varying gravitational constant models. We have predicted the luminosity distance expected in these models as a function of the cosmological parameters, and fit it to mock data built to mimic the expected rate of observations and accuracy of ET, whose construction was explained in Sect.~\ref{sec:mockdata}. Our fitting procedure is based on the MCMC algorithm explained in Sect.~\ref{sec:stats}. The results are reported in Table~\ref{tab:results}, and we also show the posterior distributions of the cosmological parameters of interest in Figs.~\ref{fig:contour_interacting_wcdm}, \ref{fig:contour_interacting_1p}, \ref{fig:contour_interacting_2p}, \ref{fig:contour_emergent}, and \ref{fig:time_varying}.
Our results clearly indicate that ET will be capable of reaching an accuracy on $H_0$ of $\sim 1\%$ with bright sirens, and of going below $\sim 0.1\%$ with dark sirens, independently of the theoretical framework used in the statistical analysis. This accuracy will be adequate to address the Hubble tension. Nevertheless, ET alone will not always be capable of improving current constraints on the additional cosmological parameters that depend on the specific choice of DE model. For instance, in the non-flat $\omega$CDM and interacting DE models, the parameters $\omega_{DE}$ and $\xi$ will be constrained with an accuracy worse than current bounds~\cite{DiValentino2020a,Divalentino2020b,Pan2019,Gao:2021}. In the case of the time-varying gravitational constant model, the accuracy reached by ET will still be one order of magnitude worse than current constraints~\cite{Gao:2021}. On the contrary, in the emergent DE model, we show that ET will also be able to improve the bounds on the additional cosmological parameter $\Delta$ by a factor of 40 with respect to current analyses~\cite{Yang2021}. These results show the great capability of ET to address the Hubble tension independently of the theoretical framework chosen, but also point out that, to strongly constrain the DE models we have considered, ET will need to be complemented with other datasets.
\section*{Acknowledgments }
MC, DV, and SC acknowledge the support of Istituto Nazionale di Fisica Nucleare (INFN), Sez. di Napoli, {\it iniziativa specifiche} QGSKY, MoonLIGHT2, and TEONGRAV.
\bibliographystyle{apsrev4-2}
\bibliography{Biblio.bib}
|
Title:
Black hole mass estimation using X-ray variability measurements in Seyfert galaxies |
Abstract: Our objective is to critically assess the X-ray flux variability as a tool
for measuring the black hole (BH) mass in active galactic nuclei (AGN). We aim
to establish a prescription for estimating BH masses based on measurements of
the normalised excess variance from X-ray data. We discuss the minimum
requirements in terms of the light-curve duration and X-ray signal-to-noise
ratio (S/N) to enable a reliable determination that is comparable to what can
be derived from the continuum and emission line reverberation studies. We used
the light curves of local Seyfert galaxies from the Nuclear Spectroscopic Telescope
Array hard X-ray mission (NuSTAR), to compute the normalised excess variance
(NXV) in the 3-10 and 10-20 keV bands, thus extending the analysis to an energy
band higher than 10 keV. The excess variance measurements were then combined
with independent BH mass estimates from the literature to establish the MBH
versus NXV relation for different samples and weigh its accuracy in terms of
the light-curve duration and X-ray S/N. We find that it is possible to
accurately measure the BH mass in AGN using excess variance measurements in the
3-10 and the 10-20 keV bands, however, strong quality requirements should be
applied. The minimum necessary S/N and duration of the light curves used to
compute the excess variance ought to be 3 and approximately 100 ks,
respectively. We provide a linear relationship between the normalised excess
variance and the black hole mass that can be used to estimate the latter, with
an average uncertainty of the order of 0.4 to 0.25 dex (depending on the
adopted light-curve segment duration).
| https://export.arxiv.org/pdf/2208.12490 |
\title{Black hole mass estimation using X-ray variability measurements in Seyfert galaxies}
\titlerunning{Black hole mass estimation of Seyfert galaxies}
\authorrunning{A. Akylas et al.}
\author{A. Akylas
\inst{1}
\and
I. Papadakis
\inst{2,3}
\and
A. Georgakakis
\inst{1}
}
\institute{Institute for Astronomy Astrophysics Space Applications and Remote Sensing (IAASARS), National Observatory of Athens, I. Metaxa \& V. Pavlou, Penteli, 15236, Greece \\ \email{aakylas@noa.gr}
\and
Department of Physics and Institute of Theoretical and Computational Physics, University of Crete, 71003 Heraklion, Greece
\and
Institute of Astrophysics - FORTH, N. Plastira 100, 70013 Vassilika Vouton, Greece}
\abstract
{}
{Our objective is to critically assess the X-ray flux variability as a tool for measuring the black hole (BH) mass in active galactic nuclei (AGN). We aim to establish a prescription for estimating BH masses based on measurements of the normalised excess variance from X-ray data. We discuss the minimum requirements in terms of the light-curve duration and X-ray signal-to-noise ratio (S/N) to enable a reliable determination that is comparable to what can be derived from the continuum and emission line reverberation studies.}
{We used the light curves of local Seyfert galaxies from the Nuclear Spectroscopic Telescope Array hard X-ray mission ($\rm NuSTAR$) to compute the normalised excess variance (\snxv) in the 3-10 and 10-20 keV bands, thus extending the analysis to an energy band higher than 10 keV. The excess variance measurements were then combined with independent BH mass estimates from the literature to establish the \mbh\ versus \snxv\ relation for different samples and weigh its accuracy in terms of the light-curve duration and X-ray S/N.}
{We find that it is possible to accurately measure the BH mass in AGN using excess variance measurements in the 3-10 and the 10-20 keV bands, however, strong quality requirements should be applied. The minimum necessary S/N and duration of the light curves used to compute the excess variance ought to be $\sim$3 and $\sim 80 - 100$ ks, respectively. We provide a linear relationship between the normalised excess variance and the black hole mass that can be used to estimate the latter, with an average uncertainty of the order of $0.4 - 0.25$ dex (depending on the adopted light-curve segment duration). In general, BH mass estimates from 3-10 keV and 10-20 keV band light curves are expected to be similar. The 10–20 keV band is preferred for sources that are heavily absorbed and the 3–10 keV band is preferred for sources that may be dominated by the X–ray reflection component at energies above 10 keV. }
{}
\keywords{accretion, accretion disks -- X-rays: general -- galaxies: active -- quasars: supermassive black holes}
\section{Introduction}
Super-massive black holes (SMBHs) reside in the centre of most (if not all) galaxies and
are responsible for their most energetic face, namely, that of active galactic nuclei (AGN).
According to the current paradigm, the incredibly high luminosity of these objects
is powered by the accretion of matter in the vicinity of the SMBHs.
As matter spirals inward, copious amounts of energy are released, over a wide
range of wavelengths (from radio to gamma rays) due to
the conversion of gravitational potential energy into radiation.
The emitted power in AGN depends on the black hole (BH) mass and the accretion rate. Their luminosity can reach $10^{15}$
times that of the Sun, while the accretion of matter may significantly
contribute to the growth of the BH mass.
Studies have made it clear that SMBHs play a major role in regulating star
formation in galaxies and, thus, affect their surroundings. One of the strongest pieces of empirical
evidence of the mutual interaction between AGN and its host galaxy
is demonstrated by the strong correlation between the mass of SMBHs and the bulge stellar velocity dispersion, $\rm \sigma_{\ast}$, \citep[e.g.][]{ferrarese2000, gebhardt2000}.
This relation can be established through the
interaction between the energy and radiation generated by accretion and the gas in the
host galaxy, known as AGN feedback \citep[e.g.][]{fabian2012}.
In order to understand and investigate the role of SMBHs in galaxy formation and evolution
processes, we need to monitor the growth of the SMBHs across cosmic time. Therefore, it
is not surprising that a lot of effort has been focussed on finding ways of measuring the mass of SMBHs in
galaxies -- overall, and, in particular, in the case of AGN.
Direct mass measurements of SMBHs are possible with stellar kinematics and gas dynamics,
although these methods require good spatial resolution and are presently only feasible for a small number
of nearby galaxies. These methods can be used for weak AGN, such as low-luminosity AGNs, LINERs,
and Seyfert-2s; this is because for more luminous AGNs, the strong nuclear emission weakens the stellar
features in the spectrum.
Based on the assumption that the motion of the gas in the broad line region (BLR) of AGNs is dominated
by the gravitational influence of the SMBH, we can use the virial equation $\rm M_{BH}$ = ($\rm f R_{BLR} \Delta V^2$ )/G
to estimate SMBH masses. Here, $\rm R_{BLR}$ is the average radius of the emitting gas in the broad line region,
usually determined with reverberation mapping \citep[e.g.][]{peterson2004}, while the velocity dispersion
of the gas ($\rm \Delta V$) is measured from the width of the broad emission lines. The dimensionless factor, f,
is referred to as a virial coefficient and encapsulates the unknown geometry and dynamics of the broad line region gas.
Its true value for each AGN is unknown and, in most cases, an average value is used, based on the assumption that
AGNs follow the \mbh\ -- $\rm \sigma_{\ast}$ relation observed in quiescent galaxies \citep[e.g.][]{grier2013}.
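As a rough numerical illustration of the virial estimate above, the sketch below evaluates $M_{BH} = f R_{BLR} \Delta V^2 / G$ with the mean virial factor $\langle f \rangle = 4.3$ quoted in the text; the input BLR radius and velocity dispersion are invented for the example, not taken from any source in the paper:

```python
# Sketch of the virial BH mass estimate M_BH = f * R_BLR * dV^2 / G,
# with the mean virial factor <f> = 4.3 used in the text.
# The input numbers below are illustrative, not from the paper.

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
LIGHT_DAY = 2.59e13  # one light-day [m]

def virial_mass(r_blr_light_days, delta_v_km_s, f=4.3):
    """Virial BH mass in solar masses, from the BLR radius
    (light-days) and the broad-line velocity dispersion (km/s)."""
    r = r_blr_light_days * LIGHT_DAY  # BLR radius in metres
    dv = delta_v_km_s * 1e3           # velocity dispersion in m/s
    return f * r * dv**2 / (G * M_SUN)

# e.g. R_BLR = 10 light-days and dV = 3000 km/s give ~8e7 M_sun
m_bh = virial_mass(10.0, 3000.0)
```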
Black hole masses can also be inferred indirectly from observable quantities that are correlated with the mass of the SMBHs,
such as the velocity dispersion of bulge stars, that is, the value of the \mbh\ -- $\rm \sigma_{\ast}$ scaling relationship (mentioned above) or the bulge luminosity \citep[e.g.][]{gultekin2009}.
In this work, we focus on the flux variability of AGN as a means of measuring SMBHs. Stochastic variations of the radiated energy
is one of the main observational characteristics of the accretion flows onto compact objects and SMBHs in particular.
The origin of these variations is still not well understood and could be related to accretion flow instabilities,
a flaring corona, or hot-spots orbiting the central compact object
\citep[e.g.][]{GravityCollaboration2018, GravityCollaboration2020}.
Although the AGN flux variability is observed across the entire electromagnetic spectrum, analysing light curves at X-ray wavelengths
offers several advantages. Stellar processes emit (very) little X-ray radiation and, therefore, observations at these energies arguably provide
the cleanest diagnostic of active SMBHs at the centres of galaxies over a broad redshift range and accretion luminosity baselines \citep[e.g.][]{brandt2015}.
The X-ray photons can also penetrate relatively dense gas clouds and be virtually unaffected, thereby providing a representative sampling
of the obscured AGN in the Universe. Therefore, X-ray observations enable flux variation measurements across a broad range of AGN (including low-luminosity and obscured objects)
that are challenging or even impossible to perform at other wavebands, where the host galaxy emission is dominant. The proximity of the X-ray
emitting region to the active black hole implies the possibility of a direct connection between the X--ray flux variations and the
physical properties of the system (e.g. black-hole mass or accretion rate).
Past observations have indicated that such a relation exists in AGN. The X-ray power spectral density (PSD) of AGN has been modelled
using a bending power-law with a slope of $-1$ at low frequencies, steepening to $-2$ at frequencies above the bend \citep[e.g.][]{mchardy2004}. Various studies
have indicated that the PSD bend frequency may be aptly correlated with BH mass and (potentially) with the bolometric luminosity
\citep[e.g.][]{czerny2001, mcHardy2006, kording2007, Gonzalez2012}.
However, it is difficult to estimate the PSD in AGN (and, even more so, to detect the bending frequency), as this requires long,
uninterrupted light curves with a high signal-to-noise ratio (S/N). Therefore, it is not practical to use the bending-frequency versus BH mass relation
to measure the BH mass in AGN.
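The PSD shape described above can be sketched with a smoothly bending power law; the specific parameterisation below is a common choice in the variability literature and is an assumption here, not necessarily the exact form used in the cited works:

```python
def bending_power_law(f, f_bend, norm=1.0):
    """Bending power-law PSD: P(f) ~ f^-1 well below the bend
    frequency f_bend, steepening to f^-2 well above it.
    This specific parameterisation is an illustrative assumption."""
    return norm * f**-1 / (1.0 + f / f_bend)

# Well below the bend, P drops by ~10x per decade in f (slope -1);
# well above the bend, it drops by ~100x per decade (slope -2).
```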
Using the normalised excess variance, \snxv, \citep[e.g.][]{nandra1997} as an estimator of the intrinsic
'band-variance' of the source, a tight anti-correlation between \snxv\ and \mbh\
has also been found in the local universe \citep[e.g.][]{papadakis2004, oneill2005, zhou2010, ponti2012}
as well as the high-redshift universe \citep[e.g.][]{lanzuisi2014}. In particular, the value of \snxv\ can be estimated much more easily with the available
data sets for many types of AGN, both at low and high redshifts. It is for this reason that most of the SMBH mass measurements
in AGN are based on the use of the \snxv\ vs \mbh\ relation \citep[e.g.][]{ponti2012}.
Our objective is to critically assess the X-ray flux variability as a tool for measuring AGN SMBHs.
We used the light curves of local Seyfert galaxies from the $\rm NuSTAR$ observatory to compute \snxv\ in the energy bands
3-10 and 10-20 keV. The use of $\rm NuSTAR$ data has allowed us to extend our analysis to an energy band higher than 10 keV,
which may be important for obscured AGN. The excess variance measurements are then combined with independent BH mass estimates
from the literature to establish the \mbh\ vs \snxv\ relation in AGN. All studies in the past have
focused on the reverse relation (i.e. \snxv\ vs \mbh) as the main goal was the study of the X--ray variability,
while our focus here is the estimation of the BH mass in AGN from the excess variance measurements.
We present a prescription to measure SMBH masses using the excess variance measurements and we discuss,
for the first time, the minimum requirements in terms of light-curve duration and X-ray S/N values
that enable a reliable determination, similar to those derived from the continuum and emission line reverberation studies
or the \mbh\ -- $\rm \sigma_{\ast}$ relation.
\section{The sample}\label{thesample}
Our sample consists of the Seyfert galaxies detected in the 105-month survey of the Burst Alert Telescope, \citep[BAT;][]{Barthelmy2004}
on board the \swift Gamma-Ray Burst observatory \citep{gehrels2004}, along with archival observations from the \nustar mission \citep{harrison2013}.
The BAT 105 month catalogue \citep{oh2018} contains 379 Seyfert-1 and 448 Seyfert-2 galaxies. There are 161 Seyfert-1s and 253 Seyfert-2s
(414 in total) that have been observed by \nustar until September 2020, with a duration greater than 10 ks. We found 664 independent,
archival \nustar observations for these sources.
Of these \nustar observations, 114 are flagged as affected by increased solar activity and have been removed from further analysis.
Moreover, we restricted our analysis to the low-redshift regime. This is meant to minimise the inconsistencies in the excess variance estimation
when using intervals with a fixed length in the observer's frame. Therefore, sources with redshift greater than 0.07 (i.e.\, the maximum
redshift for the sources in the reverberation sample; see Section \ref{revsample} below for details) are not included in the sample. Our final sample
consists of 473 independent observations of 300 local AGN -- of which 108 are Seyfert-1 and 192 are Seyfert-2 galaxies.
In order to provide a recipe for the \mbh\ estimation, we need to study sources with known BH mass. Thus, we kept sources with BH measurements,
which are based either on the so-called reverberation mapping technique or on the use of good-quality, host galaxy velocity dispersion measurements,
as we explain below.
\subsection{The reverberation sample}\label{revsample}
Reverberation mapping \citep[e.g.][]{blandford1982} is one of the most direct ways to obtain BH masses in AGNs. It provides an estimate of the size of the
BLR from the lag between the variability in the photo-ionising continuum and the broad emission lines. On this basis, it is possible to infer a virial mass for the SMBH in the central region of the AGN.
We cross-correlated the original sample with the sources included in the AGN BH mass database \citep{massdb} to obtain a sample with BH mass measurements
based on reverberation mapping technique (hereafter, the 'rev sample').
We adopt the weighted average of the individual BH mass estimates determined from all the emission lines, assuming that $\langle f \rangle = 4.3$, as suggested by \citet{grier2013}.
The rev sample consists of 26 AGN with 86 \nustar observations.
The average duration per \nustar observation in the rev sample is $\sim80$ ks.
The rev sample log is presented in Table \ref{rev_sample}.
\subsection{The velocity dispersion sample}\label{vdsample}
Reverberation mapping data are currently available only for a small number of AGN, but the empirical scaling relation between black hole mass and host-galaxy properties, namely, the \mbh\ -- $\rm \sigma_{\ast}$ relation \citep[e.g.][]{grier2013} offers an alternative way to obtain BH mass estimates. To this end, we used the central velocity dispersion measurements ($\sigma_{\ast}$) presented in the first catalogue of the Swift-BAT AGN Spectroscopic Survey \citep[BASS,][]{koss2017}. These authors provided measurements of $\sigma_{\ast}$ for many AGN in the 70-month Swift BAT all-sky catalogue. We considered only secure velocity dispersion measurements by selecting the data with spectral fitting quality flag value of 1 or 2. Dual AGN systems, as reported in \citet{koss2012} are excluded from the analysis. There are 84 sources fulfilling these criteria, with 111 \nustar observations.
We also searched the Hyperleda database \citep{paturel2003} to obtain additional central velocity dispersion measurements and increase our sample.
We found $\rm \sigma_{\ast}$ measurements for 24 new sources (i.e.\ without $\rm \sigma_{\ast}$ measurements in the BASS survey sample) with 49 \nustar observations. Before merging the targets from both the BASS project and the Hyperleda database, we checked whether there are any systematic differences in the velocity dispersion estimates between the two samples. We then identified the common sources, that is, sources with $\rm \sigma_{\ast}$ measurements in both the Hyperleda and the BASS databases, and compared the corresponding $\rm \sigma_{\ast}$ values. There are 15 common sources in these samples. Figure\, \ref{hypervskoss} shows a plot of $\rm \sigma_{\ast, {\rm Hyperleda}}$ vs $\rm \sigma_{\ast, BASS}$. The plot clearly shows that the BASS and the Hyperleda $\rm \sigma_{\ast}$ measurements are very well correlated. There are no large amplitude or systematic deviations from the one-to-one relation and, therefore, we chose to merge the two samples.
In the last step, we cross-correlated the data with \cite{ricci2017} in order to obtain information on the X-ray column density ($\rm N_H$) of each source. We excluded from the sample the Compton thick (CT) sources, namely, the sources with $\rm N_H$>$10^{24}$ $\rm cm^{-2}$. The final velocity dispersion sample (hereafter, the VD sample) is comprised of 84 AGN with reliable measurements of $\rm \sigma_{\ast}$ and 107 \nustar observations. The VD sample log is presented in Table \ref{vd_sample}.
For each source in the VD sample, we can estimate the BH mass using the \mbh\ -- $\rm \sigma_{\ast}$ relation of \citet{woo2013}:
\begin{equation}
\log M_{BH} = 8.37 + 5.31 \cdot \log \left(\frac{\sigma_\ast}{200~\mathrm{km~s^{-1}}} \right)
.\end{equation}
\noindent As shown by \citet{grier2013}, BH mass estimates using Eq.\,1 are fully compatible with the masses obtained from the reverberation technique, when assuming a mean virial factor $\langle f \rangle$ of 4.3. The mass estimates for the VD sample are also listed in Table \ref{vd_sample}.
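Eq.\,1 is straightforward to evaluate; a minimal sketch (function name is my own):

```python
import math

def log_mbh_from_sigma(sigma_star_km_s):
    """Eq. (1) of the text (Woo et al. 2013):
    log M_BH = 8.37 + 5.31 * log(sigma_* / 200 km/s)."""
    return 8.37 + 5.31 * math.log10(sigma_star_km_s / 200.0)

# By construction, sigma_* = 200 km/s gives log M_BH = 8.37,
# i.e. M_BH ~ 2.3e8 solar masses.
```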
\subsection{The Compton-thick sample}\label{ctsample}
The sources that have been excluded from the VD sample, due to being CT in nature, make up our CT sample. There are in total 24 CT sources with 53 \nustar observations available. These sources are treated separately since their continuum emission, particularly in the 3-10 keV band is severely suppressed, due to heavy absorption, and their spectrum may be dominated by reflected emission \citep[e.g.][]{balok2014}. The log of the CT sample is presented in Table \ref{ct_sample}.
\section{X-ray data reduction}\label{data_reduction}
The observations were processed with the \nustar data analysis pipeline
({\sc nupipeline}) in order to produce cleaned, calibrated event list files.
For this standard pipeline processing, we used \nustar data analysis software
{\sc NuSTARDAS} v2.0.0 and CALDB version 20170222\footnote{$\rm https://heasarc.gsfc.nasa.gov/docs/nustar/analysis/nustar\_swguide.pdf$}.
From the cleaned event files,
we extracted source and background light curves for each of the two \nustar focal
plane modules (FPMA \& FPMB) using the {\sc nuproducts} script.
We adopted a source radius of 60 arcsec for the source light curve extraction, for both FPMA \& FPMB observations and applied the live-time, point spread function (PSF), and vignetting corrections. The background light curves are extracted from a four times larger (120 arcsec radius) source-free region of the image at an off-axis angle similar to the source position. We extracted light curves in the 3–10 and 10–20 keV bands, using a bin size of 250 s.
The light curves obtained for the FPMA and FPMB modules were first background subtracted and then summed using the {\sc LCMATH} tool within {\sc FTOOLS}. For the subtraction of the background from the source light curves, the proper scaling factors, accounting for the differences in extraction radius for the source and background light curves, have been taken into account. The combined error in the count rate during subtraction or addition of the light curves is always calculated as $err=\sqrt{err1^2 + err2^2}$, where err1 and err2 are the uncertainties in the count rate in each light curve bin obtained during the \nustar data reduction process.
\section{The normalised excess variance}\label{calc_nxv}
To measure the variability power of the sources in our sample we compute the normalised excess variance \cite[e.g.][]{nandra1997} using:
\begin{equation}
\sigma_{NXV}^{2}=\frac{1}{N\mu^2} \sum_{i=1}^{N} \left[ \left( X_i - \mu \right)^2 -\sigma_i^2 \right],
\label{snvx}
\end{equation}
\noindent where $N$ is the number of bins in the light curve, $\rm X_i$ and $\rm \sigma_i$ are the count rates and their uncertainties, respectively, and $\mu$ is the unweighted mean count rate.
The $\sigma_{NXV}^{2}$ has been measured using light curve segments with a duration of $\rm \Delta t=10$, 20, 40, and 80 ks. When more than one valid segment is available for a source (for a given timescale, $\rm \Delta t$), we compute the mean of the individual excess variances. This approach should reduce the uncertainty due to the stochastic nature of the X-ray variations \citep[see e.g.][]{allevato2013}. Following \citet{ponti2012}, all time bins in the light curve segments with fractional exposure lower than 0.35 have been excluded. We note that due to the restriction of our analysis to the local universe (the median redshift of the sample equals 0.02 and the maximum redshift equals 0.07), the impact of the source redshift on the estimation of $\sigma^2_{NXV}$ in fixed length intervals is negligible and it is also very similar for all the samples.
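A minimal sketch of this estimator, including the averaging over segments described above (variable names are my own):

```python
def excess_variance(rates, errors):
    """Normalised excess variance of one light-curve segment:
    sigma_NXV^2 = (1 / (N mu^2)) sum_i [(X_i - mu)^2 - sigma_i^2],
    with mu the unweighted mean count rate (Nandra et al. 1997)."""
    n = len(rates)
    mu = sum(rates) / n  # unweighted mean count rate
    s = sum((x - mu) ** 2 - e ** 2 for x, e in zip(rates, errors))
    return s / (n * mu ** 2)

def mean_excess_variance(segments):
    """Average of the per-segment variances, as used when more than
    one valid segment is available for a source."""
    values = [excess_variance(r, e) for r, e in segments]
    return sum(values) / len(values)
```

Note that when the measurement errors account for all of the observed scatter, the estimator returns zero (or a negative value), which is why only positive \snxv\ measurements are retained in the analysis below.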
\begin{table*}
\caption{Best-fit results for the rev and VD $M_{BH}$ vs $\sigma_{NXV}^2$ relations} %
\label{fitting1} %
\centering %
\begin{tabular}{c c c c c c c c c } %
\hline %
& \multicolumn{4}{c}{3-10 keV} & \multicolumn{4}{c}{10-20 keV} \\
& \multicolumn{2}{c}{Rev} & \multicolumn{2}{c}{VD} & \multicolumn{2}{c}{Rev} & \multicolumn{2}{c}{VD} \\
\hline
segment & slope & intercept & slope & intercept & slope & intercept & slope & intercept\\ %
\hline \hline %
10 ks &-0.76$\pm$0.12 & 6.78$\pm$0.07 & 0.09 $\pm$0.18 & 7.95 $\pm$0.13 & -0.91 $\pm$0.13 & 6.71$\pm$0.09 & 0.33$\pm$0.16 & 7.96$\pm$0.10 \\
20 ks &-0.91$\pm$0.09 & 7.01$\pm$0.06 &-0.05 $\pm$0.13 & 7.80 $\pm$0.11 & -0.96 $\pm$0.14 & 6.94$\pm$0.11 & 0.23$\pm$0.16 & 7.84$\pm$0.09 \\
40 ks &-0.78$\pm$0.15 & 7.19$\pm$0.08 &-0.40 $\pm$0.15 & 7.59 $\pm$0.10 & -1.16 $\pm$0.15 & 7.25$\pm$0.09 &-0.10$\pm$0.21 & 7.56$\pm$0.11 \\
80 ks &-0.90$\pm$0.09 & 7.36$\pm$0.07 &-1.00 $\pm$0.05 & 7.42 $\pm$0.07 & -1.13 $\pm$0.13 & 7.36$\pm$0.08 &-0.96$\pm$0.21 & 7.39$\pm$0.08 \\
\hline %
\end{tabular}
\end{table*}
\section{Measuring \mbh\ using $\sigma^2_{NXV}$}
Previous studies have explored the dependence of the normalised excess variance on the BH mass, ultimately suggesting a linear correlation in the log-log space between the two quantities.
We aim to establish a prescription to estimate BH masses given measurements of the normalised excess variance from X--ray data; therefore, we chose to establish the reverse relation, that is, {\mbh} versus {\snxv}. Our objective is to investigate the requirements for the light curves that will be used to compute {\snxv},
so that the resulting BH mass estimates will be as unbiased as possible and of known variance.
Instead of applying, a priori, a certain cut to the S/N value, we simply consider only the sources where the AGN variability is well detected, namely, where the computed {\snxv} is positive (either the mean or a single value, if there is only one light curve segment available). In this way, observations where the S/N is low or the intrinsic variability amplitude is weak were excluded from further analysis. We then carried out a separate study of the {\mbh} versus {\snxv} relation for the rev and VD samples on four timescales ($\rm \Delta t=10$, 20, 40, and 80 ks) and investigated how the {\mbh} versus {\snxv} relation changes with respect to the total duration of the light curves used to compute \snxv. In this way (as we show in the following), we are able to establish the minimum requirements for the light curves (i.e. the necessary number of segments and S/N values) in order to obtain a reliable \mbh\ estimate.
\subsection{\mbh\ -- \snxv\ relation for the rev and VD samples}\label{comparison_section}
The four panels of Fig. \ref{comparison1} show the {\mbh} vs {\snxv} plots for the rev and the VD samples (filled-blue circles and open-red squares respectively) in the 3--10 keV band, when using segments with durations of 10, 20, 40 \& 80 ks. The four panels of Fig. \ref{comparison2} present the same results in the 10--20 keV band.
A strong anti-correlation between \mbh\ and \snxv\ is observed in all timescales for the rev sample. Smaller BH masses show the largest variability amplitude, as has been observed many times in the past. However, this is not the case for the VD sample, especially at timescales shorter than 40 ks (top panels in both figures).
We fit a straight line to each plot, in log-log scale, of the following form,
\begin{equation}
{\rm
\log(M_{BH})= \alpha \cdot \log(\sigma_{NXV,0.005}^2) + \beta,}
\label{linefit}
\end{equation}
\noindent
to model the observed behaviour. The \snxv\ values have been normalised to the value of 0.005 in order to minimise the error
on the line parameters, $\alpha$ and $\beta$. Since our objective is to predict {\mbh} from {\snxv} we used the ordinary least-squares
regression of Y on X, OLS(Y|X), where Y is the variable to be predicted (i.e.\, $\rm M_{BH}$) and X is the measured variable
($\rm \sigma_{NXV}^2$), following the prescription of \citet{isobe1990}.
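The fit described above can be sketched as an ordinary least-squares regression of $\log M_{BH}$ on the normalised log-variance; this is a schematic stand-in, under stated assumptions, for the \citet{isobe1990} prescription, not their code:

```python
import math

def ols_y_on_x(x, y):
    """OLS(Y|X): minimise residuals in Y, the predicted variable."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def fit_mbh_relation(nxv, log_mbh, norm=0.005):
    """Fit Eq. (3): log M_BH = alpha * log(sigma_NXV^2 / 0.005) + beta.
    Normalising by 0.005 reduces the errors on alpha and beta."""
    x = [math.log10(v / norm) for v in nxv]
    return ols_y_on_x(x, log_mbh)
```

Once $\alpha$ and $\beta$ are in hand, a measured \snxv\ translates directly into a BH mass estimate via Eq.\,\ref{linefit}.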
Table \ref{fitting1} lists the best-fit results and the corresponding 1$\rm \sigma$ uncertainties to the {\mbh}--{\snxv} plots, both for the rev and the VD samples, at all timescales, in both bands. The best-fit slope for the rev sample remains roughly constant, within the errors, in both bands, while the intercept constantly increases as we move to higher timescales. This is fully consistent with the expectations, as the variance provides a measure of the (intrinsic) PSD integral from the shortest to the longest sampled timescale. In the case of red-noise PSDs, this integral should increase with increasing segment duration, hence, the positive correlation of the best-fit line normalisation with the segment duration ($\rm \Delta t$).
The VD data points shown in Figs.\,\ref{comparison1} and \ref{comparison2} and the best-fit results show that the situation is more complicated in the VD sample.
There is almost a complete lack of correlation between {\mbh} and {\snxv} on the shortest timescales ($\rm \Delta t=10$ \& 20 ks), in both energy bands. The situation improves progressively as we move to longer timescales where the intrinsic variability is higher. A clear and strong anti-correlation between \mbh\ and \snxv\ is present only in the $\rm \Delta t=80$ ks plots. In fact, the best-fit results to the rev and VD plots are remarkably similar in the case of $\rm \Delta t=80$ ks.
The inconsistency of the {\mbh} versus {\snxv} relations between the rev and the VD samples at $\rm \Delta t\lesssim 40$ ks cannot be attributed to any differences in the accuracy of the BH mass measurements between the two cases. After all, the rev BH mass measurements are based on relations that are 'forced' to agree with the $\rm M_{BH}-\sigma_{\ast}$ relation. Therefore, the difference between the {\mbh} versus {\snxv} plots for the rev and VD samples when the light curve segments are short is expected to be related to limitations in accurately measuring the X-ray variability of the sources in the VD sample. In the following section, we quantify this limitation and present a prescription for the use of $\rm \sigma_{NXV}^2$ in the BH mass determination.
\subsection{Significance of the observation duration}\label{criteria}\label{choosesamples}
The main difference between the rev and VD samples is the average number of observations per target and their duration. There are, on average, 3.3 observations per source in the rev sample, with a mean duration of 80 ks each. On the contrary, there are only 1.3 observations per target in the VD sample, with a mean duration of 60 ks each. In order to quantify the impact of the available exposure time of a source on the estimation of $\sigma_{NXV}^2$, we performed the following experiment.
For each timescale, we progressively increased the minimum number of light curve segments required in order to keep a target in the sample. Effectively, this leads to smaller and smaller sub-samples, with an increased average duration for the observations.
For example, when we request a minimum of two 20 ks segments for the estimation of \snxv, only sources with one observation of at least $\rm \Delta t=40$ ks, or two observations of at least 20 ks each, are considered. When, for instance, we increase the minimum number of 20 ks segments to five, only sources with at least one $\rm \Delta t=100$ ks observation, or five observations of
20 ks each, are considered. We repeated this exercise, namely, increasing the minimum number of segments, for both the rev and the VD samples, until fewer than six sources were left in the sample, at which point we stopped.
For each new sub-sample, we again fit a straight line to obtain the best fit slope and intercept of the $\rm M_{BH}$ versus $\sigma_{NXV}^2$ relation.
Figures \ref{duration1} and \ref{duration2} show the resulting best-fit parameters plotted as a function of the average duration of the observations for each sub-sample.
The numbers next to each point indicate the number of sources left in each sub-sample after increasing the minimum number of segments.
The best-fit parameters remain roughly constant for the rev sample, irrespective of the number of segments required to compute \snxv. An increase in that number does not affect the fitting results. However, this is not the case with the VD sample. The VD sample includes many sources with observations with a significantly shorter average duration than the average duration of the rev sample. The best-fit line parameters change
as we increase the number of segments we use to compute \snxv. When the average duration of the observations in the VD sub-sample is similar to that of the rev sample, then the best-fit slopes and intercepts become consistent.
We also note, as explained in \citet{isobe1990}, that the best-fit parameter errors may be underestimated for the smaller-sized samples (i.e.\ for the best-fit parameters determined by fitting the objects with the longest observations). This actually makes the agreement between the rev and VD best-fit parameters even more impressive. It also shows that the systematic bias present in the best-fit parameters when we consider all the VD sample sources is much greater than the statistical uncertainty of the best-fit parameters.
The black circles in Figs.\, \ref{duration1} and \ref{duration2} indicate the first VD and rev sub-samples (i.e. those with a shorter average duration and, therefore, a larger size) for which the best-fit line slope and intercept are consistent (within errors). For all subsequent sub-samples, the best-fit results remain roughly constant.
Our results indicate that an average of $\sim250$ ks in the duration of the observations per source is necessary for the best-fit results, from both samples, to agree when using 10 ks segments. This number decreases to $\sim180-200$ ks when the estimation of the $\sigma_{NXV}^2$ is based on 40 ks and 80 ks segments. This is probably due to the fact that as the segment duration increases, the intrinsic variance increases as well; therefore, it is easier (for a given S/N) to measure it accurately using slightly shorter average duration of observations. The results are similar for both the 3-10 and the 10-20 keV bands.
\section{A prescription for measuring \mbh\ in Seyfert galaxies}
Figures \ref{merged1} and \ref{merged2} show the {\mbh} versus {\snxv} data for the largest rev and VD samples, where the best-fit line slope and intercept are consistent with each other (see the encircled points in Figs. \ref{duration1} and \ref{duration2}).
The blue-dotted and red-dashed lines indicate these best-fit models to the rev and VD samples, respectively (for the same $\rm \Delta t$ value and energy band).
Our rev sample comprises Type I AGN only, since the mass estimates are obtained with the reverberation method. On the other hand, the VD sample, where the mass is estimated using stellar velocity dispersion measurements from high-quality spectra, contains both Type I and Type II AGN. We note, however, that we do not expect any systematic difference in the {\mbh} -- {\snxv} relation between Type I and Type II AGN. For example, \citet{rani2017} presented results on the flux variability on hourly timescales for a large sample of AGN observed with {\nustar} and found no difference in the variability behaviour between Seyfert 1 and Seyfert 2 galaxies in any X-ray band.
To further explore this notion, in Figures \ref{merged1} and \ref{merged2} we separately plot Type I AGN from the rev sample (filled-blue circles), Type I AGN from the VD sample (open-red squares), and Type II AGN from the VD sample (filled-red squares). The small number of sources in the individual VD sub-samples does not allow for a separate line-fitting analysis. However, the plots clearly show that there are no systematic differences or inconsistencies between the different populations. We therefore chose to combine the samples and fit all the data simultaneously.
The solid green lines in Figs.\, \ref{merged1} and \ref{merged2} show the best-fit lines obtained from the combined (rev+VD) sample. The best-fit results are listed in Table \ref{bestsample} (columns 5 and 6 for the 3--10 keV band, columns 11 and 12 for the 10--20 keV band).
We propose that these best-fit line models be used to measure the BH mass in Seyfert galaxies.
Columns 2 and 8 in the same table list the total number of sources in the combined (rev+VD) sample for each band. The samples are smaller in the 10--20 keV band, mainly because the S/N of the light curves in this band is significantly smaller; hence, there are fewer objects with positive excess variance measurements. Columns 3 and 9 list the average duration of the observations ($\rm \Delta t_a$), as well as the average ($N_A$) and the minimum ($N_m$) number of segments for the objects in each sample, rounded down to the nearest integer. Here,
$N_{m}$ corresponds to the number of segments of the target with the shortest duration for the observations in the combined sample. Columns 4 and 10 list
the average and the minimum S/N of the light curves that we used to compute the excess variance in the combined sample. Lastly, columns 7 and 13 list the average scatter of the points around the best-fit line.
\begin{table*}
\setlength{\tabcolsep}{4pt}
\caption{Best-fit results of the $\rm M_{BH}$ vs $\rm \sigma_{NXV}^2$ relation for the combined rev and VD samples.}
\label{bestsample}
\centering
\begin{tabular}{c c c c c c c c c c c c c}
\hline
& \multicolumn{6}{c}{Rev+VD (3--10 keV)} & \multicolumn{6}{c}{Rev+VD (10--20 keV)} \\
\hline
$\rm \Delta t^1$ &
N$_{\rm s}^2$ &
$\rm \Delta t$$_{\rm a}$/N$_{\rm a}$/N$_{\rm m}^3$ &
$\rm S/N_{a/m}^4$ &
${\rm \alpha}^5$ & $\beta^6$ &
$\sigma_{sc}^7$ &
N$_{\rm s}$ &
$\rm \Delta t$$_{\rm a}$/N$_{\rm a}$/N$_{\rm m}$ &
$\rm S/N_{a/m}$ & ${\rm \alpha}$ & ${\rm \beta}$ & $\sigma_{sc}$ \\
\hline \hline
10 & 32 & 240/24/10 & 13.6/4.1 & -0.74$\pm$0.10 & 6.78$\pm$0.06 & 0.42 & 21 & 250/25/11 & 8.5/2.5 & -0.91$\pm$0.11 & 6.73$\pm$0.09 & 0.35 \\
20 & 33 & 230/11/5 & 13.3/4.2 & -0.87$\pm$0.08 & 7.00$\pm$0.05 & 0.32 & 28 & 210/10/4 & 8.5/2.5 & -0.98$\pm$0.11 & 6.99$\pm$0.07 & 0.38 \\
40 & 30 & 230/5/2 & 13.6/4.7 & -0.73$\pm$0.13 & 7.24$\pm$0.06 & 0.36 & 27 & 230/5/2 & 8.3/2.9 & -1.14$\pm$0.12 & 7.22$\pm$0.06 & 0.29 \\
80 & 27 & 200/2/1 & 13.7/3.3 & -0.94$\pm$0.06 & 7.41$\pm$0.05 & 0.26 & 19 & 200/2/1 & 8.9/2.9 & -1.08$\pm$0.12 & 7.38$\pm$0.05 & 0.23 \\
\hline %
\multicolumn{13}{l}{\small $^1$ Segment duration in ks.} \\
\multicolumn{13}{l}{\small $^2$ Number of sources in the combined rev+VD samples.} \\
\multicolumn{13}{l}{\small $^3$ Average duration in ks/average number of segments/minimum number of segments.} \\
\multicolumn{13}{l}{\small $^4$ Average signal-to-noise ratio/minimum signal-to-noise ratio.} \\
\multicolumn{13}{l}{\small $^5$ Best-fit slope. $^6$ Best-fit intercept. $^7$ Average scatter of the points around the best fit.} \\
\end{tabular}
\end{table*}
\subsection{The prescription}
\label{prescription}
Based on the results presented in the previous section, we propose the following prescription for measuring the BH mass in AGN from excess variance measurements.
First, the light curve(s) should be divided into a number of segments, the average \snxv\ should be computed, and then Equation \ref{linefit} can be used to derive \mbh, with $\alpha$ and $\beta$ listed in Table \ref{bestsample}, depending on the duration of the segment that was chosen and the energy band.
Second, the number of light curve segments that will be used to compute the excess variance should ideally be comparable to the average number of segments listed in Table \ref{bestsample} and (at least) larger than the minimum number of segments listed in the same table. Since the scatter of the points around the best-fit lines decreases with increasing segment duration, we propose the use of the longest possible segment to estimate the BH mass.
Finally, the average S/N of the light curves should be comparable to the average S/N and (at least) greater than the minimum S/N listed in Table \ref{bestsample} (see columns 4 and 10 in Table \ref{bestsample}). The smallest S/N values suggest that S/N ratios at least equal to $\sim$3 are necessary to achieve a reliable estimate of the BH mass.
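As an illustration, the steps above can be sketched in Python. This is a schematic only, not the analysis code used in this work: it assumes the standard definition of the normalised excess variance (sample variance minus the mean square measurement error, divided by the squared mean) and a generic linear relation $\log M_{\rm BH} = \alpha\,\log_{10}\sigma_{NXV}^2 + \beta$; the exact normalisation of Equation \ref{linefit} and the $(\alpha,\beta)$ values for the chosen segment duration and energy band must be taken from the text and Table \ref{bestsample}.

```python
import numpy as np

def normalized_excess_variance(rate, rate_err):
    """Normalised excess variance of one light-curve segment:
    (sample variance - mean square measurement error) / mean^2."""
    mu = rate.mean()
    s2 = rate.var(ddof=1)        # sample variance of the count rates
    mse = np.mean(rate_err**2)   # mean square measurement error
    return (s2 - mse) / mu**2

def log_mbh_estimate(sigma_nxv2_segments, alpha, beta):
    """Average sigma_NXV^2 over segments, then apply the schematic
    linear relation log(M_BH) = alpha*log10(sigma_NXV^2) + beta."""
    mean_nxv = np.mean(sigma_nxv2_segments)
    if mean_nxv <= 0:
        return None              # negative excess variance: no estimate
    return alpha * np.log10(mean_nxv) + beta
```

In practice one would loop over all segments of all observations of a source, keep only light curves meeting the S/N criteria above, and average the per-segment values before applying the relation.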
\subsection{Compton-thick sources}
We used the prescription above to measure the BH mass of the CT sources, applying the same minimum duration and S/N criteria for the measurement of $\sigma_{NXV}^2$ presented above. Due to the small size of the CT sample and the reduced photon statistics caused by absorption, we were able to measure $\sigma_{NXV}^2$ in only four sources when using the 80 ks segments. The number of sources with positive \snxv\ measurements on shorter timescales is very small (only one or two).
Black cross-marks in Fig. \ref{ct} show \mbh\ versus $\rm \sigma_{NXV}^2$ for the CT sources in the 3--10 keV (upper panel) and the 10--20 keV (lower panel) bands, for the 80 ks segments.
The solid green line shows the best-fit relation for the combined rev+VD sources. An inspection of Fig. \ref{ct} suggests that the
$\sigma_{NXV}^2$ measurements in the 10--20 keV band are in good agreement with the model. Although the number of CT sources is small, this result suggests that we could use the variability method to measure the BH mass in CT sources when using data in the 10--20 keV band.
On the other hand, the scatter of the CT points around the best-fit line in the 3--10 keV band is significantly greater. The CT data in the top panel of Fig.\, \ref{ct} may even indicate a lack of correlation between BH mass and variability for the CT sources in this band. This could be because, in CT sources, most of the observed flux in the 3--10 keV band may be due to scattered radiation rather than the primary continuum, although the limited number of CT measurements does not allow us to reach a firm conclusion in this regard.
\section{Application of the proposed prescription}
We applied the proposed prescription to obtain BH mass estimates for AGN that are not part of the rev, VD, or CT samples and that have \nustar\ archival observations fulfilling the minimum number of segments and S/N criteria (as presented in \S \ref{prescription}). There are 22 such sources, listed in Table \ref{mass_estimates}; they constitute our 'prediction' sample (prd sample, hereafter). The observation log of these sources is presented in Table \ref{prediction_sample}. We followed the same steps as detailed in Sections \ref{data_reduction} and \ref{calc_nxv} to compute $\rm \sigma_{NXV}^2$ in the 3--10 and the 10--20 keV bands, using $\rm \Delta t=10$, 20, 40, and 80 ks segments. We then used the best-fit models listed in Table \ref{bestsample} to estimate the BH mass for each source. Our BH mass predictions are listed in Table \ref{mass_estimates}.
There are two sources (i.e. ESO511-30 and 2MASXJ19301380) for which the excess variance is negative at all timescales and in both energy bands. This is despite the fact that the light curves have a S/N greater than 3 in both bands and are relatively long, especially in the case of ESO511-30 (BAT ID=719), where the observation is $\sim$320 ks long. Since all the sources in the rev+VD sample have a positive excess variance measurement and their BH mass is (mostly) smaller than 10$^8$M$_{\odot}$, we suspect that the mass of the BH in these objects is significantly larger than 10$^8$ M$_{\odot}$.
Within each energy band, the BH mass estimates from the various light curve segments agree with each other within $1-2\sigma$ (where $\sigma$ is equal to the average scatter of the points around the best-fit line, as listed in Table \ref{bestsample}). This is not surprising, as the various light curve segments are all part of the same observations for each source.
The 3--10 keV and the 10--20 keV band BH mass estimates agree within $1-2\sigma$, at all timescales, in all sources except IRAS09149-6206. In this case, we are able to measure the BH mass using segments with durations of 10, 20, 40, and 80 ks in the 3--10 keV band, with the log(\mbh) estimates ranging between 7.35 and 7.55.
On the other hand, we cannot measure the BH mass using 10 and 20 ks long segments in the 10--20 keV band, because the resulting excess variance is negative. Moreover, the 10--20 keV excess variance using the 40 and 80 ks segments implies a much larger BH mass. The reason for this discrepancy is that a (relatively) large amplitude variation appears in one of the 3--10 keV band light curves, while it is absent in the respective 10--20 keV light curves (see also \citealt{walton2020}).
For two more sources (i.e. Pictor A and HE1143-1810), the excess variance is positive in the 3--10 keV band, but negative in the 10--20 keV band. There are low-amplitude variations in the 3--10 keV
band light curves, which do not appear in the 10--20 keV light curves. In both cases, log(\mbh) is greater than $\sim 7.6-7.7$. It is possible that the variability detection in the 3--10 keV band only is a statistical effect: due to the high BH mass, it just might happen that the variability is detected only in the 3--10 keV band, where the S/N is greater anyway. Another possibility may be that these sources are dominated by the reflection component at energies above 10 keV; hence, the continuum variations are better sampled at lower energies.
There are also two cases (i.e. 3C111 and Mrk 926) where $\rm \sigma_{NXV}^2$ was measured only in the 10--20 keV band. Mrk 926 is the source with the largest BH mass estimate in the prd sample. We believe that the detection of variability in the 10--20 keV band but not in the 3--10 keV band is purely statistical in nature: it just happens that we measure a positive excess variance in this source at energies above 10 keV, and this is probably the case for 3C111 as well.
Finally, we note that BH mass estimates should always be treated with care. For example, NGC6240 is a galaxy known to harbour two active nuclei, each with a mass in excess of $9\times 10^7$ M$_{\odot}$ \citep[e.g.][]{kollatschny2020b}. Such galaxies are excluded from both the rev and the VD samples. We estimate a log(\mbh) of 7.5--8.2 (for $\rm \Delta t=40$ ks, in the 10--20 and 3--10 keV bands, respectively), in agreement with the previous estimates. However, we have not explored how the presence of two active nuclei may affect the accuracy of our method.
\begin{table*}
\caption{Black hole mass estimates for the prediction sample}
\label{mass_estimates}
\centering
\begin{tabular}{c c c c c c c c c c }
\hline
BAT ID & Name & \multicolumn{8}{c}{Log(M/M$_{\odot}$)} \\
\hline
& & \multicolumn{2}{c}{80 ks} & \multicolumn{2}{c}{40 ks} & \multicolumn{2}{c}{20 ks} & \multicolumn{2}{c}{10 ks} \\
& & 3--10 keV & 10--20 keV & 3--10 keV & 10--20 keV & 3--10 keV & 10--20 keV & 3--10 keV & 10--20 keV \\
\hline
\hline
163 & NGC1194 & 7.2 & 6.7 & 7.07 & 6.5 & -- & 6.6 & -- & 6.44\\
184 & NGC1365 & 6.01 & 6.16 & 6.39 & 6.22 & 6.19 & 6.27 & 6.35 & 6.33\\
214 & 3C111 & -- & 7.7 & -- & 7.56 & -- & 7.39 & -- & 7.25\\
229 & HE0436-4717 & 6.67 & 6.9 & 6.72 & -- & 6.94 & -- & 6.87 & --\\
270 & PictorA & 7.78 & -- & 7.68 & -- & 7.7 & -- & 7.59 & --\\
447 & IRAS09149-6206 & 7.54 & 8.54 & 7.35 & 8.37 & 7.42 & -- & 7.53 & --\\
472 & MCG-5-23-16 & 7.3 & 7.34 & 7.16 & 7.24 & 7.03 & 7.19 & 6.95 & 7.05\\
567 & HE1143-1810 & -- & -- & 7.63 & -- & 7.83 & -- & -- & --\\
657 & ESO323-77 & 7.34 & 7.42 & 7.55 & 7.65 & 7.12 & 7.17 & 7.24 & 7.18\\
670 & MCG-3-34-64 & 6.56 & 6.24 & 6.84 & 6.39 & 6.62 & 6.5 & 7.06 & 6.66\\
692 & 4U1344-60 & 7.07 & 7.05 & 7.28 & 7.32 & 7.19 & 7.19 & 7.05 & 6.98\\
719 & ESO511-30 & -- & -- & -- & -- & -- & -- & -- & --\\
750 & LEDA3076910 & 6.61 & 6.3 & 6.73 & 6.31 & 6.7 & 6.52 & 6.69 & 6.45\\
837 & ESO138-1 & -- & -- & -- & -- & -- & -- & -- & --\\
841 & NGC6240 & -- & -- & 7.99 & 7.7 & 8.22 & 7.52 & -- & 7.96\\
995 & Fairall51 & 6.7 & 6.83 & 6.83 & 6.77 & 6.88 & 6.92 & 6.8 & 6.82\\
1032 & ESO141-55 & 7.14 & 7.39 & 7.47 & -- & 7.56 & -- & 7.51 & --\\
1040 & 2MASXJ19301380 & -- & -- & -- & -- & -- & -- & -- & --\\
1111 & IGRJ21277+5656 & 6.85 & 6.82 & 6.95 & 6.83 & 6.74 & 6.74 & 6.62 & 6.56\\
1172 & MR2251-178 & -- & -- & 8.22 & 8.36 & 8.1 & 8.41 & 7.94 & --\\
1183 & Mrk926 & -- & 8.32 & -- & 8.59 & -- & 8.5 & -- & --\\
1194 & IRAS23226-3843 & 7.79 & 8.52 & 7.61 & -- & 7.83 & -- & -- & --\\
\hline
\end{tabular}
\end{table*}
\section{Summary \& guidelines}
Here, we present a prescription for measuring the mass of the central BH in AGN using excess variance measurements. We emphasise that we did not study the variability versus BH mass relation in AGN, as this has been widely considered in previous studies. Our objective is to select a well-defined sample of sources with available BH mass measurements and then to identify the minimum duration and S/N of the light curves that are necessary for the \mbh\ versus \snxv\ relation to be well defined, that is, for the scatter of the points around the best-fit lines to be similar to the average uncertainty of BH mass estimates computed with other methods. If the X-ray variability mechanism is the same for all AGN, we can measure the BH mass in any AGN following the prescription we outlined in \S \ref{prescription}, as long as the duration and S/N of the available light curves satisfy the 'quality' criteria discussed there.
We emphasise that the prescription to measure BH masses in AGN that we present in this work is only applicable to light curves with a bin size of 250 sec that can be divided into segments of 80, 40, 20, and 10 ks. Such light curves can be obtained from observations taken by \nustar\ and {\it XMM-Newton}, as well as from past X-ray observatories, such as {\it Suzaku} and {\it ASCA}. There are many AGN observations in the {\it RXTE} and {\it Swift}/XRT archives; however, their mean bin size is significantly larger than 250 sec and, hence, they cannot be used to estimate BH mass with the prescription in Section \ref{prescription}. Nevertheless, there are already quite a few AGN whose \nustar\ and {\it XMM-Newton} archival light curves meet the criteria described in Section \ref{prescription}. We have already applied our prescription to archival light curves of 22 AGN with \nustar\ light curves that fulfill the necessary criteria and computed BH mass estimates for them. In the near future, we plan to search the {\it XMM-Newton} archive for AGN with light curves that satisfy the necessary criteria and estimate their BH mass.
The average scatter of the rev+VD sources around the best-fit line, $\bar{\sigma}_{SC}$, is 0.25, 0.35, and 0.4, for the segments with $\rm \Delta t$=80 ks, 40 and 20 ks, and 10 ks, respectively (these numbers are equal to the mean of $\sigma_{SC,3-10 {\rm keV}}$ and $\sigma_{SC,10-20 {\rm keV}}$ values, listed in Table \ref{bestsample}). On average, the error of log(\mbh) should be comparable to $\bar{\sigma}_{SC}$. We stress that $\bar{\sigma}_{SC}$ is not the formal error on the BH mass estimate of an individual source. Instead, it should be considered as representative of the expected standard deviation of these estimates. It is possible to compute the error of log(\mbh) for an individual source, but only if the available light curves are long enough (see below).
Our results demonstrate that it is possible to measure the BH mass in AGN using excess variance measurements if the available light curves are long enough and have the necessary S/N values. The resulting BH mass estimates will be as reliable as the estimates which are based on other techniques (e.g. the continuum and emission line reverberation and velocity dispersion measurements).
The 'quality' requirements on the light-curve duration and S/N are quite strong. They are mainly imposed by the statistical properties of \snxv. The normalised excess variance is a statistic that follows a highly asymmetric distribution function with a broad variance \citep[see][]{allevato2013}. We strongly advise against the use of light curves that do not meet the duration and S/N criteria listed in Section \ref{prescription}. If such light curves are used to compute \snxv\ and then log(\mbh) using the best-fit results listed in Table \ref{bestsample}, the resulting estimate may be heavily biased.
Due to the \snxv\ statistical properties, the \mbh\ versus \snxv\ relation may actually depend on the characteristics of the light curves we use (for the same segment duration). For example, the difference between the slopes of the best-fit lines to the VD data plotted in the top left and bottom right panels of Fig.\,\ref{comparison1} is 1.09$\pm 0.19$, which is significant at the 5.7$\sigma$ level. This result clearly shows that it would be inappropriate to use the best-fit results listed in Table \ref{bestsample} to compute the BH mass for sources with low-quality light curves.
It may be worth investigating the relation between BH mass and excess variance for shorter light curves or light curves with lower S/N and to establish a prescription for estimating their \mbh\, values as well. In this way, BH masses could be measured for many more sources. This aspect, however, is beyond the scope of this work and, in any case, we suspect that the resulting estimates will have significantly larger uncertainties, hence, they may be of limited applicability.
Table \ref{bestsample} lists the best-fit parameters of the lines we fitted to the data of the combined (rev+VD) sample, when the excess variance is computed using light curve segments which are $\rm \Delta t=10$, 20, 40 and 80 ks long (Figs. \ref{merged1} and \ref{merged2}). We expect that the BH mass estimates should be independent of the segment duration. However, whenever possible, it would be advisable to compute \mbh\, using all four different light-curve segment durations to verify that there are no unexpected complications with the observations. Since the average error of the BH estimates should decrease with increasing segment duration, we propose to adopt BH mass estimates based on the longest $\rm \Delta t$.
The accuracy of the BH mass estimate will not increase even if the duration of the light curves exceeds that of the longest light curves among the sources in the rev+VD sample. This is because the uncertainty on \mbh\ mainly depends on the uncertainty of the best-fit line parameters, which should be representative of the typical scatter of the points plotted in Figs \ref{merged1} and \ref{merged2}. On the other hand, if there are more than 50 segments with an average S/N greater than 3, then it is possible to compute the error on the mean \snxv\ \citep[see][]{allevato2013}. In this case, an error on the BH mass measurement of the individual source can be computed, using the errors of the best-fit line parameters (listed in Table \ref{bestsample}) and standard error propagation.
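A minimal sketch of that error propagation, assuming the schematic form $\log M_{\rm BH} = \alpha\,\log_{10}\sigma_{NXV}^2 + \beta$ and neglecting the $\alpha$--$\beta$ covariance (which is not tabulated here, so this is an approximation):

```python
import numpy as np

def log_mbh_error(mean_nxv, mean_nxv_err, alpha, alpha_err, beta_err):
    """First-order error on log(M_BH) = alpha*log10(nxv) + beta,
    neglecting the covariance between the best-fit alpha and beta."""
    x = np.log10(mean_nxv)
    x_err = mean_nxv_err / (mean_nxv * np.log(10.0))  # d(log10 v) = dv/(v ln 10)
    return np.sqrt((x * alpha_err)**2 + (alpha * x_err)**2 + beta_err**2)
```

The uncertainties on the mean \snxv\ and on $(\alpha,\beta)$ would be taken from the measured segments and from Table \ref{bestsample}, respectively.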
The best-fit results and the scatter of the rev and VD points around the best-fit lines are comparable for the 3--10 and the 10--20 keV bands (see Figs.\, \ref{merged1} and \ref{merged2}). Black hole mass estimates should be similar when using light curves in both bands to compute the excess variance. Nevertheless, we suggest computing the BH mass in both bands when there are light curves in both bands (i.e. from NuSTAR observations). The 10--20 keV band should be preferred for sources which are heavily absorbed (as our results, from a limited number of CT sources, indicate). On the other hand, the 3--10 keV band should be preferred for sources which may be dominated by the X--ray reflection component at energies above 10 keV.
The \mbh\--\snxv\ relations studied here are well calibrated for sources with BH mass between $10^6$ and $10^8$ M$_{\odot}$. We caution against the use of the proposed method for sources with \mbh$\le 10^6$M$_{\odot}$: in this regime, the relation between \mbh\ and \snxv\ could deviate significantly from the linear form adopted in this paper. On the other hand, the method we present should be valid for AGN with larger BH masses, although the available data sets for such sources may not be sufficient. The best-fit lines in Figs. \ref{merged1} and \ref{merged2} show that the normalised excess variance in sources with BH mass larger than 10$^8$ M$_{\odot}$ should be smaller than $\sim 10^{-3}$, even when $\rm \Delta t=80$ ks (the limit is smaller for shorter segments).
This is a small excess variance limit and light curves with the minimum S/N and duration requirements will probably result in a negative \snxv. It will be necessary to use light curves with a much higher S/N or many light curve segments in order to measure such a low intrinsic excess variance.
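To make this limit concrete, the schematic linear relation can be inverted to give the excess variance expected for a given mass. Any pivot or normalisation in Equation \ref{linefit} would shift the numbers, so the function below is illustrative only; the actual $(\alpha,\beta)$ values are those of Table \ref{bestsample}.

```python
def predicted_nxv(log_mbh, alpha, beta):
    """Invert the schematic relation log(M_BH) = alpha*log10(nxv) + beta
    to obtain the excess variance expected for a source of given mass."""
    return 10.0 ** ((log_mbh - beta) / alpha)
```

Since $\alpha$ is negative, the predicted excess variance drops rapidly with increasing mass, which is why high-mass sources demand light curves of much higher S/N (or many more segments) for a positive \snxv\ measurement.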
The proposed prescription should be valid both for Type I and Type II objects. Although all AGN in the rev sample are Type I, the VD sample contains a mixture of Type I and Type II sources. Our plots show no difference
in the {\mbh} -- {\snxv} relation for these two classes and, moreover,
the best-fit lines to the rev and VD log(\mbh) -- \snxv\, plots are almost identical. Therefore we expect the method to perform equally well for objects in either class.
In this work, we study mainly radio-quiet AGN, although a limited number of objects from the 3C (and one from the 4C) catalogue are included in our sample. Emission from the jet is not expected to be dominant in the bands we study for any of these sources, and we therefore expect the proposed prescription to be valid for these sources as well. In any case, we strongly advise against the application of the proposed prescription to blazars. In this class of sources, in contrast to their radio-quiet counterparts, the X-ray emission could be dominated by radiative processes in the relativistic jet. In agreement with this scenario, \citet{rani2017} found that blazars show much greater X-ray flux variation amplitudes than Seyfert galaxies.
In order to compare or combine BH mass measurements based on the prescription presented here with those derived using different methods, it is necessary to take into account (when relevant) that our estimators are normalised to the \mbh\ -- $\sigma_{\ast}$ relation of \citet{woo2013}. We note that our results apply to AGN in the nearby Universe (i.e. sources with z$\le 0.07$). Our prescription should also be applicable to AGN further away, as long as the rest-frame light curves satisfy the necessary quality criteria. Obviously, the reliability of the BH mass estimates in this case will also depend on the assumption that the AGN variability properties do not change with redshift.
It would be interesting to see whether a similar method can be established for the AGN light curves resulting from the eROSITA sky survey. The method would have to be calibrated appropriately, as the light curve sampling properties will be entirely different from those of the light curves currently delivered by pointed XMM-Newton and \nustar\ observations. It is not certain that it will be possible to measure AGN BH masses using variability measurements with the eROSITA light curves but, given the large number of sources that will be identified, it may be worth investigating this possibility.
\begin{acknowledgements}
We have made use of data from the $\rm NuSTAR$ mission, a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory, and funded by the National Aeronautics and Space Administration. We thank the \nustar Operations, Software and Calibration teams for support with the execution and analysis of these observations. This research has made use of the \nustar Data Analysis Software (NuSTARDAS) jointly developed by the Space Science Data Center (SSDC; ASI, Italy) and the California Institute of Technology (USA). This work is based on archival data, software or online services provided by the SSDC. This research has made use of the High Energy Astrophysics Science Archive Research Centre Online Service, provided by the NASA/Goddard Space Flight Centre and NASA’s Astrophysics Data System.
\end{acknowledgements}
\bibliography{ref}
\bibliographystyle{aa}
\clearpage
\onecolumn
\begin{appendix}
\section{Plots of the best-fit results as a function of the observation duration}
\section{Observation logs}
\begin{longtable}{rrcccccp{2cm}}
\caption{\label{rev_sample} Log of the rev sample}\\
\hline\hline
BAT ID & Name & NuSTAR ObsID & z & Type & Duration & $\rm log(M/M_\odot)$ & Ref. \\
& & & & & (sec) & & \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\
\hline
\endfirsthead
\caption{continued.}\\
\hline\hline
BAT ID & Name & NuSTAR ObsID & z & Type & Duration & $\rm log(M/M_\odot)$ & Ref. \\
& & & & & (sec) & & \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\
\hline
\endhead
\hline
\endfoot
6 & Mrk335 & 60001041005 & 0.025 & Sy1.2 & 191700 & 7.23 & 1, 2, 3, 5, 6, 7, 8, 9 \\
& & 80201001002 & & & 167400 & & \\
& & 80001020002 & & & 138000 & & \\
& & 80502636006 & & & 96000 & & \\
& & 80502636002 & & & 94500 & & \\
& & 90602619006 & & & 64800 & & \\
& & 90602619004 & & & 63900 & & \\
& & 80502636004 & & & 48900 & & \\
& & 90602619008 & & & 48000 & & \\
& & 60001041002 & & & 45900 & & \\
& & 60001041003 & & & 42900 & & \\
& & 90602619002 & & & 24600 & & \\
73 & Fairall9& 60001130002 & 0.047 & Sy1.2 & 74700 & 8.29 & 2, 7, 9, 10, 11 \\
116 & Mrk590 & 90201043002 & 0.026 & Sy1.5 & 96900 & 7.57 & 2, 6, 7, 9 \\
& & 60160095002 & & & 39900 & & \\
& & 80402610002 & & & 39300 & & \\
266 & Ark120 & 60001044004 & 0.032 & Sy1.0 & 132600 & 8.06 & 2, 6, 7, 9 \\
310 & UGC3374 & 60201027002 & 0.020 & Sy1.5 & 184200 & 7.28 & 12, 13 \\
458 & Mrk110 & 60201025002 & 0.035 & Sy1.5 & 387000 & 7.29 & 2, 6, 9, 14 \\
497 & NGC3227 & 60202002014 & 0.003 & Sy1.5 & 97800 & 6.75 & 2, 9, 13, 15, 16 \\
& & 60202002002 & & & 96000 & & \\
& & 60202002008 & & & 86400 & & \\
& & 60202002010 & & & 86100 & & \\
& & 60202002004 & & & 84000 & & \\
& & 80502609004 & & & 63600 & & \\
& & 80502609002 & & & 57000 & & \\
530 & NGC3516 & 60002042004 & 0.008 & Sy1.2 & 110100 & 7.39 & 2, 9, 13, 15, 16 \\
& & 60002042002 & & & 71400 & & \\
& & 60160001002 & & & 70500 & & \\
& & 60302016012 & & & 57900 & & \\
& & 60302016010 & & & 55500 & & \\
& & 60302016004 & & & 53400 & & \\
& & 60302016006 & & & 53100 & & \\
& & 60302016002 & & & 45600 & & \\
& & 60302016008 & & & 37800 & & \\
542 & Arp151 & 60160430002 & 0.021 & Sy1.2 & 39000 & 6.67 & 9, 17, 18, 19, 20 \\
558 & NGC3783 & 60101110004 & 0.009 & Sy1.2 & 88800 & 7.37 & 2, 9, 21, 22, 23, 24 \\
& & 60101110002 & & & 86100 & & \\
& & 80202006002 & & & 56400 & & \\
566 & UGC6728 & 60160450002 & 0.006 & Sy1.2 & 27600 & 5.85 & 25, 26 \\
583 & Mrk1310 & 60160465002 & 0.019 & Sy1.5 & 39300 & 6.21 & 9, 17, 18 \\
585 & NGC4051 & 60001050008 & 0.002 & Sy1.5 & 102900 & 5.89 & 2, 9, 12, 13, 27 \\
& & 60001050006 & & & 95400 & & \\
& & 60001050003 & & & 86100 & & \\
& & 60001050002 & & & 16800 & & \\
595 & NGC4151 & 60001111005 & 0.003 & Sy1.5 & 119700 & 7.55 & 2, 7, 9, 13, 15, 28, 29, 30, 31, 32 \\
& & 60502017004 & & & 87000 & & \\
& & 60502017006 & & & 62700 & & \\
& & 60502017002 & & & 62100 & & \\
& & 60502017008 & & & 61800 & & \\
& & 60502017012 & & & 57000 & & \\
& & 60502017010 & & & 57000 & & \\
& & 60001111002 & & & 46200 & & \\
608 & NGC4253 & 60001048002 & 0.012 & Sy1.5 & 172800 & 6.82 & 9, 17, 18, 33 \\
616 & NGC4395 & 60061322002 & 0.001 & Sy1.0 & 39000 & 5.44 & 34 \\
631 & NGC4593 & 60001149006 & 0.009 & Sy1.0 & 47700 & 6.88 & 2, 9, 35, 36 \\
& & 60001149004 & & & 47700 & & \\
& & 60001149010 & & & 39900 & & \\
680 & MCG-6-30-15 & 60001047003 & 0.007 & Sy1.0 & 261000 & 6.29 & 37 \\
& & 60001047005 & & & 57000 & & \\
& & 60001047002 & & & 46500 & & \\
686 & NGC5273 & 60061350002 & 0.003 & Sy1.5 & 39900 & 6.66 & 38 \\
697 & Mrk279 & 60160562002 & 0.030 & Sy1.5 & 37800 & 7.43 & 2, 7, 9, 47, 48\\
717 & NGC5548 & 60002044008 & 0.017 & Sy1.5 & 98100 & 7.71 & 2, 7, 13, 15, 17, 39, 40, 41, 42, 43, 44, 45, 46 \\
& & 60002044006 & & & 97800 & & \\
& & 60002044005 & & & 97200 & & \\
& & 90701601002 & & & 73800 & & \\
774 & Mrk290 & 60061266004 & 0.029 & Sy1.5 & 41700 & 7.27 & 2, 9, 16 \\
984 & 3C382 & 60001084002 & 0.057 & Sy1.2 & 155400 & 8.84 & 12, 13 \\
& & 60202015002 & & & 47400 & & \\
& & 60202015004 & & & 44400 & & \\
& & 60202015010 & & & 40200 & & \\
& & 60202015008 & & & 38700 & & \\
& & 60202015006 & & & 38700 & & \\
& & 60061286002 & & & 32700 & & \\
994 & 3C390.3 & 60001082003 & 0.056 & Sy1.5 & 78000 & 8.63 & 2, 7, 9, 41, 49, 50, 51, 52, 53, 54 \\
& & 60001082002 & & & 38100 & & \\
1090 & Mrk509 & 60101043002 & 0.034 & Sy1.2 & 320400 & 8.05 & 2, 6, 7, 9 \\
& & 60101043004 & & & 75000 & & \\
1182 & NGC7469 & 60101001004 & 0.016 & Sy1.5 & 45600 & 6.95 & 2, 4, 7, 13, 55 \\
& & 60101001008 & & & 45300 & & \\
& & 60101001006 & & & 45300 & & \\
& & 60101001014 & & & 45000 & & \\
& & 60101001002 & & & 41400 & & \\
& & 60101001012 & & & 39600 & & \\
& & 60101001010 & & & 39600 & & \\
\hline
\end{longtable}
{Notes: (1) {\it Gehrels SWIFT}/BAT catalogue identification number (2) Optical counterpart name (3) \nustar observation ID (4) Spectroscopic redshift (5) Optical classification (6) Duration of the \nustar observation (7) Logarithm of the black hole mass estimate in solar mass units (8) Reference for the BH mass measurement: (1) \citet{grier2012a}, (2) \citet{zu2011}, (3) \citet{grier2012b}, (4) \citet{collier1998}, (5) \citet{hu2015}, (6) \citet{peterson1998}, (7) \citet{peterson2004}, (8) \citet{du2014}, (9) \citet{bentz2013}, (10) \citet{rodriguez1997}, (11) \citet{santos1997}, (12) \citet{fausnaugh2017}, (13) \citet{dalla2020}, (14) \citet{kollatschny2001}, (15) \citet{derosa2018}, (16) \citet{denney2010}, (17) \citet{grier2013}, (18) \citet{bentz2010}, (19) \citet{valenti2015}, (20) \citet{bentz2008}, (21) \citet{bentz2021a}, (22) \citet{reichert1994}, (23) \citet{onken2002}, (24) \citet{stirpe1994}, (25) \citet{bentz2016b}, (26) \citet{bentz2021b}, (27) \citet{denney2009}, (28) \citet{clavel1990}, (29) \citet{metzroth2006}, (30) \citet{ulrich1996}, (31) \citet{maoz1991}, (32) \citet{bentz2006}, (33) \citet{bentz2009}, (34) \citet{peterson2005}, (35) \citet{barth2013}, (36) \citet{denney2006}, (37) \citet{bentz2016a}, (38) \citet{bentz2014}, (39) \citet{derosa2015}, (40) \citet{pei2017}, (41) \citet{Kovacevik2014}, (42) \citet{clavel1991}, (43) \citet{korista1995}, (44) \citet{dietrich1993}, (45) \citet{peterson2013}, (46) \citet{peterson2002}, (47) \citet{maoz1990}, (48) \citet{santos2001}, (49) \citet{dietrich2012}, (50) \citet{obrien1998}, (51) \citet{sergeev2017}, (52) \citet{dietrich1998}, (53) \citet{sergeev2011}, (54) \citet{shapovalova2010}, (55) \citet{peterson2014}}
\begin{longtable}{rrrrrccrc}
\caption{\label{vd_sample} Log of VD sample}\\
\hline\hline
BAT ID & Name & NuSTAR ObsID & z & Type & Duration & $\rm Log(M/M_\odot)$ & Ref. & $\rm N_H$ \\
& & & & & (sec) & & & $(\rm cm^{-2})$ \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9)\\
\hline
\endfirsthead
\caption{continued.}\\
\hline\hline
BAT ID & Name & NuSTAR ObsID & z & Type & Duration & $\rm Log(M/M_\odot)$ & Ref. & $\rm N_H$ \\
& & & & & (sec) & & & $\rm (cm^{-2})$ \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9)\\
\hline
\endhead
\hline
\endfoot
62 & IC1657 & 60261007002 & 0.012 & Sy2 & 86100 & 6.93 & 1 & 23.40 \\
& & 60061008003 & & & 23400 & & & \\
63 & NGC454E & 60061009002 & 0.012 & Sy2 & 41100 & 8.41 & 1 & 23.30 \\
74 & NGC513 & 60061012002 & 0.019 & Sy2 & 27600 & 7.64 & 1 & 22.78 \\
77 & Mrk359 & 60402021010 & 0.017 & Sy1.5 & 100200 & 6.60 & 1 & 20.61 \\
& & 60402021006 & & & 98700 & & & \\
& & 60402021004 & & & 98400 & & & \\
84 & NGC612 & 60061014002 & 0.030 & Sy2 & 31500 & 8.98 & 1 & 23.97 \\
102 & NGC788 & 60061018002 & 0.013 & Sy2 & 28800 & 8.03 & 1 & 23.82 \\
157 & NGC1142 & 60368001002 & 0.028 & Sy2 & 41400 & 9.29 & 1 & 23.76 \\
171 & UGC2638 & 60160148002 & 0.023 & Sy2 & 44400 & 7.56 & 1 & 22.86 \\
216 & NGC1566 & 80401601002 & 0.005 & Sy1.5 & 151800 & 6.72 & 2 & 20.00 \\
& & 60501031006 & & & 149400 & & & \\
& & 60501031004 & & & 134400 & & & \\
& & 80502606002 & & & 116700 & & & \\
& & 80301601002 & & & 111000 & & & \\
& & 60501031002 & & & 104700 & & & \\
260 & 2MASXJ05081967+1721483 & 60006011002 & 0.017 & Sy1.9 & 27600 & 8.49 & 1 & 22.33 \\
279 & ESO553-43 & 60160236002 & 0.027 & Sy2 & 38700 & 7.84 & 1 & 23.30 \\
308 & NGC2110 & 60061061002 & 0.007 & Sy2 & 33300 & 9.29 & 1 & 22.94 \\
337 & VIIZw73 & 60061067002 & 0.041 & Sy2 & 30900 & 7.30 & 1 & 23.74 \\
345 & 2MASXJ06411806+3249313 & 60061071002 & 0.047 & Sy2 & 44400 & 7.92 & 1 & 23.12 \\
349 & UGC3601 & 60160278002 & 0.017 & Sy1 & 42300 & 8.50 & 1 & 21.40 \\
385 & UGC3995B & 60061352002 & 0.016 & Sy2 & 43200 & 7.55 & 1 & 23.92 \\
405 & UGC4211 & 60260001002 & 0.034 & Sy2 & 39300 & 8.03 & 1 & 23.02 \\
416 & Fairall272 & 60061080002 & 0.021 & Sy2 & 45300 & 7.95 & 1 & 23.53 \\
426 & 4C+29.30 & 60061083002 & 0.064 & Sy2 & 40200 & 8.71 & 1 & 23.80 \\
430 & CASG218 & 60260002002 & 0.054 & Sy2 & 39001 & 8.32 & 1 & 23.61 \\
439 & Mrk18 & 60061088002 & 0.011 & Sy1.9 & 36600 & 7.59 & 1 & 23.08 \\
451 & IC2461 & 60061353002 & 0.007 & Sy2 & 62700 & 6.68 & 1 & 22.78 \\
453 & MCG-1-24-12 & 60061091010 & 0.019 & Sy2 & 28800 & 7.63 & 1 & 22.81 \\
& & 60061091006 & & & 28500 & & & \\
& & 60061091012 & & & 22200 & & & \\
& & 60061091002 & & & 22200 & & & \\
& & 60061091004 & & & 20100 & & & \\
471 & NGC2992 & 60160371002 & 0.007 & Sy1.9 & 37800 & 7.44 & 1 & 21.72 \\
474 & 4C+73.08 & 60160374002 & 0.058 & Sy2 & 22500 & 9.17 & 1 & 23.79 \\
480 & NGC3081 & 60561044002 & 0.008 & Sy2 & 108900 & 7.17 & 2 & 23.91 \\
509 & LEDA93974 & 60061202002 & 0.020 & Sy2 & 39900 & 8.41 & 1 & 22.60 \\
515 & UGC5856 & 60061359002 & 0.025 & Sy2 & 44100 & 6.77 & 1 & 22.60 \\
517 & UGC5881 & 60160409002 & 0.020 & Sy2 & 39000 & 7.96 & 1 & 22.90 \\
519 & Mrk417 & 60061206002 & 0.032 & Sy2 & 39600 & 7.96 & 1 & 23.90 \\
528 & Z291-28 & 60160420002 & 0.047 & Sy2 & 28500 & 8.21 & 1 & 23.15 \\
532 & Mrk732 & 60061208002 & 0.029 & Sy1.5 & 51300 & 8.41 & 1 & 20.00 \\
533 & 2MASXJ11140245+2023140 & 60061324002 & 0.026 & Sy2 & 43200 & 8.07 & 1 & 23.28 \\
548 & NGC3718 & 60301031004 & 0.003 & Sy1.9 & 167400 & 9.63 & 1 & 21.96 \\
& & 60301031008 & & & 110400 & & & \\
& & 60301031006 & & & 106500 & & & \\
557 & HE1136-2304 & 80002031003 & 0.027 & Sy1.5 & 127800 & 7.32 & 1 & 21.30 \\
& & 80002031002 & & & 51300 & & & \\
560 & NGC3786 & 60061349002 & 0.008 & Sy1.9 & 43200 & 7.46 & 2 & 22.36 \\
568 & UGC6732 & 60161452002 & 0.009 & Sy2 & 40200 & 8.52 & 1 & 22.28 \\
580 & IC751 & 60061217002 & 0.031 & Sy2 & 21600 & 8.50 & 1 & 23.88 \\
584 & LEDA38038 & 60061219002 & 0.028 & Sy2 & 39600 & 7.75 & 1 & 22.46 \\
600 & Z187-22 & 60160481002 & 0.024 & Sy2 & 39000 & 7.47 & 1 & 23.74 \\
605 & Was49b & 60061335002 & 0.063 & Sy1 & 38100 & 8.18 & 1 & 23.41 \\
609 & NGC4258 & 60101046004 & 0.001 & Sy1.9 & 195600 & 7.43 & 2 & 23.00 \\
& & 60101046002 & & & 103500 & & & \\
615 & NGC4388 & 60501018002 & 0.008 & Sy2 & 97200 & 6.74 & 2 & 23.52 \\
& & 60061228002 & & & 38400 & & & \\
626 & NGC4507 & 60102051004 & 0.011 & Sy1.9 & 68700 & 7.69 & 2 & 23.95 \\
& & 60102051008 & & & 57300 & & & \\
629 & ESO506-27 & 60469006002 & 0.025 & Sy2 & 39300 & 8.98 & 1 & 23.95 \\
630 & LEDA170194 & 60061232002 & 0.036 & Sy2 & 39900 & 7.97 & 1 & 22.76 \\
653 & NGC4941 & 60061236002 & 0.004 & Sy2 & 39600 & 6.60 & 1 & 23.91 \\
654 & NGC4939 & 60002036002 & 0.010 & Sy2 & 42000 & 7.19 & 1 & 23.29 \\
659 & NGC4992 & 60061239002 & 0.025 & Sy2 & 43500 & 8.43 & 1 & 23.69 \\
669 & LEDA46599 & 60160536002 & 0.031 & Sy2 & 38700 & 8.39 & 1 & 22.88 \\
674 & ESO509-38 & 60260010002 & 0.026 & Sy1.9 & 45600 & 8.45 & 1 & 20.00 \\
678 & ESO509-IG066 & 60061244002 & 0.034 & Sy1.9 & 39000 & 7.98 & 1 & 22.84 \\
684 & NGC5283 & 60465006002 & 0.010 & Sy2 & 51600 & 7.56 & 2 & 23.15 \\
685 & Mrk268 & 60061246002 & 0.039 & Sy1.9 & 39600 & 8.43 & 1 & 23.53 \\
694 & IC4329A & 60001045002 & 0.016 & Sy1.5 & 310200 & 8.66 & 2 & 21.52 \\
710 & 2MASXJ14104482-4228325 & 60160571002 & 0.033 & Sy2 & 39900 & 8.10 & 1 & 22.78 \\
712 & NGC5506 & 60501015002 & 0.006 & Sy1.9 & 121200 & 8.06 & 2 & 22.44 \\
& & 60501015004 & & & 93900 & & & \\
723 & NGC5610 & 60160581002 & 0.016 & Sy2 & 45300 & 7.55 & 1 & 22.56 \\
733 & NGC5674 & 60061337002 & 0.024 & Sy2 & 39300 & 7.35 & 1 & 22.84 \\
734 & NGC5683 & 60160589002 & 0.036 & Sy1.2 & 40200 & 7.54 & 2 & 20.00 \\
754 & Mrk1392 & 60160605002 & 0.036 & Sy1.5 & 39300 & 8.39 & 2 & 20.00 \\
755 & 2MASXJ15064412+0351444 & 60301023002 & 0.037 & Sy2 & 125400 & 7.16 & 1 & 22.18 \\
& & 60061261002 & & & 39000 & & & \\
757 & Mrk1393 & 60376005002 & 0.054 & Sy1 & 60300 & 7.90 & 1 & 20.28 \\
& & 60160607002 & & & 44100 & & & \\
766 & NGC5899 & 60061348002 & 0.008 & Sy2 & 43500 & 8.58 & 1 & 23.03 \\
783 & NGC5995 & 60061267002 & 0.025 & Sy1.9 & 38400 & 8.55 & 1 & 21.97 \\
804 & Z367-9 & 60061270002 & 0.027 & Sy2 & 38700 & 9.98 & 1 & 23.02 \\
817 & SDSSJ163115.52+235257.4 & 60260011002 & 0.059 & Sy1 & 42000 & 8.67 & 1 & 21.70 \\
836 & LEDA214543 & 60061273002 & 0.032 & Sy2 & 39600 & 9.99 & 1 & 22.58 \\
875 & NGC6300 & 60061277002 & 0.003 & Sy2 & 28800 & 6.80 & 2 & 23.31 \\
960 & MCG+7-37-31 & 60061283002 & 0.041 & Sy2 & 21900 & 7.91 & 1 & 22.48 \\
& & 60061283004 & & & 15600 & & & \\
971 & CGMW5-03333 & 60160686002 & 0.067 & Sy1.9 & 44400 & 7.75 & 1 & 22.43 \\
978 & LEDA3097193 & 60061354002 & 0.022 & Sy2 & 28800 & 7.44 & 1 & 22.98 \\
& & 60061354004 & & & 16500 & & & \\
981 & CGMW5-04382 & 60061285002 & 0.019 & Sy2 & 42900 & 8.25 & 1 & 23.18 \\
1009 & Fairall189 & 60160700002 & 0.028 & Sy1 & 33300 & 7.96 & 1 & 20.00 \\
1027 & ESO231-26 & 60160706002 & 0.062 & Sy2 & 42000 & 8.76 & 1 & 23.38 \\
1049 & 2MASXJ19471938+4449425 & 60061292002 & 0.053 & Sy2 & 33900 & 9.00 & 1 & 22.84 \\
1051 & 3C403 & 60061293002 & 0.059 & Sy2 & 39300 & 9.17 & 1 & 23.69 \\
1135 & NGC7172 & 60061308002 & 0.008 & Sy2 & 60000 & 8.32 & 1 & 22.91 \\
1156 & ESO533-50 & 60061312002 & 0.026 & Sy2 & 42300 & 7.58 & 1 & 23.49 \\
1161 & Mrk915 & 60002060002 & 0.024 & Sy1 & 103800 & 7.49 & 1 & 20.00 \\
1162 & UGC12138 & 60061343002 & 0.025 & Sy1.5 & 42000 & 7.35 & 2 & 20.00 \\
1177 & UGC12282 & 60160812002 & 0.017 & Sy1 & 50400 & 9.96 & 1 & 23.76 \\
1202 & UGC12741 & 60061321002 & 0.017 & Sy2 & 39600 & 8.61 & 1 & 23.82 \\
1409 & NGC4579 & 60201051002 & 0.005 & Sy1.9 & 253800 & 7.94 & 2 & 20.50 \\
\hline
\end{longtable}
\begin{longtable}{rrrrrccrc}
\caption{\label{ct_sample} Log of CT sample}\\
\hline\hline
BAT ID & Name & NuSTAR ObsID & z & Type & Duration & $\rm Log(M/M_\odot)$ & Ref. & $\rm N_H$ \\
 & & & & & (sec) & & & $\rm (cm^{-2})$ \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9)\\
\hline
\endfirsthead
\caption{continued.}\\
\hline\hline
BAT ID & Name & NuSTAR ObsID & z & Type & Duration & $\rm Log(M/M_\odot)$ & Ref. & $\rm N_H$ \\
& & & & & (sec) & & & $\rm (cm^{-2})$ \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9)\\
\hline
\endhead
\hline
\endfoot
70 & MCG+8-3-18 & 60061010002 & 0.02 & Sy2 & 58200 & 8.27 & 1 & 24.12 \\
81 & ESO244-30 & 60468001002 & 0.025 & Sy2 & 59700 & 7.04 & 1 & 24.36 \\
144 & NGC1068 & 60302003004 & 0.003 & Sy1.9 & 110700 & 7.88 & 2 & 25.00 \\
& & 60002030002 & & & 109200 & & & \\
& & 60302003008 & & & 108600 & & & \\
& & 60002033004 & & & 108300 & & & \\
& & 60002033002 & & & 103800 & & & \\
& & 60302003002 & & & 97800 & & & \\
& & 60302003006 & & & 97200 & & & \\
& & 60002030004 & & & 90900 & & & \\
153 & NGC1125 & 60510001002 & 0.011 & Sy2 & 64500 & 6.98 & 1 & 24.21 \\
165 & NGC1229 & 60061325002 & 0.036 & Sy2 & 46500 & 8.10 & 1 & 24.00 \\
245 & Z420-15 & 60061053004 & 0.029 & Sy2 & 34800 & 8.15 & 1 & 24.08 \\
& & 60061053002 & & & 27300 & & & \\
325 & Mrk3 & 60002048004 & 0.013 & Sy1.9 & 52200 & 8.56 & 1 & 24.06 \\
& & 60002048002 & & & 51900 & & & \\
& & 60002048006 & & & 51000 & & & \\
& & 60002048010 & & & 46800 & & & \\
& & 60002048008 & & & 46200 & & & \\
& & 60002049004 & & & 42000 & & & \\
& & 60002049002 & & & 40500 & & & \\
& & 60002049006 & & & 40200 & & & \\
& & 60002049010 & & & 39900 & & & \\
& & 60002049008 & & & 37800 & & & \\
362 & UGC3752 & 60061072002 & 0.015 & Sy1.9 & 45300 & 6.98 & 1 & 24.78 \\
440 & NGC2788A & 60160344002 & 0.013 & Sy2 & 40500 & 8.64 & 1 & 24.26 \\
& & 60469001002 & & & 37500 & & & \\
467 & UGC5101 & 60001068004 & 0.039 & Sy1.9 & 44400 & 8.20 & 1 & 24.28 \\
484 & NGC3079 & 60061097002 & 0.003 & Sy2 & 40200 & 8.06 & 2 & 24.56 \\
518 & NGC3393 & 60061205002 & 0.012 & Sy2 & 27900 & 8.34 & 2 & 24.40 \\
590 & NGC4102 & 60160472002 & 0.002 & Sy2 & 39600 & 8.69 & 1 & 24.14 \\
711 & Circinus & 60002039002 & 0.001 & Sy2 & 94200 & 7.67 & 2 & 24.36 \\
& & 30002038004 & & & 74400 & & & \\
& & 30002038006 & & & 66300 & & & \\
& & 30002038002 & & & 34200 & & & \\
739 & NGC5728 & 60061256002 & 0.009 & Sy1.9 & 49800 & 8.33 & 2 & 24.14 \\
740 & Z164-19 & 60061327006 & 0.029 & Sy1.9 & 37200 & 6.70 & 1 & 24.64 \\
828 & NGC6232 & 60061328004 & 0.014 & Sy2 & 63000 & 7.08 & 1 & 24.35 \\
& & 60061328002 & & & 31200 & & & \\
1127 & NGC7130 & 60261006002 & 0.016 & Sy1.9 & 81000 & 7.30 & 1 & 24.22 \\
& & 60061347002 & & & 39600 & & & \\
1184 & NGC7479 & 60061316002 & 0.007 & Sy1.9 & 43500 & 7.21 & 1 & 24.16 \\
1188 & NGC7582 & 60201003002 & 0.005 & Sy2 & 100500 & 7.15 & 2 & 24.15 \\
& & 60061318002 & & & 38100 & & & \\
& & 60061318004 & & & 27900 & & & \\
1198 & NGC7682 & 60368002002 & 0.017 & Sy2 & 47100 & 7.17 & 2 & 24.27 \\
& & 60061319002 & & & 43200 & & & \\
1262 & NGC1320 & 60061036004 & 0.008 & Sy2 & 52200 & 6.99 & 2 & 24.10 \\
& & 60061036002 & & & 26400 & & & \\
1302 & NGC2273 & 60001064002 & 0.006 & Sy2 & 39600 & 7.56 & 2 & 24.10 \\
1425 & NGC5194 & 60201062003 & 0.002 & Sy2 & 324000 & 6.47 & 2 & 24.70 \\
& & 60002038002 & & & 28500 & & & \\
\hline
\end{longtable}
{Notes: (1) {\it Gehrels SWIFT}/BAT catalogue identification number. (2) Optical counterpart name. (3) \nustar observation ID. (4) Spectroscopic redshift. (5) Optical classification. (6) Duration of the \nustar observation. (7) Logarithm of the source black hole mass estimate in solar mass units. (8) Reference for the stellar velocity dispersion measurement: (1) \citet{koss2017}, (2) \citet{paturel2003}-Hyperleda database. (9) X-ray column density.}
\begin{longtable}{rrrrrr}
\caption{\label{prediction_sample} Log of the prediction sample}\\
\hline\hline
BAT ID & Name & NuSTAR ObsID & z & Type & Duration \\
& & & & & (sec) \\
(1) & (2) & (3) & (4) & (5) & (6) \\
\hline
\endfirsthead
\caption{continued.}\\
\hline\hline
BAT ID & Name & NuSTAR ObsID & z & Type & Duration \\
& & & & & (sec) \\
(1) & (2) & (3) & (4) & (5) & (6) \\
\hline
\endhead
\hline
\endfoot
163 & NGC1194 & 60501011002 & 0.013 & Sy2 & 114900\\
184 & NGC1365 & 60002046003 & 0.005 & Sy2 & 75900\\
& & 60002046009 & & & 132300\\
& & 60002046007 & & & 146100\\
214 & 3C111 & 60202061002 & 0.048 & Sy1.2 & 39900\\
& & 60202061004 & & & 91800\\
& & 60202061006 & & & 104100\\
229 & HE0436-4717 & 60160197002 & 0.053 & Sy1 & 39000\\
& & 30001061004 & & & 116100\\
270 & PictorA & 60101047002 & 0.035 & Sy2 & 225300\\
447 & IRAS09149-6206 & 90401630002 & 0.057 & Sy1 & 162600\\
& & 60401020002 & & & 214500\\
472 & MCG-5-23-16 & 60001046006 & 0.008 & Sy1.9 & 207900\\
567 & HE1143-1810 & 60302002010 & 0.032 & Sy1.2 & 42600\\
& & 60302002006 & & & 45000\\
657 & ESO323-77 & 60202021004 & 0.015 & Sy1.5 & 80100\\
& & 60202021002 & & & 80400\\
& & 60202021006 & & & 80700\\
& & 60202021008 & & & 86400\\
670 & MCG-3-34-64 & 60101020002 & 0.016 & Sy1.9 & 151200\\
692 & 4U1344-60 & 60201041002 & 0.012 & Sy1.9 & 163800\\
719 & ESO511-30 & 60502035006 & 0.022 & Sy1 & 60300\\
& & 60502035002 & & & 62100\\
& & 60502035004 & & & 63000\\
& & 60502035010 & & & 63300\\
& & 60502035008 & & & 80400\\
750 & LEDA3076910 & 60061259002 & 0.016 & Sy1.5 & 39900\\
& & 60401022002 & & & 188700\\
837 & ESO138-1 & 60201040002 & 0.009 & Sy2 & 91800\\
& & 60061274002 & & & 91800\\
841 & NGC6240 & 60102042006 & 0.024 & Sy1.9 & 45000\\
& & 60102042004 & & & 51300\\
& & 60002040002 & & & 59700\\
995 & Fairall51 & 60402014004 & 0.014 & Sy1.5 & 59400\\
& & 60402014006 & & & 63300\\
& & 60402014002 & & & 118200\\
1032 & ESO141-55 & 60201042002 & 0.036 & Sy1.2 & 174300\\
1040 & 2MASXJ19301380+3410495 & 60160713002 & 0.062 & Sy1.5 & 39600\\
& & 60376001002 & & & 95100\\
1111 & IGRJ21277+5656 & 60001110003 & 0.014 & Sy1 & 50100\\
& & 60001110007 & & & 75000\\
& & 60001110002 & & & 89100\\
& & 60402008004 & & & 132300\\
& & 60402008010 & & & 133800\\
& & 60402008008 & & & 137400\\
& & 60402008006 & & & 142800\\
1172 & MR2251-178 & 60102025004 & 0.064 & Sy1.2 & 43500\\
& & 60102025008 & & & 45600\\
& & 60102025002 & & & 51600\\
& & 90601637002 & & & 52800\\
1183 & Mrk926 & 60201029002 & 0.046 & Sy1.5 & 213600\\
1194 & IRAS23226-3843 & 80502607002 & 0.035 & Sy2 & 118800 \\
\hline
\end{longtable}
{Notes: (1) {\it Gehrels SWIFT}/BAT catalogue identification number. (2) Optical counterpart name. (3) \nustar observation ID. (4) Spectroscopic redshift. (5) Optical classification. (6) Duration of the \nustar observation.}
\end{appendix} |
Title:
Parameter Estimation of Gravitational Waves with a Quantum Metropolis Algorithm |
Abstract: Since the first detection of a gravitational wave in 2015, the number of
successes achieved by this innovative way of looking at the universe has
not stopped growing. However, the current techniques for analyzing this type of
event face a serious bottleneck due to the high computational power they
require. In this article we explore how recent techniques based on quantum
algorithms could overcome this obstacle. To this end, we propose a
quantization of the classical algorithms used in the literature for the
inference of gravitational wave parameters, based on the well-known quantum
walk technique applied to a Metropolis-Hastings algorithm. Finally, we compare
this algorithm with its classical counterpart on all the events of the first
GW catalog, GWTC-1, estimating sets of parameters of increasing complexity,
and we find a polynomial advantage for the quantum algorithms, thus setting a
first starting point for future algorithms.
| https://export.arxiv.org/pdf/2208.05506 |
\preprint{AIP/123-QED}
\title[Sample title]{Sample Title:\\with Forced Linebreak\footnote{Error!}}%
\thanks{Footnote to title of article.}
\author{A. Author}
\altaffiliation[Also at ]{Physics Department, XYZ University.}%
\author{B. Author}%
\email{Second.Author@institution.edu.}
\affiliation{
Authors' institution and/or address%
}%
\author{C. Author}
\homepage{http://www.Second.institution.edu/~Charlie.Author.}
\affiliation{%
Second institution and/or address%
}%
\date{\today}%
\keywords{Suggested keywords}%
\begin{quotation}
The ``lead paragraph'' is encapsulated with the \LaTeX\
\verb+quotation+ environment and is formatted as a single paragraph before the first section heading.
(The \verb+quotation+ environment reverts to its usual meaning after the first sectioning command.)
Note that numbered references are allowed in the lead paragraph.
The lead paragraph will only be found in an article being prepared for the journal \textit{Chaos}.
\end{quotation}
\section{\label{sec:level1}First-level heading:\protect\\ The line
break was forced \lowercase{via} \textbackslash\textbackslash}
This sample document demonstrates proper use of REV\TeX~4.2 (and
\LaTeXe) in manuscripts prepared for submission to AIP
journals. Further information can be found in the documentation included in the distribution or available at
\url{http://authors.aip.org} and in the documentation for
REV\TeX~4.2 itself.
When commands are referred to in this example file, they are always
shown with their required arguments, using normal \TeX{} format. In
this format, \verb+#1+, \verb+#2+, etc. stand for required
author-supplied arguments to commands. For example, in
\verb+\section{#1}+ the \verb+#1+ stands for the title text of the
author's section heading, and in \verb+\title{#1}+ the \verb+#1+
stands for the title text of the paper.
Line breaks in section headings at all levels can be introduced using
\textbackslash\textbackslash. A blank input line tells \TeX\ that the
paragraph has ended.
\subsection{\label{sec:level2}Second-level heading: Formatting}
This file may be formatted in both the \texttt{preprint} (the default) and
\texttt{reprint} styles; the latter format may be used to
mimic final journal output. Either format may be used for submission
purposes; however, for peer review and production, AIP will format the
article using the \texttt{preprint} class option. Hence, it is
essential that authors check that their manuscripts format acceptably
under \texttt{preprint}. Manuscripts submitted to AIP that do not
format correctly under the \texttt{preprint} option may be delayed in
both the editorial and production processes.
The \texttt{widetext} environment will make the text the width of the
full page, as on page~\pageref{eq:wideeq}. (Note the use the
\verb+\pageref{#1}+ to get the page number right automatically.) The
width-changing commands only take effect in \texttt{twocolumn}
formatting. It has no effect if \texttt{preprint} formatting is chosen
instead.
\subsubsection{\label{sec:level3}Third-level heading: Citations and Footnotes}
Citations in text refer to entries in the Bibliography;
they use the commands \verb+\cite{#1}+ or \verb+\onlinecite{#1}+.
Because REV\TeX\ uses the \verb+natbib+ package of Patrick Daly,
its entire repertoire of commands is available in your document;
see the \verb+natbib+ documentation for further details.
The argument of \verb+\cite+ is a comma-separated list of \emph{keys};
a key may consist of letters and numerals.
By default, citations are numerical \cite{feyn54}; author-year citations are an option.
To give a textual citation, use \verb+\onlinecite{#1}+: (Refs.~\onlinecite{witten2001,epr,Bire82}).
REV\TeX\ ``collapses'' lists of consecutive numerical citations when appropriate.
REV\TeX\ provides the ability to properly punctuate textual citations in author-year style;
this facility works correctly with numerical citations only with \texttt{natbib}'s compress option turned off.
To illustrate, we cite several together \cite{feyn54,witten2001,epr,Berman1983},
and once again (Refs.~\onlinecite{epr,feyn54,Bire82,Berman1983}).
Note that, when numerical citations are used, the references are sorted into the same order in which they appear in the bibliography.
A reference within the bibliography is specified with a \verb+\bibitem{#1}+ command,
where the argument is the citation key mentioned above.
\verb+\bibitem{#1}+ commands may be crafted by hand or, preferably,
generated by using Bib\TeX.
The AIP styles for REV\TeX~4 include Bib\TeX\ style files
\verb+aipnum.bst+ and \verb+aipauth.bst+, appropriate for
numbered and author-year bibliographies,
respectively.
REV\TeX~4 will automatically choose the style appropriate for
the document's selected class options: the default is numerical, and
you obtain the author-year style by specifying a class option of \verb+author-year+.
This sample file demonstrates a simple use of Bib\TeX\
via a \verb+\bibliography+ command referencing the \verb+sorsamp.bib+ file.
Running Bib\TeX\ (in this case \texttt{bibtex
sorsamp}) after the first pass of \LaTeX\ produces the file
\verb+sorsamp.bbl+ which contains the automatically formatted
\verb+\bibitem+ commands (including extra markup information via
\verb+\bibinfo+ commands). If not using Bib\TeX, the
\verb+thebibliography+ environment should be used instead.
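For concreteness, the workflow just described can be sketched as follows (file names follow the \verb+sorsamp+ example above; adapt them to your own document):
\begin{verbatim}
% At the end of the document body:
\bibliography{sorsamp}% REV\TeX\ selects the numerical or
                      % author-year .bst from the class options
% Then compile:
%   latex sorsamp; bibtex sorsamp; latex sorsamp; latex sorsamp
\end{verbatim}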
\paragraph{Fourth-level heading is run in.}%
Footnotes are produced using the \verb+\footnote{#1}+ command.
Numerical style citations put footnotes into the
bibliography\footnote{Automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography.}.
Author-year and numerical author-year citation styles (each for its own reason) cannot use this method.
Note: due to the method used to place footnotes in the bibliography, \emph{you
must re-run BibTeX every time you change any of your document's
footnotes}.
\section{Math and Equations}
Inline math may be typeset using the \verb+$+ delimiters. Bold math
symbols may be achieved using the \verb+bm+ package and the
\verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can
be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and
Blackboard (or open face or double struck) characters should be
typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands
respectively. Both are supplied by the \texttt{amssymb} package. For
example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and
\verb+$\mathfrak{G}$+ gives $\mathfrak{G}$.
In \LaTeX\ there are many different ways to display equations, and a
few preferred ways are noted below. Displayed math will center by
default. Use the class option \verb+fleqn+ to flush equations left.
Below we have numbered single-line equations, the most common kind:
\begin{eqnarray}
\chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2}
\left(
\begin{array}{c}
|{\bf p}|+p_z\\
p_x+ip_y
\end{array}\right)\;,
\\
\left\{%
\openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}%
\label{eq:one}.
\end{eqnarray}
Note the open one in Eq.~(\ref{eq:one}).
Not all numbered equations will fit within a narrow column this
way. The equation number will move down automatically if it cannot fit
on the same line with a one-line equation:
\begin{equation}
\left\{
ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}.
\end{equation}
When the \verb+\label{#1}+ command is used [cf. input for
Eq.~(\ref{eq:one})], the equation can be referred to in text without
knowing the equation number that \TeX\ will assign to it. Just
use \verb+\ref{#1}+, where \verb+#1+ is the same name that you used in
the \verb+\label{#1}+ command.
Unnumbered single-line equations can be typeset
using the \verb+\[+, \verb+\]+ format:
\[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \]
\subsection{Multiline equations}
Multiline equations are obtained by using the \verb+eqnarray+
environment. Use the \verb+\nonumber+ command at the end of each line
to avoid assigning a number:
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
\delta_{\sigma_1,-\sigma_2}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1),
\end{eqnarray}
\begin{eqnarray}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\nonumber \\
& &\times \left( \sum_{i<j}\right)
\sum_{\text{perm}}
\frac{1}{S_{12}}
\frac{1}{S_{12}}
\sum_\tau c^f_\tau~.
\end{eqnarray}
\textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline
equation if \verb+\nonumber+ is also used on that line. Incorrect
cross-referencing will result. Notice the use \verb+\text{#1}+ for
using a Roman font within a math environment.
To set a multiline equation without \emph{any} equation
numbers, use the \verb+\begin{eqnarray*}+,
\verb+\end{eqnarray*}+ format:
\begin{eqnarray*}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\\
& &\times \left( \sum_{i<j}\right)
\left(
\sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}}
\right)
\frac{1}{S_{12}}~.
\end{eqnarray*}
To obtain numbers not normally produced by the automatic numbering,
use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired
equation number. For example, to get an equation number of
(\ref{eq:mynum}),
\begin{equation}
g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum}
\end{equation}
A few notes on \verb=\tag{#1}=. \verb+\tag{#1}+ requires
\texttt{amsmath}. The \verb+\tag{#1}+ must come before the
\verb+\label{#1}+, if any. The numbering set with \verb+\tag{#1}+ is
\textit{transparent} to the automatic numbering in REV\TeX{};
therefore, the number must be known ahead of time, and it must be
manually adjusted if other equations are added. \verb+\tag{#1}+ works
with both single-line and multiline equations. \verb+\tag{#1}+ should
only be used in exceptional cases; do not use it to number all
equations in a paper.
Enclosing single-line and multiline equations in
\verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce
a set of equations that are ``numbered'' with letters, as shown in
Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below:
\begin{subequations}
\label{eq:whole}
\begin{equation}
\left\{
abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}
\right\},\label{subeq:1}
\end{equation}
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2}
\end{eqnarray}
\end{subequations}
Putting a \verb+\label{#1}+ command right after the
\verb+\begin{subequations}+, allows one to
reference all the equations in a subequations environment. For
example, the equations in the preceding subequations environment were
Eqs.~(\ref{eq:whole}).
\subsubsection{Wide equations}
The equation that follows is set in a wide format, i.e., it spans
across the full page. The wide format is reserved for long equations
that cannot be easily broken into four lines or less:
\begin{widetext}
\begin{equation}
{\cal R}^{(\text{d})}=
g_{\sigma_2}^e
\left(
\frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)
+ x_WQ_e
\left(
\frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)\;. \label{eq:wideeq}
\end{equation}
\end{widetext}
This is typed to show the output is in wide format.
(Since there is no input line between \verb+\equation+ and
this paragraph, there is no paragraph indent for this paragraph.)
\section{Cross-referencing}
REV\TeX{} will automatically number sections, equations, figure
captions, and tables. In order to reference them in text, use the
\verb+\label{#1}+ and \verb+\ref{#1}+ commands. To reference a
particular page, use the \verb+\pageref{#1}+ command.
The \verb+\label{#1}+ should appear in a section heading, within an
equation, or in a table or figure caption. The \verb+\ref{#1}+ command
is used in the text where the citation is to be displayed. Some
examples: Section~\ref{sec:level1} on page~\pageref{sec:level1},
Table~\ref{tab:table1},%
\begin{table}
\caption{\label{tab:table1}This is a narrow table which fits into a
text column when using \texttt{twocolumn} formatting. Note that
REV\TeX~4 adjusts the intercolumn spacing so that the table fills the
entire width of the column. Table captions are numbered
automatically. This table illustrates left-aligned, centered, and
right-aligned columns. }
\begin{ruledtabular}
\begin{tabular}{lcr}
Left\footnote{Note a.}&Centered\footnote{Note b.}&Right\\
\hline
1 & 2 & 3\\
10 & 20 & 30\\
100 & 200 & 300\\
\end{tabular}
\end{ruledtabular}
\end{table}
and Fig.~\ref{fig:epsart}.
\section{Figures and Tables}
Figures and tables are typically ``floats''; \LaTeX\ determines their
final position via placement rules.
\LaTeX\ isn't always successful in automatically placing floats where you wish them.
Figures are marked up with the \texttt{figure} environment, the content of which
imports the image (\verb+\includegraphics+) followed by the figure caption (\verb+\caption+).
The argument of the latter command should itself contain a \verb+\label+ command if you
wish to refer to your figure with \verb+\ref+.
Import your image using either the \texttt{graphics} or
\texttt{graphicx} packages. These packages both define the
\verb+\includegraphics{#1}+ command, but they differ in the optional
arguments for specifying the orientation, scaling, and translation of the figure.
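The pattern just described, in minimal form (the graphics file name \texttt{fig1} is a placeholder):
\begin{verbatim}
\begin{figure}
\includegraphics[width=\columnwidth]{fig1}% placeholder file name
\caption{\label{fig:sketch}A single-column figure, referenced
in text via \ref{fig:sketch}.}
\end{figure}
\end{verbatim}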
Fig.~\ref{fig:epsart}%
is small enough to fit in a single column, while
Fig.~\ref{fig:wide}%
is too wide for a single column,
so instead the \texttt{figure*} environment has been used.
The analog of the \texttt{figure} environment is \texttt{table}, which uses
the same \verb+\caption+ command.
However, you should type your caption command first within the \texttt{table},
instead of last as you did for \texttt{figure}.
The heart of any table is the \texttt{tabular} environment,
which represents the table content as a (vertical) sequence of table rows,
each containing a (horizontal) sequence of table cells.
Cells are separated by the \verb+&+ character;
the row terminates with \verb+\\+.
The required argument for the \texttt{tabular} environment
specifies how data are displayed in each of the columns.
For instance, a column
may be centered (\verb+c+), left-justified (\verb+l+), right-justified (\verb+r+),
or aligned on a decimal point (\verb+d+).
(Table~\ref{tab:table4}%
\begin{table}
\caption{\label{tab:table4}Numbers in columns Three--Five have been
aligned by using the ``d'' column specifier (requires the
\texttt{dcolumn} package).
Non-numeric entries (those entries without
a ``.'') in a ``d'' column are aligned on the decimal point.
Use the
``D'' specifier for more complex layouts. }
\begin{ruledtabular}
\begin{tabular}{ccddd}
One&Two&\mbox{Three}&\mbox{Four}&\mbox{Five}\\
\hline
one&two&\mbox{three}&\mbox{four}&\mbox{five}\\
He&2& 2.77234 & 45672. & 0.69 \\
C\footnote{Some tables require footnotes.}
&C\footnote{Some tables need more than one footnote.}
& 12537.64 & 37.66345 & 86.37 \\
\end{tabular}
\end{ruledtabular}
\end{table}
illustrates the use of decimal column alignment.)
Extra column-spacing may be specified as well, although
REV\TeX~4 sets this spacing so that the columns fill the width of the
table.
Horizontal rules are typeset using the \verb+\hline+
command.
The doubled (or Scotch) rules that appear at the top and
bottom of a table can be achieved by enclosing the \texttt{tabular}
environment within a \texttt{ruledtabular} environment.
Rows whose columns span multiple columns can be typeset using \LaTeX's
\verb+\multicolumn{#1}{#2}{#3}+ command
(for example, see the first row of Table~\ref{tab:table3}).%
\begin{table*}
\caption{\label{tab:table3}This is a wide table that spans the page
width in \texttt{twocolumn} mode. It is formatted using the
\texttt{table*} environment. It also demonstrates the use of
\textbackslash\texttt{multicolumn} in rows with entries that span
more than one column.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
&\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\
Ion&1st alternative&2nd alternative&1st alternative
&2nd alternative\\ \hline
K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\
Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.}
&$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\
Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnote{This is a footnote in a table that spans the full page
width in \texttt{twocolumn} mode. It is supposed to set on the full width of the page, just as the caption does. }
&$(4e)^{\text{a}}$\\
He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\
Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
The tables in this document illustrate various effects.
Tables that fit in a narrow column are contained in a \texttt{table}
environment.
Table~\ref{tab:table3} is a wide table, therefore set with the
\texttt{table*} environment.
Lengthy tables may need to break across pages.
A simple way to allow this is to specify
the \verb+[H]+ float placement on the \texttt{table} or
\texttt{table*} environment.
Alternatively, using the standard \LaTeXe\ package \texttt{longtable}
gives more control over how tables break and allows headers and footers
to be specified for each page of the table.
An example of the use of \texttt{longtable} can be found
in the file \texttt{summary.tex} that is included with the REV\TeX~4
distribution.
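In outline, a \texttt{longtable} skeleton looks like the following (an illustrative sketch only, with placeholder content; see the \texttt{longtable} package documentation for the full set of header and footer commands):
\begin{verbatim}
% preamble: \usepackage{longtable}
\begin{longtable}{lc}
\caption{A table that may break across pages.}\\
Name & Value \\ \hline
\endfirsthead
Name & Value \\ \hline
\endhead
\hline \multicolumn{2}{r}{\textit{continued}}\\
\endfoot
\hline
\endlastfoot
alpha & 1 \\
beta  & 2 \\
\end{longtable}
\end{verbatim}
The \verb+\endfirsthead+/\verb+\endhead+ pair controls what repeats at the top of each page, and \verb+\endfoot+/\verb+\endlastfoot+ control the page-break and final footers.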
There are two methods for setting footnotes within a table (these
footnotes will be displayed directly below the table rather than at
the bottom of the page or in the bibliography).
The easiest
and preferred method is just to use the \verb+\footnote{#1}+
command. This will automatically enumerate the footnotes with
lowercase roman letters.
However, it is sometimes necessary to have
multiple entries in the table share the same footnote.
In this case,
create the footnotes using
\verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+.
\texttt{\#1} is a numeric value.
Each time the same value for \texttt{\#1} is used,
the same mark is produced in the table.
The \verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular}
environment.
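In outline, the shared-footnote pattern looks like this (a minimal sketch with placeholder entries):
\begin{verbatim}
\begin{table}
\begin{ruledtabular}
\begin{tabular}{cc}
A\footnotemark[1] & B\footnotemark[1] \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{A single footnote shared by both entries.}
\end{table}
\end{verbatim}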
Examine the \LaTeX\ source and output for Tables~\ref{tab:table1} and
\ref{tab:table2}%
\begin{table}
\caption{\label{tab:table2}A table with more columns still fits
properly in a column. Note that several entries share the same
footnote. Inspect the \LaTeX\ input for this table to see
exactly how it is done.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
&$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$&
&$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\
\hline
Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1]
& 0.680 & 1.870 & 3.700 \\
Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2]
& 0.450 & 1.930 & 3.760 \\
Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3]
& 0.750 & 2.170 & 3.560 \\
Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4]
& 0.900 & 2.370 & 3.720 \\
Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2]
& 0.380 & 1.730 & 2.830 \\
Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5]
& 0.760 & 2.110 & 3.120 \\
Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5]
& 1.120 & 2.620 & 3.480 \\
Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3]
& 1.330 & 2.800 & 3.590 \\
Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4]
& 1.420 & 3.030 & 3.740 \\
In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5]
& 0.960 & 2.460 & 3.780 \\
Tl& 0.480 & 18.90 & 3.550 & & & & \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.}
\footnotetext[2]{Here's the second.}
\footnotetext[3]{Here's the third.}
\footnotetext[4]{Here's the fourth.}
\footnotetext[5]{And etc.}
\end{table}
for an illustration.
All AIP journals require that the initial citation of
figures or tables be in numerical order.
\LaTeX's automatic numbering of floats is your friend here:
just put each \texttt{figure} environment immediately following
its first reference (\verb+\ref+), as we have done in this example file.
\begin{acknowledgments}
We wish to acknowledge the support of the author community in using
REV\TeX{}, offering suggestions and encouragement, testing new versions,
\dots.
\end{acknowledgments}
\appendix
\section{Appendixes}
To start the appendixes, use the \verb+\appendix+ command.
This signals that all following section commands refer to appendixes
instead of regular sections. Therefore, the \verb+\appendix+ command
should be used only once---to set up the section commands to act as
appendixes. Thereafter normal section commands are used. The heading
for a section can be left empty. For example,
\begin{verbatim}
\appendix
\section{}
\end{verbatim}
will produce an appendix heading that says ``APPENDIX A'' and
\begin{verbatim}
\appendix
\section{Background}
\end{verbatim}
will produce an appendix heading that says ``APPENDIX A: BACKGROUND''
(note that the colon is set automatically).
If there is only one appendix, then the letter ``A'' should not
appear. This is suppressed by using the star version of the appendix
command (\verb+\appendix*+ in the place of \verb+\appendix+).
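For example,
\begin{verbatim}
\appendix*
\section{Background}
\end{verbatim}
produces an appendix heading without the letter ``A''.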
\section{A little more on appendixes}
Observe that this appendix was started by using
\begin{verbatim}
\section{A little more on appendixes}
\end{verbatim}
Note the equation number in an appendix:
\begin{equation}
E=mc^2.
\end{equation}
\subsection{\label{app:subsec}A subsection in an appendix}
You can use a subsection or subsubsection in an appendix. Note the
numbering: we are now in Appendix~\ref{app:subsec}.
\subsubsection{\label{app:subsubsec}A subsubsection in an appendix}
Note the equation numbers in this appendix, produced with the
subequations environment:
\begin{subequations}
\begin{eqnarray}
E&=&mc, \label{appa}
\\
E&=&mc^2, \label{appb}
\\
E&\agt& mc^3. \label{appc}
\end{eqnarray}
\end{subequations}
They turn out to be Eqs.~(\ref{appa}), (\ref{appb}), and (\ref{appc}).
\nocite{*}
\bibliography{sorsamp}%
|
Title:
Truncated accretion discs in black hole X-ray binaries: dynamics and variability signatures |
Abstract: Variable features in black hole X-ray Binaries (BH-XRBs) are observed in
different energy ranges and time scales. The physical origin of different
spectral states in BH-XRBs and their relations with the underlying accretion
disc are still elusive. To investigate the intermediate state of BH-XRBs during
outburst, we simulate a truncated accretion disc around a Kerr black hole using
a general relativistic magneto-hydrodynamical (GRMHD) framework under
axisymmetry with adaptively refined mesh. Additionally, we have also carried
out radiative transfer calculations for understanding the implications of disc
dynamics on emission. Dynamically, the inner edge of the truncated accretion
disc oscillates in a quasi-periodic fashion (QPO). The QPO frequency of
oscillations $(\nu_{\rm QPO, max})$ increases as the magnetic field strength
and magnetic resistivity increase. However, as the truncation radius increases,
$\nu_{\rm QPO, max}$ decreases. In our simulation models, frequency varies
between $7\times(10M_{\odot}/M_{\rm BH})$ Hz $\lesssim\nu_{\rm QPO,
max}\lesssim20 \times (10M_{\odot}/M_{\rm BH})$ Hz, which is in the range of
low-frequency QPOs. We further find evidence of transient shocks in the highly
accreting stage during oscillation. Such a transient shock acts as an extended
hot post-shock corona around the black hole that has an impact on its radiative
properties. The radiative transfer calculations show signatures of these
oscillations in the form of modulation in the edge-brightened structure of the
accretion disc.
| https://export.arxiv.org/pdf/2208.10726 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\newcommand{\bvc}[1]{\textbf{\color{green}#1}}
\begin{keywords}
accretion, accretion discs - black hole physics - magnetic reconnection - MHD - shock waves - X-rays: binaries.
\end{keywords}
\section{Introduction}
X-ray binaries (XRBs) and active galactic nuclei (AGNs) typically have spectral and temporal variations in their light curves, which could be the imprints of physical processes occurring around the central objects. The BH-XRBs are mostly inactive, with rare outbursts caused by a sudden surge in accretion activity. In BH-XRBs, the characteristics of an outburst and its various spectral states (hard state, hard/soft intermediate state, and soft state) are commonly described using a hardness--intensity `Q' diagram \citep[see for detail,][etc.]{Fender-etal2004, Remillard-McClintock2006,Done-etal2007, Dunn-etal2010,Belloni2010, Belloni-Motta2016}.
The thermal component dominates the spectrum in the soft state, and it may be understood using a geometrically thin, optically thick accretion disc that produces a blackbody spectrum locally \citep{Shakura-Sunyaev1973, Novikov-Thorne1973}. In the hard state, a high-energy power-law component dominates the spectrum, and the contribution of both hard and soft components can be seen in the intermediate state \citep[e.g.,][]{Homan-Belloni2005, McClintock-etal2006, Belloni2010}. Compton upscattering of soft X-ray photons by hot electrons present in the corona close to the black hole produces the power-law component \citep{Thorne-Price1975, Sunyaev-Truemper1979, Chakrabarti-Titarchuk1995}.
The corona also plays an important role in understanding the observed correlations between different energy ranges of BH-XRBs \citep[e.g.,][]{Kylafis-etal2018}. Recently, with a correlation study for GRS 1915+105, it has been shown that the matter in the corona even contributes to the jet at different times \citep{Mendez-etal2022}. Thus, it is crucial to study corona formation close to the black hole and investigate its dynamics in order to understand the observed variability in BH-XRBs. There are essentially two models with regard to the placement of the corona: (i) the lamp-post model, and (ii) the extended corona \citep[see][]{Chauvin-etal2018}. The lamp-post model posits a compact corona at a specific height along the black hole's angular momentum axis. This is a simplified model and is often considered ad hoc \citep[and references therein]{Degenaar-etal2018}.
On the other hand, the extended corona model has several flavors depending on where the corona is located \citep{Galeev-etal1979, Haardt-Maraschi1991, Miyamoto-Kitamoto1991, Fender-etal1999, Kylafis-Belloni2015}. However, the general agreement is that it originates from the thin disc itself and occupies a region near the black hole. Evaporation of a truncated accretion disc at a certain radius can produce such a region \citep{Eardley-etal1975, Ichimaru1977}. Also, because of the shock transition in the accretion flow, such a corona can be produced from a low angular momentum flow \citep[and references therein]{Das2007,Chakrabarti-etal2015,Dihingia-etal2018,Dihingia-etal2019a,Dihingia-etal2019b,Dihingia-etal2020}. The generation of shock waves and oscillations has been demonstrated using axisymmetric hydrodynamical simulations \citep{Lee-etal2011,Lee-etal2016,Das-etal2014}. \cite{Singh-etal2021} have recently shown the production of shocks with resistive MHD. However, the shock formation in these simulations is highly dependent on the initial conditions, and it requires supersonic injection of accretion material at the outer boundary.
General relativistic magneto-hydrodynamics (GRMHD) is an indispensable technique for studying the physics of active galactic nuclei \citep[see][and references therein]{Davis-Tchekhovskoy2020, Mizuno2022}. However, GRMHD has not been widely used to investigate the physics of BH-XRBs. Recently, \cite{Dexter-etal2021} used radiation GRMHD around BH-XRBs and associated strongly magnetized accretion with the hard state. In our recent study \citep{Dihingia-etal2021,Dihingia-Vaidya2022a}, we simulated a high angular momentum thin disc around the black hole. We initially observed a structured jet and disc-wind around the black hole, but after sufficient simulation time, the inner part of the accretion disc starts to oscillate, and the time period of oscillation increases with time. We identified the oscillating phase with the hard-intermediate state (HIMS) seen in BH-XRBs.
The onset of an outburst happens once a thin disc (high angular momentum matter) forms at a large distance $(\sim2\times10^4$ gravitational radii ($r_g$)) from the black hole \citep[see for discussion,][]{Kylafis-Belloni2015}. Such a scenario can be realized as a truncated accretion disc with a very large truncation radius around a black hole. Simulating such a realistic picture is computationally expensive. Consequently, we chose a truncation radius closer to the black hole to capture the physics only in an intermediate state of the initial phase of the outburst.
Following this, we simulate a truncated accretion disc around a Kerr black hole. We consider that the standard thin disc extends only down to a certain radius $(r_{\rm tr})$. Owing to this general setup, the truncated accretion disc may help us understand the physically motivated corona model and its variability signatures. In this study, we investigate the properties of the truncated accretion disc and its behavior with different truncation radii and magnetic field strengths, both for the ideal MHD case and with different resistivities for the resistive case. Further, we employ the general relativistic radiative transfer (GRRT) post-processing module RAPTOR \citep{Bronzwaer-etal2018} to understand the consequences for the radiative properties of the truncated accretion disc.
Our paper is arranged as follows: In section 2, we explain in detail the numerical approach along with the initial conditions considered for our study. Section 3 explains the dynamical evolution of simulation models. Section 4 provides temporal characteristics of time average quantities, and in section 5, we present the radiative properties of simulation models. A summary and discussion of our results are presented in section 6.
\section{Numerical setup}
\subsection{Basic equations}
For this work, we use the code \texttt{BHAC}, equipped with adaptive mesh refinement (AMR), to solve the sets of ideal as well as resistive GRMHD equations following \cite{Porth-etal2017, Olivares-etal2019,Ripperda-etal2019}. The basic resistive GRMHD equations are as follows,
\begin{align}
\begin{aligned}
&\nabla_\mu\left(\rho u^\mu\right)=0,\\
&\nabla_\mu T^{\mu\nu}=0,\\
&\nabla_\mu F^{\mu\nu}={\cal J}^\mu,\\
&\nabla_\mu{}^*F^{\mu\nu}=0,\\
\end{aligned}
\label{eq-01}
\end{align}
where different symbols have their usual meaning, viz. $\rho, u^\mu, T^{\mu\nu}, F^{\mu\nu}, {}^*F^{\mu\nu}$, and ${\cal J}^{\mu}$, stand for rest-mass density, four velocity, energy-momentum tensor, Faraday tensor, dual of the Faraday tensor, and electric 4-current, respectively.
The details of these terms and the explicit solution procedure are given in \cite{Porth-etal2017,Olivares-etal2019,Ripperda-etal2019}. These equations are solved in axisymmetry using spherical Modified Kerr--Schild (MKS) coordinates.
By choosing an appropriate MKS stretching parameter, we ensure that the maximum resolution is concentrated near the equatorial plane of the simulation domain \citep{McKinney-Gammie2004}.
\texttt{BHAC} employs the constrained-transport method \citep{DelZanna-etal2007} to ensure a divergence-free magnetic field in the simulation domain. Throughout this work, we use a geometrized unit system with $G=M=c=1$, where $M$, $G$, and $c$ are the mass of the black hole, the universal gravitational constant, and the speed of light, respectively. In this unit system, mass, length, and time are expressed in terms of $M$, $r_g=GM/c^2$, and $t_g=GM/c^3$, respectively. We adopt the metric signature $(-, +, +, +)$, with the four-velocity satisfying $u_\mu u^\mu = -1$.
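For orientation (a simple consistency check, not a simulation input), restoring physical units for a representative BH-XRB mass of $M=10\,M_\odot$ gives
\begin{align*}
r_g = \frac{GM}{c^2} \simeq 14.8~{\rm km}, \qquad t_g = \frac{GM}{c^3} \simeq 4.9\times10^{-5}~{\rm s},
\end{align*}
so, for example, a variability time scale of $10^3\,t_g$ corresponds to a frequency of $\sim20$~Hz.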
Our simulation box extends radially from the event horizon $r=r_{\rm H}$ to $r_{\rm out}=500$. In the polar direction, it extends from $\theta=0$ to $\theta=\pi$.
\subsection{Initial conditions}
We set up an initial thin disc following \cite{Dihingia-etal2021,Dihingia-Vaidya2022a}; the setup is based on the \cite{Novikov-Thorne1973} model. The density profile for the thin-disc setup in Boyer--Lindquist coordinates is given by,
\begin{align}
\rho(r,\theta) = \rho_e(r) \exp\left(-\frac{\alpha^2 z^2}{H^2}\right); ~~ z=r\cos(\theta).
\label{eq-rho}
\end{align}
Here, we choose $\alpha=2$ to maintain a thin disc configuration and $H$ is the scale height of the accretion disc, which is obtained following \citet{Riffert-Herold1995} and \citet{Peitz-Appl1997}. The density profile on the equatorial plane $(\rho_e(r))$ is given by,
\begin{align}
\rho_e(x)=\left(\frac{\Theta_0}{\cal K}\right)^{1/(\Gamma -1)}
\left(\frac{f(x)}{x^2}\right)^{1/(4(\Gamma - 1))},
\label{eq-rhoe}
\end{align}
where $x=\sqrt{r}$ and $\Theta_0$ is a constant related to the temperature of the initial disc; we choose $\Theta_0 = 0.0001$. Also, we consider a polytropic index $\Gamma = 4/3$ and an entropy constant ${\cal K}=0.1$ for this study. Finally, $f(x)$ is given by,
\begin{align}
\begin{aligned}
f(x) =& \frac{3}{2} \frac{1}{ x^2 \left(2 a+x^3-3 x\right)}\bigg[ x - x_0 -\frac{3}{2}\ln\left(\frac{x}{x_0}\right) \\
&- \frac{3\left(s_1-a\right)^2}{s_1(s_1-s_2)(s_1-s_3)} \ln \left(\frac{x- s_1}{x_0-s_1}\right) \\
&- \frac{3\left(s_2-a\right)^2}{s_2(s_2-s_1)(s_2-s_3)}\ln \left(\frac{x- s_2}{x_0-s_2}\right) \\
&- \frac{3\left(s_3-a\right)^2}{s_3(s_3-s_1)(s_3-s_2)}\ln \left(\frac{x- s_3}{x_0-s_3}\right)\bigg], \\
\end{aligned}
\label{eq-fx}
\end{align}
where $x_0=\sqrt{r_{\rm ISCO}}$, $r_{\rm ISCO}$ is the radius of the ISCO (innermost stable circular orbit), and $s_1, s_2,$ and $s_3$ are the roots of the equation $s^3 - 3s + 2a=0$. Along with the density profile, we also supply the initial azimuthal velocity, which is given as follows \citep{Dihingia-etal2021},
\begin{align}
u^\phi(r,\theta) = \left(\frac{\cal A}{{\cal B}+ 2 {\cal C}^{1/2}}\right)^{1/2},
\label{eq-12}
\end{align}
where
$$
\begin{aligned}
{\cal A}=&\left(\Gamma^r_{tt}\right)^2,\\
{\cal B}=&g_{tt}\left(\Gamma^r_{tt}\Gamma^r_{\phi \phi}-2 {\Gamma^r_{t\phi}}^2\right)+2 g_{t\phi} \Gamma^r_{tt} \Gamma^r_{t\phi} - g_{\phi \phi } {\Gamma^r_{tt}}^2,\\
{\cal C}=&\left({\Gamma^r_{t\phi}}^2 - \Gamma^r_{tt} \Gamma^r_{\phi \phi}\right) (g_{t\phi} \Gamma^r_{tt}- g_{tt} \Gamma^r_{t\phi})^2.\\
\end{aligned}
$$
Here, $\Gamma^\alpha_{\beta\gamma}$ and $g_{\mu\nu}$ are the non-zero components of the Christoffel symbols and the metric for a Kerr black hole, respectively. Furthermore, we consider the thin disc to be truncated at a radial distance $r_{\rm tr}$ and to extend up to the outer boundary of the simulation box, $r_{\rm out}=500$.
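As an illustrative check of the quantities entering equation \ref{eq-fx}: for a Schwarzschild black hole ($a=0$),
\begin{align*}
s^3-3s=0 \;\Rightarrow\; s\in\{0,\,\pm\sqrt{3}\}, \qquad x_0=\sqrt{r_{\rm ISCO}}=\sqrt{6}\approx2.449,
\end{align*}
since $r_{\rm ISCO}=6\,r_g$ in that limit.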
In our study, we consider that the accretion disc is threaded by the poloidal magnetic field lines. The initial poloidal field lines are prescribed by implementing a vector potential $A_\phi$ following \cite{Zanni-etal2007,Vourellis-etal2019}. The functional form of the vector potential is given by
\begin{align}
A_\phi \propto \left(r \sin \theta\right)^{3/4} \frac{m^{5/4}}{\left(m^2 + \tan^{-2}(\theta-\pi/2)\right)^{5/8}}.
\label{eq-02}
\end{align}
The parameter $m\,(=0.1)$ is related to the initial inclination of the field lines, and it also determines the magnetic flux of the system. The parameter $m$ plays a crucial role in the launching of Blandford--Payne type winds from the accretion disc \citep{Blandford-Payne1982,Dihingia-etal2021}. The initial strength of the poloidal magnetic field is set by the choice of the plasma-$\beta$ parameter at the truncation radius $r_{\rm tr}$ on the equatorial plane, $\beta_{\rm tr} = p_{\rm gas}^{\rm tr}/p_{\rm mag}^{\rm tr}$. Here, the superscript `tr' denotes quantities evaluated at $r=r_{\rm tr}$.
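For instance, taking $p_{\rm mag}=b^2/2$ (the Lorentz--Heaviside normalization common in GRMHD codes, assumed here for the estimate), the initial field strength at the truncation radius follows directly from the chosen $\beta_{\rm tr}$,
\begin{align*}
b^{\rm tr} = \sqrt{\frac{2\,p_{\rm gas}^{\rm tr}}{\beta_{\rm tr}}},
\end{align*}
so increasing $\beta_{\rm tr}$ from $100$ to $1000$ weakens the seed field by a factor of $\sqrt{10}\approx3.2$.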
Following our motivation, we carry out eight axisymmetric simulation models by choosing different truncation radii $(r_{\rm tr})$, initial plasma-$\beta$ values ($\beta_{\rm tr}$), effective resolutions (with AMR), and magnetic resistivities.
For axisymmetric models, we consider the highest effective resolution to be $2048\times1024$ (with three refinement levels).
The list of input parameters, effective resolution, and the final simulation time $(t_{\rm final})$ for all the simulation models is given in Table~\ref{tab-01}. Out of all these models, we consider 2D40AH to be our reference model for the sake of explanation and comparison. In Fig. \ref{fig-initial}, we show the initial density distribution $(\log(\rho/\rho^{\rm tr}))$ and the initial gas pressure distribution $(\log(p_{\rm gas}/p_{\rm gas}^{\rm tr}))$ for the reference model in panels (a) and (b), respectively. In Fig. \ref{fig-initial}a, we also show the initial magnetic field lines as gray lines. In Fig. \ref{fig-initial}b, the white line represents the plasma-$\beta=1$ boundary. In the figure, the density distribution follows equation \ref{eq-rho} outside the truncation radius ($r_{\rm tr}=40$). Near the equatorial plane, the matter distribution is gas pressure dominated; far from the equatorial plane and inside the truncation radius, it is magnetic pressure dominated.
\begin{table}
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Model & $\eta$ &Effective resolution & $r_{\rm tr}$ & $\beta_{\rm tr}$ & $t_{\rm final}$\\
\hline
2D40A & 0 & $1024\times 512 $ & 40 & 100 & 14000\\
2D40AH & 0 &$2048\times 1024 $ & 40 & 100 & 9790 \\
rl-2D40AH & $1\times 10^{-3}$ &$2048\times 1024 $ & 40 & 100 & 9560 \\
rh-2D40AH & $5\times 10^{-3}$ &$2048\times 1024 $ & 40 & 100 & 8600 \\
2D40B & 0 & $1024\times 512 $ & 40 & 500 & 14000\\
2D40C & 0 &$1024\times 512 $ & 40 & 1000 & 12000\\
2D50 & 0 &$1024\times 512$ & 50 & 100 & 11160\\
2D60 & 0 &$1024\times 512$ & 60 & 100 & 11480\\
\hline
\end{tabular}
\caption{The explicit values of effective resolution, resistivity ($\eta$), truncation radius $r_{\rm tr}$, initial plasma-$\beta$ at $r_{\rm tr}$ ($\beta_{\rm tr}$), and the final simulation time $t_{\rm final}$ for different simulation models.}
\label{tab-01}
\end{table}
\subsection{Boundary conditions}
To prevent material from entering the numerical domain and interfering with accretion flow from the truncated disc, we impose no-inflow conditions at the inner radial boundary. At the polar axis, the scalar and radial components of vectors are considered symmetric, whereas the azimuthal and polar components of vectors are considered antisymmetric.
Furthermore, in our simulation setup, we do not allow any inflow of matter at the outer edge of the accretion disc, which is a rather naive approximation. However, as the current study focuses only on understanding the dynamics of the inner part of the truncated accretion disc, we run our simulation models up to $\sim 450-500$ inner disc orbits (at the ISCO). This is about $\sim 15\%-20\%$ of the rotation time of the outer edge at $r=500$, and therefore this idealized outer radial boundary will not affect the inner flow structure significantly within our simulation time.
\section{Dynamical Characteristics}
\subsection{A reference case - ideal MHD}
We first discuss in detail our reference simulation 2D40AH, which is an axisymmetric run under the ideal MHD approximation. In later sections, we compare it with results obtained from the resistive simulations.
The truncated accretion disc winds up the initially weak poloidal magnetic field lines and generates a toroidal component as the system evolves.
The differential rotation of the flow triggers the magneto-rotational instability (MRI) and drives turbulence, which helps in the transport of angular momentum. To ascertain that the MRI is active in our simulations, we set the resolution such that the MRI quality factor $Q_{\theta}\gtrsim10$ (see Appendix A for details) throughout the simulation domain \citep[see][]{Sano-etal2004}.
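For reference, a commonly used form of the quality factor (conventions vary between implementations; Appendix A gives the precise definition adopted here) counts the number of cells resolving the fastest-growing MRI wavelength in the polar direction,
\begin{align*}
Q_\theta = \frac{\lambda_{\rm MRI}^\theta}{\Delta x^\theta}, \qquad
\lambda_{\rm MRI}^\theta = \frac{2\pi}{\Omega}\,\frac{|b^\theta|\sqrt{g_{\theta\theta}}}{\sqrt{\rho h + b^2}},
\end{align*}
where $\Delta x^\theta=\sqrt{g_{\theta\theta}}\,\Delta\theta$ is the proper grid spacing; $Q_\theta\gtrsim10$ indicates adequately resolved MRI-driven turbulence.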
With the angular momentum transport, the Keplerian matter at the inner edge loses angular momentum and becomes sub-Keplerian. Subsequently, the sub-Keplerian matter can occupy orbits at lower radii. As a result, the flow starts accreting towards the black hole. Furthermore, the onset of disc-winds aids in transporting angular momentum outwards, which also contributes to setting up the accretion process \citep{Dihingia-etal2021}.
Eventually, the matter reaches the event horizon of the black hole.
To understand the dynamical properties in detail, in Fig. \ref{fig-rho} we show the logarithmic normalized density distribution $(\rho/\rho^{\rm tr})$ and magnetic field lines for model 2D40AH. Panels (a), (b), (c), and (d) correspond to simulation times $t=3240, 3800, 4080$, and $4380$, respectively. It is interesting to note that in Fig. \ref{fig-rho}a and Fig. \ref{fig-rho}e, the accretion disc extends up to the event horizon. In between (Fig. \ref{fig-rho}b-\ref{fig-rho}d), the accretion disc is truncated. This essentially suggests that the accretion disc connects to and disconnects from the event horizon periodically. During this process, the inner edge of the accretion disc oscillates in a quasi-periodic manner.
We observe such oscillation events throughout our simulation (discussed further in section 4). In Fig. \ref{fig-mdot_line}, we show the mass flux $\dot{m}=|\sqrt{-g}\rho u^r|$ at radii $r=5, 10, 15,$ and $20$ as a function of $\theta$ at simulation times (a) $t=3240$ and (b) $t=3800$ for the reference model. In Fig. \ref{fig-mdot_line}a, we observe very high mass flux around the equatorial plane ($\theta\sim\pi/2$). As a result, matter advects to the black hole efficiently. In this state, matter drags the magnetic field lines along with it, and due to the high mass flux, magnetic flux efficiently accumulates around the event horizon. Further, we also find that these magnetic field lines are rooted in the ergosphere (see Fig. \ref{fig-rho}a and Fig. \ref{fig-rho}e).
As the magnetic flux accumulated around the horizon reaches $\phi_{\rm BH}\gtrsim 15$ (see section 4 for more detail), the flow achieves a magnetically arrested disc (MAD) configuration \citep{Tchekhovskoy-etal2011}. However, the magnetic flux does not build up around the black hole indefinitely; the flow eventually loses the saturated magnetic flux in a rapid phenomenon commonly known as a magnetic eruption event \citep[e.g.,][]{Igumenshchev2008,Vourellis-etal2019, Porth-etal2021, Dexter-etal2020}. After the eruption event, the accretion disc becomes truncated and the mass flux around the equatorial plane becomes negligible (see the black solid, blue dashed, and green dotted lines in Fig. \ref{fig-mdot_line}b).
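For context, the dimensionless magnetic flux commonly used to diagnose the MAD state is (in a standard convention, which may differ from the exact normalization adopted here)
\begin{align*}
\phi_{\rm BH} = \frac{\Phi_{\rm BH}}{\sqrt{\dot{M}}}, \qquad
\Phi_{\rm BH} = \frac{1}{2}\int_0^{2\pi}\!\!\int_0^{\pi} |B^r|\,\sqrt{-g}\,{\rm d}\theta\,{\rm d}\phi\,\bigg|_{r=r_{\rm H}},
\end{align*}
with the MAD threshold at $\phi_{\rm BH}\approx15$ in Gaussian units, corresponding to $\approx50$ in the Heaviside--Lorentz convention \citep{Tchekhovskoy-etal2011}.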
To understand the oscillation process in detail, we show the density profile along with the magnetic field lines at times $t=3240$ (a) and $t=3800$ (b) for the reference model in Fig. \ref{fig-zoom}. In the highly accreting state ($t=3240$), magnetic field lines of opposite polarity are squeezed together near the black hole, resulting in the formation of current sheets. These current sheets are prone to reconnection events. Furthermore, the turbulent $B_{\phi}$ component changes polarity and facilitates reconnection. In the ideal MHD models, the reconnection events are governed by the grid-dependent numerical resistivity. The distribution of the toroidal component of the magnetic field $B_{\phi}$ (a) and the magnetisation $\sigma=b^2/\rho$ (b), as well as the magnetic field lines close to the black hole, are shown in Fig.~\ref{fig-bphi} for the reference model at time $t=4380$. The figure clearly depicts the formation of islands of opposite-polarity magnetic fields (Fig.~\ref{fig-bphi}a), in which low-magnetised matter is trapped within highly magnetised matter (Fig.~\ref{fig-bphi}b). These panels confirm the formation of plasmoids and chains of plasmoids in our simulations. Such chains of plasmoids strongly suggest the presence of active tearing-mode instabilities.
These reconnection events also create new poloidal field lines penetrating the equatorial plane (see Fig. \ref{fig-zoom}b). These contribute a strong magnetic tension force opposing the accretion of the flow. As a result, accretion is halted, and the accretion rate drops by orders of magnitude (for example, see the solid black lines in Figs. \ref{fig-mdot_line}a and \ref{fig-mdot_line}b).
The increased magnetic tension force is momentarily balanced by the ram pressure of the flow at a certain radius $r_{\rm max}$.
Consequently, for the reference model, the inner edge of the accretion disc recedes up to a radius $r_{\rm max}\sim20$,
which is half the initial truncation radius $r_{\rm tr}=40$.
Subsequently, the matter at the inner edge again accretes towards the central black hole due to the loss of angular momentum and gravitational attraction, forming a highly accreting flow (see the inner edge of the disc in Fig. \ref{fig-rho}b-\ref{fig-rho}e). We observe this formation and depletion of the highly accreting flow repeatedly until the end of our simulation. As a result, the inner part of the accretion disc keeps oscillating between a highly accreting state and a low accreting state.
Note that the radius $r_{\rm max}$ does not remain constant; it decreases with simulation time. For example, the edge of the accretion disc for the reference model recedes only up to $r_{\rm max}\sim17$ at simulation time $t\sim10000$.
During the oscillation of the inner edge of the accretion disc, the inflow (high-density) and outflow (low-density) acquire shear in the transition layer. Due to this shear, the layer may develop the Kelvin--Helmholtz instability (KHI). In Fig. \ref{fig-str}, we plot the logarithmic density profile and the velocity streamlines for the reference model at simulation time $t=4220$. The red line in the figure corresponds to the $u^r=0$ contour. The regions with $u^r<0$ and $u^r>0$ are populated with inflowing and outflowing streamlines, respectively. We observe the formation of velocity vortices at the boundary layers, which is a typical signature of active KHI. The inflow-outflow region does not reach a steady state in our simulation; instead, it continues to vary with time. In Fig.~\ref{fig-strtime}, for example, we show the temporal evolution of the density profile and velocity streamlines at three different times for the reference model ($t=5300, 5320$, and $5340$). The formation of a typical velocity vortex is depicted in the figure. The velocity vortex begins to form in the region where the flow has a high density gradient. In addition, the inflowing and outflowing streamlines mix along the contour $u^r=0$ in the same region. These findings support the presence of KHI in the boundary layer. It should be noted that the only requirement for KHI is a velocity shear; a density difference is not required \citep[for more information, see][]{Matsuoka2014}. Hence, the boundary layer between the relativistic jet and the inflowing matter (along the $u^r=0$ contour) is vulnerable to KHI, and KHI vortices keep forming throughout our simulation runs. The infalling matter interacts with the fast jet ejected from close to the black hole through these vortices, which can facilitate mixing and also help in mass loading the jet.
\subsection{Evolution with resistivity}
The large-scale qualitative features of the resistive models are quite similar to those of the reference model (the ideal MHD case). We observe the accretion of matter and oscillations of the inner edge of the accretion disc between the event horizon and $r_{\rm max}$. Nonetheless, the time period of the oscillations depends on the resistivity. Magnetic resistivity essentially helps in diffusing matter through the magnetic field and also impacts reconnection events in the magnetic field lines \citep[e.g.,][]{Vourellis-etal2019,Ripperda-etal2019, Ripperda-etal2020}. In Fig. \ref{fig-resis}, we show the density profile and the magnetic field lines for our resistive models: (a) rl-2D40AH ($\eta=1\times10^{-3}$) at $t=4680$ and (b) rh-2D40AH ($\eta=5\times10^{-3}$) at $t=5190$. The times are chosen such that the accreting matter is connected to the event horizon for both models. Comparing with the reference model (Fig. \ref{fig-zoom}a), for the resistive cases we do not observe the formation of plasmoids due to tearing-mode instabilities. Such plasmoids form only in domains with Lundquist number $S\gtrsim 10^{4}$, where the reconnection rate is independent of $S$; this regime is commonly known as the `fast' reconnection mode or ideal tearing mode \citep[e.g.,][]{Bhattacharjee-etal2009, Huang-Bhattacharjee2010, Loureiro-Uzdensky2016, Striani-etal2016, Ripperda-etal2020}. With increasing magnetic resistivity, the plasmoid instability is suppressed. A similar observation was reported earlier by \cite{Ripperda-etal2020}.
The increase in resistivity also suppresses the MRI-induced turbulence \citep[for example, see][]{Qian-etal2017}. As a result, the turbulence in the flow weakens with increasing resistivity, as is evident from Figs. \ref{fig-zoom}a, \ref{fig-resis}a, and \ref{fig-resis}b.
A detailed comparison of ideal and resistive MHD flow close to the black hole is shown in Fig. \ref{fig-resis}c (ideal MHD) and \ref{fig-resis}d (resistive MHD) in terms of the magnetization $(\sigma=b^2/\rho)$ and the magnetic field lines. The magnetization profile is quite similar for both models, with highly magnetized $(\sigma>1)$ regions away from the equatorial plane and weakly magnetized $(\sigma<1)$ matter in the equatorial plane. The figures clearly show the turbulent nature of the flow and the formation of plasmoids due to the reconnection of oppositely directed magnetic field lines in the ideal model. In the resistive MHD model, however, the flow is not turbulent, and we do not observe the formation of plasmoids. Note that we consider the resistivity $\eta$ to be constant throughout the simulation domain. In general, resistivity can be a function of space and time. Recently, with such resistivity profiles, it has been shown that resistivity plays an important role in the launching of outflows from the accretion disc in resistive GRMHD \citep{Qian-etal2018}, in the generation of a turbulent outflow due to reconnection \citep{Vourellis-etal2019}, and in the magnetic field amplification by a mean-field $\alpha^2$-$\Omega$ disc dynamo \citep{Vourellis-Fendt2021}.
\subsection{A transient disc structure}
In this section, we study the slow-magnetosonic (hereafter, magnetosonic) behaviour of the truncated accretion disc. To do so, in Fig. \ref{fig-ucon1}, we show the radial four-velocity ($u^r$), following the same temporal snapshots as in Fig. \ref{fig-rho}.
The boundary between inflow and outflow is marked by the red contour ($u^r=0$) in Fig. \ref{fig-ucon1}. As the disc is truncated, the region with $u^r<0$ shrinks and becomes fainter (see Fig. \ref{fig-ucon1}b), indicating negligible inflow, whereas the region with $u^r>0$ grows and becomes darker, showing a large amount of outflow. With time, the region with $u^r<0$ grows again, and we also observe an increase in the radial velocity (dark blue). We further observe a sharp transition of the radial velocity close to the black hole (see Fig. \ref{fig-ucon1}c-d). Such a change in velocity indicates the presence of a shock transition in the flow. Previously, many semi-analytic hydrodynamic studies have hinted at the formation of such shocks and suggested that shock solutions are viable for understanding the radiative and timing properties of astrophysical sources (\cite{Chakrabarti1989, Chakrabarti2011, Aktar-etal2015, Kumar-Indranil2017, Dihingia-etal2018, Dihingia-etal2020, Dihingia-etal2019a, Dihingia-etal2019b, Das-etal2021}, etc.). Similarly, semi-analytic MHD studies have also suggested the formation of magnetosonic shocks in the vicinity of the black hole \citep[e.g.,][]{Takahashi-etal2002,Takahashi-etal2006,Fukumura-etal2007}.
We show the distribution of the slow magnetosonic Mach number ($M_s=u_p/a_s$; hereafter, Mach number), following the same temporal snapshots as in Fig. \ref{fig-rho}, where the poloidal velocity is $u_p^2= u^ru_r + u^\theta u_\theta$ and the slow magnetosonic speed is
\begin{align}
a_s^2 = \frac{1}{2}\left(a_0^2 + a_{f}^2\right) - \frac{1}{2}\sqrt{\left(a_0^2 + a_{f}^2\right)^2 - 4 a_0^2 a_{f}^2 \cos^2\xi}.
\end{align}
The sound speed and the poloidal Alfv\'en speed are given by $a_0^2=\gamma p/\rho h$ and $a_{f}^2= B_p^2/(B_p^2 + \rho h)$, respectively, where $B_p^2 = B_rB^r + B_\theta B^\theta$ and $h$ is the specific enthalpy of the flow. Finally, $\xi$ is the angle between the poloidal velocity vector and the poloidal magnetic field vector.
In the figure, the blue line corresponds to contour $M_s=1$.
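The slow magnetosonic speed above is straightforward to evaluate numerically. The following minimal sketch (the function names are ours, not part of the simulation code) checks the standard limiting behaviour of the quoted dispersion relation: for flow parallel to the field ($\xi=0$) the slow speed reduces to $\min(a_0, a_f)$, and for perpendicular flow ($\xi=\pi/2$) it vanishes.

```python
import math

def slow_magnetosonic_speed(a0, af, xi):
    """Slow magnetosonic speed from the quoted relation:
    a_s^2 = (a0^2 + af^2)/2 - sqrt((a0^2 + af^2)^2 - 4 a0^2 af^2 cos^2 xi)/2.
    a0: sound speed, af: poloidal Alfven speed, xi: angle between the
    poloidal velocity and the poloidal magnetic field."""
    s = a0**2 + af**2
    disc = s**2 - 4.0 * a0**2 * af**2 * math.cos(xi)**2
    return math.sqrt(0.5 * (s - math.sqrt(disc)))

def slow_mach_number(u_p, a_s):
    """M_s = u_p / a_s, with u_p the poloidal velocity magnitude."""
    return u_p / a_s

# Limiting checks: xi = 0 gives min(a0, af); xi = pi/2 gives ~0.
print(slow_magnetosonic_speed(0.1, 0.3, 0.0))        # ~0.1
print(slow_magnetosonic_speed(0.1, 0.3, math.pi/2))  # ~0.0
```

These limits make the sub-to-super-magnetosonic transitions discussed below easy to diagnose cell by cell from $(a_0, a_f, \xi)$.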
In Figs. \ref{fig-mach}a and \ref{fig-mach}e, we observe that the accretion flow around the equatorial plane is super slow-magnetosonic (hereafter, super-magnetosonic; $M_s>1$) where the accretion flow is connected to the event horizon. Far from the equatorial plane, we observe a region ranging from sub slow-magnetosonic (hereafter, sub-magnetosonic; $M_s<1$) to super-magnetosonic ($M_s>1$) flow surrounding the rotation axis of the black hole (within $\theta\sim 25^\circ - 45^\circ$ for the upper quadrant).
In this region, the flow is super-magnetosonic ($M_s>1$) both very far from and very close to the black hole; in between, we observe a sub-magnetosonic ($M_s<1$) region (dark violet), with smooth transitions between the layers. With time, we observe the formation of a sub-magnetosonic flow close to the black hole (Fig. \ref{fig-mach}b, see the dark violet region). As the flow moves away from the equatorial plane, it smoothly transitions from sub-magnetosonic ($M_s<1$) to super-magnetosonic ($M_s>1$) velocity.
In Figs. \ref{fig-mach}c and \ref{fig-mach}d, we observe a super-magnetosonic region $(M_s>1)$ that undergoes a sharp transition to sub-magnetosonic flow close to the black hole $(r\sim15)$, indicating the presence of shocks in our simulations.
For the sake of clarity, we explain this along a parabolic streamline $y=cx^{2.5}$, with $c=0.009$ (depicted by the solid black line in Figs. \ref{fig-mach}c and \ref{fig-mach}d). Along the streamline, the flow is super-magnetosonic very close to the black hole ($r\lesssim2$), sub-magnetosonic within $2\lesssim r \lesssim 15$, and super-magnetosonic again beyond a shock transition (for details, see Fig. \ref{fig-shock}e). Farther out, the flow undergoes multiple transitions between these regimes and finally ends up super-magnetosonic for $r\gtrsim60$.
In Fig. \ref{fig-shock}, we show the flow properties along the parabolic streamline ($y=c x^{2.5}$) for model 2D40AH at time $t=4220$. Panels (a), (b), (c), (d), and (e) correspond to the radial four-velocity $(u^r)$, density $(\rho/\rho^{\rm tr})$, pressure $(p/p^{\rm tr})$, Bernoulli parameter $(-hu_t)$, and Mach number ($M_s$) profiles, respectively. The panels suggest that the flow has a shock transition between $r_s=15.0-15.5$, shown in the figure by the gray vertical line. In Fig. \ref{fig-shock}a, we observe that at the shock the radial velocity drops sharply from $u^r_-=0.2393$ to $u^r_+=0.0382$. Here, `$-$' and `$+$' correspond to quantities obtained before and after the shock transition, respectively. As an inset in Fig. \ref{fig-shock}a, we show the transition of the radial velocity $(u^r)$ across the shock.
Away from this shock transition, the radial velocity of the flow increases closer to the black hole (smaller $r$). At a larger radius far from the shock $(r \simeq 58.0$, see the red dot in Fig. \ref{fig-shock}a), the radial velocity becomes zero and then increases in the opposite direction as the flow moves away from the black hole. This point divides the inflow from the outflow; the red contours show a similar inflow-outflow division in Fig. \ref{fig-ucon1}. Similarly, in Fig. \ref{fig-shock}b, we observe that the density profile also jumps across the shock front, from $\rho_-=2.13\times10^{-7}\rho^{\rm tr}$ to $\rho_+=9.08\times10^{-7}\rho^{\rm tr}$. This indicates a strongly compressive shock with a density jump $R_\rho=\rho_+/\rho_-=4.26$. Similar to the density, the pressure also increases across the shock, with a pressure jump $R_p=p_+/p_-\sim 10.06$ (see Fig. \ref{fig-shock}c). Consequently, the temperature jump across the shock is $R_\Theta(=R_p/R_\rho)=2.36$.
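As a quick arithmetic consistency check, the quoted jump ratios satisfy $R_\Theta = R_p/R_\rho$. A minimal sketch using the numbers above (variable names are ours):

```python
# Pre-/post-shock values quoted along the parabolic streamline:
rho_m, rho_p = 2.13e-7, 9.08e-7    # density, in units of rho^tr
R_rho = rho_p / rho_m              # density (compression) jump
R_p = 10.06                        # quoted pressure jump
R_Theta = R_p / R_rho              # temperature jump, since Theta ~ p/rho

print(round(R_rho, 2))             # 4.26
print(round(R_Theta, 2))           # 2.36
```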
However, the specific energy or the Bernoulli parameter $(-hu_t)$ does not change significantly across the shock. This fact essentially suggests the non-dissipative nature of the shock (see Fig. \ref{fig-shock}d). Across the shock, the value of the Bernoulli parameter is less than unity $(-hu_t\sim0.98)$, indicating that the immediate post-shock flow is gravitationally bound $(-hu_t<1)$.
As is evident from panel (d) of Fig. \ref{fig-mach}, the Mach number ($M_s$) profile along the streamline in Fig. \ref{fig-shock}e shows three magnetosonic points, at $r= 2.3, 43.9,$ and $65.0$.
These critical points are denoted by red dots in Fig. \ref{fig-shock}e. The gray horizontal line in Fig. \ref{fig-shock}e corresponds to $M_s=1$, marking the division between the sub-magnetosonic and super-magnetosonic regions. For the outgoing branch $(u^r>0, r>58.0)$, the sub-magnetosonic flow becomes super-magnetosonic after crossing the magnetosonic point at $r=65.0$. The in-going branch $(u^r<0, r<58.0)$ becomes super-magnetosonic after crossing the magnetosonic point at $r=43.9$ but becomes sub-magnetosonic again at the shock transition $(r_s)$. Finally, the flow becomes super-magnetosonic after crossing the magnetosonic point at $r=2.3$. The Mach number jumps from $M_{s,-}=1.98$ to $M_{s,+}=0.49$ across the shock front.
To understand the dynamics of the shock, in Fig. \ref{fig-shockdy}, we show the temporal evolution of the radial velocity profile for 2D40AH from $t=4200$ to $t=4350$. The shock locations at times $t=4200, 4250, 4300, 4310, 4320, 4330, 4340$, and $4350$ are $r_s=15.40, 14.98, 14.32, 13.83, 13.20, 12.33, 11.09$, and $4.44$, respectively (here, we consider the mean value). Thus, the shock is not steady: the shock front moves towards the black hole with time, and its velocity also increases with time. Finally, the shock disappears with a huge outflow. After the outflow, as advection starts to dominate, the shock appears again. We observe the appearance and disappearance of the shock throughout the simulation time in all the simulation models. This essentially suggests that the shock transition in the truncated accretion disc is a transient phenomenon.
In summary, we observe that the flow creates a hot, comparatively high-density region around the black hole during the shock transition. Also, note that the density of this post-shock region is much lower than that of the thin disc. This hot region can be associated with an extended corona surrounding the black hole. Many earlier authors anticipated the formation of such an extended corona from the truncated disc \citep{Eardley-etal1975, Ichimaru1977}. With temporal evolution, the size of the corona also changes as the shock front moves towards the black hole. The variations in the size of the corona may lead to fluctuations in the electromagnetic emission. As a result, these fluctuations in emission could be useful in understanding variability in astrophysical sources.
\section{Temporal characteristics}
In this section, we study the long-term temporal evolution of the truncated accretion disc in terms of integrated quantities and compare their properties for different simulation models (ideal MHD).
The mass accretion rate $(\dot{M}_{\rm acc})$, the mass flux through the funnel $(\dot{M}_{\rm jet})$, the mass flux through the wind $(\dot{M}_{\rm wind})$, and the magnetic flux accumulated at the event horizon $(\dot{\phi}_{\rm BH})$ are shown in Fig. \ref{fig-fluxes} for different simulation models. $\dot{M}_{\rm acc}$ is calculated at the event horizon, while $\dot{M}_{\rm jet}$ and $\dot{M}_{\rm wind}$ are calculated at radius $r=50$ following \cite{Porth-etal2017, Nathanail-etal2020}.
Note that, in $\dot{M}_{\rm jet}$, we only consider highly magnetized, relativistic outflow with $\sigma>1$, $-hu_t>1$, or Poynting-flux efficiency factor $\xi>2$, where $\xi=(-T^r_t - \rho u^r)/\rho u^r$. In $\dot{M}_{\rm wind}$, on the other hand, we consider outflow with $\sigma<1$, $-hu_t>1$, or $\xi<2$.
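One simple reading of these selection criteria can be written as a small per-cell helper. The function below is illustrative only (the name and the exact and/or interpretation of the comma-separated conditions are ours); it treats bound material ($-hu_t\le1$) as neither jet nor wind.

```python
def classify_outflow(sigma, bernoulli, xi):
    """Illustrative partition of material at the extraction radius into
    'jet' (funnel) and 'wind' following the quoted criteria.
    sigma: magnetization b^2/rho; bernoulli: -h u_t;
    xi: Poynting efficiency (-T^r_t - rho u^r) / (rho u^r)."""
    if bernoulli <= 1.0:
        return "bound"          # gravitationally bound, not counted as outflow
    if sigma > 1.0 or xi > 2.0:
        return "jet"            # highly magnetized, relativistic funnel flow
    return "wind"               # sigma < 1 and xi < 2

print(classify_outflow(sigma=5.0, bernoulli=1.2, xi=3.0))  # jet
print(classify_outflow(sigma=0.3, bernoulli=1.1, xi=0.5))  # wind
```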
In the left panels, we study the role of initial plasma-$\beta$ parameters ($\beta_{\rm tr}$), and in the right panels, we investigate the role of truncation radius on these quantities.
With the transport of angular momentum, the matter at the inner edge initiates the accretion process. Once the accreted matter reaches the event horizon, we observe a spike in the accretion rate $(\dot{M}_{\rm acc})$ profile (see Fig. \ref{fig-fluxes}a). The matter drags the magnetic field lines along with it, and we observe magnetic field lines rooted in the ergosphere. Such a magnetic field structure is required for an active Blandford-Znajek (BZ) process \citep{Blandford-Znajek1977, Komissarov-Barkov2009}. To better understand the properties of the relativistic jet, in Fig. \ref{fig-jet}, we show the total energy flux ($P_{\rm jet}$) and the electromagnetic energy flux ($\dot{E}_{\rm jet}$) through the jet, along with $\dot{M}_{\rm jet}$, in code units for model 2D40A from $t=6500$ to $t=8500$. $P_{\rm jet}$ and $\dot{E}_{\rm jet}$ are calculated following \cite{McKinney-etal2012,Nathanail-etal2020},
\begin{align}
P_{\rm jet}= \int \left(-T^r_t -\rho u^r\right)\sqrt{-g} d\theta d\phi,
\end{align}
\begin{align}
\dot{E}_{\rm jet} = \int -{T^{EM}}^r_t\sqrt{-g} d\theta d\phi,
\end{align}
where the superscript EM denotes only the electromagnetic part of the energy-momentum tensor (for details, see \cite{McKinney-etal2012}). The integration is performed only in the funnel region at $r=50$, with $\sigma>1$, $-hu_t>1$, or $\xi>2$. We observe the peak values of $\dot{M}_{\rm jet}$, $P_{\rm jet}$, and $\dot{E}_{\rm jet}$ at the same time (see Fig. \ref{fig-jet}), suggesting that the high mass flux through the funnel is Poynting-dominated and highly relativistic. These facts strongly suggest that the active BZ process results in a higher value of $\dot{M}_{\rm jet}$ (see Fig. \ref{fig-fluxes}b).
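Numerically, such surface integrals reduce to a masked sum over the $(\theta,\phi)$ cells of a spherical shell. The sketch below is a toy discretization under that assumption; all array names are placeholders for simulation output, not the actual code's variables.

```python
import numpy as np

def jet_power(flux_density, sqrt_neg_g, in_funnel, dtheta, dphi):
    """Discrete version of P_jet = ∫ (-T^r_t - rho u^r) sqrt(-g) dθ dφ,
    restricted to funnel cells.
    flux_density: (-T^r_t - rho u^r) sampled on the (theta, phi) shell;
    sqrt_neg_g: metric determinant factor per cell;
    in_funnel: boolean mask (e.g. sigma > 1, -h u_t > 1, or xi > 2)."""
    return np.sum(flux_density * sqrt_neg_g * in_funnel) * dtheta * dphi

# Toy shell: 2 x 2 cells, one funnel cell carrying unit flux density.
f = np.array([[1.0, 0.0], [0.0, 0.0]])
g = np.ones((2, 2))
mask = f > 0
print(jet_power(f, g, mask, 0.1, 0.1))  # ~0.01
```

The same sum with the electromagnetic flux $-{T^{EM}}^r_t$ in place of `flux_density` gives $\dot{E}_{\rm jet}$.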
Note that the density in the funnel region is very low and depends on the density floor model (see the black color in Fig. \ref{fig-rho}).
Similarly, to understand the peaks in the disc-wind profiles (Fig. \ref{fig-fluxes}c), in Fig. \ref{fig-bphibz} we show the ratio of the toroidal component $(\sqrt{B_\phi B^\phi})$ to the poloidal component $(B_p=\sqrt{B^rB_r + B_\theta B^\theta})$ of the magnetic field for the reference model at simulation time $t=3240$. The black solid line in the figure represents the $\sqrt{B_\phi B^\phi}/B_p=1$ contour. Depending on the contribution of these two components, the disc wind can be a Blandford-Payne-type wind or a toroidally $(B_{\phi})$ dominated disc wind \citep[for discussion, see][]{Dihingia-etal2021}. The figure essentially suggests that the high-density matter in the equatorial plane has a much stronger toroidal than poloidal magnetic field component $(\sqrt{B_\phi B^\phi}/B_p>1)$; on the equatorial plane, the ratio ranges over $\sqrt{B_\phi B^\phi}/B_p\sim 10-100$. Thus, the matter close to the black hole is subjected to a pressure gradient due to the toroidal component of the magnetic field, resulting in a strongly toroidal-field $(B_{\phi})$ dominated disc wind.
Consequently, we observe a higher value of $\dot{M}_{\rm wind}$ in this stage (see Fig. \ref{fig-fluxes}c).
In the highly accreting stage, magnetic flux accumulates around the event horizon much faster. Once the magnetic flux accumulated around the horizon reaches $\dot{\phi}_{\rm BH} > 15$ (see the gray horizontal line), the flow achieves a magnetically arrested disc (MAD) state (\cite{Tchekhovskoy-etal2011}, see Fig. \ref{fig-fluxes}d). As discussed in section 3.1, the magnetic flux does not keep building up around the black hole; eventually, it is lost via an eruption event.
The eruption events give rise to strong flares in the jet rate as well as in the wind rate (see Fig. \ref{fig-fluxes}b-c). Such eruption events are very helpful in understanding the observed near-infrared (NIR) flares from Sgr A* \citep{Dexter-etal2020, Chatterjee-etal2020a, Porth-etal2021}. After an eruption event, the accretion disc becomes truncated again and the accretion rate at the event horizon ($\dot{M}_{\rm acc}$) drops to its lowest value ($\dot{M}_{\rm acc}\lesssim10^{-4}$, see Fig. \ref{fig-fluxes}a). Due to the lack of matter close to the black hole, the mass fluxes through the jet and wind also become negligible (see Fig. \ref{fig-fluxes}c-d). Subsequently, the magnetic flux accumulated around the horizon drops to $\dot{\phi}_{\rm BH}< 15$. Over time, the flow again accumulates magnetic flux around the horizon until $\dot{\phi}_{\rm BH}> 15$ (see Fig. \ref{fig-fluxes}d). We observe this quasi-periodic oscillation of the magnetic flux throughout our simulations. Consequently, the other mass flux rates $(\dot{M}_{\rm acc}, \dot{M}_{\rm jet},~{\rm and}~ \dot{M}_{\rm wind})$ also show peak values in a similar manner.
We observe a time lag in the peak value in accretion rate $(\dot{M}_{\rm acc})$, mass flux through funnel $(\dot{M}_{\rm jet})$, and mass flux through wind $(\dot{M}_{\rm wind})$ profiles. The time taken by the jet and the disc-wind from the launching site to reach radius $r = 50$ appears as the observed time lag in Fig. \ref{fig-fluxes} (for details see Fig. 15 of \cite{Dihingia-etal2021}).
To understand the statistical behaviour of these oscillations, we plot the power density spectrum (PDS) of the accretion rate profile $(\dot{M}_{\rm acc})$ for different simulation models in Fig. \ref{fig-PDS}. In Fig. \ref{fig-PDS}a, the variation of the PDS is shown for different values of $\beta_{\rm tr}$, and in Fig. \ref{fig-PDS}b, the same is shown for different values of the truncation radius. In the figure, the power is plotted in arbitrary units, while the frequency is plotted in Hz by converting the simulation time to physical units (seconds). Although the peaks in the accretion rate profile are not statistically rich, owing to the limited run time of the simulation ($t\sim12000-14000$ code units, or $t\sim0.6-0.7\,(M_{\rm BH}/10M_{\odot})$ s in physical units), we observe a frequency corresponding to maximum power in the PDS $(\nu_{\rm QPO, max})$. This essentially suggests the quasi-periodic nature of the oscillations.
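The quoted run time in physical units can be verified with a short conversion, assuming (our assumption, consistent with the quoted numbers) that the code time unit is the gravitational time $t_g = GM_{\rm BH}/c^3$; the constants are standard cgs values.

```python
# Convert code time (assumed unit: t_g = G M / c^3) to seconds.
G = 6.674e-8        # cm^3 g^-1 s^-2
c = 2.998e10        # cm s^-1
Msun = 1.989e33     # g

def t_g(m_bh_solar):
    """Gravitational time unit G M / c^3 in seconds."""
    return G * m_bh_solar * Msun / c**3

tg = t_g(10.0)                        # ~4.9e-5 s for a 10 Msun black hole
print(12000 * tg, 14000 * tg)         # ~0.59 s to ~0.69 s of run time

# QPO frequencies quoted as Hz x (10 Msun / M_BH) scale inversely with mass:
print(13.5 * (10.0 / 100.0))          # ~1.35 Hz for a 100 Msun black hole
```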
The strength of the magnetic field is the primary driver of activity in the accretion flow. With increasing magnetic field strength (lower $\beta_{\rm tr}$), the magnetic tension force due to the reconnected field lines also increases, and a stronger ram pressure is required to overcome it. Consequently, the accreting matter needs more time to reach the event horizon, and the time period of oscillation increases with the strength of the magnetic field (lowering of $\beta_{\rm tr}$). For the models with $\beta_{\rm tr}=100$, $500$, and $1000$, the $\nu_{\rm QPO, max}$ of the PDS are $\sim 13.5$, $16.0$, and $20\,{\rm Hz}\times (10 M_{\odot}/M_{\rm BH})$, respectively.
With the increase of the truncation radius $(r_{\rm tr})$, the accretion starts from a larger distance from the black hole and with a higher value of specific angular momentum. The flow loses the excess angular momentum via different channels (e.g., MRI, disc-winds) and moves towards the black hole.
Therefore, the infall time of the matter at the inner edge of the truncation radius increases with the truncation radius.
Consequently, mass accretion between the event horizon and the truncation radius takes longer for models with a larger truncation radius, and the accumulation of magnetic flux happens at a much lower rate. Eventually, the time period of oscillation increases with the truncation radius. For the models with $r_{\rm tr}=40$, $50$, and $60$, the $\nu_{\rm QPO, max}$ of the PDS are $\sim 13.5$, $9.5$, and $7.0\,{\rm Hz}\times (10 M_{\odot}/M_{\rm BH})$, respectively.
To study the role of resistivity in the dynamical properties of the truncated disc, we plot the accretion rate as a function of simulation time for models 2D40AH (ideal MHD), rl-2D40AH ($\eta=1\times10^{-3}$), and rh-2D40AH ($\eta=5\times10^{-3}$) in Fig. \ref{fig-comp}a. The figure shows that the temporal evolution of the accretion rate differs between models with different resistivities. In resistive flow, the matter can diffuse through the magnetic field without preserving the frozen-in condition of ideal MHD. In such a situation, a comparatively lower ram pressure is required to overcome the magnetic tension force in the disc mid-plane. As a result, the time period of oscillation decreases with increasing magnetic resistivity, and the QPO frequency $(\nu_{\rm QPO, max})$ is therefore expected to increase with resistivity.
However, in the ideal MHD case, due to the active `fast' reconnection mode, we observe a much shorter time period of oscillations than in the resistive models. For the same reason, the accretion peaks are sharper than those of the other models. We also note that, at time $t\sim9000$, the radius at which the magnetic tension force balances the ram pressure is $r_{\rm max}\sim37$ for magnetic resistivity $\eta=1\times10^{-3}$, decreasing to $r_{\rm max}\sim33$ for $\eta=5\times10^{-3}$.
We see sharp peaks in the mass and magnetic flux rates in Figs.~\ref{fig-fluxes} and \ref{fig-comp}.
We stress that these simulations are carried out in an axisymmetric framework. To compare these results to a three-dimensional (3D) evolution, we ran a 3D simulation using the same setup as the reference run, but with a modest resolution of $224\times96\times128$.
Clearly, the resolution of our ideal MHD 3D run is insufficient to capture some critical details of the system, such as magnetic field line reconnection. This exemplary 3D simulation nonetheless helps us assess how robust the features observed in the axisymmetric accretion flow are \citep[e.g.,][]{Porth-etal2021,Ripperda-etal2022}.
We do not see such sharp peaks in the mass and magnetic flux profiles in 3D, unlike in the ideal axisymmetric runs.
Instead, we see broader peaks caused by the formation of accretion {\it fingers} (for a detailed discussion, see Appendix B).
\section{Radiative characteristics}
Radiative transfer calculations are essential to correlate our simulations with the astrophysical observations in the electromagnetic paradigm. In this section, we study the radiative signature of the truncated accretion disc during the oscillation of inner accretion flow.
\subsection{Post-processing tool and scaling}
We incorporate the GRRT post-processing module \texttt{RAPTOR} and calculate the near horizon emission from the reference run. \texttt{RAPTOR} reproduces the appearance and spectrum of black hole sources to a distant observer considering the effects due to a strong gravitational field (viz. gravitational lensing, redshift, and relativistic beaming) \citep{Bronzwaer-etal2018}. \texttt{RAPTOR} also allows us to incorporate different radiative processes to render emission from the accretion disc.
In this study, we aim to model emission signatures in the hard-intermediate state. Accordingly, we consider the thermal synchrotron and Bremsstrahlung processes as the sources of emission and neglect the black-body component, which may be necessary for the soft and soft-intermediate states of an outburst.
We calculate the spectrum for a black hole with mass $M=10M_{\odot}$. To model the electron temperature from the flow temperature, we use the $R$-$\beta$ prescription following \cite{Moscibrodzka-etal2016}, where we choose $R_l=1$ and $R_h=60$ following \cite{Mizuno-etal2021}. We collect all the emission from $r\lesssim 60$ on a screen, fix the line-of-sight angle at $i=60^{\circ}$, and place the screen at a distance $r_{\rm cam}=10^4r_g$. To mimic the accretion flow around a BH-XRB, we scale the density to cgs units with the rest-mass density scaling factor $\rho_{\rm unit}=M_{\rm unit}/r_g^3$, where we take $M_{\rm unit}=10^{13}$ g. Accordingly, we scale the energy density, magnetic field strength, and number density to cgs units using $U_{\rm unit}=\rho_{\rm unit}c^2$, $B_{\rm unit}=c\sqrt{4\pi\rho_{\rm unit}}$, and $N_{\rm unit}= \rho_{\rm unit}/(m_e + m_p)$, respectively, where $m_e$ and $m_p$ are the electron and proton masses.
With this, the accretion rates calculated when the inner edge is attached to and detached from the event horizon are of order $\dot{M}_{\rm acc}\sim10^{-3}$ and $\dot{M}_{\rm acc}\sim10^{-5}$ in Eddington units ($\dot{M}_{\rm Edd}=1.44\times10^{17} \left(M/M_{\odot}\right)$ g s$^{-1}$), respectively. Note that the spectral properties of an accretion disc depend on the mass of the black hole, the accretion rate, the line-of-sight angle, non-thermal particles, the $R_l$ and $R_h$ parameters, etc. \citep[e.g.,][]{Bandyopadhyay-eyal2021,Mizuno-etal2021,Fromm-etal2021}. In this study, we do not intend to perform parametric surveys; instead, we study the qualitative properties of the emission during the oscillation of the inner edge of the accretion disc. Therefore, we consider only one set of parameters throughout.
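The scaling factors above are simple to evaluate. The sketch below computes them for the stated choices ($M=10M_\odot$, $M_{\rm unit}=10^{13}$ g) together with the quoted Eddington normalization; the cgs constants are standard values we supply, and the variable names are ours.

```python
import math

# Standard cgs constants (our values, accurate to ~0.1%)
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33
m_p, m_e = 1.673e-24, 9.109e-28            # proton and electron masses [g]

M_bh = 10.0 * Msun
r_g = G * M_bh / c**2                      # gravitational radius [cm]
M_unit = 1.0e13                            # [g], as chosen in the text

rho_unit = M_unit / r_g**3                 # rest-mass density scale [g cm^-3]
U_unit = rho_unit * c**2                   # energy density scale [erg cm^-3]
B_unit = c * math.sqrt(4 * math.pi * rho_unit)  # magnetic field scale [G]
N_unit = rho_unit / (m_e + m_p)            # number density scale [cm^-3]

# Eddington normalization as quoted in the text:
mdot_edd = 1.44e17 * (M_bh / Msun)         # [g s^-1]

print(f"r_g      ~ {r_g:.2e} cm")          # ~1.5e6 cm
print(f"rho_unit ~ {rho_unit:.2e} g/cm^3")
print(f"Mdot_Edd ~ {mdot_edd:.2e} g/s")    # 1.44e18 g/s for 10 Msun
```

A code-unit accretion rate is then converted to Eddington units by multiplying by $\rho_{\rm unit} r_g^2 c$ (the natural mass-flux scale under these conventions) and dividing by $\dot{M}_{\rm Edd}$.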
\subsection{Synthetic emission spectra}
In Fig. \ref{fig-spec}, we plot the emission spectrum for the reference run at different simulation times, where the thin and thick lines show the contribution from the thermal synchrotron process alone and that including Bremsstrahlung, respectively. In the figure, $\nu F_\nu$ is expressed in units of Jy~Hz. Different simulation times are marked on the figure. During this time range, the accreting matter moves from the radius $r_{\rm max}$ to the event horizon. The emission spectra peak around $\nu\sim 10^{16}-10^{18}\,{\rm Hz}$, which corresponds to thermal synchrotron emission and dominates up to $\nu\lesssim10^{19}\,{\rm Hz}$, whereas the high-energy part of the spectrum $(\nu\gtrsim 10^{19}\,{\rm Hz})$ is dominated by Bremsstrahlung emission.
The Bremsstrahlung emission shows a distinct change of spectral shape within $\nu\sim10^{18}-10^{20}\,{\rm Hz}$, which signifies the presence of regions with sharp changes in temperature and density. At time $t=7500$ (solid black line), the accretion disc is truncated, and the inner edge of the thin disc is far from the black hole. Overall, the flow is cold, and the emission comes only from the thin Keplerian disc and weak disc winds. Consequently, we observe lower emission in the high-energy region.
As the flow gradually loses angular momentum, the inner edge of the disc approaches the black hole. During this process, the matter at the inner edge becomes hot, accumulates more magnetic flux, and the disc winds also become stronger. This results in higher intensities of thermal synchrotron and Bremsstrahlung emission. We also observe that the thermal synchrotron peak moves towards higher frequencies as matter approaches the black hole. As the flow reaches close to the event horizon ($t=8500$, see the green dot-dashed line), the emission above $h\nu \gtrsim 0.1\,{\rm keV}$ $(\nu\gtrsim 10^{16}\,{\rm Hz})$ increases by two orders of magnitude. In comparison, the lower-energy part remains unaltered, as the weakly magnetized outer part of the disc remains steady during this period. The Bremsstrahlung emission, however, increases by only one order of magnitude during this process.
To understand the radiative properties during the eruption events, in Fig. \ref{fig-specr}, we plot the emission spectrum for different simulation times (marked on the figure) when the inner edge of the disc is receding from the black hole. The thin and thick lines in the figures are drawn following a similar convention as in Fig. \ref{fig-spec}. In the figure, we observe the opposite trend to that of Fig. \ref{fig-spec}. As the truncated disc recedes, it shows strong outflow in terms of jet and disc-wind (see Fig. \ref{fig-fluxes}). With the outflow, accretion flow loses the advected magnetic flux. Consequently, the magnetic field strength decreases, and we observe that the thermal synchrotron emission decreases with time. Also, the synchrotron peak shifts towards the lower energy range.
We observe that most of the emission during the receding phase is contributed by the thermal synchrotron process, even in the high-energy region $(\nu\gtrsim10^{19}\,{\rm Hz})$, signifying a very strong magnetic field around the black hole. Bremsstrahlung starts to dominate again in the high-energy range as the disc recedes far from the black hole ($t=9000$, see the green dot-dashed line). By then, the disc winds leave our region of interest ($r\lesssim60$) and no longer contribute to the emission spectra.
\subsection{Synthetic intensity maps}
The 2D synthetic intensity at $h\nu=1\,{\rm keV}$ for the reference run at different simulation times (same as Figs. \ref{fig-spec} and \ref{fig-specr}) is shown in Fig. \ref{fig-map1k}. In the figures, we plot the normalized intensity on a logarithmic scale within a box $x:[-60, 60]r_g$ and $y:[-60, 60]r_g$. We broadly observe two concentric rings: the outer ring is about $40r_g$ in size and the inner about $10r_g$. These maps essentially suggest that the inner edge of the truncated accretion disc and the disc wind contribute most of the emission, which constitutes the outer ring. As the inner edge moves towards the event horizon ($t\sim 7500-8500$), the radius of the brightened edge decreases. At times $t=8000, 8400$, and $8500$, we observe two extra ring-like structures surrounding the black hole due to the hot and dense post-shock matter. The post-shock region exists on both sides of the equatorial plane; thus, the emission from the post-shock region appears to the observer as two concentric rings around the black hole. The rings at time $t=8000$ are marked on the figure as `A' and `B'. Because the post-shock regions on the two sides of the equatorial plane appear at slightly different inclinations to the observer, the inner rings are located more asymmetrically with respect to the equatorial plane than the outer ring.
Unlike the standard image of the black hole \citep{EHTI-2019}, here we observe a multiple ring structure around the event horizon. As the inner edge of the disc recedes outwards ($t\sim8700-9000$), we observe negligible emission from the low-density flow close to the black hole. During this time, most of the emission is coming from the disc wind. As the wind leaves our area of interest $(r\lesssim60)$, total emission drops drastically ($t=9000$). Our observation of an extended ring structure nicely fits with the GR-MHD ray-traced emission maps by \citet{Bandyopadhyay-eyal2021}, who investigated how the structural components of a BH jet source - the BH, the disc, the spine jet, and the disc wind - may be disentangled by their radiative imprints.
Due to the transport of angular momentum, the sub-Keplerian matter accretes towards the black hole, and some of this accreting matter contributes to the outflow as jet and disc wind. This indicates that the low-angular-momentum matter around the black hole plays a crucial role in the emission of hard X-rays. Consequently, the emission within the energy range $h\nu\sim1-100\,{\rm keV}$ $(\nu\sim 10^{17}-10^{19}\,{\rm Hz})$ changes by two orders of magnitude during the oscillation (see Fig. \ref{fig-spec}). Similarly, in the high-energy range ($h\nu>100\,{\rm keV}$, $\nu >10^{19}\,{\rm Hz}$), the emission increases by one order of magnitude in the oscillating phase. These features may significantly impact the understanding of spectral and timing properties of BH-XRBs, which often exhibit spectral variability in their outbursting phase (\cite{Belloni2010, Belloni-etal2011, Belloni-Motta2016, Ingram-Motta2019}, etc.).
Note that in this study we do not consider any Comptonization process in the calculation of the emission features. The inclusion of Comptonization is, however, essential for understanding the hard X-rays from BH-XRBs (e.g., \cite{Steiner-etal2009, Titarchuk-etal2014, Poutanen-etal2018}). With Comptonization included, the high-energy emission from the hot disc wind and the post-shock region is expected to increase.
The radiation and spectral signatures calculated from thermal synchrotron and Bremsstrahlung emission depend on the adopted parameters of the $R-\beta$ relation. In reality, a large number of these parameters $(R_l, R_h)$ need to be tested to find a suitable combination/range that matches the observations \citep[e.g.,][]{Mizuno-etal2021,Fromm-etal2021}. Such scaling relations are often used in the literature to calculate the near-horizon radiative properties of AGNs \citep[e.g., ][]{EHTI-2019}. A more consistent approach would be to account for a two-temperature fluid and include the relevant heating/cooling terms to obtain the electron temperature and the emission self-consistently \citep[e.g.,][]{Ressler-etal2015, Sadowski-etal2017, Ryan-etal2017, Mizuno-etal2021,Dihingia-etal2022}. Nevertheless, with the inclusion of these processes, the qualitative radiative feature, namely the modulation of emission during the oscillation of the accretion flow, is expected to remain the same.
\section{Summary and discussion}
In this work, we set up a highly resolved, magnetized, truncated accretion disc in axisymmetry around a Kerr black hole in ideal and resistive GRMHD frameworks. The initial conditions for the thin-disc are set following \cite{Dihingia-etal2021}. The accreting sub-Keplerian matter from the truncated accretion disc plays a crucial role in the flow dynamics and radiative properties around the black hole.
The simulations show turbulent mass loading features in the profiles of the accretion rate, wind rate, and mass flux rate through the funnel due to the oscillation of the inner accretion flow, with quasi-periodic peaks in all three profiles. We observe the flow oscillating between MAD and non-MAD states rather than settling into a fully developed magnetically arrested state, along with quasi-periodic magnetic eruption events following the accretion flow.
During this process, the inflow and outflow develop shear and become unsteady. Such instabilities at the interface between inflow and outflow can facilitate the mixing of matter and may help in mass loading to the jet. We also observe the intermittent formation of a hot corona around the black hole due to a shock transition. The hot corona and the inner edge of the accretion disc mainly contribute to the observed high-energy emission.
Our study supports the extended corona model during the evolutionary phase considered here. Moreover, the high-energy emission is modulated by orders of magnitude during the oscillation of the accretion flow. Such modulations in the X-rays are often observed in terms of QPOs \citep[e.g.,][]{Belloni-etal2011, Belloni-Motta2016, Ingram-Motta2019}. These are typical features of the hard-intermediate state (HIMS) of BH-XRBs. In the HIMS, an intermediate photon index ($\alpha\sim1.8-2.4$, \citealt{Nandi-etal2012} and references therein) with occasional giant radio flares can be observed \citep[e.g.,][]{Fender-etal2004,Fender-etal2009}. Some of the major findings from our extensive study are listed below:
\begin{itemize}
\item[(1)] The qualitative features for all the ideal MHD models are similar. The reconnection events and the plasmoids formed due to active tearing mode instabilities play a crucial role in the dynamics of the accretion flow. They generate poloidal magnetic fields penetrating the equatorial plane and develop a strong magnetic tension force in the disc mid-plane. The resistance offered by this force halts the accretion flow momentarily.
With time, the ram pressure of the flow at the inner edge overcomes this force due to the outward transport of angular momentum (by MRI and disc-winds), leading to accretion again.
This process continues throughout the simulation in the form of quasi-periodic oscillations of the inner accretion flow. The frequency of these oscillations lies in the range of low-frequency QPOs (LFQPOs).
The QPO frequency $(\nu_{\rm QPO, max})$ increases from $\sim13.5$ to $20$ Hz $(10M_{\odot}/M_{\rm BH})$ as the initial plasma-$\beta$ parameter increases from $\beta_{\rm tr}=100$ to $1000$. Similarly, the QPO frequency $(\nu_{\rm QPO, max})$ decreases from $\sim13.5$ to $7$ Hz $(10M_{\odot}/M_{\rm BH})$ as the truncation radius increases from $r_{\rm tr}=40$ to $60$.
\item[(2)] In the resistive MHD models, we observe a qualitatively similar oscillating feature of the inner accretion flow as in the ideal MHD models. However, unlike the ideal MHD models, we do not observe the formation of plasmoids as the resistivity increases. The turbulent features of the inner accretion flow are also subdued with increasing magnetic resistivity. The KHI across the inflow and outflow boundaries is suppressed due to the increase in magnetic resistivity.
The frequency of oscillation increases with the increase of magnetic resistivity. Also, the radius of balancing magnetic tension force and the ram pressure $(r_{\rm max})$ decreases with the increase of magnetic resistivity.
\item[(3)] We find that the high-energy emission comes mainly from the edge of the truncated accretion disc and the post-shock region. This suggests that the low angular momentum matter around the black hole plays a crucial role in the emission of hard X-rays. Subsequently, we find that the high-energy radiation is modulated during the oscillation of the inner accretion flow. The emission within the energy range $h\nu\sim1-100\,{\rm keV}$ ($\nu\sim 10^{17}-10^{19}\,{\rm Hz}$) increases by two orders of magnitude during the oscillation. Similarly, the emission in the very high energy range ($h\nu>100\,{\rm keV}$ or $\nu >10^{19}\,{\rm Hz}$) increases by around one order of magnitude during the oscillation. The 2D synthetic intensity maps at $1\,{\rm keV}$ show an edge-brightened structure with most of the emission coming from the inner edge of the truncated accretion disc. We also observe two bright rings close to the black hole due to the presence of the hot post-shock corona with a typical post-shock proton temperature $T_p\gtrsim10^{11}\,$K.
\end{itemize}
The current study is useful in comprehending the `Q' diagram of outbursting BH-XRBs.
A cartoon diagram of the same is shown in Fig.~\ref{fig-hid} for better understanding.
The figure depicts various spectral states (hard, intermediate, and soft states), with arrows indicating the direction of the evolution. The intermediate state is further classified as HIMS (hard-intermediate state) and SIMS (soft-intermediate state) \citep[see][]{Belloni2010}. The jet-line separates the intermediate and soft states; the source exhibits peaks in radio emission as it approaches the jet-line, and drastic changes in jet properties are observed along this line \citep{Fender-etal2009}. The schematic evolution of the jet Lorentz factor and the inner disc radius ($r_{\rm max}$) is shown in the lower panel of the cartoon diagram.
The radius $r_{\rm max}$ corresponds to the inner edge of the high angular momentum (i.e., Keplerian) material.
HIMS is associated with a stage in which high angular momentum material lies around the equatorial plane for a radius $r> r_{\rm max}$, while the material inside ($r < r_{\rm max}$) oscillates, resulting in periodic modulation in high energy emission.
Giant radio flares are a common signature of the HIMS \citep{Fender-etal2009}.
These HIMS characteristics closely resemble those of our current simulation models (as shown in Fig.~\ref{fig-hid}).
Recently, the X-ray transient MAXI J1803-298, hosted by a stellar-mass Kerr black hole X-ray binary, was observed in the HIMS and found to have QPOs of the order of $\nu_{\rm QPO} \lesssim 10$\,Hz \citep{Chand-etal2022,Jana-etal2022}, which agrees closely with our simulation results. Earlier studies on low angular momentum transonic non-magnetized accretion flows also hinted at the possibility of such QPOs due to oscillations of shock waves \citep{Lee-etal2011, Lee-etal2016, Das-etal2014, Sukova-etal2017}.
Our study also shows that with time, the critical radius $r_{\rm max}$ reduces and eventually is expected to reach the event horizon/ISCO.
Such a reduction in the radial extent of the high-angular momentum flow is even faster for cases with high magnetic resistivity.
As the radial extent of the inner edge approaches the ISCO, the outburst transits into a soft state.
In our earlier study \citep{Dihingia-etal2021}, the focus was to understand the outburst evolution from such a soft state, starting with a
high angular momentum thin disc close to the black hole.
We observed a transition from a quasi-steady phase to an oscillatory phase (HIMS), marked in Fig. \ref{fig-hid}.
Thus, both these studies cover a branch of the `Q' diagram whereby the outburst in an intermediate state evolves to a soft state (high-angular
momentum disc close to the black hole) and further evolves to an intermediate state again \citep[e.g.,][]{Dunn-etal2010, Belloni2010, Belloni-Motta2016}.
Ideally, to cover the complete path followed by the outburst, one would need to evolve the truncated accretion disc for a much longer
time duration with realistic boundary conditions at the outer boundary.
Further, for a more consistent inference on radiative and timing properties, the thermodynamics of electrons (heating and cooling)
and protons (heating and cooling) needs to be considered.
Incorporation and application of such two-temperature models within the GRMHD simulations are currently under development (e.g. \cite{Ressler-etal2015,Ryan-etal2017,Sadowski-etal2017,Mizuno-etal2021,Dihingia-etal2022}).
In the future, longer temporal evolution simulation studies with physically consistent thermodynamics would be crucial to understand the complete `Q' diagram of an outburst in BH-XRBs.
\appendix
\section{MRI quality factor}
To ensure that MRI is resolved for a given set of numerical resolutions, we calculate the MRI quality factor in terms of wavelength ($\lambda_\theta$) of the fastest growing MRI mode in the $\theta$ direction as $Q_\theta = \lambda_\theta/\Delta x_\theta$. The wavelength of the fastest growing MRI mode $\lambda_\theta$ is given in this case by
\begin{align}
\lambda_\theta = \frac{2\pi}{\Omega\sqrt{\rho h + b^2}}\,b^\mu e_\mu^{(\theta)},
\end{align}
and the grid resolution $\Delta x_\theta = \Delta x^\mu e_\mu^{(\theta)}$ (see \cite{Takahashi2008, Siegel-etal2013, Porth-etal2019, Nathanail-etal2020}, for details).
Typically, $Q_\theta \gtrsim 6$ is required to resolve this MRI mode (see \cite{Sano-etal2004}). In Fig. \ref{fig-qtheta}, we show the distribution of $Q_\theta$ for model 2D40A (resolution: $1024\times512$) at simulation times (a) $t=0$ and (c) $t=4380$, and for model 2D40AH (resolution: $2048\times1024$) at simulation times (b) $t=0$ and (d) $t=4380$. The panels suggest that $Q_\theta$ is well above $10$ in most of the numerical domain for our simulation runs. For the low-resolution model (2D40A, Fig. \ref{fig-qtheta}a and \ref{fig-qtheta}c), the high-density equatorial region is somewhat under-resolved $(Q_\theta\sim4-5)$. However, with the increase of resolution, the high-density equatorial region is also resolved with $Q_\theta\sim6$ (Fig. \ref{fig-qtheta}b and \ref{fig-qtheta}d).
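As a rough illustration (not the GRMHD implementation itself), the cell-wise quality factor $Q_\theta=\lambda_\theta/\Delta x_\theta$ can be sketched in Python. The orthonormal magnetic-field component $b^{(\theta)}=b^\mu e_\mu^{(\theta)}$ is taken as a given input, since constructing the tetrad is code-specific; all numbers below are toy values in code units.

```python
import numpy as np

def mri_quality_factor(rho, h, b2, b_th, omega, dx_th):
    """Fastest-growing MRI wavelength divided by the local cell size.

    rho, h : density and specific enthalpy
    b2     : magnetic field strength squared
    b_th   : orthonormal theta-component of the field (b^mu e_mu^(theta))
    omega  : orbital angular velocity
    dx_th  : grid spacing in the theta direction
    """
    lam = 2.0 * np.pi * np.abs(b_th) / (omega * np.sqrt(rho * h + b2))
    return lam / dx_th

# toy cell values (code units), purely illustrative
rho, h = 1.0, 1.5
b_th = 0.05
b2 = b_th**2
omega = 0.01
dx_th = 0.5
Q = mri_quality_factor(rho, h, b2, b_th, omega, dx_th)  # well above 6 here
```

In a simulation one would evaluate this on every cell and inspect the 2D map, exactly as Fig. \ref{fig-qtheta} does.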
\section{3D low resolution comparison run}
For completeness, we performed a low-resolution 3D run of the truncated accretion disc with a truncation radius $r_{\rm tr}=40$ and $\beta_{\rm tr}=100$, at a resolution of $224\times96\times128$. All other initial conditions are the same as in the reference model. With time, non-axisymmetric disc instabilities develop in the accretion disc \citep[e.g.,][]{Hawley1987, Savonije-etal1990}.
Depending on the active instability modes, matter starts to accrete from certain regions of the truncated accretion disc. As a result, the axisymmetric nature of the truncated accretion breaks, and accretion happens in terms of spiraling {\it fingers}. To study the evolution of the accretion {\it fingers} in detail, in panels of Fig. \ref{3D-density}, we show the distribution of logarithmic density ($\rho/\rho^{\rm tr}$) for model 3D40 at three different simulation times $t=2500, 3500$, and $4500$, in panels (a), (b), and (c), respectively.
In Fig. \ref{3D-density}a, we observe a small active accretion {\it finger}. At this time, the accretion rate is expected to be at its minimum. With time, more matter accretes towards the black hole in terms of accretion {\it fingers} (see Fig. \ref{3D-density}b). In Fig. \ref{3D-density}c, we show a state with abundant spiraling matter close to the black hole. Thus, the accretion {\it fingers} are highly dynamical. It is also interesting to note that the spiraling {\it fingers} are separated by a low-density region. The dynamics of both regions may interact and play a very interesting role in understanding the physics around the black hole.
The long-term temporal evolution of the 3D model is shown in Fig. \ref{3D-fluxes}, where we plot the accretion rate as a function of simulation time in panel Fig. \ref{3D-fluxes}a.
In the figure, we observe that the accretion rate profile does not show clear sharp peaks as in the axisymmetric model; rather, it shows broader peaks.
Similarly, in Fig. \ref{3D-fluxes}b, we show the magnetic flux accumulated at the event horizon, $\dot{\phi}_{\rm BH}$, for the 3D model as a function of simulation time. With the temporal evolution, magnetic flux accumulates around the event horizon and $\dot{\phi}_{\rm BH}$ increases. After a certain time $(t\sim2000)$, the inner accretion disc reaches a MAD configuration. After that ($t>2000$), the value of $\dot{\phi}_{\rm BH}$ remains within $\sim 10-20$, suggesting a fully developed MAD state in the flow \citep{Tchekhovskoy-etal2011}.
In summary, despite the low resolution, we capture the salient features of the 3D truncated accretion disc. We observe that due to the presence of non-axisymmetric disc instabilities, the matter from the truncated accretion disc accretes in terms of spiral {\it fingers}. The dynamics of accretion {\it fingers} determines the dynamics of the accretion flow. Unlike the axisymmetric models, we do not observe periodic sharp peaks in the accretion rate profile in this case. Instead, we see broad peaks of uneven strength throughout the simulations. A recent high-resolution 3D study by \cite{Ripperda-etal2022} has shown the formation of magnetic flux bundles due to reconnection events. Thus, the observed broader peaks in our 3D runs may be due to the high numerical resistivity and absence of reconnection events. The peaks may become prominent with the increase in resolution.
\section*{Acknowledgements}
All simulations were performed on the Max Planck Gesellschaft (MPG) super-computing resources.
We thank the Max Planck partner group award at the Indian Institute of Technology Indore for financial support. IKD thanks the Swarnajayanti Fellowship from the Department of Science and Technology (DST/SJF/PSA-03/2016-17) for financial support.
\section*{Data Availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\bibliography{references}
\bsp %
\label{lastpage} |
Title:
Statistical distribution of HI 21cm absorbers as potential cosmic acceleration probes |
Abstract: The Damped Lyman-$\alpha$ Absorber(DLA), or HI 21cm absorber, is an important
probe for directly measuring the acceleration of the spectroscopic velocity
$v_\mathrm{S}$ via the Sandage-Loeb(SL) effect. Confined by the shortage of
actual DLA samples and the coarse assignment of background radio sources, the
detectable number of Damped Lyman-$\alpha$ Absorption Systems(DLASs) is ambiguous
in most cases. After distinguishing the unmeasurable, global, and physical $\ddot{a}$
from the observed and local $\dot{v}_\mathrm{S}$, we make a statistical
investigation of the components of DLASs. We use Kernel Density Estimation(KDE)
to depict a general redshift distribution of background radio sources via three
radio deep-survey datasets, CENSORS, LBDS-Hercules and CoNFIG-4, and provide a
multi-Gaussian expression. Testing the generation process of the DLA redshift
number density in the literature, we make a modified power-law fit to
low-redshift($z\lesssim1.65$) DLAs preselected by MgII absorption and analyse
its defects. Finally, we present a simple estimate of the DLAS numbers for FAST,
ASKAP and SKA-Mid for a blind HI absorption survey, using our derived radio
number density and the previous DLA one from the literature. For comparison,
our FAST prediction gives a practical number of 100 and an optimistic number
of 470, while the latter and previous predictions agree within an order of
magnitude.
| https://export.arxiv.org/pdf/2208.05639 |
\newcommand{\vdag}{(v)^\dagger}
\newcommand\aastex{AAS\TeX}
\newcommand\latex{La\TeX}
\newcommand\raa{RAA}
\shorttitle{Statistical distribution of HI 21cm absorbers as cosmic acceleration probes}
\shortauthors{C.-Z. LU et al.}
\graphicspath{{./}{figures/}}
\usepackage{graphicx}
\begin{document}
\title{Statistical distribution of HI 21cm absorbers as potential cosmic acceleration probes}
\author[0000-0001-5834-6459]{Chang-Zhi Lu}
\affiliation{Department of Astronomy, Beijing Normal University, Beijing 100875, People's Republic of China}
\author{Tingting Zhang}
\affiliation{College of Command and Control Engineering, PLA Army Engineering University, Nanjing 210000, People's Republic of China}
\author[0000-0002-3363-9965]{Tong-Jie Zhang}
\affiliation{Department of Astronomy, Beijing Normal University, Beijing 100875, People's Republic of China}
\affiliation{Institude for Astronomical Science, Dezhou University, Dezhou 253023, People's Republic of China}
\correspondingauthor{Tingting Zhang}
\email{101101964@seu.edu.cn}
\correspondingauthor{Tong-Jie Zhang}
\email{tjzhang@bnu.edu.cn}
\keywords{Interstellar medium(xxx) --- Observational cosmology(xxx) --- Quasar absorption line spectroscopy(xxx)}
\section{Introduction}
The universe is undergoing an accelerating expansion, whose acceleration has not yet been convincingly measured. An unprecedentedly precise observational determination would provide more clues to the fundamental modelling of the expansion mechanism, such as the widely accepted dark energy with its many candidate Equations of State(EoS).
Common cosmological probes detect the acceleration of the scale factor, $\ddot{a}$, only indirectly, while the strictly defined spectroscopic velocity drift of objects faithfully tracing the Hubble flow in real time is still beyond our reach. Proposed by \cite{sanda62apj} and improved by \cite{loeb98apj}, the redshift drift, namely the SL effect, is now a model-independent probe of cosmic acceleration, free of assumptions about cosmic geometry, used to distinguish cosmological models\citep{codur21prd,mishr22prd} and to explore the inhomogeneity and anisotropy of the universe\citep{thoma22arx,heine22jcap}. \cite{mores22arx} recorded more details of the redshift drift, and \cite{melia22ejph} discussed the significance of a zero or non-zero redshift drift.
Direct measurement of the redshift drift depends mostly on Damped Lyman-$\alpha$ Absorbers(DLAs) or Damped Lyman-$\alpha$ Absorption Systems(DLASs). DLAs contain abundant dense HI gas(column density $N_\mathrm{HI}\geq2\times10^{20}\,\mathrm{cm^{-2}}$) which absorbs Lyman-$\alpha$ photons(optical) and most 21cm radiation(radio) from the HI hyperfine spin-flip transition in the local rest frame.
The first feasible approach, via the Lyman-$\alpha$ forests(LFs), was carefully investigated by \cite{liske08mn} for the next-generation instrument ESO-ELT, and has recently been renewed by \cite{dong22arx}. Optical DLAs are often discovered in the intergalactic environment. With abundant potential Lyman-$\alpha$ absorption in the line forests, the LFs approach attracts much attention, as in the Cosmic Accelerometer project\citep{eiken19baas}, the ACCELERATION programme\citep{cooke20mn}, and the ESPRESSO and NEID spectrographs\citep{chakr22arx}. Interestingly, \cite{estev21mn} concluded that measuring the redshift drift and constraining cosmological parameters is a dilemma. However, confined by the earth's ionosphere, ground-based observations can only receive LFs photons at $z\gtrsim1.7$, which corresponds to the decelerating-expansion and jerk era. As for lower redshifts, covering most of the lookback time, only space-borne experiments could fill the gap.
The second approach, via HI 21cm Absorbers or HI 21cm Absorption Systems, was first conducted by \cite{darli12apj}, which gave the best constraint to date on the redshift drift, three orders of magnitude larger than the theoretical prediction. Radio DLAs usually originate in the active parts(star-forming regions or nuclei) of galaxies at low redshift. Long-term frequency stability was also established at the GBT using DLAs. Due to the tiny energy difference, which theoretically gives HI atoms in the ground state a steady distribution, DLAs' 21cm lines are less affected in the cold neutral medium(CNM) and have a narrow width, while the inner-galactic origin of DLAs may cause prominent frequency shifts(namely velocity uncertainty) and disturb the derived results. \cite{jiao20jcap} made an HI 21cm absorption spectral observation with PARKES, advocating the necessity of consecutive high-resolution spectral observations against the high velocity uncertainty. \cite{lu21arx} made a high-accuracy HI 21cm spectral observation with FAST as a preliminary effort to reach the final SL signal, introducing semi-theoretical velocity uncertainties to express the velocity error in one epoch. Further secular observations to obtain stricter constraints are still being applied for and prepared.
The number density predictions of HI 21cm absorption systems were produced by \cite{yu14prl,yu17raa} and \cite{jiao20jcap}, which focused on CHIME, Tianlai and FAST respectively, and aimed to give an immediate result. The HI 21cm absorption surveys have found few new DLAs\citep{dutta17mn}, while the radio surveys have already obtained massive samples extending to deeper views and fainter sources\citep{matth21apj}, with different redshift and flux density statistical features. Among these distributions, the actual physical situation is the one and only, determining the forecast of the potential samples. Therefore it entails checking as many datasets as possible to produce a reliable and reasonable number density of radio sources. Meanwhile, it should be noticed that many HI 21cm absorption surveys are scheduled or already underway, such as the First Large Absorption Survey in HI(FLASH) with ASKAP\citep{allis22pasa}, the MeerKAT Absorption Line Survey(MALS)\citep{gupta16mks,gupta21apj}, and the Widefield ASKAP L-band Legacy All-sky Blind surveY(WALLABY)\citep{kori20apss}, which will renew our understanding of DLAs. Any advance in the two aspects would modify the final anticipation of cosmic acceleration experiments in the HI 21cm approach.
In this paper, the defined difference between the measured radial velocity change and the homogeneous realistic cosmic acceleration is stressed again in sec \ref{sec2}. We conduct comparative research on the number density of radio sources(sec \ref{sec3.1}) and DLAs(sec \ref{sec3.2}) based on several databases, and predict the possible detection numbers of DLASs for FAST and SKA(sec \ref{sec3.3}). Our discussion and conclusion are given in sec \ref{sec4} and \ref{sec5}, respectively. All the calculations in this paper are based on a fiducial Planck18 $\Lambda$CDM model\citep{planck20aa}($\Omega_\mathrm{K0}=\Omega_\mathrm{R0}=0$, $\Omega_\mathrm{M0}=0.315$, $H_0=67.4\,\mathrm{km\ s^{-1}\ Mpc^{-1}}$).
\section{COSMIC ACCELERATION}\label{sec2}
Since many papers contain the necessary formulae and their derivations, we give only a brief description within a standard $\Lambda$CDM model:
\begin{eqnarray}%
E^2(z)=&&\Omega_\mathrm{R0}(1+z)^4+\Omega_\mathrm{M0}(1+z)^3\nonumber\\&&+\Omega_\mathrm{K0}(1+z)^2+\Omega_\mathrm{L0}.
\end{eqnarray}
The redshift drift of the tracers in Hubble flow is:
\begin{eqnarray}%
\Delta z=H_0[1+z-E(z)]\Delta t_0=\frac{\dot{a}_0-\dot{a}(z)}{a(z)}\Delta t_0,
\end{eqnarray}
and the change of its spectroscopic radial velocity in low redshift approximation($v_\mathrm{S}\approx cz$) is:
\begin{eqnarray}\label{eq3}
\Delta v_{\rm S}=&&\frac{c}{1+z}\Delta z=cH_0[1-\frac{E(z)}{1+z}]\Delta t_0\nonumber\\=&&c[\dot{a}_0-\dot{a}(z)]\Delta t_0.
\end{eqnarray}
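For the fiducial Planck18 parameters adopted in this paper, eq. \ref{eq3} can be evaluated numerically. The following minimal sketch (constants hard-coded here, not taken from any published pipeline) shows that the drift amounts to a few cm/s per decade at low redshift:

```python
import numpy as np

# Spectroscopic velocity drift Dv_S = c H0 [1 - E(z)/(1+z)] Dt
# for flat Planck18 LCDM (Om = 0.315, H0 = 67.4 km/s/Mpc).
C_CM_S = 2.998e10                 # speed of light [cm/s]
MPC_CM = 3.086e24                 # one Mpc in cm
H0 = 67.4e5 / MPC_CM              # Hubble constant [1/s]
OM, OL = 0.315, 0.685

def E(z):
    return np.sqrt(OM * (1 + z)**3 + OL)

def dv_S(z, dt_yr):
    """Velocity drift in cm/s accumulated over dt_yr years."""
    dt = dt_yr * 3.156e7          # years -> seconds
    return C_CM_S * H0 * (1 - E(z) / (1 + z)) * dt

drift = dv_S(0.5, 10.0)           # positive, a few cm/s over a decade
```

The sign flips above $z_\mathrm{z0}\approx1.9$ (e.g. `dv_S(3.0, 10.0)` is negative), consistent with the discussion that follows.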
In this case, the velocity drift is an indicator of $\dot{a}$ differences between today and the past(redshift $z$).
However, when we explain the cosmic expansion strictly with general relativity(GR), the recession velocity\citep{davis01aipc} is:
\begin{eqnarray}%
v_{\rm G}=\dot{a}(z)D_{\rm C}(z)=\frac{c}{H_0}\dot{a}(z)\int_0^z\frac{dz'}{E(z')},
\end{eqnarray}
where $D_{\rm C}(z)=c\int_0^z\frac{dz'}{H(z')}$ is the comoving distance. The time derivative of $v_{\rm G}$\citep{jiao20jcap} is:
\begin{eqnarray}%
\dot{v}_{\rm G}=&&\ddot{a}(z)D_{\rm C}(z)+\dot{a}(z)\dot{D}_{\rm C}(z)\nonumber\\=&&\ddot{a}(z)D_{\rm C}(z)+\dot{a}(z)\frac{dD_{\rm C}(z)}{dz}\frac{dz}{dt_0}.
\end{eqnarray}
According to $q(z)$, we could write down $\ddot{a}(z)$:
\begin{eqnarray}%
q(z)&=&\frac{1+z}{2E^2(z)}\frac{dE^2(z)}{dz}-1,\\
\ddot{a}(z)&=&-H_0^2q(z)a(z)E^2(z).
\end{eqnarray}
With the above quantities shown in Figure \ref{fig1}, we can notice that the zero-points of $\ddot{a}$ and $\dot{z}$(or low-redshift approximated $\dot{v}_\mathrm{S}$) are very different, where the former($z_\mathrm{a0}\approx0.65$) relies on $q(z)$ and the latter($z_\mathrm{z0}\approx1.9$) depends on $1+z-E(z)$.
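The two zero-points quoted above can be checked numerically for the fiducial flat $\Lambda$CDM parameters. A minimal sketch using \textit{scipy.optimize.brentq}:

```python
import numpy as np
from scipy.optimize import brentq

# Locate z_a0 where q(z) = 0 (onset of accelerated expansion) and
# z_z0 where 1 + z - E(z) = 0 (sign change of the redshift drift),
# for flat LCDM with Om = 0.315, OL = 0.685.
OM, OL = 0.315, 0.685

def E2(z):
    return OM * (1 + z)**3 + OL

def q(z):
    # q = (1+z)/(2 E^2) dE^2/dz - 1 = Om(1+z)^3/(2 E^2) - OL/E^2 for flat LCDM
    return 0.5 * OM * (1 + z)**3 / E2(z) - OL / E2(z)

z_a0 = brentq(q, 0.1, 2.0)                                  # ~0.63
z_z0 = brentq(lambda z: 1 + z - np.sqrt(E2(z)), 0.5, 4.0)   # ~1.9
```

Both roots agree with the values $z_\mathrm{a0}\approx0.65$ and $z_\mathrm{z0}\approx1.9$ quoted in the text.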
It is worth emphasizing that only the dynamical $\ddot{a}$ can depict the actual deceleration or acceleration of the universe's expansion, and it has a consistent value at every point, from any distance, at the same moment.
However, the observed $\dot{z}$ or $\dot{v}_\mathrm{S}$ involves a comparison between a pair of locations(the present observer and some receding spot in the past). For a fixed spot, its $\dot{z}$ value changes with the choice of reference(the observer position). Thus it is only a relative quantity measuring the apparent velocity change for a local observer. Although we can measure it, it does not represent the real acceleration of the universe's expansion. Considering a secular observing time of the order of a decade, what we actually measure is a time-averaged drift.
From the upper panel in Figure \ref{fig1}, $\ddot{a}$ is positive only when $z<z_\mathrm{a0}$, where the universe undergoes accelerating expansion; at $z>z_\mathrm{a0}$ the expansion decelerates. The $z_\mathrm{z0}$ cannot distinguish between the two states. Combined with eq. \ref{eq3}, the difference is further illustrated in Figure \ref{fig1c}. When $z<z_\mathrm{z0}$, $\dot{a}_\mathrm{0}$ is always larger than $\dot{a}_\mathrm{z}$, so the observed $\dot{v}_\mathrm{S}\propto(\dot{a}_\mathrm{0}-\dot{a}_\mathrm{z})$ is always positive, but this does not mean the universe expanded at an accelerating rate throughout redshift 2 to 0.
The other difference between $\ddot{a}$ and $\dot{z}$(or $\dot{v}_\mathrm{S}$) is that the former is not directly measurable, while the latter can be obtained conveniently from spectra.
From the lower two panels in Figure \ref{fig1}, it can be seen that, if the universe is genuinely dominated by a $\Lambda$CDM model, we can safely use FAST to research the spectroscopic velocity with good approximation at low redshift.
\section{EXPECTATION OF DLASs}\label{sec3}
The redshift number density of DLASs(or HI 21cm absorption systems) in every square degree of the sky could be expressed as
\begin{eqnarray}\label{eq8}
n_\mathrm{21}(z)=n_\mathrm{DLA}(z)\int_{F_\mathrm{min}}^\infty\int_z^\infty n_\mathrm{R}(z',F')dz'dF',
\end{eqnarray}
where $n_\mathrm{DLA}(z)$ is the averaged number density of DLAs(or HI 21cm absorbers) in some certain line of sight directing toward a background radio source, and $n_\mathrm{R}(z',F')$ is the number density of background radio source per square degree related with its redshift and observed flux density.
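The structure of eq. \ref{eq8} can be sketched numerically. Both densities below are purely hypothetical toy functions (the paper's actual $n_\mathrm{DLA}$ and $n_\mathrm{R}$ are introduced in the following subsections); the point is only the double integral over background sources behind the absorber ($z'>z$) and above the flux limit:

```python
import numpy as np
from scipy import integrate

def n_r(zp, F):
    # toy separable source density per deg^2 per unit z per unit flux [mJy]
    return np.exp(-0.5 * ((zp - 1.0) / 0.6)**2) * np.exp(-F / 100.0) / 100.0

def n_dla(z):
    # hypothetical absorber density per sightline, for illustration only
    return 0.05 * (1 + z)**1.5

def n_21(z, F_min=50.0, z_max=6.0, F_max=2000.0):
    """Detectable DLAS density at z: n_DLA(z) times the background count."""
    bg, _ = integrate.dblquad(lambda F, zp: n_r(zp, F),
                              z, z_max,                 # outer: z' in [z, z_max]
                              lambda _: F_min,          # inner: F in [F_min, F_max]
                              lambda _: F_max)
    return n_dla(z) * bg

v50 = n_21(0.5)                 # count with a 50 mJy flux limit
v10 = n_21(0.5, F_min=10.0)     # lowering the limit adds sources
```

Lowering $F_\mathrm{min}$ monotonically increases the predicted count, which is why the 10mJy case below yields larger amplitudes than the 50mJy case.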
\subsection{Prediction of Radio Source}\label{sec3.1}
The commonly quoted redshift distribution of radio source in the calculation of $n_\mathrm{21}(z)$ is given by \cite{dezot10aapr}:
\begin{eqnarray}\label{eq9}
n_\mathrm{R}(z)=&&1.29+32.37z-32.89z^2\nonumber\\&&+11.13z^3-1.25z^4,
\end{eqnarray}
which comes from a polynomial fit to the CENSORS data\citep{brook08mn}. The Combined EIS-NVSS Survey of Radio Sources(CENSORS) was conducted at 1.4GHz, targeting the ESO Imaging Survey(EIS) Patch D covering a $3\times2\,\mathrm{deg^2}$ field of view and 150 sources selected from the NRAO VLA Sky Survey(NVSS). \cite{rigby11mn} finally presented a subset of 135 CENSORS sources with a received flux density completeness of $7.2\,\mathrm{mJy}$. We note that the CENSORS data contain redshift and flux density simultaneously, which is useful for constraining the background radio sources. In the following, we use Rigby's dataset to represent CENSORS.
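The quartic fit above is easy to evaluate directly; the quadratic term is read here as $-32.89z^2$, the only smooth-polynomial reading of the coefficient list:

```python
import numpy as np

# de Zotti-style quartic fit for the radio-source redshift distribution,
# coefficients a0..a4 as quoted in the text (quadratic term read as -32.89 z^2)
coeffs = [1.29, 32.37, -32.89, 11.13, -1.25]

def n_R(z):
    return sum(a * z**k for k, a in enumerate(coeffs))

# the fit starts at n_R(0) = 1.29 and peaks well below z = 1
zs = np.linspace(0.0, 3.5, 351)
z_peak = zs[np.argmax(n_R(zs))]
```

Note that the quartic turns negative slightly beyond $z\approx3.6$, a known hazard of polynomial fits and one motivation for the KDE approach adopted next.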
\subsubsection{Comparing the Radio-source Description}\label{sec3.1.1}
\cite{marcab13arx} made a gamma fit to the redshift-binned counts of CENSORS. When comparing the galaxy angular power spectrum via a Bayesian evidence test, the gamma fit outperformed the polynomial description. However, both approaches overemphasize fitting the peak bin(owing to its rigid shape, the gamma fit cannot even reproduce the second small peak at high redshift) and seem to lack the complexity needed to capture the actual redshift distribution.
Here we introduce Kernel Density Estimation(KDE) to depict the CENSORS dataset. As a mature non-parametric method, KDE estimates the probability density of a random variable without any prior knowledge. In particular, when using the Gaussian kernel, an accurate expression for the final distribution can be obtained by multi-Gaussian fitting.
We use a 0.1-length redshift bin to count the CENSORS data(133 sources at redshift z $<$ 4.0), and make 4-order polynomial and gamma fits. Then two different KDE functions, \textit{scipy.stats.gaussian\_kde} and \textit{sklearn.neighbors.KernelDensity}, are considered separately. The former(grid-KDE) is easy to apply without assigning further prior parameters, while the latter(pure-KDE) requires careful testing of the bandwidth parameter. We present our fitting comparison in Figure \ref{fig2}, in which both KDE methods behave excellently, but the pure-KDE with bandwidth 0.2 or lower(not shown) exhibits severe over-fitting. The bandwidth-0.3 pure-KDE is comparable to the grid-KDE. However, given the effort of tuning the bandwidth for every dataset, it is more convenient and impartial to adopt the grid-KDE as the default in the following data analysis.
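The two routines named in the text can be compared on synthetic data; the bimodal sample below is only a stand-in for the 133 CENSORS redshifts, which are not reproduced here:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.neighbors import KernelDensity

# synthetic "redshifts": a low-z bulk plus a smaller high-z population
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0.6, 0.3, 90), rng.normal(1.8, 0.5, 43)])
z = z[(z > 0) & (z < 4)]

grid = np.linspace(0, 4, 200)

# scipy: bandwidth chosen automatically (Scott's rule by default)
p_scipy = gaussian_kde(z)(grid)

# sklearn: the bandwidth must be supplied by hand, as the text notes
kde_skl = KernelDensity(kernel="gaussian", bandwidth=0.3).fit(z[:, None])
p_skl = np.exp(kde_skl.score_samples(grid[:, None]))
```

Both estimates are proper densities (nonnegative, integrating to roughly unity over the sampled range); the practical difference is only the bandwidth-selection burden discussed above.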
\subsubsection{Enlarging the Radio Dataset}\label{sec3.1.2}
The CENSORS data do provide a comprehensive insight into the global redshift evolution of radio sources. Nonetheless, can a $6\,\mathrm{deg^2}$ survey be an ideal representative of the whole-sky situation? Although the cosmological principle should guarantee this, it is still worth verifying.
For comparison with CENSORS, other radio-source deep surveys with information on redshift and 1.4GHz flux density are needed; the LBDS-Hercules\citep{waddi01mn} and CoNFIG-4\citep{gendre10mn} surveys, covering different parts of the sky, satisfy this requirement.
The Leiden-Berkeley Deep Survey(LBDS) Hercules sampled $2\mathrm{deg^2}$ sky and 64 radio sources with $S_\mathrm{1.4GHz}>2\mathrm{mJy}$, and the Combined NVSS-FIRST Galaxies-4(CoNFIG-4) sample observed $52\mathrm{deg^2}$ sky and 185 radio sources with $S_\mathrm{1.4GHz}>50\mathrm{mJy}$. With the introduction of the two datasets, our analysis is greatly improved.
To unify the comparison standard and eliminate several outliers, we choose radio sources with redshift between 0.01 and 4, and flux density $s<400\,\mathrm{mJy}$. Note that only 86 of the 184 sources from CoNFIG-4 have a redshift record. Therefore, when considering the sky coverage of CoNFIG-4, we rescale it in direct proportion to counteract the lost information, regarding it as $86/184\times52\approx24.3\,\mathrm{deg^2}$. The flux density and redshift distributions of the three datasets are plotted in Figure \ref{fig3}.
For the lower limit of flux density, 50mJy, of the CoNFIG-4, we can only evaluate the radio-source redshift distribution above this value first. Given rare samples from CENSORS and Hercules in this flux density region, CoNFIG-4 could make a considerable contribution to complement the number-lacking. According to the observation of HI 21cm absorption\citep{gereb15aa}, 32 systems are observed, and only 3 background radio sources show 1.4GHz flux density lower than 50mJy(but larger than 35mJy). Thus 50mJy can be a proper lower limit for the present observation ability. Here we use grid-KDE to extract the redshift distribution per degree square from three datasets, and make a number-weighted average of the former three as the final result of 50mJy, showing the above four cases in Figure \ref{fig4}. However, toward the next generation of great radio telescope and HI 21cm absorption blind survey project, we make further research with a lower limit 10mJy in CENSORS and Hercules data, and add their number-weighted average in Figure \ref{fig5}. In the end, we list our final 3-Gaussian fitting parameters for the two cases in Table \ref{tab1}.
\begin{deluxetable}{ccccc}
\tablenum{1}
\tablecaption{Number-weighted averages of the 3-Gaussian fits for the two cases. Each basic component of the two distributions ($n_\mathrm{r50}(z)$ and $n_\mathrm{r10}(z)$) has the form $f(z)=\frac{a}{\sqrt{2\pi}\sigma}\exp\left[-\frac{(z-\mu)^2}{2\sigma^2}\right]$.}\label{tab1}
\tablewidth{0pt}
\tablehead{
case & component & a & $\mu$ & $\sigma$\\
}
\startdata
50mJy & 1 & 0.03490 & 1.94418 & 0.78346\\
& 2 & 0.08259 & 1.32178 & 0.36315\\
& 3 & 0.23736 & 0.51247 & 0.45345\\
10mJy & 1 & 0.17191 & 2.49829 & 0.67762\\
& 2 & 0.58973 & 1.32355 & 0.41893\\
& 3 & 0.78615 & 0.51484 & 0.43653
\enddata
\end{deluxetable}
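The tabulated fits can be evaluated directly. The sketch below reconstructs the 50mJy distribution $n_\mathrm{r50}(z)$ from the Table \ref{tab1} parameters (central values only, uncertainties omitted) and integrates it over redshift to obtain the implied total source count per square degree.

```python
import numpy as np

def gauss(z, a, mu, sigma):
    """One component: a/(sqrt(2*pi)*sigma) * exp(-(z-mu)^2 / (2*sigma^2))."""
    return a / (np.sqrt(2 * np.pi) * sigma) * np.exp(-(z - mu) ** 2 / (2 * sigma ** 2))

# 50 mJy parameters (a, mu, sigma) from Table 1.
params_r50 = [(0.03490, 1.94418, 0.78346),
              (0.08259, 1.32178, 0.36315),
              (0.23736, 0.51247, 0.45345)]

def n_r50(z):
    """Radio-source redshift number density per square degree (50 mJy case)."""
    return sum(gauss(z, *p) for p in params_r50)

# Total sources per square degree above 50 mJy: integrate over 0 < z < 5.
zgrid = np.linspace(0.0, 5.0, 1001)
total_per_deg2 = np.trapz(n_r50(zgrid), zgrid)
```

Because the amplitudes $a$ are the areas of the individual Gaussians, the integral is close to their sum ($\approx 0.35$ per square degree), slightly reduced by the mass of the lowest-redshift component that falls below $z=0$.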
\subsection{Forecasting of DLAs}\label{sec3.2}
Recent research on the statistical distribution of DLAs over a large redshift range ($0\lesssim z\lesssim5$) was advanced by \cite{rao17mn} (hereafter RTS17). They provided an optical MgII preselection criterion for DLAs, used it to identify 70 DLAs at low redshift ($z\lesssim1.65$) from 369 MgII absorbers, and combined these data with high-redshift ($z\gtrsim1.65$) SDSS-detected DLAs and a modified low-redshift value to give a global redshift number density fitting:
\begin{eqnarray}\label{eq10}
n_\mathrm{DLA1}(z)=(0.027\pm0.007)(1+z)^{(1.682\pm0.200)}.
\end{eqnarray}
Considering their result, we concentrate on the low-redshift scenario relevant to radio telescope detection. We therefore use their original data to obtain a more detailed local low-redshift description, extracting 72 DLAs from the 369 MgII absorbers. The overall number approximately matches the RTS17 case (70/369), though the details of some sub-datasets vary slightly.
We select DLAs whose lower column density estimate satisfies $N_\mathrm{HI,low}>2\times10^{20}\,\mathrm{cm^{-2}}$; allowing equality is not sufficient here to ensure rough consistency with Rao's result. Two main differences in the sub-datasets follow. \cite{rao06apj} presented 41 DLAs from 197 MgII absorbers, but we find 46 DLAs with rest-frame equivalent widths $W_\mathrm{0}^{\lambda2796}>0.6\,\mathrm{\AA}$. They also selected 26 DLAs from 96 MgII absorbers (labeled A and B) in \cite{turns15mn}, while we extract 23 DLAs from the same MgII absorbers.
First, we compute the DLA incidence rate in each 0.1-redshift bin, shown as the yellow star dotted line in Figure \ref{fig6} (all lines referred to in this paragraph appear in Figure \ref{fig6}); it shows a fluctuating ascending trend. We therefore fit it with a simple power law, shown as the yellow circle dotted line. Next, we construct the MgII redshift number density via the grid-KDE, shown as the green square dashed line. The DLA number density is the product of the DLA incidence and the MgII number density at the corresponding redshift, $n_\mathrm{DLA}(z)=i_\mathrm{DLA}(z)\,n_\mathrm{MgII}(z)$. We fit the DLA number density with a power law (magenta cross line) and a 3-Gaussian profile (blue up-triangle line), and also plot the RTS17 result as the red pentagon line.
We also compare the DLA number densities from RTS17 and from \cite{curran16mn,curran21mn}, who provided 88 associated (A-type) and 56 intervening (I-type) DLAs, in Figure \ref{fig7}. From this comparison we conclude that the DLA number density we derive directly from RTS17 may suffer severe bias near $z\gtrsim0$, lowering the detection rate in a way the fit cannot correct on its own. Therefore, given the similarity between our original power law and RTS17 at low redshift (0.1$\sim$0.5), we manually replace the first element of $n_\mathrm{DLA}$ with the estimate $n_\mathrm{DLA}(z=0)=0.026\pm0.003$ \citep{braun12apj} and re-fit a final modified power law (cyan plus line in Figure \ref{fig6}):
\begin{eqnarray}\label{eq11}
n_\mathrm{DLA2}(z)=0.03381z^{0.41206}+0.02545.
\end{eqnarray}
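For reference, both number density fits quoted above can be coded directly from their central values (uncertainties dropped); this is a convenience sketch, not part of the original analysis pipeline.

```python
def n_dla_global(z):
    """RTS17 global fit, Eq. (10): n = 0.027 * (1+z)**1.682 (central values)."""
    return 0.027 * (1.0 + z) ** 1.682

def n_dla_local(z):
    """Modified local power law, Eq. (11)."""
    return 0.03381 * z ** 0.41206 + 0.02545
```

At $z=0$ the local fit returns the adopted zero-point 0.02545, consistent with the $n_\mathrm{DLA}(z=0)=0.026\pm0.003$ prior used to anchor it, while the global fit starts at 0.027.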
The result given here is rough and experimental. We briefly compare it with the RTS17 result (Eq. \ref{eq10}) later, but the latter remains our main reference in the subsequent calculations. The reasons we do not favour our own result are explained in the discussion.
\subsection{Anticipation of DLASs}\label{sec3.3}
Two estimations of the radio-source redshift number density per square degree, $n_\mathrm{r50}(z)$ and the more extended $n_\mathrm{r10}(z)$, are given in Table \ref{tab1} at the end of Section \ref{sec3.1.2}. Two predictions of the DLA number density along the sightline toward each background radio source are presented in Equation \ref{eq10} (the global RTS17 fit, $n_\mathrm{dglo}(z)$) and Equation \ref{eq11} (our local fit, $n_\mathrm{dloc}(z)$). The four functions of redshift are plotted in Figure \ref{fig8}.
Following Eq. \ref{eq9}, we combine the four distributions to derive the redshift distribution of DLASs (or HI 21cm absorption systems), $n_\mathrm{21}(z)$, where the upper limit of the redshift integration for radio sources is set to 5 instead of $\infty$. The upper panel of Figure \ref{fig9} shows the four resulting redshift number densities $n_\mathrm{21}(z)$, and the lower panel gives the corresponding cumulative number $N_\mathrm{21}(z)$.
It is now straightforward to use these results to anticipate the total number of detections of an HI absorption blind survey. We take $n_\mathrm{r50}(z)$ with $n_\mathrm{dglo}(z)$ as the practical (harsh) case, and $n_\mathrm{r10}(z)$ with $n_\mathrm{dglo}(z)$ as the extended (optimistic) one.
FAST covers about 24000 square degrees of sky and redshifts from 0 to 0.352 (1050-1450MHz) \citep{li18imm}, giving a practical estimate of 100 and an extended estimate of 470.
ASKAP covers 33000 square degrees of sky and redshifts from 0.4 to 1.0 (711.5-999.5MHz) \citep{allis22pasa}, giving a practical estimate of 290 and an extended estimate of 1480.
As for the most remarkable instrument, the full SKA (currently SKA1-mid) will cover about 30000 square degrees of sky and redshifts from 0 to 1 \citep{klo15aaska,welt20pasa}, giving a practical estimate of 420 and an extended estimate of 2030.
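Equation \ref{eq9} itself is not reproduced in this excerpt, so the following is only one plausible reading of the combination described above, offered as a sketch: each absorber redshift bin contributes $n_\mathrm{dglo}(z)$ absorbers per unit redshift per sightline, weighted by the number of background sources per square degree beyond $z$ (integrated up to $z=5$), and scaled by the survey area. The FAST-like area and redshift band are the values quoted in the text; everything else is an assumption of this sketch.

```python
import numpy as np

def n_r50(z):
    # 3-Gaussian fit from Table 1 (50 mJy case, central values).
    params = [(0.03490, 1.94418, 0.78346),
              (0.08259, 1.32178, 0.36315),
              (0.23736, 0.51247, 0.45345)]
    return sum(a / (np.sqrt(2 * np.pi) * s) * np.exp(-(z - m) ** 2 / (2 * s ** 2))
               for a, m, s in params)

def n_dla(z):
    # RTS17 global fit, Eq. (10), central values.
    return 0.027 * (1.0 + z) ** 1.682

def total_absorbers(area_deg2, z_lo, z_hi, nz=2000):
    """Assumed reading of Eq. (9): absorbers per unit z per sightline,
    times background sources per deg^2 beyond z, times survey area."""
    z = np.linspace(z_lo, z_hi, nz)
    # Background sources per deg^2 with redshift above each absorber z.
    zs = np.linspace(0.0, 5.0, 4000)
    nr = n_r50(zs)
    cum_below = np.concatenate(
        ([0.0], np.cumsum(np.diff(zs) * 0.5 * (nr[1:] + nr[:-1]))))
    n_bg = np.interp(z, zs, np.trapz(nr, zs) - cum_below)
    return area_deg2 * np.trapz(n_dla(z) * n_bg, z)

# FAST-like survey: ~24000 deg^2, 0 < z < 0.352 (values quoted in the text).
est = total_absorbers(24000.0, 0.0, 0.352)
```

Under these assumptions the FAST-like estimate lands near $10^2$, the same order as the practical number quoted above; the agreement should be treated as a consistency check of the sketch, not a reproduction of the paper's calculation.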
\section{discussion}\label{sec4}
We examine how much different binning choices affect our fitting. The result presented in Figure \ref{fig10} shows that they have little impact, becoming negligible for smaller bins, which confirms the feasibility of our approach.
Among the three radio-source datasets we use (CENSORS, LBDS-Hercules, and CoNFIG-4), most samples are AGNs, and our classification stops there. A more detailed classification would help to better constrain the redshift distributions and radio luminosity functions of different types of radio sources, but given the extremely limited amount of data, this remains beyond our present reach.
The RTS17 dataset provides the most abundant DLA research at low redshift ($z\lesssim1.65$), where ground-based optical telescopes cannot directly receive Lyman-$\alpha$ photons owing to the truncation effect of the Earth's ionosphere. However, their MgII absorption preselection likely carries luminosity and dust-extinction biases, risking the omission of some MgII absorbers, or perhaps of a few DLAs unrelated to MgII absorption. \cite{dutta17mn,dutta19jaa} advanced an absorption-blind, galaxy-selected approach in which one selects a galaxy visually close to a background quasar (a Quasar-Galaxy Pair, QGP). Without any absorption prior toward the quasar, their method is not biased by dust as MgII preselection is, and it yields a more direct DLAS redshift distribution without the integration in Eq. \ref{eq8}. We hope their project will provide more QGP data and distribution information on radio DLASs.
Although we give a statistical description of the DLA redshift distribution, the key challenge in qualifying a DLA as a cosmic acceleration probe is the internal state of the absorber inside or near its host galaxy. Moreover, the local environment of DLAs is, according to \cite{dutta17mn}, very complicated. Not all of them may faithfully trace their local Hubble flow, and long-term observation is needed to verify the frequency stability.
An intriguing phenomenon is that the redshift number density of Curran's A-type DLAs matches the profile of Rao's in the low-redshift region (Figure \ref{fig7}) more closely than the I-type DLAs do. After checking the emission and absorption redshifts and considering the MgII absorption preselection, however, Rao's green curve should represent I-type DLAs. The difference between the two I-type profiles may reflect differences between detection methods: the RTS17 result comes purely from MgII preselection, while Curran's result does not show the clear detection origin of every sample.
The difference in DLA number densities between our original power law and RTS17's global fitting is notable at the very low and high ends of the redshift range $(0,1.65)$. At very low redshift, our DLA number density is heavily affected by the low detection rate of MgII absorbers (and no DLA is detected there), which is why we introduce a zero-point prior value to modify the estimate. At high redshift, our prediction descends because of its genuinely local character (shorter bins and fewer MgII absorbers), while the RTS17 result keeps rising thanks to their global view of DLAs (longer bins and higher number densities beyond redshift 1.65). We therefore regard RTS17's prediction as the better statistical work and use their formula in our prediction. However, both the modified power law and the RTS17 fit risk overestimating the DLA number density at very low redshift through the introduction of $n_\mathrm{DLA}(z=0)$ and its fitting, compared with the realistic, achievable low detection rate at low redshift \citep{darl11apj}. Furthermore, we should recall that closer DLAs usually suffer more severe peculiar motions or accelerations, making them harder to use as probes.
Under the more stringent physical conditions of a preliminary blind radio HI 21cm absorption survey with FAST, our practical prediction of the potential DLA count ($\sim 100$) is about an order of magnitude smaller than the bulk of previous work, such as Zhang's 1500 from a local luminosity function estimate \citep{zhang21mn} and Jiao's 2600 for a decade of CRAFTS observation \citep{zhang19scpma} with a similar calculation \citep{jiao20jcap}. However, ours agrees in magnitude with the estimate of about 100 from \cite{wu15aas}. Our extended estimate ($\sim 470$) is at most one-third of theirs. Additionally, Zhang's result, based on the estimated number of AGNs, probably neglected that only 10\% of AGNs are radio-loud; with that factor included, their result would be very close to our prediction as well. Our results give practical (harsh) and possible (optimistic) detectable numbers for a blind survey of the background. With more stringent limitations, it is natural that the anticipated numbers would continue to decrease, and we might well miss some existing but undetectable DLAs with very faint absorption.
\section{conclusion}\label{sec5}
In this paper, we first make a small but important distinction between the physical $\ddot{a}$ and the observed $\dot{v}_\mathrm{S}$, and emphasise that only $\ddot{a}$ expresses the actual expansion state of the whole universe at a given time, though it cannot be measured straightforwardly.
Subsequently, we separately explore the redshift number density of radio sources via three radio deep-survey datasets containing redshift and 1.4GHz flux density information, and the redshift number density of DLAs (or HI 21cm absorbers) through a low-redshift ($z\lesssim1.65$) MgII-absorption preselected dataset. (1) Introducing the KDE method, we refresh the traditional polynomial description of the radio-source redshift number density with a more flexible and comprehensive form that can be fitted by multi-Gaussian profiles. (2) After checking the procedure of generating the DLA number density from the RTS17 data alone and giving a modified power-law expression, we finally adopt their global fitting. (3) With the re-estimated distributions, we predict new total numbers for an HI 21cm absorption blind survey under more detailed condition settings. For a conservative evaluation, FAST would detect 100 DLAs in total and SKA1-mid 420; for a prospective, optimistic survey (running for a long time with high sensitivity), FAST would detect 470 DLASs and SKA1-mid 2030.
The lack of samples from radio deep surveys (ideally with morphology) and of low-redshift DLAs is currently one of the crucial obstacles to further study. Nevertheless, with more potential DLASs (HI 21cm absorption systems) discovered in the future, there will be a good chance to research the physical nature and environments of DLAs, as well as to revise the background radio sources from a new perspective. All this progress could propel our knowledge of the universe and help us comprehend its expansion and its underlying drivers.
\section*{acknowledgements}
We acknowledge support from the National Natural Science Foundation of China (Grants No. 61802428 and 11929301) and the Ministry of Science and Technology of China (Grant No. 2020SKA0110100).
\bibliography{document}{}
\bibliographystyle{aasjournal}
|
Title:
Linking star formation thresholds and truncations in the thin and thick disks of the low-mass galaxy UGC 7321 |
Abstract: Thin and thick disks are found in most spiral galaxies, yet their formation
scenarios remain uncertain. Whether thick disks form through slow or fast,
internal or environmental, processes is unclear. The physical origin of outer
truncations in thin and thick disks, observed as a drop in optical and
near-infrared (NIR) surface brightness profiles, is also a much debated topic.
These truncations have been linked to star formation (SF) thresholds in
Milky-Way type galaxies, but no such connection has been made for their
low-mass counterparts or in thick disks. Our photometric analysis of the
edge-on galaxy UGC 7321 offers a possible breakthrough. This well-studied
diffuse, isolated, bulgeless, ultra-thin galaxy is thought to be under-evolved
both dynamically and in SF. It is an ideal target to disentangle internal
effects in the formation of thick disks and truncations. Our axial light
profiles from deep far- and near-ultraviolet (UV; GALEX) images, tracing recent
SF, and optical (DESI grz) and NIR (Spitzer 3.6 microns) images, tracing old
stellar populations, enable a detailed identification of an outer truncation in
all probed wavelengths in both the thin and thick disks. After deprojecting to
a face-on view, a sharp truncation signature is found at a stellar density of
roughly 1.5 solar masses per square parsec, in agreement with theoretical
expectations of gas density SF thresholds. The redder colours beyond the
truncation radius are indicative of stellar migration towards the outer
regions. We thus show that thick disks and truncations can form via internal
mechanisms alone, given the pristine nature of UGC 7321. We report the
discovery of a truncation at and above the mid-plane of a diffuse galaxy that
is linked to a SF threshold; this poses a constraint on physically-motivated
disk size measurements among low-mass galaxies.
| https://export.arxiv.org/pdf/2208.13527 |
\title{
Linking star formation thresholds and truncations \\ in the thin and thick disks of the low-mass galaxy UGC~7321
}
\titlerunning{Truncations in the thin and thick disks of UGC~7321}
\author{S. D\'iaz-Garc\'ia\inst{1,2,3,4}
\and
S. Comer\'on\inst{2,1}
\and
S. Courteau\inst{3}
\and
A. E. Watkins\inst{5}
\and
J. H. Knapen\inst{1,2}
\and
J. Rom\'an\inst{1,2,6}
}
\institute{Instituto de Astrof\'isica de Canarias, E-38205, La Laguna, Tenerife, Spain \\
\email{simondiazgar@gmail.com}
\and
Departamento de Astrof\'isica, Universidad de La Laguna, E-38205, La Laguna, Tenerife, Spain
\and
Department of Physics, Engineering Physics \& Astrophysics, Queen's University, Kingston, ON K7L 3N6, Canada
\and
Personal Docente, Consejer\'ia de Educaci\'on, Universidades, Cultura y Deportes del Gobierno de Canarias, E-35002, Las Palmas de Gran Canaria, Spain
\and
Centre for Astrophysics Research, School of Physics, Astronomy
and Mathematics, University of Hertfordshire, Hatfield AL10 9AB, UK
\and
Kapteyn Astronomical Institute, University of Groningen, PO Box 800, 9700 AV Groningen, The Netherlands
}
\date{Received 14 October 2021; accepted 29 August 2022}
\abstract
{
Thin and thick disks are found in most spiral galaxies, yet their formation scenarios remain uncertain.
Whether thick disks form through slow or fast, internal or environmental, processes is unclear.
The physical origin of outer truncations in thin and thick disks, observed as a drop in optical and
near-infrared (NIR) surface brightness profiles, is also a much debated topic.
These truncations have been linked to star formation (SF) thresholds in Milky-Way-type galaxies,
but no such connection has been made for their low-mass counterparts or in thick disks.
Our photometric analysis of the edge-on galaxy UGC~7321 offers a possible breakthrough.
This well-studied diffuse, isolated, bulgeless, ultra-thin galaxy is thought to be under-evolved both dynamically and in SF.
It is an ideal target for disentangling internal effects in the formation of thick disks and truncations.
Our axial light profiles from deep far- and near-ultraviolet ({GALEX}) images,
tracing recent SF, and optical (DESI $grz$) and NIR (\emph{Spitzer} 3.6 $\mu$m) images, tracing old stellar populations,
enable a detailed identification of an outer truncation in all probed wavelengths in both the thin and thick disks.
After deprojecting to a face-on view, a sharp truncation signature is found
at a stellar density of $1.5 \pm 0.5 \, \mathcal{M}_{\odot} \, {\rm pc}^{-2}$,
in agreement with theoretical expectations of gas density SF thresholds.
The redder colours beyond the truncation radius are indicative of stellar migration towards the outer regions.
We thus show that thick disks and truncations can form via internal mechanisms alone, given the pristine nature of UGC~7321.
We report the discovery of a truncation at and above the mid-plane of
a diffuse galaxy that is linked to a SF threshold;
this poses a constraint on physically motivated disk size measurements among low-mass galaxies.
}
\keywords{galaxies: individual UGC~7321 - galaxies: structure - galaxies: star formation}
\section{Introduction}\label{introduction}
The presence of sharp drops in the outer parts of the surface brightness (SB) profiles of some edge-on galaxies
has been known for decades \citep[][]{1970ApJ...160..811F, 1979A&AS...38...15V}.
The formation of these so-called truncations has been studied extensively in galaxies of different inclinations
\citep[e.g.][]{2006A&A...454..759P,2008MNRAS.386.1821F,2012ApJ...758...41R,2012ApJ...759...98C,2016MNRAS.456.1359F}
and redshifts \citep[$z$; e.g.][]{2004A&A...427L..17P}.
Truncations have been linked to a critical gas surface density for star formation \citep[SF;][]{2001ApJ...555..301M}.
Examples include the Milky Way (MW)-like edge-on galaxies NGC~4565 and NGC~5907,
whose large angular sizes allow for a photometric analysis with high spatial resolution.
\citet[][]{2019MNRAS.483..664M} identified truncations in their disks, at heights of up to $\sim 3$~kpc,
from near-ultraviolet (NUV; tracer of recent SF), optical (stacked $gri$), and near-infrared
(NIR; 3.6 $\mu$m, tracer of old stellar populations) wavelengths.
Truncations in these two galaxies lie at a stellar surface density ($\Sigma_{\star}$)
of $\sim 1-2 \, \mathcal{M}_{\odot} \, {\rm pc}^{-2}$ \citep[][]{2019MNRAS.483..664M},
which is consistent with the critical gas surface density ($\sim 3-10 \, \mathcal{M}_{\odot} \, {\rm pc}^{-2}$)
beyond which gas can no longer be transformed into stars \citep[][]{2004ApJ...609..667S}.
In this paper we extend this work into the realm of low-mass galaxies with an analysis of the
truncated diffuse edge-on galaxy UGC~7321
\citep[maximum circular velocity $V_{\rm c}=108 \, {\rm km} \, {\rm s}^{-1}$;][]{2005ApJS..160..149S}.
\citet[][]{2020A&A...633L...3C} and \citet[][]{2020MNRAS.493...87T} proposed the radius corresponding to
the isomass contour at $1 \, \mathcal{M}_{\odot} \, {\rm pc}^{-2}$
as a physically motivated disk size measurement ($R_{1}$) for face-on and moderately inclined galaxies.
This is supported by the aforementioned $\Sigma_{\star}$ values at the truncation of MW-type galaxies.
The use of $R_{1}$ yields a narrower stellar mass–size relation than the half-light radius
\citep[see also][]{2020MNRAS.495...78S,2021MNRAS.505.3135A,2022A&A...660A..69W}.
Similar conclusions were reached by \citet[][]{2015ApJS..219....3M} using the 25.5 mag arcsec$^{-2}$ isophotal radius
from 3.6 $\mu$m images from the \emph{Spitzer} Survey of Stellar Structure in Galaxies \citep[S$^4$G;][]{2010PASP..122.1397S}.
Below, we study $\Sigma_{\star}$ at the edge of a low-mass low-SB galaxy to further constrain its size.
Thick disks --- the faint, large scale-height counterpart of thin disks --- are frequently
identified in edge-on galaxies and
can also present truncations \citep[e.g.][]{2011ApJ...741...28C}.
They are dominant in low-mass galaxies \citep[][]{2006AJ....131..226Y}.
Their formation, whether fast or slow, internal or driven by the environment, is also a point of contention.
Thick disks could be the fossil record of a primordial turbulent disk \citep[e.g.][]{2006ApJ...650..644E}
built through an intense SF episode \citep[][]{2014A&A...571A..58C}
or quickly formed at high $z$ through wet mergers \citep[e.g.][]{2004ApJ...612..894B}.
They could also gradually arise from stars stripped from, and dynamically heated by,
infalling satellites \citep[][]{2003ApJ...597...21A} or via internal
mechanisms such as radial migration \citep[][]{2009MNRAS.399.1145S} and
dynamical heating by giant molecular clouds \citep[][]{1985ApJ...290...75V}.
Thick disk formation can result from the simultaneous interplay of internal and external processes
\citep[][]{2015ApJ...804L...9M,2019A&A...623A..19P}, though we now focus on the former
by studying a galaxy (UGC~7321) without a strong sign of environmental activity.
UGC~7321 is catalogued as an isolated galaxy by \citet[][]{2005A&A...436..443V} and \citet[][]{2017A&A...603A..18H}.
NGC$\,$4204 appears at an offset of $\approx 2^{\circ}$ southwards,
and NGC$\,$4455 lies at an even larger angular distance,
though both galaxies are a factor of $\sim 2$ closer to us than
UGC~7321 \citep[e.g.][]{2006Ap.....49..450K,2011A&A...532A.104N,2017A&A...603A..18H}.
The isolation of UGC~7321 is further confirmed from the low values of the projected surface density
to the third nearest neighbour ($\Sigma_{3}^{\rm A}=0.7$) and its Dahari parameter ($Q=-5.1$) \citep[][]{1984AJ.....89..966D}
in a velocity interval of $\pm \, 500 \, {\rm km \, s}^{-1}$, following \citet[][]{2014MNRAS.441.1992L}.
Inspection of optical images from the Dark Energy Spectroscopic Instrument (DESI)
Legacy Imaging Surveys \citep[][]{2019AJ....157..168D}
does not reveal the presence of any low-SB satellite within an area encompassing a factor of several times the galaxy radius,
down to depths of $\mu_{g}$(AB)$=28.9$ mag arcsec$^{-2}$ and $\mu_{r}$(AB)$=28.2$ mag arcsec$^{-2}$
\citep[$3\sigma$, 10\arcsec$\times$10\arcsec boxes;][]{2020A&A...644A..42R}.
Hence, the main photometric and kinematic properties of UGC~7321 are likely not affected or caused by interactions.
The presence of extremely faint dwarfs cannot be discarded, as their detection might demand even deeper imaging
\citep[see e.g.][]{2017A&A...603A..18H}.
UGC~7321 is also a bulgeless galaxy with a very cold disk \citep[][]{1999AJ....118.2751M},
hinting at a quiescent merger history \citep[][]{2019A&A...628A..58S}.
However, its warped H{\sc\,i} disk is indicative of a possible encounter ($>1.6$ Gyr ago)
with a neighbour, followed by disk cooling \citep[][]{2003AJ....125.2455U},
or angular momentum misalignments between the disk and the dark matter halo \citep[][]{1999ApJ...513L.107D}.
Such a warp could indeed be of intrinsic origin \citep[see further discussion in][]{2017ASSL..434..209B}.
UGC 7321's ultra-thin disk \citep[][]{2017MNRAS.465.3784B} suggests a higher
spin parameter than those of low-SB and regular disk galaxies \citep[][]{2019MNRAS.488..547J}
or a dark matter dominance \citep[][]{2013MNRAS.431..582B}.
Also, ultra-thin galaxies have been found in specific cosmic web environments with a very low density,
as they are less connected with filaments \citep[][]{2017MNRAS.465.3784B}.
Altogether, UGC~7321 appears to be extremely isolated and
under-evolved in terms of dynamics and SF \citep[e.g.][]{1999AJ....118.2751M}.
UGC~7321 has a thick disk. Early studies of its vertical density distribution revealed that a
single sech$^{2/n}$ fit cannot reproduce optical observations; a second component is needed \citep[][]{2000AJ....120.1764M}.
The model of \citet[][]{2019A&A...628A..58S} -- including a gravitationally coupled stellar disk and a H{\sc\,i} disk
in the potential of a dark matter halo \citep[see also][]{2010NewA...15...89B} -- suggests that
$n$ cannot be trusted as a robust parameter, as it varies with radius and fitting range, and thus a double-disk fit may not be necessary.
This model is, on the other hand, based on the relatively shallow optical surface photometry
from \citet[][]{1999AJ....118.2751M}. Even so, a thick disk was already detected in the Matthews et al. $B$-$R$ colour maps.
The deeper 3.6~$\mu$m imaging from the S$^4$G and distance-independent
photometric decomposition models of \citet[][]{2018A&A...610A...5C} confirm that a thick disk is definitely needed to
fit the vertical SB profiles (see their Appendix B).
Comer\'on et al. assumed fitting functions for two stellar disks and
one gaseous isothermal coupled disk in equilibrium, and their fit is used in this article.
In addition, a truncation in both the thin and thick disks is detected in the 3.6~$\mu$m SB axial\footnote{The
axial direction is the mid-plane projection of a vector pointing away from the galaxy centre in the sky plane.}
profiles by \citet[][]{2018A&A...610A...5C}.
Here, we revisit this truncation in NUV, far-ultraviolet (FUV), $grz$, and 3.6 $\mu$m NIR images of UGC~7321
in order to study its connection to SF thresholds.
Such a connection found in thick disks would constrain formation scenarios, given their expected old age.
For comparison with MW-type galaxies, the SB profiles of NGC~4565
\citep[$V_{\rm c}=250 \, {\rm km} \, {\rm s}^{-1}$;][]{2005ApJS..160..149S}
are also revisited; further details are available in \citet[][]{2019MNRAS.483..664M} and Mart\'inez-Lombilla et al. (in prep.).
We adopt redshift-independent distances of $22.28 \pm 3.34$ and $13.43 \pm 2.01$ Mpc for UGC~7321 and
NGC~4565, respectively \citep[][]{2016AJ....152...50T}, assuming a $15\%$ uncertainty \citep[][]{2015ApJS..219....3M}.
\section{Ultraviolet, optical, and near-infrared imaging}\label{data}
To trace old stellar populations \citep[][]{2014RvMP...86...47C,2015MNRAS.452.3209R},
we used 3.6 $\mu$m images from the S$^{4}$G obtained with the Infrared Array Camera \citep[][]{2004ApJS..154...10F}
on board the \emph{Spitzer} Space Telescope \citep{2004ApJS..154....1W}, with an exposure time of 240~s per galaxy.
We also used stacked \text{DESI} $g$, $r$, and $z$ images, with nominal exposure times of 166, 134, and 200 s,
respectively \citep[][]{2019AJ....157..168D}.
We probed recent SF with the Galaxy Evolution Explorer (GALEX) ultraviolet (UV)
images from the catalogue of \citet[][]{2018ApJS..234...18B}.
Specifically, we used NUV ($\lambda_{\rm eff}=2267\,\AA$) and FUV ($\lambda_{\rm eff}=1516\,\AA$) images with long exposure times:
for UGC~7321, 1683 and 2822\,s in FUV and NUV, respectively; for NGC~4565, 1693\,s for both.
The FUV emission traces SF of several tens to $100$~Myr, while the NUV traces
$\lesssim 300$~Myr populations \citep[][]{1998ARA&A..36..189K}.
All images used the masks created by \citet[][]{2018A&A...610A...5C}, and
fluxes in masked regions were interpolated following \citet[][]{2015ApJS..219....4S}.
The sky levels were measured within the same $30\arcsec \times 30\arcsec$ boxes as in \citet[][]{2015ApJS..219....4S},
which are located far from the galaxy.
The median of the median values of each box was then subtracted from the images.
\section{Axial surface-brightness profiles}\label{surface_cuts}
The GALEX, DESI, and S$^{4}$G images were aligned with the Interactive Data Language (IDL) package \texttt{hastrom},
which builds on the \texttt{poly\_2d} function to perform polynomial warping of images.
The 3.6 $\mu$m images were used as reference to match the astrometry.
We then obtained axial SB profiles from the 3.6~$\mu$m, $grz$, FUV, and NUV images (Fig.~\ref{images_UGC07321_NGC4565_all})
by folding the images with respect to the mid-plane and averaging the flux between heights $z=0$ and $z=z_{\rm u}$,
where $z_{\rm u}$ is the height at which $\mu_{3.6 \mu \rm m}$(AB) = 26 mag arcsec$^{-2}$ on average.
The latter was measured from vertical SB profiles presented in \citet[][]{2018A&A...610A...5C}.
The resulting SB values are sensitive to the selection of $z_{\rm u}$ --- which was
chosen to maximise the disk light and minimise the background noise --- and are only used for the detection of the truncations.
The axial profiles were then converted to face-on radial profiles (Sect.~\ref{inclination_correction}).
Following \citet[][]{2006A&A...454..759P}, we used a logarithmic binning in the axial direction:
each range is 1.03 times wider than the previous, where the first data point is located at the S$^{4}$G pixel size ($0.75\arcsec$).
Surface brightness profiles were folded with respect to the galaxy minor axis and averaged
in the axial direction (asymmetries are discussed in Sect.~\ref{7321_truncation}).
Vertical 3.6~$\mu$m SB profiles were decomposed by \citet[][]{2018A&A...610A...5C} into thin and thick disks.
As in their work, we hereafter consider the thin (thick) disk as the region below (above)
the height at which $90\%$ of the light comes from the thick disk (blue lines in Fig. \ref{images_UGC07321_NGC4565_all}).
A characterisation of the point spread function (PSF) may be required to reveal the faintest
stellar structures \citep[][]{2017MNRAS.470..427P,2019A&A...629A..12M,2020MNRAS.491.5317I,2020A&A...644A..42R}.
We did not consider PSF modeling in NIR and optical wavelengths, however, as we are not probing the
dimmest regions of thick disks ($> 26$ mag arcsec$^{-2}$) in vertical SB profiles,
where PSF effects become dominant \citep[see e.g.][]{2018A&A...610A...5C,2019MNRAS.483..664M}.
In order to assess whether the high-altitude UV emission in UGC~7321 can be caused by scattered light,
we estimated the line PSF (LSF) following \citet[][]{2017ApJ...847...14E} \citep[see also][]{2013A&A...556A..54V}.
We used the {GALEX} NUV and FUV PSF, extended with a power law using the parametrisation by \citet[][]{2016ApJ...833...58H}.
We compared the LSFs to vertically integrated SB profiles calculated from the inner $30\arcsec$ and
confirmed that the LSF is much narrower than the UV disk of UGC~7321.
We then convolved the LSF with a sech$^2$ disk with an exponential scale height of $1.5\arcsec$ \citep{2018A&A...610A...5C}
and found negligible differences relative to the LSF. We finally verified that the differences between a radial cut of the PSF
and the LSF arise in the wings. The wings start affecting the LSF
at $\sim 1/100$ of the peak value, which is below our detection threshold.
We thus conclude that the scattered UV light from PSF wings is limited to fainter SB levels than those probed in this work.
The outer truncations and their radii were identified using the break-finding algorithm of \citet[][]{2019A&A...625A..36W}.
Their method looks for significant changes in the mean of the local slope of the SB profile --- obtained
following \citet[][]{2006A&A...454..759P} --- using a cumulative sum (CS) of the difference from the mean.
The location of the truncation corresponds to the maximum of the CS.
The significance of the truncation is tested by bootstrapping the SB profile $10^{5}$ times,
randomly reordering it in the axial direction. The break strength --- measured as max(CS)-min(CS) --- of the real
profile must exceed that of the reordered profiles in $>95\%$ of cases.
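A minimal sketch of this cumulative-sum break finder is given below (the broken synthetic profile, the break location, and the reduced bootstrap count are ours for illustration; we take the extremum of $|{\rm CS}|$ so that the sketch works for either sign convention of the slope):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic SB profile (mag/arcsec^2) whose slope steepens at index 60.
r = np.arange(100)
mu = np.where(r <= 60, 0.05 * r, 0.05 * 60 + 0.2 * (r - 60))

def find_break(profile, n_boot=500):
    """Locate a slope change via the cumulative sum (CS) of the local
    slope minus its mean; assess significance by randomly reordering
    the slope profile (a scaled-down stand-in for 1e5 bootstraps)."""
    slope = np.diff(profile)                 # local slope of the profile
    cs = np.cumsum(slope - slope.mean())     # cumulative sum of deviations
    idx = int(np.argmax(np.abs(cs)))         # candidate break location
    strength = cs.max() - cs.min()           # break strength, max(CS)-min(CS)
    boot = []
    for _ in range(n_boot):
        c = np.cumsum(rng.permutation(slope) - slope.mean())
        boot.append(c.max() - c.min())
    significance = float(np.mean(strength > np.array(boot)))
    return idx, strength, significance

idx, strength, sig = find_break(mu)
# idx lands near the true break (index 60) and sig approaches 1
# for a profile with a genuine slope change.
```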
\subsection{The NIR, optical, and UV truncations of UGC~7321}\label{7321_truncation}
The SB profiles of UGC~7321 have the same outer truncation radius of $131.3 \arcsec$ or $14.2 \pm 2.1 \, {\rm kpc}$
(Fig.~\ref{plot_thin_thick_S4G}) at all probed wavelengths (NUV, FUV, stacked $grz$, 3.6$~\mu$m).
Likewise, we confirmed that truncations hold, at the same radii,
in 3.4 $\mu$m images from the Wide-field Infrared Survey Explorer \citep[WISE;][]{2011ApJ...735..112J}
(Prof. T. Jarrett; private communication).
We verified that the multi-$\lambda$ truncation is not due to a morphological asymmetry:
it holds if, instead of symmetrising the axial profiles,
we study separately the left and right sides along the major axis, as displayed in Fig.~\ref{images_UGC07321_NGC4565_all}.
This is not the case for NGC~4565 \citep[][]{2019MNRAS.483..664M,2020ApJ...897..108G},
which, unlike UGC~7321, is known to be interacting \citep[e.g.][]{2012ApJ...760...37Z}.
The truncation in UGC~7321 appears at the same radius in both the thin and thick disks.
(Since the latter are defined from 3.6~$\mu$m SB decomposition models alone,
the claim for the existence of a truncated thick disk with UV emission sources cannot be formally made;
see Sect.~\ref{thick_disk_internal}.) Dust obscuration in the mid-plane is substantial in the UV (and milder in the NIR),
but correcting for the dust is non-trivial and beyond the scope of this paper.
As truncations were also identified above the mid-plane at all probed wavelengths, where there is less dust,
they cannot be artificially created by dust obscuration
\citep[for a related discussion on the case of NGC~4565, see][]{2019MNRAS.483..664M}.
However, the dust distribution in galaxies with $V_{\rm c}<120 \, {\rm km} \, {\rm s}^{-1}$ is thought to have
a larger scale height and to be more diffuse than in their massive counterparts \citep[][]{2004ApJ...608..189D}.
Nonetheless, it is known that UGC~7321 is not optically thick and that its internal extinction is low \citep[][]{1999AJ....118.2751M}.
Also, the observed central UV SB is > 8 mag dimmer than expected from an extrapolation of
the observed SB profile slope beyond the truncation; such a difference cannot be solely ascribed to dust.
NUV-[3.6] and NUV-[$grz$] colours trace the specific SF rate \citep[e.g.][]{2018ApJS..234...18B},
or the ratio of the SF rate to stellar mass surface densities,
which in turn is related to the star formation efficiency \citep[SFE; e.g.][]{2011MNRAS.415...61S}.
Colours become redder beyond the truncation of UGC~7321
(bottom two panels of Fig.~\ref{plot_thin_thick_S4G}; see Sect.~\ref{discussion_section} for a follow-up discussion).
\subsection{Deprojection of the stellar surface density profiles}\label{inclination_correction}
The SB of a highly inclined galaxy is enhanced by the line-of-sight integration of starlight.
Correcting for this effect is necessary for the photometric analysis of
galaxies~\citep[e.g.][]{1994ApJ...432..114B,2020MNRAS.493...87T,2021ApJ...912...41S}.
Indeed, to extend the connection between SF thresholds and galaxy edges to low-mass galaxies,
the stellar surface density at the truncation radius must be computed from deprojected NIR SB profiles.
The conversion of axial 3.6 $\mu$m SB profiles ($\mu_{3.6\mu \rm m}$) to
face-on radial profiles was done following \citet[][]{2012ApJ...759...98C}.
In short, broken disks are parametrised in the plane of the galaxy through the generalisation
of broken exponential functions \citep[][]{2008AJ....135...20E} that are integrated along the line-of-sight.
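The line-of-sight integration of a broken-exponential disk can be sketched numerically as follows (a minimal illustration with made-up scale lengths and break radius, not the fitting machinery of the cited works; it demonstrates the edge-on SB enhancement noted at the start of this subsection):

```python
import numpy as np

def face_on_intensity(R, I0=1.0, h_in=3.0, h_out=1.0, R_break=10.0):
    """Broken-exponential disk in the galaxy plane (radii in kpc);
    all parameter values here are illustrative."""
    R = np.asarray(R, dtype=float)
    inner = I0 * np.exp(-R / h_in)
    outer = I0 * np.exp(-R_break / h_in) * np.exp(-(R - R_break) / h_out)
    return np.where(R < R_break, inner, outer)

def edge_on_intensity(x, s_max=30.0, n=4001):
    """Integrate the in-plane profile along the line of sight at
    projected radius x for a perfectly edge-on, razor-thin disk."""
    s = np.linspace(-s_max, s_max, n)       # position along the sight line
    R = np.hypot(x, s)                      # in-plane galactocentric radius
    return float(np.sum(face_on_intensity(R)) * (s[1] - s[0]))

# The projected (edge-on) profile is brighter than the face-on one
# at every radius, because many disk annuli pile up along the sight line.
```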
Figure~\ref{plot_thin_thick_deprojections} shows the 1D model of $\mu_{3.6\mu \rm m}$ for UGC~7321,
as well as the deprojected SB radial profiles for the thin and thick disks.
For comparison, the insets display the same profiles for NGC~4565.
After deprojection, the multi-band truncation in UGC~7321 was identified
at $\mu_{3.6 \mu \rm m}$(AB)$\approx 26$ mag arcsec$^{-2}$, which is close to the SB limit of the S$^4$G.
This means that the survey is not ideally suited for the analysis of truncations in face-on galaxies,
but it is sufficiently deep to identify them in edge-on galaxies.
Deprojected 3.6 $\mu$m SB profiles were converted to surface stellar
densities ($\Sigma_{\star}$) following \citet{2013ApJ...771...59M}:
\begin{equation}
{\rm log_{10}}(\Sigma_{\star}/[\mathcal{M}_{\odot}\rm \,kpc^{-2}])=16.76-0.4 \cdot \mu_{3.6\mu \rm m}/[\rm mag\,arcsec^{-2}],
\label{munoz2}
\end{equation}
adopting a stellar mass-to-light ratio $\mathcal{M}_{\star}/L=\Upsilon_{3.6 \rm \mu m}=0.53$ \citep[][]{2012AJ....143..139E}.
A $30\%$ uncertainty on $\mathcal{M}_{\star}/L$ was assumed \citep[see][]{2012ApJ...744...17M}.
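Equation~(\ref{munoz2}) and the assumed $30\%$ mass-to-light uncertainty can be sketched as a short conversion routine (the kpc$^{-2}$ to pc$^{-2}$ unit conversion is ours; the linear propagation of the fractional $\mathcal{M}_{\star}/L$ uncertainty is a simplifying assumption):

```python
def sigma_star(mu_36, dml_frac=0.30):
    """Convert a 3.6 micron SB (AB mag/arcsec^2) to a stellar surface
    density in Msun/pc^2, following
    log10(Sigma / [Msun kpc^-2]) = 16.76 - 0.4 * mu.
    A fractional mass-to-light uncertainty `dml_frac` is propagated
    linearly into Sigma (simplified error treatment)."""
    sigma_kpc2 = 10 ** (16.76 - 0.4 * mu_36)   # Msun / kpc^2
    sigma_pc2 = sigma_kpc2 / 1e6               # 1 kpc^2 = 1e6 pc^2
    return sigma_pc2, dml_frac * sigma_pc2     # (value, uncertainty)

val, err = sigma_star(26.0)
# mu_3.6 = 26 mag/arcsec^2 corresponds to roughly 2.3 Msun/pc^2.
```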
The deprojected stellar density of UGC~7321 at the truncation is $1.5 \pm 0.5 \, \mathcal{M}_{\odot} \, {\rm pc}^{-2}$
(grey lines in Fig.~\ref{plot_thin_thick_deprojections}).
In NGC~4565, the truncation occurs at $3.9 \pm 1.2 \, \mathcal{M}_{\odot} \, {\rm pc}^{-2}$,
which is slightly larger than the $\Sigma_{\star}$ ($\sim 1 \, \mathcal{M}_{\odot} \, {\rm pc}^{-2}$) implied
by \citet[][]{2019MNRAS.483..664M} and \citet[][]{2020MNRAS.493...87T}.
This discrepancy, and the relevance of the local value of $\Sigma_{\star}$ at the truncation,
are discussed in Sect.~\ref{discussion_section}.
\section{Discussion}\label{discussion_section}
\subsection{Galaxy sizes measured from SF thresholds}\label{SF_threholds_truncation}
Many galaxies show SB truncations in their outskirts \citep[e.g.][]{1970ApJ...160..811F, 1979A&AS...38...15V}
at optical and NIR wavelengths. These have been linked either to the maximum angular momentum of the protogalactic
cloud \citep[e.g.][]{1987A&A...173...59V,2012MNRAS.427.1102M} and the presence of disk warps \citep[][]{1987A&A...173...59V},
or to the presence of a SF threshold \citep[][]{1989ApJ...344..685K}.
Following the work of \citet[][]{2001ApJ...555..301M}, \citet[][]{2020MNRAS.493...87T} proposed exploring truncations in H$\alpha$ and UV to further constrain the link between galaxy edges and SF thresholds.
This is tested here using deep UV, optical, and NIR imaging.
The isomass contour at $1 \, \mathcal{M}_{\odot} \, {\rm pc}^{-2}$ has been proposed as a proxy of the
galaxy edge linked to a SF threshold \citep[][]{2020A&A...633L...3C,2020MNRAS.493...87T} (Sect.~\ref{introduction}).
We revisited this in two widely studied nearby edge-on galaxies, a MW-type and a low-mass diffuse galaxy,
with a factor of $2.3$ difference in $V_{\rm c}$.
Their disk truncations are found, in face-on view, at surface stellar densities of $3.9 \pm 1.2$ and
$1.5 \pm 0.5 \, \mathcal{M}_{\odot} \, {\rm pc}^{-2}$, respectively.
\citet[][]{2020MNRAS.493...87T} speculated that the gas density threshold for SF is lower in dwarf galaxies than
in their more massive counterparts, possibly due to a lower SFE in
low-mass galaxies \citep[][]{2008AJ....136.2782L}. This is in agreement with our observations.
We also report the existence of a NUV-[3.6] and NUV-[$grz$] colour upturn beyond the truncation in UGC~7321,
in both the thin and thick disks (Sect.~\ref{7321_truncation}).
A similar reddening was found in the NUV-[$gri$] colour profile of the thin disk of NGC~4565 \citep[][]{2019MNRAS.483..664M}.
These $U-$shaped profiles could be related to trends reported in previous works
for low-inclination galaxies using optical colours \citep[e.g.][]{2008ApJ...683L.103B,2008ApJ...679L..69A}.
They may indicate that the truncation is linked to a drop in SFE, but accurately probing the cold gas
and properly correcting for dust would be needed for a decisive interpretation.
In addition, \citet[][]{2003AJ....125.2455U} argue that the H{\sc\,i} gas surface density in UGC~7321 is
systematically below that required for efficient SF based on the dynamical criterion of \citet[][]{1989ApJ...344..685K}.
Deriving deprojected SFE profiles for edge-on galaxies is highly non-trivial.
Such an effort might demand integral field unit data, which would also allow the recovery of SF histories,
and is beyond the scope of this paper.
We conclude that $R_{1}$ is an accurate proxy of the disk size of UGC~7321.
However, notable uncertainties are inherent to the deprojection of $\Sigma_{\star}$ and hence affect its local value at the truncation.
The calibration of $\mathcal{M}_{\star}/L$ is non-trivial and
can vary for galaxies of different masses and metallicities \citep[e.g.][]{2018ApJ...865..154H}.
Also, the scale height may vary with radius (see further discussion in Sect.~\ref{thick_disk_internal}).
On the other hand, the radius of UGC~7321 is not necessarily representative of other galaxies of similar mass:
ultra-thin galaxies have been claimed to have larger scale-lengths than ordinary disk galaxies
given their higher stellar specific angular momentum \citep[][]{2019MNRAS.488..547J}.
While $R_{1}$ can be used as a disk size definition in massive and faint galaxies,
it is likely that the value of $\Sigma_{\star}$ at the disk edge is not constant for all masses.
\subsection{Thick disks can form through internal processes only}\label{thick_disk_internal}
Whether thick disks form through internal or external, slow or fast, processes is still debated (Sect.~\ref{introduction}).
The analysis of UGC~7321 allows us to focus on internal slow processes alone, as this galaxy is isolated, bulgeless,
and shows no clear signs of environmental activity at the explored SB levels (Sect.~\ref{introduction} and references therein).
The proposed internal processes include in situ thick disk formation,
concomitant with the buildup of the galaxy, as well as secular heating and the radial migration of stars.
Thick disks truncate less frequently than thin disks \citep[][]{2011ApJ...741...28C}.
When they do, the truncation radius is comparable to that in the thin disk.
\citet[][]{2007ApJ...667L..49D} reported that the truncation in the low-mass edge-on galaxy NGC~4244 occurs at the same radius for
young, intermediate age, and old stars, at different heights above the mid-plane (up to 1.5 kpc).
Based on the analysis of resolved stellar populations,
\citet[][]{2012ApJ...753..138R} also concluded that the thin and the thick disk truncate at the same radius
in the face-on spiral galaxy NGC~7793. These observational analyses led to the conclusion that dynamical processes are likely
responsible for the occurrence of truncations in both thin and thick disks,
while SF thresholds were discarded as an explanation for this phenomenon \citep[e.g.][]{2007ApJ...667L..49D}.
If this were indeed the case, the different formation epochs of the two components should place their truncations at different radii.
Interestingly, \citet[][]{2019MNRAS.483..664M} find truncations at and above the mid-plane
(up to $3 \, {\rm kpc}$) at the same radius in NUV, $gri$, and 3.6 $\mu$m images in two MW-like galaxies;
these truncations are compatible with a SF threshold.
NGC$\,$4565's multi-$\lambda$ truncation is, however, not found in SB axial profiles
averaged over the thick disk (Sect.~\ref{7321_truncation}).
This is expected, as massive galaxies likely formed their thick disk $\sim 8-10$ Gyr ago \citep[][]{2021A&A...645L..13C}.
Signatures of a SF threshold in the thick disk could have been washed out due to internal (migration) or
environmental transformations \citep[NGC$\,$4565 has an asymmetric warp and is interacting;][]{1979A&AS...38...15V}.
Whether these arguments can be extended to low-mass galaxies is unclear.
Galaxies with different masses may form thick disks through different paths \citep[][]{2012ApJ...759...98C}.
We have reported the discovery of a truncation linked to a SF threshold in the disk of an edge-on diffuse galaxy.
The sharp SB drop-off is found at the same location in NIR, optical, NUV, and FUV bandpasses (Sect.~\ref{7321_truncation}).
Moreover, it is detected at the same radius in both the thin and thick disks, the latter being defined
from decomposition models of 3.6~$\mu$m images by \citet[][]{2018A&A...610A...5C}.
This challenges thick disk formation models and gives rise to various interpretations.
\citet[][]{2021A&A...645L..13C} predict that the age of the youngest stars in thick discs is $\sim 4-6$~Gyr
for galaxies with a total stellar mass of $\mathcal{M}_{\star}\approx 10^{9} \, \mathcal{M}_{\odot}$.
Thus, low-mass galaxies such as UGC~7321 may host relatively young thick disks whose SF threshold is
preserved and similar to that of the thin disk.
In this sense, our observations are likely a consequence of the pristine nature of UGC~7321 in terms of dynamics and SF,
relative to giant spirals \citep[][]{1999AJ....118.2751M}. The low level of UV radiation
detected in the region of the thick disk is likely associated with the emission from H{\sc\,ii} regions
in the mid-plane of this super-thin galaxy.
In fact, some UV clumps in the thin disk can have a full-width-at-half-maximum as large as $10-15\arcsec$.
The truncation in a thick disk can also result from heating as well as radial and vertical migration,
despite the galaxy being isolated \citep[][]{2013MNRAS.433..976R}.
Thick disks can be made through embedded flares of mono-age stellar populations \citep[][]{2015ApJ...804L...9M}.
Moreover, simulations by \citet[][]{2012A&A...548A.127M} showed that secular evolution (no external perturbations)
can form flared disks due to angular momentum redistribution caused by spirals or bars.
This is consistent with the report of a flared disk in UGC~7321 by \citet[][]{2019A&A...628A..58S},
where the data used probe the disk out to $x=2$ arcmin, while the flaring is predicted beyond this point,
far out in the outskirts (we note the factor of $\sim 2$ difference between their adopted distance and ours).
If such a flare exists, it should happen at a very low SB ($\mu_{3.6 \mu \rm m}$(AB) > 26 mag arcsec$^{-2}$),
and its contribution to the thick disk fit would not be large. It is not noticeable from the images presented in Sect.~\ref{data}.
The fits of the outermost vertical cuts of 3.6~$\mu$m SB performed by \citet[][]{2018A&A...610A...5C}
indeed yielded $\sim 20\%$ larger scale-heights, for both thin and thick disks, than those of the innermost cuts\footnote{
Inner and outer vertical cuts refer to those averaged over axial ranges
$0.2 R_{25} < x < 0.5 R_{25}$ and $0.5 R_{25} < x < 0.8 R_{25}$, respectively, where
$R_{25}$ is the isophotal 25 mag arcsec$^{-2}$ radius in the $B$ band \citep[for further details, see][]{2012ApJ...759...98C}.
For UGC~7321, inner and outer axial ranges correspond to roughly [3-7.5] kpc and [7.5-12.5] kpc, respectively.}.
That is, the scale heights moderately increase with increasing axial distance.
(Unfortunately, these outer fits did not fulfil the Comer\'on et al. quality criteria in the decompositions,
but the thick disk component is also present in them.)
We conclude that the outer parts of UGC~7321's thick disk may be biased by the light of a flared thin disk,
but this effect should be negligible, as the thick disk already dominates at
3.6 $\mu$m SB levels as high as $\mu_{3.6 \mu \rm m}$(AB) $\approx 24$ mag arcsec$^{-2}$ \citep[][]{2018A&A...610A...5C}.
Bars do play an important role in the redistribution of material throughout the
disks of massive galaxies \citep[e.g.][]{2006ApJ...645..209D,2016A&A...596A..84D}; this is the case for NGC~4565,
which hosts a prominent peanut-shaped bulge \citep[][]{1986AJ.....91...65J}.
Bars are also more frequent than previously thought in low-mass galaxies \citep[][]{2016A&A...587A.160D}.
\citet[][]{2003A&A...409..485P} provided evidence for peanut-shaped outer isophotes in UGC~7321 from an $R-$band image.
A bar in UGC~7321, as also inferred from the analysis of its H{\sc\,i} position-velocity diagram \citep[][]{2003AJ....125.2455U},
may also be responsible for its stellar migration \citep[but see][]{2014MNRAS.439..929G}.
Thus, UGC~7321's thick disk may be linked to bar-induced internal dynamics.
Likewise, the outer reddening of UGC~7321 (colour $U-$shape; Fig.~\ref{plot_thin_thick_S4G})
is likely associated with SF thresholds followed by bar-driven outer migrations of stars, as in NGC~4565 \citep[][]{2019MNRAS.483..664M}.
\section{Conclusions}\label{summarysection}
We have reported the discovery of an outer truncation in the SB profile of the diffuse ultra-thin edge-on galaxy UGC~7321
that is seen in UV ({GALEX} FUV and NUV), optical (DESI \emph{grz}), and NIR (\emph{Spitzer} 3.6 $\mu$m) images.
The truncation, detected at the same radius in both the thin and thick disks,
hints at similar or interconnected (migration) formation mechanisms for both components.
The truncation occurs at a deprojected stellar surface density of $1.5 \pm 0.5 \, \mathcal{M}_{\odot} \, {\rm pc}^{-2}$,
in agreement with the theoretical gas density thresholds for SF.
The redder colours beyond the truncation are indicative of the radial migration of stars to the galaxy's outskirts.
As UGC~7321 is isolated and has no strong signs of accretion,
our findings are consistent with its thick disk and truncations being formed via internal mechanisms alone.
\begin{acknowledgements}
We thank the anonymous referee for a constructive and detailed report.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme
under the Marie Sk$\l$odowska-Curie grant agreement No 893673,
from the State Research Agency (AEI-MCINN) of the Spanish Ministry of Science and Innovation
under the grant ``The structure and evolution of galaxies and their central regions''
with reference PID2019-105602GB-I00/10.13039/501100011033,
and under the grant ``Thick discs, relics of the infancy of galaxies'' with reference PID2020-113213GA-I00,
and from IAC project P/300724, financed by the Ministry of Science and Innovation, through the State Budget and by the
Canary Islands Department of Economy, Knowledge and Employment, through the Regional Budget of the Autonomous Community.
S.C. is especially grateful to the Natural Sciences and Engineering Research Council of Canada, the Ontario Government,
and Queen's University for support through various scholarships and grants.
A.W. acknowledges support from the STFC [ST/S00615X/1].
J.H.K. acknowledges support from the ACIISI, Consejer\'{i}a de Econom\'{i}a,
Conocimiento y Empleo del Gobierno de Canarias and the
European Regional Development Fund (ERDF) under grant with reference PROID2021010044.
J.R. acknowledges funding from University of La Laguna through the Margarita Salas Program from the Spanish Ministry of Universities ref.
UNI/551/2021-May 26, and under the EU Next Generation.
This research makes use of python (\href{http://www.python.org}{http://www.python.org})
and IDL (\href{https://www.harrisgeospatial.com/docs/using_idl_home.html}{https://www.harrisgeospatial.com/docs/using$\_$idl$\_$home.html}).
\end{acknowledgements}
\bibliographystyle{aa}
\bibliography{bibliography}
|
Title:
Dark matter substructures affect dark matter-electron scattering in direct detection experiments |
Abstract: Recent sky surveys have discovered a large number of stellar substructures.
It is highly likely that there are dark matter (DM) counterparts to these
stellar substructures. We examine the implications of DM substructures for
electron recoil (ER) direct detection (DD) rates in dual phase xenon
experiments. We have utilized the results of the LAMOST survey and considered a
few benchmark substructures in our analysis. Assuming that these substructures
constitute $\sim 10\%$ of the local DM density, we study the discovery limits
of DM-electron scattering cross sections considering a one kg-year exposure and
1, 2, and 3 electron thresholds. With this exposure and these thresholds, it is
possible to observe the effect of the considered DM substructures for the
currently allowed parameter space. We also explore the sensitivity of these
experiments in resolving the DM substructure fraction. For all the considered
cases, we observe that DM with mass $\mathcal{O}(10)\,$MeV has a better
prospect of resolving the substructure fraction than
$\mathcal{O}(100)\,$MeV scale DM. We also find that, within the currently
allowed DM-electron scattering cross sections, these experiments can resolve the
substructure fraction (provided it has a non-negligible contribution to the
local DM density) with good accuracy for $\mathcal{O}(10)\,$MeV DM masses with
a one-electron threshold.
| https://export.arxiv.org/pdf/2208.14471 |
\section{Introduction}
\label{sec:intro}
It is important to leave no stone unturned in the search for the DM identity. Numerous astrophysical and cosmological observations provide irrefutable evidence for DM \cite{Bertone:2004pz,Lin:2019uvt,Slatyer:2021qgc,Planck:2018vyg}. Despite this overwhelming evidence for the gravitational interactions of DM, we do not yet know whether DM interacts via other forces. Numerous experiments have been performed to discover non-gravitational signatures of DM, but none has yet yielded a positive result. DD experiments play a pivotal role in this quest. Typical nuclear recoil (NR) DD experiments, searching for weak-scale DM, have made extraordinary progress\,\cite{SuperCDMS:2015eex, Akerib:2016lao, Cui:2017nnn, DarkSide:2018bpj, XMASS:2018bid, Aprile:2018dbl, EDELWEISS:2019vjv, Amare:2019jul, CRESST:2019jnq, CDEX:2019hzn, Adhikari:2018ljm, PandaX-4T:2021bab, DEAPCollaboration:2021raj, Schumann:2019eaa, DelNobile:2021wmp, Cooley:2021rws, Aalbers:2022dzr}. However, such experiments lose sensitivity to non-relativistic ambient sub-GeV DM because of the kinematic mismatch between a light DM particle and a heavy target nucleus (see for instance \cite{Battaglieri:2017aum, Kahn:2021ttr, Mitridate:2022tnv, Essig:2022dfa}).\footnote{Alternatively, one can boost non-relativistic light DM through scattering with energetic particles to overcome the threshold barrier (see, e.g., \cite{Bringmann:2018cvk, Ema:2018bih, Cappiello:2019qsw, An:2017ojc, Wang:2021jic, Granelli:2022ysi, Li:2022jxo, Calabrese:2022rfa, Calabrese:2021src}) or utilize the Migdal effect\,\cite{Ibe:2017yqa, Dolan:2017xbu, Bell:2019egg, XENON:2019zpr, Essig:2019xkx, Dey:2020sai, Knapen:2020aky, Bell:2021zkr, Bell:2021ihi, Chatterjee:2022gbo, DarkSide:2022dhx}.} To fully characterize the particle properties of DM, it is important to probe its coupling to electrons as well.
A promising strategy to search for such DM interactions is to consider its scattering off electrons in the target material\,\cite{Dedes:2009bk, Kopp:2009et, Essig:2011nj, Graham:2012su, Essig:2012yx, Lee:2015qva, Essig:2015cda, Roberts:2016xfw, Essig:2017kqs, Emken:2019tni, Catena:2019gfa, Bloch:2020uzh, Bose:2021cou}. In contrast with nuclear scattering, the maximum sensitivity to DM-electron interactions is typically achieved at a lower DM mass. For example, assuming a xenon target and a momentum-independent scattering cross section, the maximum sensitivity is achieved at $\sim$ 30 GeV for DM-nuclear scattering and $\sim$ 200 MeV for DM-electron scattering.
An ambient DM particle of mass $\mathcal{O}(10)$ MeV will have a kinetic energy of $\mathcal{O}(10)$ eV, which is in the ballpark of atomic ionization energies or the band gap energy of semiconductors. This indicates that sub-GeV DM can ionize an electron from an atomic shell or facilitate an electron's transition from the valence band to the conduction band. Many experiments, such as XENON \cite{XENON:2019gfn}, SuperCDMS \cite{SuperCDMS:2018mne}, DarkSide-50 \cite{DarkSide:2018ppu, DarkSide-50:2022hin}, DAMIC \cite{DAMIC:2019dcn}, EDELWEISS \cite{EDELWEISS:2020fxc}, SENSEI \cite{Crisler:2018gci,SENSEI:2020dpa}, and PandaX-II \cite{PandaX-II:2021nsg}, are searching for signatures of such a phenomenon.
The boundedness of electrons in the target material makes DM-electron scattering inelastic. The DM velocities required to produce a measurable recoil are rather high and lie near the tail of the DM velocity distribution (assuming that it has a Maxwell-Boltzmann form). This tail is quite sensitive to the choice of the DM velocity distribution \cite{Hryczuk:2020trm, Buch:2020xyt, Radick:2020qip, Maity:2020wic}. The present DM velocity distribution depends on the galactic structure formation history. In the well-known $\Lambda$CDM (Lambda Cold Dark Matter) paradigm, bottom-up hierarchical structure formation is a generic feature \cite{10.1093/mnras/183.3.341,Freeman:2002wq, Vogelsberger:2014kha, Springel:2017tpz, Feldmann:2022qvd, Somerville_2015, Vogelsberger:2019ynw}. Larger galaxies are formed from the mergers of smaller galaxies (although the merger of similar-mass galaxies may also lead to a bigger galaxy \cite{Belokurov_2018, Helmi_2018}). The gravitational field of the Milky Way (MW) is non-uniform, and this non-uniformity gives rise to strong tidal forces. When smaller galaxies accrete onto the MW, these tidal forces disrupt them, stripping various components (including DM) of the infalling galaxies. For an ancient merger, the DM component has had time to virialize within the MW, which may lead to an isotropic, isothermal DM halo. This scenario is often referred to as the Standard Halo Model (SHM), with the Maxwell-Boltzmann distribution representing the DM velocity distribution. However, for relatively recent mergers, there has not been sufficient time for virialization, resulting in plenty of substructures in both the stellar and the DM components\,\cite{Ibata:1994fv, Helmi:1999ks, Ibata:2000ys, Belokurov:2006kc, Lisanti:2011as, Myeong:2017skt, myeong2018shards, Necib:2018iwb, Necib:2019zka, Yuan_2020, 2022arXiv220102404S, 2022arXiv220102405R, 2022arXiv220611248D}.
The presence of such additional stellar substructures (beyond the MW stars) have been detected by different sky-surveys like Gaia \cite{Ahn_2012,Myeong:2017skt,Belokurov_2018,2018, 2021ApJ...912L..30Z, 2022arXiv220611248D}, SDSS \cite{Myeong:2017skt}, LAMOST\,\cite{2018ApJS..238...16L, Yan:2022arj}, etc.,\,and have also been predicted in various N-body simulations\,\cite{Diemand:2008in, Vogelsberger:2008qb, Kuhlen:2012fz, Kuhlen:2012ft, Necib:2018igl, Simpson_2019, Helmi_2020, https://doi.org/10.48550/arxiv.2208.08443, https://doi.org/10.48550/arxiv.2208.11135}.
Since these stellar substructures arise from merged galaxies, a DM counterpart must be associated with them too (because DM was also present in the accreted galaxies before their merger). Whether DM follows the stellar distribution is a matter of debate. For example, the stellar part of the Sagittarius stream might not substantially overlap with the Solar neighborhood, yet its extended DM counterpart may overlap with our local position \cite{Purcell_2012}. The similarities between DM and stellar distributions in debris flow have been pointed out in Refs.\,\cite{Lisanti:2011as, Lisanti:2014dva}. The dwarf spheroidals that give rise to the S2-stream are believed to have had similar DM and stellar shapes \cite{OHare:2019qxc} before they merged with the MW. The resemblance between stellar and DM substructures is therefore not yet settled; more dedicated studies are needed to understand it. In any case, the presence of this DM may manifest itself in the local DM density and velocity distribution, causing the latter to deviate from the usual Maxwell-Boltzmann (MB) distribution with a cutoff at the galactic escape velocity\,\cite{Goodman:1984dc, Drukier:1986tm}. The DM DD rate depends strongly on the local DM velocity distribution\,\cite{Vergados:2002hc, Green:2003yh, Ling:2009eh, McCabe:2010zh, Fox:2010bz, Fox:2010bu, Catena:2011kv, Peter:2011eu, Frandsen:2011gi, Green:2011bv, Gondolo:2012rs, DelNobile:2013cta, Mao:2013nda, Bozorgnia:2013pua, Fox:2014kua, Feldstein:2014gza, Bozorgnia:2016ogo, Gelmini:2016pei, Laha:2016iom, Benito:2016kyp, Gelmini:2017aqe, Ibarra:2017mzt, Wu:2019nhd, Bozorgnia:2017brl, Fowlie:2017ufs, Ibarra:2018yxq, Herrero-Garcia:2019ntx, Bozorgnia:2019mjk, Poole-McKenzie:2020dbo, Lawrence:2022niq}, and a different velocity distribution can lead to a large change in our theoretical expectations.
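The MB speed distribution with its escape-velocity cutoff can be sketched numerically as follows ($v_0 = 220$ km/s and $v_{\rm esc} = 544$ km/s are typical SHM values that we assume here for illustration; they are not taken from this paper):

```python
import numpy as np

V0, VESC = 220.0, 544.0  # km/s; typical SHM values (our assumption)

def f_shm(v):
    """Galactic-frame SHM speed distribution f(v) ~ v^2 exp(-v^2/v0^2),
    truncated at the escape velocity and normalised numerically."""
    v = np.asarray(v, dtype=float)
    f = v**2 * np.exp(-(v / V0) ** 2)
    f[v > VESC] = 0.0                         # hard cutoff at v_esc
    grid = np.linspace(0.0, VESC, 20000)      # numerical normalisation
    norm = np.sum(grid**2 * np.exp(-(grid / V0) ** 2)) * (grid[1] - grid[0])
    return f / norm

v = np.linspace(0.0, 700.0, 7001)
fv = f_shm(v)
# fv integrates to ~1 below v_esc and vanishes identically above it;
# the high-speed tail controlling sub-GeV ER rates is the region near v_esc.
```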
The effects of these substructures have been extensively studied in the literature in the context of typical NR DD experiments\,\cite{Gelmini:2000dm, Stiff:2001dq, Freese:2003tt, Freese:2003na, Bernabei:2006ya, Savage:2006qr, Peter:2013aha, OHare:2017rag, OHare:2018trr, Evans:2018bqy, Buckley:2019skk, Ibarra:2019jac, OHare:2019qxc, Buch:2019aiw, DEAP:2020iwi}. This paper aims to study the effect of these DM substructures on ER DM DD experiments with xenon-based detectors. Such a study has been conducted for semiconductor target materials in Ref.\,\cite{Buch:2020xyt}. It was shown in Ref.\,\cite{Maity:2020wic} that the effect of such astrophysical uncertainties is quite prominent for xenon targets. Further, in large regions of the DM parameter space, the sensitivity of xenon targets is a few orders of magnitude stronger than that of semiconductor-based experiments\,\cite{XENON:2019gfn, Crisler:2018gci, SENSEI:2020dpa, PandaX-II:2021nsg}, implying that xenon detectors will probably play a big role in discovering DM-electron scattering. These facts motivate the detailed study in this manuscript, where we highlight the importance of considering DM substructures while searching for DM-electron scattering.
It has been argued in Refs.\,\cite{Ahn_2012, Myeong:2017skt,Belokurov_2018,2018,Necib:2018iwb, Necib:2019zka, Yuan_2020, Ou:2022wvr} that there are plenty of stellar substructures in the local halo. We utilize the results of the LAMOST survey \cite{2018ApJS..238...16L} to present the effect of DM substructure \cite{Yuan_2020} on DM ER experiments. Without loss of generality, we demonstrate our results for a few benchmark substructures; we expect broadly similar results for other relevant substructures. In addition, our formalism will be useful for future analyses of DM ER experiments with xenon-based targets. Since it is not currently known how much these substructures contribute to the local DM density, we adopt two approaches: an aggressive one and a conservative one, in which the DM substructure constitutes $100\%$ and $10\%$ of the local DM density, respectively. Our choices are motivated by Ref.\,\cite{2022arXiv220102405R}, which states that stellar substructures near the Sun may constitute $\gtrsim 20\%$ of the stellar halo. We also forecast the ability of xenon targets to resolve the fraction of DM substructure components for a few benchmark choices of the DM parameter space.
The rest of the paper is organized as follows. In Sec.\,\ref{sec:DMe}, we briefly review the DM-electron scattering in xenon-based detectors. In Sec.\,\ref{sec:DMSS}, we describe DM substructures that we have considered in our analysis. In Sec.\,\ref{sec:DMeSS}, we present our results along with the statistical methodology, and conclude in Sec.\,\ref{sec:conclusion}.
\section{DM-electron scattering at xenon}
\label{sec:DMe}
If an ambient DM particle scatters off an electron in xenon, it may transfer its kinetic energy to the electron, liberating it from the atom. For example, a non-relativistic ambient DM particle of mass $\sim 100$ MeV carries kinetic energy $\sim 50$ eV (in the Solar system), which is in the ballpark of the electron ionization energy of xenon.
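The estimate above follows from $E_{\rm kin} = \tfrac{1}{2} m_\chi v^2$ with the typical halo speed $v \sim 10^{-3}c$; a trivial numerical check (natural units, $c=1$):

```python
# Back-of-envelope check of the kinetic-energy estimate above: a 100 MeV
# particle at the typical galactic virial speed v ~ 1e-3 c.
m_chi_eV = 100e6          # DM mass in eV
v = 1e-3                  # typical halo speed in units of c
ke_eV = 0.5 * m_chi_eV * v * v   # = 50 eV, comparable to Xe ionization energies
```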
In a two-phase xenon time projection chamber, DM particles interact with the liquid Xe target material, and the signal topology depends on the interaction type (electronic or nuclear). For DM-nucleus interactions, the deposited energy produces excited atoms, electron-ion pairs, and some unobservable heat. Some free electrons recombine with ionized atoms to generate additional excited atoms. Both the directly produced excited atoms and those formed through electron-ion recombination emit characteristic scintillation light. This prompt scintillation light, known as S1, is detected by photomultiplier tubes (PMTs) immersed in the liquid Xe at the bottom. Under an external electric field, the remaining electrons drift through the liquid xenon and cross the liquid-gas interface, producing proportional scintillation in the upper PMTs; this signal is known as S2. For ER interactions, almost all the ionized electrons are collected at the upper PMTs through scintillation, producing a dominant S2 signal with a subdominant S1 signal. Hence ER interactions manifest through a large S2/S1 ratio compared to the NR case\,\cite{DiGangion:2021thw}.
Let us consider a DM particle of mass $m_{\chi}$ and velocity $v$ scattering off an electron in the xenon atom. Energy conservation implies\,\cite{Bloch:2020uzh}
\begin{equation}
\label{eq:vmin}
v_{\rm min}=\frac{q}{2 m_{\chi}}+\frac{\Delta E_e}{q},
\end{equation}
where $v_{\rm min}$ is the minimum DM velocity required to get an ER of $\Delta E_e$, and $q$ is the momentum transfer to the electron. Note that $\Delta E_e$ must be greater than the ionization energy of the corresponding shell $E_{n,l}$ to have an observable recoil $E_e$, i.e., $\Delta E_e = E_{n,l} + E_e$. The differential DM-electron scattering event rate can be written as \cite{Essig:2017kqs}
\begin{equation}
\label{eq:rateXe}
\frac{dR}{d\,{\rm ln}\, E_e}=N_T\frac{\rho_{\chi}}{m_{\chi}}\,\sum_{nl} \frac{\bar{\sigma}_e}{8\mu_{\chi e}^2} \int q dq \,F_{\rm DM}(q)^2\, |f_{\rm ion}^{n,l}(k^{\prime},q)|^2 \,\eta\left(v_{\rm min}(k^{\prime},q),t\right),
\end{equation}
where $N_T$ is the number of electrons in the target, $\rho_{\chi}$ denotes the local DM density, and $\mu_{\chi e}$ is the DM-electron reduced mass. The DM-electron scattering cross section at the reference momentum transfer $q=\alpha m_e$ is denoted by $\bar{\sigma}_e$. The DM form factor, $F_{\rm DM}(q)$, encodes the momentum dependence of the cross section. The ionization form factor is represented by $f_{\rm ion}^{n,l}$, with $n$ and $l$ the principal and angular momentum quantum numbers, respectively. The recoil momentum is denoted by $k^{\prime}=\sqrt{2 m_e E_e}$, and $t$ describes the time dependence of the recoil signal. The quantity $\eta$, also called the mean inverse speed, depends on the $i^{\rm th}$ DM velocity distribution as
\begin{equation}
\label{eq:eta}
\eta^i(v_{\rm min},t)=\int_{v_{\rm min}}^{\infty} \frac{f_{\rm lab}^i(\mathbf{v},t)}{v} d^3v,
\end{equation}
where $f_{\rm lab}^i$ is the velocity distribution of the $i^{\rm th}$ DM component in the detector's rest frame at the location of the Earth. It is obtained by boosting the galactic-frame DM velocity distribution ($f_{\rm gal}$):
\begin{equation}
\label{eq:galtolab}
f_{\rm lab}^i(\mathbf{v},t) = f_{\rm gal}^i(\mathbf{v+v}_{\rm E}(t)),
\end{equation}
where $\mathbf{v}_{\rm E}$ is the Earth's velocity in the galactic rest frame:
\begin{equation}
\mathbf{v}_{\rm E}(t)=\mathbf{v}_{\rm LSR}+\mathbf{v}_{\rm pec}+\mathbf{u}_{\rm E}(t).
\end{equation}
Here $\mathbf{v}_{\rm LSR}$ is the velocity of the local standard of rest (LSR) and $\mathbf{v}_{\rm pec}$ is the peculiar velocity of the Sun with respect to the LSR. Conventionally these are expressed in galactic rectangular coordinates as $\mathbf{v}_{\rm LSR}=(0,v_0,0)$ and $\mathbf{v}_{\rm pec}=(11.1 \pm 1.5, 12.2 \pm 2, 7.3 \pm 1)$ km/s \cite{Sch_nrich_2010}. Following Refs.\,\cite{Evans:2018bqy, Maity:2020wic}, we fix $v_0=233$ km/s throughout the paper. The uncertainties associated with $v_0$ and other astrophysical parameters have been studied in the context of ER in Refs.\,\cite{Hryczuk:2020trm, Radick:2020qip, Maity:2020wic}. The time-dependent Earth's velocity is represented by $\mathbf{u}_{\rm E}(t)$, which leads to the well-known annual modulation of the signal; its expression can be found in \cite{McCabe:2013kea}.
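The pieces of $\mathbf{v}_{\rm E}(t)$ combine as follows; a minimal sketch that omits the $\mathcal{O}(30)$ km/s annual term $\mathbf{u}_{\rm E}(t)$ and shows that the mean Solar motion, which sets the DM wind, is $\sim 246$ km/s:

```python
# Time-averaged lab-frame boost: v_LSR + v_pec in galactic rectangular
# coordinates, with v_0 = 233 km/s as in the text. The annual term u_E(t)
# is omitted in this sketch.
v_LSR = (0.0, 233.0, 0.0)   # km/s, local standard of rest
v_pec = (11.1, 12.2, 7.3)   # km/s, Solar peculiar velocity
v_sun = tuple(a + b for a, b in zip(v_LSR, v_pec))
speed_sun = sum(c * c for c in v_sun) ** 0.5   # ~ 246 km/s
```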
The differential event rate given in Eq.\,\eqref{eq:rateXe} can be divided into three parts. The particle physics input is encoded in $\bar{\sigma}_{e}$ and $F_{\rm DM}$. Throughout, we perform a model-independent analysis with two choices of $F_{\rm DM}$: $1$ and $1/q^2$, which appear in large classes of particle physics models \cite{ Holdom:1985ag, Borodatchenkova:2005ct, Chu:2011be, Lin:2011gj, Izaguirre:2015yja, Alexander:2016aln, Boehm:2020wbt, 10.21468/SciPostPhysLectNotes.43}. We present the results for $F_{\rm DM}= 1$ in the main text and those for $F_{\rm DM}= 1/q^2$ in the appendix. The atomic physics part, symbolized by $f_{\rm ion}^{n,l}$, signifies the ionization probability; its numerical values are adopted from {\tt QEdark} \cite{Essig:2015cda, Essig:2017kqs, QEdark}. The local DM density and $\eta$ constitute the astrophysical inputs.
The galactic DM velocity distribution is traditionally assumed to be a Maxwell-Boltzmann (MB) distribution truncated at the galactic escape velocity ($v_{\rm esc}$)
\begin{equation}
f_{\rm gal}^{\rm MB}(\mathbf{v})=
\frac{1}{(2 \pi \sigma_v^2)^{3/2} N_{\rm esc}^{\rm MB}}\exp{\left(-\frac{|\mathbf{v}|^{2}}{2 \sigma_{v}^{2}}\right)} \Theta(v_{\rm esc}-|\mathbf{v}|) \,.
\label{eq:fvSHM}
\end{equation}
The isotropic velocity dispersion $\sigma_{v}$ is related to $v_0$ by $v_0=\sqrt{2} \sigma_v$. The normalization constant is $N_{\rm esc}^{\rm MB}={\rm erf}(z)- 2\pi^{-1/2}z e^{-z^2}$ with $z=v_{\rm esc}/v_0$, where erf is the error function. Throughout the discussion, the galactic escape velocity ($v_{\rm esc}$) is fixed to $528$\,km/s \cite{Evans:2018bqy, Deason_2019}. While the MB distribution may describe an equilibrated DM component (although hydrodynamical simulations indicate that it may not adequately describe even the smooth DM halo), the equilibration condition is not met for relatively recent mergers of the MW with other galaxies. Such recent mergers leave unique signatures, in both velocity and position space, called substructures, whose existence is also observed in various N-body simulations. When a galaxy accretes onto the Milky Way, its stellar component carries several tell-tale signatures: stellar streams, stellar shards, and stellar debris flows\,\cite{Ibata:1994fv, Helmi:1999ks, Ibata:2000ys, Belokurov:2006kc, Lisanti:2011as, Myeong:2017skt, myeong2018shards, Necib:2018iwb, Necib:2019zka, Yuan_2020, 2022arXiv220102405R, 2022arXiv220102404S, 2022arXiv220611248D}.
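The truncated MB distribution of Eq.\,\eqref{eq:fvSHM} and its normalization can be verified numerically; a self-contained sketch (with the $v_0$ and $v_{\rm esc}$ values quoted above) checks that the speed distribution integrates to unity:

```python
import math

# Truncated Maxwell-Boltzmann of Eq. (fvSHM), with v0 = 233 km/s and
# v_esc = 528 km/s as in the text; the galactic-frame speed distribution
# 4*pi*v^2 f_gal(v) should integrate to unity.
v0, v_esc = 233.0, 528.0
sigma_v = v0 / math.sqrt(2.0)
z = v_esc / v0
N_esc = math.erf(z) - 2.0 * z * math.exp(-z * z) / math.sqrt(math.pi)

def f_speed(v):
    """Galactic-frame speed distribution 4*pi*v^2 f_gal(v), in (km/s)^-1."""
    if v > v_esc:
        return 0.0
    norm = (2.0 * math.pi * sigma_v**2) ** 1.5 * N_esc
    return 4.0 * math.pi * v * v * math.exp(-v * v / (2.0 * sigma_v**2)) / norm

# Riemann-sum check of the normalization.
dv = 0.05
total = sum(f_speed(i * dv) * dv for i in range(int(v_esc / dv) + 1))
```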
Recent results of surveys like Gaia, SDSS, and LAMOST indeed indicate the presence of these stellar substructures. Combining the effect of a substructure with the SHM, the total mean inverse speed becomes
\begin{equation}
\eta(v_{\rm min},t)= \int_{v_{\rm min}}^{\infty} \frac{1}{v} \left[ (1-\delta) f_{\rm lab}^{\rm MB}(\mathbf{v},t) + \delta f_{\rm lab}^{\zeta_i}(\mathbf{v},t)\right] d^3v,
\label{eq:etacombine}
\end{equation}
where $f_{\rm lab}^{\zeta_i}(\mathbf{v},t)$ refers to the substructure velocity distribution (discussed in Sec.\,\ref{sec:DMSS}) and $\delta$ represents the fraction of the local DM density that the corresponding component constitutes.\footnote{If the substructures contribute different fractions, then instead of a single $\delta$ there will be a set of such $\delta$'s. For simplicity, we ignore the effect of multiple substructures.} In what follows, we consider the effect of these substructures on the DM velocity distribution and on the ER DD rate in liquid xenon experiments.
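Eq.\,\eqref{eq:etacombine} can be evaluated by Monte Carlo, since $\eta(v_{\rm min})$ is just the expectation of $\Theta(v-v_{\rm min})/v$ over the lab-frame distribution. The sketch below is illustrative only: it mixes the MB halo with one substructure using Sequoia-like mean and dispersion values from Table \ref{tab:subs}, and drops the annual term $\mathbf{u}_{\rm E}(t)$:

```python
import math
import random

# Monte Carlo sketch of Eq. (etacombine): eta(v_min) = <Theta(v - v_min)/v>
# over the lab-frame velocity distribution, mixing the MB halo with one
# substructure (Sequoia-like values from Table tab:subs; u_E(t) omitted).
random.seed(1)
v0, v_esc = 233.0, 528.0
v_E = (11.1, 245.2, 7.3)             # Sun's velocity, km/s
sigma_mb = v0 / math.sqrt(2.0)

def sample_lab_speed(mu, sig):
    """Draw a galactic-frame velocity, truncate at v_esc, boost to the lab."""
    while True:
        u = [random.gauss(m, s) for m, s in zip(mu, sig)]
        if sum(c * c for c in u) <= v_esc**2:
            break
    return sum((c - e) ** 2 for c, e in zip(u, v_E)) ** 0.5

def eta(v_min, mu, sig, n=20_000):
    """Mean inverse speed of one component, in (km/s)^-1."""
    speeds = (sample_lab_speed(mu, sig) for _ in range(n))
    return sum(1.0 / v for v in speeds if v > v_min) / n

mu_seq, sig_seq = (-36.9, -273.9, -87.0), (138.2, 36.7, 65.0)
delta = 0.1
eta_tot = (1.0 - delta) * eta(300.0, (0.0, 0.0, 0.0), (sigma_mb,) * 3) \
          + delta * eta(300.0, mu_seq, sig_seq)
```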
\section{DM substructures}
\label{sec:DMSS}
This section discusses the benchmark DM substructures studied in this work. We utilize the results of Ref.\,\cite{Yuan_2020}, where the stellar substructures are obtained using the star catalog of LAMOST DR3 \cite{2018ApJS..238...16L}. We choose a few representative substructures to present our results; for clarity, we also mention the dynamically tagged groups (DTG) associated with the relevant substructures \cite{Yuan_2020}. The details of these substructures are summarized in Table \ref{tab:subs}. We emphasize that the chosen substructures are for illustrative purposes only. Further research is required to understand the DM content of various substructures and whether the substructure DM profile intersects the Solar circle. Whether a DM substructure follows the same velocity distribution as its stellar counterpart is also not currently understood. Using the Via Lactea II high-resolution N-body simulation, it has been shown that DM debris flow closely follows its stellar counterpart \cite{Lisanti:2011as, Lisanti:2014dva}; however, the same is not true for the Sagittarius stream \cite{Purcell_2012}. Nevertheless, we will assume that the velocity distributions of the DM substructures follow those of the corresponding stellar components. This assumption can be confirmed or refuted by future research, but the broad conclusions of this study will hold.
We note that the substructures considered in this paper are similar to those considered previously \cite{OHare:2019qxc, Buch:2020xyt}. For instance, the Helmi substructure is analogous to the S2 substructure \cite{Helmi_2020}. The velocity properties of the Nyx substructure are somewhat similar to those of the prograde (Pg) stream, and both are expected to arise from the same Splashed Disk event \cite{Yuan_2020}.\footnote{Ref.\,\cite{2021ApJ...912L..30Z} has argued that Nyx is part of the thick disk.} Some of the considered substructures are also found in the Gaia DR3 data in the Solar neighborhood \cite{2022arXiv220102404S, 2022arXiv220102405R, Ou:2022wvr}.
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{\multirow{2}{*}{Substructure}} & \multicolumn{3}{c|}{Mean velocity (km/s)} & \multicolumn{3}{c|}{Velocity dispersion (km/s)} \\ \cline{2-7}
\multicolumn{1}{|l|}{} & $\mu_R$ & $\mu_{\phi}$ & $\mu_z$ & $\sigma_R$ & $\sigma_{\phi}$ & $\sigma_z$ \\ \hline
HelmiDTG1 & 4.5 & 197.2 & 244.3 & 146.0 & 62.6 & 42.4 \\ \hline
HelmiDTG3 & 26.2 & 157.1 & -241.3 & 78.9 & 28.8 & 27.2 \\ \hline
PolarDTG11 & -47.9 & 21.8 & 229.2 & 75.4 & 19.2 & 21.5 \\ \hline
PgDTG2 & 221.2 & 155.7 & 139.7 & 26.2 & 33.8 & 52.3 \\ \hline
Sausage & 2.1 & -0.3 & -8.7 & 136.6 & 35.0 & 72.3 \\ \hline
RgDTG28 & -4.0 & -106.1 & -143.2 & 115.8 & 29.3 & 30.3 \\ \hline
Sequoia & -36.9 & -273.9 & -87.0 & 138.2 & 36.7 & 65.0 \\ \hline
\end{tabular}%
\caption{Details of the substructures used in this paper. The mean velocities and the diagonal entries of the velocity dispersions are adapted from Tables 2 and 3 of \cite{Yuan_2020}. The DTG from which each substructure is identified is also specified.}
\label{tab:subs}
\end{table}
The mean stellar velocities and the diagonal values of the stellar velocity dispersions are given in Table \ref{tab:subs}. In general, DM substructures will have a different velocity distribution than the virialized component (SHM), which will dramatically impact the ER distribution. The galactic velocity distribution for each of the substructures (referred to by $\zeta_i$) can be written as \cite{OHare:2019qxc, Buch:2020xyt}
\begin{equation}
f_{\rm gal}^{\zeta_i}(\mathbf{v})=\frac{1}{\left(8\pi^3 \, {\rm det}\,(\sigma^{\zeta_i})^2\right)^{1/2} N_{\rm esc}^{\zeta_i} } \exp \left(-\frac{1}{2}(\mathbf{v}-\boldsymbol{\mu}^{\zeta_i})^T (\sigma^{\zeta_i})^{-2} (\mathbf{v}-\boldsymbol{\mu}^{\zeta_i})\right) \Theta(v_{\rm esc}-|\mathbf{v}|),
\label{eq:fvsubs}
\end{equation}
where $\sigma^{\zeta_i}$ is the velocity dispersion matrix, assumed to be diagonal with the values given in Table \ref{tab:subs}, and ${\rm det}\, \sigma^{\zeta_i}$ denotes its determinant. The mean velocity of each substructure in the galactic frame is $\boldsymbol{\mu}^{\zeta_i}$, which, in contrast to the SHM case, is non-zero, as indicated in Table \ref{tab:subs}. The normalization constant $N_{\rm esc}^{\zeta_i}$ is calculated numerically. The step function imposes the cut-off at the galactic escape velocity; since the substructure velocity distributions are likely to peak at smaller velocities, this cut-off has a numerically insignificant effect. The index $\zeta_i$ refers only to the substructures, whereas $i$ includes both the substructures and the SHM.
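The numerical evaluation of $N_{\rm esc}^{\zeta_i}$ amounts to the Gaussian probability mass inside $|\mathbf{v}| < v_{\rm esc}$, which is straightforward to estimate by Monte Carlo; a sketch with Sequoia-like mean and dispersion values from Table \ref{tab:subs}:

```python
import random

# Monte Carlo estimate of the normalization N_esc^zeta of Eq. (fvsubs):
# the fraction of the untruncated anisotropic Gaussian with |v| < v_esc.
# Mean and (diagonal) dispersions are Sequoia-like values from Table tab:subs.
random.seed(0)
v_esc = 528.0
mu = (-36.9, -273.9, -87.0)     # km/s
sig = (138.2, 36.7, 65.0)       # km/s

n = 100_000
inside = sum(
    1 for _ in range(n)
    if sum(random.gauss(m, s) ** 2 for m, s in zip(mu, sig)) < v_esc**2
)
N_esc = inside / n   # close to 1: the cut-off is numerically insignificant
```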
Assuming Eq.\,\eqref{eq:fvsubs} as the galactic velocity distribution for each DM substructure, we display the corresponding lab-frame speed distributions, $f_{\rm lab}^i(v) = v^2 \int d \Omega f_{\rm lab}^i(\mathbf{v})$, obtained using Eq.\,\eqref{eq:galtolab}, in Fig.\,\ref{fig:VelDist}. Except when considering the modulation signature (discussed in Sec.\,\ref{subsec:res}), we fix the Earth's velocity to $\mathbf{v_E}\,=\,(39.7, 243.2, 16.4)$\,km/s. The general trend is that the substructures peaking at larger $v$ have negative $\mu_{\phi}$. Since the Earth moves with a large positive rotational velocity $\sim 250$ km/s, substructures with negative $\mu_{\phi}$ hit the Solar system with larger relative velocities. On the other hand, substructures with large positive $\mu_{\phi}$ co-rotate with the Earth, leading to $f_{\rm lab}^i(v)$ peaking at smaller velocities. This is visible in Fig.\,\ref{fig:VelDist}, where the Helmi streams, with larger values of $\mu_{\phi}$, peak at relatively small velocities, whereas Sequoia, with negative $\mu_{\phi}$, peaks at a higher velocity. We also display the velocity distribution of the SHM with the solid black line. For reference, the vertical black dashed line shows the $v_{\rm min} = 428.7$ km/s required to obtain a recoil of $20$\,eV from the $5p^6$ shell with momentum transfer $25$\,keV for a DM mass of $100$\,MeV.
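The quoted $v_{\rm min}$ can be checked directly from Eq.\,\eqref{eq:vmin}; the sketch below (natural units, $c=1$) assumes a $\sim 12.4$ eV binding energy for the Xe $5p$ shell and reproduces the quoted value to within about a percent:

```python
# Kinematic check of Eq. (vmin): v_min = q/(2 m_chi) + Delta_E_e/q.
C_KMS = 299_792.458  # speed of light in km/s

def v_min_kms(m_chi_eV, q_eV, delta_E_eV):
    """Minimum DM speed (km/s) to deposit delta_E_eV with momentum transfer q."""
    return (q_eV / (2.0 * m_chi_eV) + delta_E_eV / q_eV) * C_KMS

# m_chi = 100 MeV, q = 25 keV, E_e = 20 eV plus an assumed ~12.4 eV 5p binding.
v = v_min_kms(100e6, 25e3, 20.0 + 12.4)
```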
Given these velocity distributions, we turn to the mean inverse speed $\eta^i(v_{\rm min})$ (Eq.\,\eqref{eq:eta}) of each astrophysical component, depicted as a function of $v_{\rm min}$ in Fig.\,\ref{fig:eta}. As expected, $\eta^i(v_{\rm min})$ is a monotonically decreasing function of $v_{\rm min}$, since the integration over velocity starts from $v_{\rm min}$. The maximum value, $\eta^i(0)$, is larger for distributions that peak at lower velocities, because the mean inverse speed is inversely proportional to the most probable speed (the speed at which the velocity distribution attains its maximum). Hence in Fig.\,\ref{fig:eta} we observe the maximum and minimum $\eta^i(0)$ for HelmiDTG3 and Sequoia, respectively; for the other distributions, $\eta^i(0)$ lies between these two. The flatness of $\eta^i(v_{\rm min})$ for Sequoia up to a large value of $v_{\rm min}$, as compared to the other distributions, is likewise a manifestation of Sequoia's higher most probable speed: it indicates the extent to which $v_{\rm min}$ is supported by the distribution. Note that the flatness of $\eta^i(v_{\rm min})$ is also sensitive to the choice of velocity dispersion.
\section{DM-electron scattering at xenon: effect of substructure}
\label{sec:DMeSS}
In this section, we discuss the effect of the substructures on the DM-electron scattering rate in liquid xenon experiments. For $F_{\rm DM}(q)=1$, the constraints on the DM-electron scattering cross section from xenon detectors dominate for DM masses $\gtrsim 50$ MeV. Xenon experiments may thus have a better prospect of discovering DM-electron scattering, and it is essential to study this prospect thoroughly; our work outlines the theory effort toward answering this question.
Following Ref.\,\cite{Essig:2017kqs}, we convert the ER energy ($E_e$) to a number of electrons ($n_e$). DM-electron scattering produces $n_e$ observable electrons, along with unobservable photons and heat. Some primary electrons recombine with secondary ions with probability $f_R$. Further, each recoiling electron of energy $E_e$ gives rise to $n_e^{(1)}={\rm Floor}[E_e/W]$ additional secondary quanta (photons or electrons), where $W$ is the average energy required to create a single quantum. Moreover, the scattering process can also ionize electrons from an inner shell, which de-excites by releasing a photon; these photons may create further secondary quanta, $n_e^{(2)}={\rm Floor}[\Delta E_{i,j}/W]$, where $\Delta E_{i,j}$ is the difference between the binding energies of the relevant inner and outer shells. The number of secondary electrons produced is drawn from a binomial distribution with $n_e^{(1)}+n_e^{(2)}$ trials and success probability $f_e$. We choose fiducial values of the relevant parameters (i.e., $W=13.6$\,eV, $f_e=0.83$, $f_R=0$) to convolve Eq.\,\eqref{eq:rateXe}, which gives the differential event rate as a function of the number of produced electrons. We do not consider uncertainties associated with $W$, $f_e$, and $f_R$.
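A minimal sketch of this $E_e \to n_e$ conversion, with the fiducial $W$, $f_e$, $f_R$ quoted above; the function below is an illustrative single-event realization, not the full convolution with the differential rate:

```python
import math
import random

# Illustrative single-event realization of the E_e -> n_e conversion above,
# with fiducial values W = 13.6 eV, f_e = 0.83, f_R = 0.
random.seed(0)
W, f_e, f_R = 13.6, 0.83, 0.0

def n_electrons(E_e, delta_E_shell=0.0):
    """One random draw of the observed electron count for recoil energy E_e."""
    n1 = math.floor(E_e / W)            # quanta from the primary electron
    n2 = math.floor(delta_E_shell / W)  # quanta from inner-shell photons
    primary = 0 if random.random() < f_R else 1      # recombination loss
    secondaries = sum(1 for _ in range(n1 + n2) if random.random() < f_e)
    return primary + secondaries

# For a 50 eV recoil: Floor(50/13.6) = 3 extra quanta, so <n_e> = 1 + 3 f_e.
mean_ne = sum(n_electrons(50.0) for _ in range(20_000)) / 20_000
```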
In Fig.\,\ref{fig:EventRate}, we show the differential event rate as a function of $n_e$ for $m_{\chi}=100$\,MeV, $\bar{\sigma}_e=10^{-41} \, {\rm cm}^2$, and 1 kg-year exposure. For each event rate, we assume that the corresponding astrophysical component (SHM or substructure) constitutes $100\%$ of the local DM density. For $m_{\chi}=100$\,MeV with a typical momentum transfer of $\mathcal{O}(10)$\,keV, the minimum DM velocity required for a measurable recoil is around $500$\,km/s. Hence, the tail of $\eta^i(v_{\rm min})$ dominates the recoil rate, and the substructures with the largest $\eta^i(v_{\rm min})$ near $v_{\rm min}\sim 500$\,km/s give rise to larger event rates.
\subsection{Neutrino background}
\label{subsec:nubag}
The scattering of neutrinos with electrons or nucleons may also give rise to ionization signals in low-threshold DD experiments. Other background sources, such as radioactivity and Cherenkov radiation, can potentially mimic a DM signal \cite{Du:2020ldo}. The experimental collaborations confront and beat these non-neutrino backgrounds using various experimental techniques to isolate a potential DM signal. Neutrinos, however, are an irreducible background that cannot be removed through shielding, purified detector material, or other experimental techniques. We therefore take neutrinos as the only source of background in our analysis; if other non-neutrino backgrounds are present in the data set, our results will degrade proportionally.
It has been argued in Refs.\,\cite{Essig:2018tss, Wyenberg:2018eyv} that Solar neutrinos are the main source of background for sub-GeV DM-electron scattering.\footnote{See Refs.\,\cite{Essig:2018tss, Schwemberger:2022fjl} for discussion of the prospects of these detectors for probing beyond-SM interactions of neutrinos.} Neutrino-electron elastic scattering dominates the background at rather large recoil energies ($\sim 10^5$\,eV). Coherent neutrino-nucleus scattering, by contrast, produces small ionization signals and is the dominant source of background in our case. The neutrino-nucleus scattering event rate is \cite{Billard:2013qya, Essig:2018tss}
\begin{equation}
\frac{dR}{dE_{\rm NR}}=N_T M T \int_{E_{\nu}^{\rm min}} \frac{d\sigma}{dE_{\rm NR}} \frac{d\phi_{\nu}}{dE_{\nu}} dE_{\nu},
\label{eq:nurate}
\end{equation}
where $N_T$, $M$, and $T$ are the number of target nuclei per unit mass, the total mass, and the exposure time, respectively. The minimum neutrino energy required to produce a nuclear recoil of energy $E_{\rm NR}$ is $E_{\nu}^{\rm min}=\sqrt{m_N E_{\rm NR}/2}$. The differential coherent neutrino-nucleus cross section and the differential neutrino flux are denoted by $d\sigma/dE_{\rm NR}$ and $d\phi_{\nu}/dE_{\nu}$, respectively \cite{Essig:2018tss,OHare:2016pjy}. We utilize the low, fiducial, and high ionization models given in Ref.\,\cite{Essig:2018tss} to obtain the number of electrons $n_e$ for a given nuclear recoil energy. The corresponding neutrino-induced event rate for the fiducial model is displayed in Fig.\,\ref{fig:EventRate} by the grey dashed lines.\footnote{We note that there is a factor $\sim 3$ difference in the event rate between our result and Ref.\,\cite{Essig:2018tss}.} The grey shaded regions represent the variation in the event rate between the high and low ionization models of $n_e$\,\cite{Essig:2018tss}. Since the three ionization models differ in the low-$n_e$/energy bins, we observe a large spread in the differential event rates there. The discovery limits for the low and high ionization models are given in appendix\,\ref{app:neUn}; for a one-electron threshold, the ionization-model uncertainty changes the discovery limits by less than a factor of $3$.
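The kinematic threshold $E_{\nu}^{\rm min}=\sqrt{m_N E_{\rm NR}/2}$ shows why only the high-energy tail of the Solar flux matters; a quick check (a xenon nuclear mass $m_N \approx 122$ GeV is an assumed value here):

```python
# Check of the coherent-scattering kinematics: minimum neutrino energy for
# a given xenon nuclear recoil, E_nu_min = sqrt(m_N * E_NR / 2).
m_N_eV = 131 * 0.9315e9     # approximate xenon nuclear mass in eV (assumed)

def E_nu_min_eV(E_NR_eV):
    return (m_N_eV * E_NR_eV / 2.0) ** 0.5

# A 1 keV nuclear recoil already requires a several-MeV neutrino.
E1 = E_nu_min_eV(1e3)
```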
\subsection{Statistical methodology}
\label{subsec:stameth}
In this section, we discuss the statistical procedure to obtain the discovery limit for DM-electron scattering in the presence of substructures for liquid xenon experiments. We have employed the profile likelihood ratio test \cite{Cowan:2010js} with $\bar{\sigma}_{e}$ and substructure fraction ($\delta$) as the signal parameters of interest. In the following, we briefly discuss this procedure.
The binned likelihood for the background and signal model ($\mathcal{M}$), is given by
\begin{equation}
\mathcal{L}(m_{\chi},\bar{\sigma}_e,\delta,\Phi|\mathcal{M})=\prod_{i=1}^{N_{\rm bins}}\left( \mathcal{P}(N_{\rm obs}^i|N_{\chi}^i+\sum_{j=1}^{n_{\nu}}n_{\nu}^i(\Phi^j)) \right)\prod_{j=1}^{n_{\nu}}\mathcal{G}(\Phi^j)
\label{eq:llhood}
\end{equation}
Here $N_{\rm bins}$ is the number of energy bins. The Poisson probability ($\mathcal{P}$) in the $i$-th bin is calculated from the observed events $N_{\rm obs}^i$ and the expected number of events, the latter being the sum of the DM events ($N_{\chi}^i$) and the neutrino events ($n_{\nu}^i$) over all $n_{\nu}$ neutrino components. The Gaussian function ($\mathcal{G}(\Phi^j)$) accounts for the uncertainty in the neutrino fluxes ($\Phi^j$), with mean values and standard deviations given in \cite{Essig:2018tss, OHare:2016pjy}.
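A toy version of Eq.\,\eqref{eq:llhood} with a single flux nuisance parameter illustrates the structure; all bin contents and the $10\%$ flux uncertainty below are invented for illustration, not our real inputs:

```python
import math

# Toy binned log-likelihood in the spirit of Eq. (llhood): Poisson terms per
# bin plus a Gaussian pull for one neutrino-flux nuisance parameter phi.
def log_poisson(n_obs, n_exp):
    return n_obs * math.log(n_exp) - n_exp - math.lgamma(n_obs + 1)

def log_like(sigma_scale, phi, n_obs, s_unit, b_unit, phi0=1.0, dphi=0.1):
    ll = sum(
        log_poisson(n, sigma_scale * s + phi * b)
        for n, s, b in zip(n_obs, s_unit, b_unit)
    )
    return ll - 0.5 * ((phi - phi0) / dphi) ** 2   # Gaussian flux pull

s_unit = [10.0, 4.0, 1.0]   # illustrative signal template per unit cross section
b_unit = [5.0, 5.0, 5.0]    # illustrative neutrino background per unit flux
n_obs = [15.0, 9.0, 6.0]    # Asimov-like data at sigma_scale = phi = 1
ll_truth = log_like(1.0, 1.0, n_obs, s_unit, b_unit)
ll_wrong = log_like(0.0, 1.0, n_obs, s_unit, b_unit)
```

The Asimov data maximize the likelihood at the true parameters, so `ll_truth` exceeds the background-only value `ll_wrong`.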
Depending on the choice of the analysis, we vary one of the signal parameters (either $\bar{\sigma}_{e}$ or $\delta$), treating the other one as a nuisance parameter. We treat $\bar{\sigma}_{e}$ as the signal parameter for the discovery reach. Therefore, the profile likelihood ratio test statistic, which compares the background-only hypothesis ($\mathcal{M}_0$) with the background and signal model ($\mathcal{M}$), is given by \cite{Cowan:2010js, OHare:2020lva, Buch:2020xyt}
\begin{equation}
q_0 = -2 \, {\rm ln}\left(\frac{\mathcal{L}(\bar{\sigma}_{e}=0,\boldsymbol{\lambda}|\mathcal{M}_0)}{\mathcal{L}\left(\bar{\sigma}_{e},\boldsymbol{\lambda}|\mathcal{M} \right)} \right) \sim \chi_1^2,
\label{eq:q0Dis}
\end{equation}
where $\boldsymbol{\lambda}$ contains the nuisance parameters, i.e., $\delta$ and $\Phi^j$ in this case. The ratio in Eq.\,\eqref{eq:q0Dis} follows a $\chi^2$ distribution with one degree of freedom \cite{Cowan:2010js}, so the significance of rejecting the background-only hypothesis is $\sqrt{q_0}\,\sigma$. In this paper, we present all discovery limits at the $90\%$ confidence level (CL).
We consider $\delta$ as the signal parameter for the prospective detection of DM substructure fraction. The corresponding profile likelihood ratio test to distinguish two neighboring points $\delta_1$ and $\delta_2$ can be written as \cite{Buch:2020xyt}
\begin{equation}
q_0 = -2\, {\rm ln}\left(\frac{\mathcal{L}(\delta_2,\boldsymbol{\lambda}|\mathcal{M}_{\delta_1})}{\mathcal{L}\left(\delta_2,\boldsymbol{\lambda}|\mathcal{M}_{\delta_2} \right)} \right) \sim \chi_1^2.
\label{eq:q0focast}
\end{equation}
This profile likelihood ratio is employed to reject the null hypothesis that two neighboring points $\delta_1$ and $\delta_2$ are indistinguishable at $68\%$ CL. For both Eqs.\,\eqref{eq:q0Dis} and \eqref{eq:q0focast} we use the Asimov data set \cite{Cowan:2010js}, in which artificial data are generated from the model's parameters (in our case $\mathcal{M}$), so that the number of observed events ($N_{\rm obs}$) equals the expected number of events ($N_{\rm exp}$). For a sufficiently large number of observations, the profile likelihood ratio then approaches its median value. Compared to Monte Carlo simulation, the Asimov data set is computationally more economical while yielding accurate results. The required values of $q_0$ for the $68\%$ and $90\%$ CL limits are $0.99$ and $2.71$, respectively. For fixed $m_{\chi}$ and $\delta$, the $90\%$ CL discovery limit is obtained by varying $\bar{\sigma}_{e}$ in Eq.\,\eqref{eq:q0Dis} until the required $q_0$ ($2.71$) is reached. The $68\%$ CL contours for resolving the substructure fraction are estimated using Eq.\,\eqref{eq:q0focast}: for fixed values of $m_{\chi}$, $\bar{\sigma}_e$, and $\delta_1$, we iterate over $\delta_2$ until the required $q_0$ ($0.99$) is attained.
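The scan over $\bar{\sigma}_e$ can be illustrated with a one-bin counting toy: for Asimov data $n = s + b$ with $b$ fixed, the background-only profile likelihood ratio reduces to the standard $q_0 = 2\left[(b - n) + n \ln(n/b)\right]$, and the $90\%$ CL reach is the signal strength at which $q_0 = 2.71$ ($b = 10$ is an illustrative choice, not our real background):

```python
import math

# One-bin Asimov sketch of the discovery-limit scan: find the signal strength
# s whose background-only test statistic reaches q0 = 2.71 (90% CL).
b = 10.0   # illustrative expected neutrino background

def q0(s):
    n = s + b   # Asimov data: observed = signal + background
    return 2.0 * ((b - n) + n * math.log(n / b))

# Bisection on the monotonically increasing q0(s).
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if q0(mid) < 2.71 else (lo, mid)
s90 = 0.5 * (lo + hi)   # ~ 1.645 * sqrt(b) in the Gaussian limit
```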
\subsection{Results}
\label{subsec:res}
Here we present the results of the statistical analysis discussed in the previous subsection. The three parameters of interest are the DM mass ($m_\chi$), the DM-electron cross section ($\bar{\sigma}_e$), and the DM substructure fraction ($\delta$). We present our results in two ways, in each case fixing one of the latter two parameters. First, we present the discovery limit in the plane of DM mass and DM-electron cross section for a fixed DM substructure fraction. Second, for a fixed DM-electron cross section, we forecast the ability of xenon experiments to resolve the substructure fraction for a few benchmark choices of DM mass.
In Fig.\,\ref{fig:Exclusion}, we present the sensitivity to the DM-electron cross section for each of the substructures considered in this paper, assuming that the corresponding substructure constitutes $100\%$ of the local DM density. Each line represents the minimum DM-electron cross section required to observe the effect of the corresponding substructure in a liquid xenon detector with $1$ kg-year exposure and a one-electron threshold; the discovery limits for two- and three-electron thresholds are given in appendix\,\ref{app:ne2and3}. The different discovery limits for different substructures reflect their different most probable speeds. The tail of the DM velocity distribution is more populated for a substructure with a larger most probable speed, so a sizable number of DM particles is available to interact with the target electrons. This leads to a larger event rate, as depicted in Fig.\,\ref{fig:EventRate}, where for a fixed DM-electron cross section we obtain the minimum and maximum number of events for HelmiDTG3 (lowest most probable speed, see Fig.\,\ref{sf:lowfv}) and Sequoia (highest most probable speed, see Fig.\,\ref{sf:highfv}), respectively. Accordingly, the DM-electron cross section that can be probed is largest for HelmiDTG3 and smallest for Sequoia, with the event rates and discovery limits of the other substructures lying in between. The light grey shaded region shows the constraint from ionization signals in the XENON1T experiment \cite{XENON:2019gfn}, the most stringent current DD constraint in the parameter space shown. For reference, we also show the discovery limit for the SHM with the solid black line.
In reality, these substructures would not contribute $100\%$ of the local DM density. We therefore choose two benchmark values of $\delta$, namely $\delta=0.1$ and $\delta=0.2$ (shown in Fig.\,\ref{fig:Exclusiondel}), and consider only the two substructures, HelmiDTG3 and Sequoia, that lie at the two extremes; the SHM constitutes the rest of the local DM density in the combined DM distribution. If the discovery limit for a particular substructure (with $\delta=1$) is weaker than that of the SHM, the limit for the combined DM distribution will lie above the SHM limit, an effect that becomes more pronounced with increasing $\delta$. In Fig.\,\ref{fig:Exclusiondel}, the combined discovery limits for HelmiDTG3 and Sequoia are displayed by brown and purple lines, respectively; notably, they lie above and below the SHM limit, and the deviation from the SHM grows with $\delta$. Importantly, it is still possible to see the effect of these substructures in liquid xenon experiments with such realistic choices of $\delta$.
Next, we turn into the discussion of resolving substructure fractions in liquid xenon experiments. Again, we have restricted ourselves to HelmiDTG3 and Sequoia among the considered substructures as these two reside in the extreme ends. The sensitivity in resolving DM substructure at $68\%$ CL is displayed in Fig.\,\ref{fig:contour} for 1 kg-year exposure, one electron threshold, and $\bar{\sigma}_e=10^{-40}\, {\rm cm}^2$, with a few benchmark points. Generically, we observe a better resolution for low DM mass. Comparing Figs.\,\ref{sf:Helmicontour} and \ref{sf:Sequoiacontour} one can see that we will determine the substructure fraction more accurately for Sequoia compared to HelmiDTG3. This is due to Sequoia's large most probable velocity, which leads to a substantial number of DM-electron scattering events. Generically, it is possible to measure the substructure fraction more accurately, which is moving with a higher most probable speed. For $\delta=0.1$, with the considered exposure, threshold, and $\bar{\sigma}_e$, it is difficult to conclude whether the substructure is contributing to local DM density. Interestingly, for DM mass $\sim 50$ MeV, and $\delta = 0.4$, xenon target electron scattering experiments can resolve the substructure fraction with $\sim 50\%$ accuracy. Moreover, the structures of the contours can be understood from Eq.\,\eqref{eq:vmin} and from Fig.\,\ref{fig:eta}. Both for the lower and higher DM masses, the inclination of the contours is reversed as we compare HelmiDTG3 with Sequoia. For low DM masses, $v_{\rm min}$ is larger (for fixed $q$ and $E_e$ from Eq.\,\eqref{eq:vmin}), therefore it is the tail of the distribution which is contributing to $\eta^i$. Thus for HelmiDTG3 with low mass DM, an increment in $\delta$ will reduce the combined value of $\eta$. This reduction could be compensated by increasing DM mass for a fixed number of observed events. This results in a slightly tilted contour towards a higher DM mass. 
For higher DM masses (and thus smaller $v_{\rm min}$), the maximum value of $\eta$ instead determines the orientation of the contours. For Sequoia, the maximum value of $\eta^i$ is smaller than that of HelmiDTG3. Hence, increasing $\delta$ for the former reduces the combined $\eta$, which can be compensated by reducing $v_{\rm min}$, i.e., by increasing the DM mass.
So far we have not discussed a distinctive feature of the DM DD signal: annual modulation\,\cite{Lee:2015qva}, in which the signal event rate varies with the time of year in a specific manner. Because of the Sun's motion around the MW, there is a DM wind in the Solar rest frame, and the Earth's orbital motion around the Sun causes the event rate to vary with time. The event rate is larger (smaller) when the Earth's orbital velocity adds to (opposes) the Sun's motion through the halo. Because the background cannot mimic this distinctive feature, annual modulation searches are expected to be less dependent on background reduction and identification.
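At leading order, the lab-frame DM wind speed is the Sun's speed plus the projection of the Earth's orbital velocity onto the solar motion. A stdlib-only sketch (the numerical inputs are conventional halo-model values we assume for illustration, not taken from this paper):

```python
import math

V_SUN = 232.0   # km/s, Sun's speed w.r.t. the Galactic rest frame (assumed)
V_E   = 29.8    # km/s, Earth's orbital speed
B     = 0.49    # cosine of the angle between Earth's orbital plane and the solar motion
T0    = 152.5   # day of year of the rate maximum (~June 2)

def v_lab(day):
    """Leading-order lab-frame DM wind speed (km/s) versus day of year."""
    return V_SUN + B * V_E * math.cos(2.0 * math.pi * (day - T0) / 365.25)

# v_lab peaks near June (day ~152) and dips near December (day ~335),
# giving the few-percent annual modulation of the event rate.
```

The modulation amplitude, $B\,V_E/V_{\rm SUN} \sim 6\%$, is why modulation searches need large exposures despite their background robustness.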
The main task in computing the modulation discovery limit is to evaluate the event rate in both time and energy. The corresponding likelihood is built from the difference of two Poisson-distributed counts, which follows the Skellam distribution\,\cite{Skellam1946TheFD}. One can then estimate the discovery limit using the test statistic defined in Eq.\,\eqref{eq:q0Dis}. Following the prescription of \cite{Buch:2020xyt}, we find that the modulation discovery limit is weaker than its non-modulation counterpart; for example, with the SHM or Sequoia, the modulation discovery reaches are weaker by a factor of $\sim 10-100$.
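As a concrete illustration of the Skellam-based likelihood described above, the following stdlib-only sketch builds the Skellam PMF for the difference of two Poisson counts by direct convolution in log space, then sums a log-likelihood over bins (function names and the binning are ours, not the paper's implementation):

```python
import math

def poisson_logpmf(k, mu):
    """log P(N = k) for N ~ Poisson(mu)."""
    return -mu + k * math.log(mu) - math.lgamma(k + 1)

def skellam_pmf(k, mu1, mu2, nmax=200):
    """P(N1 - N2 = k) for independent N1 ~ Poisson(mu1), N2 ~ Poisson(mu2),
    by direct convolution (adequate for the small per-bin means here)."""
    return sum(math.exp(poisson_logpmf(k + j, mu1) + poisson_logpmf(j, mu2))
               for j in range(nmax) if k + j >= 0)

def skellam_loglike(diffs, mu1s, mu2s):
    """Log-likelihood of observed count differences (e.g. June minus
    December) across energy bins, given the expected means per bin."""
    return sum(math.log(skellam_pmf(d, m1, m2))
               for d, m1, m2 in zip(diffs, mu1s, mu2s))
```

The test statistic of Eq.\,\eqref{eq:q0Dis} would then compare this log-likelihood between the signal-plus-background and background-only hypotheses.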
\section{Conclusions}
\label{sec:conclusion}
The presence of DM in the Universe is well established, and many attempts have been made to discover the connection between DM and SM states. Among them, DD experiments look for the scattering signatures of DM off visible states. There has been growing interest in searches for light DM (masses $\lesssim 1$\,GeV) through DD: ambient non-relativistic DM with mass in the sub-GeV range cannot impart sufficient energy to produce a measurable recoil in typical nuclear-recoil DD experiments. Electrons, being light, are an excellent target for detecting such light DM, and many target materials have been considered to identify electronic excitations from the scattering of ambient DM. The DM velocity distribution is an integral ingredient in calculating the event rate or the exclusion limit of a DD experiment. DM is also an intrinsic part of structure formation, and the history of galaxy formation influences its velocity distribution. While it is difficult to track the DM velocity distribution directly, it may be manifested in the stellar distribution. Surveys like Gaia, SDSS, and LAMOST have made unprecedented progress in mapping these stellar distributions, and their data reveal the presence of stellar clumps and substructures. It is highly likely that these stellar substructures have a DM counterpart, called DM substructure. This paper investigates the prospects of detecting these substructures in low-threshold DM DD experiments through elastic DM-electron scattering. Specifically, we have explored the prospects of xenon-target experiments in this regard. Note that, compared to semiconductor-target experiments (like SENSEI), xenon-target experiments have better sensitivity in the DM mass range of $\mathcal{O}(100)\,{\rm MeV}$.
We utilize the results of the LAMOST survey and choose a few benchmark DM substructures. We emphasize that there is no definite proof of the existence of a DM counterpart to the detected stellar substructures; however, it is likely that they exist. If these DM substructures overlap with the Earth's position, we can observe their imprint in xenon-target experiments through DM-electron scattering. We find that if a substructure constitutes $\gtrsim 10\%$ of the local DM density, it is possible to observe its effect in xenon-target experiments for currently allowed DM particle properties. We have also explored the forecast for xenon experiments in resolving the DM substructure fraction, finding that the uncertainty in the resolved fraction is considerably larger for higher DM masses than for lower ones. For example, with $m_{\chi}=50\,$MeV, $\bar{\sigma}_{e}=10^{-40}{\,\rm cm}^2$, and a one-electron threshold in xenon experiments, we can resolve the substructure fraction to $\sim 50\%$ accuracy provided $\delta \sim 0.4$. Both the discovery limit and the ability to resolve the DM substructure fraction are mainly regulated by the most probable velocity of the corresponding velocity distribution. Given this correlation between DD rates and DM velocity distributions, a more detailed understanding of DM substructure is required; high-resolution cosmological simulations and near-future observations will play a crucial role here. We encourage experimentalists to continue their excellent work in improving detector sensitivities so that such a signal comes within reach. Our work shows that, by pursuing this technique, we will be able to learn more about the particle physics and astrophysics of DM and perhaps even discover it.
\paragraph*{Acknowledgements\,:} We thank Jatan Buch, Ciaran A.\,J.\,O’Hare, Mukul Sholapurkar, and Tien-Tien Yu for useful correspondence. We thank John F.\,Beacom, Ciaran A.\,J.\,O’Hare, and Tien-Tien Yu for comments on the manuscript. TNM thanks the IOE-IISc fellowship program for financial assistance. RL acknowledges financial support from the Infosys Foundation (Bangalore), institute start-up funds, and the Department of Science and Technology (Govt. of India) grant SRG/2022/001125.
\appendix
\section{Discovery limits for two and three electron threshold}
\label{app:ne2and3}
Throughout the main text, we have considered the reach of xenon experiments for a one kg-year exposure and a one-electron threshold with $F_{\rm DM}=1$. Here we present the discovery limits for two- and three-electron thresholds with $\delta=0.1$; the results are depicted in Figs.\,\ref{sf:ne20p1} and \ref{sf:ne30p1}. With higher thresholds the expected event numbers decrease, so the cross section required to see the possible effect of the substructure increases. Further, the lowest DM mass that can be probed also increases.
\section{Variation in the discovery limits}
\label{app:neUn}
As discussed in Sec.\,\ref{subsec:nubag}, the neutrino-induced background event rate may change depending on the ionization model. In this appendix, we present the discovery limits for the high- and low-ionization-efficiency models for $n_e$\,\cite{Essig:2018tss}. We display the results in Fig.\,\ref{fig:neUn}: for each substructure, solid lines represent the discovery limit for the fiducial ionization model, and shaded bands show the corresponding uncertainty associated with the ionization models.
\section{Momentum dependent DM-electron scattering}
\label{app:FDMqm2}
In this appendix, we present the discovery limits for momentum-dependent DM-electron scattering, namely $F_{\rm DM}=\alpha^2 m_e^2/q^2$, for the considered DM substructures. We observe a similar tendency in this case, except that the minimum DM-electron cross section required for the discovery of the substructures is larger than that for $F_{\rm DM}=1$. This is displayed in Fig.\,\ref{fig:FDmqm2}.
\bibliographystyle{JHEP}
\bibliography{ref.bib}
|
Title:
Kinematics and brightness temperatures of transition discs -- A survey of gas substructures as seen with ALMA |
Abstract: In recent years, high-angular-resolution observations of the dust and gas in
circumstellar discs have revealed a variety of morphologies, naturally
triggering the question of whether these substructures are driven by forming
planets. While it remains difficult to directly image embedded planets, a
promising method to distinguish disc-shaping mechanisms is to study the gas
kinematics as characterising deviations from Keplerian rotation can be used to
probe underlying perturbations such as planets. Creating spiral structures, the
latter can also be traced in the brightness temperature. Here we analyse the
brightness temperatures and kinematics of a sample of 36 transition discs
observed with ALMA to search for substructures possibly tracing companions. We
use archival Band 6 and 7 observations of different CO isotopologues and fit
Keplerian disc models to the velocity fields. After subtraction of an
azimuthally averaged brightness temperature and Keplerian rotation model from
the data, we find significant substructures in both residuals of eight sources.
Other sources show tentative features, while half of our sample does not show
any substructures that may be indicative of planet-disc interactions. For the
first time, we compare the substructures from our analysis with various other
indicators of planets. About 20% of discs show strong features such as arcs or
spirals, possibly associated with the presence of planets, while the majority
do not present as clear planet-driven signatures. Almost all discs that exhibit
spirals in near-infrared scattered light show at least tentative features in
the CO data. The present data are able to reveal only very massive bodies and a
lack of features may suggest that, if there are planets, they are of lower mass
(<1-3Mj) or located closer to the star within deep cavities. Dedicated
observations and modelling efforts are needed to confirm such scenarios.
| https://export.arxiv.org/pdf/2208.09494 |
\titlerunning{Gas substructures in transition discs}
\authorrunning{W\"olfer et al.}
\subtitle{A survey of gas substructures as seen with ALMA}
\title{Kinematics and brightness temperatures of transition discs}
\author{L. W\"olfer
\inst{1},
S. Facchini\inst{2},
N. van der Marel\inst{1},
E. F. van Dishoeck\inst{1}\fnmsep\inst{3},
M. Benisty\inst{4},
A. J. Bohn,
L. Francis\inst{5}\fnmsep\inst{6},\\
A. F. Izquierdo\inst{7},
R. D. Teague\inst{8}
}
\institute{Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands
\and Dipartimento di Fisica, Universit\`{a} degli Studi di Milano, Via Giovanni Celoria 16, 20133 Milano, Italy
\and Max-Planck-Institut f\"ur extraterrestrische Physik, Gie\ss enbachstr. 1 , 85748 Garching bei M\"unchen, Germany
\and Univ. Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France
\and Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road, Elliot Building, Victoria, BC V8P 5C2, Canada
\and NRC Herzberg Astronomy and Astrophysics, 5071 West Saanich Road, Victoria, BC V9E 2E7, Canada
\and European Southern Observatory, Karl-Schwarzschild-Str. 2, 85748 Garching bei M\"unchen, Germany.
\and Center for Astrophysics $\vert$ Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
\\ e-mail: \href{mailto:woelfer@strw.leidenuniv.nl}{woelfer@strw.leidenuniv.nl}
}
\date{Received ; accepted}
\abstract
{In recent years, high-angular-resolution observations of the dust and gas content in circumstellar discs have revealed a variety of morphologies, naturally triggering the question of whether these substructures are driven by forming planets interacting with their environment or other mechanisms. While it remains difficult to directly image embedded planets, one of the most promising methods to distinguish disc-shaping mechanisms is to study the kinematics of the gas disc. Characterising deviations from Keplerian rotation can then be used to probe underlying perturbations such as planet--disc interactions. Creating spiral structures, the latter can also be traced in the brightness temperature.}
{In this paper we aim to analyse the gas brightness temperature and kinematics of a sample of 36 transition discs observed with ALMA to resolve and characterise possible substructures that may be tracing embedded companions.}
{For our analysis we use archival Band 6 and Band 7 ALMA observations of different CO isotopologues ($^{12}$CO, $^{13}$CO and C$^{18}$O) and fit different Keplerian disc models (thin and thick disc geometry) to the retrieved velocity field of each disc.}
{After subtraction of an azimuthally averaged brightness temperature profile and Keplerian rotation model from the peak brightness temperature and velocity maps, we find significant substructures in eight sources of our sample (CQ\,Tau, GG\,Tau, HD\,100453, HD\,142527, HD\,169142, HP\,Cha, TW\,Hya and UX\,Tau\,A) in both the brightness temperature and velocity residuals. Other sources show tentative features, while about half of our sample does not show any substructures in the temperature and kinematics that may be indicative of planet--disc interactions.}
{For the first time, we compare the substructures from our analysis with various other indicators for the presence of planets. About 20\,\% of discs show strong features such as arcs or spirals, possibly associated with the presence of planets, while the majority of discs do not present as clear planet-driven signatures. Almost all discs that exhibit spirals in near-infrared scattered light show at least tentative features in the CO data. The present data are able to reveal only very massive bodies and a lack of features may suggest that, if there are planets at all, they are of lower mass (< 1-3\,$M_{\mathrm{J}}$) or may be located closer to the star within deep cavities. Deeper and higher resolution observations and modelling efforts are needed to confirm such scenarios.}
\keywords{accretion, accretion discs --
protoplanetary discs --
planet-disc interactions --
submillimeter: planetary systems
}
\section{Introduction}
Circumstellar discs form as a consequence of angular momentum conservation during star formation, when material from a molecular cloud core is channeled towards the newborn star at the center. Also called protoplanetary or planet-forming discs, they provide the gas and dust needed for the formation of planetary systems such as our solar system. Far from being static, they evolve and eventually disperse while birthing planets, with different mechanisms shaping their appearance and the planet formation process. At the same time, planets interact with their environment and are expected to alter their host disc's structure, leaving observable marks that depend on their mass and location in the disc.
In the last decade, high-angular-resolution dust and gas observations with the Atacama Large Millimeter/submillimeter Array (ALMA; \citealp{ALMA2015}), as well as near-infrared (NIR) scattered-light observations with e.g. the Spectro-Polarimetric High-contrast Exoplanet REsearch instrument (SPHERE; \citealp{Sphere2019}), the Gemini Planet Imager (GPI; \citealp{Macintosh2014}), or the Subaru telescope's High-Contrast Coronagraphic Imager for Adaptive Optics (HiCIAO), equipped with the Extreme Adaptive Optics System (SCExAO; \citealp{Subaro2015}), have indeed shown that a variety of substructures such as gaps or even cavities, rings, spiral arms, and azimuthal asymmetries are ubiquitous in both the dust and the gas component of planet-forming discs (e.g. \citealp{Marel2013,Benisty2015,Benisty2017,Benisty2018,Casassus2016,Andrews2018,Cazzoletti2018,Feng2018,Andrews2020,Uyama2020}).
Even though several mechanisms may explain these observations, such as gravitational instabilities (e.g. \citealp{Kratter2016}), photoevaporation (e.g. \citealp{Owen2011,Picogna2019}), magnetorotational instabilities (e.g. \citealp{Flock2015,Flock2017,Riols2019}), zonal flows (e.g. \citealp{Uribe2015}), or compositional baroclinic instabilities (e.g. \citealp{Klahr2004}), at least some of the substructures are expected to be linked to the presence of (massive) planets \citep{Lin1979,Zhang2018}. To interpret the origin of the various substructures, it is crucial to understand how common they are, whether they follow certain patterns, and whether differences or similarities can be identified across different star-disc system morphologies.
One particularly interesting subgroup of young stellar objects (YSOs) is represented by the so-called transition discs. Originally identified through a lack of infrared (IR) excess in their spectral energy distribution (SED) (\citealp{Strom1989}), they are characterised by dust- (and gas-) depleted inner regions (e.g. \citealp{Espaillat2014,Ercolano2017}). While they are sometimes classified as an intermediate state between a full, optically thick disc and disc dispersal, planet-disc interactions provide an alternative explanation for the observed cavities. At least some transition discs - especially those with very deep dust and gas cavities (e.g. \citealp{Marel2016}) - are expected to be the result of dynamical clearing by a massive companion, either planetary or binary. This may imply that transition discs are not an evolutionary state that every disc goes through, since massive planets (or binary companions) are not found around every star (e.g. \citealp{Johnson2010,Nielsen2019,MarelMulders2021}). Transition discs therefore represent excellent candidates to catch planet formation in action, test planet formation models, and probe disc evolution mechanisms.
To unambiguously link the observed substructures to the presence of a planet, the latter needs to be directly imaged in its environment. However, this method is only feasible for very bright and massive planets that are not severely affected by dust extinction (\citealp{Sanchis2020}). To date, the only system in which a robust direct detection of proto-planets has been obtained is PDS\,70, hosting two planets with masses of several $M_{\mathrm{J}}$ (\citealp{Keppler2018,Haffert2019,Benisty2021}).
Alternatively, we can study the indirect effects that planets may have on the dust and gas distributions. In this context, one promising method is to investigate the kinematics and look for perturbations induced in the velocity field of the rotating gas. Identifying deviations from Keplerian rotation can then be used to probe the local pressure gradient and to characterise the shape of the perturbation. \cite{Teague2018a} use this technique to constrain the rotation profile of HD\,163296 and its deviation from a Keplerian profile, and \cite{Teague2019a} report significant meridional flows in that disc. Evidence of similar meridional flows is found in HD\,169142 by \cite{Yu2021}. The kinematics of AS\,209 are studied by \cite{Teague2018b} and \cite{Rosotti2020b}, who report a vertical dependence of the pressure maxima and measure the gas-dust coupling and the width of gas pressure bumps, respectively. So-called kink features are detected by \cite{Pinte2018a,Pinte2019} in the iso-velocity curves of HD\,163296 and HD\,97048, consistent with Jupiter-mass planets.
A possible signature of an embedded planet is also found by \cite{Casassus2019} in the HD\,100546 disc in the form of a Doppler flip in the residual kinematics. In TW\,Hya (\citealp{Teague2019Spiral}), HD\,100453 (\citealp{Rosotti2020b}), HD\,135344B \citep{Casassus2021}, CQ\,Tau (\citealp{Woelfer2021}), and HD\,163296 \citep{Teague2021}, spiral structures are found in the kinematics after subtraction of a Keplerian model, possibly connected to a companion. Non-Keplerian gas spirals are also found in HD\,142527 by \cite{Garg2021}.
\cite{Calcino2022} show that the outer kink in HD\,163296 is possibly associated with a planetary spiral wake. \cite{Izquierdo2021a} developed a new channel-map fitting package to robustly identify localised velocity perturbations in both radius and azimuth and thus infer the position of an embedded planet; applied to HD\,163296 data, this method yields indications of two embedded planets \citep{Izquierdo2022}.
\begin{table*}
\centering
\caption{Stellar properties, outer disc inclination and dust cavity radius of the disc sample studied in this work.}
\begin{tabular}{lccccccccc}
\hline
\hline
Object & $d$ (pc) & Spectral Type & $T_{\mathrm{eff}}$ (K) & $L$ ($L_{\odot}$) & $M_*$ ($M_{\odot}$) & i ($\degree$) & Classification & Dust Cavity (au) & Ref.\tablefootmark{a} \\
\hline
AA\,Tau & 137 & K7 & 4350 & 1.1 & 0.68 & 59 & TTS & 44 & 1\\
AB\,Aur & 163 & A0 & 9520 & 65.1 & 2.56 & 23 & Herbig & 156 & 1\\
CQ\,Tau & 163 & F2 & 6890 & 10.0 & 1.63 & 35 & Herbig & 50 & 1\\
CS\,Cha & 176 & K2 & 4780 & 1.9 & 1.4 & 8 & TTS & 37 & 1\\
DM\,Tau & 145 & M2 & 3580 & 0.2 & 0.39 & 35 & TTS & 25 & 1\\
DoAr\,44 & 146 & K2 & 4780 & 1.9 & 1.4 & 20 & TTS & 40 & 1\\
GG\,Tau & 140 & K7+M0 & 4060 & 1.6 & 0.66 & 36 & TTS & 224 & 1\\
GM\,Aur & 160 & K5 & 4350 & 1.0 & 1.01 & 53 & TTS & 40 & 1\\
HD\,100453 & 104 & F0 & 7200 & 6.2 & 1.47 & 30 & Herbig & 30 & 1\\
HD\,100546 & 110 & A0 & 9520 & 25.0 & 2.13 & 42 & Herbig & 27 & 1\\
HD\,135344B & 136 & F5 & 6440 & 6.7 & 1.51 & 12 & Herbig & 52 & 1\\
HD\,139614 & 134 & A9 & 7750 & 6.0 & 1.57 & 18 & Herbig & - & 2 \\
HD\,142527 & 157 & F6+M5/M6 & 6360 & 9.9 & 1.69+0.26 & 27 & Herbig & 185 & 1,3\\
HD\,169142 & 114 & A5 & 8200 & 8.0 & 1.65 & 12 & Herbig & 26 & 1\\
HD\,34282 & 312 & A0 & 9520 & 10.8 & 2.11 & 59 & Herbig & 87 & 1\\
HD\,97048 & 185 & A0 & 9520 & 30.0 & 2.17 & 41 & Herbig & 63 & 1\\
HP\,Cha & 160 & K7 & 4060 & 2.4 & 0.95 & 37 & TTS & 50 & 1\\
IP\,Tau & 131 & M0 & 3850 & 0.6 & 0.54 & 45 & TTS & 25 & 1\\
IRS\,48 & 134 & A0 & 9520 & 17.8 & 1.96 & 50 & Herbig & 83 & 1\\
J1604.3-2130 & 150 & K3 & 4780 & 0.7 & 1.1 & 6 & TTS & 87 & 1\\ %
LkCa\,15 & 159 & K2 & 4730 & 1.3 & 1.32 & 55 & TTS & 76 & 1\\
MWC\,758 & 160 & A7 & 7850 & 14.0 & 1.77 & 21 & Herbig & 62 & 1\\
PDS\,70 & 113 & K7 & 4060 & 0.3 & 0.8 & 52 & TTS & 74 & 1\\
PDS\,99 & 155 & K6 & 4205 & 1.1 & 0.88 & 55 & TTS & 56 & 1\\
RXJ1615.3-3255 & 156 & K7 & 4100 & 0.6 & 0.73 & 47 & TTS & - & 2 \\
RXJ1852.3-3700 & 146 & K2 & 4780 & 0.6 & 1.05 & 30 & TTS & 49 & 1\\
RY\,Lup & 159 & K2 & 4780 & 1.9 & 1.4 & 67 & TTS & 69 & 1\\
RY\,Tau & 175 & G2 & 5860 & 15.0 & 2.25 & 65 & TTS & 27 & 1\\
SR\,21 & 138 & G4 & 5770 & 11.0 & 2.12 & 16 & TTS & 56 & 1\\
Sz\,91 & 159 & M1 & 3850 & 0.2 & 0.54 & 45 & TTS & 86 & 1\\
SZ\,Cha & 190 & K2 & 5100 & 1.7 & 1.45 & 47 & TTS & - & 2\\
T\,Cha & 107 & G8 & 5570 & 1.3 & 1.12 & 73 & TTS & 34 & 1\\
TW\,Hya & 60 & K7 & 4205 & 0.3 & 0.81 & 7 & TTS & 2 & 1\\
UX\,Tau\,A & 140 & G8 & 5570 & 2.5 & 1.4 & 40 & TTS & 31 & 1\\
V1247\,Ori & 400 & F0 & 7200 & 15.0 & 1.82 & 30 & Herbig & 64 & 1\\
V4046\,Sgr & 72 & K7+K5 & 4060 & 0.5 & 0.9+0.85 & 34 & TTS & 31 & 1,4 \\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{Unless indicated otherwise, data for spectral type, distance, effective temperature, stellar luminosity, stellar mass and disc inclination are taken from (1) \cite{Francis2020} where all original references can be found. The distances are according to \cite{Gaia2018}. (2) \cite{Bohn2022}, (3) \cite{Claudi2019}, (4) \cite{Rosenfeld2012}}. The radius of the dust cavity was determined by \cite{Francis2020}.
}
\label{tab:stellarProp}
\end{table*}
Besides the kinematics, it can also be useful to look for substructures or asymmetries in the peak intensity/brightness temperature residuals when searching for evidence of companions. The density waves created by a companion result in an increase in surface density and thus in a higher opacity. This moves the $\tau = 1$ layer to a higher altitude where the temperature is generally higher, resulting in spiral substructures in the gas brightness temperature (\citealp{Phuong2020b,Phuong2020a}). In addition, planets can generate tightly wound spirals in the brightness temperature through buoyancy resonances \citep{Bae2021}. The temperature structure in planet-driven spiral arms is investigated by \cite{Muley2021}, whose models may explain the observed thermal features in discs like TW\,Hya and CQ\,Tau: \cite{Teague2019Spiral} and \cite{Woelfer2021} report the detection of spiral structures in the $^{12}$CO brightness temperature of TW\,Hya and CQ\,Tau, respectively, after subtraction of an azimuthally averaged model. These spirals are (at least partly) linked to the spirals observed in the velocity residuals (for TW\,Hya see also \citealp{Sturm2020}) and, in the case of CQ\,Tau, connected to a small spiral in the NIR (\citealp{Uyama2020}).
Studying the gas component in discs may enable us to assess the different dynamical processes described above that shape the disc and may reveal previously undetected substructures. In this context, probing different disc layers with various molecules may help distinguish the formation mechanisms of the observed substructures (e.g. \citealp{Pinte2018, Law2021}). For example, in a passively heated disc with a positive vertical temperature gradient, more tightly wound spirals are expected towards the midplane in the planetary scenario, while similar spiral pitch angles would be established between the surface and midplane layers if the spirals result from gravitational instabilities (\citealp{Juhasz2018}). Furthermore, an embedded planet will induce perturbations in all three velocity components, which vary as a function of height: the magnitude of the radial and rotational perturbations ($v_{\mathrm{r}}$, $v_{\mathrm{\varphi}}$) decreases towards the disc surface, while that of the vertical perturbations ($v_{\mathrm{z}}$) increases \citep{Pinte2019}.
To this point, the connection between inner and outer disc structures in protoplanetary discs is not fully understood, but it represents an important piece of the planet formation puzzle. Several observations of transition discs in NIR scattered light have revealed dark regions (e.g. \citealp{Stolker2016, Casassus2018}), which are commonly interpreted as shadows resulting from a misalignment between the inner and the outer disc (e.g. \citealp{Marino2015, Facchini2018}). One particularly exciting explanation for this is the presence of one or several massive misaligned companions that induce a misalignment in the disc regions around them \citep{Francis2020,Perraut2021,Bohn2022}.
In this work, we investigate archival CO data of a sample of 36 transition discs in terms of both their velocity and brightness temperature structure, searching for possible perturbations and features that may be linked to the presence of embedded companions. The paper is structured as follows: in \hyperref[sec:observations]{Sect.~\ref*{sec:observations}} we give an overview of the selected targets. The observational results, including brightness temperature and velocity maps as well as radial intensity profiles, are presented in \hyperref[sec:results]{Sect.~\ref*{sec:results}}. In \hyperref[sec:analysis]{Sect.~\ref*{sec:analysis}} we describe our analysis, showing the resulting velocity and brightness temperature residuals. These results are discussed in \hyperref[sec:Discussion]{Sect.~\ref*{sec:Discussion}}, where a comparison with other indicators of planets is made. A summary of our work is presented in \hyperref[sec:Summary]{Sect.~\ref*{sec:Summary}}.
\begin{table*}
\centering
\caption{Characteristics of the ALMA line data for the main lines of this analysis.}
\begin{tabular}{lcccccccc}
\hline
\hline
Object & Line & ALMA Project ID & Beam ($\arcsec$) & LAS ($\arcsec$) & $\Delta \upsilon$ (km$\,$s$^{-1}$) & RMS (mJy$\,$beam$^{-1}$) & Cube Source\tablefootmark{a} \\%& Ref. \\
\hline
AA\,Tau & $^{13}$CO 3-2 & 2015.1.01017.S & 0.28x0.22 & 8.0 & 0.11 & 13.5 & A \\
AB\,Aur & $^{13}$CO 3-2 & 2012.1.00303.S & 0.37x0.23 & 7.2 & 0.2 & 6.2 & P/PC & \\
CQ\,Tau & $^{12}$CO 2-1 & 2013.1.00498.S & 0.12x0.1 & 5.3 & 0.5 & 1.2 & P/PC\\
& & 2016.A.00026.S & & 2.9\\
& & 2017.1.01404.S & & 2.7\\
CS\,Cha & $^{12}$CO 3-2 & 2017.1.00969.S & 0.1x0.07 & 2.4 & 0.11 & 4.2 & A & \\
DM\,Tau & $^{12}$CO 2-1 & 2016.1.00724.S & 0.86x0.8 & 10.5 & 0.08 & 17.6 & A & \\
DoAr\,44 & $^{13}$CO 3-2 & 2012.1.00158.S & 0.25x0.19 & 3.2 & 0.5 & 13.7 & P/PC & \\
GG\,Tau & $^{12}$CO 2-1 & 2018.1.00532.S & 0.34x0.27 & 9.7 & 0.08 & 2.6 & A & \\
GM\,Aur & $^{12}$CO 2-1 & 2018.1.01055.L & 0.15x0.15 & 3.6-44.1 & 0.2 & 2.8 & P/PC\\
HD\,100453 & $^{12}$CO 3-2 & 2017.1.01424.S & 0.05x0.05 & 1.3 & 0.42 & 1.0 & P/PC & \\
HD\,100546 & $^{12}$CO 2-1 & 2016.1.00344.S & 0.08x0.06 & 1.1/2.7 & 0.5 & 1.2 & P/PC & \\
HD\,135344B & $^{13}$CO 3-2 & 2012.1.00158.S & 0.26x0.21 & 3.1 & 0.24 & 19.1 & P/PC & \\
HD\,139614 & $^{13}$CO 2-1 & 2015.1.01600.S & 0.77x0.55 & 8.4 & 0.4 & 26.5 & P/PC\\
HD\,142527 & $^{12}$CO 2-1 & 2015.1.01353.S & 0.28x0.26 & 3.9 & 0.09 & 2.5 & A\\
HD\,169142 & $^{12}$CO 2-1 & 2015.1.00490.S & 0.18x0.13 & 4.2 & 0.05 & 1.2 & P/PC\\
HD\,34282 & $^{12}$CO 3-2 & 2013.1.00658.S & 0.26x0.2 & 5.1/9.7 & 0.2 & 8.2 & P/PC\\
HD\,97048 & $^{13}$CO 3-2 & 2016.1.00826.S & 0.11x0.07 & 1.8/4.4 & 0.12 & 3.8 & P/PC\\
HP\,Cha & $^{12}$CO 2-1 & 2016.1.00583.S & 0.3x0.21 & 6.2 & 0.63 & 3.7 & A\\
IP\,Tau & $^{12}$CO 2-1 & 2013.1.00163.S & 0.24x0.21 & 3.2 & 1.0 & 5.5 & P/PC \\
IRS\,48 & $^{12}$CO 3-2 & 2013.1.00100.S & 0.19x0.13 & 1.4-3.5 & 0.24 & 8.3 & P/PC \\
J1604 & $^{12}$CO 3-2 & 2015.1.00888.S & 0.23x0.19 & 3.3/6.7 & 0.21 & 7.1 & P/PC\\
LkCa\,15 & $^{12}$CO 2-1 & 2018.1.01255.S & 0.41x0.29 & 7.6 & 0.04 & 6.0 & P/PC \\
MWC\,758 & $^{13}$CO 3-2 & 2012.1.00725.S & 0.19x0.16 & 5.0 & 0.11 & 8.9 & P/PC \\
PDS\,70 & $^{12}$CO 3-2 & 2017.A.00006.S & 0.11x0.1 & 2.3 & 0.42 & 1.1 & P/PC \\
PDS\,99 & $^{12}$CO 2-1 & 2015.1.01301.S & 0.3x0.22 & 4.9 & 0.16 & 8.3 & A \\
RXJ1615 & $^{12}$CO 3-2 & 2012.1.00870.S & 0.3x0.23 & 3.2/6.5 & 0.21 & 14.1 & P/PC\\
RXJ1852 & $^{12}$CO 2-1 & 2018.1.00689.S & 0.16x0.12 & 1.8 & 0.63 & 4.3 & A \\
RY\,Lup & $^{12}$CO 3-2 & 2017.1.00449.S & 0.22x0.17 & 2.71 & 0.85 & 4.0 & P/PC\\
RY\,Tau & $^{12}$CO 2-1 & 2013.1.00498.S & 0.28x0.16 & 1.67 & 0.5 & 9.1 & A\\
SR\,21 & $^{12}$CO 2-1 & 2018.1.00689.S & 0.14x0.12 & 1.71 & 0.64 & 4.8 & A\\
Sz\,91 & $^{12}$CO 3-2 & 2012.1.00761.S & 0.17x0.13 & 1.36 & 0.2 & 11.8 & A \\
SZ\,Cha & $^{12}$CO 3-2 & 2013.1.01075.S & 0.82x0.43 & 3.70 & 0.5 & 26.2 & P/PC\\
T\,Cha & $^{12}$CO 2-1 & 2017.1.01419.S & 0.24x0.17 & 2.55 & 0.32 & 9.0 & A\\
TW\,Hya & $^{12}$CO 3-2 & 2015.1.00686.S & 0.14x0.13 & 0.37 & 0.25 & 3.5 & P/PC\\
& & 2016.1.00629.S & & 1.3/6.0\\
UX\,Tau\,A & $^{12}$CO 3-2 & 2015.1.00888.S & 0.2x0.16 & 2.41 & 0.21 & 3.4 & P/PC\\
V1247\,Ori & $^{12}$CO 3-2 & 2016.1.01344.S & 0.05x0.03 & 0.86 & 1.0 & 2.0 & P/PC\\
V4046\,Sgr & $^{12}$CO 2-1 & 2016.1.00724.S & 0.41x0.29 & 4.48 & 0.08 & 9.9 & A\\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{P/PC: Reimaged data cube. Public data or obtained via private communication, A: Archival data product.}
}
\label{tab:dataProp}
\end{table*}
\section{Observations}\label{sec:observations}
Our selected sample consists of 36 transition discs, chosen from the sample of \cite{Francis2020} where sufficient CO data are available. Except for TW\,Hya, these discs show large (> 25\,au) inner dust cavities and therefore represent ideal candidates in which to search for planet-disc interactions. The sample comprises different star-disc system architectures, covering a range of spectral types (M2 to A0; primary) and stellar masses (0.4\,$M_{\odot}$ to 2.6\,$M_{\odot}$; primary), and counts 23 T\,Tauri and 13 Herbig stars. Some stellar and disc properties of our targets are listed in \autoref{tab:stellarProp}.
For our analysis we collect either Band 6 or Band 7 archival CO line data observed with ALMA. For most sources (two thirds) we use reimaged/self-calibrated data cubes that are either public or were obtained via private communication; for the remaining sources we use archival data products. This is indicated in \autoref{tab:dataProp}, where characteristics of the data cubes are listed for the main lines used in our study. Typical spectral resolutions of the data are a few 100\,m\,s$^{-1}$, and spatial resolutions lie between $\sim 6-135$\,au (median: 31\,au). RMS values lie between $\sim$0.5-47\,K (median: 3.4\,K) when scaled to a channel width of 100\,m\,s$^{-1}$.
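The quoted RMS values are rescaled to a common 100\,m\,s$^{-1}$ channel width. Under the radiometer relation (noise $\propto 1/\sqrt{\rm bandwidth}$), that rescaling can be sketched as follows (an assumed convention for illustration; the function name is ours):

```python
import math

def rms_at_reference(rms_native_K, dv_native_kms, dv_ref_kms=0.1):
    """Scale a per-channel RMS (K) from the native channel width to a
    reference width, using noise ~ 1/sqrt(channel bandwidth).
    A narrower reference channel gives a larger effective RMS."""
    return rms_native_K * math.sqrt(dv_native_kms / dv_ref_kms)

# Example: 1.0 K RMS in 0.4 km/s channels corresponds to 2.0 K at 0.1 km/s.
```

This makes datasets with very different native channel widths directly comparable in sensitivity.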
To assess whether combining reimaged and archival products affects our results, we compare the reimaged data sets with the archival products for the same data set. We find that the Keplerian fit (see \hyperref[sec:velocityRes]{Sect.~\ref*{sec:velocityRes}}) is not significantly affected. The detection of extended (over several beams) substructures such as spirals, or the non-detection of features, is also not affected; tentative features, however, are sometimes visible only in the reimaged data. Some examples of this test are shown in \hyperref[fig:compClean]{Fig.~\ref*{fig:compClean}} in the Appendix: while clear spirals are found in both the reimaged and the archival-product data of UX\,Tau\,A, a tentative spiral in the brightness temperature of HD\,135344B and a spiral/arc in the kinematics of J1604 are visible only in the reimaged data. RXJ\,1615, on the other hand, shows no clear spirals or arcs in either data product.
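The Keplerian models referred to above reduce, in the geometrically thin case, to a simple projected velocity field, $v_{\rm los} = \sqrt{GM_*/r}\,\cos\varphi\,\sin i + v_{\rm sys}$. A minimal sketch (our own illustration, not the fitting code used in this work):

```python
import math

GM_SUN = 1.32712440018e20   # m^3 s^-2, G * M_sun
AU = 1.495978707e11         # m

def v_los(r_au, phi, mstar_msun, incl_deg, v_sys=0.0):
    """Line-of-sight velocity (m/s) of a geometrically thin Keplerian disc
    at disc-frame radius r_au and azimuth phi (phi = 0 along the
    red-shifted major axis), for inclination incl_deg and systemic
    velocity v_sys."""
    v_kep = math.sqrt(mstar_msun * GM_SUN / (r_au * AU))
    return v_sys + v_kep * math.cos(phi) * math.sin(math.radians(incl_deg))
```

Subtracting such a model from the observed velocity map leaves the residuals in which non-Keplerian features (spirals, arcs, kinks) are searched for; the thick-disc variant additionally accounts for the elevated emitting surface.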
Several discs in our sample are affected by cloud absorption, namely AB\,Aur, HD\,142527, HD\,97048, HP\,Cha, IP\,Tau, IRS\,48, PDS\,99, SR\,21, Sz\,91 and SZ\,Cha. We mask the affected regions in the calculations of the radial profiles (\hyperref[sec:radprof]{Sect.~\ref*{sec:radprof}}) and brightness temperature residuals (\hyperref[sec:tempres]{Sect.~\ref*{sec:tempres}}). For some of the targets we analysed additional CO isotopologues, which are listed with the data properties in the Appendix in \autoref{tab:dataPropAdd}. Our main lines for analysis (\autoref{tab:dataProp}) were chosen based on their brightness as well as the spatial and spectral resolution of the observations. A few of our targets have already been analysed with the same techniques as explained in \hyperref[sec:analysis]{Sect.~\ref*{sec:analysis}} (CQ\,Tau \citep{Woelfer2021}, HD\,100453 \citep{Rosotti2020a}, TW\,Hya \citep{Teague2019Spiral}), but are included in this work for comparison.
\section{Observational results}\label{sec:results}
\subsection{Brightness temperature}
In \hyperref[fig:TbMapsMajor]{Fig.~\ref*{fig:TbMapsMajor}} we present the peak brightness temperature maps (continuum subtracted) for the main CO lines (mostly $^{12}$CO, some $^{13}$CO). Maps for the additional lines can be found in \hyperref[fig:TbAddBand6]{Fig.~\ref*{fig:TbAddBand6}} and \hyperref[fig:TbAddBand7]{Fig.~\ref*{fig:TbAddBand7}} in the Appendix. The maps presented in \hyperref[fig:TbMapsMajor]{Fig.~\ref*{fig:TbMapsMajor}} are shown again in \hyperref[fig:TbMapsMajorCont]{Fig.~\ref*{fig:TbMapsMajorCont}} with overlaid continuum images, illustrating that for most targets the dust disc (mm-sized grains, B6 and B7) is substantially smaller than the gas disc. This can readily be explained by radial drift processes and/or by the difference between dust and gas opacities \citep{Facchini2017,Trapman2019}.
To compute the peak intensity maps we use the standard moment 8 implementation in the \texttt{bettermoments} code \citep{bettermoments} and then convert from flux density units to units of Kelvin with the Planck law. No masking is applied in the computation of the maps. The brightness temperature traces a combination of kinetic gas temperature and column density, with optically thick lines mostly measuring the temperature and more optically thin lines mostly tracing the column density. In this context, the observed gas temperatures of $\sim 30$\,K up to $\sim 200$\,K are as expected for the upper disc layers (see e.g. \citealp{Bruderer2013,Bruderer2014, Leemker2022}).
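The flux-to-temperature conversion can be sketched as follows. This is a minimal numpy implementation, not the \texttt{bettermoments} code itself; the Gaussian-beam solid angle formula is standard, while the function name and argument choices are our own:

```python
import numpy as np

# Physical constants (SI)
h = 6.62607015e-34    # Planck constant [J s]
k_B = 1.380649e-23    # Boltzmann constant [J / K]
c = 2.99792458e8      # speed of light [m / s]

def jybeam_to_tb(flux_jybeam, nu, bmaj, bmin):
    """Convert a flux density [Jy/beam] into a brightness temperature [K]
    using the full Planck law (no Rayleigh-Jeans approximation).

    nu         : observing frequency [Hz]
    bmaj, bmin : beam FWHM along the major/minor axis [arcsec]
    """
    # Solid angle of an elliptical Gaussian beam [sr]
    fwhm_to_sigma = 1.0 / np.sqrt(8.0 * np.log(2.0))
    omega = 2.0 * np.pi * fwhm_to_sigma**2 * bmaj * bmin \
        * (np.pi / 180.0 / 3600.0)**2
    # Specific intensity [W m^-2 Hz^-1 sr^-1]; 1 Jy = 1e-26 W m^-2 Hz^-1
    I_nu = flux_jybeam * 1e-26 / omega
    # Invert the Planck law for the temperature
    return (h * nu / k_B) / np.log1p(2.0 * h * nu**3 / (c**2 * I_nu))
```

Using the full Planck law rather than the Rayleigh-Jeans approximation matters most in the faint outer regions of the maps, where the linear approximation underestimates the temperature.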
As discussed below, some of the discs show interesting features in their peak brightness temperature. Very massive companions reveal themselves in the peak brightness temperature through the prominent spirals they induce, which are detectable even at this data quality. Clear spiral structures can be discerned in GG\,Tau, HD\,100453, HP\,Cha and UX\,Tau\,A. GG\,Tau is (at least) a quadruple star system surrounded by a massive disc, with the substructures likely tracing star-disc or planet-disc interactions \citep{Leinert1991,Dutrey2014,Phuong2020b}. HD\,100453 and UX\,Tau\,A are also known to have stellar companions that are responsible for the observed spirals \citep{Rosotti2020a,menard2020}. HP\,Cha is affected by cloud absorption on the blueshifted side, but the extended (redshifted) structure suggests interactions with the environment, such as infalling material from a streamer or a fly-by. Even though the disc around V4046\,Sgr is also known to be circumbinary, no clear substructures can be seen. The reason for this is that the two stars in the system orbit each other at a small distance ($<1\,\mathrm{au}$, $2.4\,\mathrm{d}$, \citealp{Stempels2004}), acting like a single gravitational point source on much larger scales.
Indications of spirals are visible in CQ\,Tau, where one side of the disc is substantially brighter, marking the anchoring point of the spiral (see \citealp{Woelfer2021}), as well as in HD\,135344B and TW\,Hya. For HD\,135344B similar spiral features have been found by \cite{Casassus2021} in $^{12}$CO 2-1 data. Other discs show arc-like azimuthal asymmetries, such as HD\,142527 (in all CO isotopologues, \citealp{Casassus2015,Garg2021,Yen2020}) or Sz\,91. \cite{Tsukagoshi2019} explain the arc-like structure in Sz\,91 with a flared disc, showing emission from the front and the back side, in combination with a dust ring.
In a few maps, e.g. of AB\,Aur, CQ\,Tau, or MWC\,758, symmetric dimmed regions are visible. Such features are commonly linked to the presence of a misaligned inner disc casting a shadow over the outer disc (e.g. \citealp{Marino2015, Facchini2018}). Beam dilution effects can, however, cause artificial features along the minor axis (see the example of CQ\,Tau in \citealp{Woelfer2021}), so caution should be taken when interpreting these dimmed regions. The misalignment hypothesis has recently been tested by \cite{Bohn2022} for a sub-sample of our discs by comparing the position angle and inclination of the inner disc measured with VLTI/GRAVITY with those of the outer disc measured with ALMA. Significant misalignments are found for CQ\,Tau, HD\,100453, HD\,142527, HD\,34282, RY\,Lup and V1247\,Ori. \cite{Francis2020} also find misalignments from ALMA inner disc images, which are significant for eight sources in either position angle or inclination (AB\,Aur, GG\,Tau, HP\,Cha, MWC\,758, PDS\,70, SR\,24\,S, TW\,Hya, V4046\,Sgr). In \hyperref[fig:examplesFeatures]{Fig.~\ref*{fig:examplesFeatures}}, some examples are given of the different features that can be observed in the brightness temperature. We note that arcs can also be seen as part of a spiral; in this work we identify spirals as structures covering a larger range of radii, while arcs are mostly observed at one radius.
To uncover small substructures in the brightness temperature, we further analyse these maps in \hyperref[sec:tempres]{Sect.~\ref*{sec:tempres}} by subtracting azimuthally symmetric brightness temperature profiles from the data.
\subsection{Rotation velocity}
In \hyperref[fig:vrotMapsMajor]{Fig.~\ref*{fig:vrotMapsMajor}} we present the kinematics of our targets, again showing the main lines, while the additional lines can be found in the Appendix in \hyperref[fig:v0AddBand6]{Fig.~\ref*{fig:v0AddBand6}} and \hyperref[fig:v0AddBand7]{Fig.~\ref*{fig:v0AddBand7}}. To compute the line-of-sight velocity of the gas we use the quadratic method implemented in the \texttt{bettermoments} package: a quadratic function is fitted to the brightest pixel in the spectrum and its two neighbouring pixels to find the centroid of the line in pixel coordinates. To reduce the noise at the disc edge, we mask regions below a certain signal-to-noise ratio (S/N). The magnitude of this clipping is chosen via inspection of each individual map, ranging from 2\,$\sigma$ to 5\,$\sigma$.
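The quadratic centroid method can be illustrated with a short sketch. This is our own re-implementation of the idea, not the \texttt{bettermoments} code; a uniform velocity axis is assumed:

```python
import numpy as np

def quadratic_centroid(spectrum, velax):
    """Line centroid from a parabola through the brightest channel and
    its two neighbours (the idea behind the 'quadratic' method)."""
    i = int(np.argmax(spectrum))
    i = min(max(i, 1), len(spectrum) - 2)   # need both neighbours
    y0, y1, y2 = spectrum[i - 1], spectrum[i], spectrum[i + 1]
    # Vertex of the parabola, as an offset in channel units
    denom = y0 - 2.0 * y1 + y2
    shift = 0.5 * (y0 - y2) / denom if denom != 0.0 else 0.0
    dv = velax[1] - velax[0]                # assume a uniform velocity axis
    return velax[i] + shift * dv
```

Because the parabola vertex is computed analytically from three channels, the centroid is recovered at sub-channel precision, which is what makes the kinematic residual maps sensitive to perturbations well below the spectral resolution.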
Even though the spiral features are not as prominent in the kinematics as in the brightness temperature maps, they are still observable in GG\,Tau, HD\,100453, HP\,Cha and UX\,Tau\,A. CQ\,Tau, HD\,135344B and TW\,Hya show indications of spirals in the brightness temperature, but in the kinematics these indications are only present for TW\,Hya. CQ\,Tau, however, shows twisted kinematics in the centre that resemble a warp but are likely caused by the spiral structure \citep{Woelfer2021}. Clearly twisted kinematics can also be seen in the centre of HD\,142527, for which several indications of a warped disc have been found \citep{Marino2015,Casassus2015,Bohn2022}.
For several discs with higher inclinations the vertical structure becomes visible in the isovelocity curves bending away from the semi-major axis in one direction (e.g. AA\,Tau, GM\,Aur, HD\,97048, LkCa\,15, PDS\,70, RY\,Lup, T\,Cha, V1366\,Ori). Fitting for this structure can be used to determine the flaring and scale height of the disc (e.g. \citealp{Casassus2019, Teague2019a}). Other more face-on or inclined but less elevated discs show a dipole morphology that is symmetric about the semi-major axis (e.g. AB\,Aur, CS\,Cha, HD\,135344B, HD\,139614, TW\,Hya, V4046\,Sgr).
To reveal possible deviations from Keplerian rotation that may be indicative of the presence of companions, we attempt to fit a Keplerian model to the rotation velocity of the discs in \hyperref[sec:velocityRes]{Sect.~\ref*{sec:velocityRes}} assuming thin and thick disc geometries.
\subsection{Radial profiles}\label{sec:radprof}
In \hyperref[fig:radialProfiles]{Fig.~\ref*{fig:radialProfiles}} the radial peak intensities are displayed for the different CO lines (colored lines) as well as the mm-continuum (black lines). These curves are calculated by azimuthally averaging the peak intensities for radial annuli of equal width, using the \texttt{GoFish} package \citep{Gofish}. By default, the widths of the annuli in this package are given as 1/4 of the beam major axis. For the computation we assume the geometries (thin or thick disc) obtained from the fitting of the rotation maps (see \hyperref[sec:velocityRes]{Sect.~\ref*{sec:velocityRes}}). For both geometries we recover similar radial profiles (due to similar fits, see discussion in \hyperref[sec:velocityRes]{Sect.~\ref*{sec:velocityRes}}) and thus the curves are only shown for the thin disc geometry in \hyperref[fig:radialProfiles]{Fig.~\ref*{fig:radialProfiles}}. All profiles are normalised to the peak value. The uncertainties, shown as shaded regions, correspond to the standard deviation per annulus divided by the square root of the number of independent beams in the annulus. Some discs in our sample are affected by cloud absorption, which can result in an artificially lower brightness temperature and a larger azimuthal scatter. We therefore exclude the affected azimuthal angles from our calculation. The beam size of the continuum and lines is indicated by a colored bar in each panel. The interpretation of radial profiles depends on the resolution. Given the inhomogeneity of our sample in that regard, the trends reported below may be subject to change with higher and comparable resolutions. The main features of the profiles are annotated in the individual panels of \hyperref[fig:radialProfiles]{Fig.~\ref*{fig:radialProfiles}}.
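The averaging procedure described above can be sketched as follows. This is a minimal stand-in for the \texttt{GoFish} computation, assuming a thin-disc deprojection; the function name, argument list, and position-angle convention (measured from north towards the redshifted major axis) are our own choices:

```python
import numpy as np

def radial_profile(image, x0, y0, inc, pa, rbins, pix_scale, beam_area_pix):
    """Azimuthally averaged radial profile in deprojected annuli
    (thin-disc geometry). Returns the bin centres, the mean per annulus,
    and the uncertainty std / sqrt(N_beams), with N_beams the number of
    independent beams in the annulus.

    inc, pa   : inclination and position angle [rad]
    pix_scale : pixel size; rbins are in the same units
    """
    ny, nx = image.shape
    y, x = np.indices((ny, nx)).astype(float)
    dx, dy = (x - x0) * pix_scale, (y - y0) * pix_scale
    # Rotate so the major axis lies along x', then stretch the minor axis
    xr = dx * np.sin(pa) + dy * np.cos(pa)
    yr = (-dx * np.cos(pa) + dy * np.sin(pa)) / np.cos(inc)
    r = np.hypot(xr, yr)
    means, errs = [], []
    for rin, rout in zip(rbins[:-1], rbins[1:]):
        vals = image[(r >= rin) & (r < rout) & np.isfinite(image)]
        if vals.size == 0:
            means.append(np.nan)
            errs.append(np.nan)
            continue
        n_beams = max(vals.size / beam_area_pix, 1.0)
        means.append(vals.mean())
        errs.append(vals.std() / np.sqrt(n_beams))
    return 0.5 * (rbins[:-1] + rbins[1:]), np.array(means), np.array(errs)
```

Dividing by the number of independent beams, rather than the number of pixels, avoids underestimating the uncertainty when neighbouring pixels are correlated within the beam.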
The radial profiles can be used to estimate the size of the cavity (steep emission drop). In this context it is important to note that artificial cavities can be created in the peak intensity, depending on the beam size. Near the star the emission is less extended than the beam size and during the beam convolution the intensity gets diluted over the full extent of the beam, hence the peak intensity decreases. When studying the inner disc it is therefore important to look at the velocity integrated intensity, which is less affected by this issue. However, for the integrated intensity there may be contributions from the back side of the disc \citep{Rab2019}. The radial profiles for the integrated intensity are shown in the Appendix in \hyperref[fig:radialProfiles2]{Fig.~\ref*{fig:radialProfiles2}}.
The measured brightness temperature will also be affected by the subtraction of the continuum (except for HD\,97048 all data are continuum subtracted). For optically thick lines, which absorb part of the underlying continuum, line emission may be removed when subtracting the continuum, leading to artificial temperature drops (e.g. \citealp{Weaver2018, Rosotti2021, Bosman2021}). Since in this work we are mostly interested in substructures rather than in obtaining a robust measurement of the temperature, we do not expect this effect to significantly affect our results.
As visible in \hyperref[fig:radialProfiles]{Fig.~\ref*{fig:radialProfiles}} (and \hyperref[fig:radialProfiles2]{Fig.~\ref*{fig:radialProfiles2}}), the CO emission peaks inside the dust cavity for most discs, as expected from previous work \citep{Bruderer2013,Marel2016}. For a few discs, such as HD\,139614, the inner dust cavity is not resolved due to limited resolution. The radial profiles can further show dips or wiggles, especially in the dust, indicating ring structures and depleted regions. Note that, instead of a dip, enhanced desorption of CO ices by increased UV irradiation or a temperature inversion in the (more optically thin) outer disc at the edge of the continuum can also result in an enhancement of the gas emission (e.g. \citealp{Cleeves2016,Facchini2017}).
\section{Analysis}\label{sec:analysis}
\subsection{Brightness temperature structure}\label{sec:tempres}
To uncover possible substructures in the brightness temperature, we construct an azimuthally symmetric model with \texttt{GoFish} and subtract this model from the data. The package computes an azimuthally averaged radial profile for a given geometry (compare \hyperref[sec:radprof]{Sect.~\ref*{sec:radprof}}) and then projects it onto the sky to create an azimuthally symmetric model. For the disc geometry we use the results for the thin and (if available) the vertically extended thick disc from the kinematics modelling described in \hyperref[sec:velocityRes]{Sect.~\ref*{sec:velocityRes}}. The angles affected by cloud absorption are again excluded from the calculation. The resulting residuals are presented in \hyperref[fig:TbResMajorThin]{Fig.~\ref*{fig:TbResMajorThin}} for the geometrically thin disc and in the Appendix for the geometrically thick disc models (\hyperref[fig:thickboth]{Fig.~\ref*{fig:thickboth}}). The color scale is adapted such that regions hotter than the model are highlighted; since we are mostly interested in the general occurrence of features such as spirals, this choice helps the readability of the plots. Residuals for the other CO lines can also be found in the Appendix. The main features are annotated in the individual panels and further discussed in \hyperref[sec:Discussion]{Sect.~\ref*{sec:Discussion}}.
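The residual construction amounts to interpolating the azimuthally averaged radial profile at each pixel's deprojected radius and subtracting it from the data. A sketch, not the actual \texttt{GoFish} implementation; \texttt{r\_map} is assumed to be the deprojected radius map for the adopted geometry:

```python
import numpy as np

def symmetric_residuals(image, r_map, rbin_centres, profile):
    """Residuals with respect to an azimuthally symmetric model: the
    azimuthally averaged profile is interpolated at each pixel's
    deprojected radius and subtracted from the data."""
    model = np.interp(r_map, rbin_centres, profile)
    return image - model
```

By construction, a perfectly axisymmetric disc leaves zero residuals, so anything surviving the subtraction (spirals, arcs, bright spots) is a genuine azimuthal asymmetry or a misfit of the assumed geometry.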
\subsection{Velocity structure}\label{sec:velocityRes}
To analyse the gas kinematics of our sample we use the \texttt{eddy} code \citep{eddy} to fit a Keplerian profile
\begin{equation}\label{eq:kepler}
v_{\mathrm{rot}} (r, \phi) = \sqrt{\frac{G M_*}{r}} \cdot \cos{\phi} \cdot \sin{i} + v_{\mathrm{LSR}}
,\end{equation}
with $(r,\phi)$ being the deprojected cylindrical coordinates, $i$ the inclination of the disc, and $v_{\mathrm{LSR}}$ the systemic velocity, to the rotation maps shown in \hyperref[fig:vrotMapsMajor]{Fig.~\ref*{fig:vrotMapsMajor}}. To deproject the sky-plane coordinates $(x,y)$ into the midplane cylindrical coordinates $(r,\phi)$, the disc centre $(x_0,y_0)$, $i$, and the disc position angle PA are used. The latter is measured from north to the redshifted semi-major axis in an easterly direction. As a first step, the starting positions of the free fit parameters are optimised with \texttt{scipy.optimize}, and their posterior distributions are then estimated using the MCMC sampler.
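The Keplerian model and the deprojection described above can be sketched as a minimal thin-disc implementation. This is not the \texttt{eddy} code; the function name, units, and the exact sign convention of the rotation are our own assumptions:

```python
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30     # solar mass [kg]
au = 1.496e11        # astronomical unit [m]

def keplerian_vlos(x, y, x0, y0, inc, pa, mstar, dist, vlsr):
    """Line-of-sight velocity of a geometrically thin Keplerian disc.

    x, y    : sky-plane offsets [arcsec]
    inc, pa : inclination and position angle [rad]
    mstar   : stellar mass [M_sun]; dist : distance [pc]
    vlsr    : systemic velocity [m/s]; returns v_los in [m/s]
    """
    dx, dy = x - x0, y - y0
    # Rotate so the major axis lies along x', then deproject the minor axis
    xm = dx * np.sin(pa) + dy * np.cos(pa)
    ym = (-dx * np.cos(pa) + dy * np.sin(pa)) / np.cos(inc)
    r = np.hypot(xm, ym) * dist * au   # arcsec * pc = au, then to metres
    phi = np.arctan2(ym, xm)           # azimuth measured from the major axis
    v_kep = np.sqrt(G * mstar * M_sun / r)
    return v_kep * np.cos(phi) * np.sin(inc) + vlsr
```

Evaluating this model on the pixel grid and subtracting it from the observed velocity map is what produces the kinematic residuals discussed below.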
Besides the model for a geometrically thin disc, \texttt{eddy} also includes the possibility to fit for the vertical structure of the disc. To parameterise the emission layer, we choose a simple model of a flared disc described by
\begin{equation}\label{eq:scaleheight}
z(r) = z_0 \cdot \left(\frac{r}{1\arcsec}\right)^{\psi},
\end{equation}
where $z_0$ describes the elevation and $\psi$ the flaring of the emission surface.
In the modelling process we fix the object's distance and stellar mass, taken from the literature, and fit for the disc centre $(x_0, y_0 \in [-0\farcs5,0\farcs5])$, systemic velocity $(v_{\mathrm{LSR}} \in [v_{\mathrm{min}}\mathrm{(data)},v_{\mathrm{max}}\mathrm{(data)}])$, inclination $(i \in [-90\,\degree,90\,\degree])$, and disc position angle (PA $\in [-360\,\degree,360\,\degree]$), as well as the surface elevation $(z_0\in [0,5])$ and flaring $(\psi \in [0,5])$ in the geometrically thick disc approximation. We also conducted runs where the inclination was fixed instead of the stellar mass and where both the inclination and the stellar mass were left as free parameters. Overall the results are very similar for these different cases, and in the following we only show the results for the models where the stellar mass was fixed.
For most targets we downsample the data by a factor of 2-4 before fitting.
We fit for the whole disc, choosing an outer radius depending on the disc size to exclude possible noise at the disc edge and an inner radius of twice the beam major axis to exclude regions that are strongly affected by beam smearing. For a few cases where the beam is very large, we slightly reduce this inner radius to ensure a reasonable number of pixels to fit. For all models we use 100 walkers, 8000 steps to burn in, and 2000 additional steps to sample the posterior distribution function, and we assume flat priors that are allowed to vary over a wide range. The uncertainties of the posterior distributions represent the 16th to 84th percentiles about the median value. The uncertainties on the kinematics, computed with \texttt{bettermoments}, are included in the fit and shown in \hyperref[fig:uncer]{Fig.~\ref*{fig:uncer}}. They mostly lie well below the channel width but increase in the central regions due to beam smearing.
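The quoted uncertainties are simply percentile ranges of the posterior samples. As a trivial numpy sketch (assuming a flattened chain of shape n\_samples $\times$ n\_params; the sampling itself is done by the MCMC machinery inside \texttt{eddy}):

```python
import numpy as np

def summarize_posterior(samples):
    """Median and 16th-84th percentile uncertainties from flattened
    MCMC samples of shape (n_samples, n_params)."""
    p16, p50, p84 = np.percentile(samples, [16, 50, 84], axis=0)
    return p50, p50 - p16, p84 - p50
```

For a Gaussian posterior the 16th-84th percentile range recovers the usual $\pm 1\sigma$ interval; for skewed posteriors it yields asymmetric error bars without assuming any functional form.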
While most models converged rapidly within a few hundred steps, none of the models for HP\,Cha converged. Furthermore, the models considering the vertical structure of the disc, although rapidly converging, often do not match the bending of the isovelocity curves clearly seen in the data of the higher-inclination discs and return substantially smaller values for the elevation and flaring than expected. We tried both orientations of the inclination (positive and negative) in this context. For highly inclined sources the back side becomes prominently visible, resulting in a quadrupole morphology. The model, however, imposes a dipole morphology, so the best fit lies between the two lobes of the quadrupole, representing the average of the front and back sides of the disc. The residuals of these models thus resemble those of the flat disc (see \hyperref[fig:VrotResMajorThin]{Fig.~\ref*{fig:VrotResMajorThin}} and \hyperref[fig:thickboth]{Fig.~\ref*{fig:thickboth}}). Higher spectral and spatial resolution data and individual modelling of the two disc sides (front and back) may be required to find a better fit for the vertical structure (e.g. directly from the channel maps).
In \hyperref[fig:VrotResMajorThin]{Fig.~\ref*{fig:VrotResMajorThin}} the residuals after subtraction of the Keplerian model from the data are shown for the geometrically thin disc approximation. The residuals of the vertically extended disc approximation are presented in \hyperref[fig:thickboth]{Fig.~\ref*{fig:thickboth}}, and those of the additional CO lines can again be found in the Appendix. In the plots we mask out disc regions inside of twice the beam, since these are strongly affected by beam smearing and not included in the fit. The main features are again annotated in the individual panels and further discussed in the following section. As for the brightness temperature residuals, we mark the substructures found in the positive residuals. While it is interesting to study both the positive and negative residuals, their interpretation is not straightforward, and much effort is currently being devoted to understanding the different patterns, which will be the subject of upcoming works. In contrast, this work aims at comparing the different substructures found in the gas of circumstellar discs that may be indicative of embedded planets.
\begin{table*}
\centering
\caption{Summary of the various features exhibited by our targets that may be indicative of embedded planets\tablefootmark{a}.}
\begin{tabular}{l cccccccc}
\hline
\hline
source & deep gas & $T_{\mathrm{B}}$ & $v_{\mathrm{rot}}$ & NIR & NIR & Misalignment\tablefootmark{b} & comments & Ref.\tablefootmark{c} \\
& cavity & spirals/arcs & spirals/arcs & spirals & shadows & inner/outer\\
\hline
AA\,Tau & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & - & - & - \\
AB\,Aur & \color{Green}{\cmark} & \color{Green}{(\cmark)} & \color{Green}{(\cmark)} & \color{Green}{\cmark} & \color{Red}{\xmark} & {\color{Green}{\cmark}} (A) & & 1\\
CQ\,Tau & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & {\color{Green}{\cmark}} (A+G) & & 2 \\
CS\,Cha & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & - & & 3\\
DM\,Tau & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & - & - & - & low res.\\
DoAr\,44 & \color{Green}{\cmark} & \color{Green}{(\cmark)} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Green}{\cmark} & - & & 2 \\
GG\,Tau & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & {\color{Green}{\cmark}} (A) & multiple & 4\\
GM\,Aur & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & {\color{Red}{\xmark}} (A+G) & & 2\\
HD\,100453 & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & {\color{Green}{\cmark}} (A+G) & binary & 2\\
HD\,100546 & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Green}{(\cmark)} & \color{Green}{\cmark} & \color{Red}{\xmark} & - & & 2,5 \\
HD\,135344B & \color{Green}{\cmark} & \color{Green}{(\cmark)} & \color{Green}{(\cmark)} & \color{Green}{\cmark} & \color{Green}{\cmark} & - & & 2\\
HD\,139614 & \color{Red}{\xmark} & \color{Green}{(\cmark)} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Green}{\cmark} & - & low res. & 2\\
HD\,142527 & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & {\color{Green}{\cmark}} (A+G) & binary & 2\\
& & & & & & & warp, cloud absorption\\
HD\,169142 & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Green}{(\cmark)} & - & & 2,6 \\
HD\,34282 & \color{Green}{(\cmark)} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & {\color{Green}{\cmark}} (A+G) & & 2\\
HD\,97048 & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & - & cloud absorption & 2\\
HP\,Cha & \color{Red}{\xmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & - & - & {\color{Green}{\cmark}} (A) & cloud absorption\\
IP\,Tau & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & {\color{Red}{\xmark}} (A+G) & cloud absorption & 2\\
IRS\,48 & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & - & - & - & cloud absorption\\
J1604 & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Green}{(\cmark)} & \color{Red}{\xmark} & \color{Green}{\cmark} & - & & 7\\
LkCa\,15 & \color{Green}{(\cmark)} & \color{Green}{(\cmark)} & \color{Green}{(\cmark)} & \color{Red}{\xmark} & \color{Green}{(\cmark)} & - & & 2,8\\
MWC\,758 & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Green}{\cmark} & \color{Red}{\xmark} & {\color{Green}{\cmark}} (A) & & 9\\
PDS\,70 & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & {\color{Red}{\xmark}} (A+G) , {\color{Green}{\cmark}} (A) & imaged planets & 2\\
PDS\,99 & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & - & - & - & cloud absorption \\
RXJ\,1615 & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & {\color{Red}{\xmark}} (A+G) & & 2\\
RXJ\,1852 & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & - & & 10\\
RY\,Lup & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Green}{(\cmark)} & \color{Red}{\xmark} & {\color{Green}{\cmark}} (A+G) & & 2,11\\
RY\,Tau & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & - & & 12\\
SR\,21 & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Green}{\cmark} & \color{Red}{\xmark} & - & cloud absorption & 13\\
Sz\,91 & \color{Green}{\cmark} & \color{Green}{(\cmark)} & \color{Green}{(\cmark)} & \color{Red}{\xmark} & \color{Red}{\xmark} & - & cloud absorption & 14\\
SZ\,Cha & \color{Red}{\xmark} & \color{Green}{(\cmark)} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Green}{\cmark} & {\color{Red}{\xmark}} (A+G) & low res., cloud absorption & 2\\
T\,Cha & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & - & & 6\\
TW\,Hya & \color{Red}{\xmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & {\color{Green}{\cmark}} (A) & & 16\\
UX\,Tau\,A & \color{Red}{\xmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Green}{\cmark} & \color{Red}{\xmark} & - & binary & 2\\
V1247\,Ori & \color{Red}{\xmark} & \color{Green}{(\cmark)} & \color{Green}{(\cmark)} & \color{Green}{\cmark} & \color{Green}{\cmark} & {\color{Green}{\cmark}} (A+G) & & 2\\
V4046\,Sgr & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & \color{Red}{\xmark} & {\color{Green}{\cmark}} (A) & binary & 17\\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{Green checkmarks point out a detection, red crosses a non-detection and brackets indicate if a detection is tentative. The absence of substructures does not necessarily imply the absence of a companion/planet but may be a resolution/sensitivity effect.}
\newline
\tablefoottext{b}{A: Obtained from ALMA continuum data in \cite{Francis2020}, A+G: Obtained from ALMA CO and GRAVITY data in \cite{Bohn2022}}.
\newline
\tablefoottext{c}{References for spirals and shadows observed in the NIR scattered light. 1) \cite{Boccaletti2020}; 2) \cite{Bohn2022} (original reference can be found in this paper); 3) \cite{Ginski2018}; 4) \cite{Keppler2020}; 5) \cite{Garufi2016}; 6) \cite{Pohl2017}; 7) \cite{Pinilla2018}; 8) \cite{Thalmann2016}; 9) \cite{Benisty2015}; 10) \cite{Villenave2019}; 11) \cite{Langlois2018}; 12) \cite{Takami2013}; 13) \cite{MuroArena2020}; 14) \cite{Tsukagoshi2014}; 15) \cite{Pohl2017}; 16) \cite{Boer2020}; 17) \cite{Avenhaus2018}
}
}
\label{tab:checks}
\end{table*}
\section{Discussion}\label{sec:Discussion}
The residuals presented in \hyperref[fig:TbResMajorThin]{Fig.~\ref*{fig:TbResMajorThin}} and \hyperref[fig:VrotResMajorThin]{Fig.~\ref*{fig:VrotResMajorThin}} show various features that are annotated in the different panels. For the discs that show very clear substructures (see \hyperref[sec:cleafFeat]{Sec.~\ref*{sec:cleafFeat}}), the different maps and radial profiles are collected again in \hyperref[fig:mapscomp]{Fig.~\ref*{fig:mapscomp}}.
For several higher-inclination discs, the vertical structure is still clearly visible in the residuals in the form of a butterfly-like pattern, even after subtraction of a geometrically thick disc model (e.g. AA\,Tau, HD\,34282). As mentioned before, most of these fits returned only slightly elevated emission surfaces and are not able to pick up the actual vertical structure of the disc. These geometrically thick disc models still describe only a single side of the disc, while for highly inclined discs both the front and back sides are visible. An approach modelling both sides independently (such as used in the \texttt{Discminer}, \citealp{Izquierdo2021a}) would be needed to avoid this problem. Relatedly, in a few cases arc features can be seen at the disc edge, which result from looking into the back side of the disc (e.g. J1604, LkCa\,15, PDS\,70).
In \autoref{tab:checks} we list some features that may be indicative of interactions between the disc and planets and/or stellar companions. A green checkmark stands for the detection of a feature, a red cross for a non-detection, and brackets indicate a tentative detection. Especially when several features are observed in the same disc, this is a strong indication that we are tracing embedded planets/companions. As a deep gas cavity we mark cases where we see a clear drop in the radial profile of the peak and integrated intensity for at least the more optically thin lines, which tend to trace the column density. In a few cases, such as LkCa\,15 or HD\,34282, no steep drop is found but the gas cavity is very extended, and better data are needed to confirm whether a deep cavity is present \citep{Leemker2022}. In this work we do not investigate the presence of kink features, and they are therefore not listed as a possible planet signpost in \autoref{tab:checks}. While \cite{Pinte2020} report the detection of such azimuthally localised features in about half of the discs in the DSHARP program \citep{Andrews2018}, more data are needed to confirm the robustness of such claims, and it is difficult to draw conclusions on the presence of kinks in our sample given the inhomogeneity of spatial and spectral resolutions. Furthermore, the interpretation of kink features is not straightforward, as they can be caused by a gap or density substructure rather than a planet \citep{Izquierdo2021a}. More detailed studies (beyond the scope of our work) of a homogeneous data set at high resolution are needed to test for such scenarios. In the following we describe the observed structures in more detail.
\subsection{Clear spiral or arc-like features}\label{sec:cleafFeat}
A few discs show clear spiral or arc-like structures in both the brightness temperature and the velocity residuals (\hyperref[fig:mapscomp]{Fig.~\ref*{fig:mapscomp}}): CQ\,Tau \citep{Woelfer2021}, GG\,Tau, HD\,100453 \citep{Rosotti2020a}, HD\,142527 \citep{Garg2021}, HD\,169142, HP\,Cha, TW\,Hya \citep{Teague2019Spiral} and UX\,Tau\,A \citep{menard2020}. Half of these discs (CQ\,Tau, GG\,Tau, HD\,100453 and HD\,142527) are also marked by a spiral in the NIR, deep gas cavities, shadows in the NIR, and a misalignment between the inner and outer disc \citep{Francis2020, Bohn2022}. These four systems represent the best candidates for planet-disc or companion-disc interactions. In the cases of GG\,Tau, HD\,100453 and HD\,142527, stellar companions are indeed known, likely causing (at least part of) the observed spiral structures. Among these eight discs, HD\,142527 and HP\,Cha are affected by cloud absorption on the redshifted and blueshifted sides, respectively; however, this is unlikely to explain the arc features on the blueshifted side of HD\,142527 or the redshifted spiral in HP\,Cha.
\subsection{Tentative spiral or arc-like features}
Some of the discs show tentative features (spirals, arcs or bright spots): for AB\,Aur, HD\,135344B (see also \citealp{Casassus2021}), LkCa\,15, Sz\,91 and V1247\,Ori these are found in both the brightness temperature and the velocity, with those in AB\,Aur and HD\,135344B, which are also marked by most of the other features, being the most convincing. In addition, features are found in the brightness temperature residuals of DoAr\,44, HD\,139614 and SZ\,Cha and in the velocity residuals of HD\,100546 and J1604. Most of these discs have deep gas cavities and are marked by shadows in the NIR; for the other cases (e.g. HD\,139614 or SZ\,Cha) a deep cavity may be resolved with higher spatial resolution. Four of the ten sources with tentative features exhibit spirals in the NIR. It is important to note that some of the arc features may result from a misfit of the vertical structure (or other disc parameters) rather than from dynamical interactions (see Fig. 12 in \citealp{Yen2020}). Furthermore, some discs are strongly affected by cloud absorption; for example, the asymmetries seen in the brightness temperature of SZ\,Cha may result from this effect. For AB\,Aur, HD\,100546, HD\,139614, HD\,135344B and Sz\,91 the residuals of the additional lines (shown in the Appendix) partly support the detection of the described features.
\subsection{No spiral/arc features}
Among the remaining sources, which do not show any substructures in the kinematics or the brightness temperature, most exhibit few of the indicators listed in \autoref{tab:checks} (except for a deep gas cavity). The only such sources with clearly observed NIR spirals are MWC\,758 and SR\,21. The lack of substructures in the brightness temperature and kinematics does not necessarily imply the absence of planet-disc interactions; these may simply be unresolved. Besides missing spatial or spectral resolution, the position and mass of an embedded planet can also lead to the absence of detectable substructures. For example, a planet close to the star, embedded in a deep cavity, is more difficult to trace with the chosen methods, which mostly reveal substructures on larger scales. To confirm the presence or absence of features and to identify trends, more homogeneous follow-up observations are needed.
\subsection{Conclusive remarks}\label{sec:Conclusions}
Within our sample, eight discs are marked by clear substructures. In all cases, features are seen in both the gas temperature and the kinematics, indicating a connection, alongside other substructures in the gas and dust. Ten other discs show tentative features, of which half present signatures in the brightness temperature and kinematics simultaneously. Half of our sample does not show any substructures in the ALMA data besides a deep gas cavity.
Except for MWC\,758 and SR\,21, all targets that exhibit clear spirals in NIR scattered light show at least tentative features in the brightness temperature and/or kinematics. Scattered light traces the hot upper disc layers, which are most likely to show spirals, and since $^{12}$CO also traces the disc surface we expect to pick up these features there as well \citep{Law2021}. However, many of the tentative features in our sample are not spirals, which is likely related to poor spatial resolution as well as cloud absorption in some cases. To resolve the spirals expected from the NIR, ALMA data of high sensitivity and high spectral and spatial resolution are crucial. For the targets showing cloud absorption, deep observations of the less affected $^{13}$CO may help to pick up clear substructures. Two targets (HP\,Cha and TW\,Hya) show clear spiral structures without a counterpart having yet been observed in the NIR.
Substructures such as spirals are expected to be found more often in systems with high luminosity and a wide cavity, where the upper disc layers can reach higher temperatures, making them much easier to observe (e.g. \citealp{Garufi2018,vanderMarel2021}). We searched for such correlations in our data set: while we do not find any clear trends, most discs that show no spiral substructures indeed correspond to the less massive, cooler and less luminous stars, and features tend to become more visible for the more luminous sources. Given the inhomogeneity of our sample, however, we expect a large observational bias; to draw clear conclusions on correlations it is crucial to study a data set that is more uniform in terms of resolution and includes a wide range of spectral types and stellar masses. As shown for TW\,Hya, for which very high sensitivity data exist at high spatial resolution ($\sim$8\,au), substructures can still be distinguished despite a sub-solar stellar mass and an almost face-on disc. To understand whether there are differences between the stellar groups, comparable observations are essential.
Connected to that, the lack of features cannot be taken as an indication of missing embedded planets. PDS\,70, for example, hosts two confirmed massive planets that have been directly imaged, but does not show any other features in our data despite a deep gas cavity. This is likely because the planets in this system are located further inside the cavity, and observations of higher sensitivity and resolution are needed to reveal substructures in the temperature and the kinematics related to dynamical interactions.
Furthermore, as shown by \cite{Izquierdo2021a}, an embedded planet has to be rather massive to excite strong observable signatures in the kinematics: from simulations, the authors find strong perturbations for planets more massive than 1\,$M_{\mathrm{J}}$; however, very high spectral and spatial resolution is essential to pick these up in ALMA data. It is thus not surprising that the clearest features in our sample are seen in the multiple star systems.
\section{Summary}\label{sec:Summary}
In this work we have analysed the brightness temperatures and kinematics of a sample of 36 large cavity transition discs, representing the best candidates to search for dynamical interactions. Our main results are summarised as follows:
\begin{itemize}
\item Eight discs out of our sample show significant perturbations in both the brightness temperature and velocity residuals, while no features are found in half of the sample at the current (spatial and spectral) resolution and sensitivity.
\vspace{0.1cm}
\item Several discs show tentative features that need to be confirmed with deep, high resolution ALMA observations in the upcoming years.
\vspace{0.1cm}
\item Almost all targets that exhibit spirals in NIR scattered light show at least tentative features in the CO data.
\vspace{0.1cm}
\item In most cases our method reveals deviations that are caused by sub-stellar companions.
\vspace{0.1cm}
\item
For about 60\,\% of the sources a deep gas cavity is resolved in addition to the dust cavity at the current spatial resolution.
\end{itemize}
To detect planets in the Jupiter-mass range, the available observations are neither deep enough nor do they have the required spatial and spectral resolution, explaining the lack of features in many discs. Upcoming and future deep ALMA observations at high spectral and spatial resolution, together with dedicated modelling efforts, may reveal more of such features and help to disentangle different formation scenarios.
\section*{Acknowledgements}
We greatly thank the referee Ruobing Dong for his helpful feedback that improved the quality of this work. We would also like to thank all the people that kindly provided the reimaged/self-calibrated data sets. This paper makes use of different ALMA data sets, detailed in \autoref{tab:dataProp} and \autoref{tab:dataPropAdd}. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
\bibliographystyle{aa}
\bibliography{bibliography}
\begin{appendix}
\onecolumn
\section{Continuum}\label{appendix:Continuum}
\hyperref[fig:TbMapsMajorCont]{Figure~\ref*{fig:TbMapsMajorCont}} shows the peak brightness temperature maps of the main lines used for analysis in this work with overlaid mm-continuum (either ALMA B6 or B7). For most cases, the size of the dust disc is substantially smaller than that of the gas disc.
\newpage
\section{Thick disc residuals of the main lines}\label{appendix:thick}
\newpage
\section{Uncertainties of the rotation velocity}
In \hyperref[fig:uncer]{Fig.~\ref*{fig:uncer}}
we show the uncertainties of the kinematics for the main lines, computed with \texttt{bettermoments}. These statistical uncertainties are calculated by linearising and propagating the uncertainty from the fluxes to the centroid estimate. The uncertainties mostly lie well below the channel width but increase in the central regions due to beam smearing. For lower sensitivity observations, thermal broadening plays an important role, significantly increasing the uncertainties.
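The principle of this linearised propagation can be illustrated with a short sketch (this is not the \texttt{bettermoments} implementation itself, just the same idea applied to a first-moment centroid): for $v_0 = \sum_i I_i v_i / \sum_i I_i$, uniform uncorrelated per-channel noise $\sigma_I$ enters through the derivatives $\partial v_0/\partial I_i = (v_i - v_0)/\sum_j I_j$.

```python
import numpy as np

def centroid_and_uncertainty(v, I, sigma_I):
    """Intensity-weighted velocity centroid and its linearised 1-sigma
    uncertainty, assuming uniform, uncorrelated per-channel flux noise.
    Illustrative sketch only, not the bettermoments implementation."""
    v0 = np.sum(I * v) / np.sum(I)
    # dv0/dI_i = (v_i - v0) / sum(I); add the squared contributions
    sigma_v0 = sigma_I * np.sqrt(np.sum((v - v0) ** 2)) / np.sum(I)
    return v0, sigma_v0

# Toy Gaussian line: peak 50 mJy/beam, width 1 km/s, noise 3 mJy/beam
v = np.arange(-5.0, 5.0, 0.1)      # channel velocities [km/s]
I = 50.0 * np.exp(-0.5 * v**2)     # line profile [mJy/beam]
v0, sig = centroid_and_uncertainty(v, I, 3.0)
# sig comes out below the 0.1 km/s channel spacing
```

For a well-detected line the resulting uncertainty lies below the channel width, consistent with the behaviour described above.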
\newpage
\section{Comparison of reimaged and archival data products}\label{appendix:CompareCleaning}
In this work we combine reimaged with archival data products, and it is important to understand what impact this has on the results. For the data where reimaged sets are available, we conduct our analysis for both the reimaged and the corresponding archival image cubes and compare the results. A few examples are shown in \hyperref[fig:compClean]{Fig.~\ref*{fig:compClean}} for a disc with clear spirals, two discs with tentative spirals and one without any observed features. We find that the fitting procedure is not significantly affected by the choice of cube. Moreover, the detection of clear spirals as well as a non-detection do not depend on the data set. Due to an increased S/N, tentative features come out stronger (e.g. J1604) and in some cases only become visible (e.g. HD\,135344B) in the reimaged products; thus some tentative substructures may be hidden in the cases that we classified as non-detections based on archival data products.
\newpage
\section{Additional Lines}\label{appendix:AppendixAddLines}
This appendix comprises the results for the additional lines used in this work. When studying substructures in discs it is useful to look at various molecular tracers that probe different disc layers. Understanding if and how substructures vary vertically and radially can be used to assess the underlying perturbation. For example, in a passively heated disc with a vertical temperature gradient, the opening angle of a spiral is expected to decrease towards the midplane, while for spirals launched by gravitational instability the midplane would be heated by shocks, resulting in similarly wound spirals throughout the disc \citep{Juhasz2018}.
\subsection{Characteristics}\label{appendix:table}
Some characteristics of the data are listed below in \autoref{tab:dataPropAdd}. In most cases, the archival data products were used for analysis, as indicated in the last column.
\begin{table*}[h!]
\centering
\caption{Characteristics of the additional ALMA line data}
\begin{tabular}{llccccccc}
\hline
\hline
Object & Line & ALMA Project ID & Beam & $\Delta \upsilon$ & RMS & Cube \tablefootmark{a} \\
& & & ($\arcsec$) & (km$\,$s$^{-1}$) & (mJy$\,$beam$^{-1}$) & source \\
\hline
ABAur & $^{12}$CO 3-2 & 2012.1.00303.S & 0.31x0.19 & 0.05 & 11.4 & P/PC \\
& $^{12}$CO 2-1 & 2015.1.00889.S & 0.11x0.08 & 0.32 & 3.5 & A\\
& $^{13}$CO, C$^{18}$O 2-1 & 2019.1.00579.S & 0.95x0.57 & 0.17 & 9.5, 7.9 & A\\
CQTau & $^{13}$CO 2-1 & 2013.1.00498.S & 0.13x0.1 & 0.7 & 0.9 & P/PC\\
& & 2016.A.00026.S & & \\
& & 2017.1.01404.S & & \\
DMTau & $^{12}$CO 3-2 & 2013.1.00647.S & 1.0x0.75 & 0.2 & 55.9 & A\\
& $^{13}$CO, C$^{18}$O 3-2 & 2016.1.00565.S & 0.37x0.29, 1.04x0.81 & 0.11 & 16.8, 21.4 & A\\
& $^{13}$CO, C$^{18}$O 2-1 & 2016.1.00724.S & 0.9x0.84 & 0.08 & 19.2 , 14.1 & A\\
GGTau & $^{12}$CO, $^{13}$CO 3-2 & 2012.1.00129.S & 0.4x0.3 & 0.25 & 5.7, 7.8 & A\\
GMAur & $^{13}$CO 3-2 & 2016.1.00565.S & 0.38x0.26 & 0.11 & 16.2 & A\\
& $^{13}$CO, C$^{18}$O 2-1 & 2018.1.01055.L & 0.15x0.15, 0.17x0.13 & 0.2 & 2.7, 1.1 & P/PC \\
HD100546 & $^{12}$CO 3-2 & 2011.0.00863.S & 0.94x0.42 & 0.11 & 19.4 & A\\
& $^{13}$CO, C$^{18}$O 2-1 & 2016.1.00344.S & 0.25x0.14 & 0.17 & 6.8, 6.4 & A\\
HD135344B & $^{12}$CO 3-2 & 2012.1.00870.S & 0.36x0.29 & 0.11 & 38.9 & A\\
& $^{12}$CO, $^{13}$CO, C$^{18}$O 2-1 & 2018.1.01066.S & 0.1x0.08 & 0.2 & 2.6, 2.8, 2.4 & P/PC \\
HD139614 & C$^{18}$O 2-1 & 2015.1.01600.S & 0.73x0.53 & 0.33 & 11.9 & A\\
HD142527 & $^{12}$CO 3-2 & 2011.0.00465.S & 0.57x0.35 & 0.5 & 11.1 & P/PC\\
& $^{13}$CO, C$^{18}$O 3-2 & 2012.1.00725.S & 0.31x0.27 & 0.11 & 8.0, 9.9 & A\\
& $^{13}$CO, C$^{18}$O 2-1 & 2015.1.01353.S & 0.29x0.26, 0.84x0.77 & 0.1 & 7.5, 19.3 & A\\
HD169142 & $^{12}$CO, $^{13}$CO 3-2 & 2012.1.00799.S & 0.18x0.13, 0.19x0.13 & 0.21, 0.22 & 17.9, 19.8 & A\\
& $^{13}$CO, C$^{18}$O 2-1 & 2015.1.00490.S & 0.19x0.14 & 0.08 & 5.4, 4.0 & A\\
HD34282 & $^{12}$CO, $^{13}$CO, C$^{18}$O 2-1 & 2015.1.00192.S & 0.24x0.21, 0.25x0.23 & 0.08, 0.17 & 10.6, 7.9, 5.5 & A\\
HD97048 & $^{12}$CO 3-2 & 2013.1.00658.S & 0.65x0.39 & 0.2 & 13.93 & A\\
& $^{12}$CO, $^{13}$CO, C$^{18}$O 2-1 & 2015.1.00192.S & 0.46x0.22 & 0.3 & 6.9, 6.5, 6.30, 4.9 & P/PC\\
HPCha & $^{12}$CO 3-2 & 2013.1.01075.S & 0.77x0.4 & 0.2 & 35.7 & A\\
IRS48 & $^{13}$CO 3-2 & 2013.1.00100.S & 0.21x0.16 & 0.26 & 14.4 & A\\
LkCa15 & $^{13}$CO, C$^{18}$O 2-1 & 2018.1.00945.S & 0.48x0.3 & 0.33 & 3.7, 4.8 & P/PC\\
MWC758 & $^{12}$CO 3-2 & 2011.0.00320.S & 0.82x0.47 & 0.05 & 37.3 & A\\
& C$^{18}$O 3-2 & 2012.1.00725.S & 0.34x0.22 & 0.6 & 26.1 & A\\
& $^{12}$CO, $^{13}$CO, C$^{18}$O 2-1 & 2017.1.00940.S & 0.2x0.15 & 1.26, 1.33 & 1.7, 1.6, 1.2& A\\
PDS99 & $^{13}$CO 2-1 & 2015.1.01301.S & 0.34x0.23 & 0.17 & 9.8 & A\\
RXJ1852 & $^{13}$CO 2-1 & 2018.1.00689.S & 0.16x0.12 & 0.66 & 4.8 & A\\
RYLup & $^{12}$CO 2-1 & 2017.1.00449.S & 0.19x0.16 & 0.04 & 15.7 & A\\
SR21 & $^{13}$CO 3-2 & 2012.1.00158.S & 0.27x0.23 & 0.24 & 11.5 & A\\
& $^{13}$CO 2-1 & 2018.1.00689.S & 0.15x0.12 & 0.66 & 5.1 & A\\
Sz91 & $^{12}$CO 2-1 & 2013.1.00663.S & 0.62x0.59 & 0.5 & 27.6 & A\\
TCha & $^{12}$CO, $^{13}$CO 3-2 & 2012.1.00182.S & 0.26x0.14, 0.27x0.14 & 0.85, 0.88 & 14.5, 17.4 & A\\
& $^{13}$CO 2-1 & 2017.1.01419.S & 0.24x0.16 & 1.33 & 5.0 & A\\
UXTauA & $^{12}$CO 2-1 & 2013.1.00498.S & 0.26x0.21 & 0.63 & 7.4 & A\\
V4046Sgr & $^{12}$CO, $^{13}$CO 3-2 & 2016.1.00315.S & 0.28x0.17, 0.3x0.18 & 0.21, 0.44 & 25.4, 22.5 & A\\
& $^{13}$CO, C$^{18}$O 2-1 & 2016.1.00724.S & 0.42x0.3 & 0.08 & 11.4, 7.8 & A\\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{P/PC: Reimaged data cube. Public data or obtained via private communication, A: Archival data product.}
}
\label{tab:dataPropAdd}
\end{table*}
\newpage
\subsection{Brightness temperature maps}\label{appendix:tbmaps}
\hyperref[fig:TbAddBand6]{Figure~\ref*{fig:TbAddBand6}} and \hyperref[fig:TbAddBand7]{Fig.~\ref*{fig:TbAddBand7}} show the brightness temperature maps for the additional CO lines in Band 6 and Band 7 respectively.
\newpage
\newpage
\subsection{Rotation velocity maps}\label{appendix:v0maps}
\hyperref[fig:v0AddBand6]{Figure~\ref*{fig:v0AddBand6}} and \hyperref[fig:v0AddBand7]{Fig.~\ref*{fig:v0AddBand7}} show the kinematical maps for the additional CO lines in Band 6 and Band 7 respectively.
\newpage
\newpage
\subsection{Brightness temperature residuals}\label{appendix:tbresadd}
\hyperref[fig:TbresB6thin]{Figure~\ref*{fig:TbresB6thin}}, \hyperref[fig:TbresB7thin]{Fig.~\ref*{fig:TbresB7thin}} and \hyperref[fig:TbresB6B7thick]{Fig.~\ref*{fig:TbresB6B7thick}} show the brightness temperature residuals for the additional CO lines in Band 6 and Band 7 for both the thin and the thick disc geometry.
\newpage
\newpage
\newpage
\subsection{Rotation velocity residuals}\label{appendix:v0resadd}
\hyperref[fig:v0resB6thin]{Figure~\ref*{fig:v0resB6thin}}, \hyperref[fig:v0resB7thin]{Fig.~\ref*{fig:v0resB7thin}} and \hyperref[fig:v0resB6B7thick]{Fig.~\ref*{fig:v0resB6B7thick}} show the rotation velocity residuals for the additional CO lines in Band 6 and Band 7 for both the thin and the thick disc geometry.
\newpage
\newpage
\newpage
\section{Radial profiles of the integrated intensity}
In \hyperref[fig:radialProfiles2]{Fig.~\ref*{fig:radialProfiles2}} the radial profiles of the integrated intensity are shown for different CO isotopologues. Such profiles can be used to estimate the size of the cavity. We define as a deep cavity those cases in which a clear drop of the emission can be seen in the inner regions for at least the more optically thin lines, which tend to trace the column density.
\end{appendix}
|
Title:
Circumnuclear dense gas disk fuelling the active galactic nucleus in the nearby radio galaxy NGC 4261 |
Abstract: The cold molecular gas in the circumnuclear disk (CND) of radio galaxies
provides critical information for understanding the mass accretion onto active
galactic nuclei. We present the first detection and maps of HCN J=1-0 and HCO+
J=1-0 emission lines from the circumnuclear region of a nearby radio galaxy,
NGC 4261, using the Northern Extended Millimeter Array. Both molecular lines
are detected at a radial velocity of +-700 km/s relative to the systemic
velocity of the galaxy, and they arise from a CND with an outer radius of 100
pc. The velocity fields of HCN and HCO+ are fitted with a Keplerian disk
rotation. The enclosed mass is (1.6+-0.1)x10^9 M_solar, assuming a disk
inclination angle of 64 degree. The continuum image at 80 GHz reveals a weak
two-sided jet structure extending over 5 kpc along the east-west direction and
a bright core at the centre. The continuum spectrum between 80 and 230 GHz
shows a spectral index of -0.34+-0.02, which suggests optically thin
synchrotron radiation. The dense gas mass associated with the CND is calculated
to be 6.03x10^7 M_solar. It supports a positive correlation between the dense
gas mass in the CND and the accretion rate onto the supermassive black hole,
though there are uncertainties in the parameters of the correlation.
| https://export.arxiv.org/pdf/2208.05079 |
\title{Circumnuclear dense gas disk
fuelling the active galactic nucleus
in the nearby radio galaxy NGC~4261}
\author{
Satoko Sawada-Satoh\inst{1}\fnmsep\inst{2},
Seiji Kameno\inst{3}\fnmsep\inst{4},
\and
Sascha Trippe\inst{5}%
}
\institute{
The Research Institute for Time Studies, Yamaguchi University,
1677-1 Yoshida, Yamaguchi, Yamaguchi 753-8511, Japan \\
\email{swdsth@gmail.com}
\and
Graduate School of Science, Osaka Metropolitan University,
1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531, Japan
\and
Joint ALMA Observatory,
Alonso de C\'{o}rdova 3107 Vitacura, Santiago 763-0355, Chile
\and
NAOJ Chile Observatory,
Alonso de C\'{o}rdova 3788, Oficina 61B, Vitacura, Santiago, Chile
\and
Department of Physics and Astronomy, Seoul National University,
1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea
}
\abstract
{
The cold molecular gas in the circumnuclear disk (CND) of radio galaxies
provides critical information
for understanding the mass accretion onto active galactic nuclei.
We present the first detection and maps of
HCN $J$=1--0 and HCO$^+$ $J$=1--0 emission lines
from the circumnuclear region of a nearby radio galaxy, NGC~4261,
using the Northern Extended Millimeter Array.
Both molecular lines are detected
at a radial velocity of
$\pm700$ km~s$^{-1}$ relative to
the systemic velocity of the galaxy,
and they arise from a CND
with an outer radius of $\sim$100 pc.
The velocity fields of HCN and HCO$^+$
are fitted with a Keplerian disk rotation.
The enclosed mass is
$(1.6\pm0.1)\times10^{9}$ $M_{\odot}$,
assuming a disk inclination angle of $64^{\circ}$.
The continuum image at 80 GHz reveals a weak two-sided jet structure
extending over 5 kpc along the east--west direction
and a bright core at the centre.
The continuum spectrum between 80 and 230 GHz shows a spectral index
of $-0.34\pm0.02$,
which suggests optically thin synchrotron radiation.
The dense gas mass associated with the CND is calculated
to be $6.03\times10^7$ $M_{\odot}$.
It supports a positive correlation
between the dense gas mass in the CND
and the accretion rate onto the
supermassive black hole,
though there are uncertainties
in the parameters of the correlation.
}
\keywords{
ISM: molecules ---
galaxies: active ---
galaxies: individual (NGC~4261, 3C~270) ---
galaxies: ISM ---
galaxies: jets ---
galaxies: nuclei ---
radio lines: galaxies
}
\titlerunning{Circumnuclear dense gas disk in NGC~4261}
\authorrunning{S. Sawada-Satoh et al.}
\section{Introduction}\label{sec:intro}
It is widely accepted that an
active galactic nucleus (AGN) is powered by
mass accretion onto a supermassive black hole (SMBH)
in the centre of the host galaxy.
The gravitational energy of accreting matter is converted
into radiation and/or jets.
Radio galaxies (RGs) are radio-loud AGNs characterised
by powerful synchrotron radiation
driven by relativistic jets on scales of approximately
10--100 kiloparsec.
The interstellar medium (ISM) in the centre of the RGs
can play a key role in fuelling the SMBH.
Several research groups have suggested that
the different roles of hot and cool ISM accretion
can be related to a different mode of accretion
in RGs,
which can lead to different radio-loud AGN classifications
\citep{hardcastle07,buttiglione10,best12}.
Certain CO observations of RGs support the hypothesis
that RGs are fed by cold gas that probes
the circumnuclear disks
\citep[CNDs;][]{prandoni10,maccagni18,ruffa19}.
The CNDs of RGs
can serve as a reservoir of fuel for their SMBHs.
Thus,
by determining the molecular gas structure and kinematics of CNDs,
important clues can be obtained
regarding mass accretion in RGs.
High angular resolution imaging of the molecular gas
within the SMBH sphere of influence ($r_g$)
can also be a powerful tool for accurately measuring
SMBH masses \citep[e.g.][]{davis13}.
However, the distribution of low-$J$ CO lines appears to extend
to the edge of CNDs,
and the detection of strong CO emission
from within $r_g$ seems to be rare
for CNDs in early-type galaxies \citep[ETGs;][]{davis18,boizelle19,north19}.
Alternative lines may trace the molecular gas distribution
within $r_g$ better,
thereby enabling more accurate measurements of SMBH masses.
At present, emission lines other than CO have largely been overlooked,
yet dense gas tracers
such as the HCN and HCO$^+$ lines are expected to probe
deeper into $r_g$ than the optically thick low-$J$ CO lines.
Moreover, recent interferometric observations of
the dense gas emission-line tracer HCN
towards Seyfert galaxies (SGs)
have indicated
that the accretion onto SMBHs is triggered
by star formation and supernovae originating from within CNDs
\citep{izumi16}.
To date, star formation activities in CNDs
have been investigated in a limited number of RGs,
such as
NGC~5128 \citep{espada19},
NGC~1052 \citep{kameno20},
and NGC~1275 \citep{nagai21}.
These investigations of RGs
have been primarily conducted
using the distribution of CO as a molecular gas mass tracer.
Dense gas tracers can potentially
serve as better probes
for examining star formation activity in CNDs
because star formation is closely related to dense gas.
NGC~4261 (3C~270) is a nearby Fanaroff--Riley I RG
with a symmetric, kiloparsec-scale two-sided jet
\citep{birkinshaw85}.
Its AGN is classified as a type 2
low-ionisation nuclear emission-line
region (LINER) galaxy \citep{jaffe96, ho97}
with a low Eddington ratio,
$L_{\rm bol}/L_{\rm Edd}$, ranging
from $10^{-5.11}$ to $10^{-4.54}$
\citep{hernandez13, inayoshi20}
and
a low X-ray luminosity of $L_{\rm 2-10 keV} = 10^{41.51}$
erg s$^{-1}$ \citep{hernandez13}.
This galaxy is known to have a nuclear disk of dust and gas
with a radius of a few hundred parsecs
lying orthogonal to the jet,
as revealed by \textit{Hubble} Space Telescope (HST)
observations
\citep[][]{jaffe93,jaffe96}.
At radio frequencies,
H~$\textsc{i}$ absorption
has been detected at the systemic velocity of the galaxy
($V_{\rm sys}$) towards the core
via the Very Large Array \citep[VLA;][]{jaffe94},
and it has been confirmed
at a projected distance of approximately 2.5 pc
from the core
with the European
Very Long Baseline Interferometry (VLBI)
Network \citep{vanlangevelde00}.
The H~$\textsc{i}$ absorbing gas is interpreted to be
in the inner part of the disk of dust and gas
found in the HST image,
and it obscures the core and innermost counter jet.
Subsequent VLBI observations at multiple frequencies
revealed the presence of parsec-scale ionised absorbing gas,
which was likely at the inner parsec-scale radii of the HST disk
\citep[][]{jones97,jones00,jones01,haga15}.
Molecular lines were observed towards the centre
of NGC~4261 for emission
(CO $J$=2--1 and $J$=3--2; \citealt{boizelle21})
and absorption (CO $J$=1--0; \citealt{jaffe94}).
In this Letter, we report the first detection and
the interferometric emission-line maps of
HCN $J$=1--0 and HCO$^{+}$ $J$=1--0 transitions in NGC~4261,
which trace a 100-parsec rotating CND
perpendicular to the kiloparsec-scale radio jet.
We adopt a luminosity distance ($D_{\rm L}$) of 31.7 Mpc and
a $V_{\rm sys}$ of 2212 km s$^{-1}$
\citep[e.g.][]{babyk19,cappellari11}.
Hence, 1 arcsecond corresponds to 151 pc for the galaxy.
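As a quick numerical check (a back-of-the-envelope sketch; the exact value depends on the adopted cosmology), the quoted scale follows from converting the luminosity distance to an angular-diameter distance with the low-redshift relation $D_A = D_L/(1+z)^2$, taking the redshift from $V_{\rm sys}$:

```python
import math

c = 2.998e5                      # speed of light [km/s]
D_L = 31.7                       # luminosity distance [Mpc]
z = 2212.0 / c                   # redshift from V_sys = 2212 km/s
D_A = D_L / (1.0 + z) ** 2       # angular-diameter distance [Mpc]

arcsec_in_rad = math.pi / (180.0 * 3600.0)
pc_per_arcsec = D_A * 1.0e6 * arcsec_in_rad   # -> about 151 pc
```

This reproduces the 151 pc per arcsecond stated above.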
\section{Observations and data reduction} \label{sec:obs}
Observations were conducted on February 15, 2019,
with the NOrthern Extended Millimeter Array (NOEMA)
of the Institut de radioastronomie millim\'{e}trique (IRAM)
in A configuration with ten antennas.
The phase centre was set to the position of NGC~4261 at
RA(J2000)=$12^{\rm h}19^{\rm m}23^{\rm s}.220$ and
Dec(J2000)=05$^{\circ}$49$^{\prime}$30$^{\prime\prime}$.775.
The full width at half maximum of the NOEMA primary beam was
$55^{\prime\prime}$ at 88 GHz.
The projected baseline lengths ranged from 22 to 734 m
over the course of the observations.
The local oscillator frequency was set to 82.0 GHz,
with frequencies ranging from 70.398 to 78.115 GHz in the lower sideband
and from 85.886 to 93.603 GHz in the upper sideband.
The PolyFix correlator was configured with a frequency resolution of 2~MHz.
The nearby source 3C~273 was observed
as both the bandpass and gain calibrators.
The absolute flux calibration was performed using MWC 349 (1.07 Jy)
and LkH$\alpha$101 (0.22 Jy).
The absolute flux calibration uncertainty for NOEMA is
less than $10\%$ at Band 1 ($\lambda$3 mm)
\footnote{IRAM NOEMA Data Reduction CookBook,
https://www.iram.fr/IRAMFR/GILDAS/doc/html/pdbi-cookbook-html/pdbi-cookbook.html}.
The raw visibility data were first converted into
the Flexible Image Transport System (FITS) format data
through the GILDAS software \citep{pety05}.
Then, calibration and imaging were performed
by using the NRAO Astronomical Image Processing System (AIPS) package
\citep{greisen90}.
We applied uniform weighting to the images
to obtain a higher spatial resolution of $<1^{\prime\prime}$.
To create the continuum map,
all line-free spectral windows with frequency ranges
of 70.398--74.459, 74.461--78.115, and 89.948--93.603 GHz
were combined, resulting in a centre frequency of 80 GHz.
After the continuum emission was subtracted in the \textit{u-v} plane,
channel maps of lines were made every 20~MHz,
corresponding to a velocity resolution of 68 km~s$^{-1}$.
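The quoted velocity resolution is simply the radio conversion $\Delta v = c\,\Delta\nu/\nu$ evaluated near the redshifted HCN/HCO$^+$ $J$=1--0 frequencies; a one-line check, assuming $\nu \approx 88$ GHz:

```python
c = 2.998e5            # speed of light [km/s]
dnu = 20.0e6           # channel width [Hz]
nu = 88.0e9            # approximate observing frequency [Hz]
dv = c * dnu / nu      # -> about 68 km/s per 20 MHz channel
```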
\section{Results}
\subsection{Continuum emission}
Figure~\ref{fig:n4261cnt} shows a bright compact source
at the phase centre
and weak jet features spanning 5 kpc
aligned along the east-west direction.
The position angle (PA) of the alignment is estimated
as $87\pm1^{\circ}$
via a linear-regression fit to the jet features.
This estimation is in agreement
with the 5 GHz radio jet PA of $88\pm2^{\circ}$
imaged with the VLA
\citep{birkinshaw85}.
The bright compact source is partially resolved into
a core
at $\Delta$RA = $0^{\prime\prime}.0$
along with the east and west nuclear jet components
at $1^{\prime\prime}.5$
and $-2^{\prime\prime}.0$, respectively.
The peak position of the continuum emission
coincides with that of the phase centre.
The flux density of the compact core
within the central $\pm3^{\prime\prime}$
is measured as $S_{\rm 80GHz}$ = 360 mJy
by means of a two-dimensional Gaussian fit
using the AIPS task JMFIT.
Together with literature flux measurements,
$S_{\rm 115GHz}$ = $326.34\pm0.82$ mJy
observed with the IRAM 30 m telescope
\citep{ocana10},
$S_{\rm 236GHz}$ = $253\pm25$ mJy,
and
$S_{\rm 348GHz}$ = $223\pm22$ mJy
from Atacama Large Millimeter/submillimeter Array projects
2017.1.00301.S and 2017.1.01638.S
(Boizelle, priv. communication),
we find a spectrum $S_\nu \propto \nu^\alpha$
with $\alpha = -0.34\pm 0.02$ in the 80-348 GHz range.
This core spectrum is clearly inconsistent with thermal emission,
although it is still somewhat flatter than
the canonical synchrotron indices measured in extended jets,
suggesting a partially optically thick core environment.
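The index can be reproduced approximately with a simple unweighted power-law fit in log--log space to the four flux densities quoted above (a sketch; the quoted $\pm0.02$ presumably derives from a fit weighted by the measurement uncertainties):

```python
import numpy as np

nu = np.array([80.0, 115.0, 236.0, 348.0])    # frequency [GHz]
S = np.array([360.0, 326.34, 253.0, 223.0])   # flux density [mJy]

# S_nu ∝ nu^alpha  =>  log S = alpha * log nu + const
alpha, _ = np.polyfit(np.log10(nu), np.log10(S), 1)
# alpha ≈ -0.33, consistent with the quoted -0.34 ± 0.02
```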
\subsection{HCN and HCO$^{+}$ emission}
The spectral profiles of
the HCN $J$=1--0 and HCO$^+$ $J$=1--0 emission lines
integrated over the region
within the central $\pm1^{\prime\prime}$
are shown in Figure~\ref{fig:n4261spc}.
Both molecular lines are detected
above the $3\sigma$ level
within a velocity range of $\pm700$ km~s$^{-1}$
from $V_{\rm sys}$.
Furthermore, both lines are
below the $3\sigma$ level
in the channels around $V_{\rm sys}$
and
exhibit a nearly symmetrical double-peaked spectral profile.
The double-peaked spectra resemble the double-horned profile
expected from an inclined rotating disk
with a central depression or a cavity
\citep[e.g.][]{wiklind97}.
In addition, a possible absorption feature is detected
at the redshifted frequencies of C$_2$H $N$=1--0
with a significance of $3\sigma$
in the deepest absorption channel.
The detected feature could be a blend of six hyperfine components of C$_2$H $N$=1--0
($J$=3/2--1/2, $F$=1--1, 2--1, 1--0, and $J$=1/2--1/2, $F$=1--1, 0--1, 1--0).
The velocity-integrated intensity (moment-0) maps
of the HCN $J$=1--0 and HCO$^+$ $J$=1--0 emission lines
shown in Figures~\ref{fig:moment}(a) and (b)
reveal a single component,
which spatially coincides with the central continuum peak.
A faint feature can be seen $1^{\prime\prime}.5$ east
of the phase centre in the HCN moment-0 map,
but it does not reach the $4\sigma$ level.
The parameters of a least-squares ellipse fit
to the regions defined by the
$4\sigma$ contour of the integrated intensity
in each moment-0 map
are listed in Table~\ref{tab:ellifit}.
The extent of the significant HCN emission
spans $1^{\prime\prime}.4$ (210 pc)
along the north-south direction (PA$=2^{\circ}$),
which is
slightly longer than
a beam size of $0^{\prime\prime}.75$.
The distribution of HCO$^{+}$ emission is more concentrated
at the centre
and the HCO$^+$ component is fainter.
Both of these molecular lines
originate from the same $1^{\prime\prime}.7$-diameter dust disk
found in the HST images \citep{jaffe93,jaffe96}
and are more centrally concentrated
compared to the CO $J$=2--1 emission
that spans $2^{\prime\prime}$
in prior interferometric observations \citep{boizelle21}.
The intensity-weighted velocity (moment-1) map of HCN
(Figure~\ref{fig:moment}(c))
tentatively
shows a velocity gradient
along the major axis,
perpendicular to the jet PA.
The distribution and velocity structure of the HCN line
are in agreement with those obtained for
CO $J$=2--1 and $J$=3--2 lines
\citep{boizelle21}.
Furthermore,
the moment-1 map of HCO$^+$ (Figure~\ref{fig:moment}(d))
roughly follows
the velocity gradient along the north-south direction,
although the velocity gradient is less evident
than that of HCN.
It should be noted that
the HCO$^+$ distribution exhibits
only a barely resolved disk structure;
multiple velocity components are thus likely spatially blended.
The HCN $J$=1--0 to HCO$^{+}$ $J$=1--0 ratio ($R_{\rm HCN/HCO^{+}}$)
and the HCN $J$=1--0 to CO $J$=1--0 ratio ($R_{\rm HCN/CO}$)
are proposed to be good indicators of an AGN-dominated environment in SGs
\citep{kohno01, kohno05}.
Velocity-integrated flux densities of
HCN ($S_{\rm HCN} \Delta V$)
and HCO$^+$ ($S_{\rm HCO^{+}} \Delta V$)
within the central $\pm1^{\prime\prime}$ are
1.48 and 0.79 Jy km s$^{-1}$, respectively.
We derive $R_{\rm HCN/HCO^{+}} = 1.87$
on sub-kiloparsec scales,
which is consistent with the mean ratio of $1.84\pm0.43$
for a sample of AGN host galaxies \citep{privon15}.
Assuming the CO $J$=2--1 to $J$=1--0 intensity ratio $R_{\rm 21}$ = 0.79
from the xCOLD (extended CO Legacy Database) for GASS (GALEX Arecibo SDSS Survey)
sample of nearby galaxies \citep{koss21}
and the velocity-integrated flux density of CO $J$=2--1 of 3.06 Jy km s$^{-1}$
measured for NGC~4261 \citep{boizelle21},
we get $R_{\rm HCN/CO}$ = 0.38.
The resultant line ratios $R_{\rm HCN/HCO^{+}}$ of 1.87 and
$R_{\rm HCN/CO}$ of 0.38 in NGC~4261 are typical values
expected for `pure' AGNs
with the absence of any associated nuclear starburst
activity \citep{kohno05}.
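The quoted ratios follow directly from the velocity-integrated flux densities given above; a sketch of the arithmetic:

```python
S_hcn = 1.48    # HCN J=1-0 integrated flux [Jy km/s]
S_hcop = 0.79   # HCO+ J=1-0 integrated flux [Jy km/s]
S_co21 = 3.06   # CO J=2-1 integrated flux [Jy km/s], Boizelle et al. (2021)
R21 = 0.79      # CO J=2-1 / J=1-0 intensity ratio, xCOLD GASS sample

R_hcn_hcop = S_hcn / S_hcop     # HCN/HCO+ ratio, ≈ 1.87
S_co10 = S_co21 / R21           # inferred CO J=1-0 flux
R_hcn_co = S_hcn / S_co10       # HCN/CO ratio, ≈ 0.38
```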
\begin{table}
\caption{Elliptical fit parameters to integrated intensities of HCN and HCO$^+$}
\label{tab:ellifit}
\begin{tabular}{c c c c c}
\hline\hline
Line &
$\Delta{\rm RA}$ &
$\Delta{\rm Dec}$ &
$\theta_{\rm mj}\times\theta_{\rm mn}$ &
PA \\
& [$^{\prime\prime}$]
& [$^{\prime\prime}$]
& [$^{\prime\prime}\times^{\prime\prime}$]
& [$^\circ$] \\
(1) & (2) & (3) & (4) & (5) \\
\hline
HCN(1--0)
& $0.08\pm0.01$
& $0.02\pm0.01$
& $1.4\times0.8$
& 2 \\
HCO$^+$(1--0)
& $-0.04\pm0.02$
& $-0.07\pm0.02$
& $1.0\times0.7$
& 35 \\
\hline
\end{tabular}
\tablefoot{
Column(1): Line species.
(2): RA offset from the continuum source centroid
at RA(J2000)=$12^{\rm h}19^{\rm m}23^{\rm s}.220$.
(3): DEC offset from the continuum source centroid
at
DEC(J2000)=05$^{\circ}$49$^{\prime}$30$^{\prime\prime}$.775.
(4): Angular widths of the major and minor axes.
(5): Position angle.
}
\end{table}
\section{Discussion}
\subsection{Keplerian rotation of CNDs}
Luminosity-weighted moment-1 measurements
for $J$=1--0 emission lines of HCN and HCO$^+$
along the major axis (PA$=0^{\circ}$)
are shown in Figure~\ref{fig:pv}.
Data points are derived from the velocity slice
across the centre along PA$=0^{\circ}$
in the HCN and HCO$^+$ moment-1 maps
(Figure~\ref{fig:moment}(c)(d)).
We performed a linear fit to the HCN data points
at a position offset within $\pm0^{\prime\prime}.4$
and a Keplerian rotation fit to the HCN data
at $<-0^{\prime\prime}.4$ and $>+0^{\prime\prime}.4$.
The HCN data points and
their best-fit Keplerian rotation curves
indicate that
the HCN emission traces the rotation with
a radius in the range 66--130 pc.
The enclosed mass estimated from the Keplerian rotation fitting is
$(1.6\pm0.1)\times10^9$ $M_{\odot}$,
after adopting a disk inclination angle of $64^{\circ}$
\citep{ferrarese96}.
This is in good agreement with the black hole mass measurement
made using the CO lines
($1.67\times10^9 M_{\odot}$; \citealt{boizelle21}),
while it is three times larger
than the mass determination inside $0^{\prime\prime}.1$
(14.5 pc)
based on the ionised gas kinematics
($4.9\times10^8 M_{\odot}$; \citealt{ferrarese96}).
The HCO$^+$ data also appear to display a velocity gradient
along the major axis,
while the data points are more scattered
from the linear gradient.
This could be due to
the multiple, barely resolved velocity features of HCO$^+$.
The best-fit rotational gas model
for the HCO$^+$ data is
consistent with the Keplerian rotational model
obtained from the HCN data.
This implies that
HCN and HCO$^+$ emission trace the same galaxy potential
for the same radii of approximately 60--130 pc.
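The quoted enclosed mass can be sanity-checked against the observed velocity field with a simple Keplerian rotation curve, $v_{\rm los}(r) = \sqrt{GM/r}\,\sin i$. The sketch below uses only numbers stated in the text; it is illustrative, not the fitting code used in the analysis:

```python
import numpy as np

# Projected Keplerian velocities implied by the best-fit enclosed mass,
# at the inner and outer radii traced by HCN.
G = 4.301e-3            # gravitational constant [pc (km/s)^2 / Msun]
M = 1.6e9               # enclosed mass from the Keplerian fit [Msun]
inc = np.radians(64.0)  # disk inclination (Ferrarese et al. 1996)

for r in (66.0, 130.0):               # radii traced by HCN [pc]
    v_circ = np.sqrt(G * M / r)       # circular velocity [km/s]
    v_los = v_circ * np.sin(inc)      # line-of-sight (observed) velocity
    print(f"r = {r:5.1f} pc: v_circ = {v_circ:.0f} km/s, v_los = {v_los:.0f} km/s")
```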
\subsection{Mass of dense molecular gas in CNDs}
The HCN line luminosity
has been used to estimate the mass of the dense molecular gas,
by applying the HCN luminosity-to-mass conversion factor
$\alpha_{\rm HCN}$.
Using the HCN line luminosity
in accordance with \cite{solomon97} and \cite{tan18},
the dense molecular gas mass ($M_{\rm dg}$) is as follows:
\begin{eqnarray}
M_{\rm dg} &=& \alpha_{\rm HCN} L_{\rm HCN}^{\prime} \nonumber \\
&=& 3.25\times10^7 \alpha_{\rm HCN} S_{\rm HCN}\Delta V
\nu_{\rm obs}^{-2} D_{\rm L}^2 (1+z)^{-3}
~ M_{\odot},
\end{eqnarray}
where
$\alpha_{\rm HCN}$ denotes the HCN luminosity-to-dense-gas-mass
conversion factor in $M_{\odot}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$,
$L_{\rm HCN}^{\prime}$ denotes the HCN luminosity
in K~km~s$^{-1}$~pc$^2$,
$S_{\rm HCN} \Delta V$ denotes the velocity-integrated flux density
of HCN in Jy~km~s$^{-1}$,
$\nu_{\rm obs}$ denotes the observed line frequency in GHz
and $D_{\rm L}$ denotes the luminosity distance in Mpc.
We obtain $L_{\rm HCN}^{\prime}$ = $6.03\times10^6$ K~km~s$^{-1}$~pc$^2$.
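Equation (1) can be evaluated directly. In the sketch below, $\nu_{\rm obs}$, $z$ and $D_{\rm L}$ are assumed values for NGC~4261 (HCN $J$=1--0 rest frequency 88.632 GHz, $z \approx 0.0074$, $D_{\rm L} \approx 31.6$ Mpc) that are not quoted in this excerpt; with them the quoted luminosity is reproduced to within about one per cent:

```python
# Evaluate Eq. (1) for NGC 4261. z, nu_obs and D_L are assumed values
# (not quoted in this excerpt); S_dV is the measured HCN flux density.
z = 0.0074                 # assumed redshift of NGC 4261
nu_obs = 88.632 / (1 + z)  # observed HCN J=1-0 frequency [GHz]
D_L = 31.6                 # assumed luminosity distance [Mpc]
S_dV = 1.48                # HCN integrated flux density [Jy km/s] (this work)

L_HCN = 3.25e7 * S_dV * nu_obs**-2 * D_L**2 * (1 + z)**-3  # [K km/s pc^2]
M_dg = 10.0 * L_HCN        # alpha_HCN = 10 Msun / (K km/s pc^2) (Gao & Solomon 2004)
print(f"L'_HCN = {L_HCN:.2e} K km/s pc^2")  # ~6e6
print(f"M_dg   = {M_dg:.2e} Msun")          # ~6e7
```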
The luminosity-to-mass conversion factor, $\alpha_{\rm HCN}$,
can vary from
0.24 to over 20 $M_{\odot}$ (K km~s$^{-1}$ pc$^2$)$^{-1}$
\citep{barcos18,evans20,jones21},
depending on the gas density and the line opacity
\citep{jones21,wang21}.
If we consider the standard extragalactic conversion factor
$\alpha_{\rm HCN}$ of
10 $M_{\odot}$ (K km~s$^{-1}$ pc$^2$)$^{-1}$
\citep{gao04},
$M_{\rm dg}$ is calculated to be
$6.03\times10^7$ $M_{\odot}$.
This value is over five times higher than the total gas mass
$M_{\rm gas}$ = $1.12\times10^7$ $M_{\odot}$
reported by \citet{boizelle21}
using a typical CO conversion.
The adopted conversion factor $\alpha_{\rm HCN}$ = 10 could be
an order of magnitude too large
because NGC~4261 has a higher $R_{\rm HCN/CO}$ compared with other galaxies.
The values of $M_{\rm dg}$ and $M_{\rm gas}$ ($10^{7}$ $M_{\odot}$)
are consistent with CND masses on 100 pc scales
in galaxies of various types, including SGs, ETGs, and RGs
\citep[e.g.][]{izumi16,boizelle17,garcia19,ruffa19}.
In contrast,
the CND of NGC~4261 is significantly more massive than
the 100 pc scale CNDs
in other nearby RGs:
NGC~5128 ($2 \times 10^6$ $M_{\odot}$;
\citealt{mccoy17})
and NGC~1052
($5.3 \times 10^5$ $M_{\odot}$; \citealt{kameno20}).
Interestingly, these three RGs
show dissimilar CND characteristics to one another
despite many resemblances,
such as
the classification of a LINER AGN,
the bright and two-sided radio jets,
the presence of a surrounding torus,
the central condensation of the ionised gas,
and
the central high column density of hydrogen
\citep[e.g.][]{marconi00,kameno01,markowitz07,balokovic21}.
Further observations in a larger sample of RGs
are required
to understand the variety of the observed CND characteristics.
\subsection{$M_{\rm dg}$--$\dot M_{\rm BH}$ correlation}
\citet{izumi16} report
a positive correlation between $M_{\rm dg}$ and
the black hole mass accretion rate, $\dot M_{\rm BH}$,
for SGs.
Applying our measured $M_{\rm dg}$ of $6.03\times10^7$ $M_{\odot}$
to the regression line between $M_{\rm dg}$ and $\dot M_{\rm BH}$
offered by \citet{izumi16},
we infer $\dot M_{\rm BH} = 10^{-2.48}$ $M_\odot$ yr$^{-1}$.
This value is comparable to
an $\dot M_{\rm BH}$ of $10^{-2.70}$ $M_\odot$ yr$^{-1}$,
which is obtained by using the
$L_{\rm bol}$--$\dot M_{\rm BH}$ relation \citep{alexander12}:
\begin{equation}
\dot M_{\rm BH} = 0.15
\biggl( \frac{0.1}{\eta} \biggr)
\biggl( \frac{L_{\rm bol}}{10^{45} {\rm erg~s}^{-1}} \biggr)
~ M_\odot {\rm yr}^{-1},
\end{equation}
where $\eta$ = 0.1 is a typical value
for mass--energy efficiency conversion
\citep{marconi04}
and $L_{\rm bol}$ is equal to $10^{42.6}$ erg s$^{-1}$ for NGC~4261,
as reported by \citet{hermosa22}.
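Equation (2) is a simple linear scaling and can be written down directly; the fiducial check below uses only the normalisation of the relation itself (the uncertainties in mapping an observed luminosity to $L_{\rm bol}$ are discussed next):

```python
def mdot_bh(L_bol, eta=0.1):
    """Black hole accretion rate [Msun/yr] from Eq. (2) (Alexander & Hickox 2012).

    L_bol : bolometric luminosity [erg/s]
    eta   : mass-energy conversion efficiency (fiducial 0.1)
    """
    return 0.15 * (0.1 / eta) * (L_bol / 1e45)

# Fiducial normalisation: L_bol = 1e45 erg/s with eta = 0.1 gives 0.15 Msun/yr.
print(mdot_bh(1e45))            # 0.15
# The rate scales linearly with L_bol and inversely with eta:
print(mdot_bh(5e44, eta=0.05))  # 0.15
```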
The derived $M_{\rm dg}$ appears to be in agreement with
the positive correlation between $M_{\rm dg}$ and
$\dot M_{\rm BH}$ at the CND scale in NGC~4261.
It should be noted, however, that there are significant uncertainties
in $\alpha_{\rm HCN}$,
in the $L_{\rm 2-10keV}$--$L_{\rm bol}$ relation \citep[e.g.][]{eracleous10},
and in the appropriate value of $\eta$
for SGs in general.
\section{Conclusions}
We mapped
the central 5 kpc of NGC~4261 with NOEMA
in the HCN and HCO$^+$ $J$=1--0 lines and
the 80 GHz continuum.
The continuum image reveals a core-dominant
synchrotron jet structure,
which consists of
a bright central source and weak jet features
aligned along the east-west direction.
HCN and HCO$^{+}$ emission lines are detected in NGC~4261
for the first time,
covering a velocity range of
$\pm700$ km~s$^{-1}$ relative to $V_{\rm sys}$.
The molecular gas is distributed in
a rotating sub-kiloparsec disk structure,
which coincides with the bright central source
in position.
The Keplerian rotation model obtained from
the velocity fields of HCN and HCO$^+$
yields an enclosed mass of $(1.6\pm0.1)\times10^{9}$
$M_\odot$.
Using the HCN line luminosity
and the standard extragalactic luminosity-to-mass
conversion factor,
the dense gas mass, $M_{\rm dg}$,
associated with the CND is estimated
to be $6.03\times10^7$ $M_{\odot}$.
This value is comparable to a typical CND mass
measured in galaxies of various types,
including SGs, ETGs, and RGs.
The derived $M_{\rm dg}$ and $\dot M_{\rm BH}$ in NGC~4261
align with the positive correlation
between $M_{\rm dg}$ and $\dot M_{\rm BH}$
seen in SGs,
which supports the scenario that
star formation in CNDs drives mass accretion onto SMBHs,
although there are significant uncertainties in the parameters
of the correlation.
\begin{acknowledgements}
We acknowledge the anonymous referee for valuable comments that improved our manuscript.
This work is based on IRAM/NOEMA observations carried out under project number W18CK.
IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).
S.S.-S. is supported by JSPS KAKENHI grant No. 21K03628.
S.K. is supported by JSPS KAKENHI grant No. 18K0371.
\end{acknowledgements}
\bibliographystyle{aa} %
\bibliography{aa202244047} %
Title: Multiple locations of near-infrared coronal lines in NGC 5548
Abstract: We present the first intensive study of the variability of the near-infrared
coronal lines in an active galactic nucleus (AGN). We use data from a one-year
long spectroscopic monitoring campaign with roughly weekly cadence on NGC 5548
to study the variability in both emission line fluxes and profile shapes. We
find that in common with many AGN coronal lines, those studied here are both
broader than the low-ionisaton forbidden lines and blueshifted relative to
them, with a stratification that implies an origin in an outflow interior to
the standard narrow line region. We observe for the first time [S VIII] and [Si
VI] coronal line profiles that exhibit broad wings in addition to narrow cores,
features not seen in either [S IX] or [Si X]. These wings are highly variable,
whereas the cores show negligible changes. The differences in both the profile
shapes and variability properties of the different line components indicate
that there are at least two coronal line regions in AGN. We associate the
variable, broad wings with the base of an X-ray heated wind evaporated from the
inner edge of the dusty torus. The coronal line cores may be formed at several
locations interior to the narrow line region: either along this accelerating,
clumpy wind or in the much more compact outflow identified with the obscurer
and so emerging on scales similar to the outer accretion disc and broad line
region.
https://export.arxiv.org/pdf/2208.12821
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
galaxies: Seyfert -- galaxies: active -- infrared: galaxies -- quasars: emission lines -- quasars: individual: \ngc
\end{keywords}
\section{Introduction}
\label{sec:intro}
Active galactic nuclei (AGN) display emission lines from both permitted and forbidden transitions. The latter are usually associated with the narrow emission line region (NLR), which is formed by gas of low densities ($\log[n_{\rm H}/\mathrm{cm}^{-3}] \sim 3$--6) located at relatively large distances from the central ionising source (from a few pc up to several 100~pc). But some of these forbidden emission lines require energies $\gtrsim 100$~eV to form the corresponding ions, have higher critical densities for collisional deexcitation ($\log[n_{\rm e}^{\rm crit}/\mathrm{cm}^{-3}] \sim 7$--10) and broader profiles (full widths at half maxima [FWHM] $\sim 500-1500$~km~s$^{-1}$) than the low-ionisation narrow emission lines \citep[e.g.,][]{Pen84, App88, Gian95, Erk97, Rod02, Rod11}: these are the so-called `coronal lines', named after their presence in the spectrum of the solar corona \citep{Oke68}.
Coronal lines are forbidden fine-structure transitions arising from highly-ionised states of heavy metals. Low-energy electrons or weak interactions with high-energy electrons can efficiently excite these transitions from the ground state, however, the formation of the ions themselves requires relatively high energies.
Both types of AGN display coronal lines \citep{Ost77, Koski78}, but they are stronger in broad-line (type~1) than in narrow-line (type~2) AGN relative to their low-ionisation narrow emission lines \citep{Mur98}. Therefore, it is likely that the coronal line region has two components, one compact and one spatially extended, with only the latter remaining unobscured by the dusty torus in type~2 AGN. Support for this assumption comes from the fact that the emission from this region is often extended but much less so than that from the low-ionisation NLR (on scales of $\sim 80-150$~pc; e.g.\ \citealt{Prieto05, Mueller06, Mueller11, Maz13, Riffel21}; although \citealt{Negus21} have recently reported coronal line emission on kpc scales).
Furthermore, coronal lines are often blueshifted relative to the low-ionisation narrow lines \citep[e.g.,][]{Pen84, Erk97, Rod02}, which indicates an outflowing wind component. Given their similar physical conditions, it could be that the partly ionised gas that produces absorption lines and edges in the soft X-ray spectra of AGN, i.e., the so-called `warm absorber', produces also the coronal lines in its (colder) outer regions \citep{Netzer93, Erk97, Por99}. In any case, the coronal line region is most likely dust free, since strong emission from refractory elements such as iron, silicon and calcium are observed, which would be severely reduced in a dusty environment \citep{Ferg97}.
The high ionisation potentials ($\chi$) required for the coronal lines can be found either in a hot, collisionally ionised plasma or be produced by the hard continuum in AGN if the gas is photoionised. In the first case, the electron temperatures would be of the order of $\log(T_{\rm e}/\mathrm{K}) \approx6$ and in the second case much lower ($\log[T_{\rm e}/\mathrm{K}] \sim 4$--5). Currently, photoionisation is favoured, since for most AGN the observed flux ratios between different coronal lines can be reproduced within a factor of $\sim 2-3$ by these models \citep{Oliva94, Ferg97, Landt15a, Landt15b}, whereas in the case of a hot plasma either its temperature needs to be fine-tuned within a very narrow range \citep{Oliva94} or no acceptable fit can be obtained \citep{Landt15a, Landt15b}.
More recently, growing interest in near-IR coronal lines has arisen due to their potential for yielding estimates of the black hole mass in AGN \citep{Cann18, Rod20}. Since these coronal lines can potentially probe the accretion disc spectral energy distribution (SED) at far-UV/soft-X-ray energies, a regime difficult to observe but most influenced by the mass of the black hole, they could uncover the long-sought population of intermediate black holes, which are crucial for our understanding of black hole growth over cosmic time \citep{Hopkins12}. Even more importantly, if near-IR coronal lines can help identify AGN in dwarf galaxies in large numbers \citep{Bohn21, Cann21}, it would open up a unique opportunity to understand the role of AGN feedback in galaxy evolution and to test cosmological models. Since dwarf galaxies are believed to be dark matter-dominated, these sources serve as important probes in the low-mass halo regime in particular for the Lambda Cold Dark Matter ($\Lambda$CDM) paradigm \citep{Nav19}.
Variability studies can strongly constrain the properties of the coronal line region, in particular if several lines can be studied simultaneously and together with other AGN components such as the broad emission line region (BLR) and the UV/X-ray continuum. However, since these emission lines \textcolor{black}{are relatively weak} and so require high-quality spectroscopy, very few studies of this kind have been attempted so far. \citet{Vei88} presented the only systematic study of coronal line variability. In a sample of $\sim 20$ AGN, he found firm evidence that both the \fevii~$\lambda 6087$ and \fex~$\lambda 6375$ emission lines varied (during a period of a few years) for only one source (NGC\,5548) and tentative evidence for another seven sources (including NGC~4151). Then, within a general optical variability campaign on Mrk~110 for about half a year, \citet{Kolla01} reported strong \fex~variations. Strong variability of the coronal lines manifested mainly as a fading of the flux was first reported for IC~3599 \citep{Grupe95, Brandt95} and is now usually associated with a new class of non-active galaxies, the so-called `strong coronal line emitters' (or `coronal line forest AGN'). Most of these sources have been detected in the Sloan Digital Sky Survey (SDSS: \citealt{York00}) and a stellar tidal disruption event seems to be the most plausible explanation for the strong fading of their coronal lines over a time period of several years \citep{Kom08, Gel09, Kom09, WangT11, WangT12, Yang13, Rose15, Winkler16, Cer21, VV21}.
\citet{Landt15a} and \citet{Landt15b} presented the first extensive studies of the coronal line variability in individual AGN. Their data sets for the nearby, well-known sources NGC~4151 and NGC\,5548 included a handful of epochs of quasi-simultaneous optical, near-IR and X-ray spectroscopy spanning a period of several years. They found very different variability behaviours for the two sources, with only weak variations detected for the coronal lines in NGC~4151, but strong flux variability (mainly a decrease) by factors of $\sim 2-4$ observed in NGC\,5548. In both sources, the coronal line gas density was constrained to relatively low values of $\log(n_{\rm e}/\mathrm{cm}^{-3}) \sim3$ for a relatively high ionisation parameter of $\log U \sim 1$, which put it at a distance from the central ionising source of a few light~years and so well beyond the hot inner face of the obscuring dusty torus. Therefore, they proposed that the coronal line region in AGN is an independent entity rather than part of a continuous gas distribution connecting the BLR and low-ionisation NLR, possibly an X-ray heated wind as first suggested by \citet{Pier95}.
Here we revisit the variability of the near-IR coronal lines in \ngc\ with a much improved data set that allows us to study in detail both flux and profile shape variations. Our paper is structured as follows. In Section \ref{data}, we briefly discuss the near-IR spectroscopy, which we analyse in detail in Section~\ref{analysis}. In Section~\ref{results}, we present the results on the coronal line profiles and their variability, which we compare to theoretical photoionisation simulations in Section~\ref{sec:cloudy}. In Section~\ref{sec:discussion}, we discuss the likely origin of the near-IR coronal lines in \ngc. Finally, in Section \ref{sec:conclusions}, we present a summary of our main results and conclusions. We quote all laboratory line wavelengths as vacuum wavelengths and define velocities as negative if they are in the blue-shifted (outflowing) direction.
\section{The data} \label{data}
\citet{Landt19} observed NGC~5548 between 2016 August and 2017 July with the SpeX spectrograph \citep{Rayner03} at the NASA Infrared Telescope Facility (IRTF), a 3~m telescope on Maunakea, Hawaii. The main aim of this near-IR spectroscopic reverberation mapping campaign was to measure the time delay of the hot dust in the obscuring torus together with estimates of the torus radius based on thermal equilibrium arguments. Another important goal was to study the variability of emission lines such as those from the coronal line region.
The campaign achieved a total of 18 near-IR spectra with an average cadence of about ten days, excluding a 3.5-month period when the source was unobservable. They used the short cross-dispersed (SXD) mode (0.7--\SI{2.55}{\micro\meter}) and a $0.3^{\prime\prime} \times 15^{\prime\prime}$ slit oriented at the parallactic angle, resulting in an average spectral resolution of $R=2000$ or full width at half maximum (FWHM) $\sim 150$~km~s$^{-1}$. The spectra have in general a relatively high signal-to-noise (S/N) ratio with an average continuum $\mathrm{S/N} \sim 100$.
Care was taken to ensure \textit{a posteriori} an accurate absolute flux calibration. As described in detail in \cite{Landt19} (see their section~3.3), they used the narrow forbidden emission line \siii~$\lambda 9531$ to align the flux scale of the spectra and verified it with a photometric light-curve. Furthermore, the impact of the extended low-ionisation emission line region on the enclosed flux in the slit was assessed based on a near-IR Integral Field Unit (IFU) observation. For our following analysis, we used the observed spectra with their multiplicative photometric correction factors applied. We estimated that the uncertainty on the wavelength calibrations of our spectra is on average 0.15~\AA, which corresponds to $\approx2-5$~\kms\ at the location of the lines studied. We report velocity shifts of the coronal lines measured relative to the \siii~$\lambda 9531$ narrow emission line and the wavelength calibration uncertainty was added to the measurement uncertainties.
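The quoted velocity uncertainty follows from $\Delta v = c\,\Delta\lambda/\lambda$ evaluated at the coronal-line wavelengths (a minimal sketch using the line wavelengths from Table~\ref{tab:contaminants}):

```python
# Wavelength-calibration uncertainty (0.15 A) converted to a velocity
# uncertainty at each coronal-line wavelength.
c = 2.998e5     # speed of light [km/s]
dlam = 0.15e-4  # calibration uncertainty [um] (= 0.15 A)

lines = {"[S VIII]": 0.9914, "[S IX]": 1.2523,
         "[Si X]": 1.4305, "[Si VI]": 1.9650}  # vacuum wavelengths [um]
dv = {name: c * dlam / lam for name, lam in lines.items()}

for name, v in dv.items():
    print(f"{name} ({lines[name]} um): {v:.1f} km/s")  # spans ~2-5 km/s
```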
\section{The spectral analysis} \label{analysis}
\begin{table}
\centering
\caption{Coronal emission lines and their contaminants}
\label{tab:contaminants}
\begin{tabular}{lccclc}
\hline
\multicolumn{4}{c}{Coronal emission line} & \multicolumn{2}{c}{Contaminant emission line} \\
Ion & $\lambda$ & $\chi$ & $\log\left(n_\mathrm{e}^\mathrm{crit}\right)$ & Ion & $\lambda$ \\
& [\um] & [eV] & [cm$^{-3}$] & & [\um] \\
(1) & (2) & (3) & (4) & (5) & (6) \\
\hline
\Tstrut
\se & 0.9914 & 281.0 & 9.5 & \ci\ & 0.9827 \\
& & & & \ci\ & 0.9853 \\
& & & & \hi\ (\pad) & 1.005 \\
& & & & \heii\ & 1.0126 \\
\sn & 1.2523 & 328.8 & 9.4 & Unknown & 1.248 \\
& & & & \feii\ & 1.2570 \\
& & & & Unknown & 1.262 \\
& & & & \hi\ (\pab) & 1.282 \\
\sit & 1.4305 & 351.1 & 8.1 & & \\
\silvi & 1.9650 & 166.8 & 8.5 & \hi\ (\paa) & 1.875 \\
& & & & \hi\ (\brd) & 1.9440 \\
& & & & \htwo\ & 1.957 \\
\hline
\end{tabular}
\parbox[]{\columnwidth}{The columns are: (1) Ion; (2) vacuum rest-frame wavelength; (3) ionisation potential; and (4) critical density calculated with \cloudy\ for the coronal emission lines at temperature $\log(T_\mathrm{e}/\mathrm{K})=4$; (5) ion; and (6) vacuum rest-frame wavelength for the contaminating broad and narrow emission lines. The emission line parameters of all these lines are listed in Table \ref{tab:mean_fits}.}
\end{table}
The large wavelength range of our cross-dispersed near-IR spectra covers four strong coronal lines in \ngc. Two of them are produced by highly-ionised sulphur (\se~and \sn) and two by highly-ionised silicon (\silvi~and \sit). We list their basic properties in Table~\ref{tab:contaminants}. Our main aim was to reliably isolate the coronal lines in the individual spectra in order to measure their fluxes, velocity shifts and profile shapes and study their variability. However, since coronal lines are in general weak emission lines, we analysed them first in the high-quality mean spectrum presented by \citet{Landt19} and then used these results as a guide for our analysis of the individual spectra, which have a lower $S/N$ in the wavelength regions of interest.
We performed the emission line and continuum fits using custom Python scripts employing the package \textsc{lmfit} \citep{Newville14}, which is a non-linear, least-squares, curve-fitting routine based on the Levenberg-Marquardt algorithm. \textcolor{black}{The majority of the emission lines can be adequately modelled with Gaussian profiles.
From these fits we have calculated the emission line fluxes and FWHMs and the associated uncertainties.
All narrow emission lines were allowed to have widths varying in the range of $\mathrm{FWHM}=100$--900~\kms\
and the widths of broad Gaussian components were allowed to vary in the range of 1000--9999~\kms.
Values quoted for the FWHMs of the emission lines have been corrected for instrumental broadening.
The velocity shifts of the near-IR lines are determined from the centroids of the fitted Gaussians and measured relative to the \siii$\lambda9531$ line, which is assumed to have zero velocity shift.
The uncertainties on the velocity shifts incorporate the uncertainty on the wavelength calibration of our spectra.
More details on the fitting procedures for specific lines and blends are given in the following subsections.}
\subsection{The mean spectrum} \label{sec:meanspec}
\begin{table*}
\centering
\caption{Emission line parameters measured in the mean spectrum}
\label{tab:mean_fits}
\begin{tabular}{cccccc} %
\hline
Ion & $\lambda$ & Line & Velocity & FWHM & Flux \\
& [\um] & component & offset & [km\,s$^{-1}$] & [$10^{-15}$~erg\,s$^{-1}$\,cm$^{-2}$] \\
(1) & (2) & (3) & (4) & (5) & (6) \\
\hline
\siii & 0.90711 & & & $378\pm11$ & $13.3\pm0.5$ \\
\siii & 0.95332 & & & $365\pm7$ & $33\pm3$ \\
\ci & 0.98268 & & $+48\pm24$ & $483\pm27$ & $0.5\pm0.2$ \\
\ci & 0.98530 & & $+48\pm24$ & $483\pm27$ & $2.1\pm0.2$ \\
\se\ & 0.99138 & core & $-119\pm15^\star$ & $737\pm39$ & $7.3\pm0.4$ \\
" & & blue wing & & & $1.7\pm0.4$ \\
" & & red wing & & & $2.8\pm0.7$ \\
\hi\ (\pad) & 1.00521 & narrow & $-69\pm25$ & $461\pm26$ & $2.9\pm0.2$ \\
\hi\ (\pad) & & broad~1 & $-2961$ & $5091$ & \\
\hi\ (\pad) & & broad~2 & $+1680$ & $5149$ & $109\pm11$ \\
\heii\ & 1.01264 & narrow & $-7\pm18$ & $461\pm26$ & $2.9\pm0.2$ \\
\heii\ & & broad & $+1576\pm1025$ & $8847\pm956$ & $49\pm11$ \\
Unknown & 1.2475 & & & $334\pm13$ & $1.3\pm0.2$ \\
\sn\ & 1.2523 & & $-120\pm14^\star$ & $554\pm24$ & $4.3\pm0.2$ \\
\feii & 1.25702 & & $+25\pm18$ & $334\pm13$ & $2.2\pm0.2$ \\
Unknown & 1.2612 & & & $380\pm15$ & $<0.45$ \\
\hi\ (\pab) & 1.28216 & narrow & $-55\pm12^\star$ & $334\pm13$ & $6.4\pm0.3$ \\
\hi\ (\pab) & & broad~1 & $-2961\pm125$ & $5091\pm197$ & \\
\hi\ (\pab) & & broad~2 & $+1680\pm112$ & $5149\pm141$ & $215\pm7$ \\
\sit\ & 1.430 & & $-138\pm14^\star$ & $734\pm43$ & $7.9\pm0.6$ \\
\hi\ (\brd) & 1.94509 & narrow & $-69\pm41$ & $334$ & $0.9\pm0.2$ \\
\hi\ (\brd) & & broad~1 & $-2961$ & $5091$ & \\
\hi\ (\brd) & & broad~2 & $+1680$ & $5149$ & $39$ \\
\silvi\ & 1.9650 & core & $-384\pm14^\star$ & $607\pm25$ & $13.0\pm0.6$ \\
" & & blue wing & & & $3.6\pm0.4$ \\
" & & red wing & & & $8.7\pm0.6$ \\
\htwo\ & 2.0332 & & $-66\pm17^\star$ & $174\pm25$ & $0.8\pm0.1$ \\
\hi\ (\brg) & 2.16612 & narrow & $0\pm22$ & $389\pm46$ & $1.3\pm0.2$ \\
\hi\ (\brg) & & broad~1 & $-2961$ & $5091$ & \\
\hi\ (\brg) & & broad~2 & $+1680$ & $5149$ & $43.3\pm0.3$ \\
\hline
\end{tabular}
\parbox[]{11cm}{The columns are: (1) Ion; (2) vacuum wavelength; (3) fitted emission line component; (4) velocity offset of the emission line peak relative to \siii$\lambda0.95332$~\um: narrow line shifts with a significance $>3\sigma$ are marked $^\star$; (5) full width at half maximum of the emission line component corrected for an instrumental broadening of 150~\kms; and (6) integrated line flux. We give $1\sigma$ errors. We note that all \hi\ emission lines are assumed to have the same double-peaked broad-line profile as \pab\ and that we list for them only the total broad line flux.}
\end{table*}
Fig.~\ref{fig:fourlines} shows the wavelength regions around the four near-IR coronal lines in the mean spectrum. Out of these, only \sit\ is free from contaminating, neighboring emission lines, whereas the other coronal lines are located close to a hydrogen Paschen broad emission line and other chemical species (Table \ref{tab:contaminants}). Therefore, rather than isolating the coronal lines by `clipping' the profiles to the local continuum on the red and blue sides of the line, we carefully modelled all the line complexes around the coronal lines in order to deblend them from neighbouring features (Fig. \ref{fig:mean_spec_fits}). The resultant coronal line fluxes, their blends and other observed emission lines are listed in Table \ref{tab:mean_fits}. In the following we give details of the analysis.
\subsubsection{The continuum} \label{sec:continuum}
Firstly, the underlying pseudo-continuum is modelled and subtracted in four spectral regions: 0.7--1.22, 1.12--1.33, 1.39--1.53 and 1.65--2.4~\um. The 1.12--1.33 and 1.39--1.53~\um\ regions could be satisfactorily fit with a simple power-law, whereas the 0.7--1.22 and 1.65--2.4~\um\ regions required polynomial models to reproduce the curvature of the pseudo-continuum.
There are no other emission lines in the vicinity of \sit, so its profile could be obtained by simply subtracting the local pseudo-continuum flux. This spectral region is, however, affected by telluric absorption, particularly on the blue side of the line.
We were therefore careful to avoid the inclusion of residual telluric features when modelling the pseudo-continuum (see Fig.~\ref{fig:mean_spec_fits}).
The coronal lines \se, \sn\ and \silvi\ are all blended with other emission lines. We determined their profiles by modelling the emission line complexes as further described below.
\subsubsection{The \sn~line} \label{sec:mean_s_ix}
The \sn\ coronal line is located on the blue wing of broad \pab. \cite{Schonell17} analysed a high-spectral resolution near-IR spectrum of \ngc\ obtained in 2012 with the Near-Infrared Integral Field Spectrometer (NIFS) at Gemini North. They reported that \pab\ has two kinematically distinct components producing a double-peaked broad emission line.
Therefore, we decomposed the \pab~emission line using two Gaussians, one red and one blue of the line centre.
The widths of all Gaussians modelling the narrow lines were tied together. In addition to the narrow, forbidden \feii\ lines at 1.2570 and 1.2791~\um, we find two unidentified lines at $\approx1.2483$ and $\approx1.2612$~\um. The former unidentified line appears to be present also in the spectrum of \cite{Schonell17}, but not mentioned by the authors, whereas the latter unidentified line is not seen in their spectrum. No transitions near these two wavelengths are reported in observations of classical novae (\citealt{Wagner96}). We fit these unknown features with narrow Gaussians, but the $1.2612$~\um\ line is very weak and we could only obtain an upper limit on its flux. Therefore, we have not considered this line further in the fits of the individual spectra. In order to not make any prior assumptions about the \sn\ line profile, the narrow window containing the \sn\ line was masked and not used in the fitting process.
Having fit for \pab, \feii~1.2570~\um\ and the unidentified line at $1.2483$~\um, we take the residual flux in the emission line complex as the \sn\ profile (Fig. \ref{fig:mean_spec_fits}).
\subsubsection{The \silvi~line} \label{sec:mean_si_vi}
The \silvi\ coronal line is blended with both the red wing of the weak hydrogen \brd\ broad line and the extreme red wing of the hydrogen \paa\ broad line. A very weak \htwo\ line is also expected at 1.9570~\um, although we did not convincingly detect it. A narrow window containing the \silvi\ line was masked and all other emission lines were modelled with Gaussians.
We assumed that the broad components of the hydrogen lines \paa\ and \brd\ have the same double-Gaussian profile as the \pab\ broad component (see Section~\ref{sec:mean_s_ix}). The amplitude of the broad \brd\ profile was fixed so as not to exceed the observed flux. Since the width of the narrow \brd\ line hit the maximum allowed value, we fixed it to that of the narrow \pab\ line and fit only for the flux and velocity offset. The resulting ratio between the \brd\ and \pab\ narrow line fluxes is $0.14\pm0.03$, which is similar to the value of 0.11 expected for Case B and a gas temperature of 15000~K (\citealt{OF06}). We note that an additional, weak broad Gaussian component was required by the fits in order to adequately model the red wing of \paa, but we do not ascribe a particular physical meaning to this component.
The residuals of our best-fit model reveal the profile of the relatively strong \silvi\ line. It shows a prominent and extended red wing and excess blue flux in addition to a narrow core (Figs.~\ref{fig:mean_spec_fits} and \ref{fig:coronal_line_Gauss}).
\textcolor{black}{The width, flux and velocity offset of the core component have been determined by fitting a Gaussian to a region spanning $\approx800$~\kms\ across the peak of the line (the solid red line in Fig.~\ref{fig:coronal_line_Gauss}); these are the values reported for the `core' component in Table~\ref{tab:mean_fits}.
To determine the fluxes in the wings, we have integrated the excess flux from the peak of the core out to $\approx-2000$~\kms\ on the blue side of the core and $\approx+3000$~\kms\ on the red side.
The integration limits for the wings were determined by a visual inspection of the spectrum to determine the maximum velocity extents of the excess flux.
Since the red wing in particular is shallow at its extremity, it is likely that an appreciable amount of the integrated flux is just noise.
To account for this, we estimate the noise contribution to the integrated flux.
We first calculate the noise in the pseudo-continuum in two windows adjacent to the wings.
Using this value, we then simulate 1000 Gaussian noise `spectra' on the same velocity grid over which the wings are integrated.
The positive fluxes in these mock spectra are integrated and the mean value gives us an estimate of the amount of noise included in the wings' flux, which we take to be the uncertainty.
These values are reported for the `blue wing' and `red wing' components in Table~\ref{tab:mean_fits}.}
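The Monte Carlo noise estimate described above can be sketched as follows. This is a minimal illustration only: the continuum noise level, velocity-grid spacing and number of pixels are assumed values, not measurements from our spectra.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed values for illustration only: the pseudo-continuum noise measured
# in line-free windows, and the velocity grid over which a wing is integrated.
sigma_cont = 1.0e-17   # per-pixel continuum noise (erg s^-1 cm^-2 A^-1)
dv = 50.0              # velocity-grid spacing (km/s)
n_pix = 60             # pixels spanning ~3000 km/s of red wing

# Simulate 1000 pure-noise 'spectra' on the wing's velocity grid and
# integrate only the positive fluxes, as done for the observed excess.
mock = rng.normal(0.0, sigma_cont, size=(1000, n_pix))
pos_flux = np.clip(mock, 0.0, None).sum(axis=1) * dv

# The mean integrated positive flux estimates how much noise is included
# in the measured wing flux; we adopt it as the uncertainty.
noise_flux = pos_flux.mean()
print(f"noise contribution to the wing flux: {noise_flux:.3e}")
```

Because only positive excursions are integrated, the noise contribution does not average to zero, which is precisely why it must be estimated and adopted as an uncertainty.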
\subsubsection{The \se~line}
\label{sec:mean_s_viii}
The \se\ coronal line is located near the emission maximum of a blend between the hydrogen \pad\ and \heii\ broad emission lines and close to their respective narrow components. This complex also contains the narrow, forbidden lines \ci\ 0.9827 and 0.9853~\um\ and \sii\ 1.0290, 1.0323, 1.0339 and 1.0373~\um. We fit the wavelength region of 0.975--1.027~\um, which excludes the \sii\ lines. We again used the broad \pab\ profile as a template for the broad hydrogen lines (here \pad) and we fit for its flux only. On the red side of the emission line blend, a single broad Gaussian was adequate to fit the remaining flux from broad \heii. Single Gaussians were included to model the \ci, \pad\ and \heii\ narrow lines. The velocity offsets of the two \ci\ lines were tied together.
Having modelled and subtracted emission lines other than \se\ in the complex, we take the residual flux from our model as the \se\ profile (see Fig.~\ref{fig:mean_spec_fits}).
The \se\ coronal line has a profile similar to that of \silvi, comprising a red wing, excess blue flux and a narrow core (Figs.~\ref{fig:mean_spec_fits} and \ref{fig:coronal_line_Gauss}).
\textcolor{black}{Measurements of the line core and the blue and red wings have been made in the same manner as for \silvi, described in Section~\ref{sec:mean_si_vi}; the results are reported in Table~\ref{tab:mean_fits}.}
\subsubsection{Other emission lines}
For our study, we have also modelled the hydrogen \brg\ broad emission line, since it is of interest for black hole mass determinations in AGN (see Section~\ref{sec:mass}), and the \siii~$\lambda\lambda9069,9531$ narrow emission line doublet, since these lines can inform us about the intrinsic profile of transitions from the narrow emission line region.
There are no other broad emission lines in the vicinity of \brg\ and we isolated the line from a linear local continuum. The broad component was modelled with the same profile as the other hydrogen lines and we fit only for its flux. In addition to a narrow Gaussian for \brg, a second narrow Gaussian was added to model the \htwo~2.0332~\um\ emission line, which sits on the blue wing of broad \brg.
The \siii~$\lambda\lambda9069,9531$ narrow emission line doublet is a strong feature in our spectra. We subtracted the hydrogen \pae\ broad emission line from beneath the \siii\ lines, again using broad \pab\ as a template. We also subtracted a narrow Gaussian line at the rest-frame wavelength of \pae\ from the \siii~$\lambda9531$ line profile.
Having accounted for the \pae\ emission, each \siii\ line was adequately fit with a single Gaussian of $\mathrm{FWHM}\approx370$~\kms.
No significant velocity shift was measured between the two \siii\ lines.
The centroid velocity shifts of all other lines measured in the mean spectrum were measured relative to \siii~$\lambda9531$.
The observed flux ratio of \siii~$\lambda9531$/\siii~$\lambda9069=2.5\pm0.1$ is consistent with the theoretical value of 2.58.
\subsection{The single-epoch spectra} \label{singlespec}
Because of the lower quality of the single-epoch spectra relative to the mean spectrum, multi-Gaussian fits to the emission line complexes were not always successful in reliably isolating the coronal lines. Therefore, we used their emission-line profiles determined in the mean spectrum as a guide. Having subtracted the underlying broad-band continuum following the method described in Section~\ref{sec:continuum}, we then performed a fit to the emission line blend.
The mean \pab\ profile was again used as a template for all of the hydrogen lines; the template was rescaled in flux to match the hydrogen lines in the single-epoch spectra.
Each narrow line profile was modelled as a single Gaussian with its parameters taken from the fit to the same line in the mean spectrum.
If necessary, small scaling adjustments were made to the narrow lines to improve the fit.
\textcolor{black}{To some spectra we added a local continuum to correct cases in which we judged the broad-band continuum placement could be improved.}
The \textcolor{black}{local} continuum and contaminating lines \textcolor{black}{were} adjusted to obtain a good fit to the data then subtracted from the spectrum, leaving just the coronal line profile.
\textcolor{black}{As can be seen in Figs.~\ref{fig:siii_variations}--\ref{fig:s_ix_variations} the residuals away from the coronal line are generally featureless (with the exception of the wings near the \se\ and \silvi\ profiles, and some artefacts of imperfect telluric correction around \sit) indicating that this procedure generally worked well.}
\textcolor{black}{However, t}he coronal line profiles in a few epochs were of very poor quality and we have excluded them from our further studies.
\textcolor{black}{As for the mean spectrum, the fluxes, widths and velocity offsets of the narrow line cores are determined by fitting a Gaussian to the isolated emission line profile.
Again, for \se\ and \silvi\ we fit only the central portion of the profiles (within a few hundred \kms\ of the peak) to avoid the wings on either side of the line.}
\textcolor{black}{Because the coronal lines are low-contrast features in our spectra the FWHM and flux measurements are very sensitive to the placement of the underlying pseudo-continuum.
If the continuum is placed too high, for example, then both the FWHM and flux will be underestimated in the fitted profile.
This is an issue in the single-epoch spectra, in which the precise continuum placement is less certain than in the high-S/N mean spectrum.
We therefore calculate the range in both FWHM and flux that results from moving the local continuum to its highest and lowest plausible levels (i.e.\ $\pm1\sigma$).
The value of $\sigma$, the noise in the continuum, is calculated from the standard deviation of flux points about the mean value in featureless windows near to each coronal line (and typically this noise was $\sim1$~per cent of the continuum flux).
This additional uncertainty was added in quadrature to the Gaussian measurement uncertainties calculated by the fitting algorithm.
A similar approach was taken to estimate the uncertainties on the \se\ and \silvi\ wings; we calculate the error range from the maximum and minimum excess flux integrated when the continuum is moved by $\pm1\sigma$.
Because the blue wing is weak and the red wing is very shallow at its extremity, this results in rather large uncertainties for the wings (particularly in the noisy part of the spectrum containing \se) and there are several epochs where no flux in excess of the Gaussian core is detected with confidence.}
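The continuum-placement contribution to the flux uncertainty can be illustrated with a short sketch. All numbers here are assumptions chosen only to demonstrate the quadrature combination, not measurements from this paper.

```python
import numpy as np

# Illustrative numbers only (not measurements from this paper): a line flux
# from a Gaussian fit above a local continuum uncertain by +/- 1 sigma.
sigma_cont = 0.01      # continuum noise, ~1 per cent of the continuum level
n_pix = 40             # pixels under the line profile
dlam = 2.0             # Angstrom per pixel
fit_err = 0.3e-15      # flux error from the Gaussian fit itself (cgs)

# Moving the continuum up or down by 1 sigma changes the integrated line
# flux by roughly sigma_cont * (pixels under the line) * (pixel width).
flux_shift = sigma_cont * n_pix * dlam * 1e-15   # rescaled to cgs flux

# Add the continuum-placement term in quadrature to the fit error.
total_err = np.hypot(fit_err, flux_shift)
print(f"total flux uncertainty: {total_err:.3e}")
```

In this toy case the continuum-placement term dominates, which mirrors the situation for low-contrast lines in the single-epoch spectra.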
Our results for the four near-IR coronal lines as well as the strong \siii~$\lambda9531$ line are shown in Figs. \ref{fig:siii_variations}-\ref{fig:s_ix_variations}\textcolor{black}{. The fluxes of the line cores and wings over the course of the campaign} are recorded in Table~\ref{tab:core_wing_fluxes}.
\begin{table*}
\centering
\caption{Fluxes of the cores and blue and red wings in the profiles of \siii$\lambda9531$ and the near-infrared coronal lines}
\begin{tabular}{l|ccccccccc}
\hline
& \siii\ & \multicolumn{3}{c}{\se} & \sn\ & \sit\ & \multicolumn{3}{c}{\silvi} \\
& & \multicolumn{3}{c}{$\overbrace{\hspace{4.5cm}}$} & & & \multicolumn{3}{c}{$\overbrace{\hspace{4.5cm}}$} \\
Date & Core & Blue & Core & Red & Core & Core & Blue & Core & Red \\
& & \textcolor{black}{wing} & & \textcolor{black}{wing} & & & \textcolor{black}{wing} & & \textcolor{black}{wing} \\
\hline
Mean spectrum & $33\pm3$ & $1.7\pm0.4$ & $7.3\pm0.4$ & $2.8\pm0.7$ & $4.3\pm0.2$ & $7.9\pm0.6$ & $3.6\pm0.4$ & $13.0\pm0.6$ & $8.7\pm0.6$ \\
\hline
\textcolor{black}{602} \textcolor{black}{16/08/02} & $34\pm2$ & $3.5\pm\textcolor{black}{1.8}$ & $8.9\pm\textcolor{black}{3.4}$ & $4.4\pm\textcolor{black}{3.2}$ & $4.5\pm\textcolor{black}{1.3}$ & $7.7\pm\textcolor{black}{2.5}$ & $2.0\pm\textcolor{black}{1.1}$ & $13.2\pm\textcolor{black}{1.7}$ & $18.7\pm\textcolor{black}{2.4}$ \\
\textcolor{black}{611} \textcolor{black}{16/08/11} & $35\pm2$ & $1.5\pm\textcolor{black}{1.9}$ & $7.2\pm\textcolor{black}{5.0}$ & $4.7\pm\textcolor{black}{2.9}$ & - & $5.7\pm\textcolor{black}{2.6}$ & $1.7\pm\textcolor{black}{1.3}$ & $12.4\pm\textcolor{black}{1.8}$ & $16.7\pm\textcolor{black}{2.7}$ \\
\textcolor{black}{743} \textcolor{black}{16/12/21} & - & - & - & - & $3.6\pm\textcolor{black}{1.3}$ & $6.0\pm\textcolor{black}{2.5}$ & - & - & - \\
\textcolor{black}{759} \textcolor{black}{17/01/06} & $34\pm2$ & $<\textcolor{black}{4.6}$ & $8.0\pm\textcolor{black}{3.7}$ & $7.0\pm\textcolor{black}{4.1}$ & $3.6\pm\textcolor{black}{1.8}$ & $7.9\pm\textcolor{black}{2.7}$ & $2.4\pm\textcolor{black}{0.5}$ & $13.3\pm\textcolor{black}{1.7}$ & $<\textcolor{black}{2.1}$ \\
\textcolor{black}{773} \textcolor{black}{17/01/20} & $34\pm2$ & $4.1\pm\textcolor{black}{1.8}$ & $5.3\pm\textcolor{black}{3.4}$ & $<\textcolor{black}{6.2}$ & $5.2\pm\textcolor{black}{1.4}$ & $7.2\pm\textcolor{black}{2.7}$ & $3.9\pm\textcolor{black}{0.7}$ & $11.3\pm\textcolor{black}{1.7}$ & $3.4\pm\textcolor{black}{1.8}$ \\
\textcolor{black}{777} \textcolor{black}{17/01/24} & $33\pm2$ & - & - & - & - & $6.5\pm\textcolor{black}{3.6}$ & $4.7\pm\textcolor{black}{1.1}$ & $1\textcolor{black}{4.7}\pm\textcolor{black}{2.3}$ & $5.5\pm\textcolor{black}{2.4}$ \\
\textcolor{black}{789} \textcolor{black}{17/02/05} & $33\pm1$ & $3.4\pm\textcolor{black}{2.2}$ & $6.6\pm\textcolor{black}{3.3}$ & $\textcolor{black}{<7.2}$ & $6.1\pm\textcolor{black}{1.7}$ & $8.1\pm\textcolor{black}{2.6}$ & $4.3\pm\textcolor{black}{0.8}$ & $13.9\pm\textcolor{black}{1.7}$ & $3.4\pm\textcolor{black}{1.8}$ \\
\textcolor{black}{799} \textcolor{black}{17/02/15} & $35\pm2$ & $<\textcolor{black}{3.8}$ & $4.9\pm\textcolor{black}{3.3}$ & $3.6\pm\textcolor{black}{3.5}$ & $2.5\pm\textcolor{black}{1.3}$ & $6.6\pm\textcolor{black}{2.6}$ & $4.3\pm\textcolor{black}{0.9}$ & $14.7\pm\textcolor{black}{1.9}$ & $4.9\pm\textcolor{black}{2.0}$ \\
\textcolor{black}{804} \textcolor{black}{17/02/24} & $34\pm1$ & $2.4\pm\textcolor{black}{1.7}$ & $4.5\pm\textcolor{black}{3.3}$ & $3.4\pm\textcolor{black}{3.4}$ & $4.1\pm\textcolor{black}{1.5}$ & $5.5\pm\textcolor{black}{2.6}$ & $2.8\pm\textcolor{black}{1.1}$ & $11.5\pm\textcolor{black}{1.8}$ & $3.8\pm\textcolor{black}{2.4}$ \\
\textcolor{black}{829} \textcolor{black}{17/03/17} & $35\pm2$ & $2.9\pm\textcolor{black}{1.9}$ & $5.3\pm\textcolor{black}{3.2}$ & $3.7\pm\textcolor{black}{3.3}$ & $4.2\pm\textcolor{black}{1.7}$ & $5.8\pm\textcolor{black}{2.6}$ & $2.3\pm\textcolor{black}{0.7}$ & $11.3\pm\textcolor{black}{1.9}$ & $2.2\pm\textcolor{black}{1.7}$ \\
\textcolor{black}{834} \textcolor{black}{17/03/22} & $35\pm2$ & $<\textcolor{black}{2.7}$ & $6.3\pm\textcolor{black}{3.3}$ & $5.3\pm\textcolor{black}{2.4}$ & $5.5\pm\textcolor{black}{1.5}$ & $8.2\pm\textcolor{black}{3.0}$ & $2.3\pm\textcolor{black}{1.1}$ & $15.3\pm\textcolor{black}{2.0}$ & $\textcolor{black}{<5.1}$ \\
\textcolor{black}{882} \textcolor{black}{17/05/09} & $33\pm2$ & - & - & - & $3.8\pm\textcolor{black}{1.8}$ & $6.7\pm\textcolor{black}{2.6}$ & $3.1\pm\textcolor{black}{0.7}$ & $12.0\pm\textcolor{black}{1.7}$ & $4.1\pm\textcolor{black}{1.5}$ \\
\textcolor{black}{898} \textcolor{black}{17/05/25} & $33\pm3$ & $4.0\pm\textcolor{black}{2.7}$ & $8.9\pm\textcolor{black}{4.7}$ & $7.0\pm\textcolor{black}{4.5}$ & $5.2\pm\textcolor{black}{1.7}$ & $9.1\pm\textcolor{black}{3.1}$ & $4.0\pm\textcolor{black}{1.1}$ & $15.3\pm\textcolor{black}{1.7}$ & $4.9\pm\textcolor{black}{2.4}$ \\
\textcolor{black}{914} \textcolor{black}{17/06/10} & $34\pm2$ & - & - & - & - & - & $3.7\pm\textcolor{black}{0.8}$ & $13.2\pm\textcolor{black}{1.8}$ & $6.7\pm\textcolor{black}{1.7}$ \\
\textcolor{black}{924} \textcolor{black}{17/06/20} & $33\pm2$ & $\textcolor{black}{<6.4}$ & $12.5\pm\textcolor{black}{6.9}$ & $6\textcolor{black}{.3\pm5.0}$ & $3.2\pm\textcolor{black}{1.9}$ & $7.3\pm\textcolor{black}{4.5}$ & $4.8\pm\textcolor{black}{1.3}$ & $10.5\pm\textcolor{black}{1.7}$ & $9.9\pm\textcolor{black}{2.9}$ \\
\textcolor{black}{932} \textcolor{black}{17/06/28} & $33\pm2$ & $3.0\pm\textcolor{black}{1.5}$ & $7.8\pm\textcolor{black}{3.3}$ & $7.7\pm\textcolor{black}{2.8}$ & $1.6\pm\textcolor{black}{1.2}$ & $7.0\pm\textcolor{black}{3.4}$ & $5.6\pm\textcolor{black}{1.1}$ & $9.7\pm\textcolor{black}{1.8}$ & $9.3\pm\textcolor{black}{2.5}$ \\
\textcolor{black}{937} \textcolor{black}{17/07/03} & $33\pm2$ & $2.7\pm\textcolor{black}{1.6}$ & $9.4\pm\textcolor{black}{3.4}$ & $5.3\pm\textcolor{black}{2.9}$ & - & - & - & - & - \\
\hline
RMS & 0.84 (2.5\%) & 0.\textcolor{black}{62} (\textcolor{black}{20}\%) & 2.\textcolor{black}{25} (3\textcolor{black}{2}\%) & 1.5\textcolor{black}{0} (\textcolor{black}{29}\%) & 1.25 (3\textcolor{black}{1}\%) & 1.0\textcolor{black}{3} (15\%) & 1.18 (3\textcolor{black}{5}\%) & 1.\textcolor{black}{69} (13\%) & 5.\textcolor{black}{22} (\textcolor{black}{79}\%) \\
\hline
\end{tabular}
\parbox[]{0.95\textwidth}{\textcolor{black}{The observation date is given in the formats $\mathrm{MJD}-57000$ and YY/MM/DD and the fluxes are in units $10^{-15}$~erg\,s$^{-1}$\,cm$^{-2}$.}}
\label{tab:core_wing_fluxes}
\end{table*}
\subsection{Optical iron coronal lines}
\label{sec:optical}
During one of our other observing programs, we obtained a high-resolution optical spectrum of \ngc\ in 2015 March at the William Herschel Telescope (WHT), a 4~m telescope on La Palma.
We dereddened this spectrum using $E(B-V)=0.0168$ (\citealt{SF11}) and the extinction curve of \cite{CCM89}.
The spectrum contains six high-ionisation forbidden lines of iron:
[Fe\,\textsc{vii}]$\lambda3759,5159,5721,6087$, [Fe\,\textsc{x}]$\lambda6374$ and [Fe\,\textsc{xi}]$\lambda7892$.
With the exception of [Fe\,\textsc{x}] these lines are unblended so we simply measure their profiles with a single Gaussian plus linear local continuum.
[Fe\,\textsc{x}] is blended with [O\,\textsc{i}]$\lambda6363$ so we first determine the profile of [O\,\textsc{i}]$\lambda6300$ and use this as a template for [O\,\textsc{i}]$\lambda6363$, assuming a 1:3 flux ratio.
We assume \oiii$\lambda5007$ to be at rest within \ngc\ and measure all other line velocity shifts relative to it.
The FWHMs of the lines are all corrected for instrumental broadening, assuming this is $\sim250$~km\,s$^{-1}$ for the ISIS instrument on the WHT.
The properties of these lines are reported in Table~\ref{tab:opt_fe}.
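Assuming both the line and the instrumental profile are Gaussian, the instrumental broadening correction applied above amounts to a subtraction in quadrature, sketched below (the example input width is hypothetical).

```python
import numpy as np

def correct_fwhm(fwhm_obs, fwhm_inst=250.0):
    """Remove the instrumental broadening in quadrature (all widths in
    km/s), assuming Gaussian line and instrumental profiles."""
    return np.sqrt(fwhm_obs**2 - fwhm_inst**2)

# e.g. a hypothetical observed width of 670 km/s through ISIS
# (~250 km/s resolution)
print(correct_fwhm(670.0))   # ~621.6 km/s intrinsic width
```

Note that the correction is small when the observed width is well above the instrumental resolution, but grows rapidly as the two become comparable, which is why marginally resolved widths are quoted as upper limits.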
\begin{table*}
\centering
\caption{Properties of optical iron coronal lines and measurements from the 2015 WHT spectrum}
\label{tab:opt_fe}
\begin{tabular}{ccccccc}
\hline
Ion & $\lambda$ & $\chi$ & $\log(n_\mathrm{crit})$ & Velocity offset & FWHM & Flux \\
& [\AA] & [eV] & [cm$^{-3}$] & [km\,s$^{-1}$] & [km\,s$^{-1}$] & [$10^{-15}$ erg\,s$^{-1}$\,cm$^{-2}$] \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) \\
\hline
{[Fe\,\textsc{vii}]} & 3759 & 99.1 & 7.60 & $-207\pm17$ & $559\pm38$ & $16.9\pm1.5$ \\
" & 5159 & 99.1 & 6.54 & $-126\pm21$ & $<250$ & $3.01\pm0.80$ \\
" & 5721 & 99.1 & 7.57 & $-111\pm11$ & $485\pm18$ & $10.5\pm0.5$ \\
" & 6087 & 99.1 & 7.64 & $-201\pm9$ & $632\pm13$ & $20.1\pm0.5$ \\
{[Fe\,\textsc{x}]} & 6374 & 233.6 & 8.64 & $-188\pm15$ & $605\pm46$ & $5.24\pm0.51$ \\
{[Fe\,\textsc{xi}]} & 7892 & 262.1 & 8.81 & $-212\pm24$ & $579\pm57$ & $4.44\pm0.56$ \\
\hline
\end{tabular}
\parbox[]{11cm}{The columns are: (1) Ion; (2) vacuum wavelength; (3) ionisation potential; (4) critical density for $\log(T_\mathrm{e}/\mathrm{K})=4$; (5) velocity offset of the emission line peak relative to \oiii$\lambda5007$~\AA; (6) full width at half maximum of the emission line component corrected for an instrumental broadening of 250~\kms; and (7) line flux. We give $1\sigma$ errors.}
\end{table*}
\section{Results} \label{results}
\subsection{The \textcolor{black}{mean} coronal line profiles}
\label{sec:profiles}
Fig. \ref{fig:coronal_line_Gauss} shows the profiles of the four near-IR coronal lines as observed in the mean spectrum. In all four cases, the central (core) part can be modelled well with a single Gaussian. It is intriguing to notice that both the \se\ and \silvi\ coronal lines show significant excess flux blue- and red-ward of the core, whereas the \sn\ and \sit\ coronal lines do not.
In Fig.~\ref{fig:ip} we show the coronal line and \siii\ widths and velocity shifts as a function of both their ionisation potentials and critical densities.
The coronal line (core) profiles are all broader than the \siii\ doublet lines, with line widths of $\mathrm{FWHM}\sim500$--$800$~\kms, compared with $\approx370$~\kms\ for \siii.
Whilst we have only five data points, Fig.~\ref{fig:ip} shows a trend of increasing FWHM with ionisation potential up to $\approx300$~eV.
It is clear in Fig.~\ref{fig:ip} that all four coronal lines have emission line peaks blue-shifted with respect to \siii.
The \se, \sn\ and \sit\ lines have very similar average velocity offsets of $\approx-130$~\kms; curiously, the lowest-ionisation potential coronal line \silvi\ has a much greater velocity offset of $\approx-380$~\kms, so there is no simple relationship between the velocity shift and ionisation potential.
We also measured the velocity shifts of the other forbidden, narrow emission lines in the mean spectrum, with none of them having a significant velocity shift (see Table~\ref{tab:mean_fits}).
Although it is clear that the coronal lines have higher critical densities and both greater line widths and velocity shifts than \siii, no simple trend with critical density is observed between the four coronal lines.
\subsection{The coronal line variability}
\label{sec:variability}
The low-ionisation narrow forbidden emission lines are assumed to be produced in AGN at distances from the central engine large enough to not show significant flux variability over the course of decades. But the optical narrow-line reverberation results for \ngc\ show that the \oiii~$\lambda5007$ line emitting region is more nuclear than expected, with an extent of only 1--3~pc and a density of $\sim10^5$~cm$^{-3}$ \citep{Pet13}. Since our campaign extends over the course of roughly a year, we can still assume that the flux of the \siii~$\lambda9531$ line is not variable. This expectation is confirmed by the results in Table~\ref{tab:core_wing_fluxes} and Fig.~\ref{fig:siii_variations}:
the RMS variability of \siii$\lambda9531$ is $\approx8.4\times10^{-16}$~erg\,s$^{-1}$\,cm$^{-2}$, only around 2.5~per~cent of the mean line flux.
The width of this line is also stable during our campaign, with two-thirds of data points being consistent with the mean value to within 1$\sigma$.
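The RMS variability quoted here is simply the scatter of the lightcurve about its mean flux. A sketch using the (rounded) \siii\ core fluxes from Table~\ref{tab:core_wing_fluxes} recovers the tabulated 0.84 (2.5~per cent) to within the rounding of the tabulated fluxes.

```python
import numpy as np

# [S III] 9531 core fluxes over the campaign (10^-15 erg s^-1 cm^-2),
# transcribed (rounded) from the table of core and wing fluxes above;
# epochs without a measurement are omitted.
siii = np.array([34, 35, 34, 34, 33, 33, 35, 34, 35, 35,
                 33, 33, 34, 33, 33, 33], dtype=float)

rms = siii.std()                      # RMS scatter about the mean flux
frac = 100.0 * rms / siii.mean()      # as a percentage of the mean
print(f"RMS = {rms:.2f} ({frac:.1f} per cent)")
```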
\subsubsection{The \silvi\ and \se\ lines}
The \silvi\ line is the strongest of the four near-IR coronal lines in our spectra;
we reliably isolated its profile in 15/18 spectra (Fig.~\ref{fig:si_vi_variations}). We measure a maximum flux variability of $\sim50$~per cent for the \silvi\ line core, with $\mathrm{RMS}\approx13$~per~cent (Table \ref{tab:core_wing_fluxes}).
In addition, we observe variable excess emission blue- and red-ward of the line core. Both of these flux excesses are relatively broad, with the blue- and red-shifted part stretching over $\sim2000$ and $\sim3000$~\kms, respectively (Fig. \ref{fig:min_max_wings}), and in the mean spectrum their combined flux is similar to the flux in the core of the line. The red flux excess variability shows a cyclical behaviour: it is most prominent in the first spectra with a flux exceeding that in the line core, becomes weak by the middle of the campaign and increases in flux again at later epochs (Fig.~\ref{fig:si_vi_variations} and Table~\ref{tab:core_wing_fluxes}).
In the second part of the campaign (after the seasonal gap) the blue excess varies in the same manner as the red excess; however, in the first observing epochs the blue excess is weakest, unlike the red excess, which is strongest.
The \se\ coronal line
was strong enough to be reliably isolated in 13/18 single-epoch spectra (Fig.~\ref{fig:s_viii_variations}). The \se\ line core is much more variable than that of the \silvi\ line, with maximum flux changes by a factor of $\approx3$ (Table~\ref{tab:core_wing_fluxes}).
As for the \silvi\ line, we observe variable and broad excess emission blue- and red-ward of the line core (Fig.~\ref{fig:min_max_wings}), but in the mean spectrum their combined flux is only about half that of the core of the line. Given that the \se\ line is also on average a factor of $\sim2$ weaker than the \silvi\ line, the isolation of the excess fluxes in the single-epoch spectra is more problematic. However, there is a trend for this excess emission to be strongly variable (by factors of a few) in the \se\ coronal line as well.
In Fig.~\ref{fig:min_max_wings} we overplot all of the single-epoch line profiles of both \se\ and \silvi\ for easier comparison.
In \silvi\ it is clear that most of the variability in the profile is in the broad wings, with the narrow core showing relatively little variation from the mean.
This is reflected in the RMS variability measured for the different line components (Table~\ref{tab:core_wing_fluxes}).
Because of the noisier data, it is more difficult to see the same behaviour in the \se\ line, and the RMS variabilities of the core and the blue and red wings are similar ($\approx30$~per~cent).
In Fig.~\ref{fig:min_max_wings} we highlight two spectra with strong and weak wings but similar cores, so visually at least a similar trend as observed in \silvi\ can be discerned.
\subsubsection{The \sit\ and \sn\ lines}
The \sit\ line is the second strongest coronal line in our spectra and is free of blends. However, it is located in a region of telluric absorption, which is strongest blueward of it.
We extracted its profile in 15/18 single-epoch spectra (Figure~\ref{fig:si_x_variations}).
\textcolor{black}{Its RMS flux variability (15~per cent) is very similar to that of the \silvi\ core.}
\sn\ is the weakest near-IR coronal line in our study but was strong enough to be reliably isolated in 13/18 single-epoch spectra.
It shows a similar variability range to the \se\ line core, with maximum flux changes by a factor of $\sim3$ and RMS of 3\textcolor{black}{1}~per~cent, but without a clear trend (Fig.~\ref{fig:s_ix_variations}).
\subsubsection{Trends in coronal line variability}
\label{sec:variability-trends}
\textcolor{black}{As can be seen from Table~\ref{tab:core_wing_fluxes}, the coronal line cores show a higher degree of variability than that of \siii~$\lambda9531$: $\mathrm{RMS}\approx10$--30~per cent for the coronal lines compared with $\mathrm{RMS}=2.5$~per cent for \siii.
Figs.~\ref{fig:si_vi_variations}, \ref{fig:s_viii_variations}, \ref{fig:si_x_variations} and \ref{fig:s_ix_variations} suggest that there is a correspondence between the measured FWHM and flux of the coronal line cores, in the sense that as the flux increases the line becomes broader.
However, this variability is not statistically significant and simple fits for a constant line flux return $\chi^2_\nu=0.27$, 0.77, 0.14 and 0.85 for the \se, \sn, \sit\ and \silvi\ line cores, respectively.
Given the substantial uncertainties, we cannot make strong statements about the variability of the narrow line cores.}
\textcolor{black}{It is clear from Figs.~\ref{fig:si_vi_variations}, \ref{fig:s_viii_variations} and \ref{fig:min_max_wings} that most of the variability in the coronal line profiles is in the wings of \se\ and \silvi.
Flux variations in the \silvi\ red wing are statistically significant and we can reject the null hypothesis that the variability is purely statistical with greater than 99.99~per cent confidence.
A fit of constant flux to the \silvi\ blue wing lightcurve is poor ($\chi^2_\nu=1.45$), suggesting variability of the blue wing also, but the variability is less significant than in the red wing ($p\approx0.12$).}
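The constant-flux test used above can be sketched as follows, with synthetic lightcurves rather than our measurements; the p-values quoted in the text follow from comparing the resulting $\chi^2$ with its expected distribution for the given degrees of freedom.

```python
import numpy as np

def constant_flux_chi2(flux, err):
    """Reduced chi-squared of a constant-flux model (the inverse-variance
    weighted mean); chi2_nu >> 1 indicates significant variability."""
    f, e = np.asarray(flux, float), np.asarray(err, float)
    w = 1.0 / e**2
    mean = np.sum(w * f) / np.sum(w)
    return np.sum(w * (f - mean) ** 2) / (f.size - 1)

# Synthetic lightcurves: a steady source with unit error bars should give
# chi2_nu ~ 1, while an added secular trend drives chi2_nu well above 1.
rng = np.random.default_rng(1)
steady = 10.0 + rng.normal(0.0, 1.0, 15)
trend = steady + np.linspace(0.0, 10.0, 15)
print(constant_flux_chi2(steady, np.ones(15)))   # ~1
print(constant_flux_chi2(trend, np.ones(15)))    # >> 1
```

The weighted mean is the maximum-likelihood constant model for Gaussian errors, so $\chi^2_\nu\ll1$ (as for several of the line cores) signals overestimated uncertainties rather than anomalously steady emission.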
In Fig.~\ref{fig:dust_wings} we show the variability of the \silvi\ \textcolor{black}{and \se} broad red wing\textcolor{black}{s} and compare them to that of the hot dust.
\textcolor{black}{Because of the noisier data, trends in the raw \se\ red wing lightcurve (Fig.~\ref{fig:s_viii_variations}d) are less clear.
We have therefore binned the lightcurve to better show the long-term trend and in Fig.~\ref{fig:dust_wings} we show the error-weighted average and standard deviations from the mean of consecutive flux points, as indicated.}
The dust lightcurve is derived from spectroscopic data, with fluxes integrated over a narrow, emission-line-free region near the centre of the \textit{H} photometric band (1.55--1.60~\um; \citealt{Landt19}).
We observe a strong similarity in the shapes of the dust and the coronal line red wing lightcurves \textcolor{black}{(particularly \silvi)}: the fluxes are initially high, but weaken substantially during the gap in observations between MJD 57621 and 57743; the fluxes remain low in the middle of the campaign, before a systematic rise from MJD 57882 onwards.
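The error-weighted binning applied to the \se\ red wing lightcurve can be sketched as below; the bin size and the decision to drop any trailing partial bin are our choices for illustration.

```python
import numpy as np

def bin_lightcurve(t, flux, err, n=3):
    """Bin n consecutive points: error-weighted mean flux per bin, with
    the standard deviation about the bin mean as the scatter. Any
    trailing partial bin is dropped (a simplifying choice)."""
    tb, fb, sb = [], [], []
    for i in range(0, len(t) - n + 1, n):
        w = 1.0 / np.asarray(err[i:i+n], float) ** 2
        fb.append(np.sum(w * flux[i:i+n]) / np.sum(w))
        tb.append(np.mean(t[i:i+n]))
        sb.append(np.std(flux[i:i+n]))
    return np.array(tb), np.array(fb), np.array(sb)

# With equal errors the weighted mean reduces to the plain mean.
t = np.arange(6.0)
flux = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
tb, fb, sb = bin_lightcurve(t, flux, np.ones(6))
print(fb)   # [2. 5.]
```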
\subsection{The black hole mass from near-IR emission line ratios}
\label{sec:mass}
\cite{Rod20} presented a novel method to calculate black hole masses using the flux ratio of coronal lines to low-ionisation permitted lines (see also \citealt{Prieto22}).
From flux measurements made by \cite{Riffel06}, \cite{Rod20} calculate the flux ratio \silvi/\brg$=(9.97\pm0.86)/(16.27\pm2.0)=0.61\pm0.09$
from which they determined a black hole mass of $6.7\times10^6$~M$_\odot$.
Considering the total flux (core $+$ wings) in the \silvi\ line, we calculate \silvi/\brg$=(25.3\pm0.9)/(43.3\pm0.3)=0.58\pm0.02$ from our mean spectrum,
which is consistent within the uncertainties with the value determined by \cite{Rod20} and therefore gives the same black hole mass.
Therefore, although the absolute fluxes of the lines differ by more than a factor of two between the 2002 IRTF spectrum of \cite{Riffel06} and our 2016--17 mean spectrum, the flux ratio has not changed, as one would expect if the flux ratio reflects the black hole mass.
We note that using the \silvi\ core flux only results in the flux ratio $=0.31\pm0.01$ and so the estimated black hole mass is approximately four times greater at $2.6\times10^7$~M$_\odot$, much closer to the $3.2\times10^{7}$~M$_\odot$ obtained by \cite{Pancoast14} via reverberation mapping.
The mass obtained from the total \silvi\ line flux is 0.66~dex discrepant with the \cite{Pancoast14} estimate, whereas the mass from the core flux is only 0.09~dex discrepant.
These compare with a 0.44~dex scatter in black hole mass reported by \cite{Rod20} for their scaling relation.
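As a simple arithmetic check of the discrepancies quoted above (using the rounded masses given in the text, hence the slight offset from the quoted 0.66~dex):

```python
import numpy as np

# Flux ratios from our mean spectrum (units of 10^-15 erg s^-1 cm^-2)
ratio_total = 25.3 / 43.3   # Si VI core + wings over Br gamma -> ~0.58
ratio_core = 13.0 / 43.3    # Si VI core only                  -> ~0.30

# Masses quoted in the text: from the total-flux ratio, from the core-only
# ratio, and the Pancoast et al. reverberation-mapping value (M_sun).
m_total, m_core, m_rm = 6.7e6, 2.6e7, 3.2e7
print(np.log10(m_rm / m_total))   # ~0.68 dex (0.66 dex with unrounded masses)
print(np.log10(m_rm / m_core))    # ~0.09 dex
```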
\section{Photoionisation models}
\label{sec:cloudy}
AGN coronal lines are thought to be photoionised by the soft X-ray nuclear continuum.
Since ionic species are most effectively produced by photons with energies just above their ionisation potentials, the flux ratios of lines with different ionisation potentials will depend on the shape of the ionising SED.
The shape of the ionising SED depends not just on the properties of the accretion flow
(the black hole mass, accretion rate, electron temperature and optical depths of the warm and hot coronae, etc.) but also on extrinsic factors such as obscuration of the ionising source by gas and dust.
This is of particular relevance in studies of \ngc\ in which a persistent obscurer, first reported by \cite{Kaastra14}, strongly absorbs the nuclear soft X-ray emission along certain lines of sight.
\cite{Kaastra14} described the obscurer as a persistent, clumpy, ionised gas outflow on scales of only a few light-days from the nucleus.
\cite{Dehghanian19} interpreted it as the upper, line-of-sight component of an accretion disc wind; the wind originates interior to the BLR and its dense base shields the BLR from the nuclear continuum source.
We know that this obscurer was present during the 2016--17 near-IR spectroscopic campaign because of the low-flux state \ngc\ was observed in and the presence of persistent, broad He\,\textsc{i}~$\lambda1.08$~\um absorption associated with the obscurer (\citealt{Wildy21}).
\cite{Mehdipour15} presented two SEDs based on 2013--14 X-ray observations: one seen through the obscurer, and the intrinsic nuclear SED.
We show these SEDs in Fig.~\ref{fig:seds}, where we mark also the ionisation potentials of the near-IR coronal lines.
Clearly the coronal line emission would be strongly affected by the obscurer if it intervenes between the coronal line gas and the X-ray source.
We do not know \textit{a priori} the location of the coronal line emitting gas, so we can explore whether the line emission predicted by the obscured or the unobscured SED better matches our observations. We therefore used both SEDs in \cloudy\ to make predictions about the resultant line emission.
\subsection{The \cloudy\ parameter region}\label{sec:cloudy_params}
We used the fluxes of the coronal line cores reported in Table~\ref{tab:mean_fits}\footnote{For \se\ and \silvi\ these are the line core fluxes (excluding the wings).}, and calculated the equivalent widths of the lines relative to the 1215~\si\angstrom\ continuum flux of the \cite{Mehdipour15} SEDs. Subsequently, we searched the parameter space for regions which reproduced the observed emission line equivalent widths within uncertainties of $\approx30$~per~cent.
Because we expect that the \siii-emitting gas is not cospatial with the coronal line gas, the \cloudy\ predictions for \siii\ emission can be used to discriminate between models. For example,
regions that predict substantial \siii\ emission or overpredict the observed flux may be ruled out as potential sites for the coronal line gas. On the other hand, regions that produce little or no \siii\ are still viable, since this emission line flux could come from elsewhere.
We additionally considered the equivalent widths of the optical Fe coronal lines (Section~\ref{sec:optical}) to aid our search.
We note that these lines were measured in a spectrum from 2015 March and are therefore not contemporaneous with our near-infrared data from 2016--17.
Flux variability of a factor $\sim2$ was seen in the [Fe\,\textsc{vii}] lines over $\approx5$~years (\citealt{Landt15b})\textcolor{black}{.}
\textcolor{black}{Given this level of intrinsic variability (and some uncertainty in the absolute flux scaling of the optical spectrum) we reasonably expect the optical coronal lines to have fluxes within a factor $\approx2$--3 of those we measure in the 2015 spectrum.}
Our \cloudy\ models are ``luminosity cases'' similar to the studies of \cite{Ferg97} and \cite{bald95}.
This means that we have used the obscured and unobscured SEDs with their corresponding bolometric luminosities estimated by \cite{Mehdipour15}.
In particular, for the unobscured case,
the bolometric luminosity is
$\log(L_{\mathrm{bol}}/\mathrm{erg\,s}^{-1})=44.2$, whereas, for the
obscured case, it decreases to
$\log(L_{\mathrm{bol}}/\mathrm{erg\,s}^{-1})=43.5$. Below, we compare
all results to the observed values of
the coronal lines from the 2016--17
campaign of \cite{Landt19}. While a
three-year time span between the SEDs
and the coronal lines probably slightly
affects the results, it is also possible
that the SEDs were quite different in
2016, which would lead to very different
predictions. For this reason, we choose
not to fine-tune the \cloudy\ results, and
we only propose them as a possible
solution while we emphasize the
importance of the approach. For the
future application of the method, it
would be beneficial to measure both the
emission lines and ionising continuum in the same time period.
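In such luminosity-case grids, scanning cloud distance and density is effectively a scan of the ionisation parameter $U = Q(\mathrm{H})/(4\pi R^{2} n_\mathrm{H} c)$; note also that the two SEDs differ in bolometric luminosity by a factor $10^{0.7}\approx5$. A sketch of these scalings, with an assumed (illustrative, not fitted) ionising photon rate $Q(\mathrm{H})$:

```python
import math

C_CM_S = 2.998e10  # speed of light [cm/s]

def ionisation_parameter(q_h, r_cm, n_h):
    """Dimensionless ionisation parameter U = Q(H) / (4 pi R^2 n_H c)."""
    return q_h / (4.0 * math.pi * r_cm**2 * n_h * C_CM_S)

# The two SEDs differ in bolometric luminosity by 10^(44.2 - 43.5)
lum_ratio = 10**(44.2 - 43.5)
print(f"unobscured/obscured luminosity ratio: {lum_ratio:.1f}")

# U falls as R^-2 at fixed Q(H) and n_H: one dex further out, U drops 100x
q_h = 1e54  # assumed ionising photon rate [photons/s], illustrative only
u_near = ionisation_parameter(q_h, 10**17.0, 10**7.0)
u_far = ionisation_parameter(q_h, 10**18.0, 10**7.0)
print(f"U ratio for 1 dex in radius: {u_near / u_far:.0f}")
```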
Another source of uncertainty is the
column density: the hydrogen column
density of the possible coronal line
emitting cloud is unknown. The stopping
criterion for the \cloudy\ models is
the column density of the cloud,
which is assumed to be
$\log(N_\mathrm{H}/\mathrm{cm}^{-2})=23$
\textcolor{black}{during most of our
calculations}. This is a typical value
assumed for the coronal line emitting
regions; however, it is not a measured
quantity and may differ, \textcolor{black}{so we have also
created models using
$\log(N_\mathrm{H}/\mathrm{cm}^{-2})=22$ and 22.5}.
The cloud has an ionisation structure
with the highest ionisation lines
forming near its illuminated face and
the lowest ionisation lines forming near
the shielded face. Decreasing the column
density of the cloud will not \textcolor{black}{dramatically} affect the
higher ionisation lines but will reduce
the intensity of the low-ionisation
lines. \textcolor{black}{To check this claim, we created two
obscured models with different column
densities ($\log[N_\mathrm{H}/\mathrm{cm}^{-2}]=22$ and 23),
and similar hydrogen density of $\log(n_\mathrm{H}/\mathrm{cm}^{-3})=5$. Both
clouds were located at the same distance
of $\log(R/\mathrm{cm})=18$. The \cloudy\ calculations
indeed show that decreasing the column
density by 1~dex (from $\log[N_\mathrm{H}/\mathrm{cm}^{-2}]=23$ to 22) reduces the EWs of the
high-ionisation lines by only 15--30~per cent,
while the low-ionisation lines become at
least 60~per cent weaker.}
Therefore, we
think that the variation of the column
density does not \textcolor{black}{strongly} affect the
high-ionisation lines, but does impact the
low-ionisation lines.
\textcolor{black}{It is worth mentioning
that tests have shown that the cloud can
be highly ionised in some cases. In such
cases, the temperature at the
illuminated face of the cloud becomes very high ($>10^{6}$~K in some cases), so that almost no
near-IR and UV high-ionisation lines are produced at the
illuminated face.
In those cases, the
EWs of all coronal lines will change significantly
when the column density is varied.
}
Fig.~\ref{fig:cloudy} shows the results for \textcolor{black}{the models with
$\log(N_\mathrm{H}/\mathrm{cm}^{-2})=23$}. The
obscured case is shown in plots \ref{fig:cloudy}(a) and (b), and the unobscured case is shown in plots (c) and (d).
Plots (a) and (c) illustrate how the equivalent width of each near-IR coronal line (and also \siii) depends on the location (vertical axis) and the hydrogen density (horizontal axis) of the cloud.
The lower-right panel in these plots shows the contours for \se\ and \silvi\ overlaid.
Plots (b) and (d) show the contours for the optical coronal lines ({[Fe\,\textsc{vii}]} 3759\AA, {[Fe\,\textsc{x}]} 6374\AA, and {[Fe\,\textsc{xi}]} 7892\AA). The lower-right panels of this series of plots have been created using the approach introduced by \cite{Dehghanian20}.
In these panels, each coloured line shows the observed value of one specific coronal line. By tracing these lines, one may be able to find the place where they (almost) cross over each other. The crossover point indicates the location and the density of the cloud that produces the observed line strengths.
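The crossover search can be automated: given model grids of equivalent width over location and density for two lines, the grid cell minimising the joint deviation from the observed values marks the crossover. A schematic sketch with synthetic (toy) grids standing in for the \cloudy\ outputs:

```python
import numpy as np

# Synthetic EW grids over log R (rows) and log n_H (columns); in practice
# these are the photoionisation model predictions for each line.
log_r = np.linspace(16.0, 19.0, 31)
log_n = np.linspace(3.0, 10.0, 71)
R, N = np.meshgrid(log_r, log_n, indexing="ij")
ew_line1 = 100.0 * np.exp(-((R - 17.3)**2 + (N - 7.4)**2))  # toy model
ew_line2 = 80.0 * np.exp(-((R - 17.2)**2 + (N - 7.6)**2))   # toy model

obs1, obs2 = 90.0, 70.0  # "observed" EWs, illustrative only

# Joint fractional deviation; its minimum marks the crossover point
dev = np.abs(ew_line1 / obs1 - 1.0) + np.abs(ew_line2 / obs2 - 1.0)
i, j = np.unravel_index(np.argmin(dev), dev.shape)
print(f"best cloud: log R = {log_r[i]:.1f}, log n_H = {log_n[j]:.1f}")
```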
\subsection{Solution for the coronal line cores}
\label{sec:cloudy-cores}
Guided by Fig.~\ref{fig:cloudy}, we ran models for over a hundred
location--density points at three
column densities,
$\log(N_\mathrm{H}/\mathrm{cm}^{-2})=22$, $22.5$ and $23$.
Tables~\ref{tab:cloudyObs} and
\ref{tab:cloudyUno} show
\textcolor{black}{some of the results from
these \cloudy\ simulations. We
mainly selected those clouds
that produce up to 100~per cent of the
observed near-IR coronal line fluxes, within
a reasonable tolerance of
$\approx30$~per cent. Models 3 and 5
overproduce \se\ beyond this tolerance,
but are still presented in the table since
they produce the right amount of \silvi.}
\textcolor{black}{Regarding the optical
coronal lines ([Fe\,\textsc{vii}], [Fe\,\textsc{x}] and [Fe\,\textsc{xi}]), as explained in Section~\ref{sec:cloudy_params},
since we lack contemporaneous measurements of these lines
we search for models which agree with the observed values to within a factor of $\approx2$--3.}
As both tables indicate, there is no single location--density solution for which the cloud produces all of the observed coronal lines, implying that we must be seeing emission from different regions. By comparing the models, we note that while \se\ and \silvi\ are produced by the same cloud, \sn\ and \sit\ are emitted by different cloud(s). Also, since we could find more solutions in the obscured case, it is most likely that the coronal regions saw the photoionising source through the obscurer during this near-IR spectroscopic campaign.
However, it is also possible that the continuum source was unobscured to some coronal line production sites (i.e.\ the coronal line gas was located between the obscurer and the source), but obscured at other locations (i.e.\ the gas was located between the obscurer and the observer).
The \cloudy\ simulations indicate that models
3 (or 5) and 7 are most likely responsible
for producing the observed values of the
near-IR coronal line cores. In this scenario,
cloud 7 produces almost all of the \sit,
while cloud 3 (or 5) produces \se, \sn\ and
\silvi. \textcolor{black}{Model 4 also produces the right amount of \se\ and \silvi; however, this cloud does not produce enough \sn.}
In general, it is difficult to
produce sufficient \sit~emission without
underestimating the emission from the other
near-IR coronal lines. Furthermore, the
\sit~region requires on average a gas density
reduced by a factor of $\sim 1000$ (and a
somewhat larger distance from the central continuum source).
\begin{table*}
\centering
\caption{A comparison of \cloudy\ coronal line flux predictions to the observed values in the obscured case}
\label{tab:cloudyObs}
\begin{tabular}{llccccccc}
\hline
\multicolumn{2}{c}{Coronal line} & model 1 & model 2 & model 3 & model 4 & model 5 & model 6 & model 7 \\
& & $N_\mathrm{H}=10^{23}$ & $N_\mathrm{H}=10^{22}$ & $N_\mathrm{H}=10^{22.5}$ & $N_\mathrm{H}=10^{22.5}$ & $N_\mathrm{H}=10^{22.5}$ & $N_\mathrm{H}=10^{23}$ & $N_\mathrm{H}=10^{22}$
\\
& &$R=10^{17.3}$
&$R=10^{16.3}$
&$R=10^{16.6}$
&$R=10^{16.7}$
&$R=10^{16.6}$
&$R=10^{17.6}$
&$R=10^{18}$
\\
& &$n_\mathrm{H}=10^{7.4}$
&$n_\mathrm{H}=10^{8}$
&$n_\mathrm{H}=10^{8.3}$
&$n_\mathrm{H}=10^{8.3}$
&$n_\mathrm{H}=10^{8.2}$
&$n_\mathrm{H}=10^{7.5}$
&$n_\mathrm{H}=10^{4.5}$
\\
\multicolumn{2}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) & (8)\\
\hline
{\se} & 0.9914~\um & 70\% & $<1$\% & 151\% & 118\% & 167\% & 3\% & $<1$\% \\
{\sn} & 1.2520~\um & 7\% & 2\% & 46\% & 23\% & 64\% & $<1$\% & 4\% \\
{\silvi} & 1.9625~\um & 129\% & 0\% & 81\% & 79\% & 90\% & 58\% & $<1$\% \\
{\sit} & 1.4300~\um & $<1$\% & 39\% & 5\% & 2\% & 12\% & 0\% & 81\% \\
{[Fe\,\textsc{vii}]} & 3759\AA & 78\% & $<1$\% & 19\% & 18\% & 23\% & 20\% & $<1$\% \\
{[Fe\,\textsc{x}]} & 6374\AA & 23\% & 97\% & 188\% & 77\% & 309\% & $<1$\% & 212\% \\
{[Fe\,\textsc{xi}]} & 7892\AA & $<1$\% & 81\% & 10\% & 3\% & 20\% & $<1$\% & 167\% \\
\hline
\end{tabular}
\parbox[]{14.8cm}{The columns are: (1) ion and wavelength of the transition; (2)--(8) the percentage of the observed flux predicted by the different \cloudy\ models.
\textcolor{black}{For \se\ and \silvi\ the percentages are relative to the line core fluxes (excluding the wings).}}
\end{table*}
\begin{table}
\centering
\caption{A comparison of \cloudy\ coronal line flux predictions to the observed values in the unobscured case}
\label{tab:cloudyUno}
\begin{tabular}{llccc}
\hline
\multicolumn{2}{c}{Coronal line} & model 8 & model 9 & model 10 \\
& & $N_\mathrm{H}=10^{23}$ & $N_\mathrm{H}=10^{23}$ & $N_\mathrm{H}=10^{23}$
\\
& &$R=10^{17.25}$
&$R=10^{16.2}$
&$R=10^{17.55}$
\\
& &$n_\mathrm{H}=10^{9.5}$
&$n_\mathrm{H}=10^{4.4}$
&$n_\mathrm{H}=10^{6.3}$
\\
\multicolumn{2}{c}{(1)} & (2) & (3) & (4)\\
\hline
{\se} & 0.9914~\um & 92\% & 0\% & $<1$\% \\
{\sn} & 1.2520~\um & 3\% & $<1$\% & $<1$\%\\
{\silvi} & 1.9625~\um & 94\% & 0\% & 0\% \\
{\sit} & 1.4300~\um & 0\% & 58\% & 110\% \\
{[Fe\,\textsc{vii}]} & 3759\AA & 8\% & 0\% & 0\% \\
{[Fe\,\textsc{x}]} & 6374\AA & 3\% & 5\% & 10\% \\
{[Fe\,\textsc{xi}]} & 7892\AA & $<1$\% & 13\% & 35\% \\
\hline
\end{tabular}
\parbox[]{8cm}{The columns are: (1) ion and wavelength of the transition; (2)--(4) the percentage of the observed flux predicted by the different \cloudy\ models.
\textcolor{black}{For \se\ and \silvi\ the percentages are relative to the line core fluxes (excluding the wings).}}
\end{table}
\subsection{Solution for the coronal line wings}
\label{sec:cloudy-wings}
\textcolor{black}{The values listed in Tables \ref{tab:cloudyObs} and \ref{tab:cloudyUno} are fluxes relative to the mean flux of the narrow emission line cores.
In our mean spectrum, the wings of \se\ and \silvi\ are $62\pm11$ and $95\pm7$~per cent of the core fluxes, respectively.
The ideal model to explain the coronal line wings will therefore produce a flux in \se\ equivalent to $\approx62$~per cent of its core flux, a flux in \silvi\ equivalent to $\approx95$~per cent of its core flux, and 0~per cent of the core flux of all of the other lines (none of which display wings).
}
For the unobscured SED, we find a solution (model 8, Table~\ref{tab:cloudyUno}) at a radius $\log(R/\mathrm{cm})=17.25$ which produces fluxes in \silvi\ and \se\ that are very similar to the observed fluxes in the broad wings of these lines.
The model predicts a flux equivalent to 94~per~cent of the narrow core flux of \silvi\ will be produced at this location (precisely what we measure for the flux in the wings in the mean spectrum). It also predicts an equivalent of 92~per~cent of the \se\ flux will be produced here (an overprediction, since we measure 62~per~cent).
Furthermore, no \sit\ and very little \sn\ emission (\textcolor{black}{equivalent to} 3~per~cent of the core flux) are produced in this region.
\textcolor{black}{Model 8 is therefore a near-ideal solution for the coronal line wings.}
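The selection logic behind this conclusion can be stated compactly: a candidate wing model must reproduce the measured wing fractions of \se\ and \silvi\ while producing little of the other lines. A sketch using the percentages quoted above:

```python
# Observed wing fluxes as a percentage of the line-core fluxes
observed_wings = {"[Si VI] 1.9625um": 95.0, "[S VIII] 0.9914um": 62.0}

# Model 8 predictions (unobscured SED), as a percentage of the core fluxes
model8 = {"[Si VI] 1.9625um": 94.0, "[S VIII] 0.9914um": 92.0,
          "[S IX] 1.2520um": 3.0, "[Si X] 1.4300um": 0.0}

for line, obs in observed_wings.items():
    pred = model8[line]
    print(f"{line}: model {pred:.0f}% vs observed wing {obs:.0f}% "
          f"(model/observed = {pred / obs:.2f})")
```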
Model 1 in the obscured case (Table~\ref{tab:cloudyObs}) has some similar properties to model 8: it is at a similar radius ($\log[R/\mathrm{cm}]=17.3$), and produces substantial \se\ and \silvi\ emission, with little \sn\ and \sit.
\textcolor{black}{However,} more Fe coronal line emission is produced in this model compared with model 8: $\approx78$~per~cent of the observed [Fe\,\textsc{vii}]$\lambda3759$ and $\approx23$~per~cent of the observed [Fe\,\textsc{x}].
Interestingly, the radius of both of these coronal line emitting clouds $\log(R/\mathrm{cm})\approx17.3$ or 70~light~days is exactly the radius of the inner torus as determined from the hot dust reverberation lags by \cite{Landt19}. However, the coronal line gas density in the unobscured case is similar to what is expected for the dusty torus material, whereas it is a factor of $\sim 100$ lower in the obscured case. We discuss this case further in Section~\ref{sec:discussion-wings}.
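The radius-to-timescale conversion used here is simply $R/c$. A short check of the quoted numbers:

```python
C_CM_S = 2.998e10                      # speed of light [cm/s]
CM_PER_LIGHT_DAY = C_CM_S * 86400.0    # one light day in cm

def log_r_to_light_days(log_r_cm):
    """Convert a radius quoted as log10(R/cm) into light days."""
    return 10**log_r_cm / CM_PER_LIGHT_DAY

# Model 8 radius and the radius quoted as ~70 light days; both are
# consistent with ~70 light days at the precision used in the text.
print(round(log_r_to_light_days(17.25)))  # ~69 light days
print(round(log_r_to_light_days(17.3)))   # ~77 light days
```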
\section{Discussion}
\label{sec:discussion}
We have performed an in-depth study of the near-IR coronal lines in \ngc, making use of spectroscopic data recorded over a year with a roughly weekly cadence.
We were able to study not only the line profile shapes but also the changes in line shapes and fluxes over time.
We have shown that two of the coronal lines (\se\ and \silvi) have prominent broad wings as well as narrow cores whereas only narrow cores are evident on the other two coronal lines.
Whilst the narrow line cores are persistent and clearly detected in all spectra, the wings are highly variable; the flux in the wings is comparable to the flux in the line cores in some epochs and in other epochs the wings are barely visible.
In the case of \silvi\ it is clear that most of the variability in the line profile is in the wings.
Broadly speaking, there must be at least two coronal line regions in \ngc: one which produces the persistent, narrow cores of all four coronal lines and another (presumably more compact) region in which the conditions favour the production of \se\ and \silvi\ emission which we observe as the broad, variable wings.
The existence of multiple sites of coronal line emission in AGN has been proposed before based on comparisons between Seyfert type 1 and type 2, but never before shown for a single object. \cite{Mur98} reported excess [Fe\,\textsc{vii}]~$\lambda6087$ emission in Seyfert 1 nuclei compared with Seyfert 2s, implying that some coronal line emission occurs interior to the dust torus (and so can be seen only in Seyfert 1s) and some coronal line emission occurs beyond the torus.
They proposed three main coronal line regions: the inner face of the dust torus, highly-ionised clumps of gas in the NLR and a further, very extended region on kpc-scales. We take up this interpretation for the coronal line regions in \ngc\ in more detail below and sketch it in Fig. \ref{fig:cartoon}.
\subsection{The coronal line cores}
\label{sec:discussion-cores}
In Section~\ref{sec:meanspec} we carefully analysed the emission line profiles in the high S/N mean spectrum.
In addition to the four near-infrared coronal lines we also measured the low-ionisation forbidden line \siii$\lambda9531$.
In comparison to \siii, the cores of all four coronal lines are broader (ranging from $\mathrm{FWHM}\approx550$--750~\kms) and are blueshifted with respect to it by $\approx120$--380~\kms\ (Fig.~\ref{fig:coronal_line_Gauss}).
The greater widths and blueshifts of the coronal lines with respect to the low-ionisation forbidden lines are commonly seen in AGN spectra and were reported in early studies (e.g.\ \citealt{Grandi78}; \citealt{Pelat81}).
This implies that the coronal lines are produced in different gas to the standard NLR responsible for the emission from low-ionisation species such as \siii\ and \oiii.
This is consistent with the idea that the coronal line emitting gas is part of an outflowing wind on scales more compact than the standard NLR.
\textcolor{black}{Our measurements of the coronal line cores over the duration of the campaign indicate that they are only weakly variable, if at all.
In Section~\ref{sec:variability-trends} we noted a positive relationship between the FWHM and flux over time in the single-epoch measurements of the line cores.
This trend (if real) could be explained if the gas producing the broadest part of the line profiles is more compact (and varies more rapidly) than the gas producing the more persistent narrow cores of the lines.
However, in our data the same FWHM-flux trend can alternatively be explained by the movement of the underlying continuum within its uncertainty range, so we cannot draw any strong conclusion on this point.
In any case, the weakness of the variability is consistent with a geometry in which the majority of the flux in the coronal line cores is emitted at radii $\gtrsim1$~light year, so that the line cores have a variability timescale longer than we can probe with these data.
Then, considering both the profile widths and variability properties, we may deduce that the coronal line cores are produced at a characteristic radius $18\lesssim \log(R/\mathrm{cm}) \lesssim18.5$ from the nucleus, a scale intermediate between the BLR and NLR (Fig.~\ref{fig:rad_vel}).}
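The radius bound here follows from a light-crossing argument: a region of size $R$ cannot vary coherently on timescales much shorter than $R/c$. A sketch of the numbers, assuming the variability timescale is light-travel limited:

```python
import math

C_CM_S = 2.998e10          # speed of light [cm/s]
SECONDS_PER_YEAR = 3.156e7

def log_radius_for_timescale(t_years):
    """log10(R/cm) for a light-travel time of t_years."""
    return math.log10(C_CM_S * t_years * SECONDS_PER_YEAR)

# Cores varying on timescales longer than ~1 yr imply radii of at least:
print(round(log_radius_for_timescale(1.0), 2))  # ~17.98, i.e. log R >~ 18
print(round(log_radius_for_timescale(3.0), 2))  # ~18.45
```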
Although the widths and shifts of the four coronal line cores have common characteristics (being broader and blueshifted with respect to \siii), they are not identical.
It is likely that the lines are not emitted entirely cospatially and different coronal lines originate in different parts of the outflow.
Our photoionisation simulations with \cloudy\ (Section~\ref{sec:cloudy}) also strongly suggest this, since we do not find any single cloud that can account for all of the observed lines.
In the mean spectrum, \sn\ was found to be the narrowest coronal line ($\mathrm{FWHM}=554\pm24$~\kms) and \se\ the broadest ($\mathrm{FWHM}=737\pm39$~\kms).
Consistent with previous studies (e.g.\ \citealt{Rod11}; \citealt{Ferg97}; \citealt{DeRobertis86}; \citealt{Filippenko84}; \citealt{Pen84}; \citealt{Pelat81}), we found a trend of increasing FWHM with the ionisation potential of forbidden emission lines (Fig.~\ref{fig:ip})\footnote{Contrary to \cite{Filippenko84}, we observe a weaker relationship between emission line FWHM and critical density, as can be seen in Fig.~\ref{fig:ip}.}.
\cite{Rod11} noted that this trend extended up to $\chi\approx300$~eV, above which the FWHM of lines decreased or plateaued, which is what we observe in Fig.~\ref{fig:ip}: \sit\ ($\chi=351.1$~eV) is no broader than \se\ ($\chi=281.0$~eV) and \sn\ ($\chi=328.8$~eV) is narrower than \se.
Since \se\ has the highest critical density of the lines studied here, some of its emission may arise from higher-density gas nearer to the nucleus where orbital motions are greater.
In this picture it is therefore expected that \se\ will be the broadest coronal line, as we observe.
This result suggests a degree of stratification of the coronal line emitting gas.
\cite{Pelat81} found a relationship between line width and ionisation potential, as well as line width and velocity shift, with the general trend that higher-ionisation lines were broader and more strongly blueshifted.
We see such a trend to first order (the coronal lines are broader than, and blueshifted with respect to, \siii); however, the reverse trend is seen within the coronal line sequence: the lowest-ionisation coronal line \silvi\ has the greatest blueshift (Fig.~\ref{fig:ip}).
The \se, \sn\ and \sit\ lines have centroid shifts that are consistent within error, with a mean and standard deviation of $-125$ and 9~\kms, respectively; the \silvi\ line has a significantly greater shift of $-383\pm10$~\kms.
This difference in blueshifts is real, and not the result of a systematic error in the wavelength calibration at the red end of our spectra: we assessed the centroid shifts of the low-ionisation \feii$\lambda1.644$~\um line near to \silvi\ and did not find any significant shift of this line.
Therefore the kinematics of the \silvi-emitting gas appear to be different to that of gas emitting the other three coronal lines.
\cite{Ferg97} noted that their photoionisation models predicted that lower-ionisation coronal lines ([Ne\,\textsc{v}]--[Si\,\textsc{vii}]) would form in more extended gas than the high-ionisation lines.
We see this trend in our \cloudy\ simulations, in which the contours of \silvi\ are displaced to larger radii compared with the other coronal lines (see the comparison of \se\ and \silvi\ contours in Fig.~\ref{fig:cloudy}).
We also find that the core of the \silvi\ line is narrower than that of \se\ and \sit, which (if interpreting the line widths as orbital motion) would also imply its emission occurs at typically larger radii.
If the \silvi-emitting gas is located at larger radii and greater velocities than the rest of the coronal line gas, this implies that the gas is part of an accelerating outflow.
Outflows which accelerate from the nucleus to distances of $\sim100$~pc have been observed in several other AGN (e.g.\ \citealt{Crenshaw00a}; \citealt{Crenshaw00b}; \citealt{Mueller11}).
Our photoionisation simulations appear to predict a very compact origin for the coronal line cores (Section~\ref{sec:cloudy-cores}).
\cloudy\ models 3 or 5 produce approximately the same fluxes as we observe in the cores of the \se, \sit\ and \silvi\ lines and the location of the line-emitting gas in these solutions is $\log(R/\mathrm{cm})=16.6$, a similar scale to the BLR or outer accretion disc (see Fig.~\ref{fig:rad_vel}).
The widths of the lines do not necessarily reflect Doppler broadening by virial motion of the emitting gas; instead the lines may be broadened by gas turbulence as is the case in coronal line novae.
However, the blueshifts of the coronal lines strongly suggest that the emitting gas is outflowing.
It is then reasonable to assume that the outflow is launched from the rotating accretion disc.
In this case we would expect the gas to have a large amount of rotational as well as radial motion, and for this to be evident in the widths of the emission lines.
The \cloudy\ simulations also suggest that the three coronal lines \se, \sn\ and \silvi\ may be produced in the same cloud (either 3 or 5), and \sit\ is produced in another cloud (7).
However, the most obvious difference in the coronal line core profiles is the much greater blueshift of \silvi\ with respect to the other lines.
This suggests that \silvi\ emitting gas is kinematically (and spatially) distinct from the gas producing the other three coronal lines, or alternatively has an additional component not seen in the other lines, which would not be accounted for in our \cloudy\ solutions.
\cite{Landt15b} determined greater distances for the coronal line gas in \ngc\ than implied here.
Based on measurements of the optical and X-ray coronal lines, they calculated that the coronal lines were produced in a low-density gas ($\log[n_\mathrm{e}/\mathrm{cm}^{-3}]\sim3$) at $\log(R/\mathrm{cm})\approx18.9$ ($\approx8$~light~years).
This distance is approximately coincident with the \oiii~$\lambda5007$ NLR and is just within the black hole's gravitational sphere of influence (see Fig.~\ref{fig:rad_vel}).
In spite of the similar size scale to the NLR, \cite{Landt15b} proposed that the coronal line region was likely to be an independent entity because of its high ionisation parameter ($U\sim1$).
The modest variability of the coronal line cores and their relatively narrow widths imply their origin in gas interior to the standard NLR ($\log[R/\mathrm{cm}]\lesssim18$).
The photoionisation simulations in Section~\ref{sec:cloudy-cores} suggest an even more compact coronal line region at $\log(R/\mathrm{cm})=16.6$.
It can be seen from Tables~\ref{tab:cloudyObs} and \ref{tab:cloudyUno} that \cloudy\ generally predicts much higher gas densities ($\log[n_\mathrm{H}/\mathrm{cm}^{-3}]\sim7$--9) than calculated by \cite{Landt15b}.
Our results are not necessarily at odds with those of \cite{Landt15b} since they derived physical parameters from optical and X-ray coronal lines, whereas we do so for the near-IR coronal lines. Since emission lines form wherever the conditions allow them to do so, it is likely that different coronal lines trace clumps of differing density, thus mapping out the outflowing wind that produces them.
In summary, the kinematics inferred from \textcolor{black}{the narrow line profile widths} suggest that the coronal line cores are emitted in an accelerating wind just interior to the low-ionisation NLR. However, at odds with this interpretation, we did not find photoionisation model solutions consistent with it, although there are a number of limitations to this modelling, as mentioned earlier. Therefore, it is possible that the line widths are not produced by virial motion within the black hole's gravitational field, but rather indicate radial motion as in the ejecta of classical novae during their coronal line phase \citep[e.g.][]{Greenhouse90, Woodward21}.
\subsection{The coronal line wings}
\label{sec:discussion-wings}
\textcolor{black}{Before proceeding, it is worth considering whether the flux excesses seen near \silvi\ and \se\ (their `wings'), and the apparent variability of these features, could alternatively be explained by other spectral components unrelated to the coronal lines (i.e.\ the continuum or the blended broad emission lines).
We have incorporated the uncertainty on the placement of the continuum (assumed to be locally linear) in our flux measurements and the variability of the excess flux redward of \silvi\ is still highly significant; therefore, we can rule out the excess flux changes being due to the imperfect subtraction of a (smooth) continuum.
Of course, the shapes of these flux excesses do not resemble parts of a smooth continuum, and it does not seem likely that they could be due to changes in unexplained, local continuum features, either.
As the spectral decomposition by \cite{Landt19} showed, the continuum beneath \se~$\lambda0.9914$~\um\ is dominated by the accretion disc, whereas the continuum beneath \silvi~$\lambda1.9650$~\um\ is dominated by emission from the hot dust.
So even if it were the case that the wings are in fact continuum features, this interpretation requires there to be some physical link between the disc and dust emission at the \textit{specific} wavelengths $0.9914$ and $1.9650$~\um, respectively, which creates the similarly-shaped and similarly-varying flux excesses.
So we do not think it likely that the appearance and variability of these features can be attributed to the dust or disc continua.}
\textcolor{black}{The other possibility is that these spectral and temporal features are caused by the imperfect deblending of the coronal lines from the broad emission lines.
We used the mean \pab\ profile as a (scaled) template for all of the broad hydrogen lines in our extraction of the coronal line profiles from the single-epoch spectra (Section~\ref{singlespec}).
Therefore, any deviations of the broad lines from the average profile shape can cause features in the residuals which we may attribute to the coronal lines.
In the case of \silvi, such an effect is likely to be relatively minor.
Our spectral decomposition of the high-S/N mean spectrum around that line (Fig.~\ref{fig:mean_spec_fits}) indicates that \brd\ and particularly \paa\ are very weak features beneath \silvi.
The flux in \brd\ beneath the \silvi\ red wing is $\approx6\times10^{-15}$~erg\,s$^{-1}$\,cm$^{-2}$, less than the average flux in the red wing itself ($8.7\times10^{-15}$~erg\,s$^{-1}$\,cm$^{-2}$).
The flux variations we attribute to the \silvi\ red wing are of the order $10\times10^{-15}$~erg\,s$^{-1}$\,cm$^{-2}$.
So, if this variability were actually dominated by variations in \brd, it would require implausibly large changes in the red wing of \brd.
We would then expect to see concomitant changes in the other broad hydrogen line profiles, which we do not.
So it appears that both \brd\ and \paa\ are too weak in this part of the spectrum to satisfactorily explain the changes we see in the flux excess.
Relatively minor changes in the broad \pad\ profile could more plausibly induce a feature that we mistake for \se\ emission.
But, in this interpretation it is then a coincidence that this flux excess on the blue shoulder of \pad\ appears at just the right location to give the appearance of a red wing on \se, with a similar shape and scale to the excess around \silvi\ (which sits on the red wing of \brd).
Additionally, if our deblending method did not generally work well, it is curious that we do not observe any excess flux around \sn\ (which is blended with broad \pab).
This similarity in the profile shapes and lightcurves of the \se\ and \silvi\ wings actually strengthens our preferred interpretation of these flux excesses as coronal line emission, since our \cloudy\ simulations demonstrate that emission from both \se\ and \silvi\ can be produced by the same gas cloud, on $\sim$light month scales from the nucleus.
We therefore conclude that the observed excess emission around the \se\ and \silvi\ lines (and the variability of these features) is genuinely associated with those lines.}
It is intriguing \textcolor{black}{then} that we see broad wings on two of the near-IR coronal lines (\se\ and \silvi) but not on the other two (\sn\ and \sit).
Since we observe wings on one sulphur and one silicon line, the absence of wings on some lines cannot simply be related to the absence of that element at that location (for example, if silicon were depleted onto dust). Our \cloudy\ models show that the appearance of these wings may be explained by photoionisation: we observe wings on the two coronal lines with the lowest ionisation potentials simply due to the combination of the bolometric luminosity of the central source and the density of the coronal line emitting gas.
Models 8 and 1, which we described in Section~\ref{sec:cloudy-wings}, are appealing because they explain a number of observed properties.
\cloudy\ predicts \silvi\ and \se\ emission at fluxes similar to those we observe in the wings, occurring at $\log(R/\mathrm{cm})\approx17.3$. Additionally, very little \sn\ and \sit\ emission is produced at this location, explaining the absence of wings on these two lines.
In model 8 none of the other lines we have assessed (\siii\ or the optical Fe coronal lines) are strongly emitted, either.
Since we observe strong variability in the wings as they disappear and reappear over the year-long campaign, we can infer that the emission must arise on $\sim$light~month scales; the radius $\log(R/\mathrm{cm})\approx17.3$ equates to $\approx70$~light days so the implied spatial scale is consistent with the observed variability timescale. As can be seen in Fig.~\ref{fig:rad_vel}, orbital motions of a few thousand \kms\ are expected at this radius which would explain the observed extents of the wings in velocity. A value of 70~light~days is also the precise radius of the inner edge of the torus determined by \cite{Landt19} via the reverberation of the dust in response to the optical accretion disc emission. The strong similarity in the shapes of the \silvi\ wing and hot dust light-curves (Fig.~\ref{fig:dust_wings}) means that the two will have similar reverberation lags and so it is likely that the hot dust and coronal line wings are emitted at the same radius from the nucleus.
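The expected orbital speeds at this radius follow from a simple Keplerian estimate, $v=\sqrt{GM_\bullet/R}$. A sketch assuming a black hole mass of order $5\times10^{7}\,\mathrm{M}_\odot$ (an illustrative value, not quoted in this section):

```python
import math

G = 6.674e-8        # gravitational constant [cgs]
M_SUN = 1.989e33    # solar mass [g]

def keplerian_velocity_kms(m_bh_msun, log_r_cm):
    """Circular orbital speed at radius R around a black hole, in km/s."""
    v_cm_s = math.sqrt(G * m_bh_msun * M_SUN / 10**log_r_cm)
    return v_cm_s / 1e5

# An assumed black hole mass of ~5e7 M_sun (illustrative) at the radius
# of the coronal line wing emission gives speeds of order 2000 km/s,
# consistent with "a few thousand km/s".
print(round(keplerian_velocity_kms(5e7, 17.3)))
```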
It is clear from Figs.~\ref{fig:coronal_line_Gauss} and \ref{fig:min_max_wings} that the broad coronal line wings are much more prominent on the red side of the core than the blue side.
If the wings are indeed produced in a wind launched off the inner edge of the torus, then the torus may be inclined such that our view of the approaching gas is blocked, and we see mostly the receding gas on its far side (see Fig.~\ref{fig:cartoon}). A similar geometry was also proposed by \cite{Glidden16} to explain observations of coronal line forest AGN (see their figure 1).
\cite{Pier95} proposed that AGN coronal line emission would originate in an X-ray heated wind evaporated from the inner edge of the dust torus, and this was one of the main sites of coronal line production outlined by \cite{Mur98}. Our findings are consistent with these schemes. However, our chosen model is problematic in that the torus must see the unobscured SED, yet the obscurer is located between the torus and the X-ray source (see e.g.\ figure 4 of \citealt{Kaastra14}). A possible solution is that, if the obscurer is an extended stream or wind, it presents a much higher column density along our line of sight than toward the torus. In any case, the torus emission is not sensitive to the obscurer since it is heated mainly by the UV/optical nuclear emission and the obscurer is transparent at these wavelengths \citep{Dehghanian19}. But the coronal lines can very well differentiate between the obscured and unobscured SED through constraints on the gas density. The value of $\log(n_\mathrm{H}/\mathrm{cm}^{-3}) \sim 9$ in the unobscured case (model 8) is what is expected of the material in the dusty torus.
Whilst the shapes of the \silvi\ wing and hot dust light-curves are strikingly similar (Fig.~\ref{fig:dust_wings}), it is noteworthy that the amplitude of the \silvi\ wing variability is substantially greater than that of the dust.
The wing varies by more than an order of magnitude whereas the dust flux variations are only at the $\pm25$~per~cent level.
Stronger variability of the coronal lines in \ngc\ compared with the dust on timescales of several years was also reported by \cite{Landt15b}. This behaviour is expected if the coronal line wing emission varies mainly in dependence on the compact X-ray source which in turn varies with a higher amplitude and on shorter timescales than the UV/optical emission that heats the dust \citep{Edelson2015}.
Both our spectroscopic analysis and photoionisation models point to the inner face of the torus being the origin of the coronal line wings.
But we stress that we cannot claim that our \cloudy\ models present unique and definitive solutions and there are several caveats to bear in mind, such as, e.g.\ the assumption of particular SEDs non-contemporaneous with our emission line observations and the inexhaustive exploration of the parameter space. Nonetheless, these models serve to illustrate that our proposed solution for the origin of the coronal line wings is physically plausible.
\subsection{Links to outflowing absorption systems}
We interpreted the blueshifts of the coronal line cores as evidence of emission from outflowing gas.
Outflows in \ngc\ have been detected via absorption lines in other wavebands including X-rays (\citealt{Ebrero16}), UV (\citealt{Arav15}) and near-IR (\citealt{Wildy21}).
Some of the X-ray warm absorber components in \ngc\ are related to the UV absorbers (e.g.\ \citealt{Ebrero16}; \citealt{Arav15}) and \cite{Wildy21} associated several narrow He\,\textsc{i}~$\lambda1.08$~\um absorption features with corresponding UV absorbers.
Similarities between the inferred properties of the coronal line region and warm absorber (both are thought to originate in a highly-ionised, outflowing gas on scales intermediate between the BLR and NLR) have prompted further investigation into the possibility that they are in fact the same medium.
\cite{Por99} used photoionisation models to demonstrate that coronal emission lines could indeed form within the warm absorber.
They found that very high gas densities ($\log[n_\mathrm{H}/\mathrm{cm}^{-3}]\gtrsim10$) were required to avoid overproduction of the coronal lines relative to observations and that the gas was located on scales similar to the BLR.
It is therefore pertinent to investigate whether the near-IR emission features seen in the outflow of \ngc\ (i.e. the coronal lines) can be associated with the gas responsible for the observed absorption features.
The associations of X-ray, UV and near-IR absorption features were based on the gaseous kinematics (i.e.\ similar velocity shifts), with further consideration made of the gas properties (density, ionisation parameter).
The velocity shifts we determined for the coronal lines ($\approx-125$~\kms\ for \se, \sn\ and \sit; $\approx-380$~\kms\ for \silvi) do not closely match those reported for any of the absorption features mentioned above.
Additionally, the majority of these absorption features arise at much greater distances from the nucleus than we infer for the coronal lines.
The estimated location of the UV absorption clouds is at the NLR radius and above ($\gtrsim2$~pc or $\log[R/\mathrm{cm}]\gtrsim18.8$), whereas the coronal lines are likely to be produced interior to the NLR.
The highest-ionisation components of the X-ray warm absorber are located on smaller scales ($\log[R/\mathrm{cm}]\gtrsim18$), similar to what we infer for the coronal lines, although these components have much higher outflow velocities (blueshifts of $\approx250$--$1200$~\kms) and much narrower widths ($\sigma_v\approx20$--200~\kms) than we observe in the coronal lines.
Our photoionisation modelling with \cloudy\ suggests that the coronal lines observed in \ngc\ form in gas of densities $\log(n_\mathrm{H}/\mathrm{cm}^{-3})\sim7$--9, much lower density than the $\log(n_\mathrm{H}/\mathrm{cm}^{-3})\gtrsim10$ required by \cite{Por99} but also much higher than the range of densities calculated for the warm absorber in \ngc\ ($\log[n_\mathrm{H}/\mathrm{cm}^{-3}]\sim3$--5: \citealt{Ebrero16}).
In summary, although the outflow velocities of the coronal line gas are of the same order of magnitude as those of some of the observed absorbers ($\sim$few hundred \kms), we see little evidence of an association between the coronal line gas and the specific X-ray/UV absorption components previously reported in \ngc. Then, if the coronal line gas and the absorbers are indeed parts of the same wind, they must arise from widely different clumps and so are unlikely to be exactly cospatial.
\section{Summary and conclusions} \label{sec:conclusions}
We have performed the first intensive study of the variability of the near-infrared coronal lines in an AGN.
The near-infrared spectroscopic monitoring campaign of \cite{Landt19} reveals a complex and multi-component coronal line region.
From measurements of the high-S/N mean spectrum of the campaign, we observe some general trends which have been observed in previous studies of AGN coronal lines.
The narrow cores of the coronal lines are broader than, and blueshifted with respect to, the lower-ionisation forbidden lines\textcolor{black}{. The lack of strong variability of the coronal line cores in the year-long campaign suggests the majority of the line-emitting gas exists on scales $\gtrsim1$~light year, whilst the line widths suggest the gas is more compact than the standard NLR ($\lesssim3$~light years).}
These findings suggest that the narrow near-IR coronal lines in \ngc\ form in an outflow just interior to the inner NLR, although our photoionisation models show that coronal line emission originating closer to the BLR is possible.
An approximate correlation between the FWHM and ionisation potential of the coronal line cores implies a stratification of line emitting gas, with the lowest-ionisation coronal line \silvi\ line forming at a greater distance from the nucleus than the other lines.
The greater blueshift and narrower line profile of \silvi\ with respect to the higher-ionisation lines \se\ and \sit\ can be understood if this stratified wind is accelerating.
We therefore propose that the narrow coronal line cores form in a stratified, accelerating outflow from the nucleus of \ngc\ and this emission occurs predominantly just interior to the standard NLR.
In addition to narrow cores, the two lowest-ionisation coronal lines \se\ and \silvi\ show extended wings, particularly on the red side of the core.
The line widths, strong flux variability on a timescale of $\sim$months and photoionisation predictions all indicate that the wings are produced at a radius $\log(R/\mathrm{cm})\approx17.3$ from the nucleus.
This radius is exactly coincident with the inner dust radius determined by \cite{Landt19}.
We therefore associate these wings with an X-ray heated wind of material evaporating from the inner face of the torus.
This study has revealed the complex and multi-faceted nature of the coronal line region in an AGN.
The recent launch of the \textit{James Webb Space Telescope} (\textit{JWST}) means it is an opportune time to investigate this complexity.
The high sensitivity and spectral resolution of \textit{JWST}'s instruments will enable us to study infrared coronal lines of AGN in much greater detail.
\section*{Acknowledgements}
DK acknowledges support from the Czech Science Foundation project No.\ 19-05599Y, funding from the Czech Academy of Sciences via the PPLZ program and the receipt of a UK Science and Technology Facilities Council (STFC) studentship (ST/N50404X/1).
HL acknowledges a Daphne Jackson Fellowship sponsored by the STFC.
MJW acknowledges support from STFC grants ST/P000541/1 and ST/T000244/1.
G.J.F. and M.D. acknowledge support from NSF (1816537, 1910687), NASA (ATP 17-ATP17-0141, 19-ATP19-0188), and STScI (HST-AR-15018 and HST-GO16196.003-A).
\section*{Data availability}
The raw data underlying this work are publicly available at the NASA IRTF Archive hosted by the NASA/IPAC Infrared Science Archive (\url{https://irsa.ipac.caltech.edu}).
The processed data are available on request from the second author.
\bibliographystyle{mnras}
\bibliography{refs}
\appendix
\section{Taking a look into a typical coronal line emitting cloud}
Fig.~\ref{fig:Te} shows the temperature inside a coronal line emitting cloud for both the unobscured (green) and obscured (red) cases. The model corresponds to a cloud with a column density of $N_\mathrm{H}=10^{23}$~cm$^{-2}$ and a hydrogen density of $n_\mathrm{H}=10^{7.4}$~cm$^{-3}$, located at $R=10^{17.3}$~cm from the source.
Much of our previous work rejected clouds with temperatures greater than the $\sim10^5$~K peak in the cooling function, since such clouds produce negligible optical or IR emission. However, the large column density clouds have enough extinction to produce cooler gas near the shielded side of the cloud, so we include these for completeness.
These clouds occupy the lower left quadrant of the density-radius plane shown in Fig.~\ref{fig:cloudy}.
\bsp %
\label{lastpage} |
Title:
Datumless Topography: A Universally Consistent Way to Quantify Relief |
Abstract: Despite having long been the standard for quantifying relief on Earth and
beyond, elevation has its limitations. The zero-elevation datum is defined by
arbitrary and inconsistent conventions, especially on planets without a sea
level, hence the lack of a universally standardized way to quantify relief.
Furthermore, when quantifying relief on such planets, the elevation of a point
is rather meaningless on its own, deriving most of its value when compared to
the elevation of other points. In light of these considerations, this paper
introduces a universally consistent framework for quantifying relief that does
not require a datum altogether, and is instead based on physically meaningful
concepts. Designed to be mathematically elegant and free of arbitrary
parameters, the so-called datumless measures are divided into the datumless
point measures and the datumless surface measures. As opposed to elevation,
which describes the vertical position of a point relative to a datum, the
datumless point measures directly describe the vertical position of a point
relative to local terrain, making them useful for comparing the relief of
features such as mountains across different planets. In the meantime, the
datumless surface measures quantify various aspects of relief within a region,
as opposed to that of a single point. This is done through datumless
formulations of surface area and surface mean value, which can be directly
applied to the fractal-like planetary surface without projecting it onto a
reference ellipsoid. Altogether, this paper lays the groundwork for a datumless
framework that enables future topographic tasks to transcend the limitations of
elevation.
| https://export.arxiv.org/pdf/2208.01600 |
\graphicspath{{./figures/}}
\section{Introduction}
Central to the field of topography is elevation, which is widely used to quantify relief on Earth and other planets.\footnote{For conciseness, a \textit{planet} will refer to any astronomical object with a terrestrial surface, including moons and asteroids.} Yet, despite its ubiquity, elevation has its limitations. The vertical datum, from which elevation is measured, is defined by arbitrary and inconsistent conventions that are subject to change \parencite{datum-1}. On Earth, this results in minor elevation differences between differing datums. The greater problem arises on planets without a sea level, where the datum is defined especially arbitrarily due to the lack of an obvious terrain feature to base it on, hence the lack of a universally standardized way to quantify relief \parencite{datum-2}. Furthermore, when quantifying relief on such planets, the elevation of a point is rather meaningless on its own, deriving most of its value when compared to the elevation of other points. Hence, elevation is better described as a reference frame for comparing height differences, rather than as an absolute measure of relief in and of itself. That is not to disregard elevation altogether, as it is invaluable for many tasks such as mapping, geolocation, and climatology. However, when it comes to the distinct task of quantifying topographic relief, it is beneficial to think outside the datum.
In light of these factors, this paper introduces a universally consistent framework for quantifying relief that does not rely on a datum. The so-called datumless measures are defined purely based on physically meaningful concepts without relying on arbitrary parameters. Designed to be mathematically elegant and easy to understand, the datumless measures can be divided into the datumless point measures and the datumless surface measures.
Following this introduction, the paper first introduces the preliminaries that are required for understanding the datumless measures. Preliminaries include a formal definition of the planetary surface and a way of quantifying the spatial relationship between two points on the planetary surface.
Next up are the datumless point measures, which describe various aspects about the vertical position of a point relative to local terrain (unlike elevation, which describes vertical position relative to an imaginary datum). Perhaps the most widely applicable point measure is \textit{dominance}, which measures how much a point rises above its large-scale surroundings. Among its applications, dominance provides a universally standardized base-to-peak measure of the height of any mountain on any planet. Another practical point measure is \textit{jut}, which measures how sharply a point rises above its immediate surroundings, usually from the bottom of a neighboring valley. Inspired by the omnidirectional relief and steepness (ORS) measure \parencite{spire-measure}, which quantifies the visual impressiveness of a protruding landform, jut achieves this same objective in a way that is more straightforward and easier to visualize geometrically. Several other point measures are also introduced in this section, including the \textit{composite point measures}, which combine different point measures to describe even more aspects of relief.
Following the datumless point measures are the datumless surface measures, which describe the overall relief within a region, as opposed to that of a single point. This section first introduces datumless formulations of surface area and surface mean value that can be directly applied to the fractal-like planetary surface without having to project it onto an arbitrary reference ellipsoid. Then, it uses these definitions to average the datumless point measures over a region, achieving measures of ruggedness, steepness, skewness, and kurtosis. The datumless surface measures are inspired by the surface roughness parameters \parencite{roughness-parameters} in mechanical engineering.
The final technical section of this paper describes the process of computing the datumless point measures. This section also displays the values of the point measures at various locations on Earth, Moon, Mars, and Vesta.
\section{Preliminaries}
\subsection{Datumless Definition of the Planetary Surface}
This subsection introduces a definition of the planetary surface that does not rely on a reference ellipsoid, and is based instead on meaningful concepts in physics. The provided definition is conducive to both computational representation and mathematical abstraction.
\subsubsection{Planet as a Manifold-With-Boundary}
Let the universe be represented by 3-dimensional space, or \(\Universe\). Every location relative to the planet of interest can be mapped to a point\footnote{The term \textit{point} will refer to a position vector, which provides more notational flexibility than an ordered triple.} in \(\Universe\). This paper uses an Earth-Centered, Earth-Fixed (ECEF) coordinate system. Nevertheless, what matters is not the absolute coordinate system, but the relative positions of points with respect to each other.
Let the planet of interest be represented by \(\Manifold{m}\), a theoretically infinite set of points in \(\Universe\), i.e., \(\Manifold{m} \!\subset \Universe\). A point in \(\Universe\) is in \(\Manifold{m}\) if and only if its location is occupied by physical matter that the planet of interest is composed of. Depending on the intentions of measurement, \(\Manifold{m}\) can be defined to include or exclude matter corresponding to features such as buildings, vegetation, water bodies, and permanent ice and snow.
Effectively, \(\Manifold{m}\) is a manifold-with-boundary---more specifically, a 3-manifold with a 2-dimensional boundary. The boundary of \(\Manifold{m}\), denoted by \(\Boundary{m}\), comprises all points in \(\Manifold{m}\) that are infinitesimally close to a point in \(\Universe\) that is not in \(\Manifold{m}\). The boundary can be thought of as the parts of a planet that are directly exposed to the atmosphere (or outer space, if an atmosphere does not exist).
\[
\Boundary{m} = \qty{\Point{p} \in \Manifold{m} \mid \qty(\exists\, \Point{q} \in \qty{\Universe \setminus \Manifold{m}} : \abs{\,\Point{p} - \Point{q}\,} = \varepsilon)}
\]
where \(\varepsilon\) denotes an infinitesimally small quantity.
\(\Boundary{m}\) has a fractal-like texture all the way down to the microscopic level. It is not useful for topographic applications, as it contains points on cave walls and underneath overhangs, which are not captured by most surface models such as digital elevation models (DEMs). One need not worry about representing \(\Manifold{m}\) or \(\Boundary{m}\) digitally---they are merely conceptual intermediaries used to provide a formalized definition of the planetary surface.
\subsubsection{Planetary Surface}
A planet's gravitational field can be depicted as surfaces of constant gravitational potential surrounding the planet---also known as \textit{equipotential surfaces}---to which the direction of gravity is always perpendicular. \textit{Plumblines} can be defined as curves running perpendicular to the equipotential surfaces, which are therefore tangent to the direction of gravity along their lengths. Roughly speaking, a falling object will travel along its plumbline.
To introduce terminology used in defining the planetary surface, point \(\Point{q}\) \textit{overhangs} point \(\Point{p}\) if and only if \(\Point{q}\) has a higher gravitational potential than \(\Point{p}\) and also lies on the same plumbline as \(\Point{p}\).
The \textit{planetary surface}, denoted by \(\Manifold{s}\), is the subset of \(\Boundary{m}\) consisting of all points in \(\Boundary{m}\) that are not overhung by another point in \(\Manifold{m}\). The planetary surface can be thought of as the parts of a planet that are directly exposed to falling raindrops.
\[
\Manifold{s} = \qty{\Point{p} \in \Boundary{m} \mid \qty(\nexists\, \Point{q} \in \Manifold{m} : \Overhangs{q}{p})}
\]
\(\Manifold{s}\) is an infinite set of points, meant to represent a perfect topographic model of infinite resolution. In practice, \(\Manifold{s}\) can be represented by a surface model such as a DEM,\footnote{The usage of a DEM in this paper is non-arbitrary, as it is for determining the objective position of points on the planetary surface, rather than as a subjective measure of relief.} converted to a 3-dimensional coordinate system such as ECEF. On Earth, depending on how \(\Manifold{m}\) is defined and which surface model is used, the planetary surface can either be \textit{wet}, including water surfaces in its set of points, or \textit{dry}, including underwater bathymetry instead. In this paper, \(\Manifold{s}\) and all derived measurements should be assumed as wet unless stated otherwise.
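As an illustration of the conversion step just described, the following is a minimal sketch that maps DEM samples to 3-dimensional position vectors. A spherical planet of radius \(R\) is assumed for simplicity (an assumption of this sketch only; a real pipeline would use the planet's reference ellipsoid and geoid):

```python
import numpy as np

# Minimal DEM-to-ECEF sketch: each (lat, lon, elevation) sample becomes
# a position vector in 3-space. A spherical planet of radius R is
# assumed here, not the proper ellipsoid/geoid of a real pipeline.
R = 6371e3  # mean planetary radius in metres (Earth)

def dem_to_ecef(lat_deg, lon_deg, elev_m):
    lat = np.radians(np.asarray(lat_deg, dtype=float))
    lon = np.radians(np.asarray(lon_deg, dtype=float))
    r = R + np.asarray(elev_m, dtype=float)
    return np.stack([r * np.cos(lat) * np.cos(lon),
                     r * np.cos(lat) * np.sin(lon),
                     r * np.sin(lat)], axis=-1)

# Two samples: a sea-level point on the equator and a 1000 m point at the pole.
pts = dem_to_ecef([0.0, 90.0], [0.0, 0.0], [0.0, 1000.0])
```

The resulting array of position vectors is a finite stand-in for the infinite point set \(\Manifold{s}\).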
\subsection{A Point in the Reference Frame of Another Point}
The concepts in this subsection are best understood through the metaphor of point \(\Point{p}\) being observed from point \(\Point{q}\), ignoring viewshed obstructions. For instance, \(\Point{p}\) could be the summit of a mountain, and \(\Point{q}\) could be the location of an observer at the bottom of the mountain.
\subsubsection{Vertical Unit Vector and Horizontal Plane}
Let the \textit{vertical unit vector} of \(\Point{q}\), denoted by \(\VerticalUnit{q}\), be the vector of magnitude 1 that points \textit{opposite} the direction of gravity at \(\Point{q}\). The vertical unit vector represents the upwards direction of an observer at \(\Point{q}\).
Let the \textit{horizontal plane} of \(\Point{q}\) be the flat plane through \(\Point{q}\) that is perpendicular to \(\VerticalUnit{q}\). Another point \(\Point{p}\) is \textit{above} the horizontal plane of \(\Point{q}\) if it is on the same side of the plane as \(\VerticalUnit{q}\) points towards, \textit{below} the horizontal plane if it is on the opposite side, and \textit{on} the horizontal plane if it touches it. The horizontal plane represents the ``ground level'' of an observer at \(\Point{q}\).
\subsubsection{Height Above the Horizontal Plane}
Let the \textit{position vector} of \(\Point{p}\) with respect to \(\Point{q}\), denoted by \(\PositionVector{p}{q}\), be the vector pointing from \(\Point{q}\) to \(\Point{p}\), i.e., the difference between their coordinates:
\[
\PositionVector{p}{q} = \Point{p} - \Point{q}
\]
The arrow subscript notation as used above denotes an attribute of \(\Point{p}\) in the reference frame of \(\Point{q}\). This notation will continue to be used.
The \textit{height} of \(\Point{p}\) above the horizontal plane of \(\Point{q}\), denoted by \(\Height{p}{q}\), is equal to the scalar projection of \(\PositionVector{p}{q}\) onto the vertical unit vector of \(\Point{q}\).
\[
\Height{p}{q} = \PositionVector{p}{q} \vdot \VerticalUnit{q}
\]
Point \(\Point{p}\) is above the horizontal plane of \(\Point{q}\) if \(\Height{p}{q}\) is positive, on the horizontal plane if \(\Height{p}{q}\) is 0, and below the horizontal plane if \(\Height{p}{q}\) is negative. Due to planetary curvature, the height of one point above the horizontal plane of another is not equal to the elevation difference between the two points. For instance, even though Mt. Everest has a much higher elevation than the Dead Sea, Mt. Everest is over 1000 kilometers below the horizontal plane of the Dead Sea.
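The curvature effect can be checked numerically. The sketch below evaluates \(\Height{p}{q} = \PositionVector{p}{q} \vdot \VerticalUnit{q}\) on a spherical toy planet, where the vertical unit vector at \(\Point{q}\) is taken to be the outward radial direction (an assumption of the sketch; the paper's definitions allow an arbitrary gravitational field):

```python
import numpy as np

R = 6371e3  # mean Earth radius in metres (spherical toy model)

def surface_point(lat_deg, lon_deg, elev_m=0.0):
    # Approximate ECEF coordinates of a (lat, lon, elevation) sample.
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    r = R + elev_m
    return r * np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

def vertical_unit(q):
    # "Up" at q: outward radial direction on a spherical planet.
    return q / np.linalg.norm(q)

def height_above_plane(p, q):
    # h_{p<-q} = (p - q) . z_q, the scalar projection onto vertical.
    return np.dot(p - q, vertical_unit(q))

everest = surface_point(27.99, 86.93, 8849.0)
dead_sea = surface_point(31.56, 35.47, -430.0)

# Everest's elevation is far higher, yet planetary curvature puts it
# well below the Dead Sea's horizontal plane:
print(height_above_plane(everest, dead_sea) / 1e3, "km")
```

With these (approximate, assumed) coordinates the result is on the order of minus a few thousand kilometres, consistent with the statement above.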
\subsubsection{Angle of Elevation}
The \textit{angle of elevation} of \(\Point{p}\) above the horizontal plane of \(\Point{q}\), denoted by \(\AngleOfElevation{p}{q}\), is the signed angle between \(\PositionVector{p}{q}\) and the horizontal plane of \(\Point{q}\). It is positive if \(\Point{p}\) is above the horizontal plane of \(\Point{q}\), negative if \(\Point{p}\) is below the horizontal plane, and 0 if \(\Point{p}\) is on the horizontal plane.
\[
\AngleOfElevation{p}{q} = \arcsin(\PositionUnit{p}{q} \vdot \VerticalUnit{q})
\]
where \(\PositionUnit{p}{q}\) is the position unit vector, defined as follows:
\[
\PositionUnit{p}{q} = \frac{\PositionVector{p}{q}}{\abs{\PositionVector{p}{q}}}
\]
The value of \(\AngleOfElevation{p}{q}\) is undefined if \(\Point{p} = \Point{q}\).
\subsubsection{Angle-Reduced Height}
Let the \textit{angle-reduced height} of \(\Point{p}\) with respect to \(\Point{q}\), denoted by \(\AngleReducedHeight{p}{q}\), be equal to the following:
\[
\AngleReducedHeight{p}{q} = \Height{p}{q} \, \abs{\sin \AngleOfElevation{p}{q}}
\]
and if \(\Point{p} = \Point{q}\)\,, let \(\AngleReducedHeight{p}{q} = 0\).
The magnitude of angle-reduced height describes how sharply point \(\Point{p}\) gains its relief with respect to \(\Point{q}\). Its magnitude increases with both a greater height difference and a steeper angle of elevation. Meanwhile, the sign of angle-reduced height describes which side of the horizontal plane of \(\Point{q}\) that \(\Point{p}\) is on.
Subjectively, angle-reduced height describes how visually impressive point \(\Point{p}\) appears to an observer at \(\Point{q}\). For instance, the impressiveness of a mountain is a result of not only how high it rises, but also the angle at which it does, which is why mountains tend to appear more impressive from close-up. To capture this phenomenon, angle-reduced height is equal to height above the horizontal plane when \(\AngleOfElevation{p}{q}\) is \ang{90} or \ang{-90} (akin to a vertical cliff). However, the less steep the angle of elevation, the lower the value of \(\abs{\sin(\AngleOfElevation{p}{q})}\) is and the more the expression for \(\AngleReducedHeight{p}{q}\) gets reduced.
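The two defining formulas can be sketched numerically. A spherical planet is assumed, so the vertical unit vector at \(\Point{q}\) is the outward radial direction (an assumption of this sketch, not part of the paper's general definitions):

```python
import numpy as np

def vertical_unit(q):
    return q / np.linalg.norm(q)

def height_above_plane(p, q):
    return np.dot(p - q, vertical_unit(q))

def angle_of_elevation(p, q):
    # arcsin of the position unit vector projected onto z_q
    u = (p - q) / np.linalg.norm(p - q)
    return np.arcsin(np.clip(np.dot(u, vertical_unit(q)), -1.0, 1.0))

def angle_reduced_height(p, q):
    if np.allclose(p, q):
        return 0.0  # defined as 0 when p = q
    return height_above_plane(p, q) * abs(np.sin(angle_of_elevation(p, q)))

R = 6371e3
q = np.array([R, 0.0, 0.0])                   # observer
cliff_top = np.array([R + 1000.0, 0.0, 0.0])  # 1000 m directly overhead
ridge = np.array([R + 1000.0, 10000.0, 0.0])  # same height, 10 km away

print(angle_reduced_height(cliff_top, q))  # 1000.0: full height at 90 deg
print(angle_reduced_height(ridge, q))      # ~99.5: reduced by shallow angle
```

The vertical cliff retains its full height difference, while the same 1000 m rise seen at a shallow angle is reduced by a factor of \(\abs{\sin\theta} \approx 0.1\).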
Earl and Metzler \parencite{spire-measure} were the first to present a formula that specifically captures the visual impressiveness of a point as observed from another, serving as an inspiration for this paper. In their paper, the formula \(H \frac{\theta}{\ang{90}}\) was used, where \(H\) is the absolute value of the elevation difference between two points and \(\theta\) is the angle of elevation on a flat Earth model, where the elevation difference and geodetic distance between two points are treated as the legs of a right triangle. Angle-reduced height presents an improvement upon this formula, as it is easier to visualize geometrically, defined for angles greater than \ang{90}, and does not require a datum.
\section{Datumless Point Measures}
The datumless point measures describe various aspects about the relief of a point relative to its surroundings.
\subsection{Dominance}
The \textit{dominance} of point \(\Point{p}\) is the maximum height of \(\Point{p}\) above the horizontal plane of any point on the planetary surface:
\[
\Dominance{p} = \max_{\Point{q} \, \in \, \Manifold{s}} \, \Height{p}{q}
\]
Dominance measures how much a point rises above its surroundings. It is guaranteed to be greater than or equal to 0 for any point on the planetary surface (unlike elevation, which may be negative). This is because the height of \(\Point{p}\) above the horizontal plane of itself is 0, and dominance involves taking a maximum with respect to all surface points, including \(\Point{p}\) itself. In addition, due to planetary curvature, \(\Point{p}\) has a negative height with respect to points very far away. Hence, only points within a local vicinity of \(\Point{p}\), known as its \textit{curvature-scale surroundings}, are relevant to the calculation of dominance.
The \textit{base} of point \(\Point{p}\) is the point on the planetary surface with the maximum height of \(\Point{p}\) above its horizontal plane. The height of \(\Point{p}\) above the horizontal plane of its base is in turn equal to the dominance of \(\Point{p}\). For a point within a mountain range, its base is typically located where the mountain range meets lower plains. The dominance of the summit of a mountain provides a non-arbitrary base-to-peak measure of the mountain's height. This works for mountains on any terrestrial planet.
On Earth, most points have a base close to sea level, therefore measuring a dominance that is usually only slightly lower than elevation. However, for points with an elevated base, usually on a high plain or plateau, dominance can be significantly lower than elevation, providing a more perceptually accurate measure of height. Consider the summit of Pikes Peak in the Front Range of Colorado. The elevation of the summit is \SI{4352}{\meter}, a value that correlates well with the air pressure, climate, and vegetation of the peak. In contrast, its dominance is \SI{2575}{\meter}, which captures how much it rises above the neighboring Great Plains. Likewise, a point on the Great Plains may have an elevation above \SI{1000}{\meter}, but its dominance will be close to 0, reflecting the sheer flatness of the surroundings.
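The definition of dominance as a maximum can be illustrated with a brute-force sketch over a sampled surface (a toy illustration on a spherical planet, not the paper's computation method):

```python
import numpy as np

def vertical_unit(q):
    return q / np.linalg.norm(q)

def height_above_plane(p, q):
    return np.dot(p - q, vertical_unit(q))

def dominance(p, surface_points):
    # max over q in S of the height of p above the horizontal plane of q;
    # including p itself (height 0) guarantees dominance >= 0.
    return max(height_above_plane(p, q) for q in surface_points)

R = 6371e3
def pt(lat_deg, lon_deg, elev_m):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    r = R + elev_m
    return r * np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

peak = pt(0.0, 0.0, 2000.0)   # a 2000 m summit
plain = pt(0.0, 1.0, 0.0)     # flat ground roughly 111 km away

surface = [peak, plain]
print(dominance(peak, surface))   # ~1029 m: height above the plain's plane
print(dominance(plain, surface))  # 0.0: a submissive point
```

Note that curvature already eats into the summit's 2000 m of elevation at this separation, and that the plain comes out as a submissive point because no sampled surface point lies below its horizontal plane.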
Dominance is considered a \textit{converging measure} because the arrow in the subscript of the maximized expression points towards \(\Point{p}\), the point of interest. Metaphorically, this is akin to looking towards \(\Point{p}\) from points on the planetary surface, the lines of sight converging at \(\Point{p}\). The \textit{converging height map} of \(\Point{p}\) shows the height of \(\Point{p}\) above the horizontal planes of points in its surroundings.
A point with a dominance of 0 is known as a \textit{submissive point}. A submissive point does not rise above the horizontal plane of any point on the planetary surface. Such points are usually found in recessed features such as valleys, canyons, trenches, and craters. An example of a submissive point is Badwater Basin in Death Valley.
\subsection{Submission}
The \textit{submission} of point \(\Point{p}\) is the maximum height of any point on the planetary surface above the horizontal plane of \(\Point{p}\):
\[
\Submission{p} = \max_{\Point{q} \, \in \, \Manifold{s}} \, \Height{q}{p}
\]
Submission measures how much a point dips below its surroundings, yielding a value greater than or equal to 0 for any point on the planetary surface. Like dominance, submission can be described as a measure of curvature-scale relief, as points very far away from \(\Point{p}\) correspond to negative height values that are irrelevant to the calculation of submission.
The point on the planetary surface with the maximum height above the horizontal plane of \(\Point{p}\) is known as the \textit{roof} of \(\Point{p}\). For a point within a mountain range, its roof is usually located at the top of a high mountain within the mountain range.\footnote{It may be interesting to note that the base of the point with the greatest dominance on a planet (Mt. Everest on Earth) is the point with the greatest submission, and the roof of the point with the greatest submission on a planet is the point with the greatest dominance.}
Submission is a \textit{diverging measure} because the arrow in the subscript of the maximized expression points away from \(\Point{p}\). Metaphorically, this is akin to an observer standing at \(\Point{p}\) and looking away to other points on the planetary surface, the lines of sight diverging from \(\Point{p}\). The \textit{diverging height map} of \(\Point{p}\) shows the height of points in the surroundings of \(\Point{p}\) above the horizontal plane of \(\Point{p}\).
A point with a submission of 0 is known as a \textit{dominant point}. A person standing at a dominant point is ``on top of the world,'' as no point rises above their horizontal plane. Dominant points are usually found on protruding features such as mountains, hills, and islands. An example of a dominant point is the summit of Mt. Whitney, the highest point of the Sierra Nevada in terms of both dominance and elevation.
\subsection{Jut}
The \textit{jut} of point \(\Point{p}\) is the maximum angle-reduced height of \(\Point{p}\) with respect to any point on the planetary surface:
\[
\Jut{p} = \max_{\Point{q} \, \in \, \Manifold{s}} \, \AngleReducedHeight{p}{q}
\]
Jut measures how sharply a point rises above its immediate surroundings, yielding a greater value for higher and steeper rises. It is greater than or equal to 0 for any point on the planetary surface. Jut is inspired by the omnidirectional relief and steepness (ORS) measure \parencite{spire-measure}, which was designed to capture the visual impressiveness of a mountain peak or other protruding feature. Jut achieves the same objective in a much simpler way, as it only requires taking a maximum, as opposed to a surface integral in the case of ORS.
The point on the planetary surface that measures the maximum angle-reduced height of \(\Point{p}\) with respect to itself is known as the \textit{immediate base} of \(\Point{p}\). For a point within a mountain range, its immediate base is usually at the bottom of a neighboring valley (as opposed to its base, which is usually at the bottom of a mountain range).
Jut is a converging measure because the arrow in the subscript of the maximized expression points towards \(\Point{p}\). The \textit{converging angle-reduced height map} of point \(\Point{p}\) shows the angle-reduced height of \(\Point{p}\) with respect to points in its surroundings.
\subsection{Rut}
The \textit{rut} of point \(\Point{p}\) is the maximum angle-reduced height of any point on the planetary surface with respect to \(\Point{p}\):
\[
\Rut{p} = \max_{\Point{q} \, \in \, \Manifold{s}} \, \AngleReducedHeight{q}{p}
\]
Rut measures how sharply or impressively a point dips below its immediate surroundings, accounting for both height differences and steepness. It is greater than or equal to 0 for any point on the planetary surface.
The point on the planetary surface with the maximum angle-reduced height with respect to \(\Point{p}\) is known as the \textit{immediate roof} of \(\Point{p}\). For a point within a mountain range, its immediate roof is usually at the top of a neighboring mountain (as opposed to its roof, which is usually at the top of a superlative mountain in the mountain range).\footnote{It may be interesting to note that the immediate base of the point with the greatest jut on a planet is the point with the greatest rut, and the immediate roof of the point with the greatest rut on a planet is the point with the greatest jut.}
Rut is a diverging measure because the arrow in the subscript of the maximized expression points away from \(\Point{p}\). The \textit{diverging angle-reduced height map} of point \(\Point{p}\) shows the angle-reduced height of points in the surroundings of \(\Point{p}\) with respect to \(\Point{p}\).
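The converging/diverging distinction amounts to the order of arguments in the maximized expression. A minimal sketch over sampled surface points, where `arh` is a hypothetical stand-in for the angle-reduced height function (the real definition also accounts for steepness):

```python
# Sketch: jut and rut as maxima over sampled surface points. `arh(heights, p, q)`
# is a hypothetical stand-in for the angle-reduced height of p with respect to q.
def arh(heights, p, q):
    # stand-in: nonnegative height difference (ignores the angle term)
    return max(0.0, heights[p] - heights[q])

def jut(heights, p):
    # converging: angle-reduced height OF p with respect to surrounding points
    return max(arh(heights, p, q) for q in range(len(heights)))

def rut(heights, p):
    # diverging: angle-reduced height of surrounding points with respect TO p
    return max(arh(heights, q, p) for q in range(len(heights)))
```

With `heights = [0, 5, 2]`, the summit at index 1 has jut 5 and rut 0, while the valley at index 0 has jut 0 and rut 5; both measures are always nonnegative, as stated above.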
\subsection{Composite Point Measures}
The aforementioned point measures can be combined to describe more aspects of relief. Below are some examples:
\subsubsection{Range}
The \textit{range} of point \(\Point{p}\) is the sum of its dominance and submission:
\[\Range{p} = \Dominance{p} + \Submission{p}\]
Range measures the total span of vertical relief in the curvature-scale surroundings of a point. It yields similar values for points in close proximity. When applied to a point within a mountain range, range typically describes the span of vertical relief in the mountain range. For example, the range of the summit of Half Dome is the sum of its dominance and submission, \(\SI{2235}{\meter} + \SI{1252}{\meter} = \SI{3487}{\meter}\), an indicator of the span of vertical relief in the Yosemite region of the Sierra Nevada. By virtue of being in close proximity, the range of Mirror Lake is almost the same, with a value of \(\SI{809}{\meter} + \SI{2682}{\meter} = \SI{3491}{\meter}\).
\subsubsection{Normalized Dominance}
The \textit{normalized dominance} of point \(\Point{p}\) is its dominance divided by its range:
\[\NormalizedDominance{p} = \frac{\Dominance{p}}{\Range{p}}.\]
Normalized dominance yields a value ranging from 0 at a submissive point to 1 at a dominant point, describing the vertical position of a point within the range of vertical relief in its curvature-scale surroundings.\footnote{Normalized dominance is also equal to \(1 - \frac{\Submission{p}}{\Range{p}}\)\,.} Continuing the running example of Yosemite National Park: At the city of Merced in the Central Valley, the normalized dominance is 0.02\,. Moving higher to Mirror Lake in Yosemite Valley, the normalized dominance is 0.23\,. At the summit of Half Dome, the normalized dominance is 0.64\,. Finally, at the summit of Mt. Dana, a dominant point, the normalized dominance is 1.
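These composite definitions are plain arithmetic on the point measures; a quick sketch checking them against the Half Dome figures quoted above (\SI{2235}{\meter} dominance, \SI{1252}{\meter} submission):

```python
# Range and normalized dominance from dominance and submission, using the
# Half Dome summit values quoted in the text.
def rng(dominance, submission):
    # range = dominance + submission
    return dominance + submission

def normalized_dominance(dominance, submission):
    # normalized dominance = dominance / range
    return dominance / rng(dominance, submission)

half_dome_range = rng(2235, 1252)                  # 3487 m, as in the text
half_dome_nd = normalized_dominance(2235, 1252)    # ≈ 0.64, as in the text
```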
\subsubsection{Fluctuation}
The \textit{fluctuation} of point \(\Point{p}\) is the sum of its jut and rut:
\[\Fluctuation{p} = \Jut{p} + \Rut{p}\]
Fluctuation measures how mountainous the surroundings of a point are, accounting for height differences and steepness in both the upwards and downwards directions of the point. For a point within a mountain range, its fluctuation usually approximates the angle-reduced height of a neighboring mountaintop with respect to a neighboring valley, thus describing mountaintop-to-valley relief (as opposed to relief in the entire mountain range, as described by range). For instance, the fluctuation of Mt. Whitney trailhead at Whitney Portal is equal to the sum of its jut and rut: \(\SI{134}{\meter} + \SI{680}{\meter} = \SI{814}{\meter}\). By being on the same slope of Mt. Whitney, Upper Boy Scout Lake has a similar fluctuation: \(\SI{376}{\meter} + \SI{465}{\meter} = \SI{841}{\meter}\).
\subsubsection{Normalized Jut}
The \textit{normalized jut} of point \(\Point{p}\) is its jut divided by its fluctuation:
\[\NormalizedJut{p} = \frac{\Jut{p}}{\Fluctuation{p}}\]
Normalized jut yields a value from 0 to 1 that describes the vertical position of a point relative to its immediate surroundings.\footnote{Normalized jut is also equal to \(1 - \frac{\Rut{p}}{\Fluctuation{p}}\)\,.} The normalized jut of a point within a mountain range typically describes the vertical position of the point relative to the mountain it is located on (rather than vertical position relative to the mountain range, as for normalized dominance) in a perceptually accurate way. Unlike normalized dominance, normalized jut is usually close to 1 at the top of a mountain and close to 0 at the bottom of a valley, regardless of the vertical position of the mountain or valley relative to a mountain range. In addition, normalized jut gives greater weight to steeper slopes of a mountain. For instance, if \(\Point{p}\) is, say, vertically halfway on a mountain with a steeper top half and a more gradual bottom half, the normalized jut of \(\Point{p}\) would be less than \(\frac{1}{2}\).
As an example, the normalized jut of Mather Point on the South Rim of the Grand Canyon is equal to its jut divided by its fluctuation, \(\frac{\SI{710}{\meter}}{\SI{723}{\meter}} = 0.98\)\,. The value close to 1 describes the vertical position of Mather Point relative to its immediate surroundings in a perceptually accurate way. (In contrast, the normalized dominance of Mather Point is only 0.58 because the distant San Francisco Peaks rise above its horizontal plane.) Another example is Whitney Portal, with a normalized jut of \(\frac{\SI{134}{\meter}}{\SI{814}{\meter}} = 0.16\). The low value makes sense from a perceptual standpoint, given that Whitney Portal is the usual starting point of the Mt. Whitney ascent, and that the drive up to Whitney Portal is not nearly as steep as the ascent to the summit. (Meanwhile, Whitney Portal's normalized dominance of 0.53 suggests that it is positioned approximately halfway within the span of relief in the Sierra Nevada mountain range.)
\subsubsection{Domangle}
The \textit{domangle} of point \(\Point{p}\) is equal to the following:
\[\Domangle{p} = \arcsin\qty(\frac{\Jut{p}}{\Dominance{p}})\]
Domangle yields a value from \ang{0} to \ang{90} that describes how steeply a point rises above its surroundings. It provides an interpolated value between the angle of elevation of \(\Point{p}\) above its base and its angle of elevation above its immediate base. Taking Yosemite as an example, the domangle of the summit of Half Dome is the arcsine of its jut divided by its dominance, \(\arcsin\qty(\frac{\SI{1093}{\meter}}{\SI{2235}{\meter}}) = \ang{29.3}\)\,. While one might expect a greater value given how steeply Half Dome rises above Yosemite Valley, domangle also factors in the higher but less steep rise of Half Dome above the Central Valley, providing an interpolated value between these two. In fact, Half Dome has a significantly higher domangle than most summits. In comparison, the domangle of the summit of Mt. Lyell, the highest point in Yosemite National Park, is \(\arcsin\qty(\frac{\SI{309}{\meter}}{\SI{3367}{\meter}}) = \ang{5.3}\). Despite having a higher dominance than Half Dome, Mt. Lyell rises much more gradually.
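The quoted angles follow directly from the definition; a short sketch verifying the Half Dome and Mt. Lyell values:

```python
import math

# Domangle in degrees from jut and dominance: arcsin(jut / dominance).
def domangle(jut, dominance):
    return math.degrees(math.asin(jut / dominance))

half_dome = domangle(1093, 2235)  # ≈ 29.3°, as quoted in the text
mt_lyell = domangle(309, 3367)    # ≈ 5.3°, as quoted in the text
```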
\subsubsection{Subangle}
The \textit{subangle} of point \(\Point{p}\) is equal to the following:
\[\Subangle{p} = \arcsin\qty(\frac{\Rut{p}}{\Submission{p}})\]
Subangle yields a value from \ang{0} to \ang{90} that describes how steeply a point dips below its surroundings. Its value lies between the angle of elevation of the roof of \(\Point{p}\) above \(\Point{p}\) and the angle of elevation of its immediate roof above \(\Point{p}\). For instance, the subangle of Mirror Lake is the arcsine of its rut divided by its submission, \(\arcsin\qty(\frac{\SI{1034}{\meter}}{\SI{2682}{\meter}}) = \ang{22.7}\). While one might expect a higher value given how steeply Half Dome rises above Mirror Lake, subangle also factors in the less steep rise of the higher peaks east of Half Dome. In comparison, the subangle of the summit of Half Dome is only \(\arcsin\qty(\frac{\SI{68}{\meter}}{\SI{1252}{\meter}}) = \ang{3.1}\), as Half Dome does not dip steeply below higher mountains nearby.
\subsubsection{Rangle}
The \textit{rangle} of point \(\Point{p}\) is equal to the following:
\[\Rangle{p} = \arcsin\qty(\frac{\Fluctuation{p}}{\Range{p}})\]
Rangle yields a value from \ang{0} to \ang{90} that describes the overall steepness of the surroundings of a point in both the upwards and downwards directions, weighted towards the direction of greater curvature-scale relief. The rangle of a point lies between its domangle and subangle. As an example, the rangle of the summit of Half Dome is the arcsine of its fluctuation divided by its range, \(\arcsin\qty(\frac{\SI{1161}{\meter}}{\SI{3487}{\meter}}) = \ang{19.4}\). The rangle of Half Dome is weighted more towards its domangle, as the dominance of Half Dome is greater than its submission.
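The claim that rangle lies between subangle and domangle can be checked numerically with the Half Dome figures quoted above:

```python
import math

# Shared form of domangle, subangle, and rangle: arcsin of a ratio, in degrees.
def angle_measure(numerator, denominator):
    return math.degrees(math.asin(numerator / denominator))

domangle = angle_measure(1093, 2235)  # jut / dominance ≈ 29.3°
subangle = angle_measure(68, 1252)    # rut / submission ≈ 3.1°
rangle = angle_measure(1161, 3487)    # fluctuation / range ≈ 19.4°
```

As expected, Half Dome's rangle (≈ 19.4°) falls between its subangle and domangle, closer to the domangle side.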
\subsection{Domain-Restricted Point Measures}
Some tasks call for including only certain points on the planetary surface when calculating the datumless point measures. Examples include the jut of Mt. Everest with respect to its Nepal side (ignoring points on the Tibetan side), or the dry submission of an underwater point with respect to the sea floor (ignoring points on land).
Let \(\Manifold{t}\) be a subset of the planetary surface (\(\Manifold{t} \!\subset\! \Manifold{s}\)) representing the region that \(\Manifold{s}\) is being restricted to. Let \(f\) be the point measure whose domain-restricted form is of interest, and let \(f(\Point{p}, \Manifold{t})\) denote the domain-restricted form of \(f\) with respect to \(\Manifold{t}\).
If \(f\) is a non-composite measure (i.e., dominance, submission, jut, or rut), \(f(\Point{p}, \Manifold{t})\) is equivalent to the result of taking the maximum with respect to all points in \(\Manifold{t}\), rather than all points in \(\Manifold{s}\). For example:
\[
\DominanceRestricted{p}{t} = \max_{\Point{q} \, \in \, \Manifold{t}} \, \Height{p}{q}
\]
Meanwhile, if \(f\) is a composite measure, \(f(\Point{p}, \Manifold{t})\) is equivalent to the result of replacing all non-composite point measures in the formula for \(f\) by their domain-restricted forms. For example:
\[
\NormalizedDominanceRestricted{p}{t} = \frac{\DominanceRestricted{p}{t}}{\RangeRestricted{p}{t}} = \frac{\DominanceRestricted{p}{t}}{\DominanceRestricted{p}{t} + \SubmissionRestricted{p}{t}}
\]
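On a sampled surface, domain restriction simply shrinks the set of candidate points in the maximum. A minimal sketch, where `height` is a hypothetical stand-in for the pairwise height function:

```python
# Sketch: domain-restricted dominance over sampled surface points.
# `height(elevations, p, q)` is a hypothetical stand-in for the height
# of p with respect to q.
def height(elevations, p, q):
    return elevations[p] - elevations[q]

def dominance(elevations, p, region):
    # region: iterable of sample-point indices the surface is restricted to
    return max(height(elevations, p, q) for q in region)

elev = [0, 5, 2, 8]
full = dominance(elev, 3, range(len(elev)))  # unrestricted: max over all points
restricted = dominance(elev, 3, [1, 2])      # restricted: only points 1 and 2
```

Restricting the domain can only shrink (or preserve) the maximum, since fewer candidate points are considered.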
\section{Datumless Surface Measures}
The datumless surface measures reveal various aspects about the relief and surface characteristics of a region, as opposed to that of a single point.
\subsection{Preliminaries: Datumless Surface Functionals}
The datumless surface functionals (not to be confused with the datumless surface measures) are datumless formulations of surface area, mean value of a surface variable, and other statistics of a surface variable, all of which can be applied directly to the planetary surface without projecting it onto an arbitrary reference ellipsoid. However, quite conveniently, the datumless surface functionals on most roughly spherical planets measure a similar value as their analogues on a reference ellipsoid, and can be approximated as such in a geographic information system (GIS) program. The datumless surface functionals are not limited to topographic applications, and can be applied outside the field.
\subsubsection{Datumless Surface Area}
Due to the fractal-like texture of the planetary surface, it does not make sense to calculate its surface area directly. Instead, the most common pre-existing way of representing surface area involves projection onto a reference ellipsoid. While convenient, the requirement of a datum makes this approach arbitrary to an extent.
Meanwhile, consider the equipotential surfaces, or surfaces of constant gravitational potential. The equipotential surfaces are physically meaningful surfaces with a defined surface area. In addition, there is a one-to-one correspondence between points in \(\Manifold{s}\) and points on an equipotential surface. This is because points in \(\Manifold{s}\) can be moved along their plumblines to their corresponding locations on an equipotential surface. One may think of choosing a particular equipotential surface to represent the surface area of a planet. However, such an approach is still arbitrary, as the choice of which equipotential surface to use remains open-ended.
The non-arbitrary solution is to not have to choose a particular equipotential surface, but instead to add up the surface areas of bits and pieces of the equipotential surfaces that intersect the planetary surface. Let \(\Manifold{t}\) be a subset of the planetary surface (\(\Manifold{t} \subseteq \Manifold{s}\)) denoting the region whose surface area is of interest. The \textit{datumless surface area} of \(\Manifold{t}\), denoted by \(\DatumlessSurfaceArea{t}\), is the sum of the surface areas of infinitesimally small portions of different equipotential surfaces that intersect \(\Manifold{t}\). Datumless surface area can be expressed using the coarea formula from geometric measure theory \parencite{coarea-formula}:
\[
\DatumlessSurfaceArea{t} = \int_{T} \abs{\nabla\, U(\Point{p})} \,d\:\!\Point{p}
\]
where \(U(\Point{p})\) denotes the gravitational potential at point \(\Point{p}\).
\subsubsection{Datumless Mean Value}
Let \(f\) be a function that assigns points on the planetary surface \(\Manifold{s}\) to numerical values. Examples of such functions include surface temperature, precipitation, and in the case of this paper, the datumless point measures. The \textit{datumless mean value} of \(f\) over the region \(\Manifold{t}\), denoted by \(\fbar(\Manifold{t})\), is the sum of the products of the surface areas of infinitesimally small portions of \(\Manifold{t}\)-\:\!intersecting equipotential surfaces with the values of \(f\) at their respective locations, all of which is divided by the datumless surface area of \(\Manifold{t}\):
\[
\DatumlessMeanValue{t} = \frac{1}{\DatumlessSurfaceArea{t}} \int_{T} f(\Point{p}) \, \abs{\nabla\, U(\Point{p})} \,d\:\!\Point{p}
\]
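On a discretized surface, the coarea-weighted integral reduces to a sum over cells weighted by \(\abs{\nabla U}\) times each cell's measure, i.e., by the equipotential-surface area each cell contributes. A sketch of the datumless mean value under that assumption:

```python
# Sketch: discrete datumless mean value. `areas[i]` stands for the
# equipotential-surface area contributed by cell i (the |grad U| weight
# times the cell's measure); `values[i]` is f evaluated at that cell.
def datumless_mean(values, areas):
    total = sum(areas)
    return sum(v * a for v, a in zip(values, areas)) / total

mean = datumless_mean([1.0, 3.0], [1.0, 3.0])  # area-weighted mean = 2.5
```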
\subsubsection{Other Datumless Statistics}
While its applications are outside the scope of this paper, it is possible to construct a frequency distribution of the values that \(f\) can take on within a region. This allows the representation of other statistics beyond mean value, such as standard deviation, median, and quantiles.
Let \(\Manifold{t}_{f \,\leq\, x}\) denote the subset of \(\Manifold{t}\) consisting of all points in \(\Manifold{t}\) where \(f(\Point{p})\) is less than or equal to \(x\):
\[
\Manifold{t}_{f \,\leq\, x} = \qty{\Point{p} \in \Manifold{t} \mid f(\Point{p}) \leq x}
\]
The \textit{datumless percentile} of the value \(x\) for the surface function \(f\) on region \(\Manifold{t}\), denoted by \(\DatumlessPercentile{x}{f}{t}\), is equal to the following:
\[
\DatumlessPercentile{x}{f}{t} = \frac{\DSA(\Manifold{t}_{f \,\leq\, x})}{\DatumlessSurfaceArea{t}}
\]
Using datumless percentile, one can construct a \textit{datumless frequency distribution} of the values of \(f\) within \(\Manifold{t}\) by calculating the datumless percentile of all values of \(f\) within \(\Manifold{t}\).\,\footnote{Note that more computationally efficient ways of constructing a datumless frequency distribution may exist; this is only a proof of concept.} From the datumless frequency distribution, a variety of \textit{datumless statistics} can be derived.\footnote{Concepts in the datumless surface functionals can also be applied to paths on the planetary surface that are locally one-dimensional. In such cases, datumless surface area becomes datumless path length instead.}
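Under the same discretization assumption, the datumless percentile is an area fraction; a sketch:

```python
# Sketch: discrete datumless percentile, the fraction of (equipotential) area
# where f takes a value at or below x. `areas` are the per-cell area weights.
def datumless_percentile(values, areas, x):
    total = sum(areas)
    below = sum(a for v, a in zip(values, areas) if v <= x)
    return below / total
```

Sweeping `x` over the values of `f` within the region yields the datumless frequency distribution described above.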
\subsection{Datumless Surface Measures: Special Considerations}
The \textit{datumless surface measures} describe various aspects about the relief and surface characteristics within a region, as opposed to that of a single point. They are essentially the mean values of the datumless point measures, with a few caveats and special considerations:
\subsubsection{Domain-Restricted Surface Measures}
Let \(f\) be the point measure whose mean value over region \(\Manifold{t}\) is being represented. If \(f\) is non-domain-restricted, its mean value over \(\Manifold{t}\) is notated as \(\DatumlessMeanValue{t}\). However, if \(f\) is domain-restricted over region \(\Manifold{g}\), its mean value over \(\Manifold{t}\) is notated as \(\DatumlessMeanValueRestricted{t}{g}\). The most common form of domain restriction regarding the datumless surface measures is to restrict the domain of \(f\) to \(\Manifold{t}\), the region whose mean value is being calculated. This is notated as \(\DatumlessMeanValueRestricted{t}{t}\).
For instance, let \(\Manifold{t}\) denote the subset of the planetary surface consisting of all points in the Central Valley of California. The mean submission of Central Valley (with respect to the planetary surface), denoted by \(\MeanSubmission{t}\), is well over \SI{1000}{\meter}, as the Sierra Nevada and Coast Ranges rise significantly above the region. However, the mean submission of Central Valley with respect to itself, denoted by \(\MeanSubmissionRestricted{t}{t}\), is much closer to 0, as the region itself is fairly flat.
\subsubsection{Special Definition for Composite Surface Measures}
If \(f\) is a composite point measure, \(\fbar\) is not simply the mean value of \(f\) over all points in \(\Manifold{t}\). Instead, it is defined as being equivalent to the result of replacing all non-composite point measures in the formula for \(f\) by their mean values over \(\Manifold{t}\). For example:
\[
\MeanNormalizedDominance{t} = \dfrac{\MeanDominance{t}}{\MeanRange{t}} = \dfrac{\MeanDominance{t}}{\MeanDominance{t} + \MeanSubmission{t}}
\]
The same applies to domain-restricted surface measures. For example:
\[
\MeanRangleRestricted{t}{t} = \dfrac{\MeanFluctuationRestricted{t}{t}}{\MeanRangeRestricted{t}{t}} = \dfrac{\MeanJutRestricted{t}{t} + \MeanRutRestricted{t}{t}}{\MeanDominanceRestricted{t}{t} + \MeanSubmissionRestricted{t}{t}}
\]
\subsection{Measuring Surface Characteristics}
This subsection describes the appropriate datumless surface measures to use to quantify different surface characteristics, namely ruggedness, steepness, skewness, and kurtosis. The datumless surface measures are inspired by the surface roughness parameters in mechanical engineering \parencite{roughness-parameters}.
\subsubsection{Curvature-Scale vs. Immediate Relief}
Depending on the resolution at which relief is to be assessed, different datumless surface measures should be used.
The \textit{curvature-scale relief} of a region is concerned with the shape, size, and structure of large-scale features such as mountain ranges and plains. Measures of curvature-scale relief include mean dominance, mean submission, mean range, and mean normalized dominance.
Meanwhile, the \textit{immediate relief} of a region is concerned with the shape, size, and structure of neighboring small-scale features, such as mountains and valleys within a mountain range. Measures of immediate relief include mean jut, mean rut, mean fluctuation, and mean normalized jut.
\subsubsection{Rise, Dip, and Span-Based Relief}
Certain datumless surface measures may be more appropriate than others, depending on the unique geomorphic processes within the region of interest.
\textit{Rise-based} surface measures equate relief with rise above local terrain. They include mean dominance, mean jut, and mean domangle. Rise-based surface measures make the most sense in places where the main geomorphic processes tend to generate mostly protruding features. This is true for much of Earth, as the planet's unique plate tectonics enable the formation of extensive mountain ranges.
\textit{Dip-based} surface measures equate relief with dip below local terrain. They include mean submission, mean rut, and mean subangle. Dip-based surface measures make the most sense in places where the main geomorphic processes tend to generate mostly recessed features. Such locations include ocean trenches on Earth and the Valles Marineris canyon on Mars.
\textit{Span-based} surface measures equate relief with both rise above and dip below local terrain, giving equal weight to rises and dips of the same magnitude. Span-based surface measures include mean range, mean fluctuation, and mean rangle. The span-based approach makes sense in places with a fairly balanced degree of protruding and recessed features. On Earth, that includes certain highland and mesa regions such as the Colorado Plateau and the Ethiopian Highlands. Span-based surface measures should be used in places heavily shaped by asteroid impacts, which tend to generate both recessed crater floors and protruding crater rims. Such locations include much of the Moon and most other planets, which lack the plate tectonics unique to Earth. In addition, span-based measures are recommended for comparing the degree of relief across different planets with differing geomorphic processes.
\subsubsection{Ruggedness Measures}
In this paper, \textit{ruggedness} refers to the degree of vertical relief within a region. The most common pre-existing method of quantifying ruggedness involves taking an average of elevation or an elevation-derived measure. However, elevation not only requires a datum, but also does not directly describe local relief, unlike the datumless measures.
Within the datumless framework, curvature-scale ruggedness is quantified by mean dominance (rise-based), mean submission (dip-based), and mean range (span-based).
In the meantime, immediate ruggedness is quantified by mean jut (rise-based), mean rut (dip-based), and mean fluctuation (span-based). The immediate ruggedness measures are useful for capturing the impressiveness of a mountainous region, as they account for both height differences and steepness between neighboring mountains and valleys.
\subsubsection{Steepness Measures}
Outside the datumless framework, slope is often used to quantify the steepness of a point's surroundings and averaged to determine the steepness of a region. Nevertheless, slope is wholly dependent on the resolution of a surface model and fails to provide meaningful values at very high resolutions.
Within the datumless framework, mean domangle (rise-based), mean subangle (dip-based), and mean rangle (span-based) are used to quantify the steepness of a region. They provide an interpolated value between the angle of elevation of a typical point relative to its immediate surroundings and the angle of elevation of a typical point relative to its curvature-scale surroundings.
\subsubsection{Skewness Measures}
In this paper, \textit{skewness} refers to the balance between low and high terrain. A region is low-skewed if it consists predominantly of low terrain, and high-skewed if it consists predominantly of high terrain. For instance, a mostly flat region with widely-separated mountain ranges is low-skewed, whereas a mostly flat region with widely-separated valleys is high-skewed.
Within the datumless framework, curvature-scale skewness is quantified by mean normalized dominance, with a value close to 0 indicating a low skew and a value close to 1 indicating a high skew. On the curvature-scale, Earth is especially low-skewed compared to other planets, as due to its unique plate tectonics, mountain ranges are its most common source of relief. In comparison, the Moon has much more of a neutral skew, as its asteroid impacts tend to generate a similar degree of protruding and recessed features.
Meanwhile, immediate skewness is quantified by mean normalized jut, with a value close to 0 indicating low-skewed terrain and a value close to 1 indicating high-skewed terrain. Immediate skewness is determined by the shape of neighboring mountains and valleys. For instance, a landscape of pointy mountains separated by U-shaped valleys has a low skew, whereas a landscape of flat-topped mountains separated by narrow valleys has a high skew.
\subsubsection{Kurtosis Measures}
In this paper, \textit{kurtosis} refers to the tendency of terrain to gravitate to high and low extremes. A region has a high kurtosis if points within it often gravitate to such extremes, and a low kurtosis if points within it tend to stay close to the middle of the local span of relief.
Kurtosis measures require a few preliminaries before arriving at their main definitions. The measure of curvature-scale kurtosis is defined as follows:
\begin{itemize}
\item Let the \textit{deviation} of point \(\Point{p}\) be equal to half its range minus its submission. Deviation describes how far a point is from central tendency on the curvature-scale.
\[
\Deviation{p} \:=\: \frac{\Range{p}}{2} - \Submission{p} \:=\: \frac{\Dominance{p} - \Submission{p}}{2}
\]
\item Let the \textit{mean absolute deviation} (MAD) of region \(\Manifold{t}\) be equal to the mean value of the absolute value of deviation \(\abs{\Deviation{p}}\) over region \(\Manifold{t}\), without applying the special definition of mean value for composite point measures.\footnote{Mean absolute deviation can also be used to measure curvature-scale ruggedness.}
\[
\MeanAbsoluteDeviation{t} = \frac{1}{\DatumlessSurfaceArea{t}} \int_{T} \abs{\Deviation{p}} \, \abs{\nabla\, U(\Point{p})} \,d\:\!\Point{p}
\]
\end{itemize}
The measure of kurtosis based on these preliminaries is called \textit{normalized mean absolute deviation} (NMAD), which is equal to the mean absolute deviation of \(\Manifold{t}\) divided by half the mean range of \(\Manifold{t}\):
\[
\NormalizedMeanAbsoluteDeviation{t} = \frac{\MeanAbsoluteDeviation{t}}{\MeanRange{t} \,/\, 2} = \frac{2 (\MeanAbsoluteDeviation{t})}{\MeanRange{t}}
\]
Normalized mean absolute deviation yields a value from 0 to 1, with a higher value indicating greater kurtosis on the curvature-scale.
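The curvature-scale kurtosis pipeline, from per-point deviation through NMAD, can be sketched discretely, assuming sampled per-point dominance and submission with equipotential-area weights:

```python
# Sketch: normalized mean absolute deviation (NMAD) on a sampled surface.
# `dom` and `sub` are per-point dominance and submission values; `areas` are
# the equipotential-area weights from the coarea formula.
def nmad(dom, sub, areas):
    total = sum(areas)
    # deviation = (dominance - submission) / 2 at each point
    deviations = [(d - s) / 2 for d, s in zip(dom, sub)]
    mad = sum(abs(v) * a for v, a in zip(deviations, areas)) / total
    # mean range = mean dominance + mean submission (composite definition)
    mean_range = sum((d + s) * a for d, s, a in zip(dom, sub, areas)) / total
    return mad / (mean_range / 2)
```

A region where every point is either fully dominant or fully submissive gives NMAD 1 (maximal kurtosis), while a region where every point sits exactly mid-span gives 0.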
Meanwhile, the measure of immediate kurtosis is defined via a similar process:
\begin{itemize}
\item Let the \textit{perturbation} of point \(\Point{p}\) be equal to half its fluctuation minus its rut. Perturbation describes how far a point is from central tendency relative to its immediate surroundings.
\[\Perturbation{p} \:=\: \frac{\Fluctuation{p}}{2} - \Rut{p} \:=\: \frac{\Jut{p} - \Rut{p}}{2}\]
\item Let the \textit{mean absolute perturbation} of region \(\Manifold{t}\), denoted by \(\MeanAbsolutePerturbation{t}\), be equal to the mean value of the absolute value of perturbation \(\abs{\Perturbation{p}}\) over region \(\Manifold{t}\), without applying the special definition for composite surface measures.\footnote{Mean absolute perturbation can also be used to measure immediate ruggedness.}
\[
\MeanAbsolutePerturbation{t} = \frac{1}{\DatumlessSurfaceArea{t}} \int_{T} \abs{\Perturbation{p}} \, \abs{\nabla\, U(\Point{p})} \,d\:\!\Point{p}
\]
\end{itemize}
The \textit{normalized mean absolute perturbation} of region \(\Manifold{t}\), denoted by \(\NormalizedMeanAbsolutePerturbation{t}\), is equal to the mean absolute perturbation of \(\Manifold{t}\) divided by half the mean fluctuation of \(\Manifold{t}\):
\[
\NormalizedMeanAbsolutePerturbation{t} = \frac{\MeanAbsolutePerturbation{t}}{\MeanFluctuation{t} \,/\, 2} = \frac{2 (\MeanAbsolutePerturbation{t})}{\MeanFluctuation{t}}
\]
Normalized mean absolute perturbation yields a value from 0 to 1, with a higher value indicating greater immediate kurtosis.
The domain-restricted forms of the kurtosis measures are obtained by replacing all instances of \((\Point{p})\) in the previous expressions by \((\Point{p}, \Manifold{t})\) and all instances of \((\Manifold{t})\) by \((\Manifold{t}, \Manifold{g})\).
\section{Computing the Datumless Point Measures on Earth and Beyond}
This section introduces formulas that can be directly applied in a GIS program to compute the datumless point measures. It also showcases the values of the datumless point measures at various locations on Earth, the Moon, Mars, and Vesta, comparing their values with elevation and prominence. The datumless surface measures are not computed due to their high computational cost.
\subsection{Computing the Datumless Point Measures}
\subsubsection{Converting a Surface Model to ECEF}
The first step in computing the datumless point measures is to convert an existing surface model such as a DEM to an ECEF coordinate system. The following models are used for their respective planets:
\begin{table}[H]
\caption{Surface models used for their respective planets.}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Planet} & \textbf{Model} & \textbf{Height Type} & \textbf{Resolution} \\ \hline
Earth (land and water surface) & ALOS World 3D \parencite{alos-1, alos-2, alos-3, alos-4} & Orthometric & \SI{30}{\meter} \\
Earth (bathymetry) & GEBCO\_2022 Grid \parencite{gebco} & Orthometric & \SI{450}{\meter} \\
Earth (geoid) & EGM96 \parencite{egm96-1, egm96-2} & Geoid & \SI{450}{\meter} \\
Moon & LRO LOLA \parencite{lola} & Ellipsoidal & \SI{118}{\meter} \\
Mars (orthometric) & MOLA - HRSC Blended \parencite{mola-hrsc} & Orthometric & \SI{200}{\meter} \\
Mars (areoid) & GMM-2B \parencite{gmm-2b} & Areoid & \SI{200}{\meter} \\
Vesta & Dawn FC HAMO \parencite{hamo} & Radius & \SI{93}{\meter} \\ \hline
\end{tabular}%
}
\end{table}
On Earth and Mars, elevation is presented as orthometric height, or height above the geoid---namely, the EGM96 geoid on Earth and the GMM-2B areoid (essentially a Martian geoid with an arbitrarily defined geopotential) on Mars. Before converting to ECEF, it is a good practice to convert orthometric height to height above a reference ellipsoid---the WGS84 reference ellipsoid on Earth and a sphere with a \SI{3396190}{\meter} radius on Mars.\footnote{The usage of a reference ellipsoid in this case is non-arbitrary, as it is for approximating the direction of gravity---a meaningful physical concept---rather than for providing an arbitrary datum to quantify relief.} The following equation is used for this conversion:
\[h = H + N\]
where \(h\) denotes ellipsoidal height, \(H\) denotes orthometric height, and \(N\) denotes geoid height, i.e., the height of the geoid above the reference ellipsoid. The EGM96 geoid model and GMM-2B areoid model \parencite{gmm-2b} are used for \(N\) on Earth and Mars, respectively. This conversion on Mars is done with the help of Ames Stereo Pipeline \parencite{ames-stereo-pipeline}.
The next step is converting ellipsoidal height to \(X\), \(Y\), and \(Z\) coordinates in ECEF using the following equations \parencite{geodetic-to-ecef}:
\begin{align*}
& \latitude = \text{latitude} \\
& \longitude = \text{longitude} \\
& h = \text{ellipsoidal height} \\
& a = \text{length of equatorial radius of reference ellipsoid} \\
& b = \text{length of polar radius of reference ellipsoid} \\
& f = 1 - \frac{b}{a}\;\text{, the flattening of the ellipsoid} \\
& e^2 = 2f - f^2 \\
& N(\latitude) = \dfrac{a}{\sqrt{1 - e^2 \sin^2 \latitude}} \\
& X = (N(\latitude) + h) \cos \latitude \cos \longitude \\
& Y = (N(\latitude) + h) \cos \latitude \sin \longitude \\
& Z = \qty((1 - e^2) \, N(\latitude) + h) \sin \latitude
\end{align*}
The values of \(a\) and \(f\) used for their respective planets are listed in the table below; \(b\) is derivable from \(a\) and \(f\). Heights in the Vesta model, provided as radius from the center of mass, can be treated as height above an ellipsoid with dimensions \(a = 0\) and \(f = 0\).
\begin{table}[H]
\centering
\caption{Values of \(a\) and \(f\) used for their respective planets.}
\begin{tabular}{|l|l|l|}
\hline
\textbf{Planet} & \(a\) & \(f\) \\ \hline
Earth & \SI{6378137}{\meter} & \(1 \,/\, 298.257223563\) \\
Moon & \SI{1737400}{\meter} & 0 \\
Mars & \SI{3396190}{\meter} & 0 \\
Vesta & \SI{0}{\meter} & 0 \\ \hline
\end{tabular}
\end{table}
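The geodetic-to-ECEF equations above can be sketched directly; the WGS84 defaults for Earth below come from the table, and degree/meter inputs are an assumption of this sketch:

```python
import math

# Geodetic coordinates (lat, lon in degrees; ellipsoidal height h in meters)
# to ECEF, following the equations above. a and f default to the WGS84
# values used for Earth.
def geodetic_to_ecef(lat_deg, lon_deg, h, a=6378137.0, f=1/298.257223563):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    e2 = 2*f - f*f                                 # e^2 = 2f - f^2
    n = a / math.sqrt(1 - e2 * math.sin(lat)**2)   # N(lat)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = ((1 - e2) * n + h) * math.sin(lat)
    return x, y, z
```

At \((\ang{0}, \ang{0}, \SI{0}{\meter})\) this returns \((a, 0, 0)\), and at the pole the \(Z\) coordinate equals the polar radius \(b = a(1 - f)\), as expected.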
\subsubsection{Approximating the Vertical Unit Vector}
After converting a DEM to ECEF, the next step is to compute the vertical unit vector for points on the planetary surface.
On Earth and Mars, the vertical unit vector can be approximated to point normal to their respective geoid and areoid models, which are created to model an equipotential surface. Using GIS software, one can compute the slope \(\slope\) and aspect \(\aspect\) angles of a geoid or areoid model. (Aspect is an angle from \(\ang{0}\) to \(\ang{360}\) describing the direction that the slope faces, with \(\ang{0}\) denoting a north-facing slope and an increasing angle corresponding to a clockwise rotation on the compass rose.) Given the slope and aspect of the geoid or areoid at a particular latitude-longitude pair, the vertical unit vector at that location is calculated as follows:
\begin{align*}
& \vu{k} = \Rotation{z}{\longitude} \times \Rotation{y}{-\latitude} \times \Rotation{x}{-\aspect} \times \Rotation{y}{-\slope} \times\!
\begin{pmatrix}
1 \\ 0 \\ 0
\end{pmatrix} \\
& = \begin{pmatrix}
(\cos\slope\cos\latitude - \sin\slope\cos\aspect\sin\latitude)\cos\longitude - \sin\slope\sin\aspect\sin\longitude \\
(\cos\slope\cos\latitude - \sin\slope\cos\aspect\sin\latitude)\sin\longitude + \sin\slope\sin\aspect\cos\longitude \\
\cos\slope\sin\latitude + \sin\slope\cos\aspect\cos\latitude
\end{pmatrix}
\end{align*}
where the \(R\)\,'s denote 3-dimensional rotation matrices.
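The rotation product above can be implemented directly; a minimal NumPy sketch (function names are illustrative, all angles in radians):

```python
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def vertical_unit_vector(lat, lon, slope, aspect):
    """Unit normal to the geoid/areoid at (lat, lon), given the slope and
    aspect of the equipotential surface, per the rotation product above."""
    e1 = np.array([1.0, 0.0, 0.0])
    return Rz(lon) @ Ry(-lat) @ Rx(-aspect) @ Ry(-slope) @ e1
```

With zero slope the product collapses to the spherical normal \((\cos\latitude\cos\longitude,\ \cos\latitude\sin\longitude,\ \sin\latitude)\), matching the Moon case below.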
On the Moon, where a geoid model is not readily available, the vertical unit vector can be approximated to point normal to the reference ellipsoid. Since the Moon's reference ellipsoid is a perfect sphere, the vertical unit vector also points directly away from the planet's center of mass. Given the latitude and longitude of a point, its vertical unit vector normal to the reference ellipsoid is calculated as follows:
\[
\vu{k} = \Rotation{z}{\longitude} \times \Rotation{y}{-\latitude} \times\!
\begin{pmatrix}
1 \\ 0 \\ 0
\end{pmatrix} \\
= \begin{pmatrix}
\cos \latitude \cos \longitude \\
\cos \latitude \sin \longitude \\
\sin \latitude
\end{pmatrix}
\]
On Vesta, a reference ellipsoid of dimensions \(a \!=\! \SI{285}{\kilo\meter}\) and \(b \!=\! \SI{229}{\kilo\meter}\) has been previously used by NASA \cite{vesta-reference-ellipsoid}. However, there is no convenient way to convert distance from the center of mass (as provided by the Vesta surface model) to latitude and longitude on the NASA ellipsoid. Therefore, the vertical unit vector on Vesta is approximated with a different approach: Consider the family of concentric ellipsoids with the same center and the same proportions (i.e., the same \(a \,/\, b\) ratio) as the NASA ellipsoid. The vertical unit vector of a point is approximated as pointing normal to the concentric ellipsoid that it touches. Given the \(X\), \(Y\), and \(Z\) coordinates of a point on Vesta in ECEF, its vertical unit vector \(\vu{k}\) is approximated as follows:
\begin{equation*}
\begin{gathered}
\begin{aligned}
& \Vector{k} =
\begin{pmatrix}
X \,/\, a^2 \\
Y \,/\, a^2 \\
Z \,/\, b^2
\end{pmatrix} \\
& \vu{k} = \frac{\Vector{k}}{\abs{\Vector{k}}}
\end{aligned}
\end{gathered}
\end{equation*}
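This normal is the gradient of the quadratic form \((X^2 + Y^2)/a^2 + Z^2/b^2\) at the point, normalised. A minimal sketch (assuming NumPy; the function name is our own):

```python
import numpy as np

def vesta_vertical_unit_vector(X, Y, Z, a=285e3, b=229e3):
    """Outward unit normal of the concentric ellipsoid
    (x^2 + y^2)/a^2 + z^2/b^2 = const passing through (X, Y, Z).
    a, b default to the NASA reference-ellipsoid semi-axes in meters."""
    k = np.array([X / a**2, Y / a**2, Z / b**2])  # unnormalised gradient
    return k / np.linalg.norm(k)
```

On the polar axis this returns \((0, 0, \pm 1)\) and on the equatorial plane it points radially outward, as one would expect.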
After computing the ECEF coordinates and vertical unit vectors of points on the planetary surface, standard GIS operations can be used to compute all other quantities related to the datumless point measures. To save computational time, the aforementioned computations only need to be applied to points within the curvature-scale surroundings of \(\Point{p}\), as points very far away are irrelevant to the calculation of the datumless point measures.
\subsection{Datumless Point Measures on Earth and Beyond}
Google Earth Engine \parencite{google-earth-engine} is used to compute all values of the datumless point measures in this paper, along with generating all maps. Measured points are located with the help of Google Maps \parencite{google-maps}. Summit elevation and prominence values on Earth are either found on Peakbagger.com \parencite{peakbagger} or derived manually from a DEM, with the exception of the dry prominence of Mauna Kea, which is found in \cite{mauna-kea}. Measurements and related conclusions are subject to change and should always be verified.
\subsubsection{Earth}
Earth's topography is unique among the planets in the Solar System: unlike on other planets, plate tectonics play a major role in generating its mountain ranges.
An interesting global phenomenon to note is that mountains closer to polar latitudes, along with those with a higher base elevation, tend to measure a greater jut for their dominance, i.e., a greater domangle. This is likely due to the effects of glacial erosion in sculpting steeper mountains \parencite{glacier}. Closer to the poles, lower temperatures allow steep, glacier-carved flanks to extend to lower terrain, typically resulting in immediate bases being close to the bottom of a mountain range. Meanwhile, closer to the equator, glaciers are only supported at high altitudes, usually resulting in immediate bases being higher up within the mountain range that a point resides in.
In the contiguous U.S., the highest dominance values are found in the Sierra Nevada and the Cascade Range, with a handful of summits measuring above \(\SI{3500}{\meter}\) and Mt. Rainier being the only one to measure above \SI{4000}{\meter}. Despite having similar elevations, the Rocky Mountains have a lower dominance as a result of rising from higher plains, with the greatest values in the American Rockies only slightly exceeding \SI{2500}{\meter}. Places with a jut exceeding \SI{1000}{\meter} include the Teton Range, Glacier National Park, Half Dome, Mt. San Jacinto, the North Cascades, and last but not least, Mt. Rainier, which has the highest dominance and jut\footnote{For convenience, the jut of a mountain will refer to its summit measurement, even though non-summit locations may measure a higher jut.} of any major summit---hereby denoting a peak with at least \SI{300}{\meter} of prominence---in the contiguous United States. Within the Rocky Mountains, the subranges of Colorado are generally less steep than their more northerly neighbors, with the most impressive\footnote{The term \textit{impressive} is used objectively to denote a high jut.} summits in the state measuring a jut between \SI{500}{\meter} and \SI{750}{\meter}. In comparison to all of these places, the Appalachian Mountains feature significantly less relief as a result of old age, with all locations measuring below \SI{2000}{\meter} of dominance and \SI{500}{\meter} of jut.
In the rest of North America, the Canadian Rockies measure a similar dominance as their American counterparts but a significantly higher jut and domangle. Jut values of over \SI{1500}{\meter} are common in the region, culminating at Mt. Robson, which has over \SI{1900}{\meter} of jut. The highest and most impressive features in North America are found in Alaska and Northwest Canada, with Denali measuring the highest dominance of over \SI{5500}{\meter} and Mt. St. Elias measuring the highest jut of over \SI{2500}{\meter}. Mountains in Mexico tend to have a lower domangle, likely due in part to decreased glaciation. Pico de Orizaba stands out as the only major summit in the country with over \SI{5000}{\meter} of dominance and over \SI{1000}{\meter} of jut, with a few other stratovolcanoes measuring above \SI{4000}{\meter} of dominance.
South America is home to the Andes, the highest mountain range outside of Asia in terms of both dominance and elevation. Within the Andes, a handful of major summits have a dominance of over \(\SI{5500}{\meter}\), with Aconcagua measuring the greatest dominance of over \(\SI{6000}{\meter}\). The greatest jut values in the Andes are just over \SI{1800}{\meter}. The Southern Andes have the highest domangle, likely due to glaciation at lower altitudes in the colder climate. In fact, the juts of major summits in Patagonia are comparable to those of the Central and Northern Andes, which have almost twice the dominance. Meanwhile, the Central Andes, which correspond to the Atacama Desert and the Altiplano, tend to measure the lowest domangle values in the entire mountain range, which could in part be due to aridity limiting the formation of glaciers \cite{glacier-2}.
In Africa, mountains tend to measure a lower jut and domangle than on most other continents, likely due in part to decreased glaciation. The highest and most impressive mountain on the continent is Kilimanjaro, with a dominance of just over \SI{5000}{\meter} and a jut of just over \SI{1300}{\meter}. A few other major summits in Africa measure a dominance of over \SI{4000}{\meter}. A mountain that demonstrates the sculpting effects of glaciers particularly well is Mt. Kenya. The extinct stratovolcano rises very gradually from its base, all the way up to a height where glaciers can be supported, where it then rises abruptly to the summit. The immediate base of Mt. Kenya is located right where this transition occurs, while its base is located where the mountain's lower, more gradual slopes meet flat plains.
In Europe, the two highest mountain ranges are the Alps and Caucasus Mountains. Within the Alps, several major summits exceed \SI{4000}{\meter} of dominance, with Mont Blanc measuring the highest dominance of just over \SI{4400}{\meter}. The Alps have a remarkably high jut for their dominance, with numerous locations measuring over \SI{1000}{\meter} of jut and a few locations near Mont Blanc measuring over \SI{1500}{\meter}. Compared with the Alps, the Caucasus Mountains have slightly higher dominance values, with a few mountains measuring between \SI{4500}{\meter} and \SI{5000}{\meter}, including Mt. Elbrus, the mountain with the highest dominance in Europe. However, the Caucasus also generally have a slightly lower jut compared to the Alps, with the most impressive mountains measuring between \SI{1000}{\meter} and \SI{1500}{\meter} of jut and no mountain exceeding \SI{1500}{\meter}.
Asia is home to the most rugged terrain on Earth. The greatest values of dominance and jut on the planet are found in a region bounded roughly by the Tian Shan to the north, the Himalaya to the south, the Hengduan Mountains to the east, and the Pamir-Alay to the west. Within this region, numerous major summits have a dominance of over \SI{6000}{\meter} and a jut of over \SI{2000}{\meter}. In terms of dominance, the Himalaya is the highest mountain range of them all, with dominance values exceeding \SI{7000}{\meter} at a handful of major summits. Mount Everest measures the highest dominance on the planet with a value of \SI{8081}{\meter}, measured from its base at the bottom of the Himalaya where it meets the Indo-Gangetic Plain. Second and third place for major summits with the highest dominance go to Kangchenjunga and Lhotse. Meanwhile, K2 barely exceeds \SI{5500}{\meter} of dominance, as the Karakoram mountain range sits on a high plateau. However, both the Himalaya and Karakoram are home to summits exceeding \SI{2500}{\meter} of jut and occasionally \SI{3000}{\meter}. The greatest jut of any major summit goes to Nanga Parbat, with a value of over \SI{3100}{\meter}, measured from its immediate base at the bottom of its massive Rupal Face, often regarded as the tallest in the world. Asia is also home to the Tibetan Plateau, which is so large that the bases of mountains in its central parts are measured directly from the plateau, rather than from the low plains that the plateau rises from. The domangle of mountains in the interior of the Tibetan Plateau tends to be quite high, likely as a result of altitude-induced glaciation.
In Oceania, the mountain with the greatest dominance is Puncak Jaya, measuring a dominance of just over \SI{4800}{\meter}. The entire landmass of Australia is considerably flat, with Mt. Kosciuszko, its highest-dominance feature, measuring a dominance of just below \SI{2000}{\meter}, and all locations measuring below \SI{400}{\meter} of jut. In contrast, neighboring New Zealand features significantly greater relief. The highest dominance in New Zealand is measured atop Aoraki / Mt. Cook, with a value of just over \SI{3600}{\meter}. The glacier-carved fjords of Milford Sound have some of the highest jut values in the world for their dominance, with Mitre Peak measuring a jut of over \SI{1300}{\meter}, similar to that of the Matterhorn, which has over twice the dominance.
In Antarctica, the Sentinel Range is the highest mountain range, with Vinson Massif measuring the highest dominance of just over \SI{4500}{\meter} and a few locations measuring a jut of over \SI{1500}{\meter}. Moving inland, the ice sheet thickens so gradually that points in its central portions measure a dominance and jut close to 0, despite having thousands of meters of elevation. A similar phenomenon occurs in the center of the ice sheet of Greenland.
Regarding suboceanic features, the dry datumless measures directly describe their relief relative to the ocean floor, providing a convenient alternative to elevation, whose values need to be compared with other elevation values for this purpose. The greatest dry dominance on Earth is measured atop an unnamed seamount approximately \SI{60}{\kilo\meter} SSW of the southernmost point of Guam, with a value exceeding \SI{10200}{\meter}. The base of this seamount is located in the Mariana Trench, which measures the greatest dry submission on Earth with the same value. Outside the vicinity of the Mariana Trench, Mauna Kea in Hawaii also has a significant dry dominance of just over \SI{9300}{\meter}, measured from its base on the ocean floor.
\begin{table}[H]
\caption{Datumless point measures (unit: meters) at various summits on Earth.}
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
\textbf{Mountain} & \textbf{Mountain Range} & \textbf{dom} & \textbf{sub} & \textbf{jut} & \textbf{rut} & \textbf{Elev} & \textbf{Prom} \\ \hline
Mt. Washington & Appalachians & 1751 & 0 & 381 & 0 & 1917 & 1874 \\
Pikes Peak & Rocky Mountains & 2575 & 0 & 517 & 0 & 4301 & 1680 \\
Grand Teton & Rocky Mountains & 2421 & 0 & 1125 & 0 & 4197 & 1990 \\
Half Dome & Sierra Nevada & 2235 & 1252 & 1093 & 68 & 2694 & 414 \\
Mt. Whitney & Sierra Nevada & 3955 & 0 & 757 & 0 & 4419 & 3072 \\
Mt. Rainier & Cascade Range & 4193 & 0 & 1266 & 0 & 4392 & 4037 \\
Mt. Robson & Rocky Mountains & 3157 & 0 & 1907 & 0 & 3959 & 2819 \\
Denali & Alaska Range & 5765 & 0 & 2101 & 0 & 6190 & 6140 \\
Aconcagua & Andes & 6014 & 0 & 1832 & 0 & 6962 & 6962 \\
Mt. Fitz Roy & Andes & 3200 & 0 & 1776 & 0 & 3405 & 1951 \\
Kilimanjaro & East African Rift & 5067 & 0 & 1367 & 0 & 5895 & 5885 \\
Table Mountain & Cape Fold Belt & 1085 & 473 & 465 & 3.0 & 1085 & 1055 \\
Matterhorn & Alps & 3952 & 233 & 1364 & 3.8 & 4476 & 1038 \\
Mont Blanc & Alps & 4364 & 0 & 1730 & 0 & 4810 & 4697 \\
Mt. Elbrus & Caucasus Mountains & 4925 & 0 & 1106 & 0 & 5642 & 4741 \\
Kirkjufell & Sn\ae fellsnes Peninsula & 472 & 527 & 259 & 52 & 469 & 449 \\
Mt. Everest & Himalaya & 8081 & 0 & 2109 & 0 & 8849 & 8849 \\
Nanga Parbat & Himalaya & 7043 & 0 & 3166 & 0 & 8125 & 4608 \\
K2 & Karakoram & 5831 & 0 & 2542 & 0 & 8614 & 4020 \\
Mt. Fuji & NE Japan Arc & 3731 & 0 & 1062 & 0 & 3776 & 3776 \\
Puncak Jaya & Sudirman Range & 4842 & 0 & 1303 & 0 & 4884 & 4884 \\
Mt. Kosciuszko & Snowy Mountains & 1911 & 0 & 369 & 0 & 2228 & 2228 \\
Vinson Massif & Sentinel Range & 4584 & 0 & 1161 & 0 & 4892 & 4892 \\
Unnamed (Dry) & Mariana Arc & 10284 & 124 & 1410 & 1.0 & -27 & 148 \\
Mauna Kea (Dry) & Hawaiian Volcanoes & 9333 & 0 & 1203 & 0 & 4205 & 9330 \\ \hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Datumless point measures (unit: meters) at various non-summit locations on Earth.}
\centering
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\textbf{Location} & \textbf{dom} & \textbf{sub} & \textbf{jut} & \textbf{rut} & \textbf{Elev} \\ \hline
San Francisco & 16 & 991 & 0.2 & 25 & 17 \\
Denver & 60 & 2476 & 0.3 & 105 & 1627 \\
Mt. Sunflower & 142 & 46 & 1.3 & 0.1 & 1231 \\
Mirror Lake & 808 & 2682 & 19 & 1034 & 1263 \\
Whitney Portal & 2123 & 1862 & 134 & 680 & 2552 \\
Crater Lake & 1281 & 862 & 32 & 89 & 1882 \\
Mather Point & 1462 & 1060 & 710 & 13 & 2170 \\
Addis Ababa & 527 & 1075 & 6.0 & 64 & 2293 \\
Kathmandu & 935 & 6147 & 26 & 492 & 1307 \\
Everest Base Camp & 4558 & 3477 & 309 & 1510 & 5364 \\
Lhasa & 31 & 2728 & 0.6 & 287 & 3656 \\
Challenger Deep (Dry) & 0 & 8892 & 0 & 1274 & -10923 \\ \hline
\end{tabular}
\end{table}
\subsubsection{Moon, Mars, and Vesta}
The datumless measures are particularly handy on planets without a sea level, as they directly describe relief relative to local terrain (as opposed to elevation, whose values need to be compared with other elevation values for this purpose). Generally, smaller planets tend to feature greater relief as quantified by the datumless measures. This is likely due to the lower gravitational pull allowing higher mountains to form at isostatic equilibrium \parencite{isostasy}.
The Moon is significantly more rugged than the Earth when it comes to averages. Lunar terrain is fairly neutrally skewed, as asteroid impacts tend to generate a similar degree of protruding and recessed features. The far side of the Moon is more rugged than the near side. The point with the highest elevation, known as the Selenian Summit, measures a dominance of slightly over \SI{8300}{\meter} and a jut of slightly over \SI{1100}{\meter}. However, some places with a lower elevation feature even greater relief, with range exceeding \SI{9000}{\meter} and fluctuation exceeding \SI{2000}{\meter} in a few locations on the far side. The greatest measured dominance and jut values occur at an unnamed summit approximately \SI{200}{\kilo\meter} northwest of the Lippmann Crater, with dominance exceeding \SI{10000}{\meter} and jut exceeding \SI{2600}{\meter}. The near side of the Moon is less rugged than the far side, containing several maria---large, flat impact basins whose central locations typically measure a fluctuation below \SI{10}{\meter}. The famous Montes Apenninus mountain range near the Apollo 15 landing site measures dominance values above \SI{5000}{\meter} and jut values above \SI{2000}{\meter} in places, with the highest dominance measured atop Mons Huygens.
Mars is a land of topographic extremes, with superlative point measures exceeding those of the Moon. The shield volcano Olympus Mons measures the highest dominance on the planet, with a summit dominance of just over \SI{14100}{\meter}. Despite this, Olympus Mons rises very gradually, as demonstrated by the low domangle of its summit. In fact, the summit is below the horizontal plane of some points at the bottom of the volcano's lower cliffs. Certain points on the slopes of Olympus Mons measure a range exceeding the summit's dominance by a fair amount, with occasional values over \SI{17000}{\meter}. Mars is also home to the giant Valles Marineris canyon, with several times the relief of the Grand Canyon, measuring over \SI{10000}{\meter} of submission or over \SI{3000}{\meter} of rut in parts of its rim. Yet another notable feature on Mars is the Hellas Planitia impact basin. Locations near the rim of the basin can measure a range of over \SI{5500}{\meter} and a fluctuation of over \SI{1000}{\meter}, but points in its central portions typically measure a low fluctuation below \SI{10}{\meter}, indicative of flat surroundings. The northern hemisphere of Mars contains large swaths of plains that also consistently measure fluctuation values below \SI{10}{\meter}.
Of all the planetary bodies mentioned in this paper, Vesta is by far the most rugged. The most salient feature on Vesta is Rheasilvia, an impact crater with a central peak near the south pole. The central peak measures a dominance of over \SI{16000}{\meter}, even greater than that of Olympus Mons. The rim measures even higher dominance values, upwards of \SI{18000}{\meter}. Fluctuation can exceed \SI{19000}{\meter} in some parts of the Rheasilvia region. The equatorial and northern regions of Vesta are comparatively less rugged, but still very much so. The Divalia Fossa canyon near the equator has less relief than Valles Marineris on Mars, but still significantly more relief than the Grand Canyon. Overall, Vesta is extremely rugged, with even the flattest locations measuring thousands of meters of range and hundreds of meters of fluctuation.
\begin{table}[H]
\caption{Datumless point measures (units: meters) on Moon, Mars, and Vesta.}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\textbf{Planet} & \textbf{Location} & \textbf{dom} & \textbf{sub} & \textbf{jut} & \textbf{rut} \\ \hline
\multirow{7}{*}{Moon} & Apollo 11 Landing Site & 372.8 & 237.3 & 6.8 & 3.1 \\
& Apollo 15 Landing Site & 773 & 3957 & 23 & 780 \\
& Mons Hadley & 4726 & 0 & 1953 & 0 \\
& Mons Huygens & 5151 & 0 & 1570 & 0 \\
& Selenian Summit & 8352 & 0 & 1147 & 0 \\
& Unnamed Summit, \(\sim\!\SI{200}{\kilo\meter}\) NW of Lippmann & 10806 & 0 & 2600 & 0 \\
& Mare Imbrium, Central & 79 & 7 & 0.7 & 0.01 \\ \hline
\multirow{7}{*}{Mars} & Olympus Mons, Summit & 14175 & 0 & 947 & 0 \\
& Olympus Mons, NW Clifftop & 13361 & 345 & 2836 & 0.5 \\
& Valles Marineris, Coprates Chasma, Floor & 0 & 10379 & 0 & 1922 \\
& Valles Marineris, Candor Chasma, Rim & 8048 & 2310 & 3562 & 67 \\
& Hellas Planitia, Central & 75 & 414 & 1.1 & 4.5 \\
& Curiosity Landing Site & 53 & 4808 & 0.4 & 761 \\
& Mt. Sharp & 4904 & 0 & 1507 & 0 \\ \hline
\multirow{5}{*}{Vesta} & Rheasilvia, Central Peak & 16321 & 0 & 3060 & 0 \\
& Rheasilvia, Floor & 0 & 16916 & 0 & 4108\\
& Rheasilvia, Rim & 18920 & 287 & 6492 & 11 \\
& Divalia Fossa, Floor & 0 & 4332 & 0 & 1235 \\
& Feralia Planitia, Floor & 611 & 8647 & 114 & 2133 \\ \hline
\end{tabular}%
}
\end{table}
\section{Future Work}
Future work on the concepts in this paper will likely focus on the following areas:
\begin{itemize}[topsep=-0.9ex, itemsep=-0.9ex, partopsep=1ex, parsep=1ex]
\item Computing the datumless point measures at more places with greater accuracy.
\item Developing an algorithm to compute the datumless surface measures.
\item Creating more datumless point measures (discussed below).
\end{itemize}
\subsection{More Datumless Point Measures}
The datumless point measures in this paper describe relief in terms of maximums in a point's surroundings. While convenient, there are many other ways of describing relief. Currently under development is a set of datumless point measures that describe the relief of a point relative to the direction of least relief. An example would be the rise of Mt. Everest above the Tibetan Plateau, rather than above the Indo-Gangetic Plain, as measured by dominance. Also under development is a set of datumless point measures that describe the average relief of a point relative to its surroundings, rather than the maximum relief. Additional work in progress includes a datumless measure of the independence of a peak, similar to prominence, but designed to be more perceptually accurate and not relying on a datum.
\section{Acknowledgements}
Thank you to everyone who helped make this paper a reality.
Thank you to Tony Wang for revising the mathematical notation to make it clear and concise. You are truly a genius and a wonderful mentor, and it means a lot for this paper to be read by you.
Thank you to Mr. Behrooz Shahrvini for teaching me so many of the mathematical concepts used in this paper. It brings me joy and fulfillment to apply what I learned in your class to ``spicy problems,'' as you would often say.
Thank you to the Yale Undergraduate Research Association for providing the opportunity to present several key ideas in this paper at the Yale Undergraduate Research Symposium.
Thank you to Dr. Mark Brandon for teaching me what a geoid is. I once thought I could write this entire paper without knowing such fundamentals---you gladly proved me wrong.
Thank you to Mrs. Pamela Meuser for kindling my interest in STEM subjects when I was in elementary school. You have unknowingly set the trajectory for so many of my current goals and undertakings.
Thank you to all my friends for flying together with me during this journey. I must apologize for my excessive rants about mountains during this weird phase of my life.
Last but not least, thank you to my parents, Helen Kang and Vince Xu, for supporting me through all the ups and downs of the complex topography of life.
\section{References}
\printbibliography[heading=none]
Title: Thermal effects on nuclear matter properties

Abstract: A quantitative description of the properties of hot nuclear matter will be needed for the interpretation of the available and forthcoming astrophysical data, providing information on the post merger phase of a neutron star coalescence. We have employed a recently developed theoretical model, based on a phenomenological nuclear Hamiltonian including two- and three-nucleon potentials, to study the temperature dependence of average and single-particle properties of nuclear matter relevant to astrophysical applications. The possibility to represent the results of microscopic calculations using simple and yet physically motivated parametrisations of thermal effects, suitable for use in numerical simulations of astrophysical processes, is also discussed.

PDF: https://export.arxiv.org/pdf/2208.03131
\title{Thermal effects on nuclear matter properties}
\author{Lucas Tonetto}
\affiliation{Dipartimento di Fisica, ``Sapienza'' University of Rome, Piazzale A. Moro, 5. 00185 Roma, Italy}
\affiliation{INFN, Sezione di Roma, Piazzale A. Moro, 5. 00185 Roma, Italy}
\author{Omar Benhar}
\affiliation{INFN, Sezione di Roma, Piazzale A. Moro, 5. 00185 Roma, Italy}
\affiliation{Dipartimento di Fisica, ``Sapienza'' University of Rome, Piazzale A. Moro, 5. 00185 Roma, Italy}
\date{\today}
\section{Introduction}
Understanding the structure
and dynamics of hot nuclear matter at microscopic level is long known to be essential
for the description of both supernov\ae\ and proto-neutron stars~\citep{burrows1986,keil1995,pons1999,camelio2017}.
More recently, thermal modifications of the equation of state (EOS) of neutron star matter have been also shown to play a critical role
in the merger and postmerger phases of binary neutron star coalescence~\citep{baiotti2017,raithel2021,figura2020,figura2021,hammond2021}.
In this context, it has to be pointed out that an accurate
description of finite-temperature effects is needed to study not only the equilibrium properties determining the density dependence of matter pressure,
but also the occurrence of phenomena involving dissipation mechanisms, such as bulk viscosity \citep{alford2018} and neutrino emission \citep{camelio2017}.
The large data set of zero-temperature EOSs available for use in simulations and data analysis\textemdash for a comprehensive catalogue see Ref.~\cite{CompOSE}\textemdash contrasts with the scarcity of EOSs of hot nuclear matter spanning the
relevant regime, which is believed to extend to temperatures as high as 100 MeV~\citep{raithel2019,oechslin2007}.
The EOS of hot nuclear matter is often obtained using Skyrme-type effective interactions~\cite{Prakash:1997} or
the Relativistic Mean Field (RMF) approach~\citep{Kaplan:2014}. More comprehensive studies of the properties of
neutron star matter at nonzero temperature have been carried out within the framework of Nuclear Many-Body Theory, in which nuclear
dynamics is described by a phenomenological Hamiltonian, strongly constrained by the observed properties of the two- and
three-nucleon systems~\cite{PhysRevC.100.054335}. Recent calculations along this line have been performed using both $G$-matrix perturbation theory~\cite{PhysRevC.100.054335} and the formalism of Correlated Basis Functions~\cite{benhar2022}.
The authors of Refs.~\cite{BL2017,benhar2022} have developed a procedure to obtain from a phenomenological nuclear Hamiltonian
a well-behaved effective potential, suitable to carry out perturbative calculations in the basis of eigenstates of the noninteracting system.
This approach, in which the effects of irreducible three-nucleon interactions are consistently taken into account at microscopic level, allows one
to perform calculations of a variety of properties of dense nuclear matter with arbitrary proton fraction and
temperatures in the region of $T \ll m_\pi$, $m_\pi \approx 150$ MeV being the mass of the $\pi$-meson, in which thermal effects
are not expected to significantly affect strong-interaction dynamics. Thermodynamic consistency is also achieved by
construction, through a proper definition of the grand canonical potential~\cite{benhar2022}.
The present work is primarily meant as a follow-up to the study of~\citet{benhar2022}, and provides a detailed analysis
of the impact of thermal effects on specific properties of charge-neutral and $\beta$-stable matter relevant to neutron stars,
such as the proton and neutron energy spectra and effective masses.
We will also examine the possibility of using simple approximated procedures to parametrise deviations from
the zero-temperature EOS associated with thermal effects. The development of such procedures is of utmost importance,
because their availability will enable numerical simulations to be performed using EOSs based
on a reliable treatment of the zero-temperature limit. Pinning down the validity and limitations of the proposed
procedures, through a direct comparison with the predictions of fully microscopic calculations, will help
to firmly establish their applicability.
A widely used, although admittedly oversimplified, parametrisation
is obtained from the so-called ``hybrid-EOS" approach, in which thermal modifications of the thermodynamic functions of cold
nuclear matter are
approximated by the corresponding quantities of an ideal fluid~\citep{bauswein2010,figura2020,hotokezaka2011,endrizzi2016,dietrich2017a,dietrich2017b}.
Within this scheme, pressure and specific internal energy are respectively written in the form
\begin{align}
\nonumber
p & = p_{\rm cold} + p_{\rm th} \ , \\
\nonumber
e & = e_{\rm cold} + e_{\rm th} \ ,
\end{align}
and the thermal contribution to the pressure at matter density $\varrho$ and temperature $T$ is parametrised by the adiabatic
index, $\Gamma_{\rm th}$, according to
\begin{align}
\label{hybrid}
p_{\rm th}(\varrho,T) = \varrho~e_{\rm th} (\Gamma_{\rm th} - 1) \ .
\end{align}
The above procedure involves the drastic assumption that the adiabatic index be independent of both density and temperature.
However, a comparison between the pressure obtained from Eq.~\eqref{hybrid} and that resulting from microscopic
calculations based on advanced models of nuclear dynamics shows that $\Gamma_{\rm th}$ does, in fact, depend strongly
on density, and that the dependence on temperature, while being weaker, is also non negligible~\cite{figura2020}.
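As an illustrative numerical sketch (function names and values are our own), the hybrid prescription of Eq.~\eqref{hybrid} and its inversion for an effective adiabatic index read:

```python
def hybrid_thermal_pressure(rho, e_th, gamma_th):
    """Thermal pressure p_th = rho * e_th * (Gamma_th - 1), i.e. Eq. (hybrid),
    for matter density rho and specific thermal internal energy e_th."""
    return rho * e_th * (gamma_th - 1.0)

def effective_gamma_th(rho, e_th, p_th):
    """Invert Eq. (hybrid): the effective adiabatic index implied by one
    (rho, e_th, p_th) entry of a microscopic finite-temperature table."""
    return 1.0 + p_th / (rho * e_th)
```

Applying the second function across a microscopic table makes the density and temperature dependence of \(\Gamma_{\rm th}\) noted above directly visible.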
A more advanced parametrisation, aimed at improving the description of the thermal pressure in the high-density region, has been recently proposed
by~\citet{raithel2019}. Within this approach, the prediction of the ideal fluid model\textemdash which is known to overestimate pressure at large densities\textemdash is replaced with that obtained from the leading term of the Sommerfeld expansion, which allows degeneracy effects to be systematically included~\cite{Constantinou}. Microscopic nuclear dynamics is taken into account using nucleon effective masses obtained from RMF models of nuclear matter.
To assess the accuracy and range of applicability of the simple and yet physically motivated parametrisation of Ref.~\cite{raithel2019}, we have compared its predictions to the results obtained from microscopic calculations of $\beta$-stable matter at temperatures up to $50$ MeV, carried out within the formalism described in Ref.~\cite{benhar2022}.
The manuscript is organised as follows. In Sect.~\ref{sec:formalism} we outline the dynamical model underlying our theoretical approach, as well as the
main elements of the formalism employed to study the properties of hot nuclear matter. \ldots\ldots
\section{Theoretical model} \label{sec:formalism}
In this section, we summarise the main features of our theoretical model.
We discuss both the underlying description of nuclear dynamics and the
formalism employed to carry out calculations of the relevant properties of hot nuclear matter.
\subsection{Nuclear dynamics} \label{subsec:nucdyn}
Nuclear Many-Body Theory (NMBT) is based on the hypothesis that all nucleon systems\textemdash from the deuteron to neutron stars\textemdash can be described in terms of point-like protons and neutrons, whose dynamics is dictated
by the Hamiltonian%
\begin{equation}
\label{hamiltonian}
H=\sum_{i}\frac{p_i^2}{2m} + \sum_{i<j}v_{ij}+\sum_{i<j<k}V_{ijk} \ ,
\end{equation}
with $m$ and ${\bf p}_i$ denoting mass and momentum of the $i$-th particle.\footnote{In this article, we adopt the
system of natural units, in which $\hbar=c=k_B=1$, and, unless
otherwise specified, neglect the small proton-neutron mass difference.}
The nucleon-nucleon (NN) potential, usually written in
the form
\begin{equation}
v_{ij} = \sum_p v^p (r_{ij}) O_{ij}^p \ ,
\label{eq:vij}
\end{equation}
where $r_{ij} = |{\bf r}_i - {\bf r}_j|$ is the distance between the interacting particles, is designed
to reproduce the measured properties of the two-nucleon system, in both bound and scattering states, and reduces
to the Yukawa one-pion exchange potential at large distances. The sum in Eq.~\eqref{eq:vij} includes up
to eighteen terms, the corresponding operators, $O^p$, being needed to describe the strong spin-isospin
dependence and non-central nature of nuclear forces, as well as the occurrence of spin-orbit interactions and
small violations of charge
symmetry and charge independence~\cite{AV18}.
The addition of the three-nucleon (NNN) potential $V_{ijk}$ is needed to take into account the effects of irreducible three-body
interactions, reflecting the occurrence of processes involving the internal structure of the nucleons.
The results reported in this article have been obtained using an {\it effective interaction} derived from the phenomenological
Hamiltonian comprising the Argonne $v_6^\prime$ (AV6P) NN potential~\cite{AV6P} and the Urbana IX (UIX) NNN potential~\cite{UIX_1,UIX_2}.
The AV6P potential is determined by projecting the full Argonne $v_{18}$ potential of Ref.~\cite{AV18} (AV18) onto the operator basis comprising
the terms with $p \leq 6$ in the right-hand side of Eq.~\eqref{eq:vij}. It predicts the binding energy and electric quadrupole
moment of the deuteron with an accuracy of 1\% and 4\%, respectively, and provides an excellent fit of the NN scattering phase
shifts in the $^1{\rm S}_0$ channel, corresponding to total spin and isospin $S=0$ and $T=1$, and relative angular momentum $\ell = 0$.
The UIX potential is written in the form
\begin{align}
V_{ijk}=V_{ijk}^{2\pi}+V_{ijk}^{R} \ ,
\end{align}
where the first term is the attractive Fujita-Miyazawa potential\textemdash describing two-pion exchange NNN interactions
with excitation of a $\Delta$-resonance in the intermediate state\textemdash while
$V_{ijk}^{R}$ is a purely phenomenological repulsive term. The strength of
$V_{ijk}^{2\pi}$ is adjusted to reproduce the observed ground-state energies of
\isotope[3][]{He} and \isotope[4][]{He}, while that of the isoscalar repulsive contribution
is fixed in such a way as to reproduce the saturation density of isospin symmetric matter, inferred from nuclear systematics.
Recent studies of the EOS of cold neutron matter\textemdash performed by~\citet{Lovato:2022} using state-of-the-art
computational techniques\textemdash show that the
predictions of the somewhat simplified AV6P+UIX Hamiltonian are very close to those obtained from the full AV18+UIX model, providing the basis of the widely employed EOS of Akmal, Pandharipande and Ravenhall~\cite{Akmal:1997,Akmal:1998}.
The procedure to derive the effective interaction, thoroughly described
in Refs.~\cite{lovato2013,eos0,benhar2022}, exploits the formalism of correlated basis functions (CBF) and cluster
expansion techniques to take into account the effects of strong nucleon-nucleon correlations, arising from the presence of a strong
repulsive core in the NN potential.
The resulting density-dependent effective potential\textemdash which can be written as in Eq.~\eqref{eq:vij} with the sum
in the right-hand side limited to $p \leq6$\textemdash is well behaved, and consistently includes the
contributions of NN and NNN interactions. As a consequence, it is expected to be well suited to perform perturbative calculations of nuclear
matter properties in the density regime relevant to neutron stars.
\subsection{Perturbation theory at finite temperature }
At first order in the CBF effective interaction $v^{\rm eff}$, the internal energy per nucleon of nuclear matter at
baryon density $\varrho$, temperature $T$, and proton fraction $Y_p$ can be written in the form~\cite{benhar2022}
\begin{widetext}
\begin{align}
\label{int:en}
\frac{E}{N} = \frac{1}{N} \Big\{ \sum_{ \alpha {\bf k} } \ \frac{ {\bf k}^2 }{2m} \ n_\alpha(k,T)
+ \frac{1}{2} \sum_{ \alpha {\bf k} } \sum_{ \alpha^\prime {\bf k}^\prime }
\langle \alpha {k} , \alpha^\prime {k}^\prime | v^{\rm eff} | \alpha {k} , \alpha^\prime {k}^\prime \rangle_A
\ n_\alpha(k,T) n_{\alpha^\prime}(k^\prime,T) \Big\} \ .
\end{align}
\end{widetext}
In the above equation, the index $\alpha = n, p$ labels neutrons and protons, respectively, ${\bf k}$ is the nucleon momentum, $k = |{\bf k}|$,
and $| \alpha {k} , \alpha^\prime {k}^\prime \rangle_A$ denotes an antisymmetrised two-nucleon state. Note that conservation of baryon number
requires that $Y_n = 1 - Y_p$.
The temperature dependence is
described by the Fermi distribution
\begin{align}
\label{fermidist}
n_\alpha(k,T) = \Big\{ 1 + \exp { [ \beta ( e_{\alpha k} - \mu_\alpha ) ] } \Big\}^{-1} \ ,
\end{align}
where the single-particle energy is defined as
\begin{align}
\label{ek}
e_{\alpha k} = e^{\rm HF}_{\alpha k} + \delta e \ ,
\end{align}
with
\begin{align}
\label{eHFk}
e^{\rm HF}_{\alpha k} = \frac{{\bf k}^2}{2m} & + \sum_{ \alpha^\prime {\bf k}^\prime }
\langle \alpha {k} , \alpha^\prime {k}^\prime | v^{\rm eff} | \alpha {k} , \alpha^\prime {k}^\prime \rangle_A \ n_{\alpha^\prime}(k^\prime,T) \ ,
\end{align}
and
\begin{align}
\nonumber
\delta e = \frac{\varrho}{2} \sum_{ \alpha {\bf k} } \sum_{ \alpha^\prime {\bf k}^\prime } &
\langle \alpha {k} , \alpha^\prime {k}^\prime | \frac{\partial v^{\rm eff}}{\partial \varrho} | \alpha {k} , \alpha^\prime {k}^\prime \rangle_A \\
\label{deltae}
& \times n_\alpha(k,T) n_{\alpha^\prime}(k^\prime,T) \ .
\end{align}
The correction to the Hartree-Fock (HF) spectrum is needed to satisfy
the requirement of thermodynamic consistency, and vanishes in the case of a density-independent potential; see Ref.~\cite{benhar2022} for details.
The chemical potentials $\mu_\alpha$ are determined by the normalisation conditions
\begin{align}
\label{def:chempot}
\frac{2}{V} \sum_{ {\bf k} } n_\alpha(k,T) = \varrho_\alpha \ ,
\end{align}
where $V$ is the normalisation volume, and the number density of particles of species $\alpha$ is defined as $\varrho_\alpha = Y_\alpha \varrho$. Note that the above definitions
imply that both the single-nucleon energies and the chemical potentials depend on temperature through the Fermi distribution.
The entropy per nucleon is also defined in terms of the distribution of Eq.~\eqref{fermidist} as
\begin{align}
\label{def:S}
\frac{S}{N} = &- \frac{1}{N} \sum_{ \alpha {\bf k} }
\Big\{ n_\alpha(k,T) \ln {n_\alpha(k,T)} \\
\nonumber
& + \big[ 1 - n_\alpha(k,T) \big] \ln{ \big[ 1 - n_\alpha(k,T) \big] } \Big\} \ .
\end{align}
Finally, the Helmholtz free energy per nucleon is obtained by combining Eqs.~\eqref{int:en} and~\eqref{def:S}
in the form
\begin{align}
\label{def:free}
\frac{F}{N} = \frac{1}{N} \big( E - T S \big) \ .
\end{align}
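The structure of these definitions can be checked numerically in the simplest setting, a single-species free Fermi gas at fixed chemical potential in dimensionless units ($m=1$): the entropy per particle computed from Eq.~\eqref{def:S} is positive and grows with $T$, and $F/N = (E - TS)/N$ lies below $E/N$. The sketch below is purely illustrative; none of the parameter values are taken from the calculations of this article.

```python
import numpy as np

# Free Fermi gas sketch (dimensionless units, m = 1); mu and T illustrative.
def per_particle(mu, T, kmax=8.0, npts=4000):
    dk = kmax / npts
    k = (np.arange(npts) + 0.5) * dk          # midpoint momentum grid
    x = np.clip((0.5 * k**2 - mu) / T, -700.0, 700.0)
    n = 1.0 / (1.0 + np.exp(x))               # Fermi distribution, Eq. (fermidist)
    w = k**2                                  # 3D phase-space weight (prefactors cancel in ratios)
    N = np.sum(w * n) * dk
    E = np.sum(w * 0.5 * k**2 * n) * dk       # kinetic internal energy, cf. Eq. (int:en)
    ncl = np.clip(n, 1e-300, 1.0 - 1e-15)     # guard the logarithms
    S = -np.sum(w * (ncl * np.log(ncl)
                     + (1 - ncl) * np.log(1 - ncl))) * dk   # Eq. (def:S)
    return E / N, S / N

e1, s1 = per_particle(mu=1.0, T=0.05)   # nearly degenerate gas
e2, s2 = per_particle(mu=1.0, T=0.5)    # strongly heated gas
f2 = e2 - 0.5 * s2                      # F/N = (E - T S)/N at T = 0.5, Eq. (def:free)
```

The interacting case of the paper adds the potential-energy and rearrangement terms, but the bookkeeping between $E$, $S$ and $F$ is identical.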
\section{Thermal effects} \label{sec:hot_matt}
In the temperature regime considered in the present study, thermal modifications of nuclear matter properties
arise primarily from the Fermi distribution, defined by Eq.~\eqref{fermidist}. Comparison with the $T \to 0$ limit
\begin{align}
\label{dist:0}
n_\alpha(k,0) = \theta(\mu_\alpha-e_{\alpha k}) \ ,
\end{align}
where $\theta(x)$ is the Heaviside theta-function, shows that the probability distribution $n_\alpha(k,T>0)$ is reduced from unity
in the region corresponding to $\mu_{\alpha} - T \lesssim e_{\alpha k} \lesssim \mu_{\alpha}$, while acquiring a non-vanishing positive value for
$\mu_{\alpha} \lesssim e_{\alpha k} \lesssim \mu_{\alpha} + T$. It follows that, for any given temperature $T$, the extent of thermal modifications to the
Fermi distribution is driven by the ratio $2T/\mu_{\alpha}$. This observation in turn implies that, because the chemical potential is a monotonically increasing function of the particle density $\varrho_\alpha$ over a broad range of temperatures, for any given $T$ thermal effects are more significant at lower $\varrho_\alpha$. On the other hand, they become vanishingly small in the high-density regime, in which degeneracy becomes dominant.
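The width of the thermally smeared region can be made concrete with a one-line numerical check: the distribution equals $1/2$ exactly at $e_{\alpha k} = \mu_\alpha$, while two temperatures away from $\mu_\alpha$ it has essentially relaxed to its $T=0$ values. The units and values below are illustrative.

```python
import math

# Smearing of the Fermi distribution around e = mu (illustrative units).
def n_fermi(e, mu, T):
    return 1.0 / (1.0 + math.exp((e - mu) / T))

mu, T = 1.0, 0.05
at_mu = n_fermi(mu, mu, T)          # exactly 1/2 at the chemical potential
below = n_fermi(mu - 2 * T, mu, T)  # already close to 1 again
above = n_fermi(mu + 2 * T, mu, T)  # already close to 0 again
```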
The density-dependence of thermal effects\textemdash that also affects the particle energies and chemical potentials, defined by
Eqs.~\eqref{ek} and~\eqref{def:chempot}, respectively\textemdash plays a significant role in the determination of the properties of multicomponent systems, such as
charge-neutral $\beta$-stable matter, in which different particles have different densities.
\subsection{Composition of charge-neutral $\beta$-stable matter}
\label{composition}
In charge-neutral matter consisting of neutrons, protons and leptons in equilibrium with respect to the weak interaction processes
\begin{align}
n \to p + \ell^- + {\bar \nu}_\ell \ \ \ \ , \ \ \ \ p + \ell^- \to n + \nu_\ell \ ,
\end{align}
where $\ell = e, \mu$ labels the lepton flavour, the proton fraction $Y_p = \varrho_p/\varrho$ is uniquely determined by the equations
\begin{align}
\label{beta:eq1}
\mu_n - \mu_p = \mu_{\ell} \ ,
\end{align}
\begin{align}
\label{beta:eq2}
Y_p = \sum_\ell Y_\ell \ .
\end{align}
At densities such that the electron chemical potential does not exceed the rest mass of the muon, $m_\mu = 105.7$~MeV, the sum appearing in the
above equation includes electrons only. However, at higher densities\textemdash typically at $\varrho \gtrsim \varrho_0$, with $\varrho_0 = 0.16 \ {\rm fm}^{-3}$ being the equilibrium density
of isospin-symmetric matter\textemdash the appearance of muons becomes energetically favoured, and must be taken into account.
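The logic of Eqs.~\eqref{beta:eq1} and~\eqref{beta:eq2} can be sketched in the simplest case the paper does not use: $T=0$, $npe$ matter made of \emph{free} particles, with non-relativistic nucleons and massless electrons. These are assumptions of the sketch only; interactions (included in the CBF approach of the article) raise $Y_p$ substantially, so only the qualitative smallness of the free-gas proton fraction is meaningful here.

```python
import math

HBARC = 197.327   # MeV fm
M_N = 939.0       # MeV, common nucleon mass (p-n difference neglected)

def kF(rho):      # Fermi momentum (fm^-1) of a species with density rho
    return (3.0 * math.pi**2 * rho) ** (1.0 / 3.0)

def imbalance(Yp, rho):
    # mu_n - mu_p - mu_e for free, cold npe matter (kinetic parts only)
    mun = (HBARC * kF(rho * (1.0 - Yp)))**2 / (2.0 * M_N)
    mup = (HBARC * kF(rho * Yp))**2 / (2.0 * M_N)
    mue = HBARC * kF(rho * Yp)        # charge neutrality: rho_e = rho_p
    return mun - mup - mue

def solve_Yp(rho, lo=1e-8, hi=0.5):
    # Bisection on Eq. (beta:eq1); sign change is guaranteed on (lo, hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if imbalance(mid, rho) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Yp = solve_Yp(0.16)   # at the empirical saturation density rho_0
```

For the free gas the resulting $Y_p$ is below one percent; the microscopic results of Fig.~\ref{prot:frac} are larger precisely because of the symmetry energy generated by interactions.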
The solid lines of Fig.~\ref{prot:frac} show the density dependence of the proton fractions corresponding to $\beta$-equilibrium of matter consisting of protons, neutrons, electrons and muons, or $npe\mu$ matter, at $T=$ 0 (triangles) and 50 MeV (circles); all results have been obtained using the formalism described in Ref.~\cite{benhar2022}. For comparison, the same quantities in $npe$ matter, in which the muon contribution is not included, are displayed
by the dashed lines.
The most prominent thermal effect is a significant departure from the monotonic behaviour observed in cold matter. The emergence of a minimum in the
density dependence of the proton fraction results from the interplay between the thermal and degeneracy contributions to the chemical potentials appearing in Eq.~\eqref{beta:eq1}.
For $T \gtrsim 20$~MeV and low density, typically $\varrho \lesssim \varrho_0$, the thermal contribution\textemdash whose leading order term can be written in the form $\delta \mu_\alpha \propto T^2/\varrho_\alpha^{2/3}$\textemdash turns out to be much larger for protons than for neutrons, and $\beta$-equilibrium requires large proton fractions.
\subsection{Fermi distributions}
The Fermi distribution of Eq.~\eqref{fermidist} depends on temperature both explicitly, through the factor $\beta = 1/T$ appearing in the argument of the exponential, and implicitly, through the $T$-dependence of both $e_{\alpha k}$ and $\mu_\alpha$. Because the calculation of single-particle energies and chemical potentials in turn involves the Fermi distribution, $e_{\alpha k}$, $\mu_\alpha$ and $n_\alpha(k,T)$ must, in fact, be determined self-consistently, applying an iterative procedure.
Figure~\ref{fig:fermidist} shows the distributions of neutrons and protons in charge-neutral $\beta$-stable $npe\mu$ matter at baryon density $\varrho=0.32 \ \mathrm{fm}^{-3}$.
It is apparent that, as pointed out in the previous section, thermal modifications to $n_\alpha(k,T)$\textemdash extending over a region of width $2 T$ around the Fermi momentum $k_{F\alpha} = (3 \pi^2 \varrho_\alpha)^{1/3}$\textemdash depend on {\it both} temperature {\it and} density.
As a consequence, for any given temperature $T$ they are more pronounced in the case of protons, whose density is suppressed by a factor $Y_p/(1-Y_p) \ll 1$
with respect to the neutron density.
\subsection{Nucleon energy spectra and effective masses}
The proton and neutron spectra employed to calculate the Fermi distributions of Fig.~\ref{fig:fermidist}\textemdash corresponding to $\beta$-stable $npe\mu$ matter at baryon density $\varrho = 2 \varrho_0$\textemdash are displayed in Fig.~\ref{fig:spectrum2n0}.
It is apparent that $e_{\alpha k}$ is an increasing
function of temperature at all values of $k$, with the $T$-dependence being stronger at lower momentum. At $k=0$ the difference between the energies corresponding to $T=0$ and 50 MeV reaches $\sim 35.8$~MeV for protons, and $\sim 17.5$~MeV for neutrons. In the case of protons, a $\sim 29$ MeV increase with respect to the
zero-temperature spectrum is still clearly visible at $k = k_{F_p}$, $k_{F_p} = 1.01 \ {\rm fm}^{-1}$ being the proton Fermi momentum, while the
$T=0$ and 50 MeV neutron spectra at $k = k_{F_n}$, with $k_{F_n} = 2.04 \ {\rm fm}^{-1}$, are nearly indistinguishable.
In theoretical calculations of nuclear matter properties of astrophysical interest\textemdash such as the neutrino emission rates~\citep{camelio2017},
and the shear and bulk viscosity coefficients~\citep{benharvalli2007,alford2018,alford2021}\textemdash the relevant information comprised in proton and
neutron spectra is captured by the corresponding effective masses $m_\alpha^\star$, defined by the equations
\begin{align}
\label{def:mstar}
\frac{1}{m_\alpha^\star} = \left( \frac{1}{k} \frac{ d e_{\alpha k} }{ d k }\right)_{k = k_{F_\alpha}} \ .
\end{align}
The role played by the effective masses can be readily grasped considering that they determine the dispersion
relations of matter constituents, which in turn affect their collision rates through both the incident flux and
the available phase space.
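The definition of Eq.~\eqref{def:mstar} can be checked by numerical differentiation: for any quadratic spectrum $e_{\alpha k} = k^2/2m + U$ it must return $m^\star = m$, independently of the constant offset. The sketch below is a consistency check only; the units and parameter values are illustrative.

```python
# Numerical effective mass, Eq. (def:mstar): 1/m* = (1/k) de/dk at k = kF.
def mstar(spectrum, kF, h=1e-5):
    deriv = (spectrum(kF + h) - spectrum(kF - h)) / (2.0 * h)  # central difference
    return kF / deriv

m = 0.8                                     # illustrative bare mass
free = lambda k: k**2 / (2.0 * m)           # free spectrum
shifted = lambda k: k**2 / (2.0 * m) + 30.0 # same spectrum plus an offset U
ms_free = mstar(free, kF=1.3)               # must equal m
```

The offset drops out of the derivative, which is why the parametrisation of Eq.~\eqref{ek:quad} needs a separate condition (the $k \to 0$ limit) to fix $U_\alpha$.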
The density dependence of the proton and neutron effective masses of charge-neutral $\beta$-stable $npe\mu$ matter at temperature $0\leq T \leq 50$~MeV is
illustrated in Fig.~\ref{fig:effmass}. It clearly appears that, regardless of temperature, ${m_\alpha^\star}$ is a monotonically decreasing function of baryon density. This behaviour is consistent with the results of calculations carried out within the RMF approach, recently
reported in Ref.~\cite{raithel2021}.
For neutrons, thermal effects\textemdash measured by the departure from the zero-temperature effective mass\textemdash turn out to be limited to $\sim 5$\% over the whole temperature and density
range considered. For protons, on the other hand, their size for $T=50$ MeV turns out to be $\sim 25$\% at $\varrho = \varrho_0$,
and is still $\gtrsim 10$\% at $\varrho = 4\varrho_0$.
The nucleon effective masses are routinely used to parametrise the momentum dependence of the nucleon spectra in cold nuclear matter
according to~\cite{camelio2017}
\begin{align}
\label{ek:quad}
e_{\alpha k} = \frac{k^2}{2 m^\star_0} + U_\alpha \ ,
\end{align}
where $m^\star_0$ denotes the value of ${m_\alpha^\star}$ at $T=0$, while the offset $U_\alpha$ is determined by the requirement that the
above approximation reproduce the spectrum obtained from the full microscopic
calculation in the $k~\to~0$ limit.
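The construction of Eq.~\eqref{ek:quad} can be illustrated on a toy anharmonic spectrum (assumed for this sketch, not the spectra of the paper): fixing the effective mass from Eq.~\eqref{def:mstar} at $k_F$ and $U$ from the $k \to 0$ limit reproduces the spectrum well near the Fermi surface, with an error that grows with momentum, mirroring the behaviour discussed below for $T>0$.

```python
# Toy spectrum e(k) = k^2/2m + c k^4 + U0 (all values illustrative).
m, c, U0, kF = 1.0, 0.05, -60.0, 1.0
e = lambda k: k**2 / (2.0 * m) + c * k**4 + U0

h = 1e-5
mstar0 = kF / ((e(kF + h) - e(kF - h)) / (2.0 * h))  # Eq. (def:mstar) at k_F
quad = lambda k: k**2 / (2.0 * mstar0) + e(0.0)      # Eq. (ek:quad), U from k -> 0
err = lambda k: abs(e(k) - quad(k))                  # deviation from full spectrum
```

For this toy case the deviation at $2k_F$ is an order of magnitude larger than at $k_F$, which is the qualitative pattern seen in Fig.~\ref{quadratic:spectra}.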
In Fig.~\ref{quadratic:spectra} the proton spectra in $\beta$-stable $npe\mu$ matter at baryon density $\varrho$ = 0.32 fm$^{-3}$ and
temperature $T=0$ and 50~MeV, obtained from Eqs.~\eqref{ek}-\eqref{deltae}, are
compared to those computed using Eq.~\eqref{ek:quad}. At $T=0$ the quadratic approximation turns out to be remarkably accurate
up to momenta largely above the Fermi momentum, $k_{F_p}~=~1.01 \ {\rm fm}^{-1}$. At $T=50$ MeV, on the other hand,
the agreement between the results of the two calculations is somewhat degraded; the discrepancy is $\sim 25$\% at $k = k_{F_p}$, and monotonically increases with $k$.
The spectra displayed in the bottom panel of Fig.~\ref{quadratic:spectra} clearly show that the accuracy of Eq.~\eqref{ek:quad} at $T > 0$ can be significantly improved by taking into account the
temperature dependence of the effective mass, which amounts to replacing $m^\star_0$ with the appropriate finite-temperature
value, obtained from Eq.~\eqref{def:mstar}.
In the literature, the temperature dependence of $e_{\alpha k}$ is often disregarded, and the
properties of nuclear matter at $T>0$ are calculated using zero-temperature spectra.
This approximation, referred to as {\it Frozen Correlations Approximation} (FCA), has been recently employed
in the studies of binary neutron star mergers of~\citet{figura2020,figura2021}.
The results reported in Ref.~\cite{baldo1999} suggest that the FCA has a nearly negligible effect on the thermodynamic
properties of nuclear matter at $T \lesssim 30 \ \mathrm{MeV}$. However, its accuracy has been
shown to deteriorate at larger temperatures~\cite{BS:AA}.
The validity of the assumption underlying the FCA can be gauged from Figs.~\ref{fig:spectrum2n0}-\ref{fig:spectrum3n0}.
The implications of using this approximation scheme in calculations of nuclear matter properties will be discussed further
in the next section.
\subsection{Chemical potentials and matter composition}
The chemical potentials of protons and neutrons in charge-neutral $\beta$-stable matter at
temperature $T$ = 0 and 50 MeV are displayed in Fig.~\ref{display:rhodep} as a function of baryon density.
For comparison, the difference $\mu_n - \mu_p = \mu_e$ is also shown.
Thermal effects on chemical potentials can be analysed considering the difference
\begin{align}
\label{th:chempot}
\delta \mu_{\alpha, {\rm th}} = \mu_\alpha - \mu_{\alpha, 0}\ ,
\end{align}
with $ \mu_{\alpha, 0}$ being the value of $\mu_\alpha$ in cold matter at fixed baryon density $\varrho$ and particle fraction $Y_\alpha$. Figure~\ref{thermal:chempot} illustrates the temperature dependence of $\delta \mu_{n, {\rm th}}$ in charge-neutral $\beta$-stable matter at baryon density $\varrho = 2 \varrho_0$.
Because thermal effects in $\beta$-stable matter have different impact on proton and neutron properties, the capability
to accurately predict $\beta$-equilibrium and matter composition using FCA must be carefully investigated.
The results of numerical calculations carried out within our approach indicate that for temperatures up to $T=50$~MeV the discrepancy
between the proton fractions obtained from FCA and the exact results never exceeds $\sim 3$\% over the considered
range of baryon density.
\subsection{Internal energy and free energy}
The density and temperature dependence of the internal energy and entropy per baryon of $\beta$-stable matter,
defined according to Eqs.~\eqref{int:en} and~\eqref{def:S}, respectively, is illustrated in Figs.~\ref{fig:intenergy_beta} and \ref{fig:entropy_beta}.
It is worth recalling that the CBF effective interaction based on the AV6P+UIX nuclear Hamiltonian yields a remarkably accurate
account of the equilibrium properties of isospin symmetric matter at $T=0$ inferred from nuclear data.
The saturation density is correctly reproduced, and the contribution of interactions to the internal energy turns
out to be within $\sim 10$\% of the empirical value.
Figure~\ref{fig:intenergy_beta} shows that, for any given $\varrho$, the internal energy is an increasing function
of temperature. However, the concurrent increment of the proton fraction with $T$, discussed in Section~\ref{composition}, leads
to the appearance of a minimum for temperatures larger than 10 MeV.
As expected, thermal contributions to the internal energy turn out to be less important at higher $\varrho$.
However, for $T>10$~MeV they are still significant at densities as high as $4 \varrho_0$.
\section{Modelling thermal effects} \label{sec:parametrisation}
The description of thermal effects on the thermodynamic functions determining the EOS, that is, pressure and energy density,
is of paramount importance in view of astrophysical applications. The number of available EOSs of nuclear matter at $T \neq 0$
is much smaller than the corresponding figure for cold matter. Moreover, the implementation of microscopic
EOSs in numerical simulations of processes such as binary neutron star mergers involves non-trivial difficulties.
These problems are
often circumvented using simple but physically sound parametrisations of the EOS.
An extensively used expression is based on the so-called ``hybrid-EOS'' approach,
in which thermal contributions to pressure and energy density are described using an approximation based on
the ideal fluid law; see Eq.~\eqref{hybrid}.
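A minimal sketch of the hybrid-EOS prescription (Eq.~\eqref{hybrid} itself is not reproduced in this excerpt): thermal contributions are added to a cold baseline through a constant thermal index $\Gamma_{\rm th}$, i.e. $P = P_{\rm cold} + (\Gamma_{\rm th}-1)\,\varepsilon_{\rm th}$, with $\varepsilon_{\rm th}$ the thermal energy density. Function names and numbers below are illustrative; $\Gamma_{\rm th}$ is typically chosen between 1.5 and 2 in merger simulations.

```python
# Hybrid-EOS sketch: ideal-fluid-like thermal pressure on top of a cold EOS.
def hybrid_pressure(p_cold, eps_th, gamma_th=1.75):
    # p_cold: cold-baseline pressure; eps_th: thermal energy density
    # (both in the same units, e.g. MeV/fm^3); gamma_th is illustrative.
    return p_cold + (gamma_th - 1.0) * eps_th

p = hybrid_pressure(p_cold=10.0, eps_th=4.0)   # toy numbers, MeV/fm^3
```

The density-independence of $\Gamma_{\rm th}$ is precisely the assumption that the degeneracy-aware parametrisation of Ref.~\cite{raithel2019} relaxes.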
As pointed out in the previous section, the results of microscopic calculations clearly signal a strong interplay between the
dependencies of the nuclear matter properties on density and temperature. This feature obviously questions the adequacy of the assumption that thermal contributions to the EOS be the same to all densities. Motivated by this observation, Raithel {\it et al.} have recently proposed a model that explicitly takes into consideration the effect of matter degeneracy~\cite{raithel2019}.
Rather than using the ideal fluid EOS
in the whole density range, the authors of Ref.~\cite{raithel2019} employ the Sommerfeld expansion described by \citet{constantinou2015} in the high-$\varrho$ region. This formalism allows one to write the deviations of the thermodynamic functions
from their zero-temperature values as a series of powers of $T$. The calculation of the next-to-leading order term
involves the nucleon effective mass and its derivatives, which implies that a model of nuclear dynamics at $T\neq0$ is needed beforehand.
In order to make their parametrisation as general as possible, \citet{raithel2019} considered a set of RMF models for which the effective masses at different temperatures are available in the literature, and performed a fit using analytical models, such as
piecewise polytropes, as zero-temperature baseline.
Our goal here is to establish the extent to which the results reported in Ref.~\cite{raithel2019} stand when compared to an EOS
obtained within the framework of NMBT, rather than the RMF approach. We use the parameter values $n_0 \sim 0.13 \ \mathrm{fm}^{-3}$ and
$\alpha \sim 0.9$ \textemdash see Box 1 of Ref.~\citep{raithel2019} and the erratum, Ref.~\citep{raithel2019erratum}\textemdash to obtain first the effective mass, and subsequently the internal energy per baryon and the matter pressure.
Note that the results reported in Ref.~\cite{raithel2019} do not include the contribution of muons. Therefore, our analysis will be
limited to the case of $npe$ matter.
In Fig.~\ref{fig:E_raithel} we show a comparison between the internal energy per baryon of $\beta$-stable $npe$ matter
obtained from the approach described in the previous section (solid lines) and the fit of Refs.~\citep{raithel2019,raithel2019erratum} (dashed lines). It is apparent that at $T = \mathrm{10 \ MeV}$ the agreement is almost perfect, while discrepancies\textemdash the size of which increases with increasing $T$\textemdash are clearly visible at larger temperatures. The maximum relative error between the fit and the microscopic calculation at $T = \mathrm{50 \ MeV}$ ($\mathrm{30 \ MeV}$) turns out to be $\sim 16\%$ ($\sim 11\%$), and occurs at density $\sim 1.5 \varrho_0$ ($\sim \varrho_0$).
We have also analysed the accuracy of the approximation of \citet{raithel2019} for the pressure.
A comparison with the results obtained from our microscopic approach, illustrated in Fig.~\ref{fig:P_raithel}, shows
a remarkably good agreement over the whole temperature range. The parametrisation of Ref.~\cite{raithel2019} appears to
properly take into account the effects of degeneracy at all densities.
In order to provide a quantitative estimate of the validity of the approximations involved in the parametrisation of pressure,
in Fig.~\ref{fig:diffP_raithel} we report the relative difference
\begin{align}
\label{diffP}
\frac{\Delta P}{P} = \frac{(P_\mathrm{approx} - P)}{P} \ ,
\end{align}
where $P$ is the result of our calculation, as a function of baryon density. It is apparent that the largest errors occur at low densities, and never exceed $\sim$6\% for $\varrho > \varrho_0$.
\section{Summary and Conclusions} \label{sec:conclusion}
We have analysed the impact of temperature on several properties of charge-neutral
nuclear matter in $\beta$-equilibrium. Calculations have been performed using
the formalism of finite-temperature perturbation theory, with an effective interaction
derived from a nuclear Hamiltonian comprising both two- and three-nucleon potentials.
The most prominent feature emerging from our results is the strong interplay between
temperature and density, that can be ultimately traced back to the form of the Fermi
distribution. For any given temperature, thermal effects turn out to decrease with density,
although in some instances they are still significant at densities as high as $\sim 4\varrho_0$.
As a consequence, in $\beta$-stable matter thermal modifications of nucleon properties,
such as the energy spectrum, are more pronounced for protons than for neutrons.
The interplay of temperature and density has non-trivial implications for astrophysical studies.
The temperature and density profiles obtained from neutron star merger simulations\textemdash see, e.g., Refs.~\citep{figura2020,camelio2021,raithel2021}\textemdash show that in the inner region of the remnant
the thermal contribution to the pressure is lower. However, this happens not only because the
degeneracy pressure becomes more important, but also because the temperature is lower.
On the other hand, at intermediate densities the temperature is higher and the thermal
contribution to the pressure is larger as well. At lower densities, despite the temperature being
lower, the thermal contribution to the pressure is even more important due to naturally lower
degeneracy pressure. It clearly appears that, in order to pin down the role of thermal effects in determining
the properties of neutron star matter, their temperature and density dependence must be accurately
described within a consistent framework.
Of great importance, in this context, will be the availability of simple parametrisations of the EOS
of hot nuclear matter in $\beta$-equilibrium, suitable for use in numerical simulations. A direct comparison
to the results of our calculations shows that the approximate treatment of thermal effects recently proposed by
\citet{raithel2019} is remarkably accurate, and well suited to describe EOSs obtained from different models
of nuclear dynamics.
It is important to keep in mind that the discussion of temperature effects in nuclear matter should not be
limited to thermal contributions to average properties, such as the pressure and energy density. As shown by the results discussed in this article, the most fundamental properties, including the Fermi distributions, single-particle spectra and effective masses, are significantly modified at finite temperature. A consistent inclusion of the temperature dependence of these quantities is essential to accurately describe nuclear collision rates in matter, which in turn determine out-of-equilibrium phenomena \citep{alford2018,alford2021,mostetal2021,mostetal2022}, as well as neutrino emission.
The approach described in this article makes it possible to carry out calculations of, e.g., the rates of modified Urca processes at $T>0$,
using nuclear matrix elements obtained from a highly realistic nuclear Hamiltonian, comprising both two- and three-nucleon
potentials.
As a final remark, it should also be mentioned that in this work we have considered $\beta$-equilibrium in the absence of
neutrinos. However, neutrino trapping is expected to occur even at $T \lesssim \mathrm{10 \ MeV}$ \citep{alford2018beta}, and
we plan to extend our calculations to study this scenario.
\acknowledgments
This work has been supported by the Italian National Institute for Nuclear
Research (INFN) under grant TEONGRAV.
\input{Tonetto_Benhar.bbl}
|
Title:
Reconciling Multi-messenger Constraints with Chiral Symmetry Restoration |
Abstract: We consider the parity doublet model for nucleonic and delta matter to
investigate the structure of neutron stars. We show that it is possible to
reconcile the multi-messenger astronomy constraints within a purely hadronic
equation of state (EOS), which accounts for the self-consistent treatment of
the chiral symmetry restoration in the baryonic sector. We demonstrate that the
characteristics of the EOS required by the astrophysical constraints do not
necessarily imply the existence of a hadron-quark phase transition in the
stellar core.
| https://export.arxiv.org/pdf/2208.03933 |
\title{Reconciling Multi-messenger Constraints with Chiral Symmetry Restoration \thanks{Presented at Quark Matter 2022}%
}
\author{Micha\l{} Marczenko\thanks{speaker}\thanks{e-mail: michal.marczenko@uwr.edu.pl}
\address{Incubator of Scientific Excellence - Centre for Simulations of Superdense Fluids, University of Wroc\l{}aw, plac Maksa Borna 9, 50-204 Wroc\l{}aw, Poland}
\\[3mm]
{Krzysztof Redlich, Chihiro Sasaki %
\address{Institute of Theoretical Physics, University of Wroc\l{}aw, plac Maksa Borna 9, 50-204 Wroc\l{}aw, Poland}
}
}
\section{Introduction}
Advances in multi-messenger astronomy of different sources have led to remarkable improvements in constraining the equation of state (EoS) of dense, strongly interacting matter. The modern observatories for measuring masses and radii of compact objects, the gravitational wave interferometers of the LIGO-VIRGO Collaboration (LVC)~\cite{LIGOScientific:2018cki}, and the X-ray observatory Neutron Star Interior Composition Explorer (NICER) provide new powerful constraints on their mass-radius (M-R) profile~\cite{Riley:2019yda, Miller:2019cac, Riley:2021pdl, Miller:2021qha}. These stringent constraints allow for a detailed study of the neutron star (NS) properties and ultimately the microscopic properties of the EoS. In particular, the existence of $2~M_\odot$ NSs requires that the EoS must be stiff at intermediate to high densities to support them from gravitational collapse. At the same time, the tidal deformability (TD) constraint of a canonical $1.4~M_\odot$ NS from the GW170817 event implies that the EoS has to be fairly soft at intermediate densities, which may be indicative of a phase transition in the cores of NSs. This transition is commonly associated with a possible onset of deconfined quark matter. This conclusion has been reached through systematic analyses of recent astrophysical observations within simplistic approaches (see, e.g.,~\cite{Alford:2013aca}). Although such schemes are instructive, they are not microscopic approaches. They provide interesting heuristic guidance, but cannot replace more realistic dynamical models for the EoS, which account for the fundamental properties of quantum chromodynamics (QCD), the theory of strong interactions, i.e., a self-consistent treatment of the chiral symmetry restoration in the baryonic sector.
The recent LQCD results exhibit a clear manifestation of the parity doubling structure for the low-lying baryons around the chiral crossover~\cite{Aarts:2018glk}. Imprints of chiral symmetry restoration are also expected to occur in the baryonic sector of cold and dense matter. Such properties can be described in the framework of the parity doublet model~\cite{Detar:1988kn, Jido:2001nt}. The model has been applied to hot and dense hadronic matter, and to neutron stars (see, e.g.,~\cite{Marczenko:2021uaj,Zschiesche:2006zj, Marczenko:2017huu, Marczenko:2018jui, Sasaki:2010bp, Marczenko:2020jma, Marczenko:2020omo}).
In this work, we utilize the parity doublet model for nucleonic and $\Delta$ matter~\cite{Takeda:2017mrm} to investigate its implications for the structure of neutron stars.
\section{Equation of State}
The thermodynamic potential of the model in the mean-field approximation reads~\cite{Marczenko:2021uaj,Marczenko:2022hyt}
\begin{equation}\label{eq:thermo_potential}
\Omega = V_\sigma + V_\omega + V_\rho + \sum_{x=N,\Delta}\Omega_x\rm,
\end{equation}
where $\Omega_x$ is the kinetic part of the thermodynamic potential, and $x$ labels positive-parity and negative-parity spin-$1/2$ nucleons, i.e., $N\in \lbrace p,n;p^\star,n^\star \rbrace$, and spin-$3/2$ $\Delta$'s, i.e., \mbox{$\Delta \in \lbrace\Delta_{++,+,0,-};\Delta^\star_{++,+,0,-}\rbrace$}. Note that the negative-parity states are marked with an asterisk. The potentials $V_i$ are commonly used mean-field potentials. The masses of the positive- and negative-parity chiral partners are given by
\begin{equation}\label{eq:doublet_mass}
m^x_\pm = \frac{1}{2}\left[\sqrt{\left(g_1^x+g_2^x\right)^2\sigma^2 + 4\left(m_0^x\right)^2} \mp \left(g^x_1-g^x_2\right)\sigma\right] \textrm,
\end{equation}
where the $\pm$ sign denotes parity and $x=N,\Delta$. When chiral symmetry is restored, the masses in each parity doublet become degenerate: $m_\pm^x(\sigma=0) = m_0^x$, where $m_0^x$ is the chirally invariant mass parameter. The positive-parity nucleons are identified with the $N(938)$ states. Their negative-parity counterparts are identified with the $N(1535)$ resonance~\cite{ParticleDataGroup:2020ssz}. The positive-parity $\Delta$ states are identified with the $\Delta(1232)$ resonance. Their negative-parity chiral partners, $\Delta^\star$, are identified with the $\Delta(1700)$ resonance~\cite{ParticleDataGroup:2020ssz}. A detailed description of the model and the numerical values of the parameters used in this contribution can be found in~\cite{Marczenko:2022hyt}.
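As an illustrative numerical sketch (not part of the model fit in the paper), the doublet mass formula \eqref{eq:doublet_mass} can be evaluated directly; the couplings $g_1,g_2$ below are hypothetical values chosen to roughly reproduce the vacuum masses $N(938)$ and $N(1535)$ for $m_0^N=650$ MeV and $\sigma_0=f_\pi=93$ MeV:

```python
import numpy as np

def doublet_masses(sigma, g1, g2, m0):
    """Chiral-partner masses m_+ / m_- of the doublet mass formula."""
    root = np.sqrt((g1 + g2) ** 2 * sigma ** 2 + 4.0 * m0 ** 2)
    m_plus = 0.5 * (root - (g1 - g2) * sigma)   # positive-parity state
    m_minus = 0.5 * (root + (g1 - g2) * sigma)  # negative-parity state
    return m_plus, m_minus

# hypothetical couplings tuned to the vacuum masses for m_0 = 650 MeV
g1, g2, m0 = 14.52, 8.11, 650.0
```

At $\sigma=0$ both masses collapse onto $m_0$, which is the degeneracy statement $m_\pm^x(\sigma=0)=m_0^x$ quoted above.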
In the present work, we study the influence of $\Delta$ matter on the EoS and compliance with astrophysical constraints, i.e., $M_{\rm max} = (2.08 \pm 0.07)~M_\odot$~\cite{Fonseca:2021wxt}, as well as M-R and $\Lambda_{1.4} = 190^{+390}_{-120}$ from GW170817~\cite{LIGOScientific:2018cki}.
\section{Results}
Fig.~\ref{fig:p_e} shows the calculated EoSs under the NS conditions for selected values of $m_0^N=550$, $600$, $650$, $700~$MeV. To illustrate the effect of $\Delta$ matter on the EoS at intermediate densities, we show results obtained for the purely nucleonic EoS (dashed line) together with the case $m_0^\Delta = m_0^N$ (solid line). The regions bounded by the two results correspond to the range spanned by solutions with $m_0^N < m_0^\Delta$ in each case. In general, the low-density behavior in each case is similar, until deviations from the purely nucleonic EoSs are induced by the onset of $\Delta$ matter. The swift increase of the energy density is directly linked to the partial restoration of chiral symmetry within the hadronic phase and is reflected in the in-medium properties of dense matter in the parity doublet model. Most notably, it is associated with a drastic decrease of the mass of the negative-parity states in each parity doublet toward their asymptotic values, $m_0^x$. Interestingly, the softening is followed by a subsequent stiffening compared to the purely nucleonic result, and the EoS returns to agreement with the constraints at higher densities. This effect is most pronounced for $m_0^\Delta = m^N_0$. For the other parametrizations shown in the figure, the EoSs lie within the region allowed by the constraint.
In Fig.~\ref{fig:m0_constraints}, we show the allowed combinations of $m^N_0$ and $m^\Delta_0$ for which the TD and $2~M_\odot$ constraints are met. The green circles indicate configurations that fulfill the lower bound of the maximum mass constraint, $M_{\rm max} = (2.08 \pm 0.07)~M_\odot$~\cite{Fonseca:2021wxt}. The red crosses indicate configurations that are in accordance with the upper bound of the TD constraint, $\Lambda_{1.4} = 190^{+390}_{-120}$~\cite{LIGOScientific:2018cki}. The gray-shaded area shows the region where the two constraints are fulfilled simultaneously. The orange points show configurations with the largest value of $m^\Delta_0$ for which $\Delta$ matter appears through a first-order transition.
The constraint derived in~\cite{Annala:2019puf} features a notable change of the logarithmic slope of $p(\epsilon)$ around $\epsilon_{\rm QGP}\approx400-700~\rm MeV/fm^{3}$ (see Fig.~\ref{fig:p_e}), which is the estimate for the deconfinement transition at high temperatures~\cite{Bazavov:2014pvz}. This change can be quantified by the polytropic index, $\gamma = d\log p / d\log \epsilon$. In~\cite{Annala:2019puf}, the authors chose $\gamma < 1.75$ as the criterion for the onset of quark matter in the cores of NSs. Interestingly, at higher densities our results feature a similar change of the slope, regardless of the appearance of $\Delta$ matter. In Fig.~\ref{fig:polytropic}, we show as an example the polytropic index $\gamma$ obtained in the parity doublet model for $m_0^N=650~$MeV. Remarkably, $\gamma$ drops well below the threshold value of $1.75$ around $\epsilon_{\rm QGP}$. Thus, the polytropic index $\gamma$ does not provide a robust criterion and does not necessarily signal the onset of deconfined quark matter in the NS cores.
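Since the criterion of~\cite{Annala:2019puf} is phrased entirely in terms of $\gamma = d\log p / d\log\epsilon$, it is straightforward to extract from any tabulated EoS. A minimal sketch, using a toy power-law EoS (not the parity doublet model) purely so the result can be checked against the exact constant $\gamma$:

```python
import numpy as np

def polytropic_index(eps, p):
    """gamma = d log p / d log eps from a tabulated EoS p(eps)."""
    return np.gradient(np.log(p), np.log(eps))

# sanity check on an exact polytrope p = K * eps^2.5 (constant gamma),
# tabulated on a log-uniform grid of energy densities in MeV/fm^3
eps = np.geomspace(100.0, 1000.0, 200)
p = 1.0e-4 * eps ** 2.5
gamma = polytropic_index(eps, p)
quark_like = gamma < 1.75   # the onset criterion of Annala et al.
```

On a log-uniform grid `np.gradient` differentiates $\log p$ against $\log\epsilon$ directly, so no interpolation is needed.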
\section{Conclusion}
We have analyzed the properties of neutron stars and found that the multi-messenger constraints can be accommodated within a purely hadronic EoS for nucleonic matter including the $\Delta(1232)$ resonance, subject to chiral symmetry restoration. As we have demonstrated in this work, characteristics of the bulk EoS, such as the change of its logarithmic slope, do not necessarily imply the existence of a hadron-quark phase transition, as proposed in recent studies, e.g.,~\cite{Annala:2019puf}. We conclude that, given the anticipated near-future advances in multi-messenger astronomy, it will become inevitable to link the observed properties of NSs and their mergers to the fundamental properties of the strong interactions described by QCD, including chiral symmetry restoration as well as the emergence of conformal matter~\cite{Marczenko:2022jhl}.
\section*{Acknowledgements}
This work is supported partly by the Polish National Science Centre (NCN) under OPUS Grant No. 2018/31/B/ST2/01663 (K.R. and C.S.), Preludium Grant No. 2017/27/N/ST2/01973 (M.M.), and the program Excellence Initiative–Research University of the University of Wroc\l{}aw of the Ministry of Education and Science (M.M). K.R. also acknowledges the support of the Polish Ministry of Science and Higher Education.
\bibliographystyle{IEEEtran}
\bibliography{main.bib}
|
Title:
Wide-angle effects in multi-tracer power spectra with Doppler corrections |
Abstract: We examine the computation of wide-angle corrections to the galaxy power
spectrum including redshift-space distortions and relativistic Doppler
corrections, and also including multiple tracers with differing clustering,
magnification and evolution biases. We show that the inclusion of the
relativistic Doppler contribution is crucial for a consistent wide-angle
expansion for large-scale surveys, both in the single and multi-tracer cases.
We also give for the first time the wide-angle cross-power spectrum associated
with the Doppler magnification-galaxy cross-correlation, which has been shown
to be a new way to test general relativity. In the full-sky power spectrum, the
wide-angle expansion allows integrals over products of spherical Bessel
functions to be computed analytically as distributional functions, which are
then relatively simple to integrate over. We give for the first time a complete
discussion and new derivation of the finite part of the divergent integrals of
the form $\int_0^\infty dr r^n j_{\ell_1}(kr) j_{\ell_2}(qr)$, which are
necessary to compute the wide-angle corrections when a general window function
is included. This facilitates a novel method for integrating a general analytic
function against a pair of spherical Bessel functions.
| https://export.arxiv.org/pdf/2208.04819 |
\date{\today}
\flushbottom
\section{Introduction}
The two-point correlation function (2PCF) in observed (redshift) space is often expressed in the plane-parallel or flat-sky approximation, in which the directions from the observer to galaxy pairs are assumed to be nearly equal, $\rh_1\approx \rh_2$. Galaxy surveys with wide sky coverage, in particular next-generation surveys, require us to move beyond the flat-sky limit and include wide-angle correlations, with $\rh_1\neq \rh_2$. This was shown in early work by \cite{Szalay:1997cc,Matsubara:1999du} (using a tripolar spherical harmonic expansion) and then further investigated in, e.g., \cite{Szapudi:2004gh, Papai:2008bd, Raccanelli:2010hk,
Bertacca:2012tp,Yoo:2013zga, Reimberg:2015jma, Raccanelli:2016avd,Tansella:2017rpi,Castorina:2017inr,Beutler:2018vpe,Beutler:2021eqq,Castorina:2021xzs,Noorikuhani:2022bwc}. Our aim is to extend and clarify a number of these results for the multi-tracer power spectrum, including relativistic corrections. \CC{We give for the first time the power spectrum and wide-angle corrections to the cross-correlation between the galaxy number counts and the Doppler magnification induced by peculiar velocities. We also include contributions from the derivatives of the growth rate and clustering biases, which have been neglected in other studies but, as we show, are needed for a consistent analysis on large scales.}
\subsection{\redd{Doppler contribution to the number counts}}
{The galaxy number density contrast at the source is $\delta_g=(n_g-\bar{n}_g)/\bar{n}_g=b\,\delta_{\rm m}$, where $b$ is the clustering bias and $\delta_{\rm m}$ is the matter density contrast. For brevity, we write $\delta_g \equiv \delta$ and omit the $z$-dependence in our expressions. At first (linear) order in perturbations, the number density contrast at the source is related to the contrast $\delta^s$ that is observed in redshift space by}
\be \label{1a}
{\delta^s(\r_i) = \delta(\r_i) -{1\over \H_i}{\p \over \p r_i}\big(\v_i\cdot \rh_i\big)-{\alpha_i\over \H_i}\,\big(\v_i\cdot \rh_i\big)\,,}
\ee
where we use Newtonian gauge.
Here $\v_i=\v(\r_i)$ is the peculiar velocity, $\r_i=r(z_i)\rh_i$, where $r$ is the comoving line-of-sight distance, and $\H_i=\H(z_i)$ is the conformal Hubble rate. The second term on the right of \eqref{1a} is the standard Kaiser redshift-space distortion (RSD), while the third term is a Doppler redshift effect. This Doppler term is suppressed relative to the Kaiser RSD term by a factor $\cH/k$ in Fourier space. It is therefore neglected in most work on galaxy clustering in redshift space.
The Doppler coefficient in \eqref{1a} is given on a spatially flat background by
(e.g. \cite{Challinor:2011bk,Bertacca:2012tp}; see \cite{DiDio:2016ykq} for the generalisation to spatially curved backgrounds):
\be \label{4}
\alpha_i = {2\over r_i} - \H_i\, b_{{\rm e}\,i} -{\ud{\H_i} \over \ud\ln(1+z_i)} +2 \H_i\,\Q_i \left(1-{1 \over r_i\H_i} \right).
\ee
In the original RSD paper \cite{Kaiser:1987qv}, the Doppler coefficient \eqref{4} includes only the first two terms on the right. This is
followed in the pioneering wide-angle papers \cite{Szalay:1997cc,Matsubara:1999du} and many subsequent papers. However, the last two terms on the right of \eqref{4} are required for a correct analysis \cite{Challinor:2011bk,Bertacca:2012tp}.
In \eqref{4}, the `evolution bias' $b_{\rm e}= -\p\ln \bar{n}_g/\p\ln(1+z)$ measures the deviation of the average comoving number density from constancy (due e.g. to galaxy mergers) \cite{Maartens:2021dqy}.
The third term on the right takes account of cosmic evolution.
The last term on the right arises from the Doppler correction to lensing convergence \cite{Bonvin:2005ps,Bonvin:2008ni} in a flux-limited survey,
where $\Q=-\p\ln \bar{n}_g/\p\ln L_{\rm c}$ is the magnification bias and $L_{\rm c}$ is the luminosity cut \cite{Maartens:2021dqy}. (In the ideal case of no flux limit, $\Q=0$, and for line intensity mapping, $\Q=1$.)
In \eqref{1a} we have omitted two contributions to the number density contrast:
\begin{enumerate}
\item[(1)]
the contribution from the standard lensing magnification term $2(\Q-1)\kappa$, where $\kappa$ is a weighted integral of $\delta_{\rm m}$ along the line of sight;
\item[(2)]
additional relativistic potential terms, including Sachs-Wolfe, integrated Sachs-Wolfe and time delay effects, which collectively scale as the gravitational %
potential $\Phi$.
\end{enumerate}
The standard lensing magnification contribution (1) is typically only important at higher redshift -- and it requires significant additional complexity to incorporate it into the Fourier power spectrum.
The additional relativistic terms in (2) are suppressed relative to the Doppler term in \eqref{1a}, since the Poisson equation shows that $\Phi\sim (\H^2/k^2)\delta_{\rm m}$. (See e.g. \cite{Challinor:2011bk,Bertacca:2012tp,Tansella:2017rpi} for details of all these terms.)
There is a subtle point about the Doppler contribution relative to the potential contribution. In $\delta^s$, the Doppler term is clearly less suppressed than the potential contribution. However, this does not translate directly to the 2PCF in the case of a single tracer with correlations at equal redshifts. In this case, i.e. auto-correlations at equal $z$, the Doppler contribution is the square of the Doppler term, i.e., scaling as $(\cH^2/k^2)P_{\rm m}$, like the leading potential contribution. \red{Strictly, this means that it is inconsistent to neglect the potential contributions while including the Doppler term when considering auto-correlations at equal redshifts. Including the potential terms is simple in principle, but we omit them in order to avoid additional complexity in the equations. Effectively, this means that we adopt a `weak field' approximation \cite{DiDio:2018zmk}. }
When considering correlations of two tracers (see e.g. \cite{McDonald:2009ud, Bonvin:2013ogt, Bacon:2014uja, Gaztanaga:2015jrs,Hall:2016bmm,Lepori:2017twd,Breton:2018wzk, DiDio:2018zmk,DiDio:2020jvo,Beutler:2020evf,Beutler:2021eqq}), the leading Doppler contribution to the 2PCF scales instead as $(\H/k)P_{\rm m}$ -- and in this case it is consistent to neglect the potential contributions. Note that it is also consistent in the case of single-tracer correlations at unequal redshifts.
{Furthermore, we highlight the fact that the leading wide-angle contribution to $\delta^s$ scales as $r/d \sim d^{-1}/k$, where $r$ is the comoving separation of the galaxy pair and $d$ is the line-of-sight comoving distance to the galaxy pair (see \autoref{fig1} and \autoref{s1.1}). As a consequence, the leading wide-angle and leading Doppler contributions are of the same order -- so that a consistent treatment requires the inclusion of both (e.g. \cite{Castorina:2021xzs,Noorikuhani:2022bwc}).}
\CC{In addition to this, for a consistent treatment we need to include radial derivatives of the growth rate, the biases, and other variables which appear. In an expansion in $r/d$, derivative terms appear at order $\sim r{\cal H}$ when approaching $z\sim 1$~-- i.e., distances of the Hubble scale to the galaxy pair -- and so are also needed for a consistent treatment. In both cases these have been neglected in previous analyses.}
We define the transforms \cite{Szalay:1997cc,Matsubara:1999du,Bertacca:2012tp}
\be
A^n_\ell(\r) = \int {\ud^3\k \over (2\pi)^3}\,(\I k)^{-n}\cl_\ell(\hat\k\cdot\hat\r) \,\e^{\I \k\cdot\r}\,\delta_{\rm m}(\k),
\label{7}
\ee
where $\cl_\ell$ is a Legendre polynomial,
and then we can express \eqref{1a} as
\be
{\delta^s(\r_i)\over b_i} = \Big(1+ { \beta_i\over 3} \Big)A^0_0(\r_i)+ {2\over3}\beta_i\, A^0_2(\r_i) +\beta_i\,\alpha_i \,A^1_1(\r_i) \quad
\mbox{where}\quad\beta_i\equiv {f_i\over b_i}.
\label{6}
\ee
It follows that the 2PCF in redshift space, $\xi_{{g}}(\r_1,\r_2)=\langle \delta^s(\r_1)\,\delta^s(\r_2)\rangle $, is given by
\begin{align}
{\xi_{{g}}(\r_1,\r_2)\over b_1b_2} =& \Big(1+ { \beta_1\over 3} \Big)\Big(1+ { \beta_2\over 3} \Big)S^0_{00}(\r_1,\r_2)+ {4\over9}\beta_1\beta_2\, S^0_{22}(\r_1,\r_2) +
{2\beta_2\over3}\Big(1+ { \beta_1\over 3} \Big)S^0_{02}(\r_1,\r_2)
\notag\\
& +{2\beta_1\over3}\Big(1+ { \beta_2\over 3} \Big)S^0_{20}(\r_1,\r_2)
+\Big(1+ { \beta_1\over 3} \Big)\beta_2\alpha_2 S^1_{01}(\r_1,\r_2)+\Big(1+ { \beta_2\over 3} \Big)\beta_1\alpha_1 S^1_{10}(\r_1,\r_2)
\notag\\
&
+{2\over3}\beta_1\beta_2\Big[\alpha_1S^1_{12}(\r_1,\r_2)+ \alpha_2S^1_{21}(\r_1,\r_2)\Big]
+\beta_1\beta_2\,\alpha_1\alpha_2 S^2_{11}(\r_1,\r_2), \label{8}
\end{align}
where
\bea
S^{n_1+n_2}_{\ell_1\ell_2}(\r_1,\r_2) \equiv (-1)^{\ell_2} \int {\ud^3\k \over (2\pi)^3}\,(\I k)^{-(n_1+n_2)}\,\cl_{\ell_1}(\hat\k\cdot\hat\r_1)\,\cl_{\ell_2}(\hat\k\cdot\hat\r_2) \,\e^{\I \k\cdot(\r_1-\r_2)}\,P_{\rm m}(k).
\label{9}
\eea
Thus in general the 2PCF is a function of $\bm r_1$ and $\bm r_2$, or equivalently of
\be
\bm r=\r_1-\bm r_2 \quad \mbox{and} \quad \bm d = (1-t)\r_1+t\r_2\quad (0\leq t\leq 1),
\ee
where $t$ determines the choice of $\bm d$ -- see \autoref{fig1}.
We can also define the wide-angle 2-point cross-correlation of galaxy number density contrast and Doppler magnification $\xi_\kappa(\bm r_1,\bm r_2)=\big\langle \delta^s(\r_1)\,\kappa_v(\r_2)\big\rangle$, where the magnification induced by peculiar velocities is given by \cite{Bacon:2014uja}:
\bea
\kappa_v =-{\tilde \alpha \over \cH}\, \bm v\cdot\bm n\qquad\mbox{where}\quad \tilde \alpha={1\over r}-{\cH}\,.
\eea
Then the equivalent of \eqref{8} becomes
\begin{align}
{\xi_\kappa(\bm r_1,\bm r_2)\over b_1f_2\tilde\alpha_2} =& \Big(1+ { \beta_1\over 3} \Big) S^1_{01}(\r_1,\r_2)
+{2\over3}\beta_1\,S^1_{21}(\r_1,\r_2)
+\beta_1\alpha_1\, S^2_{11}(\r_1,\r_2)\,. \label{8dop}
\end{align}
As shown in \cite{Bonvin:2016dze}, this 2-point correlation function has a significant dipole which can be used as a test of general relativity~\cite{Andrianomena:2018aad,Franco:2019wbj}. The power spectrum of this 2PCF has not been given before.
\subsection{Wide-angle multipole expansion -- overview}
\label{s1.1}
Here we give a brief summary of the calculation of the wide-angle expansion.
In general, the multipole decompositions of $\xi_{{g}}(\bm d,\bm r)$ and $\xi_\kappa(\bm d,\bm r)$, or their equivalent power spectra $P_{{g}}(\bm d, \bm k)$ and $P_\kappa(\bm d, \bm k)$ (defined below), are analytically intractable. We aim to produce a series expansion first in the 2PCF about the plane-parallel limit ($r\ll d$), and then for each term in that series expansion, we perform a multipole decomposition with respect to the angle between $\r$ and $\bm d$, with cosine $\mu=\hat{\bm d}\cdot{\hat \r}$. Once translated to the power spectrum this becomes an expansion about $k^{-1}\ll d$ with coefficients expanded in Legendre multipoles in $\mu_k=\hat{\bm d}\cdot{\hat \k}$. Clearly a sensible expansion variable in the 2PCF is $x=r/d$, but we need to be careful. The plane-parallel limit is not simply the limit as $r\to0$ with $d$ fixed: in this limit, $\xi_{\rm pp}(r)\sim \int \ud k\, j_\ell(kr)\,P_{\rm m}(k)$, and therefore $\xi_{\rm pp}$ is a function of $r$. Nor is it the limit $d\to\infty$ with $r$ fixed -- since the coefficients $\alpha$ and $\beta$ are functions of the two redshift shells $\bm r_1,\bm r_2$ we are looking at [see \eqref{skjncskjdbvf} below].
The plane-parallel limit is thus a mixture of both $r\to0$ and $d\to\infty$.
On inspection, each term in \eqref{8} contains parts that depend on $|\r_1|,|\r_2|$~-- i.e., the $\alpha$ and $\beta$ coefficients, which depend only on the distance to each source~-- and parts that depend on $\hat\r_1,\hat\r_2,\hat\r$ and $|\r|$ but not on $|\r_1|,|\r_2|$~-- i.e., the integrals over the power spectrum, $S^n_{\ell_1,\ell_2}$, which depend on the geometry of the triangle and the distance between the sources. These different contributions require slightly different series expansions around the plane-parallel limit:
\begin{itemize}
\item
In $S^n_{\ell_1,\ell_2}$, we may expand in a series in $x=r/d$ around $x=0$ with $r$ fixed, because the direction vectors $\hat\r_1,\hat\r_2,\hat\r$ only depend on the ratio $x$, not on $r$ and $d$ separately. In addition, $|\r|$ in the exponential does not depend on $\mu$: hence it does not affect the multipoles and does not need expanding (the $\mu$ dependence factors into Legendre polynomials on using a plane wave expansion below).
\item
Functions of $|\r_1|,|\r_2|$ which appear can sensibly be expanded in a series in $x$, but now with $d$ fixed, so that the series coefficients come out in terms of $\alpha(d)$ and $\beta(d)$ and their derivatives $\d\alpha(d)/\d\ln d$, $\d\beta(d)/\d\ln d$ evaluated at $r=0$. We start with a general $\bm r_1\neq \bm r_2$ and then expand functions of these quantities around a median distance given by the shell $d$ [see, e.g., \eqref{cjdhsbdjsbdjsbc}].
\end{itemize}
Putting this all together gives a multipole series of the form,
\be\label{dksjndckdncs}
\xi_{{g}}(\bm d,\bm r) = \sum_{\ell,p} \,\Xi_\ell^{(p)}(r,d) \left(\frac{r}{d}\right)^p \mathcal{L}_\ell(\mu)\quad \mbox{where}\quad \mu=\hat{\bm d}\cdot{\hat \r}=\cos\gamma\,,
\ee
and the $\ell$-pole is given by
\be
\sum_{p} \,\Xi_\ell^{(p)}(r,d) \left(\frac{r}{d}\right)^p\,. \label{e12}
\ee
We will denote analogous coefficients of the galaxy-magnification 2PCF with a tilde.
The plane-parallel approximation $p=0$ leads to the well-known coefficients for the galaxy-galaxy power spectrum for two tracers. The leading terms at order $(\cH/k)^0$ are the set of even multipoles given first in~\cite{Szalay:1997cc,Matsubara:1999du},
\begin{align}
\Xi^{(0)}_{0}&=\frac{1}{15}b_1b_2 \Big(15+5{\beta_{1}}+5{\beta_{2}}+3{\beta_{1} \beta_{2}}\Big) \xi^{(0)}_{0}, \\
\Xi^{(0)}_{2}&=-\frac{2}{21}b_1b_2\Big({7 \beta_{1}}+{7 \beta_{2}}+{6 \beta_{1} \beta_{2}}\Big) \xi^{(0)}_{2}, \\
\Xi^{(0)}_{4}&=\frac{8}{35}b_1b_2\, \beta_{1} \beta_{2} \,\xi^{(0)}_{4}\,.
\end{align}
At order $(\cH/k)^1$ we have the sub-leading odd multipoles arising from the Doppler contribution given here for the first time,
\begin{align}
\Xi^{(0)}_{1}&=\frac{1}{5}b_1b_2\Big[5\beta_{1} \alpha_{1}-5\beta_{2}\alpha_{2} +{3 \beta_{1} \beta_{2}\big( \alpha_{1}}-{ \alpha_{2}}\big) \Big] \xi^{(1)}_{1}, \\
\Xi^{(0)}_{3}&=\frac{2}{5}b_1b_2\, \beta_{1} \beta_{2} \big(\alpha_{2}-\alpha_{1}\big) \xi^{(1)}_{3}\,.
\end{align}
Note that these vanish in the case of a single tracer.
The coefficients $\Xi_\ell^{(p)}$ of the series in \eqref{e12} depend on $d$ via the coefficients $\alpha,\beta$ and on the separation $r$ via a sum over weighted integrals of the power spectrum:
\bea
\xi^{(n)}_{\ell'}(r) &\equiv & \int {\ud k \over 2\pi^2}\, k^{2-n}j_{\ell'}(kr) P_{\rm m}(k).
\label{12}
\eea
Note that these terms are of order $(1/k)^n$.
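The integrals \eqref{12} are Hankel-type transforms; in practice they would be evaluated with a fast algorithm such as FFTLog, but a direct-quadrature sketch illustrates the definition. The Gaussian spectrum $P_{\rm m}(k)=e^{-k^2}$ below is an assumption made purely so the result can be checked in closed form:

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import trapezoid

def xi_n_ell(r, n, ell, k, Pk):
    """xi^(n)_ell(r) = int dk/(2 pi^2) k^(2-n) j_ell(k r) P(k),
    by direct quadrature over a tabulated spectrum P(k)."""
    integrand = k ** (2 - n) * spherical_jn(ell, k * r) * Pk
    return trapezoid(integrand, k) / (2.0 * np.pi ** 2)

# toy Gaussian spectrum: the n = 0, ell = 0 moment then has the closed
# form xi(r) = sqrt(pi) * exp(-r^2/4) / (8 pi^2)
k = np.linspace(1e-4, 12.0, 20001)
xi = xi_n_ell(1.0, 0, 0, k, np.exp(-k ** 2))
```

The upper cutoff $k=12$ is harmless here because the toy spectrum decays as $e^{-k^2}$; for a realistic $P_{\rm m}(k)$ the slow decay is exactly why the distributional treatment of the following sections is needed.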
In the case of the galaxy-magnification power spectrum, the plane-parallel limit has a dipole and an octupole at order $n=1$, corresponding to terms $\sim(\cH/k)^1$:
\begin{align}
\tilde\Xi^{(0)}_{1}&=-\frac{1}{5}b_{1}f\tilde\alpha_{2} { \big(5+3 \beta_{1}\big)\xi^{(1)}_{1}}\,, \\
\tilde\Xi^{(0)}_{3}&=\frac{2}{5}{b_{1} f \beta_{1} \tilde\alpha_{2} \, \xi^{(1)}_{3}} \,.
\end{align}
At order $(\cH/k)^2$ ($n=2$), we have the sub-leading corrections to the monopole and quadrupole:
\begin{align}
\begin{gathered}
\tilde\Xi^{(0)}_{0}=-\frac{1}{3}{ b_{1} f \beta_{1} \alpha_{1}\tilde\alpha_{2} \xi^{(2)}_{0}} \\
\tilde\Xi^{(0)}_{2}=\frac{2}{3}{ b_{1} f \beta_{1} \alpha_{1}\tilde\alpha_{2} \xi^{(2)}_{2}}
\end{gathered}
\end{align}
We will derive the other coefficients below.
Only in the plane-parallel approximation is the $\ell$-pole dependent solely on $\xi^{(n)}_{\ell}(r)$~-- in general, differing $\ell'$-poles, $\xi^{(n)}_{\ell'\neq\ell}(r)$, come into play, and \eqref{e12} leads to:
\be\label{djschsbdisuhdishd}
\Xi_\ell^{(p)}(r,d) = \sum_{\ell',n} \Xi_{\ell\ell'}^{(p,n)}(d)\xi^{(n)}_{\ell'}(r)\,.
\ee
Here $\Xi_{\ell\ell'}^{(p,n)}(d)$ are functions of $\alpha(d)$, $\beta(d)$ and their derivatives.
Once we have the 2PCF in the form~\eqref{dksjndckdncs}, we define the wide-angle power spectrum at a displacement $\bm{d}$ from the observer by Fourier transforming the redshift-space 2PCF over $\bm{r}$ (see \autoref{fig1}):\footnote{Note that this is treated as a formal Fourier transform over $r\in[0,\infty)$, not as a discrete Fourier series over a finite $r$. A window function can be added to account for such effects. }
\be\label{psdk}
P_{{g}}(\bm d, \bm k)=\int \ud^3 \r \,{\rm e}^{-\I\,\k\cdot\r}\,\xi_{{g}}(\bm d,\bm r)\,.
\ee
The power spectrum can be expanded in multipoles defined by the angle between the wavevector $\k$ and the line of sight $\bm d$, as
\be
P_{{g}}(\bm d, \bm k) = \sum_\ell \mathcal{P}_\ell(k,d)\, \cl_\ell(\mu_k)
\quad\mbox{where}\quad {\mu_k=\hat{\bm d}\cdot\hat{\bm k}}\,.
\ee
Then, using the plane-wave expansion
\be
{\rm e}^{-\I\,\k\cdot\r} = \sum_{\ell=0}^\infty \I^{-\ell} (2\ell+1) j_\ell(kr)
\cl_\ell(\hat\k\cdot\hat\r)\,,
\ee
we find that
\be
\mathcal{P}_\ell(k,d) = 4\pi\, \I^{-\ell} \sum_p\int \d r\,r^2\,
\Xi_\ell^{(p)}(r,d)\, j_\ell(kr)\left(\frac{r}{d}\right)^p \,.
\ee
An important difference in the multipoles in redshift space versus Fourier space is that in redshift space the multipoles are in $\mu$ with $r$ fixed, while in Fourier space the multipoles are in $\mu_k$ with $k$ fixed.
On using~\eqref{12} and \eqref{djschsbdisuhdishd}, we find that the multipoles become
\be\label{csdcndkncsk}
\mathcal{P}_\ell(k,d) = \frac{2}{\pi}\,\I^{-\ell}\sum_{\ell', n, p} d^{-p}\,\Xi_{\ell\ell'}^{(p,n)}(d)\int {\ud q}\, q^{2-n}\,P_{\rm m}(q)\, \mathcal{I}^p_{\ell\ell'}(k,q)\,,
\ee
where
\be\label{ipll}
\mathcal{I}^p_{\ell\ell'}(k,q)=\int_0^\infty \ud r \,r^{2+p}\, j_\ell(kr)\,j_{\ell'}(qr)\,.
\ee
These integrals, although divergent, can be evaluated as distributions~-- delta functions, and more complicated singular points~-- which then simply feed into the integral over the power spectrum in \eqref{csdcndkncsk}.
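As an orienting example (this is the standard closure relation for spherical Bessel functions, quoted here rather than derived), the simplest case $p=0$, $\ell'=\ell$ of \eqref{ipll} is an ordinary delta function,
\be
\mathcal{I}^0_{\ell\ell}(k,q)=\int_0^\infty \ud r \,r^{2}\, j_\ell(kr)\,j_{\ell}(qr)=\frac{\pi}{2k^2}\,\delta(k-q)\,,
\ee
so that \eqref{pnpll} collapses to $P^{0n}_{\ell\ell}(k)=({\pi/2})\,k^{-n}P_{\rm m}(k)$ and the $p=0$ multipoles recover the plane-parallel power spectrum. The nontrivial distributional structure arises for $p>0$ and $\ell'\neq\ell$.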
We can rewrite \eqref{csdcndkncsk} as a series in $k^{-1}/d=(kd)^{-1}$ (which is a Fourier counterpart to $r/d$):
\be
\mathcal{P}_\ell(k,d)=\sum_{p} {\mathcal{P}^{(p)}_\ell {(k,d)}}\,{(kd)^{-p}}\,,
\ee
with
\begin{align}
\mathcal{P}^{(p)}_\ell(k,d)
&=\frac{2}{\pi}\I^{-\ell}\sum_{\ell', n}{\Xi_{\ell\ell'}^{(p,n)}(d)}\,P^{pn}_{\ell\ell'}(k)\,,\label{cppl}
\end{align}
where
\be \label{pnpll}
P^{pn}_{\ell\ell'}(k)=k^p\int_0^\infty {\ud q}\, q^{2-n}\,P_{\rm m}(q)\, \mathcal{I}^p_{\ell\ell'}(k,q)\,.
\ee
\section{Evaluating the 2PCF and power spectrum}
In order to evaluate the 2PCF, we first evaluate $S^{n_1+n_2}_{\ell_1\ell_2}(\r_1,\r_2)$ in \eqref{9}, finding
\begin{align}
S_{\ell_{1} \ell_{2}}^{n}=&\sum_{L}(-1)^{\ell_2} i^{L-n} (2L+1)\left[\frac{(4\pi)^3(2L + 1)}{(2\ell_1 + 1)(2\ell_2 +1 )}\right]^{1/2}
\left(\begin{array}{lll}
\ell_{1} & \ell_{2} & L \\
0 & 0 & 0
\end{array}\right)
\xi_{L}^{(n)}(r)
\notag \\
&\sum_{m_1,m_2,M}
\left(\begin{array}{lll}
\ell_{1} & \ell_{2} & L \\
m_1 & m_2 & M
\end{array}\right)
Y_{\ell_{1} m_{1}}\left(\hat{\boldsymbol{r}}_{1}\right) Y_{\ell_{2} m_{2}}\left(\hat{\boldsymbol{r}}_{2}\right)Y_{L M}(\hat{\boldsymbol{r}})
\end{align}
\iffalse
where the triangle-shape coefficients are
\begin{align}\label{dsjknskjdsk}
\begin{aligned}
B_{\ell_{1} \ell_{2}}^{L}(\Delta)=& \frac{1}{\sqrt{\left(2 \ell_{1}+1\right)\left(2 \ell_{2}+1\right)}}\,\left(\begin{array}{lll}
\ell_{1} & \ell_{2} & L \\
0 & 0 & 0
\end{array}\right)
\sum_{M} X_{\ell_{1} \ell_{2}}^{L M *}\left(\hat{\boldsymbol{r}}_{1}, \hat{\boldsymbol{r}}_{2}\right) Y_{L M}(\hat{\boldsymbol{r}})\,.
\end{aligned}
\end{align}
Here $\xi^{(n)}_L$ are given by \eqref{12} and
\begin{align}
\begin{aligned}
X_{\ell_{1}\ell_{2}}^{L M}\left(\hat{\boldsymbol{r}}_{1}, \hat{\boldsymbol{r}}_{2}\right)=&(-1)^{\ell_{1}-\ell_{2}-M} \sqrt{2 L+1}
\sum_{m_{1}, m_{2}}\left(\begin{array}{ccc}
\ell_{1} & \ell_{2} & L \\
m_{1} & m_{2} & -M
\end{array}\right) Y_{\ell_{1} m_{1}}\left(\hat{\boldsymbol{r}}_{1}\right) Y_{\ell_{2} m_{2}}\left(\hat{\boldsymbol{r}}_{2}\right) .
\end{aligned}
\end{align}
\fi
Here $\xi^{(n)}_L$ are given by \eqref{12}.
In order to explicitly evaluate the 2PCF \eqref{8} in terms of angles at the observer, we use spherical coordinates $(\varrho,\vartheta,\varphi)$ with $\bm d=(d,0,0)$ along the $z$-axis, and the triangle in the $y=0$ plane oriented such that $\bm r_1$ points in the negative $x$-direction (see \autoref{fig1}). This gives
\bea\label{rtp}
&& \hat{\bm r}_1=(1,\phi,\pi),~~ \hat{\bm r}_2=(1,\theta,0),~~ \hat{\bm r}=(1,\gamma,\pi)\,.
\eea
We also have the relations,
\bea
{r_1}=r\,\frac{\sin(\gamma+\theta)}{\sin(\theta+\phi)}=d\,\frac{\sin\gamma}{\sin(\gamma-\phi)}\,,~~~
{r_2}=r\,\frac{\sin(\gamma-\phi)}{\sin(\theta+\phi)}=d\,\frac{\sin\gamma}{\sin(\gamma+\theta)}
\,.
\eea
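These sine-rule relations can be verified numerically for a hypothetical configuration (the values of $r_1,\phi,r_2,\theta$ below are arbitrary, chosen only for illustration), working in the $y=0$ plane with $\bm d$ along the $z$-axis as in \eqref{rtp}:

```python
import numpy as np

# hypothetical triangle: galaxy 1 at distance r1, polar angle phi
# (azimuth pi), galaxy 2 at distance r2, polar angle theta (azimuth 0)
r1, phi = 2.0, 0.3
r2, theta = 1.5, 0.2

vec1 = np.array([-r1 * np.sin(phi), r1 * np.cos(phi)])    # (x, z) components
vec2 = np.array([r2 * np.sin(theta), r2 * np.cos(theta)])
vec_r = vec1 - vec2                                        # r = r_1 - r_2
r = np.linalg.norm(vec_r)
# r_hat = (-sin(gamma), cos(gamma)) fixes the opening angle gamma
gamma = np.arctan2(-vec_r[0], vec_r[1])
# distance along the z-axis to where it crosses the chord joining the pair
d = r1 * np.sin(gamma - phi) / np.sin(gamma)
```

With these definitions, all four sine-rule expressions for $r_1$ and $r_2$ agree to machine precision.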
Using the angles $\theta,\phi,\gamma\,(=\cos^{-1}\mu)$ and the separation $r$, the 2PCF can be expanded as
\be
\xi_{{g}}(\bm d,\bm r) =\xi_{{g}}(d,\theta,\phi,\mu,r)= b_1\,b_2\sum_{n,\ell'} c_{n\ell'}(d,\theta,\phi,\mu)\,\xi^{(n)}_{\ell'}(r)\,,
\ee
where the redshift dependence is implicit.
This expansion follows \cite{Matsubara:1999du}, which corrects typos in \cite{Szalay:1997cc} and generalises it to include galaxy bias, unequal redshifts and redshift evolution. In \cite{Matsubara:1999du}, the angular variables are $\theta+\phi$ and $\gamma_i=\cos^{-1}\hat{\bm r}_i\cdot \hat{\bm r}$, with a modified version of $\xi^{(n)}_\ell$. (A simplified and unified form of the expansions in \cite{Matsubara:1999du} is given in \cite{Bel:2022iuf}.)
The $c_{n\ell}$ and $\tilde c_{n\ell}$ coefficients are discussed in \autoref{app1}.
\CC{In the general case of $\theta\neq\phi$ and $\phi\neq0$, we find for the galaxy-galaxy correlations,
\begin{align}
c_{00} &= 1 + \frac{1}{3}(\beta_1+\beta_2) + \frac{1}{15}\beta_1\beta_2\big[2+\cos2(\phi+\theta)\big],
\\ c_{20} &= \frac{1}{3}\beta_1\beta_2\alpha_1\alpha_2 \cos(\theta+\phi),
\\ c_{11} &= \frac{1}{5} \alpha_1\Big\{5\beta_1\cos(\phi-\gamma)+\beta_1\beta_2\big[2\cos(\phi - \gamma) + \cos(2\theta+\phi+\gamma)\big]\Big\}
\notag\\
&~~- \frac{1}{5}\alpha_2\Big\{5\beta_2\cos(\theta + \gamma)+\beta_1\beta_2\big[2\cos(\theta+\gamma) + \cos(2\phi+\theta-\gamma)\big] \Big\},
\\
c_{02} &=-\frac{1}{6}\beta_1[3\cos(2\phi-2\gamma)+1]-\frac{1}{6}\beta_2[3\cos(2\gamma+2\theta)+1]
\notag\\
& -\frac{1}{42}\beta_1\beta_2[4 + 9\cos(2\gamma+2\theta) + 9\cos(2\phi-2\gamma) + 2\cos(2\theta+2\phi)]
,
\\ c_{22} &= -\frac{1}{6}\alpha_1\alpha_2\beta_1\beta_2\big[\cos(\theta+\phi)+3\cos(\phi-2\gamma - \theta ) \big],
\\ %
c_{13} &=\frac{1}{20} \beta_1 \beta_2 \Big\{-\alpha_1\big[\cos (2 \theta+\phi+\gamma)+5 \cos (\phi-3 \gamma-2 \theta)+2 \cos (\phi-\gamma)\big]
\nonumber\\&~~
+\alpha_2\big[\cos (2 \phi+\theta-\gamma)+5 \cos (2 \phi-3 \gamma-\theta)+2 \cos (\theta+\gamma)\big] \Big\},
\\
c_{04} &= \frac{1}{280}\beta_1\beta_2\big[6 +35\cos2(\phi - 2\gamma - \theta) +10 \cos2(\phi - \gamma)
\notag\\&~~
+ 10 \cos2(\theta+\gamma) + 3\cos2(\phi + \theta)\big].
\end{align}
For the coefficients of the galaxy-magnification 2PCF \eqref{8dop}, we define
\be
\xi_{{\kappa}}(\bm d,\bm r) =\xi_{{\kappa}}(d,\theta,\phi,\mu,r)= b_1f_2\tilde\alpha_2\sum_{n,\ell'} \tilde c_{n\ell'}(d,\theta,\phi,\mu)\,\xi^{(n)}_{\ell'}(r).
\ee
From this we find:
\begin{align}
{\tilde c}_{20} &=\frac{1}{3}\alpha_1\beta_1 \cos(\phi+\theta)\,,
\\
\tilde c_{11} &= -\frac{1}{5}\beta_1\Big[2\cos(\theta+\gamma) + \cos(2\phi+\theta - \gamma) \Big] - \cos(\theta + \gamma),
\\
\tilde c_{22} &= -\frac{1}{6}\alpha_1\beta_1 \Big[\cos(\theta+\phi) + 3\cos(\phi-2\gamma-\theta) \Big],
\\
\tilde c_{13} &= \frac{1}{20}\beta_1\Big[\cos(2\phi + \theta -\gamma) + 2\cos(\theta + \gamma) + 5\cos(2\phi -\theta - 3\gamma) \Big].
\end{align}
}
\iffalse
The expansion in $S^{n_1+n_2}_{\ell_1\ell_2}$ is significantly simplified by characterising the observer triangle via the separation distance and two angles \cite{Szalay:1997cc,Matsubara:1999du} -- $r,\theta,\gamma$ (see \autoref{fig1}), where
\bea
r = \big|\r_1-\r_2\big|=\Big(r_1^2+r_2^2-2r_1r_2\cos2\theta \Big)^{1/2},
\quad \cos\gamma = \hat{\bm{d}}\cdot\hat{\r} \equiv \mu. \label{5}
\eea
Here $\bm{d}$ is the displacement from the observer to $\r_{12}$ in the direction of the bisector of the angle at the observer.
Then
\bea
{\xi_{{g}}(r_{12},\theta,\mu)\over b_1b_2} &=&\sum_{n,\ell} c_{n\ell}(\theta,\mu)\,\xi^{(n)}_\ell(r_{12})
\notag\\
&=& c_{00} \xi^{(0)}_0 + c_{02} \xi^{(0)}_2 + c_{04} \xi^{(0)}_4
+ c_{11} \xi^{(1)}_1 +c_{13} \xi^{(1)}_3.
\label{10}
\eea
Note that the $n=2$ terms in \cite{Szalay:1997cc,Matsubara:1999du} have been omitted in \eqref{10} for consistency. These $n=2$ terms arise purely from the Doppler term squared, and scale as $(\H^2/k^2)P_{\rm m}$. If we include them, then we should include relativistic potential contributions that also scale as $(\H^2/k^2)P_{\rm m}$ (e.g. density $\times$ Sachs-Wolfe) \cite{Bertacca:2012tp}. We effectively adopt a `weak field' approximation, i.e., we neglect all terms of order $(\H^n/k^n)P_{\rm m}$ with $n>1$.
The $c_{n\ell}$ are given by \cite{Matsubara:1999du}, correcting typos in \cite{Szalay:1997cc} and generalising \cite{Szalay:1997cc} to include galaxy bias, unequal redshifts and redshift evolution, as well as different tracers. \red{Note that eq.~(3.45) of \cite{Matsubara:1999du} defines a dimensionless alternative $\tilde\xi^{(\tilde n)}_\ell$ to our $\xi^{(n)}_\ell$:
\be
\xi^{(n)}_\ell=(-1)^{\tilde n+\ell}\,r^{\ell-2\tilde n}\,\tilde\xi^{(\tilde n)}_\ell \quad \mbox{where}\quad \tilde n= {1\over2}(n+\ell)\,.
\ee
In addition, our $\theta$ is $\theta_{\rm h}$ in \cite{Matsubara:1999du}, and our $\gamma$ is $\gamma-\pi$ in \cite{Matsubara:1999du}.}
Here we re-arrange the expressions, rewriting $\cos\theta$ in terms of $\sin\theta$ and using \eqref{5} to rewrite $\cos\gamma, \sin\gamma$ in terms of $\mu$. This more clearly separates out the wide-angle contributions which are given by $\sin\theta$, where $\sin\theta=0$ is the plane-parallel (or flat-sky) approximation. It also uses the Legendre multipoles based on $\gamma$, which recover the standard multipoles in the flat-sky limit. We find that
\bea
c_{00} &=& 1+{1\over3}\big(\beta_1+\beta_2\big)+{1\over5}\beta_1\beta_2-{8\over15}\beta_1\beta_2\big(1- \sin^2\!\theta\big)\sin^2\!\theta,
\label{13}\\
c_{02} &=& -{2\over21}\big[7\big(\beta_1+\beta_2\big)+6\beta_1\beta_2\big]\big(1-2\sin^2\!\theta \big)\cl_2(\mu)
\notag\\&&{}
-{1\over21}\big[7\big(\beta_1+\beta_2\big) -2\beta_1\beta_2+8\beta_1\beta_2\sin^2\!\theta \big]\sin^2\!\theta
\notag\\&&{}
-2\big(\beta_1-\beta_2 \big)\sin\theta\,\sqrt{1-\sin^2\!\theta}\,\mu\sqrt{1-\mu^2} ,
\label{14}\\
c_{04} &=&{1\over105}\beta_1\beta_2\big[24\cl_4(\mu) -20\sin^2\!\theta\,\cl_2(\mu)+
\big(9\sin^2\!\theta -4 \big)\sin^2\!\theta\big],
\label{15}\\
c_{11} &=&{1\over5}\big[5(\beta_1\alpha_1-\beta_2\alpha_2)+\beta_1 \beta_2\big(\alpha_1-\alpha_2 \big)\big(\red{3-4\sin^2\!\theta}\big)\big]\sqrt{1-\sin^2\!\theta}~\cl_1(\mu)
\notag\\&&{}
\red{+{1\over5}} \big[5(\beta_1\alpha_1+\beta_2\alpha_2)+\beta_1 \beta_2\big(\alpha_1+\alpha_2 \big)\big(4\sin^2\!\theta-1\big)\big]\sqrt{1-\mu^2}\,\sin\theta ,
\label{16}\\
c_{13} &=&\red{{1\over15}\beta_1\beta_2}\Big\{ 3 (\alpha_1-\alpha_2)\red{\sqrt{1-\sin^2\!\theta}} \big[ \sin^2\!\theta\,\cl_1(\mu)-2\cl_3(\mu) \big]
\notag\\&&{}
\red{+}(\alpha_1+\alpha_2)\sin\theta\,\red{\sqrt{1-\mu^2}} \big[2- 3\sin^2\!\theta +10\cl_2(\mu)\big] \Big\}.
\label{17}
\eea
In \eqref{13}--\eqref{17}, the plane parallel limit is given by $\sin\theta=0$.
In $c_{11}$ and $c_{13}$, there are wide-angle corrections from an infinite series of even multipoles, due to the presence of $\sin\gamma=\sqrt{1-\mu^2}$, while \red{$c_{02}$} contains an infinite series of odd multipoles from $\sin\gamma\cos\gamma=\mu \sqrt{1-\mu^2}$, when $\beta_1\neq \beta_2$. We can compute the first few multipoles in these infinite series as follows:
\red{
\bea
\sqrt{1-\mu^2} =\sum_\ell s_\ell \,\cl_\ell(\mu),
\label{18}
\eea
where
\bea
s_\ell &=& \Big(\ell+{1\over2} \Big)\int_{-1}^1\ud\mu\,\sqrt{1-\mu^2}\,\cl_\ell(\mu)
\notag \\
&=&{\pi\over4}(2\ell+1) \Big(1,\, 0,\, -{1\over8},\,0,\,- {1\over64},\,\cdots \Big),
\label{21}
\eea
and therefore}
\bea
\sqrt{1-\mu^2} ={\pi\over 256}\,\Big[64-40\cl_2(\mu)-9 \cl_4(\mu) + \cdots\Big].
\label{22}
\eea
Typically only multipoles up to $\ell=4$ are included, and we follow this approximation.
Similarly,
\bea
\mu\sqrt{1-\mu^2} ={\pi\over 64}\,\Big[12\cl_1(\mu)-7 \cl_3(\mu) + \cdots\Big].
\label{32}
\eea
We can now rewrite \eqref{10} in the form
\bea
\xi_{{g}}(r_{12},\sin\theta,\mu)=\sum_\ell \xi_\ell(r_{12},\sin\theta) \,\cl_\ell(\mu), \label{23}
\eea
where $\xi_\ell$ can be thought of as the multipoles of the 2PCF [THIS ISN'T CORRECT]. In each case the first line on the right is the plane-parallel limit \red{[double check]}
\begin{align}
{\xi_0\over b_1b_2}=&{1\over 15}\big[15+5\big(\beta_1+\beta_2\big)+3\beta_1\beta_2\big]\, \xi^{(0)}_0
\notag\\ &
-{8\over15}\big[\beta_1\beta_2\big(1- \sin^2\!\theta\big)\big] \sin^2\!\theta\,\xi^{(0)}_0
- {1\over21}\big[7\big(\beta_1+\beta_2\big) -2\beta_1\beta_2+8\beta_1\beta_2\sin^2\!\theta \big]\sin^2\!\theta\, \xi^{(0)}_2
\notag\\ &
\red{+} {1\over105}\big[\beta_1\beta_2
\big(9\sin^2\!\theta -4 \big)\big]\sin^2\!\theta\, \xi^{(0)}_4
\notag\\ &
\red{+} {\pi\over20}\big[5(\beta_1\alpha_1+\beta_2\alpha_2)+\beta_1 \beta_2\big(\alpha_1+\alpha_2 \big)\big(4\sin^2\!\theta-1\big)\big]\sin\theta\,\xi^{(1)}_1
\notag\\ &
+\red{{\pi\over60}}\Big[\beta_1\beta_2(\alpha_1+\alpha_2) \red{\big(2- 3\sin^2\!\theta\big)}\Big] \sin\theta\, \xi^{(1)}_3\,,
\label{33} \\ %
{\xi_1\over b_1b_2}=& {1\over5}\Big[5(\beta_1\alpha_1 -\beta_2\alpha_2)\red{+3} \beta_1 \beta_2\big(\alpha_1-\alpha_2 \big)\Big]\xi^{(1)}_1
\notag\\ &
\red{+} {1\over5}\bigg\{\red{\Big[5(\beta_1\alpha_1 -\beta_2\alpha_2)+3} \beta_1 \beta_2\big(\alpha_1-\alpha_2 \big)\red{\Big]\Big(\sqrt{1-\sin^2\!\theta}-1\Big)}
\notag\\ &
\red{-
4\beta_1\beta_2\big(\alpha_1-\alpha_2 \big)}\sin^2\!\theta\,\sqrt{1-\sin^2\!\theta}\,\Big]\bigg\}\xi^{(1)}_1
\notag\\ &
-{3\pi\over8}\Big[\big(\beta_1-\beta_2 \big)\sqrt{1-\sin^2\!\theta}\,\Big]\sin\theta\,\xi^{(0)}_2
\notag\\ &
+{1\over5}\Big[\beta_1\beta_2(\alpha_1-\alpha_2)\sqrt{1-\sin^2\!\theta} \, \Big]\sin^2\!\theta\, \xi^{(1)}_3\,,
\label{34} \\ %
{\xi_2\over b_1b_2}=& -{2\over21}\Big[7\big(\beta_1+\beta_2\big)+6\beta_1\beta_2\Big]\, \xi^{(0)}_2
\notag\\ &
+{4\over21}\Big[7\big(\beta_1+\beta_2\big)+6\beta_1\beta_2\Big]\sin^2\!\theta \, \xi^{(0)}_2
- {4\over\red{21}}\beta_1\beta_2\,\sin^2\!\theta\, \xi^{(0)}_4
\notag\\ &
-{\pi\over\red{32}}\Big[5(\beta_1\alpha_1+\beta_2\alpha_2)+\beta_1 \beta_2\big(\alpha_1+\alpha_2 \big)\big(4\sin^2\!\theta-1\big)\Big]\sin\theta\, \xi^{(1)}_1
\notag\\ &
+\red{{\pi\over96}}\beta_1\beta_2(\alpha_1+\alpha_2)\red{\big(14+3\sin^2\!\theta \big)}\,
\sin\theta\, \xi^{(1)}_3\,,
\label{35}\\ %
{\xi_3\over b_1b_2}=& -{2\over5}\Big[\beta_1\beta_2(\alpha_1-\alpha_2)\Big]\xi^{(1)}_3
\notag\\ &
-{2\over5}\Big\{\beta_1\beta_2(\alpha_1-\alpha_2)\Big[\sqrt{1-\sin^2\!\theta}-1\Big]\Big\}\xi^{(1)}_3
\notag\\ &
+{7\pi\over32}\Big[\big(\beta_1-\beta_2 \big)\sqrt{1-\sin^2\!\theta}\,\Big] \sin\theta\,\xi^{(0)}_2
\,,
\label{36}\\ %
{\xi_4\over b_1b_2}=& {8\over35} \beta_1\beta_2\,\xi^{(0)}_4
\notag\\ &
\red{-} {9\pi\over1280}\Big[5(\beta_1\alpha_1+\beta_2\alpha_2)+\beta_1 \beta_2\big(\alpha_1+\alpha_2 \big)\big(4\sin^2\!\theta-1\big)\Big]\sin\theta \,\xi^{(1)}_1
\notag\\ &
\red{- {27\pi\over2240}\beta_1 \beta_2\big(\alpha_1+\alpha_2 \big)\,\sin\theta \,\xi^{(1)}_3}
\,.
\label{37}
\end{align}
Note that:\\
(a) \red{In $c_{13}$ in \eqref{17}, there is a product of two $\cl_2$ (one in $\sqrt{1-\mu^2}$), and we used the identity $(\cl_2)^2= (18/35)\cl_4+(2/7)\cl_2+1/5$.}
\\
(b) The hexadecapole $\xi_4$ is not independent of bias beyond the p-p limit.
\\
(c) At equal redshifts and for a single tracer, the dipole and octupole vanish.\\
(d) In the absence of the Doppler contribution, i.e. $\alpha_i=0$, there is still a dipole and octupole in the single-tracer unequal-redshift case and in the 2-tracer case ($\beta_1\neq \beta_2$ for both cases), due to wide-angle leakage of power from the quadrupole.
\red{[Suggestion: plot $\xi_{0,2}$ for equal $z$ and for $\sin\theta=0,0.1, 1/\sqrt{2},0.9$, for DESI BGS ($z=0.2$) and Euclid ($z=1$). The biases are in 2107.14057, p6. Instead of using $s=2\Q/5, b_e$ to get $\alpha$, we can guess values of $A_D\equiv - \alpha/\H$ from Fig. 4. $f=\Omega_m(z)^{0.55}$]}
\blue{More generally we can assume a triangle configuration with $\theta\neq\phi$ in which case the coefficients $c_{n\ell}(\theta)\to c_{n\ell}(\bm \theta)$. See Appendix...}
\section{Evaluating the power spectrum}
We define the wide-angle power spectrum at a displacement $\bm{d}$ from the observer by Fourier transforming the redshift-space 2PCF over $\bm{r}_{12}$ (see \autoref{fig1}):
\be
P_{{g}}(\bm d, \bm k)=\int \ud^3r \,{\rm e}^{-\I\,\k\cdot\r}\,\xi^{\red{s}}(\bm d,\bm r)
\quad\mbox{where}\quad \bm r \equiv \bm{r}_{12}\,.
\ee
To recap, we have expanded the 2-point correlation function, with $\mu=\red{\hat{\bm d}\cdot\hat{\bm r}}$, as
\be
\xi_{{g}}(\bm d,\bm r) = b_1b_2\sum_{n,\ell'} c_{n\ell'}(\bm\theta,\mu)\,\xi^{(n)}_{\ell'}(r_{12})\approx\sum_{\ell'} \xi_{\ell'}(r,\bm\theta) \,\cl_{\ell'}(\mu),
\ee
where \red{[changed $\tilde c \to C$]}
\be
\xi_\ell(r,\bm\theta) = \sum_{\ell',n} \, C_{n\ell'\ell}(\bm\theta) \,\xi^{(n)}_{\ell'}(r)\,.
\ee
The coefficients $C_{n\ell'\ell}$ \red{may be read off from \eqref{33}--\eqref{37} and} are given in \autoref{app1}.
At this stage note that the coefficients $\xi_\ell(r,\bm\theta)$ are \emph{not} multipoles of the correlation function because $\bm\theta$ depends on $\mu$, and the expression has been approximated to include terms only up to $\ell=4$.
The power spectrum can be expanded in multipoles as
\be
P_{{g}}(\bm d, \bm k) = \sum_\ell \mathcal{P}_\ell(k,d)\, \cl_\ell(\mu_k)
\quad\mbox{where}\quad {\mu_k=\hat{\bm d}\cdot\hat{\bm k}}\,.
\ee
As with the 2PCF, the multipoles $\mathcal{P}_\ell(k,d)$ can in general only be evaluated numerically, but we can find the coefficients of the series expansion in $1/(kd)^p$ as follows.
\fi
\CC{Using these,} the power spectrum may be written as
\begin{align}
P_{{g}}(\bm d, \bm k) & = b_1\,b_2\sum_{n=0}\sum_{\ell'=0}\int \ud^3r \,{\rm e}^{-\I\,\k\cdot\r}\, c_{n\ell'}({d},\theta,\phi,\mu)\,\xi^{(n)}_{\ell'}(r)
\notag \\
&= b_1\,b_2\sum_{n,\ell,\ell'}\,\I^{-\ell} (2\ell+1)
\int \ud r \,r^2\,j_\ell(kr)\, \xi^{(n)}_{\ell'}(r)
\int {\ud\Omega_{{\bm r}}} \, c_{n\ell'}({d},\theta,\phi,\mu)\,
\cl_\ell(\hat\k\cdot\hat\r)\,,
\end{align}
\CC{with a similar formula for the galaxy-magnification power spectrum.}
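The angular reduction in the second line uses the plane-wave (Rayleigh) expansion $e^{-\I\,\k\cdot\r}=\sum_\ell \I^{-\ell}(2\ell+1)\,j_\ell(kr)\,\cl_\ell(\hat\k\cdot\hat\r)$, which is easily checked numerically (a minimal sketch; the truncation $\ell\le 40$ is our choice):

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre

def plane_wave_partial_sum(kr, cos_gamma, lmax=40):
    """Partial sum of exp(-i k.r) = sum_l i^{-l} (2l+1) j_l(kr) P_l(khat.rhat)."""
    ls = np.arange(lmax + 1)
    terms = ((-1j)**ls * (2*ls + 1) * spherical_jn(ls, kr)
             * eval_legendre(ls, cos_gamma))
    return terms.sum()

kr, cg = 3.0, 0.4
# the partial sum should agree with the exact exponential to high accuracy
print(abs(plane_wave_partial_sum(kr, cg) - np.exp(-1j*kr*cg)))
```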
At this stage we need the series expansion in $r/d$. Writing
\be
b_1b_2c_{n\ell'}({d},\theta,\phi,\mu) = \sum_{p=0}^\infty c_{n\ell'}^{(p)}(\mu)\left(\frac{r}{d}\right)^p\,,
\ee
the coefficients $c_{n\ell'}^{(p)}$ can be expanded in Legendre polynomials. Then comparing with \eqref{djschsbdisuhdishd}, we see that
\be
\Xi^{(p,n)}_{\ell\ell'}(d)=\Big(\ell+\frac{1}{2}\Big)\int\ud\mu\, c_{n\ell'}^{(p)}(\mu)\,\cl_\ell(\mu)\,.
\ee
Therefore
\be
b_1b_2\,c_{n\ell'}({d},\theta,\phi,\mu) = \sum_{p,\ell''} \Xi^{(p,n)}_{\ell''\ell'}(d)\,\cl_{\ell''}(\mu)\left(\frac{r}{d}\right)^p\,,
\ee
which leads to
\begin{align}\label{sjdkncskdjnscksnss}
P_{{g}}(\bm d, \bm k) &= \sum_{p,n,\ell,\ell',\ell''}\I^{-\ell}\, (2\ell+1)\,
\Xi^{(p,n)}_{\ell''\ell'}
\int \ud r\,r^2 \,j_\ell(kr)\, \xi^{(n)}_{\ell'}(r)\left(\frac{r}{d}\right)^p
\int {\ud\Omega_{{\bm r}}} \, \cl_{\ell''}(\hat{\bm d}\cdot\hat\r)\,
\cl_\ell(\hat\k\cdot\hat\r)
\notag\\
&=4\pi\sum_{p,n,\ell,\ell'}\I^{-\ell}\,
\Xi^{(p,n)}_{\ell\ell'}\,\cl_\ell(\hat{\bm d}\cdot\hat{\bm k})
\int \ud r \, r^2 \,j_\ell(kr)\, \xi^{(n)}_{\ell'}(r)\left(\frac{r}{d}\right)^p\,.
\end{align}
It follows that
\begin{align}
\mathcal{P}_\ell^{(p)}(k,d) &=4\pi k^p\sum_{n,\ell'}\I^{-\ell}
\Xi^{(p,n)}_{\ell\ell'}
\int r^{2+p}\ud r \,j_\ell(kr)\, \xi^{(n)}_{\ell'}(r)\,,
\end{align}
which recovers \eqref{cppl} and \eqref{pnpll}:
\begin{align}
\mathcal{P}^{(p)}_\ell
&=\frac{2}{\pi\I^{\ell}}\sum_{\ell', n}{\Xi_{\ell\ell'}^{(p,n)}(d)}P^{pn}_{\ell\ell'}(k)\,,
\qquad
P^{pn}_{\ell\ell'}(k)=k^p\int_0^\infty {\ud q}\, q^{2-n}\,P_{\rm m}(q)\, \mathcal{I}^p_{\ell\ell'}(k,q)\,.
\end{align}
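The key angular step in \eqref{sjdkncskdjnscksnss}, $\int\ud\Omega_{\bm r}\,\cl_{\ell''}(\hat{\bm d}\cdot\hat\r)\,\cl_\ell(\hat\k\cdot\hat\r)=[4\pi/(2\ell+1)]\,\delta_{\ell\ell''}\,\cl_\ell(\hat{\bm d}\cdot\hat\k)$, can be checked by brute-force quadrature (a sketch; the angle $\gamma$ between $\hat{\bm d}$ and $\hat\k$ is arbitrary):

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import eval_legendre

def sphere_integral(l, lpp, gamma):
    """Integrate P_lpp(dhat.rhat) P_l(khat.rhat) over the unit sphere,
    with dhat along the z-axis and khat at angle gamma from dhat."""
    khat = np.array([np.sin(gamma), 0.0, np.cos(gamma)])
    def integrand(theta, phi):
        rhat = np.array([np.sin(theta)*np.cos(phi),
                         np.sin(theta)*np.sin(phi),
                         np.cos(theta)])
        return (eval_legendre(lpp, rhat[2])
                * eval_legendre(l, float(khat @ rhat)) * np.sin(theta))
    val, _ = dblquad(integrand, 0.0, 2*np.pi, 0.0, np.pi)
    return val

gamma = 0.7
val_22 = sphere_integral(2, 2, gamma)  # expect 4 pi/5 * P_2(cos gamma)
val_24 = sphere_integral(2, 4, gamma)  # expect 0 by orthogonality
print(val_22, 4*np.pi/5*eval_legendre(2, np.cos(gamma)), val_24)
```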
In order to compute the Legendre multipoles in $\mu_k$ of the power spectrum,
\be
\mathcal{P}_\ell(k,d)=\sum_{p=0}^\infty \frac{\mathcal{P}^{(p)}_\ell}{(kd)^p}\,,
\ee
we need to compute:
\begin{enumerate}
\item the coefficients $\Xi_{\ell\ell'}^{(p,n)}$, i.e.\ the Legendre multipoles in $\mu$ of the Taylor coefficients $c^{(p)}_{n\ell'}$ of the 2PCF coefficients appearing in \eqref{cppl};
\item the integrals $\mathcal{I}^p_{\ell\ell'}(k,q)$ as distributions in $k,q$;
\item the power spectrum multipole `weights' $P^{pn}_{\ell\ell'}(k)$.
\end{enumerate}
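Step 1 is a straightforward Legendre projection of each Taylor coefficient. A minimal numerical sketch (the function name is ours; the $(\ell+\tfrac12)$ normalisation is the one needed to reconstruct $c^{(p)}(\mu)$ as a sum of $\Xi_\ell\,\cl_\ell(\mu)$):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def legendre_multipoles(c, lmax, npts=64):
    """Return (l + 1/2) * int_{-1}^{1} c(mu) P_l(mu) dmu for l = 0..lmax,
    computed by Gauss-Legendre quadrature (exact for polynomial c)."""
    mu, w = leggauss(npts)
    return np.array([(l + 0.5)*np.sum(w*c(mu)*Legendre.basis(l)(mu))
                     for l in range(lmax + 1)])

# example: mu^2 = (1/3) P_0 + (2/3) P_2
print(legendre_multipoles(lambda mu: mu**2, 4))
```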
\CC{For the galaxy-magnification case, the formulas are the same but with
\be
b_1f_2\tilde\alpha_2 \tilde c_{n\ell'}({d},\theta,\phi,\mu) = \sum_{p=0}^\infty \tilde c_{n\ell'}^{(p)}(\mu)\left(\frac{r}{d}\right)^p\,,
\ee
and the rest of the derivation is the same but with tilde's on relevant variables. }
Before we implement this computation, we briefly discuss the effects of a window function.
\subsection{Window function}
A careful inspection of the terms in the functions $c_{n\ell'}$ indicates that the nonzero $\Xi_{\ell\ell'}^{(p,n)}$ always have $|\ell-\ell'|+p$ even, which as we will see below implies that the distributional integrals $\mathcal{I}^p_{\ell\ell'}(k,q)$ are combinations of delta functions, step functions and derivatives thereof. This in turn implies that $P^{pn}_{\ell\ell'}(k)$ can be found relatively easily. This is no longer the case when a window function is involved, as we now illustrate (see \cite{Castorina:2017inr,Beutler:2018vpe,Beutler:2021eqq} for more details on window functions).
We can incorporate a window function $w(\bm r_i)$ into the power spectrum:
\be
\hat P_{{g}}(\bm d, \bm k)=\int \ud^3 \bm r \,{\rm e}^{-\I\,\k\cdot\r}\,W(\bm d,\bm r)\,\xi_{{g}}(\bm d,\bm r)\quad\mbox{where} \quad
W(\bm d,\bm r) = w(\bm r_1)w(\bm r_2)\,.
\ee
Note that the Yamamoto estimator \cite{Yamamoto:2005dz} is then
\be
\big\langle\hat P_L^{{s}}\big\rangle = (2 L+1) \int \frac{\mathrm{d} \Omega_{{\bm k}}}{4 \pi} \int \mathrm{d}^{3}\bm r_{1} \int\,\mathrm{d}^{3}\bm r_{2}\,
\hat P_{{g}}\,\mathcal{L}_L(\hat{\bm k}\cdot\hat{\bm d}).
\ee
For simplicity we assume azimuthal symmetry for $W$ and expand it as
\be
W(\bm d,\bm r) = \sum_{p',L} W^{(p')}_L(d)\left(\frac{r}{d}\right)^{p'} \mathcal{L}_L(\mu)\,.
\ee
Then in \eqref{sjdkncskdjnscksnss} we use the identity
\be\label{dshjbcsjda}
\cl_{\ell_1}(\mu)\,\cl_{\ell_2}(\mu)=\sum_{\ell}(2\ell+1)
\left(\begin{array}{lll}
\ell_{1} & \ell_{2} & \ell \\
0 & 0 & 0
\end{array}\right)^2 \cl_{\ell}(\mu)\,,
\ee
which leads to
\begin{align}
\hat P_{{g}}(\bm d, \bm k)
&=4\pi\sum_{p,p',n,\ell,\ell'\ell''L}\I^{-\ell}(2\ell+1) \left(\begin{array}{lll}
\ell & \ell'' & L \\
0 & 0 & 0
\end{array}\right)^2
\Xi^{(p,n)}_{\ell''\ell'}\, W^{(p')}_L(d)\, \cl_\ell(\hat\k\cdot\hat{\bm d})
\notag \\ &~~~\times
\int r^2\ud r \,j_\ell(kr)\, \xi^{(n)}_{\ell'}(r)\left(\frac{r}{d}\right)^{p+p'}\,.
\end{align}
Therefore we have for the wide-angle multipoles of the windowed power spectrum:
\begin{align}\label{dsjbcscskndc}
\hat{\mathcal{P}}^{(p)}_\ell(k,d)
&=\frac{2}{\pi\I^{\ell}}\sum_{p',n,\ell',\ell'',L }
(2\ell+1) \left(\begin{array}{lll}
\ell & \ell'' & L \\
0 & 0 & 0
\end{array}\right)^2
{\Xi_{\ell''\ell'}^{(p,n)}(d)}\,W^{(p')}_L(d)\,P^{n,p+p'}_{\ell\ell'}(k)\,.
\end{align}
Note that we recover the previous results on using
\be
\left(\begin{array}{lll}
\ell & \ell'' & 0 \\
0 & 0 & 0
\end{array}\right) = \frac{(-1)^\ell}{\sqrt{2\ell+1}}\,\delta_{\ell\ell''}\,.
\ee
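Both the product identity \eqref{dshjbcsjda} and this special value can be verified exactly with SymPy's \texttt{wigner\_3j}; a short symbolic check:

```python
from sympy import Rational, Symbol, integrate, legendre, sqrt, simplify
from sympy.physics.wigner import wigner_3j

mu = Symbol('mu')

# product identity: P_{l1} P_{l2} = sum_l (2l+1) [3j(l1,l2,l;0,0,0)]^2 P_l
l1, l2 = 2, 3
for l in range(abs(l1 - l2), l1 + l2 + 1):
    lhs = Rational(2*l + 1, 2)*integrate(
        legendre(l1, mu)*legendre(l2, mu)*legendre(l, mu), (mu, -1, 1))
    rhs = (2*l + 1)*wigner_3j(l1, l2, l, 0, 0, 0)**2
    assert simplify(lhs - rhs) == 0

# special value recovering the no-window limit:
# 3j(l, l'', 0; 0,0,0) = (-1)^l delta_{l l''} / sqrt(2l+1)
for l in range(5):
    assert simplify(wigner_3j(l, l, 0, 0, 0, 0) - (-1)**l/sqrt(2*l + 1)) == 0
```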
In \eqref{dsjbcscskndc}, because there is no restriction on $L$, we see that there is no longer a restriction on $|\ell-\ell'|+p$ being even in the resulting integrals.
\subsection{Computation of $\Xi_{\ell\ell'}^{(p,n)}$ \CC{and $\tilde\Xi_{\ell\ell'}^{(p,n)}$}}
In order to compute $\Xi_{\ell\ell'}^{(p,n)}$ and $\tilde\Xi_{\ell\ell'}^{(p,n)}$ we need to compute a series expansion in $x=r/d$ of functions of $r_1,r_2$ and $\theta,\phi$.
From the geometry in \autoref{fig1} we have
\be\label{skjncskjdbvf}
r_1=d \sqrt{t^{2} x^{2}+2 \mu x t+1},~~ r_2=d \sqrt{(t-1)^{2} x^{2}+2 \mu(t-1) x+1}\,,~~x=\frac{r}{d}\,,~~0\le t\le 1\,.
\ee
This implies
\begin{align}
{ \cos\theta }&=\frac{1-(1-t) x \mu}{\sqrt{(1-t)^{2} x^{2}+1-2(1-t) x \mu}}\approx 1-\frac{(1-t)^2}{3}\big[1-{\cl_{2}(\mu)}\big] x^{2}\,, \\
{ \cos\phi }&=\frac{x \mu t+1}{\sqrt{x^{2} t^{2}+2 x \mu t+1}}\approx 1-\frac{t^2}{3}\big[1-{\cl_{2}(\mu)}\big] x^{2}\,.
\end{align}
In the `bisector' case, where $\theta=\phi$, we have %
\be
t=\frac{x\mu+y}{2x\mu},~~~y=\sqrt{x^{2} \mu^{2}+1}-1,
\ee
giving\footnote{\CC{Note that truncating the Legendre expansion in $\sin\theta\approx\displaystyle
{\pi}\big[64-40\cl_{2}(\mu)-9\cl_{4}(\mu)+\cdots\big] x/{512}$ means that $\sin^2\theta+\cos^2\theta\neq1$ at each order in $x$. However, in all the expressions for $c_{n\ell}$ only $\cos(m\theta+n\gamma)$, with $m,n$ integers, appears, which means that we only have factors of $\sin\theta\sin\gamma\approx\frac{1}{2}(1-\mu^2)x$, and no truncation of the Legendre series is necessary. }}
\begin{align}
\sin\theta&=\sqrt{\frac{y(1-\mu^{2})}{2 \mu^{2}+y}}\approx\frac{1}{2}\sqrt{1-\mu^2}x,\\
\cos\theta&=\sqrt{\frac{\mu^{2}(y+2)}{2 \mu^{2}+y}}\approx 1-\frac{1}{12}\big[1-{\cl_{2}(\mu)}\big] x^{2}.
\end{align}
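The bisector expansions above can be verified symbolically; a short SymPy check of the $\cos\theta$ series (using $1-\cl_2(\mu)=\tfrac32(1-\mu^2)$):

```python
from sympy import symbols, sqrt, legendre, series, simplify

x, mu = symbols('x mu', positive=True)
y = sqrt(x**2*mu**2 + 1) - 1

# exact bisector cos(theta) and its claimed small-x expansion
cos_theta = sqrt(mu**2*(y + 2)/(2*mu**2 + y))
approx = 1 - (1 - legendre(2, mu))/12 * x**2

assert simplify(series(cos_theta, x, 0, 3).removeO() - approx) == 0
print("bisector cos(theta) expansion verified")
```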
For the general case (any $t$), we have
\be\label{cjdhsbdjsbdjsbc}
f(r_1)\approx f(d)+f'(d)\cl_1(\mu)\,tx+\frac{1}{6}\Big\{f'(d)+f''(d)-2\big[2f'(d)-f''(d)\big]\cl_2(\mu)\Big\} t^2x^2+\cdots\,,
\ee
where $'=\ud/\ud\ln d$. For a function of $r_2$, we replace $t\to(t-1)$. In the bisector case:
\be
f(r_1)\approx f(d)+\frac{1}{2}f'(d)\cl_1(\mu)x+\frac{1}{24}\left[3f'(d)+f''(d)-2f''(d)\,\cl_2(\mu)\right] \,x^2+\cdots\,,
\ee
and for $r_2$ we replace $x\to -x$.
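The general-$t$ expansion \eqref{cjdhsbdjsbdjsbc} can be checked symbolically on the test family $f(r)=r^a$ (our illustrative choice), for which the logarithmic derivatives are simply $f'=af$ and $f''=a^2f$, so it suffices to expand $f(r_1)/f(d)=(r_1/d)^a$:

```python
from sympy import symbols, legendre, series, simplify, Rational

x, t, mu, a = symbols('x t mu a', positive=True)

# r_1/d from the triangle geometry, raised to the power a
ratio = (t**2*x**2 + 2*mu*x*t + 1)**Rational(1, 2)
expansion = series(ratio**a, x, 0, 3).removeO()

# the claimed expansion with f'/f = a and f''/f = a**2
claimed = (1 + a*legendre(1, mu)*t*x
           + Rational(1, 6)*(a + a**2 - 2*(2*a - a**2)*legendre(2, mu))*t**2*x**2)
assert simplify(expansion - claimed) == 0
print("general-t expansion verified for f(r) = r**a")
```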
Inserting these into the coefficients $c_{n\ell'}(d,\theta,\phi,\mu)$, we expand as a Taylor series. In order to extract the multipoles from products of Legendre polynomials, it is convenient to use \eqref{dshjbcsjda}.
\subsubsection{Hierarchy of terms}
We briefly give an overview of the relative size of the contributions as we include wide-angle effects.
We are expanding in a series in $x=r/d$, corresponding to $1/kd$ in the power spectrum. As discussed above, once wide-angle effects are included in this way, a number of extra contributions must be kept for consistency. At distances corresponding to $z\sim1$ we have $d\sim 1/\cal H$, so that $1/kd\sim {\cal H}/k$ and the relativistic terms are also required.
Therefore,
for a fully consistent approach, we must organise the expansion by the combined power $p+n$, counting terms
\be
\left(\frac{r}{d}\right)^p\left(\frac{\cH}{k}\right)^n\sim \left(\frac{1}{kd}\right)^p\left(\frac{\cH}{k}\right)^n\sim \left(\frac{\cH}{k}\right)^{p+n}
\ee
together.
\CC{Furthermore, we include derivatives of the growth rate, biases, and other variables,
since these derivatives are typically not negligible.
Consider the $\ln d$ derivative of the growth rate:
\begin{align}
x f'(d)=r\frac{{\d}f}{{\d} d} = {rH_0h(z)} \frac{{\d}f}{{\d}z}\,,
\end{align}
where $h(z)=H(z)/H_0$ and $d=\int\d z/H$. This can be compared to
\begin{align}
x f(d)=\frac{r}{d}f(z) = \frac{rH_0}{\int_0^z {\d}z/h(z)} f(z)\,,
\end{align}
leading to
\begin{align}
\frac{f'(d)}{f(d)}
= \left[h(z)\int_0^z\frac{{\d}z}{h(z)}\right] {\frac{f'(z)}{f(z)}}
\approx z\, {\frac{f'(z)}{f(z)}}\,.
\end{align}
The term in square brackets is found numerically to be $\approx z$. Therefore for variables whose derivative with respect to redshift is of the order of the variable itself, the derivatives appearing in the wide-angle expansion can be important.
In the case of the growth rate,
\be
\frac{{\d}f}{{\d}z}=\frac{f}{1+z}\bigg[ 2+f-\frac{3}{2}\Omega_m\Big(1+\frac{1}{f} \Big)\bigg]\,,
\ee
where the right-hand side is of a similar size to $f$~-- for a $\Lambda$CDM model we find the contribution from the derivative of the growth rate peaks at $\sim$25\% at $z\sim0.5$. Similarly, for a simple bias model $b\sim (1+z)^\sigma$, we have $b'(z)\sim \sigma\,z(1+z)^{\sigma-1}$, so $b'(z)/b=\sigma\,z/(1+z)$. For $\alpha$ it is more complicated, because the relative importance of $\alpha'$ depends strongly on the evolution and magnification biases. As an example, if $b_e=1=\mathcal{Q}$, then the correction peaks at $\sim50\%$ above $z\sim 1$ and can be more significant than this even at low redshift.
}
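As a rough numerical check of the bracketed factor, assuming a flat $\Lambda$CDM background with $\Omega_m=0.31$ (an illustrative value, not taken from the text):

```python
import numpy as np
from scipy.integrate import quad

Om = 0.31  # assumed flat LambdaCDM matter density (illustrative)

def h(z):
    """Dimensionless expansion rate H(z)/H0."""
    return np.sqrt(Om*(1 + z)**3 + 1 - Om)

def bracket(z):
    """The factor h(z) * int_0^z dz'/h(z') multiplying f'(z)/f(z)."""
    integral, _ = quad(lambda zp: 1.0/h(zp), 0.0, z)
    return h(z)*integral

for z in (0.25, 0.5, 1.0, 2.0):
    print(f"z = {z:4.2f}:  bracket = {bracket(z):.3f}")
```

For this cosmology the factor tracks $z$ at low redshift and stays within a factor of two of $z$ out to $z\sim2$, consistent with the order-of-magnitude estimate above.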
We now discuss the contributions to the most important multipoles in the cases of the line of sight being chosen either as the mid-point ($t=1/2$) or the equal angle bisector ($\phi=\theta$).
We give general formulas for any $t$ in \autoref{kdjsncskcnskjdn}.
\subsubsection{Contributions to galaxy-galaxy multipoles}
\noindent \textbf{\textit{Contributions to the monopole}}\\
\noindent At $O(x^0)$, i.e., the plane-parallel limit, we have
\begin{align}
\Xi^{(0,0)}_{00} &= b_1b_2+ \frac{1}{3}f(b_1+b_2)+\frac{1}{5}f^2\,,
\\
\Xi^{(0,2)}_{00} &= \frac{1}{3}\alpha_1\alpha_2f^2 \,.
\end{align}
Note that all terms are evaluated at position $d$, as is the case with all similar formulas below.
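These plane-parallel values (and the quadrupole, hexadecapole and Doppler-squared coefficients quoted elsewhere in this section) are, up to the alternating sign carried by the $\I^{-\ell}$ prefactor, the Legendre multipoles of the Kaiser and squared-Doppler kernels. A quick symbolic check with SymPy:

```python
from sympy import symbols, legendre, integrate, Rational, expand

mu, f, b1, b2, a1, a2 = symbols('mu f b1 b2 alpha1 alpha2')

def multipole(kernel, l):
    """(2l+1)/2 * int_{-1}^{1} kernel(mu) P_l(mu) dmu."""
    return expand(Rational(2*l + 1, 2)
                  * integrate(kernel*legendre(l, mu), (mu, -1, 1)))

kaiser = (b1 + f*mu**2)*(b2 + f*mu**2)   # plane-parallel Kaiser kernel
assert multipole(kaiser, 0) == expand(b1*b2 + f*(b1 + b2)/3 + f**2/5)
assert multipole(kaiser, 2) == expand(Rational(2, 21)*f*(7*(b1 + b2) + 6*f))
assert multipole(kaiser, 4) == Rational(8, 35)*f**2

dopp = a1*a2*f**2*mu**2                  # squared-Doppler kernel, order (H/k)^2
assert multipole(dopp, 0) == a1*a2*f**2/3
assert multipole(dopp, 2) == Rational(2, 3)*a1*a2*f**2
```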
At $O(x^1)$ there are only contributions from $\ell'=1$. In the case of the bisector and midpoint ($t=1/2$) geometries:
\begin{align}
\Xi^{(1,1)}_{01} &=\frac{1}{15}f \Big[5(\alpha_1b_2 + \alpha_2b_1) - f (\alpha_1 + \alpha_2) \Big] + \frac{1}{30}f \Big[\alpha'_1(3f + 5b_2) + \alpha'_2(3f+5b_1) \Big]
\notag \\&
- \frac{1}{6}f\Big(\alpha_1b'_2 + \alpha_2b_1' \Big) + \frac{1}{6}f'(\alpha_1b_2 + \alpha_2b_1)\,.
\end{align}
These all have $n=1$, so the overall order of these corrections in the 2PCF is $(\cH/k)(r/d)$ which is equivalent to $(\cH/k)(1/kd)$ in the power spectrum, and they are present even for the case of a single tracer.
At $O(x^2)\sim O[1/(kd)^2]$, the contributions arise with $n=0$ so have an overall order $(\cH/k)^0(r/d)^2\sim (\cH/k)^0(1/kd)^2$, and are consequently similar in size to the $O(x)$ terms on large scales. In the bisector case we have,
\begin{align}
\Xi^{(2,0)}_{00}&=-\frac{4 }{45}f^{2}-\frac{1}{36}f^{\prime}(b_1'+b_2')
-\frac{1}{12}b_1'b_2'+\frac{1}{360}\left[5(b_1+b_2)+6f\right](f^{\prime \prime}+3f')
\notag \\&
+\frac{1}{72}\left(f+3b_{2}\right)(b_{1}^{\prime \prime}+3b_1')+\frac{1}{72}\left(f+3b_{1}\right)(b_{2}^{\prime \prime}+3b_2')-\frac{1}{60}(f')^2 \,, \\
\Xi^{(2,0)}_{02}&=-\frac{1}{90}f\Big[2f+9(b_1+b_2) \Big] + \frac{1}{15}f (b_1' + b_2' ) - \frac{1}{15}f'(b_1 +b_2) + \frac{1}{45}f'(b_1' +b_2' )
\notag\\&
- \frac{1}{90}f (b_1'' +b_2'') - \frac{1}{630}f''\Big[12f +7(b_1 +b_2)\Big] + \frac{2}{105}(f')^2.
\end{align}
The bisector and midpoint contributions are, however, no longer equal. For general $t$, all even $\ell'$ contribute, while in the bisector case only the $\ell'=4$ contribution vanishes.
In all these cases there is only marginal simplification from specialising to a single tracer. \\
\noindent \textbf{\textit{Contributions to the dipole}} \\
\noindent First we have the plane-parallel limit,
\be
\Xi^{(0,1)}_{11} = \frac{1}{5}{f\left[3 f(\alpha_1-\alpha_2)
+5(\alpha_1b_2-\alpha_2b_1)\right]}\,,
\ee
which vanishes for a single tracer.
At $O(x^1)$ the principal corrections to the dipole are only from $\ell'=0$ and $\ell'=2$ (and not from $\ell'=4$):
\begin{align}
\Xi^{(1,0)}_{10} &=
\frac{1}{6}\left[b_1'(f+3b_2)-b_2'(f+3b_1)
+f'(b_2-b_1)\right],
\\
\Xi^{(1,0)}_{12} &=\frac{2}{5}f(b_1 - b_2) + \frac{2}{15} \Big[ f(b'_2 -b'_1 ) + f'(b_1 - b_2) \Big]
\,.
\end{align}
These expressions are valid for the bisector and midpoint configurations and vanish in the case of a single-tracer survey. This is not the case for any other configurations and a single-tracer survey will have corrections from all even $\ell'$.
At higher order in $x$, they are further suppressed by a factor of $\cH/k$ (i.e., for $p=2$ the non-zero contributions are from $n=1$), and again the bisector and midpoint configurations have different (complicated) contributions~-- which all vanish for a single tracer, but not in the multi-tracer case. \\
\noindent \textbf{\textit{Contributions to the quadrupole}} \\
\noindent The plane-parallel limit is
\be
\Xi^{(0,0)}_{22} = -\frac{2}{21}{f\left[7(b_1+b_2)+6f\right]}\,,
\ee
and the leading $O(x)$ corrections are for both bisector and midpoint,
\begin{align}
\Xi^{(1,1)}_{21} &= \frac{1}{15}f\Big[f(\alpha_1 + \alpha_2) - 5(\alpha_1b_2 + \alpha_2b_1) \Big] + \frac{1}{15}f\Big[\alpha'_1(3f + 5b_2) + \alpha'_2(3f+5b_1) \Big]
\notag \\&
- \frac{1}{3}f(\alpha_1b'_2 + \alpha_2b'_1 ) + \frac{1}{3}f'(\alpha_1b_2 + \alpha_2b_1),\\
\Xi^{(1,1)}_{23} &=\frac{4}{35}f^2(\alpha_1 + \alpha_2) - \frac{3}{35}f^2(\alpha'_1 + \alpha'_2)\,.
\end{align}
As in the case of the monopole, these corrections arise from the relativistic terms and are non-zero for a single tracer survey also.
However, for consistency at this order we need all the remaining contributions with $p+n=2$. For the bisector case,
\begin{align}
\Xi^{(2,0)}_{20}&=+\frac{4 }{45}f^{2}-\frac{1}{18}f^{\prime}(b_1'+b_2')
-\frac{1}{6}b_1'b_2'+ \frac{1}{180}\big[5(b_1+b_2)+6f\big]f^{\prime \prime}+\frac{1}{36}\left(f+3b_{2}\right)b_{1}^{\prime \prime}
\notag \\&
+\frac{1}{36}\left(f+3b_{1}\right)b_{2}^{\prime \prime}-\frac{1}{30}(f')^2\,, \\
\Xi^{(0,2)}_{22}&= -\frac{2}{3}\alpha_1\alpha_2f^2\,, \\
\Xi^{(2,0)}_{22}&= \frac{1}{882}f\Big[106f + 189(b_1 + b_2)\Big] + \frac{1}{252}b'_1 (22f' -9f ) + \frac{1}{252}b'_2(22f' - 9f)
\notag\\&
- \frac{1}{84}f'[12f + 11(b_1 + b_2)] - \frac{11}{252}f(b''_1 + b''_2) - \frac{1}{1764}f''[132f+77(b_1 +b_2)]
\notag\\&
+\frac{11}{147}(f')^2,\\
\Xi^{(2,0)}_{24}&=\frac{4}{735}\big[-3f^2+2ff''-2(f')^2\big]\,.
\end{align}
A similar formula holds for the mid-point case.
~\\\noindent \textbf{\textit{Contributions to the higher multipoles}} \\
\noindent The plane-parallel limit for the octupole is
\begin{align}
\Xi^{(0,1)}_{33}=\frac{2}{5} f^{2}\left(\alpha_{2}-\alpha_{1}\right)\,,
\end{align}
which vanishes for a single tracer, while the hexadecapole is always present:
\begin{align}
\Xi^{(0,0)}_{44}=\frac{8}{35} f^{2}\,.
\end{align}
The leading wide-angle corrections are
\begin{align}
\Xi^{(1,0)}_{32}=-\frac{2}{5}f(b_1 - b_2) + \frac{1}{5}f \Big(b'_2 - b'_1 \Big) +\frac{1}{5}f'(b_1 - b_2),
\end{align}
for the octupole and
\begin{align}
\Xi^{(1,1)}_{43}&=-\frac{4}{35}f^2(\alpha_1 + \alpha_2) - \frac{4}{35}f^2(\alpha'_1 + \alpha'_2),
\end{align}
for the hexadecapole. As with the other even multipoles, we also need the $O(x^2)$ terms for consistency:
\begin{align}
\Xi^{(2,0)}_{42}&= -\frac{4}{245}f\Big[6f + 7(b_1 + b_2)\Big] + \frac{4}{35}f'(b_1 + b_2) + \frac{2}{35}b'_1(f' - 2f) + \frac{2}{35}b'_2 (f' - 2f)
\notag\\&
- \frac{1}{35}f\Big(b''_1 +b''_2\Big) - \frac{1}{245}f''\Big[12f + 7(b_1 +b_2)\Big] + \frac{12}{245}(f')^2,\\
\Xi^{(2,0)}_{44}&=\frac{4}{245}f^2 + \frac{2}{35}ff'+\frac{78}{2695}\Big[ff'' -(f')^2 \Big]\,,
\\
\Xi^{(2,0)}_{64}&= \frac{4}{231}\Big[ff'' -(f')^2 \Big] \,.
\end{align}
As for the other cases, this is given for the bisector line of sight.\\
In summary, for the even multipoles in a symmetric configuration of $\bm d$ and $\bm r$, the leading wide-angle corrections are suppressed in the 2PCF by a factor of $r/d$ (equivalently $1/kd$ in the power spectrum) but {\it also} by a factor of $\cH/k$, as they arise from the relativistic part. Given that $\cH/k\sim r/d$ for large-scale surveys, this implies that the consistent wide-angle correction needs to include the Newtonian $O(x^2)$ contributions. It also implies that the leading wide-angle corrections require the relativistic corrections for a consistent treatment, \CC{in addition to derivative terms}, even in the single-tracer case, as the Newtonian part does not capture the full range of effects.
However for the odd multipoles, the multi-tracer plane-parallel limit is already $O(\cH/k)$ from the relativistic corrections, while the leading wide-angle correction does not have this suppression factor (arising purely from the Newtonian part) -- which implies that the Newtonian wide-angle corrections will be a similar size to the relativistic plane-parallel part when $\cH/k\sim r/d$.
\subsubsection{Contributions to galaxy-magnification multipoles}
~\\ \textbf{\textit{Contributions to the monopole}}\\
\noindent At $O(x^0)$, i.e., the plane-parallel limit, we have
\be
\tilde\Xi^{(0,2)}_{00} = \frac{1}{3}\alpha_1\tilde\alpha_2f^2\,.
\ee
The leading wide-angle corrections, at $O(x)$, for both bisector and mid-point lines of sight, are
\begin{align}
\tilde\Xi^{(1,1)}_{01}&=
-\frac{1}{15}\tilde{\alpha}_2f(f-5b_1)- \frac{1}{6}\tilde{\alpha}_2b_1'f + \frac{1}{30}\tilde{\alpha}_2'f(3f+5b_1) + \frac{1}{6}\tilde{\alpha}_2b_1f'\,.
\end{align}
Note that this wide-angle correction to the monopole occurs at order $n=1$, compared with the plane-parallel limit which has $n=2$. Therefore we can expect the two to be of a similar size when $(\cH/k)^2\sim (\cH/k)(r/d)$. The next contributions, at $O(x^2)$, are all at order $n=2$ and thus sub-dominant. As in the galaxy-galaxy case, the midpoint and bisector results differ:
\begin{align}
\tilde\Xi^{(2,2)}_{00} &= -\frac{1}{9}\alpha_1\tilde\alpha_{2}f^2 + \frac{1}{24}\alpha_1\tilde\alpha_{2}'f^2 + \frac{1}{72}\alpha_1'f^2 (3\tilde\alpha_{2} - 2\tilde\alpha_{2}' ) + \frac{1}{12}\alpha_1\tilde\alpha_{2}ff'
\notag\\
&+ \frac{1}{72}f^2 (\alpha_1''\tilde\alpha_{2} +\alpha_1\tilde\alpha_{2}'') + \frac{1}{36}\alpha_1\tilde\alpha_{2} \Big[ff'' - (f')^2\Big],
\\
\tilde\Xi^{(2,2)}_{02}&= \frac{1}{18}\alpha_1\tilde{\alpha}_2f^2 + \frac{1}{45}\alpha_1'\tilde\alpha_2'f^2 - \frac{1}{90} \Big[\alpha_1\tilde{\alpha}_2''f^2 + \alpha_1''\tilde{\alpha}_2f^2 + 2\alpha_1\tilde{\alpha}_2ff'' -2\alpha_1\tilde{\alpha}_2(f')^2 \Big]\,.
\end{align}
~\\\noindent \textbf{\textit{Contributions to the dipole}}\\
\noindent The plane-parallel limit is
\be
\tilde\Xi^{(0,1)}_{11}=-\frac{3}{5}\tilde\alpha_{2}f^2 - \tilde{\alpha}_2b_1f\,,
\ee
while the leading wide-angle $O(x)$ correction is
\begin{align}
\tilde\Xi^{(1,2)}_{10}&=\frac{1}{6}f^2(\alpha_1'\tilde\alpha_{2} - \alpha_1\tilde\alpha_{2}')\,,
\\ \tilde\Xi^{(1,2)}_{12}&=-\frac{2 }{15}f^{2}\left({\alpha_{1}^{\prime} \tilde\alpha_{2}}-{ \alpha_{1}\tilde\alpha_{2}^{\prime} }\right)\,.
\end{align}
Note that these are further suppressed, beyond the factor $r/d$ compared to the plane-parallel limit, by an extra factor of $\cH/k$.
At $O(x^2)$ we have corrections from $n=1$ terms which can be a similar size to the $O(x)$ corrections~-- again these are different for the mid-point and bisector cases. For the bisector case we have
\begin{align}
\tilde\Xi^{(2,1)}_{11}&= \frac{1}{100}\tilde{\alpha}_2f(11f+5b_1) - \frac{1}{40}\tilde{\alpha}_2b_1'f + \frac{1}{200}\tilde{\alpha}_2'(30fb_1' - 11f^2 - 45b_1f)
\notag \\
& + \frac{3}{40}f'\Big[2\tilde{\alpha}_2b_1' - 2\tilde{\alpha}_2'b_1
- \tilde{\alpha}_2(2f + 3b_1) \Big] -\frac{3}{200}\tilde{\alpha}_2f''(6f+5b_1)
\notag \\
&- \frac{1}{200}\tilde{\alpha}_2''(9f^2 + 15b_1f) - \frac{3}{40}\tilde{\alpha}_2b_1''f + \frac{9}{100}\tilde{\alpha}_2(f')^2\,, \\
\tilde\Xi^{(2,1)}_{13}&= -\frac{2}{175}\tilde{\alpha}_2f^2 -\frac{4}{175}\tilde{\alpha}_2'f^2 + \frac{3}{350} \Big[\tilde{\alpha}_2''f^2 +2\tilde{\alpha}_2ff'' - 2\tilde{\alpha}_2(f')^2 \Big]\,.
\end{align}
~\\\noindent \textbf{\textit{Contributions to the quadrupole}}\\
\noindent At $O(x^0)$ we have
\be
\tilde\Xi^{(0,2)}_{22} = \frac{2}{3}\alpha_1\tilde\alpha_2f^2\,.
\ee
The leading wide-angle corrections at $O(x)$, for both the bisector and the midpoint cases, are
\begin{align}
\tilde\Xi^{(1,1)}_{21}&= \frac{1}{15}\tilde{\alpha}_2f(f-5b_1) - \frac{1}{3}\tilde{\alpha}_2b_1'f + \frac{1}{15}\tilde{\alpha}_2'f(3f+5b_1) + \frac{1}{3}\tilde{\alpha}_2b_1f'\,,
\\
\tilde\Xi^{(1,1)}_{23}&=\frac{4}{35}\tilde{\alpha}_2f^2 - \frac{3}{35}\tilde{\alpha}_2'f^2\,.
\end{align}
Again, note that this is of a similar size to the plane-parallel contribution. The next order $O(x^2)$ is suppressed, having only contributions from $n=2$:
\begin{align}
\tilde\Xi^{(2,2)}_{20}&= \frac{1}{9}\alpha_1\tilde\alpha_{2}f^2 - \frac{1}{18}\alpha_1'\tilde\alpha_{2}'f^2 +\frac{1}{36} \Big( \alpha_1\tilde\alpha_{2}''f^2 + \alpha_1''\tilde\alpha_{2}f^2 + 2\alpha_1\tilde\alpha_{2}ff'' \Big) - \frac{1}{18}\alpha_1\tilde\alpha_{2}(f')^2\,,
\\ \tilde\Xi^{(2,2)}_{22}&= - \frac{1}{18}\alpha_1\tilde\alpha_{2}f^2 - \frac{1}{12}\alpha_1\tilde\alpha_{2}'f^2 + \frac{1}{252}\alpha_1' (22\tilde\alpha_{2}'f^2 - 21\tilde\alpha_{2}f^2 ) - \frac{1}{6}\alpha_1\tilde\alpha_{2}ff'
\notag\\
&- \frac{11}{252} \Big(\alpha_1''\tilde\alpha_{2}f^2 + \alpha_1\tilde\alpha_{2}''f^2 + 2\alpha_1\tilde\alpha_{2}ff'' \Big) + \frac{11}{126}\alpha_1\tilde\alpha_{2}(f')^2 \,.
\end{align}
~\\
\textbf{\textit{Contributions to the higher multipoles}}\\
\noindent In the plane-parallel limit, the octupole has
\be
\tilde\Xi^{(0,1)}_{33} = \frac{2}{5}\tilde\alpha_2f^2\,,
\ee
and there is no hexadecapole.
The leading $O(x)$ wide-angle corrections are
\be
\tilde\Xi^{(1,2)}_{32}=-\frac{ 1}{5}f^{2}\left({\alpha_{1}^{\prime} \tilde\alpha_{2}}-{ \alpha_{1}\tilde\alpha_{2}^{\prime} }\right)\,,
\ee
for the octupole. The $O(x^2)$ contributions are a similar size; for the bisector case,
\begin{align}
\tilde\Xi^{(2,1)}_{31}&= -\frac{1}{100}\tilde{\alpha}_2f(11f+5b_1) - \frac{1}{10}\tilde{\alpha}_2b_1'f + \frac{1}{50}\tilde{\alpha}_2'\Big[5fb_1' - f(f-5b_1) \Big]
\notag \\
&+ \frac{1}{10}f'(\tilde{\alpha}_2b_1' - \tilde{\alpha}_2'b_1
+ \tilde{\alpha}_2b_1 ) -\frac{1}{100}\tilde{\alpha}_2f''(6f+5b_1) - \frac{1}{100}\tilde{\alpha}_2''(3f^2 + 5b_1f)
\notag \\
&- \frac{1}{20}\tilde{\alpha}_2b_1''f + \frac{3}{50}\tilde{\alpha}_2(f')^2\,,
\\
\tilde\Xi^{(2,1)}_{33}&= \frac{1}{63}\tilde{\alpha}_2f^2 + \frac{2}{63}\Big[\tilde{\alpha}_2'f^2 +\tilde{\alpha}_2ff'' - \tilde{\alpha}_2(f')^2 \Big] + \frac{1}{63}\tilde{\alpha}_2''f^2\,.
\end{align}
Finally, we have the wide-angle corrections for the hexadecapole:
\begin{align}
\tilde\Xi^{(1,1)}_{43}&= -\frac{4}{35}\tilde{\alpha}_2f^2 - \frac{4}{35}\tilde{\alpha}_2'f^2 \,,
\\
\tilde\Xi^{(2,2)}_{42}&= \frac{2}{35} \alpha_1\tilde{\alpha}_2(f')^2 - \frac{1}{35} \Big(\alpha_1''\tilde{\alpha}_2f^2 + \alpha_1\tilde{\alpha}_2''f^2 + 2\alpha_1\tilde{\alpha}_2ff'' - 2\alpha_1'\tilde{\alpha}_2'f^2 \Big) \,.
\end{align}
\section{Moments of the power spectrum}
In this section we give a detailed discussion of how to evaluate the integrals involved in $P^{pn}_{\ell\ell'}(k)$. Some of the results below are well known; we generalise them by giving new derivations valid for all values of $p,\ell,\ell'$.
\subsection{Evaluating the integrals $\mathcal{I}^p_{\ell\ell'}(k,q)$}
In general the integral \eqref{ipll} is formally divergent for $p\geq 0$, but its relevance for us is as a distribution, so we need to find its finite part. This is because it is integrated against $P_{\rm m}(q)$, which yields a convergent result provided that $P_{\rm m}(q)$ is sufficiently compact. For example, for $\ell=\ell'$, we have the well-known closure relation:
\be
\mathcal{I}^0_{\ell\ell}(k,q) = \frac{\pi}{2kq}\,\delta(k-q)\,,
\ee
which has a singular point at $k=q$.
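As a quick numerical illustration of how the closure relation operates under a $q$-integral (our own sketch, using a toy Gaussian spectrum $P(q)=e^{-q^2}$, for which $\int_0^\infty \ud q \,q^2 P(q)\, j_0(qr)=(\sqrt{\pi}/4)\,e^{-r^2/4}$), performing the $q$-integral first renders the $r$-integral convergent, and the result reproduces $(\pi/2)P(k)$:

```python
import numpy as np
from scipy.integrate import quad

# Toy spectrum P(q) = exp(-q^2) (an assumption, for illustration only).
# Doing the q-integral analytically first gives
#   int_0^inf dq q^2 P(q) j_0(qr) = (sqrt(pi)/4) exp(-r^2/4),
# and the closure relation predicts the remaining r-integral equals (pi/2) P(k).
k = 0.7

def integrand(r):
    j0 = np.sinc(k * r / np.pi)          # spherical Bessel j_0(x) = sin(x)/x
    return r**2 * j0 * (np.sqrt(np.pi) / 4) * np.exp(-r**2 / 4)

lhs, _ = quad(integrand, 0, 40)          # Gaussian damping makes this convergent
rhs = np.pi / 2 * np.exp(-k**2)          # (pi/2) P(k)
print(lhs, rhs)
```

Swapping the order of integration is exactly what gives the otherwise divergent $r$-integral its distributional meaning.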
However, for $\ell\neq\ell'$ the distributions and singular points become more complicated -- and in some cases quite subtle. We give a detailed discussion of the subtleties of the distributions in \autoref{appI}, including a full derivation of the results presented here. We first give formulas for $p=0$, then demonstrate how to calculate the integrals for $p>0$.
Note that the integral \eqref{ipll} satisfies a simple symmetry:
\be
\mathcal{I}^p_{\ell\ell'}(k,q)=
\mathcal{I}^p_{\ell'\ell}(q,k)\,.
\ee
\noindent{$\bm{p=0}$}\\
Consider $\mathcal{I}^0_{\ell\ell'}(k,q)$, with $\ell\neq\ell'$, following~\cite{1991JMP....32..642M} (derived in a new way in \autoref{appI}).
For $\ell-\ell'$ even:
\begin{align}\label{dsjkcbnsjcskn}
\mathcal{I}^0_{\ell \ell^{\prime}}\left(k, q\right)
=\left\{
\begin{array}{l}
g_{\ell \ell^{\prime}}\left({k}, {q}\right) \Theta\left({k}-{q}\right)+\dfrac{\pi}{2 {k} {q}}(-1)^{\left(\ell-\ell^{\prime}\right) / 2} \delta\left({k}-{q}\right) ~~~~\ell>\ell' \,,\\~\\
g_{\ell^{\prime} \ell}\left({q}, {k}\right) \Theta\left({q}-{k}\right)+\dfrac{\pi}{2 {k} {q}}(-1)^{\left(\ell^{\prime}-\ell\right) / 2} \delta\left({k}-{q}\right)~~~~~\ell<\ell'\,,
\end{array}
\right.
\end{align}
where $\Theta$ is the unit step function (we do not need it at the origin as this does not affect the distribution~-- but see \autoref{appI} for the case $p=-2$ where we \emph{do} need it). We can rewrite this as
\bea
\mathcal{I}^0_{\ell \ell^{\prime}}\left(k, q\right)
&=& g_{\ell \ell^{\prime}}\left({k}, {q}\right) \Theta\left({k}-{q}\right)\Theta(\ell-\ell')+g_{\ell^{\prime} \ell}\left({q}, {k}\right) \Theta\left({q}-{k}\right)\Theta(\ell'-\ell)
\notag\\ &&{}
+\dfrac{\pi}{2 {k} {q}}(-1)^{\left(\ell-\ell^{\prime}\right) / 2} \delta\left({k}-{q}\right).
\eea
The function $g$, for ${k}< q$, is given by
\begin{align}
\begin{aligned}
g_{\ell^{\prime} \ell}\left({q}, {k}\right)=& \frac{\pi }{{q}^{ 3}} \left(\frac{k}{q}\right)^{\ell}\frac{\Gamma\left[\left(\ell+\ell^{\prime}+3\right) / 2\right]}{\Gamma\left(\ell+3 / 2\right) \Gamma\left[\left(\ell'-\ell\right) / 2\right]}\, { }_{2} F_{1}\left(\frac{\ell+\ell^{\prime}+3}{2}, \frac{\ell-\ell^{\prime}}{2}+1 ; \ell+\frac{3}{2} ; \frac{{k}^{2}}{{q}^{2}}\right),
\end{aligned}
\end{align}
and $g_{\ell\ell'}(k,q)=0$. For $k>q$ we have
\begin{align}
\begin{aligned}
g_{\ell \ell^{\prime}}\left({k}, {q}\right)=& \frac{\pi }{{k}^{ 3}} \left(\frac{q}{k}\right)^{\ell'}\frac{\Gamma\left[\left(\ell+\ell^{\prime}+3\right) / 2\right]}{\Gamma\left(\ell^{\prime}+3 / 2\right) \Gamma\left[\left(\ell-\ell'\right) / 2\right]} \,{ }_{2} F_{1}\left(\frac{\ell+\ell^{\prime}+3}{2}, \frac{\ell^{\prime}-\ell}{2}+1 ; \ell^{\prime}+\frac{3}{2} ; \frac{{q}^{2}}{{k}^{2}}\right).
\end{aligned}
\end{align}
For $\ell-\ell'$ odd we have instead,
\be
\mathcal{I}^0_{\ell \ell^{\prime}}\left(k, q\right) = g_{\ell \ell^{\prime}}\left({k}, {q}\right) \Theta\left({k}-{q}\right) + g_{\ell^{\prime} \ell}\left({q}, {k}\right) \Theta\left({q}-{k}\right)\,.
\ee
These functions are not actually that complicated for the small values of $\ell$ that we need, and in general can be written in terms of Legendre functions~\cite{1991JMP....32..642M}. In the case of $\ell-\ell'$ even, the relevant functions are, for $q>k$,
\begin{align}
g_{20}(q,k) = \frac{3\pi}{2q^3}\,,~~
g_{31}(q,k) = \frac{5\pi k}{2q^4}\,,~~
g_{40}(q,k) = \frac{5 \pi\left(3 q^{2}-7 k^{2}\right)}{4 q^{5}}\,,~~
g_{42}(q,k) = \frac{7 \pi k^{2}}{2 q^{5}}\,,
\end{align}
and so on. The cases for $k>q$ are given by swapping $k\leftrightarrow q$. Note that $g_{\ell\ell'}(k,q)$ are not required for $\ell<\ell'$. From this we find
\begin{align}
\mathcal{I}^0_{20}\left(k, q\right)&=\frac{3 \pi }{2 k^{3}}\,\Theta(k-q)-\frac{\pi }{2 kq}\,\delta(k-q)\,,\\
\mathcal{I}^0_{31}(k, q)&=\dfrac{5 \pi q }{2 k^{4}}\,\Theta(k-q)-\dfrac{\pi }{2 q^{2}}\,\delta(k-q)\,,
\end{align}
and so on. (Formulas are listed in \autoref{appI}.)
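The explicit small-$\ell$ forms above can be checked directly against the hypergeometric expression for $g_{\ell'\ell}(q,k)$; the following numerical comparison (ours, at arbitrarily chosen $k<q$) confirms them:

```python
import mpmath as mp

def g(lp, l, q, k):
    """g_{l' l}(q, k) for k < q, from the 2F1 expression above."""
    pref = mp.pi / q**3 * (k / q)**l
    pref *= mp.gamma((l + lp + 3) / 2) / (mp.gamma(l + mp.mpf(3) / 2)
                                          * mp.gamma((lp - l) / 2))
    return pref * mp.hyp2f1((l + lp + 3) / 2, (l - lp) / 2 + 1,
                            l + mp.mpf(3) / 2, k**2 / q**2)

q, k = mp.mpf(2), mp.mpf('0.6')
checks = {
    (2, 0): 3 * mp.pi / (2 * q**3),
    (3, 1): 5 * mp.pi * k / (2 * q**4),
    (4, 0): 5 * mp.pi * (3 * q**2 - 7 * k**2) / (4 * q**5),
    (4, 2): 7 * mp.pi * k**2 / (2 * q**5),
}
for (lp, l), val in checks.items():
    assert mp.almosteq(g(lp, l, q, k), val)
```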
For $\ell-\ell'$ odd, the hypergeometric functions reduce to elementary functions, and the step-function pieces can be combined to give the integrals as sums of polynomials multiplying factors of $\ln[|q-k|/(k+q)]$ and $1/(q-k)$. These give singular points to be integrated over later. The lowest $\ell\ell'$ integrals are
\begin{align}
\mathcal{I}^0_{10}\left(k, q\right)&=\frac{1}{k(k^{2}-q^{2})}-\frac{1}{2 k^{2} q}\ln \dfrac{|k-q|}{k+q}\,,\\
\mathcal{I}^0_{21}(k, q)&=\dfrac{3 q^{2}-k^{2}}{2 k^{2} q(k^2-q^2)} -\dfrac{k^{2}+3 q^{2} }{4 k^{3} q^{2}}\ln \dfrac{|k-q|}{k+q}\,,\\
\mathcal{I}^0_{30}\left(k, q\right)&= \frac{13 k^{2}-15 q^{2}}{2 k^{3}(k^2- q^{2})}+\frac{3(5 q^{2}- k^{2}) }{4 k^{4} q}\ln \frac{|k-q|}{k+q}\,,\\
\mathcal{I}^0_{32}\left(k, q\right)&=\frac{\left(5 q^{2}-3k^{2}\right)\left(k^{2}+3 q^{2}\right)}{8 q^{2} k^{3}(k^2-q^2)}-\frac{3\left(k^{4}+2 k^{2} q^{2}+5 q^{4}\right) }{16 k^{4} q^{3}}\ln \frac{|k-q|}{k+q}\,.
\end{align}
To find the corresponding formulas with $\ell$ and $\ell'$ reversed, switch $q$ and $k$. These are valid for all $q\neq k$. Further formulas are in \autoref{appI}. \\
\noindent{$\bm{p>0}$}\\
We can derive the distributions for $\mathcal{I}^p_{\ell\ell'}(k,q)$ from $\mathcal{I}^0_{\ell\ell'}(k,q)$ by using
\be
\mathcal{I}^p_{\ell\ell'}(k,q) = -\frac{1}{k^{1-\ell}}\frac{\partial}{\partial k}\left[k^{1-\ell}\int_0^\infty \ud r \,r^{1+p}\, j_{\ell-1}(kr)\,j_{\ell'}(qr)\right] = -k^{\ell-1}\frac{\partial}{\partial k}\left[k^{1-\ell}\mathcal{I}^{p-1}_{\ell-1,\ell'}(k,q)\right],
\ee
and
\be\label{hsbdsjdhbcshjdc}
\mathcal{I}^p_{\ell\ell'}(k,q) = \frac{1}{k^{2+\ell}}\frac{\partial}{\partial k}\left[k^{2+\ell}\int_0^\infty \ud r \,r^{1+p}\, j_{\ell+1}(kr)\,j_{\ell'}(qr)\right] = \frac{1}{k^{2+\ell}}\frac{\partial}{\partial k}\left[k^{2+\ell}\mathcal{I}^{p-1}_{\ell+1,\ell'}(k,q)\right],
\ee
to step up the powers of $r$ in the integrand. These are found via the identities
\begin{align}
j_{\ell}^{\prime}(x)=j_{\ell-1}(x)-\frac{\ell+1}{x} j_{\ell}(x), \quad j_{\ell}^{\prime}(x)=-j_{\ell+1}(x)+\frac{\ell}{x} j_{\ell}(x)\,.
\end{align}
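These are the standard spherical Bessel recurrences; a quick numerical confirmation (ours) using SciPy:

```python
import numpy as np
from scipy.special import spherical_jn

# Check j_l'(x) = j_{l-1}(x) - (l+1)/x j_l(x) = -j_{l+1}(x) + l/x j_l(x)
x = np.linspace(0.3, 20.0, 200)
for l in range(1, 6):
    jl_prime = spherical_jn(l, x, derivative=True)
    rec_down = spherical_jn(l - 1, x) - (l + 1) / x * spherical_jn(l, x)
    rec_up = -spherical_jn(l + 1, x) + l / x * spherical_jn(l, x)
    assert np.allclose(jl_prime, rec_down)
    assert np.allclose(jl_prime, rec_up)
```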
Then we can derive
\begin{align}\label{djsncksdns}
\mathcal{I}^{p+2}_{\ell\ell'}(k,q) = -k^{\ell-1}\frac{\partial}{\partial k}\left[k^{-2\ell}\frac{\partial}{\partial k}k^{\ell+1}\mathcal{I}^{p}_{\ell\ell'}(k,q)\right]%
= \left[-\frac{\partial^2}{\partial k^2}-\frac{2}{k}\frac{\partial}{\partial k}+\frac{\ell(\ell+1)}{k^2}\right]\mathcal{I}^{p}_{\ell\ell'}(k,q),
\end{align}
to step up two powers of $r$ while keeping $\ell\ell'$ the same. The operator in square brackets in the second equality is the spherical Bessel differential operator. From these relations we see that if $|\ell-\ell'|+p $ is even, then the resulting distributions will be a mixture of step functions and $\delta$-functions; while if $|\ell-\ell'|+p $ is odd, the resulting integrals will be rational functions with poles of order $p+1$ at $k=q$, plus a rational function multiplying a logarithmic singularity $\sim\ln|k-q|$.
First we calculate $\mathcal{I}^1_{\ell \ell^{\prime}}\left(k, q\right)$, starting with
\be
\mathcal{I}^1_{00}(k,q) = \frac{1}{k^{2}}\frac{\partial}{\partial k}\left[k^{2}\mathcal{I}^{0}_{10}(k,q)\right]
=-\frac{2}{(q+k)^{2}(k-q)^{2}}\,.
\ee
Then
\begin{align}
\mathcal{I}^1_{20}(k,q) &= \frac{1}{k^{4}}\frac{\partial}{\partial k}\left[k^{4}\mathcal{I}^{0}_{30}(k,q)\right]
=-\frac{3}{2 k^{3} q} \ln \dfrac{|k-q|}{k+q}+\frac{5 k^{2}-3 q^{2}}{k^{2}(q+k)^{2}(k-q)^{2}}\,,
\end{align}
with similar formulas for other values of $\ell-\ell'$ even. Note that the singularities in these functions at $k=q$ do not get worse than $ \ln {|k-q|}$ and $1/(k-q)^2$.
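Both differentiation steps can be verified symbolically on the branch $k>q$ (where the $\Theta$ and $\delta$ pieces are absent); the sketch below (ours) recovers $\mathcal{I}^1_{00}$ and $\mathcal{I}^1_{20}$ from $\mathcal{I}^0_{10}$ and $\mathcal{I}^0_{30}$:

```python
import sympy as sp

k, q = sp.symbols('k q', positive=True)
L = sp.log((k - q) / (k + q))      # the log factor on the branch k > q

I0_10 = 1 / (k * (k**2 - q**2)) - L / (2 * k**2 * q)
I0_30 = ((13*k**2 - 15*q**2) / (2*k**3*(k**2 - q**2))
         + 3*(5*q**2 - k**2) * L / (4*k**4*q))

# I^1_{00} = (1/k^2) d/dk [k^2 I^0_{10}]  ->  -2/((k+q)^2 (k-q)^2)
I1_00 = sp.simplify(sp.diff(k**2 * I0_10, k) / k**2)
assert sp.simplify(I1_00 + 2 / (k**2 - q**2)**2) == 0

# I^1_{20} = (1/k^4) d/dk [k^4 I^0_{30}]
I1_20 = sp.diff(k**4 * I0_30, k) / k**4
target = -3*L / (2*k**3*q) + (5*k**2 - 3*q**2) / (k**2 * (k**2 - q**2)**2)
assert sp.simplify(I1_20 - target) == 0
```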
For $\ell-\ell'$ odd, these come from $\ell-\ell'$ even with $p=0$, and thus involve $\delta$-functions. For example,
\begin{align}
\mathcal{I}^1_{10}(k,q) & = \frac{1}{k^{3}}\frac{\partial}{\partial k}\left[k^{3}\mathcal{I}^{0}_{20}(k,q)\right]= -\frac{ \pi}{2 q^{2}}\,\delta'(k-q)\,,
\\
\mathcal{I}^1_{41}(k,q) & =-\frac{4 \pi}{q^{3}}\,\delta(k-q)+\frac{\pi }{2 q^{2}}\,\delta'(k-q)+\frac{35 \pi q }{2 k^{5}}\,\Theta(k-q)\,.
\end{align}
We can also derive general formulas from the closure relation,\footnote{Expressions involving derivatives of delta functions can appear different depending on how they are derived. From
\be
\frac{\partial}{\partial k}\left[q^p\delta(k-q)\right]=q^p\delta'(k-q)=\frac{\partial}{\partial k}\left[k^p\delta(k-q)\right]\,,
\ee
we can derive
\begin{align}
q^p\delta'(k-q)&=k^p\delta'(k-q)+p k^{p-1}\delta(k-q)\,,\\
q^p\delta''(k-q)&=k^p\delta''(k-q)+2pk^{p-1}\delta'(k-q)+p(p-1)k^{p-2}\delta(k-q)\,,
\end{align}
and so on. This explains the difference in appearance between some of the expressions here and those in~\cite{Reimberg:2015jma}.
}
\begin{align}
\mathcal{I}^1_{\ell-1,\ell}(k,q) & = \frac{\pi(\ell-1) }{2 k^{3}}\,\delta(k-q)+\frac{\pi }{2 k^{2}}\,\delta'(k-q) = \frac{\pi}{2}\frac{q^{\ell-1}}{k^{\ell+1}}\,\delta'(k-q)\,, \\
\mathcal{I}^1_{\ell+1,\ell}(k,q) &= \frac{\pi (\ell+2)}{2 k^{3}}\,\delta(k-q) -\frac{\pi }{2 k^{2}}\,\delta'(k-q) = -\frac{\pi}{2}\frac{k^{\ell}}{q^{\ell+2}}\,\delta'(k-q)\,.
\end{align}
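The identities in the footnote can be checked by their action on a smooth test function: since $\delta^{(n)}(k-q)$ acting on $f(q)$ picks out $\ud^n[\,\cdot\,]/\ud q^n$ at $q=k$, both sides must yield the same combination of $f(k)$ and its derivatives. A short symbolic check (ours; note that this fixes the coefficient of the $\delta$-function term in the $\delta''$ identity to $p(p-1)k^{p-2}$):

```python
import sympy as sp

k, q = sp.symbols('k q', positive=True)
p = sp.symbols('p', integer=True, positive=True)
f = sp.Function('f')

# q^p delta'(k-q) acts on f(q) as d/dq [q^p f(q)] at q = k:
lhs1 = sp.diff(q**p * f(q), q).subs(q, k)
rhs1 = k**p * sp.diff(f(q), q).subs(q, k) + p * k**(p - 1) * f(k)
assert sp.simplify(lhs1 - rhs1) == 0

# q^p delta''(k-q) acts as d^2/dq^2 [q^p f(q)] at q = k:
lhs2 = sp.diff(q**p * f(q), q, 2).subs(q, k)
rhs2 = (k**p * sp.diff(f(q), q, 2).subs(q, k)
        + 2 * p * k**(p - 1) * sp.diff(f(q), q).subs(q, k)
        + p * (p - 1) * k**(p - 2) * f(k))
assert sp.simplify(lhs2 - rhs2) == 0
```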
For $p\geq2$ we just repeat the process. When $\ell=\ell'$ we can use~\eqref{djsncksdns} and the closure relation, giving
\begin{align}
\mathcal{I}^2_{\ell\ell}(k,q)& =\frac{\pi(\ell+2)(\ell-1) }{2 k^{4}}\,\delta(k-q)+\frac{\pi }{k^{3}}\,\delta'(k-q)-\frac{\pi }{2 k^{2}}\,\delta''(k-q)\,,\\
\mathcal{I}^4_{\ell\ell}(k,q)& =\frac{\pi(\ell-1)(\ell-3)(\ell+4)(\ell+2) }{2 k^{6}}\,\delta(k-q)+\frac{4 \pi\left(\ell^{2}+\ell-3\right) }{k^{5}}\,\delta'(k-q)\nonumber\\
&-\frac{\pi(\ell+3)(\ell-2) }{k^{4}}\,\delta''(k-q)-\frac{2 \pi }{k^{3}}\,\delta'''(k-q)+\frac{\pi }{2 k^{2}}\,\delta^{(4)}(k-q)\,.
\end{align}
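These expressions can be verified by acting on a test function $f(q)$: the closure relation acts as $\pi f(k)/(2k^2)$, and applying the operator of \eqref{djsncksdns} once or twice must reproduce the action of $\mathcal{I}^2_{\ell\ell}$ and $\mathcal{I}^4_{\ell\ell}$, using $\delta^{(n)}(k-q)\to f^{(n)}(k)$. A symbolic sketch (ours):

```python
import sympy as sp

k = sp.symbols('k', positive=True)
l = sp.symbols('ell', positive=True)
f = sp.Function('f')

# Action of the closure relation on f(q):  pi f(k) / (2 k^2).
closure = sp.pi * f(k) / (2 * k**2)

# Spherical Bessel operator D_k = -d^2/dk^2 - (2/k) d/dk + l(l+1)/k^2
D = lambda F: -sp.diff(F, k, 2) - 2 / k * sp.diff(F, k) + l * (l + 1) / k**2 * F

# Action of the stated I^2_{ll} on f(q):
I2_action = (sp.pi * (l + 2) * (l - 1) / (2 * k**4) * f(k)
             + sp.pi / k**3 * sp.diff(f(k), k)
             - sp.pi / (2 * k**2) * sp.diff(f(k), k, 2))
assert sp.simplify(D(closure) - I2_action) == 0

# Applying D once more reproduces the stated I^4_{ll}:
I4_action = (sp.pi * (l - 1) * (l - 3) * (l + 4) * (l + 2) / (2 * k**6) * f(k)
             + 4 * sp.pi * (l**2 + l - 3) / k**5 * sp.diff(f(k), k)
             - sp.pi * (l + 3) * (l - 2) / k**4 * sp.diff(f(k), k, 2)
             - 2 * sp.pi / k**3 * sp.diff(f(k), k, 3)
             + sp.pi / (2 * k**2) * sp.diff(f(k), k, 4))
assert sp.simplify(D(D(closure)) - I4_action) == 0
```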
We can derive general formulas for nearby $\ell$, such as
\begin{align}
\mathcal{I}^2_{\ell-2,\ell}(k,q)& = \frac{\pi (\ell-1)(\ell-3) }{2 k^{4}}\,\delta(k-q)+\frac{\pi(2\ell-3) }{2 k^{3}}\,\delta'(k-q)+\frac{\pi }{2 k^{2}}\,\delta''(k-q)\,,\\
\mathcal{I}^2_{\ell+2,\ell}(k,q)& = \frac{ \pi(\ell+2)(\ell+4) }{2 k^{4}}\,\delta(k-q)-\frac{\pi(2\ell+5) }{2 k^{3}}\,\delta'(k-q)+\frac{\pi }{2 k^{2}}\,\delta''(k-q)\,,
\end{align}
with similar formulas for $\mathcal{I}^3_{\ell\pm3,\ell}(k,q)$, $\mathcal{I}^3_{\ell\pm1,\ell}(k,q)$, $\mathcal{I}^4_{\ell\pm4,\ell}(k,q)$, $\mathcal{I}^4_{\ell\pm2,\ell}(k,q)$\,.
Tabulated integrals may be found in \autoref{appI}.
\subsection{Integration of the power spectrum}
To complete our expansion of the multipoles of the power spectrum, we need to compute
\be\label{dsbjhdbcsjds}
P^{pn}_{\ell\ell'}(k)=k^p\int_0^\infty {\ud q}\, q^{2-n}\,P_{\rm m}(q)\, \mathcal{I}^p_{\ell\ell'}(k,q)\,.
\ee
Because we have evaluated $\mathcal{I}^p_{\ell\ell'}(k,q)$ in terms of relatively simple distributions, these integrals are straightforward to compute numerically, rather than requiring highly oscillatory triple integrals. Given that they are distributions, there are some subtleties:
\begin{description}
\item[$|\ell-\ell'|+p $ even:] The integrals consist of delta functions and derivatives thereof, together with step functions. The delta functions evaluate to sample the power spectrum and its derivatives at $k$, and the step functions sample the long wavelength part of the power spectrum for $\ell<\ell'$ and the short wavelength part for $\ell>\ell'$.
\item[$|\ell-\ell'|+p $ odd:] The integrals consist of regular plus singular parts, all of the form
\be
\int_0^\infty \ud q\, \left\{ f_1(q,k)\big[\ln|k-q|-\ln(k+q)\big]+\frac{f_2(q,k)}{(k-q)^{p+1}}\right\}\,,
\ee
which is singular at $k=q$ and formally diverges. However, what we need is the finite part~-- i.e. the Cauchy principal value in the case of the $1/(k-q)$ singularities, and Hadamard regularisation for $p>0$. The stronger singularities for $p>0$ can be evaluated by integrating by parts, assuming $P_{\rm m}(q)$ and its derivatives vanish sufficiently rapidly at $q=0$ and $q=\infty$:
\begin{align}
&\int_0^\infty \ud q\, \left\{ f_1(q,k)\big[\ln|k-q|-\ln(k+q)\big]+\frac{f_2(q,k)}{(k-q)^{p+1}}\right\}
\\
&=\int_0^\infty \ud q\, \bigg\{- f_1(q,k)\ln(k+q)
\nonumber\\\nonumber &\qquad\qquad\quad~
+(k-q)(1-\ln|k-q|)\left[-\frac{\partial f_1(q,k)}{\partial q}+\frac{(-1)^{p+1}}{p!}\frac{\partial^{p+2} f_2(q,k)}{\partial q^{p+2}}\right]\bigg\}.
\end{align}
In general we would use finite limits, which are most easily dealt with in this formula by cutting off $P(q)$ with step functions~-- after integration by parts these produce delta functions in the integrand (and cannot be ignored). More details are given in \autoref{dsjkcnsdkcnskc}.
\end{description}
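A minimal numerical sketch of the odd case (ours, with a toy Gaussian spectrum $P(q)=e^{-q^2}$): integrating $q^2 P(q)$ against $\mathcal{I}^0_{10}(k,q)$, with the $1/(q-k)$ pole treated as a Cauchy principal value and the logarithm as an ordinary integrable singularity, reproduces the convergent order-swapped double integral:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

k, Q = 1.0, 8.0                        # evaluation point; cutoff where P(q) ~ 0
P = lambda q: np.exp(-q**2)            # toy spectrum (assumption for illustration)

# Distributional evaluation of int_0^inf dq q^2 P(q) I^0_{10}(k,q).
# Pole piece: 1/(k(k^2-q^2)) = -[1/(k(k+q))] / (q-k)  ->  Cauchy principal value.
pv, _ = quad(lambda q: -q**2 * P(q) / (k * (q + k)),
             0, Q, weight='cauchy', wvar=k)
# Log piece: integrable singularity at q = k, so just split the range there.
lg, _ = quad(lambda q: -q * P(q) / (2 * k**2) * np.log(abs(k - q) / (k + q)),
             0, Q, points=[k], limit=200)
dist = pv + lg

# Direct evaluation, doing the q-integral first (analytic for this P):
#   int_0^inf dq q^2 P(q) j_0(qr) = (sqrt(pi)/4) exp(-r^2/4).
direct, _ = quad(lambda r: r**2 * spherical_jn(1, k * r)
                 * np.sqrt(np.pi) / 4 * np.exp(-r**2 / 4), 0, 40, limit=200)
print(dist, direct)    # the two evaluations agree
```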
An alternative approach which avoids the singular integrals is to use \eqref{djsncksdns} and \eqref{hsbdsjdhbcshjdc} to reduce the order in \eqref{dsbjhdbcsjds} down to $\mathcal{I}^{-2}_{\ell\ell'}(k,q) $. For example, for $p$ even,
\begin{align}\label{kjdsnvsdbsjhk}
P^{pn}_{\ell\ell'}(k) = \left[\mathcal{D}^{(\ell)}_k\right]^{(p+2)/2}\int_0^\infty {\ud q}\, q^{2-n}\,P_{\rm m}(q)\, \mathcal{I}^{-2}_{\ell\ell'}(k,q)\,,
\end{align}
where
\be
\mathcal{D}^{(\ell)}_k=-\frac{\partial^2}{\partial k^2}-\frac{2}{k}\frac{\partial}{\partial k}+\frac{\ell(\ell+1)}{k^2}\,.
\ee
For $p$ odd,
\begin{align}\label{kldmdskvmlds}
P^{pn}_{\ell\ell'}(k) = \left[\mathcal{D}^{(\ell)}_k\right]^{(p+1)/2}
\frac{1}{k^{2+\ell}}\frac{\partial}{\partial k}\left[k^{2+\ell}
\int_0^\infty {\ud q}\, q^{2-n}\,P_{\rm m}(q)\, \mathcal{I}^{-2}_{\ell+1,\ell'}(k,q)\right]\,.
\end{align}
Therefore, given $P^{-2,n}_{\ell\ell'}(k)$, we can compute the rest simply by taking suitable derivatives.
There is a subtlety involved in swapping the derivative and integral because the integrals for $p\geq0$ are distributions and not convergent~-- yet the integrals in \eqref{kjdsnvsdbsjhk} and \eqref{kldmdskvmlds} are convergent and well defined. A full discussion of this is given in \autoref{dsjkcnsdkcnskc}.
\iffalse
\clearpage
\fi
\subsubsection{Multipoles of the power spectra}
For completeness, we explicitly give here the lowest multipoles of the power spectra up to $O(1/kd)$. For the galaxy-galaxy bisector case, we have
\begin{align}
\mathcal{P}_0 {(k)}&= \frac{1}{15} P(k)\left[3 f^2+5 f (b_1 + b_2)+15b_1 b_2 \right]+\frac{1}{30 k^2 d}\Big[kP_{,k}(k)+P(k)\Big]\Big[
10f( \alpha_1 b_2+ b_1 \alpha_2)
\nonumber \\
& -2 f^2( \alpha_1 +\alpha_2)
+\alpha_1^{\prime}f (3f+5 b_2)
+\alpha_2^{\prime}f (3f+5 b_1)
\nonumber \\
&-5 f(b_1^{\prime} \alpha_2+ b_2^{\prime} \alpha_1)
+5f'( \alpha_1 b_2+ b_1 \alpha_2) \Big]\,,\\
\mathcal{P}_1{(k)}&=
\frac{{\I} }{{5 k}} P(k)f\left[3 f(\alpha_2 - \alpha_1)-5 \alpha_1 b_2+5 b_1 \alpha_2\right] \nonumber \\
& +\frac{{\I}}{10kd}\Big\{kP_{,k}(k) \left[(4 f-3f')(b_2- b_1)-3f(b_2^{\prime} -b_1^{\prime}) +5(b_1^{\prime} b_2-b_2^{\prime}b_1)\right]
\nonumber\\
&+{4P(k)}\left[
(3 f+f')(b_2 - b_1)-f(b_2^{\prime} -b_1^{\prime}) \right]\Big\}\,, \\
\mathcal{P}_2{(k)}&=
\frac{2}{21}P(k) f\left[6 f+7 (b_1+ b_2)\right] +\frac{1}{21k^2d}\bigg\{ { kP_{,k}(k) } \Big[-(\alpha_1+\alpha_2) f^2-7f(b_1 \alpha_2+\alpha_1 b_2)
\nonumber\\
& +7f'(\alpha_1 b_2+b_1 \alpha_2)
+f(6 f+7b_1)\alpha_2^{\prime}+f(6 f+7b_2)\alpha_1^{\prime}-7f(\alpha_2b_1^{\prime}+\alpha_1b_2^{\prime})\Big] \nonumber \\
& +{14}P ( k ) \Big[-10f^2 (\alpha_1+ \alpha_2) +14f(b_1 \alpha_2+\alpha_1 b_2)-\left(3 f^2+14b_2 f\right)\alpha_1^{\prime} \nonumber\\
&-\left(3 f^2+14f b_1\right)\alpha_2^{\prime}-14(b_1 \alpha_2+\alpha_1 b_2)f^{\prime}+14f(\alpha_2b_1^{\prime}+\alpha_1b_2^{\prime})\Big]\bigg\}\,.
\end{align}
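The leading ($d$-independent) parts of $\mathcal{P}_0$ and $\mathcal{P}_2$ can be cross-checked by projecting the plane-parallel multi-tracer Kaiser kernel $(b_1+f\mu^2)(b_2+f\mu^2)\,P(k)$ onto Legendre multipoles; this is our own sketch, with the standard Newtonian kernel assumed:

```python
import sympy as sp

mu, f, b1, b2 = sp.symbols('mu f b_1 b_2')

# Plane-parallel multi-tracer Kaiser kernel (assumed Newtonian limit)
kernel = (b1 + f * mu**2) * (b2 + f * mu**2)

def multipole(expr, ell):
    """Legendre multipole: (2l+1)/2 * int_{-1}^{1} dmu expr * L_l(mu)."""
    return sp.Rational(2 * ell + 1, 2) * sp.integrate(
        expr * sp.legendre(ell, mu), (mu, -1, 1))

P0 = sp.expand(multipole(kernel, 0))
P2 = sp.expand(multipole(kernel, 2))

# Leading terms of the monopole and quadrupole quoted above
assert P0 == sp.expand((3 * f**2 + 5 * f * (b1 + b2) + 15 * b1 * b2) / 15)
assert P2 == sp.expand(sp.Rational(2, 21) * f * (6 * f + 7 * (b1 + b2)))
```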
\CC{In \autoref{jscndsk} we show a plot of the $O(1/kd)$ wide-angle corrections to the dipole for the multi-tracer bisector case, in order to illustrate the size of the corrections and the importance of the derivative terms. Here we have chosen $b_1=1,b_{e1}=0=\mathcal{Q}_1$, and $b_2=1+z,~b_{e2}=-5,~\mathcal{Q}_2=2$ for illustration purposes. In the bisector case of $t=1/2$, we see that for small $k$ the wide-angle corrections are small relative to the leading relativistic term, but become important for larger $k$. This is the opposite to the endpoint case of $t=0$, where the smaller $k$ is, the more important the wide-angle corrections are.
We also see that inclusion of derivative terms is vital for calculating the wide-angle corrections, since neglecting them gives the wrong values, often by a significant margin. The details of how important they are depend on the biases $b_i, b_{ei},{\cal Q}_i$, as well as the triangle configuration that is chosen. For example, when $t=1$ the corrections are much smaller because there are no $b_2'$ factors. The features from the baryon acoustic oscillations are enhanced by the $kP_{,k}$ contribution, which is larger than $P(k)$ for large $k$.
}
}
For the case of the galaxy magnification power spectrum we have
\begin{align}
\tilde{\mathcal{P}}_0{(k)}&=
- \frac{1}{3 k^2}P(k) f^2 \alpha_1 \tilde{\alpha}_2 +\frac{1}{30 k^2 d}\big[kP_{,k}(k) +P(k)\big]\big[2 f \tilde{\alpha}_2(5 b_1-f)+(3 f+5 b_1)f\tilde{\alpha}_2^{\prime}
\nonumber \\&
+5\tilde{\alpha}_2(f^{\prime} b_1 - fb_1^{\prime})\big]\,,\\
\tilde{\mathcal{P}}_1{(k)}&=
\frac{{\I }}{5 k} P(k)
f \tilde{\alpha}_2 \left(5 b_1+3 f\right)-\frac{{\I }}{10 k^3 d}\big[3kP_{,k}(k) -2 P(k) \big] f^2\left(\alpha_1\tilde{\alpha}_2^{\prime}-\tilde{\alpha}_2\alpha_1^{\prime}\right)\,,\\
\tilde{\mathcal{P}}_2{(k)}&=
\frac{2}{3 k^2}P(k) f^2 \alpha_1 \tilde{\alpha}_2 +\frac{1}{21k^2d}\bigg\{kP_{,k}(k)\Big[
-(f^2+7 b_1f+7b_1^{\prime} f-7f^{\prime} b_1) \tilde\alpha_2+\left(6 f+7 b_1\right)f\tilde\alpha_2^{\prime}\Big]
\nonumber\\
& +P(k) \Big[
2 f \tilde\alpha_2\left(7 b_1-5 f\right)
-f\left(3 f+14 b_1\right)\tilde\alpha_2^{\prime}
+14 \tilde\alpha_2(f^{\prime} b_1-fb_1')
\Big]\bigg\}\,.
\end{align}
\section{Conclusions}
We have given for the first time the wide-angle corrections to the multi-tracer 2PCF and associated power spectrum, including the relativistic Doppler corrections which go beyond the normal redshift space distortion effect. We have also presented the wide-angle corrections for the density-magnification cross-power spectrum. The full-sky power spectrum is expanded as a series in powers of $1/kd$, with each term expanded into Legendre multipoles in the angle between the line-of-sight vector $\bm d$ and the mode vector $\bm k$. The coefficients of this expansion are just the coefficients of the equivalent expansion in $r/d$ of the 2PCF, weighted by an appropriate integral over the matter power spectrum. We have presented the coefficients for the equal-angle bisector and mid-point cases, as well as for an arbitrary line of sight.
A key result of our analysis has been to show the importance of the relativistic Doppler corrections \CC{as well as the derivative terms in the expansion -- both typically neglected in previous analyses. While the relativistic terms} enter the observed galaxy number density contrast at $O(\cH/k)$, they only enter the 2PCF at $O[(\cH/k)^2]$ -- except in the multi-tracer case, where the corrections appear at $O(\cH/k)$, making this a useful method for measuring relativistic effects. Similarly, the density-magnification 2PCF has also been shown to be an important probe of relativistic effects. However, once we are interested in effects which appear at $O(\cH/k)$, wide-angle effects need to be considered -- since in the 2PCF they arise at $O(r/d)$, and in the power spectrum at $O[1/(kd)]$, which for large-scale surveys is potentially a similar size to $O(\cH/k)$. Therefore, for a fully consistent approach we need an expansion in powers of $p+n$ where we consider terms
\be
\left(\frac{r}{d}\right)^p\left(\frac{\cH}{k}\right)^n\sim \left(\frac{1}{kd}\right)^p\left(\frac{\cH}{k}\right)^n\sim \left(\frac{\cH}{k}\right)^{p+n}\,,
\ee
\CC{as being of similar size, and we need to keep derivative terms at each order for consistency}. In doing this we find, in the galaxy-galaxy case, that the even multipoles at $O(r/d)$ receive relativistic corrections of $O(\cH/k)$, which implies that Newtonian wide-angle corrections at $O(x^2)$ are required \CC{for consistency}. In addition, we find that for the odd multipoles in the multi-tracer case only, Newtonian wide-angle corrections are required for a consistent treatment of the plane-parallel limit.
\CC{This is clearly seen in Fig.~\ref{jscndsk} where the $O(1/kd)$ terms are a significant correction for large $k$. This also shows the importance of the derivative terms in the wide-angle expansion.} This analysis implies that the relativistic calculations also require $n=2$ potential terms for a fully consistent wide-angle expansion, which is a straightforward extension that we leave for future work.
In the galaxy-magnification cross-power case, we show that as a potential observable of relativistic effects, the dipole and octupole have wide-angle corrections which are suppressed by a factor $(\cH/k)(r/d)$ over the plane-parallel limit. For the even multipoles, the wide-angle corrections are of a similar size to the plane-parallel case.
Finally, we have given a full discussion of the integrals that appear in the wide-angle expansion, i.e., $\mathcal{I}^p_{\ell\ell'}(k,q)$. Although these are well known for some values of $p, \ell, \ell'$, the full set of cases has not previously been discussed in this context \CC{or given in terms of elementary functions}. Furthermore, in \autoref{appI} we have given a new derivation of these integrals as distributions, together with a discussion of how to interpret them as Hadamard finite parts, which allows us to give a general formula for an analytic function integrated against a pair of spherical Bessel functions~\eqref{djskndksnskdjc}.
\acknowledgments
CC is supported by the UK Science \& Technology Facilities Council Consolidated Grant ST/P000592/1.
RM is supported by the South African Radio Astronomy Observatory and the National Research Foundation (Grant No. 75415).
\clearpage
\appendix
\section{The coefficients
$c_{n\ell}$ }\label{app1}
The $c_{n\ell}$ for the density-density 2PCF are given by \cite{Matsubara:1999du}, correcting typos in \cite{Szalay:1997cc}. First, we note that
a dimensionless alternative $\breve\xi^{(\breve n)}_\ell$ to our $\xi^{(n)}_\ell$ is used in \cite{Matsubara:1999du} [eq.~(3.45)]:
\be\label{cnlmtous}
\xi^{(n)}_\ell=(-1)^{\breve n+\ell}\,r^{\ell-2\breve n}\,\breve\xi^{(\breve n)}_\ell \quad \mbox{where}\quad \breve n= {1\over2}(n+\ell)\,.
\ee
{Then \cite{Matsubara:1999du} defines $\breve{c}^{(\breve n)}_\ell$, so that our $c_{n\ell}\,\xi^{(n)}_\ell$ corresponds to $\breve c^{(\breve n)}_\ell\, \breve\xi^{(\breve n)}_\ell$, taking into account the definition of $\breve\alpha_i$ in \cite{Matsubara:1999du}, which corresponds to our case via $\alpha_i=r^{\ell-2\breve n} \breve\alpha_i$.
The triangle configuration
in \cite{Matsubara:1999du} is defined by 3 interior angles: the opening angle $\Theta=\cos^{-1}\hat{\bm r}_1\cdot\hat{\bm r}_2$ and the remaining angles $\gamma_1,\gamma_2$. This configuration
corresponds naturally to an `end-point' line of sight in our set-up:
\be
\mbox{In \cite{Matsubara:1999du}:}\quad\hat{\bm d}=\hat{\bm r}_1\,,~~ \phi=0\,,~~\Theta=\theta\,,~~ \gamma_1=\gamma\,,~~\gamma_2=\pi-(\gamma+\theta) \,.
\ee
The $\breve{c}^{(\breve n)}_\ell$ given by \cite{Matsubara:1999du} [eqs.~(3.34--42)] become in our notation:
\bea
c_{00} & =& 1 + \frac{1}{3}(\beta_1+\beta_2) + \frac{1}{15}\beta_1\beta_2\left(2+\cos2\theta\right), \label{cnl} \\
c_{20} & = & \frac{1}{3}\alpha_1\alpha_2\beta_1\beta_2\cos{\theta}, \\
c_{11} & =& \alpha_1\beta_1\cos\gamma-\alpha_2\beta_2\cos(\gamma+\theta)
\notag\\&&{}
+\frac{1}{5}\beta_1\beta_2\Big \{ \alpha_1 \big[ 2\cos\gamma+\cos(\gamma+2\theta)\big ] - \alpha_2\big [ \cos(\gamma-\theta) + 2\cos(\gamma+\theta) \big ] \Big \}, \\
c_{02} & = & -\frac{1}{6}\beta_1 \big[1+3\cos(2\gamma) \big] - \frac{1}{6}\beta_2 \big[1+3\cos(2\gamma+2\theta) \big]
\notag\\&&{}
- \frac{1}{42}\beta_1\beta_2 \big[4 + 9 \cos(2\gamma + 2\theta) + 9 \cos(2\gamma) + 2\cos(2\theta) \big], \\
c_{22} & = &
-\frac{1}{6} \alpha_1 \alpha_2 \beta_1 \beta_2 \big[ \cos\theta + 3\cos(\theta + 2\gamma) \big], \\
c_{13} & =& \frac{1}{20}\beta_1\beta_2\Big \{ \alpha_2 \big[ \cos(\gamma - \theta) + 2\cos(\gamma + \theta) + 5\cos(\theta + 3\gamma) \big]
\notag \\ &&{}
- \alpha_1 \big[ \cos(2\theta + \gamma) + 5 \cos(2\theta + 3\gamma) + 2\cos(\gamma) \big] \Big\},
\\
c_{04} & = & \frac{1}{280}\beta_1\beta_2 \big[ 3\cos(2\theta) + 10\cos(2\gamma) + 10\cos(2\gamma + 2\theta) + 35 \cos(4\gamma + 2\theta) +6 \big] .
\eea
The bisector line of sight, i.e. $\phi=\theta$ in our notation, corresponds in \cite{Matsubara:1999du} to
$\Theta=2\theta$, $\gamma_1=\gamma-\theta$ and $\gamma_2=\pi-(\gamma+\theta)$.
The $\breve{c}^{(\breve n)}_\ell$ for the bisector case are given in \cite{Matsubara:1999du} [eqs.~(3.47--55)], with a different convention for the angle between $\hat{\bm d}$ and $\hat{ \bm r}$, i.e. $\breve\gamma =\pi-\gamma$. This leads to
\begin{align}
c_{00} &=1+{1\over3}\big(\beta_1+\beta_2\big)+{1\over15}\beta_1\beta_2 \big(2 + \cos4\theta\big),
\label{13}\\
c_{20} & = \frac{1}{3} \alpha_1\alpha_2\beta_1\beta_2\cos{2\theta}, \\
c_{11} & = \alpha_1\beta_1 \cos(\gamma - \theta) - \alpha_2\beta_2 \cos(\gamma + \theta)
\notag\\&~~
+\frac{1}{5}\beta_1\beta_2 \Big\{ \alpha_1 \big[2\cos(\gamma -\theta) + \cos(\gamma + 3\theta) \big] - \alpha_2\big[ 2\cos(\gamma + \theta) + \cos(\gamma - 3\theta) \big] \Big\},
\label{16}\\
c_{02} &= -\frac{1}{6}\beta_1 \big[1 + 3\cos(2\gamma - 2\theta) \big] - \frac{1}{6}\beta_2\big[1+ 3\cos(2\gamma + 2\theta) \big]
\notag\\&~~
- \frac{1}{42} \beta_1\beta_2 \big[ 4 + 2\cos 4\theta + 9\cos(2\gamma - 2\theta) + 9\cos(2\gamma + 2\theta) \big] ,
\label{14}\\
c_{22} &=-\frac{1}{6} \alpha_1\alpha_2\beta_1\beta_2\big[\cos 2\theta + 3\cos 2\gamma \big]
,
\\
c_{13} &= \frac{1}{20} \beta_1 \beta_2 \Big\{ \alpha_2 \big[ \cos(\gamma - 3\theta) + 2\cos(\gamma + \theta) + 5 \cos( 3\gamma - \theta) \big]
\notag\\&~~
- \alpha_1 \big[ \cos(\gamma + 3\theta) + 2\cos(\gamma - \theta) + 5 \cos(\theta + 3\gamma) \big] \Big\},
\label{17}\\
c_{04} &= \frac{1}{280} \beta_1\beta_2 \big[3 \cos4\theta + 10\cos(2\gamma + 2\theta) + 10\cos(2\gamma - 2\theta) + 35 \cos 4\gamma +6 \big].
\label{15}
\end{align}
}
In the general case of $\theta\neq\phi$ and $\phi\neq0$, we find that
\begin{align}
c_{00} &= 1 + \frac{1}{3}(\beta_1+\beta_2) + \frac{1}{15}\beta_1\beta_2\big[2+\cos2(\phi+\theta)\big]
\\ c_{20} &= \frac{1}{3}\beta_1\beta_2\alpha_1\alpha_2 \cos(\theta+\phi)
\\ c_{11} &= \frac{1}{5} \alpha_1\Big\{5\beta_1\cos(\phi-\gamma)+\beta_1\beta_2\big[2\cos(\phi - \gamma) + \cos(2\theta+\phi+\gamma)\big]\Big\}
\notag\\
&~~- \frac{1}{5}\alpha_2\Big\{5\beta_2\cos(\theta + \gamma)+\beta_1\beta_2\big[2\cos(\theta+\gamma) + \cos(2\phi+\theta-\gamma)\big] \Big\},
\\
c_{02} &=-\frac{1}{6}\beta_1[3\cos(2\phi-2\gamma)+1]-\frac{1}{6}\beta_2[3\cos(2\gamma+2\theta)+1]
\notag\\
& -\frac{1}{42}\beta_1\beta_2[4 + 9\cos(2\gamma+2\theta) + 9\cos(2\phi-2\gamma) + 2\cos(2\theta+2\phi)]
,
\\ c_{22} &= -\frac{1}{6}\alpha_1\alpha_2\beta_1\beta_2\big[\cos(\theta+\phi)+3\cos(\phi-2\gamma - \theta ) \big],
\\ %
c_{13} &=\frac{1}{20} \beta_1 \beta_2 \Big\{-\alpha_1\big[\cos (2 \theta+\phi+\gamma)+5 \cos (\phi-3 \gamma-2 \theta)+2 \cos (\phi-\gamma)\big]
\nonumber\\&~~
+\alpha_2\big[\cos (2 \phi+\theta-\gamma)+5 \cos (2 \phi-3 \gamma-\theta)+2 \cos (\theta+\gamma)\big] \Big\},
\\
c_{04} &= \frac{1}{280}\beta_1\beta_2\big[6 +35\cos2(\phi - 2\gamma - \theta) +10 \cos2(\phi - \gamma)
\notag\\&~~
+ 10 \cos2(\theta+\gamma) + 3\cos2(\phi + \theta)\big].
\end{align}
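As a consistency check (ours, for a representative subset of the coefficients), the general expressions should reduce to the end-point results for $\phi=0$ and to the bisector results for $\phi=\theta$:

```python
import sympy as sp

th, ph, ga, a1, a2, be1, be2 = sp.symbols(
    'theta phi gamma alpha_1 alpha_2 beta_1 beta_2')

# General-line-of-sight coefficients (a representative subset)
c00 = 1 + (be1 + be2)/3 + be1*be2*(2 + sp.cos(2*(ph + th)))/15
c20 = be1*be2*a1*a2*sp.cos(th + ph)/3
c22 = -a1*a2*be1*be2*(sp.cos(th + ph) + 3*sp.cos(ph - 2*ga - th))/6
c04 = be1*be2*(6 + 35*sp.cos(2*(ph - 2*ga - th)) + 10*sp.cos(2*(ph - ga))
               + 10*sp.cos(2*(th + ga)) + 3*sp.cos(2*(ph + th)))/280

# End-point limit phi = 0
ep = {c00: 1 + (be1 + be2)/3 + be1*be2*(2 + sp.cos(2*th))/15,
      c20: be1*be2*a1*a2*sp.cos(th)/3,
      c22: -a1*a2*be1*be2*(sp.cos(th) + 3*sp.cos(th + 2*ga))/6,
      c04: be1*be2*(3*sp.cos(2*th) + 10*sp.cos(2*ga) + 10*sp.cos(2*ga + 2*th)
                    + 35*sp.cos(4*ga + 2*th) + 6)/280}
for gen, lim in ep.items():
    assert sp.simplify(gen.subs(ph, 0) - lim) == 0

# Bisector limit phi = theta
bi = {c00: 1 + (be1 + be2)/3 + be1*be2*(2 + sp.cos(4*th))/15,
      c20: be1*be2*a1*a2*sp.cos(2*th)/3,
      c22: -a1*a2*be1*be2*(sp.cos(2*th) + 3*sp.cos(2*ga))/6,
      c04: be1*be2*(3*sp.cos(4*th) + 10*sp.cos(2*ga + 2*th)
                    + 10*sp.cos(2*ga - 2*th) + 35*sp.cos(4*ga) + 6)/280}
for gen, lim in bi.items():
    assert sp.simplify(gen.subs(ph, th) - lim) == 0
```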
For the coefficients of the galaxy-magnification 2PCF \eqref{8dop}, we obtain:
\begin{align}
{\tilde c}_{20} &=\frac{1}{3}\alpha_1\beta_1 \cos(\phi+\theta)\,,
\\
\tilde c_{11} &= -\frac{1}{5}\beta_1\Big[2\cos(\theta+\gamma) + \cos(2\phi+\theta - \gamma) \Big] - \cos(\theta + \gamma),
\\
\tilde c_{22} &= -\frac{1}{6}\alpha_1\beta_1 \Big[\cos(\theta+\phi) + \cos(\phi-2\gamma-\theta) \Big],
\\
\tilde c_{13} &= \frac{1}{20}\beta_1\Big[\cos(2\phi + \theta -\gamma) + 2\cos(\theta + \gamma) + 5\cos(2\phi -\theta - 3\gamma) \Big].
\end{align}
\section{Wide-angle expansion coefficients}\label{kdjsncskcnskjdn}
We organize these in terms of the hierarchy $p+n$, where
\be
\left(\frac{r}{d}\right)^p\left(\frac{\cH}{k}\right)^n\sim \left(\frac{\cH}{k}\right)^{p+n}\,.
\ee
Here we have used the fact that, for large-scale surveys, the two expansion parameters are of comparable magnitude.
\subsection{Coefficients $\Xi^{(p,n)}_{\ell\ell'}$ in the galaxy-galaxy wide-angle expansion for all $t$}
$\displaystyle \bm{O\left[({\cH}/{k})^0\right]:}$
\begin{align}
\Xi_{00}^{(0,0)} &= b_1b_2 + \frac{1}{3}(b_1 + b_2)f + \frac{1}{5}f^2\,,
\\
\Xi_{22}^{(0,0)} &= -\frac{2}{3}f(b_1+b_2) - \frac{4}{7}f^2\,,
\\
\Xi_{44}^{(0,0)} &= \frac{8}{35}f^2 \,,
\end{align}
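The three leading $(p=n=0)$ coefficients above can be evaluated exactly with rational arithmetic (a minimal sketch; the function name and the values $b_1=b_2=1$, $f=1/2$ are illustrative):

```python
from fractions import Fraction

def xi00_leading(b1, b2, f):
    """Leading (p = n = 0) coefficients Xi_{00}, Xi_{22}, Xi_{44}
    from the expressions above, evaluated exactly with Fractions."""
    xi_00 = b1 * b2 + Fraction(1, 3) * (b1 + b2) * f + Fraction(1, 5) * f**2
    xi_22 = -Fraction(2, 3) * f * (b1 + b2) - Fraction(4, 7) * f**2
    xi_44 = Fraction(8, 35) * f**2
    return xi_00, xi_22, xi_44

# Illustrative values b1 = b2 = 1, f = 1/2 give
# Xi_00 = 83/60, Xi_22 = -17/21, Xi_44 = 2/35.
print(xi00_leading(Fraction(1), Fraction(1), Fraction(1, 2)))
```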
$\displaystyle \bm{O\left[(r/d)^1\right]\sim O\left[({\cH}/{k})^1\right]:}$
\begin{align}
\Xi_{11}^{(0,1)} &= (\alpha_1 b_2 - \alpha_2b_1)f+\frac{3}{5}(\alpha_1 - \alpha_2)f^2,
\\
\Xi_{33}^{(0,1)} &= \frac{2}{5}(\alpha_2 - \alpha_1)f^2,
\\
\Xi_{10}^{(1,0)} &= \frac{1}{15}\Big\{ f'[3(2t-1)f+5(t-1)b_1+5tb_2] + 5(t-1)(f+3b_1)b_2' + 5t(f+3b_2)b_1'\Big\},
\\
\Xi_{12}^{(1,0)} &= -\frac{4}{35}f\big[7t(b_1 +b_2) + 6ft - 3f -7b_1\big]-\frac{4}{15}f\big[b_1't + b_2'(t-1)\big]
\nonumber\\
& + \frac{4}{105}f'\big[7b_1(1-t) +6f(1-2t)-7b_2t\big] ,
\\
\Xi_{32}^{(1,0)} &= \frac{4}{35}f\big[ 7t(b_1 + b_2)+6ft- 3f -7b_1\big] -\frac{2}{5}f[b_1't+b_2'(t-1)] \nonumber\\
& + \frac{2}{35}f'\bigg[6f(1-2t)+7b_1(1-t) - 7b_2t \bigg],
\\\Xi_{34}^{(1,0)} &=\frac{16}{63}f^2(2t-1) + \frac{32}{315}ff'(2t-1) ,
\\\Xi_{54}^{(1,0)} &= -\frac{16}{63}f^2(2t-1) +\frac{8}{63}ff'(2t-1),
\end{align}
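One property that is manifest from the two $(0,1)$ expressions above is that these odd multipole coefficients flip sign when the tracer labels are exchanged, $(\alpha_1,b_1)\leftrightarrow(\alpha_2,b_2)$. A minimal sketch checking this algebraic antisymmetry (function names and sample values are ours):

```python
from fractions import Fraction as F

def xi11_01(a1, a2, b1, b2, f):
    """Dipole coefficient Xi_{11}^{(0,1)} from the expressions above."""
    return (a1 * b2 - a2 * b1) * f + F(3, 5) * (a1 - a2) * f**2

def xi33_01(a1, a2, f):
    """Octupole coefficient Xi_{33}^{(0,1)} from the expressions above."""
    return F(2, 5) * (a2 - a1) * f**2

# The coefficients are odd under the tracer exchange
# (a1, b1) <-> (a2, b2), as is manifest in the expressions.
a1, a2, b1, b2, f = F(1, 10), F(3, 10), F(1), F(2), F(1, 2)
assert xi11_01(a1, a2, b1, b2, f) == -xi11_01(a2, a1, b2, b1, f)
assert xi33_01(a1, a2, f) == -xi33_01(a2, a1, f)
```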
\noindent $\displaystyle \bm{O\left[({r}/{d})^1\,({\cH}/{k})^1\right]:}$
\begin{align}
\Xi_{01}^{(1,1)} &=\frac{2}{15}f \Big\{5\alpha_1 b_2t - 5\alpha_2b_1(t-1)
+ f\big[\alpha_1(3t-2) + \alpha_2(1-3t) \big] \Big\}
\notag\\
&+\frac{1}{3}f\big[ \alpha_{1} b_2'(t-1)- \alpha_{2} b_1't\big] +\frac{1}{15}f'\Big\{3f(\alpha_{1}-\alpha_{2})(2 t-1) -5\big[\alpha_{2} b_1(t-1) -\alpha_{1}b_2t\big]\Big\}
\notag\\
&+\frac{1}{15}f\big[ \alpha_{1}'t(3 f+5 b_2) -\alpha_{2}'(t-1)(3f+5b_1)\big]
,
\\
\Xi_{21}^{(1,1)} &= -\frac{2}{15}f\Big\{f\big[\alpha_1(3t-2) + \alpha_2(1-3t) \big]
+ 5 \big[\alpha_1b_2t - \alpha_2b_1(t-1) \big] \Big\}
\notag\\
&+ \frac{2}{3}f\big[\alpha_1b_2'(t-1) - \alpha_2b_1't\big] + \frac{2}{15} f'\big[3f(\alpha_1 - \alpha_2)(2t-1) - 5\alpha_2b_1(t-1) + 5\alpha_1b_2t\big] \notag\\
&+ \frac{2}{15}f \big[\alpha_1't(3f+5b_2) - \alpha_2'(t-1)(3f+5b_1)\big],
\\
\Xi_{23}^{(1,1)} &= - \frac{8}{35}f^2 \Big[3t(\alpha_1 - \alpha_2) - 2\alpha_1 + \alpha_2 \Big]+\frac{6}{35}\Big\{f^2\Big[\alpha'_2(t-1) - \alpha'_1t \Big] - ff'(\alpha_1 - \alpha_2)(2t-1) \Big\} ,
\\
\Xi_{43}^{(1,1)} &= \frac{8}{35} \Big\{ -f^2\Big[\alpha_1(2-3t) + \alpha_2(3t-1) \Big]+f^2 \Big[\alpha'_2(t-1) - \alpha'_1t \Big] - ff'(\alpha_1 - \alpha_2)(2t-1) \Big\} ,
\end{align}
\noindent For completeness we also give $\displaystyle \bm{O\left[({r}/{d})^2\right]:}$
\begin{align}
\Xi_{00}^{(0,2)} &= \frac{1}{3}\alpha_1\alpha_2f^2\,,
\\
\Xi_{00}^{(2,0)} &= -\frac{4}{45}f^2+\frac{1}{18}b_1'\Big[6t(t-1)b_2'+2 t(t-1)f'+t^{2}(f+3 b_2)\Big]+\frac{1}{18}b_2'\Big\{ 2t(t-1)f'
\notag\\
& +\big[(t-1)^{2}(f+3 b_1)\big]\Big\}
+\frac{1}{15}t\left(t-1\right)f'^2+\frac{1}{90}f'\Big[3f\left( 2t^{2}-2t+1\right) +5b_1(t-1)^{2} +5b_2 t^{2}\Big]
\notag\\
&+\frac{1}{18}\Big[t^{2}b_1''(f+3 b_2)
+(t-1)^{2}b_2''(f+3 b_1)\Big] +\frac{1}{90}f''\Big[3\left(2t^{2}-2 t+1\right)f +5(t-1)^{2} b_1+5t^{2}b_2 \Big]
\,,
\\
\Xi_{20}^{(2,0)} &= \frac{4}{45}f^2+\frac{2}{15}t(t-1)f'^2 + \frac{2}{45}f'\Big[3f(2t-2t^2-1) - 5b_1(t-1)^2 - 5b_2t^2 \Big]
\notag\\
&+\frac{2}{9}b_1'\Big[3t(t-1)b_2' + f't(t-1) - t^2(f+3b_2) \Big] + \frac{2}{9}b_2'\Big[f't(t-1) -(f+3b_1)(t-1)^2 \Big]
\notag\\
&+ \frac{1}{9}\big[t^2b_1''(f+3b_2)
+ (t-1)^2b_2''(f+3b_1)\big] + \frac{1}{45}f''\Big[5b_2t^2 + 5b_1(t-1)^2+3f(2t^2 - 2t +1)\Big]
\,,
\\
\Xi_{02}^{(2,0)} &= \frac{2}{315}f \Big[f(18t^2 -18t -1) + 21b_1(t-1)^2 + 21b_2t^2 \Big]
+ \frac{8}{105}(f')^2t(1-t)
\notag\\&
+ \frac{2}{315}f' \Big[f(48t - 48t^2 - 6) - 28b_1(t-1)^2 - 28b_2t^2 \Big]
+\frac{4}{45}b'_1\Big[f't(1-t)-ft(2t-3) \Big]
\notag\\&
+ \frac{4}{45}b'_2\Big[f't(1-t)-f(2t-1)(t-1) \Big]-\frac{2}{45}f\Big[b''_1t^2 +b''_2(t-1)^2 \Big]
\notag\\&
+ \frac{2}{315}f''\Big[6f(2t - 2t^2 +1) - 7b_1(t-1)^2 - 7b_2t^2 \Big]
,
\\
\Xi_{22}^{(2,0)} &= \frac{2}{441}f \Big[f(198t^2 - 198t + 85) + 231b_1(t-1)^2 + 231b_2t^2 \Big]+ \frac{44}{147}(f')^2t(1-t)
\notag\\&
+ \frac{1}{63}b'_1\Big[22f't(1-t) - ft(11t-12) \Big] + \frac{1}{63}b'_2\Big[22f't(1-t) - f(11t +1)(t-1) \Big]
\notag\\&
- \frac{11}{63}f\Big[b''_1t^2 + b''_2(t-1)^2 \Big] + \frac{1}{441}f'\Big[f(132t - 132t^2 - 66) - 77b_1(t-1)^2 - 77b_2t^2 \Big]
\notag\\&
+ \frac{1}{441}f''\Big[f(132t - 132t^2 - 30) - 77b_1(t-1)^2 - 77b_2t^2 \Big],
\\
\Xi_{42}^{(2,0)} &=- \frac{32}{245}f \Big[3f(2t^2 - 2t +1) + 7b_1(t-1)^2 + 7b_2t^2 \Big] + \frac{8}{35}b'_1\Big[f't(1-t) + ft(3t-2) \Big]
\notag\\&
+ \frac{8}{35}b'_2\Big[f't(1-t) + f(3t-1)(t-1) \Big] + \frac{24}{245}f'\Big[7b_1(t-1)^2+7b_2t^2 + 4f(3t^2 - 3t +1)\Big]
\notag\\&
- \frac{4}{35}f \Big[b''_1t^2 + b''_2(t-1)^2 \Big]+ \frac{4}{245}f'' \Big[6f(2t - 2t^2 -1) - 7b_1(t-1)^2 - 7b_2t^2 \Big]
\notag\\&
+ \frac{48}{245}(f')^2t (1-t)
,
\\
\Xi_{24}^{(2,0)} &= \frac{8}{735}f^2(30t^2 - 30t + 1) + \frac{16}{735}ff'(4t-1)(4t+3)
+ \frac{32}{735}(f')^2t(t-1)
\notag\\&
+ \frac{16}{735}ff''(2t^2 - 2t + 1),
\\
\Xi_{44}^{(2,0)} &= -\frac{8}{2695}f^2(390t^2 - 390t + 97) + \frac{4}{2695}ff'(78t^2-78t+19)
+ \frac{312}{2695}t(t-1)(f')^2
\notag\\&
+ \frac{156}{2695}ff''(2t^2 - 2t + 1),
\\
\Xi_{64}^{(2,0)} &= \frac{64}{231}f^2(3t^2 - 3t + 1) - \frac{16}{231}ff'(10t^2-10t+3)
+ \frac{16}{231}t(t-1)(f')^2
\notag\\&
+ \frac{8}{231}ff''(2t^2 - 2t + 1),
\\
\Xi_{22}^{(0,2)} &= -\frac{2}{3}\alpha_1\alpha_2f^2.
\end{align}
\subsection{Coefficients $\tilde\Xi^{(p,n)}_{\ell\ell'}$ for the galaxy-magnification wide-angle expansion}
$\displaystyle \bm{O\left[({\cH}/{k})^1\right]:}$
\begin{align}
\tilde\Xi_{11}^{(0,1)} &= -\frac{1}{5}\tilde{\alpha}_2f(3f+5b_1),
\\
\tilde\Xi_{33}^{(0,1)} &= \frac{2}{5}\tilde{\alpha}_2f^2 \,.
\end{align}
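The two leading $O(\cH/k)$ galaxy-magnification coefficients can likewise be checked with exact arithmetic (a minimal sketch; the function name and the values $\tilde\alpha_2=1$, $b_1=1$, $f=1/2$ are illustrative):

```python
from fractions import Fraction

def gm_leading(alpha2t, b1, f):
    """Leading O(cH/k) galaxy-magnification coefficients
    Xi~_{11}^{(0,1)} and Xi~_{33}^{(0,1)} from the expressions above."""
    xi_11 = -Fraction(1, 5) * alpha2t * f * (3 * f + 5 * b1)
    xi_33 = Fraction(2, 5) * alpha2t * f**2
    return xi_11, xi_33

# Illustrative values alpha2t = 1, b1 = 1, f = 1/2 give
# Xi~_11 = -13/20 and Xi~_33 = 1/10.
print(gm_leading(Fraction(1), Fraction(1), Fraction(1, 2)))
```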
\noindent $\displaystyle \bm{O\left[({r}/{d})^1\,({\cH}/{k})^1\right]\sim O\left[({\cH}/{k})^2\right]:}$
\begin{align}%
\tilde\Xi_{00}^{(0,2)} &= \frac{1}{3}\alpha_1\tilde{\alpha}_2f^2 \,, \\
\tilde\Xi_{22}^{(0,2)} &= -\frac{2}{3}\alpha_1\tilde{\alpha}_2f^2\,, \\
\tilde\Xi_{01}^{(1,1)} &= \frac{2}{15}\tilde{\alpha}_2f^2(1-3t) - \frac{2}{3}\tilde{\alpha}_2b_1f(t-1) - \frac{1}{3}\tilde{\alpha}_2b_1'ft - \frac{1}{15}\tilde{\alpha}_2'f(3f+5b_1)(t-1)
\notag\\
&+ \frac{1}{15}f' \Big[3\tilde{\alpha}_2f(1-2t) - 5\tilde{\alpha}_2b_1(t-1) \Big],
\\
\tilde\Xi_{21}^{(1,1)} &= \frac{2}{15}\tilde{\alpha}_2f^2(1-3t) +\frac{2}{3}\tilde{\alpha}_2b_1f(t-1) - \frac{2}{3}\tilde{\alpha}_2b'_1ft - \frac{2}{15}\tilde{\alpha}_2'f(3f+5b_1)(t-1)
\notag\\
&+ \frac{1}{15}f' \Big[6\tilde{\alpha}_2f(1-2t)-10\tilde{\alpha}_2b_1(t-1) \Big],
\\
\tilde\Xi_{23}^{(1,1)} &= \frac{8}{35}\tilde{\alpha}_2f^2(3t-1) + \frac{6}{35}\Big[\tilde{\alpha}_2'f^2(t-1)+\tilde{\alpha}_2ff'(2t-1) \Big]\,,
\\
\tilde\Xi_{43}^{(1,1)} &= -\frac{8}{35}\tilde{\alpha}_2f^2(3t-1) + \frac{8}{35}\Big[\tilde{\alpha}_2'f^2(t-1) + \tilde{\alpha}_2ff'(2t-1) \Big]\,.
\end{align}
\noindent $\displaystyle \bm{O\left[({r}/{d})^2\,({\cH}/{k})^1\right]\sim O\left[({r}/{d})^1\,({\cH}/{k})^2\right]\sim O\left[({\cH}/{k})^3\right]:}$
\begin{align}%
\tilde\Xi_{10}^{(1,2)} &= \frac{1}{3}\alpha_1\tilde{\alpha}_2ff'(2t-1) + \frac{1}{3}f^2\big[\alpha_1'\tilde{\alpha}_2t + \alpha_1\tilde{\alpha}'_2(t-1)\big], \\
\tilde\Xi_{12}^{(1,2)} &= -\frac{2}{5}\alpha_1\tilde{\alpha}_2f^2(2t-1) - \frac{4}{15}\Big[\alpha_1\tilde{\alpha}_2'f^2(t-1) + \alpha_1'\tilde{\alpha}_2f^2t + \alpha_1\tilde{\alpha}_2ff'(2t-1)\Big],\\
\tilde\Xi_{32}^{(1,2)} &= \frac{2}{5}\alpha_1\tilde{\alpha}_2f^2(2t-1) - \frac{2}{5}\Big[\alpha_1\tilde{\alpha}_2'f^2(t-1) + \alpha_1'\tilde{\alpha}_2f^2t + \alpha_1\tilde{\alpha}_2ff'(2t-1) \Big]\,,
\\
\tilde\Xi_{11}^{(2,1)} &= \frac{1}{25}\tilde{\alpha}_2f \Big[f(9t^2 - 6t + 5) +15b_1(t-1)^2 \Big] - \frac{1}{10}\tilde{\alpha}_2b_1'ft(3t-4)
- \frac{1}{50}\tilde{\alpha}_2' \Big[30b_1'ft(t-1)
\notag\\
&-f^2(9t-1)(t-1) + 3b_1f(t-1)^2 \Big] - \frac{1}{450}f'\Big\{\tilde{\alpha}_2'\Big[270b_1(t-1)^2 + 162f(t-1)(2t-1)\Big]
\notag\\
&+ 270\tilde{\alpha}_2b_1't(t-1) + 9\tilde{\alpha}_2\Big[f(18t^2-14t+1) + 15b_1(t-1)^2 \Big] \Big\} - \frac{3}{50}\tilde{\alpha}_2f''\Big[3f(2t^2 - 2t +1)
\notag\\
&+5b_1(t-1)^2 \Big]- \frac{3}{50}\tilde{\alpha}_2''f(3f+5b_1)(t-1)^2 - \frac{3}{10}\tilde{\alpha}_2b_1''ft^2 - \frac{9}{25}\tilde{\alpha}_2(f')^2t(t-1)\,,
\\
\tilde\Xi_{31}^{(2,1)} &= -\frac{1}{25}\tilde{\alpha}_2f\Big[f(9t^2 - 6t +5)+15b_1(t-1)^2 \Big] + \frac{2}{5}\tilde{\alpha}_2b_1'ft(2t-1) + \frac{1}{25}\tilde{\alpha}_2' \Big[-10b_1'ft(t-1)
\notag\\
&+ 4f^2(3t-2)(t-1) + 20b_1f(t-1)^2 \Big] + \frac{1}{25}f' \Big\{- \tilde{\alpha}_2'\Big[10b_1(t-1)^2 + 6f(t-1)(2t-1) \Big]
\notag\\
&- 10\tilde{\alpha}_2b_1't(t-1) + 2\tilde{\alpha}_2 \Big[f(12t^2 - 11t +4) +10b_1(t-1)^2 \Big] \Big\} - \frac{1}{25}\tilde{\alpha}_2f'' \Big[3f(2t^2 - 2t +1)
\notag\\
&+ 5b_1(t-1)^2 \Big] - \frac{1}{25}\tilde{\alpha}_2''f(3f+5b_1)(t-1)^2 - \frac{1}{5}\tilde{\alpha}_2b_1''ft^2 - \frac{6}{25}\tilde{\alpha}_2(f')^2t(t-1)\,,
\\
\tilde\Xi_{13}^{(2,1)} &= \frac{2}{175}\tilde{\alpha}_2f^2(24t^2 - 16t -5) + \frac{4}{175}\tilde{\alpha}_2'f^2(9t-1)(t-1) +\frac{4}{175}f'\Big[3\tilde{\alpha}_2'f(2t-1)(t-1)
\notag\\
&+ \tilde{\alpha}_2f(18t^2-14t+1) \Big] + \frac{6}{175}\Big[\tilde{\alpha}_2ff''(2t^2 - 2t+1) + \tilde{\alpha}_2''f^2(t-1)^2 + 2\tilde{\alpha}_2(f')^2t(t-1) \Big],
\\
\tilde\Xi_{33}^{(2,1)} &= -\frac{2}{225}\tilde{\alpha}_2f^2(138t^2 - 92t +15) +\frac{1}{225}\tilde{\alpha}_2'f^2(23t-7)(t-1) + \frac{1}{225}f' \Big[46\tilde{\alpha}_2'f(2t-1)(t-1)
\notag\\
&+ \tilde{\alpha}_2f(46t^2-38t+7) \Big] +\frac{23}{225} \Big[\tilde{\alpha}_2ff''(2t^2-2t+1) + \tilde{\alpha}_2''f^2(t-1)^2 +2\tilde{\alpha}_2(f')^2t(t-1)\Big]\,,
\\
\tilde\Xi_{53}^{(2,1)} &= \frac{4}{63}\tilde{\alpha}_2f^2(15t^2 - 10t +3) - \frac{16}{63}\tilde{\alpha}_2'f^2(2t-1)(t-1) + \frac{8}{63}f' \Big[\tilde{\alpha}_2'f(2t-1)(t-1)
\notag\\
&- \tilde{\alpha}_2f(8t^2 - 7t +2) \Big] + \frac{4}{63} \Big[\tilde{\alpha}_2ff''(2t^2 -2t +1)+\tilde{\alpha}_2''f^2(t-1)^2 + 2\tilde{\alpha}_2(f')^2t(t-1)\Big]\,.
\end{align}
\noindent For completeness we also give the remaining $O\left[(r/d)^2\right]$ contributions, which are\\\\
$\displaystyle \bm{O\left[({r}/{d})^2\,({\cH}/{k})^2\right]\sim O\left[({\cH}/{k})^4\right]\,:}$
\begin{align}%
\tilde \Xi_{00}^{(2,2)} &= -\frac{1}{9}\alpha_1\tilde{\alpha}_2f^2 + \frac{1}{18}\alpha_1\tilde{\alpha}_2'f^2(t-1)^2 + \frac{1}{18}\alpha_1' \Big[2\tilde{\alpha}_2'f^2t(t-1)
+ \tilde{\alpha}_2f^2t^2 \Big]
\notag\\
&+ \frac{1}{18}f' \Big[2\alpha_1'\tilde{\alpha}_2ft(2t-1)+2\alpha_1\tilde{\alpha}_2'f(2t-1)(t-1)+\alpha_1\tilde{\alpha}_2f(2t^2-2t+1) \Big]
\notag\\
& +\frac{1}{18}\Big[\alpha_1\tilde{\alpha}_2ff''(2t^2 - 2t +1) + \alpha_1''\tilde{\alpha}_2f^2t^2 + \alpha_1\tilde{\alpha}_2''f^2(t-1)^2 \Big] +\frac{1}{9}\alpha_1\tilde{\alpha}_2(f')^2 t(t-1)\,,
\\
\tilde\Xi_{20}^{(2,2)} &= \frac{1}{9}\alpha_1\tilde{\alpha}_2f^2 - \frac{2}{9}\alpha_1\tilde{\alpha}_2'f^2(t-1)^2 + \frac{2}{9}\alpha'_1\Big[\tilde{\alpha}_2'f^2t(t-1) - \tilde{\alpha}_2f^2t^2\Big]
\notag\\
&+ \frac{2}{9}f'\Big[\alpha_1'\tilde{\alpha}_2ft(2t-1) + \alpha_1\tilde{\alpha}_2'f(t-1)(2t-1) - \alpha_1\tilde{\alpha}_2f(2t^2-2t + 1)\Big]
\notag\\
&+ \frac{1}{9}\Big[\alpha_1\tilde{\alpha}_2ff''(2t^2 -2t +1) + \alpha_1''\tilde{\alpha}_2f^2t^2 + \alpha_1\tilde{\alpha}_2''f^2(t-1)^2 \Big] + \frac{2}{9}\alpha_1\tilde{\alpha}_2(f')^2t(t-1)\,, \\
\tilde\Xi_{02}^{(2,2)} &= -\frac{2}{45}\alpha_1\tilde{\alpha}_2f^2(3t^2-3t -2) - \frac{2}{45}\alpha_1\tilde{\alpha}_2'f^2(4t-1)(t-1)
- \frac{2}{45}\alpha_1' \Big[2\tilde{\alpha}_2'f^2t(t-1) + \tilde{\alpha}_2f^2t(4t-3) \Big]
\notag\\
&- \frac{1}{45}f' \Big[4\alpha_1'\tilde{\alpha}_2ft(2t-1) + 4\alpha_1\tilde{\alpha}_2'f(t-1)(2t-1) + 2\alpha_1\tilde{\alpha}_2f(8t^2 - 8t+1) \Big]
\notag\\
&- \frac{2}{45} \Big[2\alpha_1\tilde{\alpha}_2ff''(2t^2 - 2t + 1) + 2\alpha_1''\tilde{\alpha}_2f^2t^2 + 2\alpha_1\tilde{\alpha}_2''f^2(t-1)^2 \Big] - \frac{4}{45}\alpha_1\tilde{\alpha}_2(f')^2t(t-1)\,,
\\
\tilde\Xi_{22}^{(2,2)} &= \frac{2}{63}\alpha_1\tilde{\alpha}_2f^2(33t^2-33t+8) - \frac{1}{63}\alpha_1\tilde{\alpha}_2'f^2(11t-5)(t-1) - \frac{1}{63}\alpha_1'\Big[22\tilde{\alpha}_2'f^2t(t-1)
\notag\\
&+ \tilde{\alpha}_2f^2t(11t-6) \Big] - \frac{1}{63}f' \Big[22\alpha_1'\tilde{\alpha}_2ft(2t-1) + 22\alpha_1\tilde{\alpha}_2'f(2t-1)(t-1) + \alpha_1\tilde{\alpha}_2f(22t^2 - 22t +5) \Big]
\notag\\
&- \frac{11}{63}\Big[\alpha_1\tilde{\alpha}_2ff''(2t^2 -2t +1) + \alpha_1''\tilde{\alpha}_2f^2t^2 + \alpha_1\tilde{\alpha}_2''f^2(t-1)^2 \Big] - \frac{22}{63}\alpha_1\tilde{\alpha}_2(f')^2t(t-1)\,,
\\
\tilde\Xi_{42}^{(2,2)} &= -\frac{4}{35}\alpha_1\tilde{\alpha}_2f^2(8t^2 - 8t +3) + \frac{8}{35}\alpha_1\tilde{\alpha}_2'f^2(3t-2)(t-1) + \frac{8}{35}\alpha_1'\Big[\tilde{\alpha}_2f^2t(3t-1)
\notag\\
&- \tilde{\alpha}_2'f^2t(t-1) \Big] + \frac{8}{35}f' \Big[2\alpha_1\tilde{\alpha}_2f(3t^2 - 3t +1) - \alpha_1\tilde{\alpha}_2'f(2t-1)(t-1) - \alpha_1'\tilde{\alpha}_2ft(2t-1) \Big]
\notag\\
&- \frac{4}{35}\Big[\alpha_1\tilde{\alpha}_2ff''(2t^2-2t+1) + \alpha_1''\tilde{\alpha}_2f^2t^2 + \alpha_1\tilde{\alpha}_2''f^2(t-1)^2 \Big] -\frac{8}{35}\alpha_1\tilde{\alpha}_2(f')^2t(t-1)
\end{align}
\clearpage
\section{Analysis of the integrals $\mathcal{I}^p_{\ell\ell'}(k, q)$}
\label{appI}
\subsection{New derivation of $\mathcal{I}^0_{\ell\ell'}(k, q)$}
Here we give a new derivation of the formulas for $\mathcal{I}^0_{\ell\ell'}(k, q)$, which starts from purely convergent integrals. We begin with
\be
\mathcal{I}^{-2}_{\ell\ell'}(k,q)=\int_0^\infty \ud r \,j_\ell(kr)\,j_{\ell'}(qr)\,.
\ee
Now, for large $r$,
\be
j_\ell(kr) \sim \frac{\cos\!\big(kr-\pi(\ell+1)/2\big)}{kr}\,, \qquad r\to\infty\,,
\ee
so the integrand decays as $1/r^2$ and these integrals converge absolutely, in contrast to the case $p=0$, where the integral does not converge.
First, we define
\begin{align}
\tilde g_{\ell \ell^{\prime}}\!\left({k}, {q}\right)=& \frac{\pi }{4{k}} \left(\frac{q}{k}\right)^{\ell}\!\frac{\Gamma\left[\left(\ell+\ell^{\prime}+1\right) / 2\right]}{\Gamma\left(\ell+3 / 2\right) \Gamma\left[1+\left(\ell^{\prime}-\ell\right) / 2\right]}\,
{ }_{2} F_{1}\!\left(\frac{\ell+\ell^{\prime}+1}{2}, \frac{\ell-\ell^{\prime}}{2}; \ell+\frac{3}{2} ; \frac{{q}^{2}}{{k}^{2}}\right)\,,
\end{align}
which is the result {\sc Maple} gives. However, this expression alone does not cover all values of $\ell$, $\ell'$, $k$ and $q$. For $\ell-\ell'$ odd or zero, we have
\be
\mathcal{I}^{-2}_{\ell\ell'}(k,q) = \Theta(k-q)\,\tilde g_{\ell' \ell}\left({k}, {q}\right)+\Theta(q-k)\,\tilde g_{\ell \ell'}\left(q,k\right)\,,
\ee
while for $\ell-\ell'$ even, we have
\be
\mathcal{I}^{-2}_{\ell\ell'}(k,q) = \Theta(\ell-\ell')\Theta(k-q)\tilde g_{\ell' \ell}\left({k}, {q}\right) +\Theta(\ell'-\ell)\Theta(q-k)\tilde g_{\ell \ell'}\left(q,k\right)\,.
\ee
These formulas work for $k=q$ provided that we use the definition of the step function:
\begin{align}
\Theta\left({k}-{q}\right)=\left\{\begin{array}{lll}
1 &\quad\mbox{for}\quad & {k}>{q}\,, \\
{1}/{2} &\quad\mbox{for}\quad & k=q\,, \\
0 &\quad\mbox{for}\quad & {k}<{q}\,.
\end{array}\right.
\end{align}
The formulas can be converted to elementary functions; for example
\begin{align}
\mathcal{I}^{-2}_{0,0}(k, q)&=\frac{ \pi}{2 q}\Theta(q-k)+\frac{ \pi}{2 k}\Theta(k-q)\,, \\
\mathcal{I}^{-2}_{1,0}(k, q)&=\frac{(k-q)(q+k) }{4 q k^{2}}\ln\dfrac{q+k}{|k-q|}+\frac{1}{2 k}\,, \\
\mathcal{I}^{-2}_{1,1}(k, q)&=\frac{ k \pi}{6 q^{2}}\Theta(q-k)+\frac{ q \pi}{6 k^{2}}\Theta(k-q)\,, \\
\mathcal{I}^{-2}_{2,0}(k, q)&=\frac{\pi(k-q)(q+k) }{4 k^{3}}\Theta(k-q) \,,\\
\mathcal{I}^{-2}_{2,1}(k, q)&=\frac{(k-q)(q+k)\left(k^{2}+3 q^{2}\right) }{16 k^{3} q^{2}}\ln \dfrac{q+k}{|k-q|}-\frac{k^{2}-3 q^{2}}{8 q k^{2}}\,, \\
\mathcal{I}^{-2}_{2,2}(k, q)&=\frac{ k^{2} \pi}{10 q^{3}}\Theta(q-k)+\frac{ q^{2} \pi}{10 k^{3}}\Theta(k-q)\,.
\end{align}
These are similar in form to $\mathcal{I}^{0}_{\ell\ell'}(k,q) $, but without the $1/(k-q)$ singular points. The points where $\ln|k-q|$ causes problems in $\mathcal{I}^{0}_{\ell\ell'}(k,q) $ always appear as $(k-q)\ln|k-q|$ here, which is well behaved as $k\to q$. So these integrals always converge to finite values.
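These closed forms are easy to verify by brute-force quadrature, since the $p=-2$ integrands decay as $1/r^2$. A minimal numerical sketch using SciPy (the cutoff and grid density are assumptions, chosen so that the oscillatory tail is negligible):

```python
import numpy as np
from scipy.integrate import simpson
from scipy.special import spherical_jn

def I_m2(ell, ellp, k, q, rmax=400.0, n=400001):
    # brute-force quadrature of the absolutely convergent p = -2 integral;
    # the tail beyond rmax is O(1/(k q rmax)) with further oscillatory cancellation
    r = np.linspace(0.0, rmax, n)
    return simpson(spherical_jn(ell, k * r) * spherical_jn(ellp, q * r), x=r)

k, q = 2.0, 3.0
val00 = I_m2(0, 0, k, q)    # compare with pi/(2q), since q > k here
val10 = I_m2(1, 0, k, q)    # compare with the (1,0) closed form above
print(val00, np.pi / (2 * q))
print(val10, (k - q) * (k + q) / (4 * q * k**2) * np.log((k + q) / abs(k - q)) + 1 / (2 * k))
```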
From these well-behaved non-singular formulas we can derive all $\mathcal{I}^{p}_{\ell\ell'}(k,q) $ for $p\geq-1$, using the differentiation formulas given in the text. In terms of hypergeometric functions, these are not particularly helpful, although they are easy to derive (but painful to simplify!). However, for low values of $\ell,\ell'$ that we are interested in, it is straightforward in terms of elementary functions:
\begin{description}
\item[$\bm{p=-1}$] A single derivative of the $\mathcal{I}^{-2}_{\ell\ell'}(k,q) $ formulas leaves weak singular points $\sim\ln|k-q|$. However, no delta functions appear, because in $\mathcal{I}^{-2}_{\ell\ell'}(k,q) $ the step functions are either symmetric in $k$ and $q$ (so that their derivatives cancel) or, where they appear alone, they are accompanied by a factor $k-q$, so that any delta function appears as $(k-q)\delta(k-q)=0$. In the resulting formulas, however, step functions do appear without an accompanying factor of $k-q$. We find for the first few:
\begin{align}
\mathcal{I}^{-1}_{0,0}(k, q)&=\frac{1}{2 k q}\ln \frac{k+q}{|k-q|},\\
\mathcal{I}^{-1}_{1,0}(k, q)&=\frac{\pi }{2 k^{2}}\Theta(k-q),\\
\mathcal{I}^{-1}_{1,1}(k, q)&=-\frac{\left(k^{2}+q^{2}\right) }{4 k^{2} q^{2}} \ln \frac{|k-q|}{k+q}-\frac{1}{2 q k},\\
\mathcal{I}^{-1}_{2,0}(k, q)&=\frac{3q }{4 k^{3}}\ln \frac{|k-q|}{(k+q)}+\frac{1}{4 q k}\ln \frac{k+q}{|k-q|}+\frac{3}{2 k^{2}},\\
\mathcal{I}^{-1}_{2,1}(k, q)&=\frac{q \pi }{2 k^{3}}\Theta(k-q),\\
\mathcal{I}^{-1}_{2,2}(k, q)&=-\frac{\left(3 k^{4}+2 k^{2} q^{2}+3 q^{4}\right) }{16 k^{3} q^{3}}\ln \frac{|k-q|}{k+q}-\frac{3\left(k^{2}+q^{2}\right)}{8 q^{2} k^{2}}.
\end{align}
\item[$\bm{p=0}$] Another derivative implies that $\ln|k-q|\to 1/(k-q)$ and the lone step functions now lead to the delta functions given in \eqref{dsjkcbnsjcskn}.
\end{description}
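The $p=-1$ formulas can be checked in the same spirit once the conditionally convergent integrals are Abel-regulated by a factor $e^{-\epsilon r}$; a sketch for $\mathcal{I}^{-1}_{0,0}$ (the regulator strength and grid are assumptions):

```python
import numpy as np
from scipy.integrate import simpson
from scipy.special import spherical_jn

def I_m1_00(k, q, eps, rmax=1500.0, n=1500001):
    # Abel-regulated version of the conditionally convergent p = -1 integral
    r = np.linspace(0.0, rmax, n)
    y = r * spherical_jn(0, k * r) * spherical_jn(0, q * r) * np.exp(-eps * r)
    return simpson(y, x=r)

k, q = 2.0, 3.0
approx = I_m1_00(k, q, eps=0.01)
exact = np.log((k + q) / abs(k - q)) / (2 * k * q)   # closed form for I^{-1}_{0,0}
print(approx, exact)
```

The regulated value differs from the $\epsilon\to0$ closed form only at $O(\epsilon)$, which is far below the quadrature accuracy here.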
There are a variety of ways to check that these formulas make sense. For $p=-2, -1$ we can just evaluate them numerically and check the results against these formulas. Alternatively, for low values of $\ell,\ell'$, we can write $\int_{0}^{\infty}
=\lim_{t\to\infty}\int_{0}^{t}$, and perform the integral analytically.
For $p=0$ these integrals are divergent, but we can check numerically that the formulas make sense for $q\neq k$. We give examples of simple cases, starting with $(\ell,\ell',k,q)=(1, 1, 2, 3)$, which is just an example of the closure relation -- and thus should give zero. However, this is in fact not straightforward:
\begin{align}
&\int_{0}^{\infty} j_{1}({2} r) j_{1}\left({3} r\right) r^{2} d r
=\lim_{t\to\infty}\int_{0}^{t} j_{1}({2} r) j_{1}\left({3} r\right) r^{2} d r
\\\notag &
=\lim_{t\to\infty}\frac{30 t\sin t+6t \sin (5 t) -5 \cos t+5 \cos (5 t)}{360 t}\,,
\end{align}
which does not converge. However, the mean of this \emph{is} zero, which is the result of the closure relation.
Trying other values of $(\ell,\ell',k,q)$, the general result works in the same way. For example, $(2, 3, 2, 3)$ gives 0.07447888703 from the formulas above, while the numerical integral does not converge as the upper limit $\to\infty$. Rewriting the spherical Bessel functions in terms of sines and cosines and integrating gives a limit which oscillates between
$
-\frac{913}{1440}+\frac{233 \ln 5}{3456}$ and $\frac{163}{288}+\frac{233 \ln 5}{3456},
$
with a mean value of 0.07447888703. From a distributional point of view, the oscillations cancel out, leaving only the mean.
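This cancellation can be seen numerically: averaging the partial integrals $F(t)=\int_0^t j_2(2r)\,j_3(3r)\,r^2\,\ud r$ over one $2\pi$ window removes the oscillations (the two frequencies $k-q=1$ and $k+q=5$ both have periods dividing $2\pi$) and leaves the mean. A sketch, with the grid and window location as assumptions:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.special import spherical_jn

# partial integrals F(t) for the (l, l', k, q) = (2, 3, 2, 3) case
T = 500.0
r = np.linspace(0.0, T + 2 * np.pi, 600001)
F = cumulative_trapezoid(spherical_jn(2, 2 * r) * spherical_jn(3, 3 * r) * r**2,
                         x=r, initial=0.0)

# averaging over the final 2*pi window kills the sin(r) and sin(5r) oscillations
mean = F[r >= T].mean()
print(mean)   # ~ 0.07448, the mean quoted above
```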
For $p\geq1$ these results can be checked by integrating against a compact function in $k,q$ to ensure their \emph{distributional form} is correct. We have checked for small values of $|\ell-\ell'|+p $ even, where integrals such as
\be
\int_0^\infty \ud k\int_0^\infty \ud q \int_0^\infty \ud r \,r^{2+p}\, j_\ell(kr)\,j_{\ell'}(qr)\, k^2 q^2 {\rm e}^{-k^2-q^2}\,,
\ee
can be computed analytically by integrating over $k$ and $q$ first and then computing the $r$ integral. These can then be compared with the same integrals computed with the distributions calculated here, which indeed agree.
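For instance, for $(p,\ell,\ell')=(1,1,0)$ the distributional form $\mathcal{I}^1_{1,0}=-\frac{\pi}{2q^2}\,\delta'(k-q)$ (tabulated in the next appendix) collapses the $k$ and $q$ integrals to $\int_0^\infty \pi k^3 {\rm e}^{-2k^2}\,\ud k = \pi/8$. A numerical sketch of the $r$-first evaluation (the finite quadrature limits are assumptions; the Gaussian weights make them harmless):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

def A(ell, r):
    # Gaussian-weighted transform A_l(r) = integral of k^2 exp(-k^2) j_l(kr) dk
    val, _ = quad(lambda k: k**2 * np.exp(-k**2) * spherical_jn(ell, k * r),
                  0.0, 10.0, limit=200)
    return val

# r-first evaluation of the triple integral for (p, l, l') = (1, 1, 0)
direct, _ = quad(lambda r: r**3 * A(1, r) * A(0, r), 0.0, 20.0, limit=200)

# distributional evaluation: -(pi/2q^2) delta'(k-q) gives pi/8 after the k integral
print(direct, np.pi / 8)
```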
\subsection{Integrating the distributions -- dealing with the singularities}\label{dsjkcnsdkcnskc}
As we saw by calculating everything from $\mathcal{I}^{-2}_{\ell\ell'}(k,q) $, the integrals $\mathcal{I}^{p}_{\ell\ell'}(k,q)$ should give meaningful answers for $p\geq0$ even though the integrals themselves are divergent. One way to see this is to write
\begin{align}
\int_0^\infty {\ud q}\, q^{2-n}\,P_{\rm m}(q) \mathcal{I}^p_{\ell\ell'}(k,q)
&= \left[-\frac{\partial^2}{\partial k^2}-\frac{2}{k}\frac{\partial}{\partial k}+\frac{\ell(\ell+1)}{k^2}\right]^m\int_0^\infty {\ud q}\, q^{2-n}\,P_{\rm m}(q) \mathcal{I}^{p-2m}_{\ell\ell'}(k,q)\,.
\end{align}
While the lhs appears to be divergent, the integrals $\mathcal{I}^{p-2m}_{\ell\ell'}$ on the rhs are convergent for $2m=p+2$ ($p$ even) or $2m=p+1$ ($p$ odd), so differentiating the integral on the right must give a finite answer. Where the integrals give distributions in the form of delta functions, it is clear what this means. For the other cases it is a bit more subtle, since a brute-force numerical or analytical evaluation will give infinite answers (this is a key reason for treating the Fourier transforms as formal mathematical transforms, rather than introducing cutoffs in $r$ to try to remain within physical constraints). We know that all the singular points which are not delta functions come from derivatives of $(k-q)\ln|k-q|$, which give the singular terms of the form
\be
\ln|k-q|,~~~\frac{1}{k-q},~~~\frac{1}{(k-q)^2},\cdots,\frac{1}{(k-q)^{p+1}}\,,
\ee
as we differentiate repeatedly.
The first is termed weakly singular, the second singular, and the higher powers, hyper-singular points. Consequently, what we need to understand is
\be
\frac{\partial^i}{\partial k^i}\int_a^b \ud q f(q) (k-q)\ln|k-q| = \int_a^b \ud q f(q) \frac{\partial^i}{\partial k^i}\left[(k-q)\ln|k-q|\right]\,,
\ee
where we just focus on a region around $q=k$, so $0<a<k<b$.
On the left we have a regular expression but on the right we have now apparently created an integral with singular points, which diverges for $i>1$ \cite{MONEGATO2009425}. How do we make sense of this? For $i=1$ we can simplify,
\begin{align}
&\int_a^b \ud q\, f(q) \ln|k-q|=\int_a^b \ud q\, [f(q)-f(k)]\, \ln|k-q|+\int_a^b \ud q\, [f(k)]\, \ln|k-q|
\\ \nonumber
&= %
f(k)\left[(b - k)\ln(b - k) + (k - a)\ln(k - a) + a - b\right]+ \int_a^b \ud q\, [f(q)-f(k)]\, \ln|k-q|,
\end{align}
where the remaining integral converges assuming that $f(q)$ is analytic in the neighbourhood of $k=q$. On taking a derivative of this with respect to $k$ we have
\be
\frac{\partial}{\partial k}\int_a^b \ud q\, f(q) \ln|k-q| = \int_a^b \ud q\, \frac{f(q)-f(k)}{k-q} + f(k)\ln\frac{k-a}{b-k}= \dashint_a^b \ud q\, \frac{f(q)}{k-q}\,,\label{dscskdcscs}
\ee
where $\dashint$ represents the Cauchy Principal Value:
\be
\dashint_a^b \ud q\, \frac{f(q)}{k-q} = \lim_{\epsilon\to0}\left[\int_a^{k-\epsilon}+\int_{k+\epsilon}^b\right]\ud q\, \frac{f(q)}{k-q}\,.
\ee
That is, a symmetric region about the singular point is removed and the limit taken to zero. Note all the integrals in~\eqref{dscskdcscs} converge and terms involving $f'(k)$ cancel. We conclude that when we write an expression like
\be
\frac{\partial}{\partial k}\int_a^b \ud q\, f(q) \ln|k-q| =\int_a^b \ud q\, \frac{f(q)}{k-q}\,,
\ee
swapping the derivative and integral means that we are only taking the principal value of the integral on the right; instead we should write
\be
\frac{\partial}{\partial k}\int_a^b \ud q\, f(q) \ln|k-q| =\dashint_a^b \ud q\, \frac{f(q)}{k-q}\,.
\ee
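Both definitions are straightforward to implement; a numerical sketch comparing the symmetric excision with the subtracted form above (the example $f$, interval, and excision width are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

f = np.exp                    # any smooth f will do
a, b, k = 0.0, 2.0, 1.0

# Cauchy principal value via symmetric excision of (k - eps, k + eps)
eps = 1e-5
pv_excised = (quad(lambda q: f(q) / (k - q), a, k - eps, limit=200)[0]
              + quad(lambda q: f(q) / (k - q), k + eps, b, limit=200)[0])

# the subtracted form: a regular integral plus an explicit log term
reg, _ = quad(lambda q: (f(q) - f(k)) / (k - q), a, b, points=[k], limit=200)
pv_subtracted = reg + f(k) * np.log((k - a) / (b - k))

print(pv_excised, pv_subtracted)   # the two agree
```

SciPy's `quad` can also return such principal values directly via `weight='cauchy'`, `wvar=k`, whose kernel is $1/(q-k)$, i.e.\ minus the convention used here.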
We can take another derivative~\cite{Zozulya:2015:RDI}:
\begin{align}
\frac{\partial^2}{\partial k^2}&\int_a^b \ud q\, f(q) \ln|k-q| = \frac{\partial}{\partial k}\left[\int_a^b \ud q\, \frac{f(q)-f(k)}{k-q} + f(k)\ln\frac{k-a}{b-k}\right]\nonumber\\
&=-\int_a^b \ud q\, \frac{f(q)-f(k)-f'(k)(q-k)}{(k-q)^2}+ f(k) \ddashint_a^b \ud q\, \frac{1}{(k-q)^2} + f'(k)\dashint_a^b \ud q\, \frac{1}{k-q} \nonumber\\
&=-\int_a^b \ud q\, \frac{f(q)-f(k)-f'(k)(q-k)}{(k-q)^2}+ f(k)\left[\frac{1}{k-a}+\frac{1}{b-k}\right]+f'(k)\ln\frac{k-a}{b-k}\nonumber\\
&=
\ddashint_a^b \ud q\, \frac{f(q)}{(k-q)^2}\,.
\end{align}
Here, the notation $\ddashint$ denotes the Hadamard finite part of the integral, which generalises the Cauchy Principal Value to hyper-singular points: we delete a small interval of size $\epsilon$ around $k=q$ (it need not be symmetric), take the limit $\epsilon\to0$, and discard any divergent terms.
Alternatively, following \cite{MONEGATO2009425},
\begin{align}
&\frac{\partial^2}{\partial k^2}\int_a^b \ud q\, f(q) \ln|k-q| = \frac{\partial}{\partial k}\left[\int_a^b \ud q\, \frac{f(q)-f(k)}{k-q} + f(k)\ln\frac{k-a}{b-k}\right]\nonumber\\
&=-\dashint_a^b \ud q\, \frac{f(q)-f(k)}{(k-q)^2}+ f(k) \ddashint_a^b \ud q\, \frac{1}{(k-q)^2} %
=-\dashint_a^b \ud q\, \frac{f(q)-f(k)}{(k-q)^2}+ f(k)\left[\frac{1}{k-a}+\frac{1}{b-k}\right]\nonumber\\
&=
\ddashint_a^b \ud q\, \frac{f(q)}{(k-q)^2}\,.
\end{align}
These expressions differ only by divergent terms which we can miraculously ignore (see \cite{MONEGATO2009425}).
In a similar manner we can find the finite part of
\be
\ddashint_a^b \ud q\, \frac{1}{(k-q)^{p+1}} = \frac{(-1)^p}{p!}\frac{\partial^p}{\partial k^p}\dashint_a^b \ud q\, \frac{1}{k-q}\,.
\ee
Numerically we find it straightforward to calculate these integrals using integration by parts, which naturally returns the finite part. For example,
\begin{align}
\dashint_a^b \ud q\, \frac{f(q)}{k-q} & = \frac{\partial}{\partial k}\int_a^b \ud q\, f(q)\ln|k-q|
= \int_a^b \ud q\, f(q) \frac{\partial}{\partial k}\ln|k-q|\nonumber\\
& = \int_a^b \ud q\, f'(q) \ln|k-q| - \Big[f(q)\ln|k-q|\Big]_{q=a}^{b}\,,
\end{align}
where we used $\partial_k\ln|k-q| = -\partial_q\ln|k-q|$ and integrated by parts in $q$.
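A sketch of this numerical strategy, with the boundary terms from the integration by parts written out explicitly (the example $f$ and interval are arbitrary; the symmetric-excision value is used as the reference):

```python
import numpy as np
from scipy.integrate import quad

f, fprime = np.exp, np.exp        # example f = e^q, so f' = e^q
a, b, k = 0.0, 3.0, 1.0

# integrate by parts once: the weakly singular log is integrable, so ordinary
# quadrature (told about the point q = k) returns the finite, principal-value part
byparts = (quad(lambda q: fprime(q) * np.log(abs(k - q)), a, b, points=[k], limit=200)[0]
           - (f(b) * np.log(abs(k - b)) - f(a) * np.log(abs(k - a))))

# reference: principal value by symmetric excision of the singular point
eps = 1e-5
reference = (quad(lambda q: f(q) / (k - q), a, k - eps, limit=200)[0]
             + quad(lambda q: f(q) / (k - q), k + eps, b, limit=200)[0])
print(byparts, reference)
```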
In general, our integrals consist of regular parts plus singular parts and are of the form
\be
\int_0^\infty \ud q\, \left[ f_1(q,k)[\ln|k-q|-\ln(k+q)]+\frac{f_2(q,k)}{(k-q)^{p+1}}\right]\,.
\ee
The singularities at $k=q$ can be evaluated by parts, assuming that $P_{\rm m}(q)$ and its derivatives vanish sufficiently rapidly at $q=0$ and $q=\infty$. Integrating the last term by parts implies that the non-regular parts of this integral can be written as
\begin{align}
& \int_0^\infty\! \ud q\! \left[ f_1(q,k)\ln|k-q|+\frac{f_2(q,k)}{(k-q)^{p+1}}\right] = \int_0^\infty\!\! \ud q \ln|k-q|\!\left[ f_1(q,k)+\frac{(-1)^{p+1}}{p!}\frac{\partial^{p+1} f_2(q,k)}{\partial q^{p+1}}\right]
\nonumber\\
&= \int_0^\infty \ud q\, (k-q)(1-\ln|k-q|)\left[-\frac{\partial f_1(q,k)}{\partial q}+\frac{(-1)^{p+1}}{p!}\frac{\partial^{p+2} f_2(q,k)}{\partial q^{p+2}}\right],
\end{align}
where we integrated by parts to remove the logarithmic singularity.
Note that the part of the integrand containing $(k-q)(1-\ln|k-q|)$ is now regular~\cite{Zozulya:2015:RDI}. Therefore we have
\begin{align}
&\int_0^\infty \!\ud q \left\{ f_1(q,k)\Big[\ln|k-q|-\ln(k+q)\Big]+\frac{f_2(q,k)}{(k-q)^{p+1}}\right\}\\ \nonumber
&=\int_0^\infty \!\ud q \left\{- f_1(q,k)\ln(k+q)-(k-q)(1-\ln|k-q|)\left[\frac{\partial f_1(q,k)}{\partial q}-\frac{(-1)^{p+1}}{p!}\frac{\partial^{p+2} f_2(q,k)}{\partial q^{p+2}}\right]\right\}.
\end{align}
In reality we are dealing with an integral over the power spectrum with finite limits so the boundary terms need to be taken into account. In general, %
\begin{align}
&\int_a^b\! \ud q \left[ f_1(q,k)[\ln|k-q|-\ln(k+q)]+\frac{f_2(q,k)}{(k-q)^{p+1}}\right]\nonumber\\
&=\int_a^b\! \ud q \left\{- f_1(q,k)\ln(k+q)+(k-q)(1-\ln|k-q|)\left[-\frac{\partial f_1(q,k)}{\partial q}+\frac{(-1)^{p+1}}{p!}\frac{\partial^{p+2} f_2(q,k)}{\partial q^{p+2}}\right]\right\}\nonumber\\
&+\sum_{i=0}^{p-1}\frac{(p-(i+1))!}{p!(q-k)^{p-i}}\frac{\partial^i f_2(q,k)}{\partial q^i}\bigg|_{q=a}^b +\frac{(-1)^{p+1}}{p!}\ln|k-q|\frac{\partial^{p} f_2(q,k)}{\partial q^{p}}\bigg|_{q=a}^b \nonumber\\
&+ (k-q)(1-\ln|k-q|)\left[- f_1(q,k)+\frac{(-1)^{p}}{p!}\frac{\partial^{p} f_2(q,k)}{\partial q^{p}}\right]\bigg|_{q=a}^b.
\end{align}
\subsection{A general formula for $\displaystyle\int_0^\infty \ud r\, f(r) j_{\ell}(kr)\,j_{\ell'}(qr)$}
Given an analytic function $f(r)$ on $[0,\infty)$ we can derive the general formula:
\begin{align}
\int_0^\infty \ud r\, f(r) j_{\ell}(kr)\,j_{\ell'}(qr)%
&= \sum_{n=0}^\infty \left[-\frac{\partial^2}{\partial k^2}-\frac{2}{k}\frac{\partial}{\partial k}+\frac{\ell(\ell+1)}{k^2}\right]^{n+1}
\nonumber\\ &~~~~~~\times
\left[\frac{f^{(2n)}(0)}{(2n)!}\mathcal{I}^{-2}_{\ell\ell'}(k,q)
+\frac{f^{(2n+1)}(0)}{(2n+1)!}\mathcal{I}^{-1}_{\ell\ell'}(k,q)
\right].\label{djskndksnskdjc}
\end{align}
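The key identity behind this formula -- each application of the bracketed operator raises $p$ by two -- can be verified symbolically on the closed forms. A SymPy sketch checking $\mathcal{I}^{-2}_{2,1}\to\mathcal{I}^{0}_{2,1}$ away from the $k=q$ singular point, in the region $k>q$:

```python
import sympy as sp

k, q = sp.symbols('k q', positive=True)
L = sp.log(k + q) - sp.log(k - q)          # ln[(k+q)/|k-q|] for k > q

# closed forms quoted in the text (p = -2 and p = 0, l = 2, l' = 1)
I_m2 = (k - q) * (k + q) * (k**2 + 3 * q**2) / (16 * k**3 * q**2) * L \
       - (k**2 - 3 * q**2) / (8 * q * k**2)
I_0 = (k**2 + 3 * q**2) / (4 * k**3 * q**2) * L \
      - (k**2 - 3 * q**2) / (2 * k**2 * q * (k - q) * (k + q))

# one application of the operator (with l = 2) should raise p from -2 to 0
ell = 2
raised = -sp.diff(I_m2, k, 2) - 2 / k * sp.diff(I_m2, k) + ell * (ell + 1) / k**2 * I_m2

residual = (raised - I_0).subs({k: 3, q: 2}).evalf()
print(residual)
```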
\section{Integral formulas for $p\geq0$}
\label{app4}
Here we tabulate the lowest order integrals in terms of elementary functions required for calculating multipoles up to $\ell=4$, to order $O(x^2)$.
\begin{align}
\mathcal{I}^0_{1,0}(k, q)&=\dfrac{1}{2 q k^{2}}\ln \dfrac{k+q}{|k-q|}+\dfrac{1}{k(k-q)(k+q)}\,, \\
\mathcal{I}^0_{2,0}(k, q)&=\dfrac{3 \pi }{2 k^{3}}\,\Theta(k-q)-\dfrac{\pi }{2 q^{2}}\,\delta(k-q)\,, \\
\mathcal{I}^0_{2,1}(k, q)&=\dfrac{k^{2}+3 q^{2} }{4 k^{3} q^{2}}\ln \dfrac{k+q}{|k-q|}-\dfrac{k^{2}-3 q^{2}}{2 k^{2} q(k-q)(k+q)}\,, \\
\mathcal{I}^0_{3,0}(k, q)&=\dfrac{3\left(k^{2}-5 q^{2}\right)}{4 k^{4} q} \ln \dfrac{k+q}{|k-q|}+\dfrac{13 k^{2}-15 q^{2}}{2 k^{3}(k-q)(k+q)}\,, \\
\mathcal{I}^0_{3,1}(k, q)&=\dfrac{5 \pi q }{2 k^{4}}\,\Theta(k-q)-\dfrac{\pi }{2 q^{2}}\, \delta(k-q)\,, \\
\mathcal{I}^0_{4,3}(k, q)&=\dfrac{5 k^{6}+9 k^{4} q^{2}+15 k^{2} q^{4}+35 q^{6} }{32 k^{5} q^{4}}\ln \dfrac{k+q}{|k-q|}-\dfrac{15 k^{6}+17 k^{4} q^{2}+25 k^{2} q^{4}-105 q^{6}}{48 k^{4} q^{3}(k-q)(k+q)}\,,\\
\mathcal{I}^1_{0,0}(k, q)&=-\dfrac{2}{(k-q)^{2}(k+q)^{2}}\,, \\
\mathcal{I}^1_{1,0}(k, q)&=-\dfrac{\pi }{2 q^{2}}\,\delta'(k-q)\,, \\
\mathcal{I}^1_{1,1}(k, q)&=\dfrac{1}{2 k^{2} q^{2}}\ln \dfrac{k+q}{|k-q|}-\dfrac{k^{2}+q^{2}}{k q(k+q)^{2}(k-q)^{2}}\,, \\
\mathcal{I}^1_{2,0}(k, q)&=\dfrac{3}{2 q k^{3}}\ln \dfrac{k+q}{|k-q|}+\dfrac{5 k^{2}-3 q^{2}}{k^{2}(k-q)^{2}(k+q)^{2}}\,, \\
\mathcal{I}^1_{2,1}(k, q)&=\dfrac{\pi }{2 q^{3}}\,\delta(k-q)-\dfrac{\pi }{2 q^{2}}\,\delta'(k-q)\,, \\
\mathcal{I}^1_{3,0}(k, q)&=\frac{15 \pi }{2 k^{4}}\,\Theta(k-q)
-\frac{5 \pi}{2 q^{3}}\,\delta(k-q)+\frac{\pi }{2 q^{2}}\, \delta'(k-q)
\,,\\
\mathcal{I}^1_{3,1}(k, q)&=\dfrac{3\left(k^{2}+5 q^{2}\right) }{4 k^{4} q^{2}}\ln \dfrac{k+q}{|k-q|}-\dfrac{3 k^{4}-22 k^{2} q^{2}+15 q^{4}}{2 k^{3} q(k-q)^{2}(k+q)^{2}}, \\
\mathcal{I}^1_{3,2}(k, q)&=\dfrac{\pi }{q^{3}}\,\delta(k-q)-\dfrac{\pi }{2 q^{2}}\,\delta'(k-q)\,,\\
\mathcal{I}^1_{3,3}(k, q)&=\dfrac{3\left(5 k^{4}+6 k^{2} q^{2}+5 q^{4}\right)}{16 k^{4} q^{4}}\ln \dfrac{k+q}{|k-q|}
-\dfrac{\left(k^{2}+q^{2}\right)\left(15 k^{4}-22 k^{2} q^{2}+15 q^{4}\right)}{8(k+q)^{2} q^{3} k^{3}(k-q)^{2}}\,, \\
\mathcal{I}^1_{4,0}(k, q)&=\dfrac{15\left(k^{2}-7 q^{2}\right) }{4 k^{5} q}\ln \dfrac{k+q}{|k-q|}+\dfrac{81 k^{4}-190 k^{2} q^{2}+105 q^{4}}{2 k^{4}(k-q)^{2}(k+q)^{2}}, \\
\mathcal{I}^1_{4,1}(k, q)&=\dfrac{35 \pi q }{2 k^{5}}\, \Theta(k-q)-\dfrac{4 \pi }{q^{3}}\,\delta(k-q) +\dfrac{\pi }{2 q^{2}}\,\delta'(k-q)\,, \\
\mathcal{I}^1_{4,2}(k, q)&=\dfrac{3\left(3 k^{4}+10 k^{2} q^{2}+35 q^{4}\right)}{16 q^{3} k^{5}}\ln \dfrac{k+q}{|k-q|}
-\dfrac{9 k^{6}+15 k^{4} q^{2}-145 k^{2} q^{4}+105 q^{6}}{8 k^{4} q^{2}(k-q)^{2}(k+q)^{2}}\,, \\
\mathcal{I}^1_{4,3}(k, q)&=\dfrac{3 \pi }{2 q^{3}}\,\delta(k-q) -\dfrac{\pi }{2 q^{2}}\,\delta'(k-q)\,, \\
\mathcal{I}^1_{4,4}(k, q)&=\dfrac{5\left(k^{2}+q^{2}\right)\left(7 k^{4}+2 k^{2} q^{2}+7 q^{4}\right) }{32 k^{5} q^{5}}\ln \dfrac{k+q}{|k-q|}
\notag\\&
-\dfrac{105 k^{8}-40 k^{6} q^{2}-34 k^{4} q^{4}-40 k^{2} q^{6}+105 q^{8}}{48 q^{4} k^{4}(k-q)^{2}(k+q)^{2}},\\
\mathcal{I}^2_{0,0}(k, q)&=-\dfrac{\pi }{2 q^{2}}\,\delta''(k-q)-\dfrac{ \pi}{q^{3}}\,\delta'(k-q)-\dfrac{\pi }{q^{4}}\,\delta(k-q), \\
\mathcal{I}^2_{1,0}(k, q)&=-\dfrac{8 k}{(k-q)^{3}(k+q)^{3}}, \\
\mathcal{I}^2_{1,1}(k, q)&=-\dfrac{\pi }{2 q^{2}}\,\delta''(k-q)-\dfrac{ \pi}{q^{3}}\,\delta'(k-q), \\
\mathcal{I}^2_{2,0}(k, q)&=\dfrac{\pi }{2 q^{2}}\,\delta''(k-q)-\dfrac{ \pi}{2 q^{3}}\,\delta'(k-q)-\dfrac{\pi }{2 q^{4}}\,\delta(k-q), \\
\mathcal{I}^2_{2,1}(k, q)&=\dfrac{3 }{2 k^{3} q^{2}}\ln \dfrac{k+q}{|k-q|}-\dfrac{\left(3 k^{2}-q^{2}\right)\left(k^{2}+3 q^{2}\right)}{q k^{2}(k-q)^{3}(k+q)^{3}}, \\
\mathcal{I}^2_{2,2}(k, q)&=-\dfrac{\pi }{2 q^{2}}\,\delta''(k-q)-\dfrac{ \pi}{q^{3}}\,\delta'(k-q)+\dfrac{2 \pi }{q^{4}}\,\delta(k-q), \\
\mathcal{I}^2_{3,0}(k, q)&=\dfrac{15 }{2 k^{4} q}\ln \dfrac{k+q}{|k-q|}+\dfrac{33 k^{4}-40 q^{2} k^{2}+15 q^{4}}{k^{3}(k-q)^{3}(k+q)^{3}}, \\
\mathcal{I}^2_{3,1}(k, q)&=\dfrac{\pi }{2 q^{2}}\,\delta''(k-q)-\dfrac{3 \pi}{2 q^{3}}\,\delta'(k-q), \\
\mathcal{I}^2_{3,2}(k, q)&=\dfrac{3\left(3 k^{2}+5 q^{2}\right) }{4 k^{4} q^{3}}\ln \dfrac{k+q}{|k-q|}-\dfrac{9 k^{6}-9 k^{4} q^{2}+31 k^{2} q^{4}-15 q^{6}}{2 k^{3} q^{2}(k-q)^{3}(k+q)^{3}},\\
\mathcal{I}^2_{3,3}(k, q)&=-\dfrac{\pi }{2 q^{2}}\,\delta''(k-q)-\dfrac{ \pi}{q^{3}}\,\delta'(k-q)+\dfrac{5 \pi }{q^{4}}\,\delta(k-q), \\
\mathcal{I}^2_{4,0}(k, q)&=-\dfrac{27 \pi }{2 q^{4}}\,\delta(k-q)+\dfrac{4 \pi}{q^{3}}\,\delta'(k-q)-\dfrac{\pi }{2 q^{2}}\,\delta''(k-q)+\dfrac{105 \pi }{2 k^{5}}\,\Theta(k-q), \\
\mathcal{I}^2_{4,1}(k, q)&=\dfrac{15\left(k^{2}+7 q^{2}\right) }{4 k^{5} q^{2}}\ln \dfrac{k+q}{|k-q|}-\dfrac{15 k^{6}-191 k^{4} q^{2}+265 k^{2} q^{4}-105 q^{6}}{2 q k^{4}(k-q)^{3}(k+q)^{3}}, \\
\mathcal{I}^2_{4,2}(k, q)&=\dfrac{\pi }{2 q^{2}}\,\delta''(k-q)-\dfrac{5 \pi}{2 q^{3}}\,\delta'(k-q)+\dfrac{3 \pi }{2 q^{4}}\,\delta(k-q), \\
\mathcal{I}^2_{4,3}(k, q)&=\dfrac{15\left(3 k^{4}+6 q^{2} k^{2}+7 q^{4}\right)}{16 k^{5} q^{4}}\ln \dfrac{k+q}{|k-q|}
\notag\\&
-\dfrac{45 k^{8}-30 k^{6} q^{2}-36 k^{4} q^{4}+190 k^{2} q^{6}-105 q^{8}}{8 q^{3} k^{4}(k-q)^{3}(k+q)^{3}}, \\
\mathcal{I}^2_{4,4}(k, q)&=-\dfrac{\pi }{2 q^{2}}\,\delta''(k-q)-\dfrac{ \pi}{q^{3}}\,\delta'(k-q)+\dfrac{9 \pi }{q^{4}}\,\delta(k-q).
\end{align}
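These entries can be cross-checked against the $p=-1$ list of the previous appendix, since the operator $\big[-\partial_k^2-\frac{2}{k}\partial_k+\ell(\ell+1)/k^2\big]$ raises $p$ by two. A SymPy sketch for the smooth entries $\mathcal{I}^{-1}_{1,1}\to\mathcal{I}^{1}_{1,1}$ in the region $k>q$:

```python
import sympy as sp

k, q = sp.symbols('k q', positive=True)
L = sp.log(k + q) - sp.log(k - q)                     # ln[(k+q)/|k-q|] for k > q

I_m1 = (k**2 + q**2) / (4 * k**2 * q**2) * L - 1 / (2 * q * k)          # I^{-1}_{1,1}
I_1 = L / (2 * k**2 * q**2) \
      - (k**2 + q**2) / (k * q * (k + q)**2 * (k - q)**2)               # I^{1}_{1,1}

# one application of the operator (with l = 1) should raise p from -1 to 1
ell = 1
raised = -sp.diff(I_m1, k, 2) - 2 / k * sp.diff(I_m1, k) + ell * (ell + 1) / k**2 * I_m1

residual = (raised - I_1).subs({k: 3, q: 2}).evalf()
print(residual)
```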
\newpage
\bibliographystyle{JHEP}
\bibliography{reference_library}
|
Title:
The Big Bang as a Mirror: a Solution of the Strong CP Problem |
Abstract: We argue that the Big Bang can be understood as a type of mirror. We show how
reflecting boundary conditions for spinors and higher spin fields are fixed by
local Lorentz and gauge symmetry, and how a temporal mirror (like the Bang)
differs from a spatial mirror (like the AdS boundary), providing a possible
explanation for the observed pattern of left- and right-handed fermions. By
regarding the Standard Model as the limit of a minimal left-right symmetric
theory, we obtain a new, cosmological solution of the strong $CP$ problem,
without an axion.
| https://export.arxiv.org/pdf/2208.10396 |
\title{The Big Bang as a Mirror: a Solution of the Strong CP Problem}
\author{Latham Boyle$^1$, Martin Teuscher$^{1,2}$ and Neil Turok$^{1,3}$}
\affiliation{$^{1}$Perimeter Institute for Theoretical Physics, Waterloo, Ontario, Canada, N2L 2Y5 \\
$^{2}$\'Ecole Normale Sup\'erieure, Paris, France, 75005 \\
$^{3}$Higgs Centre for Theoretical Physics, University of Edinburgh, Edinburgh, Scotland, EH8 9YL}
\date{August 2022}
\section{Introduction}
In a series of recent papers \cite{Boyle:2018tzc, Boyle:2018rgh, Boyle:2021jej, Boyle:2021jaz, Turok:2022fgq}, we have argued that the Big Bang can be described as a mirror separating two sheets of spacetime. Let us briefly recap some of the observational and theoretical motivations for this idea.
Observations indicate that the early Universe was strikingly simple \cite{Planck:2015fie}: a fraction of a second after the Big Bang, the Universe was radiation-dominated, almost perfectly homogeneous, isotropic, and spatially flat; with tiny (around $10^{-5}$) deviations from perfect symmetry also taking a highly economical form: random, statistically gaussian, nearly scale-invariant, adiabatic, growing-mode density perturbations. Although we cannot see all the way back to the bang, we have this essential observational hint: the further back we look (all the way back to a fraction of a second), the simpler and more regular the Universe gets. This is the central clue in early Universe cosmology: the question is what it is trying to tell us.
In the standard (inflationary) theory of the early Universe one regards this observed trend as illusory: one imagines that, if one could look back even further, one would find a messy, disordered state, requiring a period of inflation to transform it into the cosmos we observe. An alternative approach is to take the fundamental clue at face value and imagine that, as we follow it back to the bang, the Universe really does approach the ultra-simple radiation-dominated state described above (as all observations so far {\it seem} to indicate). Then, although we have a singularity in our past, it is extremely special \cite{penrose1979singularities}. Denoting the conformal time by $\tau$, the scale factor $a(\tau)$ is $\propto \tau$ at small $\tau$ so the metric $g_{\mu \nu} \sim a(\tau)^2 \eta_{\mu \nu}$ has an analytic, conformal zero through which it may be extended to a ``mirror-reflected" universe at negative $\tau$ \footnote{Although it is only an approximation to treat the radiation as a perfect fluid, in the static conformal frame the fluctuations in the radiation density scale as $1/\sqrt{\cal N}$, where ${\cal N}$ is the number of relativistic degrees of freedom. Since ${\cal N}\sim 10^2$ in the Standard Model, the perfect fluid approximation is reasonable all the way to the bang. Our mirror symmetry and analyticity conditions at the bang exclude metric perturbations that blow up there~\cite{Boyle:2018tzc, Boyle:2021jej}, and primordial black holes. Finally, including dimension zero fields which cancel the trace anomaly, the radiation fluid may be perfectly conformal near the bang~\cite{Boyle:2021jaz,perts}.}.
In \cite{Boyle:2018tzc, Boyle:2018rgh, Boyle:2021jej, Boyle:2021jaz, Turok:2022fgq} we point out that, by taking seriously the symmetries and complex analytic properties of this extended two-sheeted spacetime, we are led to elegant and testable new explanations for many of the observed features of our Universe including: (i) the dark matter \cite{Boyle:2018tzc, Boyle:2018rgh}; (ii) the absence of primordial gravitational waves, vorticity, or decaying mode density perturbations \cite{Boyle:2018tzc, Boyle:2021jej}; (iii) the thermodynamic arrow of time ({\it i.e.}\ the fact that entropy increases away from the bang) \cite{Boyle:2021jej}; and (iv) the homogeneity, isotropy and flatness of the Universe~\cite{Turok:2022fgq}, among others. In a forthcoming paper~\cite{perts}, we show that, with our new mechanism for ensuring conformal symmetry at the bang \cite{Boyle:2021jaz}, this picture can also explain the observed primordial density perturbations.
In this Letter, we show that: (i) there is a crucial distinction, for spinors, between spatial and temporal mirrors; (ii) the reflecting boundary conditions (b.c.'s) at the bang for spinors and higher spin fields are fixed by local Lorentz invariance and gauge invariance; (iii) they explain an observed pattern in the Standard Model (SM) relating left- and right-handed spinors; and (iv) they provide a new solution of the strong $CP$ problem~\cite{tHooft:1976rip}.
\section{Reflecting b.c.'s: spinors, higher spin}
Locally, a mirror is a codimension-one hyperplane with a unit normal $n^{\mu}$. Reflecting b.c.'s at such a mirror are almost uniquely fixed by local Lorentz symmetry. To describe fields of arbitrary spin, it will be convenient to work in 2-component spinor (dotted/undotted index) formalism (we follow the conventions of Appendix E in \cite{Zee:2003mt}: in particular, we will work in signature $(+,-,-,-)$\footnote{For a useful alternative introduction to 2-component spinors, see Ref.~\cite{Dreiner:2008tw}, but beware that its conventions differ slightly from ours.
Also, the expert reader may know the peculiar fact about pinor groups that $\mathrm{Pin}(1,3) \ncong \mathrm{Pin}(3,1)$; our results are nevertheless valid in both signatures $(1,3)$ and $(3,1)$ \cite{Berg:2000ne}.}). We define $\s^\mu = (I, \s^j)$, $\bar{\s}^\mu = (I, -\s^j)$ with $\s^j$ the Pauli matrices. Under a mirror reflection, a left-chiral spinor $\varphi_{\alpha}$ is mapped to a right-chiral spinor $\bar{\chi}^{\adot}$. Covariance under the Lorentz group implies that, at the mirror,
\begin{subequations}
\label{2comp_Dirac_bc}
\begin{eqnarray}
\label{2comp_Dirac_bc_1}
n_{\alpha\adot}\bar{\chi}^{\adot}&=&\xi\;\!\varphi_{\alpha}\,
\qq{where}n_{\alpha\adot}\equiv n_{\mu}(\s^{\mu})_{\alpha\adot} \\
\label{2comp_Dirac_bc_2}
\bar{n}^{\adot\alpha}\varphi_{\alpha}&=&\xi'\bar{\chi}^{\adot}
\qq{where} \bar{n}^{\adot\alpha}\equiv n_{\mu}(\bar{\s}^{\mu})^{\adot\alpha}
\end{eqnarray}
\end{subequations}
where $\xi$ and $\xi'$ are $\rm{U}(1)$-phases. We shall call this a Dirac-type boundary condition. Since $n_{\alpha\adot}\bar{n}^{\adot\beta}\!=\!n^{2}\delta_{\alpha}^{\beta}$ and $\bar{n}^{\adot\alpha}n_{\alpha\bdot}=n^{2}\delta^{\adot}_{\bdot}$ we see that $\xi\xi'=n^{2}$, and that Eqs.~\eqref{2comp_Dirac_bc_1} and \eqref{2comp_Dirac_bc_2} are equivalent.
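The identities underpinning $\xi\xi'=n^{2}$ are easy to verify numerically. The following sketch (our addition, not part of the original; it assumes only the standard Pauli-matrix representation of $\s^\mu$, $\bar{\s}^\mu$ in signature $(+,-,-,-)$) checks $n_{\alpha\adot}\bar{n}^{\adot\beta}=n^{2}\delta_{\alpha}^{\beta}$ for a generic real $n^\mu$ and for the timelike normal of the bang:

```python
import numpy as np

# Pauli matrices; sigma^mu = (I, sigma^j), sigmabar^mu = (I, -sigma^j),
# signature (+,-,-,-) as in the text.
I2 = np.eye(2, dtype=complex)
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
sigma = [I2] + pauli
sigmabar = [I2] + [-s for s in pauli]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def contract(n_lower, mats):
    """n_mu (sigma^mu) as a 2x2 matrix, for a real covector n_mu."""
    return sum(n_lower[mu] * mats[mu] for mu in range(4))

rng = np.random.default_rng(1)
n_up = rng.normal(size=4)      # a generic real normal vector n^mu
n_lo = eta @ n_up              # n_mu = eta_{mu nu} n^nu
n2 = n_up @ n_lo               # n^2 = n_mu n^mu

# n_{a adot} nbar^{adot b} = n^2 delta_a^b, hence xi xi' = n^2 as claimed
lhs = contract(n_lo, sigma) @ contract(n_lo, sigmabar)
assert np.allclose(lhs, n2 * I2)

# timelike case relevant to the bang: n^mu = (1, 0), giving n^2 = 1
n_bang = np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(contract(eta @ n_bang, sigma) @ contract(eta @ n_bang, sigmabar), I2)
```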
So far, $\varphi_{\alpha}$ and $\bar{\chi}^{\adot}$ are {\it independent}. Now define the charge conjugate spinors
\begin{subequations}
\begin{eqnarray}
\bar{\varphi}^{\adot}\equiv\eps^{\adot\bdot}\bar{\varphi}_{\bdot} \equiv \eps^{\adot\bdot}(\varphi_{\beta}^{})^{\ast} \\
\chi_{\alpha}\equiv\eps_{\alpha\beta}\chi^{\beta} \equiv \eps_{\alpha\beta}(\bar{\chi}^{\bdot})^{\ast}
\end{eqnarray}
\end{subequations}
It is compatible with Lorentz invariance to set $\bar{\chi}^{\adot} =\bar{\varphi}^{\adot}$, {\it i.e.} so $\left(\substack{\varphi_\alpha \\ \bar{\chi}^{\adot}}\right)$ is a Majorana spinor. With this restriction, the Dirac-type b.c. \eqref{2comp_Dirac_bc} reduces to a Majorana-type b.c.,
\begin{equation}
\label{2comp_Majorana_bc}
n_{\alpha\adot}\bar{\varphi}^{\adot}=\xi\varphi_{\alpha}.
\end{equation}
Now let us generalize to higher spins. For spin 1/2, in the Dirac-type b.c.~(\ref{2comp_Dirac_bc}), we partnered every left-handed spinor $\varphi_{\alpha}$ with an independent right-handed spinor $\bar{\chi}^{\adot}$. For higher spins, we partner $ \varphi_{\alpha_{1}\ldots\alpha_{m}}^{\bdot_{1}\ldots\bdot_{n}}$ (in the $(m/2,n/2)$ irreducible representation of the Lorentz group) with $ \bar{\chi}^{\adot_{1}\ldots\adot_{m}}_{\beta_{1}\ldots\beta_{n}}$ in the $(n/2,m/2)$ irrep. Then the Dirac-type reflecting b.c.~\eqref{2comp_Dirac_bc} generalizes to
\begin{subequations}
\label{general_Dirac_bc}
\begin{eqnarray}
n_{\alpha_{1}^{}\adot_{1}^{}}^{}\cdots\;\!n_{\alpha_{m}^{}\adot_{m}^{}}^{}
\bar{n}^{\bdot_{1}\beta_{1}}\cdots\;\!\bar{n}^{\bdot_{n}\beta_{n}}\;
\bar{\chi}^{\adot_{1}...\adot_{m}}_{\beta_{1}...\beta_{n}}
&=&\xi\,\varphi_{\alpha_{1}...\alpha_{m}}^{\bdot_{1}...\bdot_{n}}\qquad \\
\bar{n}_{}^{\adot_{1}\alpha_{1}}\!\cdots\;\!\bar{n}^{\adot_{m}\alpha_{m}}
n_{\beta_{1}\bdot_{1}}\cdots\;\!n_{\beta_{n}\bdot_{n}}\;
\varphi_{\alpha_{1}...\alpha_{m}}^{\bdot_{1}...\bdot_{n}}
&=&\xi'\,\bar{\chi}^{\adot_{1}...\adot_{m}}_{\beta_{1}...\beta_{n}}\qquad
\end{eqnarray}
\end{subequations}
where $\xi,\xi'\in\rm{U}(1)$. Again, these are equivalent only if
\begin{equation}
\xi\xi'=(n^{2})^{m+n}.
\end{equation}
Likewise, we define the charge conjugate fields
\begin{subequations}
\begin{eqnarray}
\bar{\varphi}^{\adot_{1}...\adot_{m}}_{\beta_{1}...\beta_{n}}&=&
\eps_{}^{\adot_{1}\dot{\g}_{1}}\cdots\;\!\eps_{}^{\adot_{m}\dot{\g}_{m}}
\eps^{}_{\beta_{1}\delta_{1}}\cdots\;\!\eps^{}_{\beta_{n}\delta_{n}}
\bar{\varphi}_{\dot{\g}_{1}...\dot{\g}_{m}}^{\delta_{1}...\delta_{n}}\quad \\
\chi_{\alpha_{1}...\alpha_{m}}^{\bdot_{1}...\bdot_{n}}&=&
\eps^{}_{\alpha_{1}\g_{1}}\cdots\;\!\eps^{}_{\alpha_{m}\g_{m}}
\eps_{}^{\bdot_{1}\dot{\delta}_{1}}\cdots\;\!\eps_{}^{\bdot_{n}\dot{\delta}_{n}}
\chi^{\g_{1}...\g_{m}}_{\dot{\delta}_{1}...\dot{\delta}_{n}}\quad
\end{eqnarray}
\end{subequations}
where $\bar{\varphi}_{\dot{\g}_{1}...\dot{\g}_{m}}^{\delta_{1}...\delta_{n}}\equiv(\varphi_{\g_{1}...\g_{m}}^{\dot{\delta}_{1}...\dot{\delta}_{n}})^{\ast}$ and $\chi^{\g_{1}...\g_{m}}_{\dot{\delta}_{1}...\dot{\delta}_{n}}\equiv(\bar{\chi}^{\dot{\g}_{1}...\dot{\g}_{m}}_{\delta_{1}...\delta_{n}})^{\ast}$; we see again that setting $\bar{\chi} =\bar{\varphi}$ is compatible with Lorentz invariance. With this constraint, the Dirac-type b.c.~\eqref{general_Dirac_bc} reduces to the Majorana-type b.c.
\begin{equation}
\label{general_Majorana_bc}
\!\!n_{\alpha_{1}^{}\adot_{1}^{}}^{}\!\cdots n_{\alpha_{m}^{}\adot_{m}^{}}^{}
\bar{n}^{\bdot_{1}\beta_{1}}\!\cdots\bar{n}^{\bdot_{n}\beta_{n}}\;
\bar{\varphi}^{\adot_{1}...\adot_{m}}_{\beta_{1}...\beta_{n}}
=\xi\,\varphi_{\alpha_{1}...\alpha_{m}}^{\bdot_{1}...\bdot_{n}}. \\
\end{equation}
Now, comparing Eq.~\eqref{general_Majorana_bc} to its complex conjugate and using $\eps_{\alpha\beta}\eps_{\adot\bdot}\bar{n}^{\bdot\beta} = (n_{\alpha\adot})^\ast$, we find consistency requires $(-n^{2})^{m+n}=1$. We infer that, for fermionic fields ($m+n$ odd), Majorana-type b.c.'s are only consistent when $n^\mu$ is spacelike. The Anti de Sitter (AdS) boundary is an example: massless spinors satisfy a Majorana-type reflecting b.c.~\eqref{general_Majorana_bc}, with a field $\varphi$ being related to its charge conjugate $\bar{\varphi}$~\cite{Avis:1977yn, Breitenlohner:1982jf, Hawking:1983mx}.
In contrast, the Big Bang is a mirror with a timelike normal $n^\mu$. {\it The key result of this section is that fermions must satisfy a Dirac-type b.c.~\eqref{general_Dirac_bc}, where a field $\varphi$ is related to another field $\bar{\chi}$ which is not its charge conjugate.}
\bigskip
One can check~\eqref{general_Majorana_bc} for the familiar situation of an ordinary mirror in electromagnetism. A reflection acts on spacetime as $x^{\mu}\to R^{\mu}_{\;\;\nu}x^{\nu}$, with $R^{\mu}_{\;\;\nu}=\eta^{\mu}_{\;\;\nu}-2 n^{\mu}n_{\nu}/n^{2}$. With a spacelike normal $n^\mu=(0,{\bf n})$, the b.c.'s for a perfectly conducting, ``electric" mirror are $\bm{n}\times\bm{E}=\bm{n}\cdot\bm{B}=0$. This is equivalent to imposing reflection symmetry on the field strength $F_{\kappa \lambda}=\pm R_{\kappa}^{\;\;\rho}R_{\lambda}^{\;\;\sigma}F_{\rho\sigma}$, when picking the lower sign. Then, writing $F_{\mu\nu}\to F_{\alpha\adot\beta\bdot}\to\varphi_{\alpha\beta}^{}\eps_{\dot{\alpha}\dot{\beta}}+\bar{\varphi}_{\dot{\alpha}\dot{\beta}}\eps_{\alpha\beta}^{}$, where $\bar{\varphi}$ is the self-dual part (for details see Ch. 3 and Ch. 5, p. 320 in \cite{penrose_rindler_1984}), Eq.~(\ref{general_Majorana_bc}) with $(m,n)=(2,0)$ and $\xi=1$ yields ``electric" mirror b.c.'s; while a general $\xi$ gives a mixed electric-magnetic mirror.
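This electric-mirror statement can also be checked numerically. The sketch below (an illustration we add, not the authors' code; the component convention $F_{0i}=E_{i}$, $F_{ij}=-\epsilon_{ijk}B_{k}$ is our assumption) builds $R^{\mu}_{\;\;\nu}$ for a spacelike normal along $z$ and confirms that imposing $F_{\kappa\lambda}=-R_{\kappa}^{\;\;\rho}R_{\lambda}^{\;\;\sigma}F_{\rho\sigma}$ forces precisely $\bm{n}\times\bm{E}=\bm{n}\cdot\bm{B}=0$:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def reflection(n):
    """R^mu_nu = delta^mu_nu - 2 n^mu n_nu / n^2 for a 4-vector n^mu."""
    n_lower = eta @ n
    n2 = n @ n_lower
    return np.eye(4) - 2.0 * np.outer(n, n_lower) / n2

n = np.array([0.0, 0.0, 0.0, 1.0])   # spacelike mirror normal along z
R = reflection(n)
assert np.allclose(R @ R, np.eye(4))  # an involution
assert np.allclose(R @ n, -n)         # reflects the normal

def field_strength(E, B):
    """F_{mu nu} with the (assumed) convention F_{0i} = E_i, F_{ij} = -eps_{ijk} B_k."""
    F = np.zeros((4, 4))
    F[0, 1:] = E
    F[1:, 0] = -E
    F[1, 2], F[2, 3], F[3, 1] = -B[2], -B[0], -B[1]
    F[2, 1], F[3, 2], F[1, 3] = B[2], B[0], B[1]
    return F

# Impose F_{kl} = -R_k^r R_l^s F_{rs}: nonzero entries of (F - F') must vanish
E, B = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
F = field_strength(E, B)
diff = F - (-R @ F @ R.T)
# constrained: E_x, E_y (n x E = 0) and B_z (n . B = 0); E_z, B_x, B_y survive
assert not np.isclose(diff[0, 1], 0) and not np.isclose(diff[0, 2], 0)
assert np.isclose(diff[0, 3], 0)
assert not np.isclose(diff[1, 2], 0)
assert np.isclose(diff[2, 3], 0) and np.isclose(diff[3, 1], 0)
```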
\section{Standard Model \& gauge invariance}
So far, our choice of b.c.'s was fixed by local Lorentz invariance. Can we make them compatible with local gauge invariance? At first glance, the answer might seem to be ``no," given that the Standard Model's chiral nature precisely means that one cannot pair up left- and right-handed spinors in this way. However, with the Higgs doublet $h$ included, the answer is in fact ``yes." From the representation of the SM fields,~\footnote{For a pedagogical introduction to the SM, see Ref.~\cite{langacker2017standard}, particularly Section 8.1.}
\begin{equation}
\begin{array}{c|c|c|c|c}
&& \rm{SU}(3)_C & \rm{SU}(2)_L & \rm{U}(1)_Y \\
\hline
\multirow{3}*{Quarks}& q_L=\left(\substack{u_L \\ d_L}\right) & 3 & 2 & +1/6 \\
\cline{2-5}
&u_R & 3 & 1 & +2/3 \\
\cline{2-5}
& d_R & 3 & 1 & -1/3 \\
\hline
\multirow{3}*{Leptons} &l_L=\left(\substack{\nu_L \\ e_L}\right) & 1 & 2 & -1/2 \\
\cline{2-5}
& \nu_R & 1 & 1 & 0 \\
\cline{2-5}
& e_R & 1 & 1 & -1 \\
\hline
\multirow{1}*{Higgs} & h & 1 & 2 & +1/2 \\
\end{array}
\end{equation}
if we define $h'=i\sigma^{2}h^{\ast}$, $\hat{h}=h/|h|$, $\hat{h}'=h'/|h'|$ and
\begin{subequations}
\label{uLdLnuLeL}
\begin{eqnarray}
u_L\equiv(\hat{h}'{}^\dagger\, q_L)&\qq{}&
d_L\equiv(\hat{h}^\dagger\, q_L) \\
\nu_L\equiv(\hat{h}'{}^\dagger\;l_{L\;\!})&\qq{}&
e_L\equiv(\hat{h}^\dagger\;l_L),
\end{eqnarray}
\end{subequations}
it follows that $\{u_{L},d_{L},\nu_{L},e_{L}\}$ transform under $\rm{SU}(3)_C\times \rm{SU}(2)_L\times \rm{U}(1)_Y$ exactly like $\{u_{R},d_{R},\nu_{R},e_{R}\}$. Therefore, the Standard Model's gauge symmetry is compatible with these Dirac-type boundary conditions:
\begin{subequations}
\label{SM_bcs}
\begin{eqnarray}
\xi\:\! u_{L,\alpha}=n_{\alpha\adot}u_{R}^{\adot}&\qq{}&
\xi\:\! d_{L,\alpha}=n_{\alpha\adot}d_{R}^{\adot} \\
\xi\:\! \nu_{L,\alpha}=n_{\alpha\adot\;\!}\nu_{R}^{\adot}&\qq{}&
\xi\:\! e_{L,\alpha}=n_{\alpha\adot}e_{R}^{\adot}
\end{eqnarray}
\end{subequations}
with $n^{\mu}=(1,{\bf 0})$ for the bang; we can adjust the relative phases of $u_{L}$ and $u_{R}$, etc., to set $\xi=1$.
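The hypercharge bookkeeping behind the pairings \eqref{uLdLnuLeL} and the b.c.'s \eqref{SM_bcs} can be checked mechanically. The script below (an illustrative addition, not from the original; it tracks only the abelian $\rm{U}(1)_Y$ charges, with a dagger flipping the sign) confirms that the composite left-handed fields carry exactly the right-handed hypercharges:

```python
from fractions import Fraction as F

# U(1)_Y assignments from the table; h' = i sigma^2 h* carries Y = -Y(h).
Y = {'q_L': F(1, 6), 'u_R': F(2, 3), 'd_R': F(-1, 3),
     'l_L': F(-1, 2), 'nu_R': F(0), 'e_R': F(-1),
     'h': F(1, 2)}
Y['h_prime'] = -Y['h']

# A dagger conjugates the Higgs, flipping its hypercharge; charges then add.
# (The SU(2)_L doublet indices are contracted away, and SU(3)_C is untouched.)
Y_uL  = -Y['h_prime'] + Y['q_L']   # u_L  = (h'^dag q_L)
Y_dL  = -Y['h']       + Y['q_L']   # d_L  = (h^dag  q_L)
Y_nuL = -Y['h_prime'] + Y['l_L']   # nu_L = (h'^dag l_L)
Y_eL  = -Y['h']       + Y['l_L']   # e_L  = (h^dag  l_L)

# {u_L, d_L, nu_L, e_L} match the right-handed hypercharges exactly,
# so the Dirac-type b.c.'s are U(1)_Y gauge invariant.
assert (Y_uL, Y_dL, Y_nuL, Y_eL) == (Y['u_R'], Y['d_R'], Y['nu_R'], Y['e_R'])
```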
Note that $\hat{h}$ and $\hat{h}'$ live on the unit 3-sphere $\mathbb{S}^{3}$. In three spatial dimensions they are generically well-defined except on a set of measure zero, even at the bang, where $h$ satisfies a Neumann boundary condition (see below).
This section has two main conclusions. First, for Standard Model fermions (including right-handed neutrinos), reflecting b.c.'s at the bang \eqref{SM_bcs} are uniquely determined by local Lorentz and gauge symmetry. Second, reflecting b.c.'s {\it require} that all Standard Model fermions can -- using the Higgs as in (\ref{uLdLnuLeL}) -- be grouped into left- and right-handed pairs that transform identically under gauge transformations. Thus, the big-bang-as-mirror hypothesis gives a new explanation for this observed fact.
\section{\texorpdfstring{Left-right symmetry and strong $\bm{CP}$}{}}
Now consider the minimal left-right symmetric extension of the Standard Model: the LRSM. It is based on the gauge group $\rm{SU}(3)_C\times \rm{SU}(2)_L\times \rm{SU}(2)_R\times \rm{U}(1)_{B-L}$ \cite{Hall:2018let}. In this theory, each field has a left/right partner transforming analogously under the gauge group, so the table of representations is simpler:
\begin{equation}
\begin{array}{r|c|c|c|c}
& \rm{SU}(3)_C & \rm{SU}(2)_L & \rm{SU}(2)_R & \rm{U}(1)_{B-L} \\
\hline
q_L & 3 & 2 & 1 & +1/3 \\
\hline
q_R & 3 & 1 & 2 & +1/3 \\
\hline
h_L, l_L & 1 & 2 & 1 & -1 \\
\hline
h_R, l_R & 1 & 1 & 2 & -1 \\
\end{array}
\nonumber
\end{equation}
Here $h_L$ is the usual $\rm{SU}(2)_L$ Higgs doublet (previously called $h'$ in the SM) and $h_R$ is its new $\rm{SU}(2)_R$ counterpart. If the latter acquires a vacuum expectation value, it breaks $\rm{SU}(2)_R\times \rm{U}(1)_{B-L}$ down to $\rm{U}(1)_Y$ and the LRSM reduces to the SM below this scale.
Hall and Harigaya \cite{Hall:2018let} have argued that the LRSM is not only phenomenologically viable but has several explanatory advantages over the SM. An independent argument~\cite{Boyle:2020ctr} is that incorporating the SM fermions into the recently-noticed connection between the SM and a special mathematical object (the exceptional Jordan algebra)~\cite{Todorov:2018mwd, Dubois-Violette:2018wgs, Baezpost} requires embedding the SM in the LRSM.
In the LRSM, the mirror b.c.'s at the bang take a more symmetrical form, as we can define
\begin{subequations}
\begin{eqnarray}
u_{L,R}\equiv \hat{h}_{L,R}^{\dagger}\,q_{L,R}&\qq{}&
d_{L,R}\equiv \hat{h}_{L,R}'{}^{\!\!\!\!\!\!\!\!\dagger}\,\;\;\,q_{L,R} \\
\nu_{L,R}\equiv \hat{h}_{L,R}^{\dagger}\;l_{L,R}&\qq{}&
e_{L,R}\equiv \hat{h}_{L,R}'{}^{\!\!\!\!\!\!\!\!\dagger}\;\;\;\, l_{L,R}
\end{eqnarray}
\end{subequations}
and then write the b.c.'s as in \eqref{SM_bcs}.
These mirror b.c.'s will only yield genuine mirror symmetry between the two sheets of spacetime (on either side of the bang) if the dynamical theory is {\it also} appropriately symmetric. We now explain the appropriate symmetry.
In our earlier paper \cite{Boyle:2018rgh}, we show in detail how $C$, $P$ and $T$ act on fields on an FRW background in which $a(\tau)$ is even or odd under $\tau\to-\tau$. As in that paper, we use conventions where $\hbar=c=1$ and the spacetime coordinates are dimensionless so that the metric $g_{\mu\nu}$ has dimensions mass$^{-2}$; and we work for convenience in the conformal frame where the fields and couplings have all been rescaled by a power of the scale factor corresponding to their mass dimension: {\it i.e.}\ $\tilde{\varphi}(x)=a(\tau)\varphi(x)$ (for scalars), $\tilde{\psi}(x)=a^{3/2}(\tau)\psi(x)$ (for spinors), $\tilde{A}_{\mu}(x)=A_{\mu}(x)$ (for vectors); and $\tilde{g}_{\mu\nu}(x)=a^{-2}(\tau)g_{\mu\nu}(x)$ (for the metric), so the fields effectively live in a static spacetime background. In this convention, all dimensionful couplings become functions of $\tau$; and, in particular, if $a(\tau)$ is even (resp. odd) under $\tau\to-\tau$, then couplings of odd mass dimension are even (resp. odd) under $\tau\to-\tau$.
Now consider an anti-linear $CT$ transformation which also swaps each field with its $L\leftrightarrow R$ partner, so the fields transform as:
\begin{subequations}
\label{LR_transform}
\begin{eqnarray}
\label{scalar_transform}
\tilde{h}_{L,R}(x)&\to&\tilde{h}_{R,L}(x')^{\ast}
\qquad\qquad\quad\;\textrm{(scalars)} \\
\label{spinor_transform}
\tilde{\psi}_{L,R}^{i}(x)&\to&\left(\begin{array}{c} \!\gamma^{5}\! \\ 1 \end{array}\right)\gamma^{0}\tilde{\psi}_{R,L}^{i}(x')^{\ast}
\quad\textrm{(spinors)} \\
\label{vector_transform}
\tilde{A}_{\mu}^{L,R}(x)&\to&R_{\mu}^{\;\;\nu}\tilde{A}_{\nu}^{R,L}(x')^{\ast}
\qquad\quad\;\textrm{(gauge fields)}\quad
\end{eqnarray}
\end{subequations}
where $R_{\mu}^{\;\;\nu}={\rm diag}(-1,1,1,1)$ is the matrix representing reflection through the bang and $x'{}^{\mu}\equiv R^{\mu}_{\;\;\nu}x^{\nu}$ is the reflected spacetime coordinate; in the spinor transformation (\ref{spinor_transform}), $\psi_{L,R}$ stands for either $q_{L,R}$ or $l_{L,R}$ and the upper/lower option ({\it i.e.}\ $\gamma^{5}$ or $1$) applies when $a(\tau)$ is even/odd respectively; and in the vector transformation (\ref{vector_transform}), $A_{\mu}^{L,R}$ stands for either $W_{\mu}^{L,R}$ (the gauge fields for $\rm{SU}(2)_{L,R}$, respectively) or $G_{\mu}$ or $B_{\mu}$ (the gauge fields for $\rm{SU}(3)_{C}$ or $\rm{U}(1)_{B-L}$, which do not carry an $L/R$ label).
Demanding that the action $S_{LRSM}$ for the LRSM be invariant under this $CT$ symmetry forbids the $\theta G\tilde{G}$ term in the Lagrangian, because it requires that the Yukawa matrices are Hermitian, $Y=Y^{\dagger}$ (so there is no overall Yukawa phase, and hence the $\theta G\tilde{G}$ term is not regenerated by the chiral anomaly) \footnote{See \cite{Hall:2018let} for a more detailed introduction to the LRSM Lagrangian; and for a detailed explanation of how imposing an analogous $P$ symmetry on $S_{LRSM}$ solves the strong $CP$ problem, see \cite{Babu:1989rb} and Section 4 in \cite{Hall:2018let}. The argument is intimately related to ours, since $P$ and $CT$ symmetry are related by the $CPT$ symmetry of $S_{LRSM}$, though of course our line of reasoning, in which the bang is an actual $CT$ mirror, is physically and conceptually quite distinct.}. (Note that demanding $T$ rather than $CT$ symmetry would not be correct: it would eliminate the $\theta G\tilde{G}$ term, but it would also require that the Yukawa matrices be real, $Y=Y^{\ast}$, in conflict with observations.) Relatedly, note that $S_{LRSM}$ yields classical equations that are symmetric under the corresponding linear/analytic time-reversal transformation
\begin{subequations}
\label{LR_transform_classical}
\begin{eqnarray}
\label{scalarsymm}
\tilde{h}_{L,R}(x)&\to&\tilde{h}_{R,L}(x')
\qquad\qquad\quad\;\textrm{(scalars)} \\
\tilde{\psi}_{L,R}^{i}(x)&\to&\left(\begin{array}{c} \!\gamma^{5}\! \\ 1 \end{array}\right)\gamma^{0}\tilde{\psi}_{R,L}^{i}(x')
\quad\textrm{(spinors)} \\
\tilde{A}_{\mu}^{L,R}(x)&\to&R_{\mu}^{\;\;\nu}\tilde{A}_{\nu}^{R,L}(x')
\qquad\quad\;\textrm{(gauge fields)}\quad
\end{eqnarray}
\end{subequations}
precisely when all the Yukawa matrices satisfy $Y=Y^{\dagger}$, and that a solution invariant under this transformation precisely satisfies the Dirac-like mirror boundary condition (\ref{SM_bcs}).
In other words: {\it In the LRSM, requiring the two sheets of spacetime (before and after the bang) to be related by a mirror symmetry -- so that, at the quantum level, the bang is a surface of $CT$ symmetry (\ref{LR_transform}) and, at the classical level, the solutions of the equations of motion are invariant under the corresponding transformation (\ref{LR_transform_classical}) -- also solves the strong $CP$ problem.}
\section{Back to the Standard Model}
The left-right symmetry of the LRSM is generally broken spontaneously (see, {\it e.g.}, \cite{Hall:2018let}) because the two VEVs $\langle h_{R} \rangle$ and $\langle h_{L} \rangle$ differ in magnitude. If $|\langle h_{R} \rangle| \gg |\langle h_{L} \rangle |$, at energies $\ll |\langle h_{R} \rangle|$ the LRSM reduces to the SM. Likewise, in the limit $|\langle h_{R} \rangle|\to\infty$, the LRSM {\it is} just the SM.
How does this work in our two-sheeted, $CT$-symmetric cosmology?
For the symmetry (13) to be gauge invariant, the gauge groups on the two sheets must be related: elements of the local gauge group $ \rm{SU}(3)_C\times \rm{SU}(2)_{L}\times \rm{SU}(2)_{R}\times \rm{U}(1)_{B-L}$ must obey $(g_{3}^{}, g_{2}^{L}, g_{2}^{R}, g_{1}^{})(\tau,x)=(g_{3}^{}, g_{2}^{R}, g_{2}^{L}, g_{1}^{})(-\tau,x)$. This suggests that, from the two-sheeted perspective, the natural bosonic fields are $\widetilde{h}_{L,R}$ and $\widetilde{A}_{L,R}^{\mu}$, which equal $h_{L,R}$ and $A_{L,R}^{\mu}$ when $\tau>0$, and $h_{R,L}$ and $A_{R,L}^{\mu}$ when $\tau<0$. Indeed, the limit $|\langle \widetilde{h}_{R}\rangle|\to\infty$ is compatible with our reflection symmetry, whereas $|\langle h_{R}\rangle|\to\infty$ is not.
Further support for this view comes from coupling the theory to general relativity, and implementing a ``Weyl lift"~\cite{Bars:2013yba}. After taking the limit $|\langle \widetilde{h}_R\rangle |\to\infty$ and setting $g_{\mu \nu}=\Omega^2 \widetilde{g}_{\mu \nu}$, the Einstein-Higgs action becomes $\int (\frac{1}{2} \Omega^2 \widetilde{R} - 3(\partial \Omega)^2 + \Omega^2 |{\cal D}\widetilde{h}_L|^2 -\Omega^4 V(\widetilde{h}_L)) \sqrt{-\widetilde{g}}\,d^4x$ with $8\pi G=1$. Let us work in ``unitary gauge" where
$\widetilde{h}_L\equiv\begin{psmallmatrix}\bar{h}\\0\end{psmallmatrix}$, with $\bar{h}>0$.
Similarly, as we are interested in studying our theory on an FRW background, we can choose conformally static gauge for $\tilde{g}_{\mu\nu}$, so $\widetilde{R} =6 \kappa$ with $\kappa$ parameterizing the spatial curvature. Near the bang the gauge field mass and spatial curvature terms vanish as $\Omega^2$ while the Higgs potential term vanishes as $\Omega^4$. Neglecting these terms, the action becomes $V_c \int ( - n^{-1}(3\,\dot{ \Omega}^2 -\Omega^2 \dot{ \bar{h}}^2) -n\, r ) d \tau,$ with $V_c$ the comoving volume, $n$ the lapse (whose variation yields the Friedmann constraint), and we have included the radiation density $r/\Omega^4$, where $r$ is a constant. Recognizing the line element on $(\Omega,\bar{h})$ field space as 2d Minkowski in Milne coordinates, we pass to global coordinates $(\Phi,H)\equiv \Omega \left(\rm{cosh}(\bar{h}/\sqrt{3}), \rm{sinh}(\bar{h}/\sqrt{3})\right)$. The action becomes $V_c \int ( -3 n^{-1}(\,\dot{ \Phi}^2 -\dot{H}^2) - n \,r ) d \tau.$ Classical trajectories consist of straight lines in the $(\Phi, H)$-plane whose 2-velocity is ``time-like" since $r>0$ (see Fig. 1). The symmetry (13) extends to $\Omega(\tau)\rightarrow -\Omega(-\tau)$, $\Phi(\tau)\rightarrow -\Phi(-\tau)$, $H(\tau)\rightarrow -H(-\tau)$: solutions satisfying this condition pass through the origin of the $(\Phi,H)$-plane and never enter the ``antigravity" region~\cite{Bars:2011aa}. As $\tau$ approaches zero, $\bar{h}$ tends to a constant, consistent with the Neumann boundary condition identified in Ref.~\cite{Boyle:2021jej}. When finite temperature effects are included, at long wavelengths the statistical ensemble of classical saddles for $\bar{h}$ will average to zero near the bang, with $\bar{h}$ only acquiring a nonzero VEV which breaks $\widetilde{\rm{SU}(2)}_{L}\times \widetilde{\rm{U}(1)}_{Y}$ gauge symmetry at the electroweak phase transition.
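The passage from Milne to global field-space coordinates can be verified symbolically. The following sympy sketch (our addition, not the authors' code) checks that the kinetic term $3\dot{\Omega}^{2}-\Omega^{2}\dot{\bar{h}}^{2}$ equals $3(\dot{\Phi}^{2}-\dot{H}^{2})$ under $(\Phi,H)=\Omega\left(\cosh(\bar{h}/\sqrt{3}),\sinh(\bar{h}/\sqrt{3})\right)$:

```python
import sympy as sp

# conformal time and the two field-space coordinates as functions of it
t = sp.symbols('tau')
Omega = sp.Function('Omega')(t)
hbar = sp.Function('hbar')(t)

# global coordinates (Phi, H) = Omega (cosh(hbar/sqrt(3)), sinh(hbar/sqrt(3)))
Phi = Omega * sp.cosh(hbar / sp.sqrt(3))
H = Omega * sp.sinh(hbar / sp.sqrt(3))

kinetic_milne = 3 * sp.diff(Omega, t)**2 - Omega**2 * sp.diff(hbar, t)**2
kinetic_global = 3 * (sp.diff(Phi, t)**2 - sp.diff(H, t)**2)

# the two forms of the kinetic term agree identically
assert sp.simplify(kinetic_milne - kinetic_global) == 0
```

The cross terms cancel because $\cosh$ and $\sinh$ satisfy $\cosh^2 - \sinh^2 = 1$, which is exactly why trajectories in the $(\Phi,H)$-plane are straight lines with a "timelike" 2-velocity.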
Thus, although $\widetilde{h}_{R}$ corresponds to $h_{L}$ before the bang, and $h_{R}$ after it, there is no discontinuity since $h_{L}$ before the bang and $h_{R}$ after the bang parameterize the hyperbolae in two non-overlapping Milne wedges in field space, and the global coordinate $H$ is perfectly continuous. From the point-of-view of the left-right symmetric theory, the natural scalar fields are $\widetilde{h}_{L,R}$ rather than $h_{L,R}$ and the SM is then neatly recovered in the appropriate limit $|\langle\widetilde{h}_{R}\rangle|\to\infty$. Hence, our solution of the strong $CP$ problem in the LRSM extends to a corresponding solution in the SM itself! -- a subtle solution that we might never have guessed without the help of the LRSM.
\section{Classical Versus Quantum}
In this paper, we have seen how the requirement that the Big Bang is a surface of quantum $CT$ symmetry yields a new solution to the strong $CP$ problem. It also gives rise to classical solutions that are symmetric under time reversal, and satisfy appropriate reflecting boundary conditions at the bang. The classical solutions we describe are stationary points of the action and are analytic in the conformal time $\tau$. Hence they are natural saddle points to a path integral over fields and four-geometries. The full quantum theory is presumably based on a path integral between boundary conditions at future and past infinity that are related by $CT$-symmetry. The cosmologically relevant classical saddles inherit their analytic, time-reversal symmetry from this path integral, although the individual paths are {\it not} required to be time-symmetric in the same sense (and, moreover may, in general, be highly jagged and non-analytic). We will describe in more detail the quantum $CT$-symmetric ensemble which implements (12), including the question of whether {\it all} of the analytic saddles are necessarily time-symmetric \cite{Newman:1992tc, Newman:1992cx}, and the calculation of the associated gravitational entanglement entropy, elsewhere~\cite{perts}.
{\bf Acknowledgements:} Research at Perimeter Institute is supported by the Government of Canada, through Innovation, Science and Economic Development, Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science. The work of NT is supported by the STFC Consolidated Grant `Particle Physics at the Higgs Centre' and by the Higgs Chair of Theoretical Physics at the University of Edinburgh.
\bibliography{references}
|
Title:
Minkowski Tensors in Redshift Space -- Beyond the Plane Parallel Approximation |
Abstract: The Minkowski tensors (MTs) can be used to probe anisotropic signals in a
field, and are well suited for measuring the redshift space distortion (RSD)
signal in large scale structure catalogs. We consider how the linear RSD signal
can be extracted from a field without resorting to the plane parallel
approximation. A spherically redshift space distorted field is both anisotropic
and inhomogeneous. We derive expressions for the two point correlation
functions that elucidate the inhomogeneity, and then explain how the breakdown
of homogeneity impacts the volume and ensemble averages of the tensor Minkowski
functionals. We construct the ensemble average of these quantities in
curvilinear coordinates and show that the ensemble and volume averages can be
approximately equated, but this depends on our choice of definition of the
volume average of a tensor and the radial distance between the observer and
field. We then extract the tensor Minkowski functionals from spherically
redshift space distorted, Gaussian random fields and gravitationally evolved
dark matter density fields at $z=0$ to test if we can successfully measure the
Kaiser RSD signal. For the dark matter field we find a significant, $\sim 10\%$
anomalous signal in the MT component parallel to the line of sight that is
present even on large scales $R_{\rm G} \gtrsim 15 \, {\rm Mpc}$, in addition
to the Kaiser effect. This is due to the line of sight component of the MT
being significantly contaminated by the Finger of God effect, which can be
approximately modelled by an additional damping term in the cumulants.
| https://export.arxiv.org/pdf/2208.10164 |
\title{Minkowski Tensors in Redshift Space -- Beyond the Plane Parallel Approximation}
\author{Stephen Appleby}
\email{stephen.appleby@apctp.org}
\affiliation{Asia Pacific Center for Theoretical Physics, Pohang, 37673, Korea}
\affiliation{Department of Physics, POSTECH, Pohang 37673, Korea}
\author{Joby P. Kochappan}
\affiliation{Asia Pacific Center for Theoretical Physics, Pohang, 37673, Korea}
\author{Pravabati Chingangbam}
\affiliation{Indian Institute of Astrophysics, Koramangala II Block, Bangalore 560 034, India}
\affiliation{School of Physics, Korea Institute for Advanced Study, 85
Hoegiro, Dongdaemun-gu, Seoul, 02455, Korea}
\author{Changbom Park}
\affiliation{School of Physics, Korea Institute for Advanced Study, 85
Hoegiro, Dongdaemun-gu, Seoul, 02455, Korea}
\section{Introduction}
The tensor Minkowski functionals are a rank-$p$ generalisation of the scalar Minkowski functionals \citep{nla.cat-vn2176896, McMullen:1997,Alesker1999,2002LNP...600..238B,HugSchSch07,1367-2630-15-8-083028,JMI:JMI3331,Beisbart:2001gk,Hadwiger, nla.cat-vn1821482}. Being tensors, they are sensitive to directionally dependent signals in data and have found application in a number of disciplines such as materials science \citep{PhysRevE.77.051805,Becker2003ComplexDS,Olszowka2006}. The scalar Minkowski functionals and associated morphological statistics have a long and storied history within cosmology \citep{Gott:1989yj,1991ApJ...378..457P,Mecke:1994ax,Schmalzing:1997aj,Schmalzing:1997uc,1989ApJ...345..618M,1992ApJ...387....1P,2001ApJ...553...33P,Park:2009ja,doi:10.1111/j.1365-2966.2010.18015.x,Sahni:1998cr,Bharadwaj:1999jm,vandeWeygaert:2011ip, Park:2013dga,vandeWeygaert:2011hyr,Shivshankar:2015aza,Pranav:2016gwr,Pranav:2018lox,Pranav:2018pnu,Feldbrugge:2019tal,Wilding:2020oza,Munshi:2020tzm}, but the tensors are less widely adopted. They were initially introduced in \citep{Beisbart:2001vb,Beisbart:2001gk,2002LNP...600..238B} to provide a measure of sub-structure of galaxy clusters and spiral galaxies. In the mathematics literature they are defined for structures on flat Euclidean space. In two dimensions, the definition of the translation-invariant rank-2 Minkowski tensors was generalised to structures on the two-sphere in \cite{Chingangbam:2017uqv}. More recently, they have been applied to cosmological-scale fields \citep{Chingangbam:2017uqv,Ganesan:2016jdk,Appleby:2017uvb,Appleby:2018tzk,Rahman:2021azv} -- Cosmic Microwave Background temperature and polarisation data \cite{Ganesan:2016jdk,K.:2018wpn,Joby:2021,Goyal:2020,Goyal:2021}, and the fields of the epoch of reionization \citep{Kapahtia:2017qrg,Kapahtia:2019ksk,Kapahtia:2021}.
In addition, the authors have written a series of papers on the application of the Minkowski tensors to the low-redshift matter density field as traced by galaxies \citep{Appleby:2017uvb,Appleby:2018tzk}. The ensemble averages of the MTs measured from isotropic and anisotropic Gaussian random fields were considered in \cite{Chingangbam:2017uqv,Appleby:2018tzk,Chingangbam:2021kov}. Anisotropic random fields were subsequently explored further in \citet{Klatt_2022}, including higher-rank statistics. Numerical algorithms with which to extract the MTs from two-dimensional fields can be found in \cite{JMI:JMI3331,Appleby:2017uvb,Schaller2020}.
In real space, galaxies are assumed to be distributed in a statistically isotropic and homogeneous manner. The cosmic web is locally anisotropic, with filaments feeding matter into nodes, and extended structures aligning on two-dimensional walls. In this picture, isotropy of the matter distribution means that there is no globally preferred direction within the filamentary large scale structure, when averaging over a volume that is large compared to the typical scale of the structures. This statistical isotropy is an axiom within cosmology, motivated by observations of the Cosmic Microwave Background.
Even if the large scale distribution of the matter field in real space is isotropic, the observed distribution of galaxies is contaminated by their peculiar velocity along the line of sight. This phenomenon was first described in early pioneering work \citep{1987MNRAS.227....1K}, and is referred to as the redshift space distortion (RSD) effect. The RSD effect perturbs the apparent position of galaxies in redshift space only along the line of sight, and hence has rotational symmetry around the central observer. However, it leads to a global alignment of structures in the excursion sets of the density field along the line of sight. This alignment of structures in the field is what we refer to as anisotropy in the context of Minkowski tensors. A significant body of literature has subsequently been devoted to understanding the effect of RSD on two-point statistics \citep{Hamilton:1997zq,PhysRevD.70.083007,2013PhR...530...87W} and other quantities \citep{Matsubara:1995wj,Codis:2013exa}.
There are two phenomena commonly associated with redshift space distortion. On small scales $\lesssim {\cal O} (1\, {\rm Mpc})$ the Finger of God effect describes the scatter of galaxy positions within bound structures due to their stochastic velocity components \citep{1972MNRAS.156P...1J}. In addition, coherent in-fall into overdensities – and corresponding outflow from underdensities – occurs on all scales. The latter phenomenon, dubbed the Kaiser effect \citep{1987MNRAS.227....1K}, can be described using linear perturbation theory on large scales. The density field in the late Universe is non-Gaussian due to the non-linear nature of gravitational collapse, but by smoothing the field on sufficiently large scales one can treat the field as approximately Gaussian and the RSD effect as approximately Kaiser-ian. The anisotropic effect of redshift space distortion contains information regarding the growth rate of structure, due to the fact that the signal is a measure of the in-fall rate of matter into gravitational potentials.
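For orientation, in the plane-parallel limit the Kaiser effect boosts a redshift-space Fourier mode by $(b+f\mu^{2})$, with $\mu$ the cosine of the angle to the line of sight, so the power spectrum is enhanced by $(b+f\mu^{2})^{2}$ and its angle average by $1+\tfrac{2}{3}\beta+\tfrac{1}{5}\beta^{2}$, where $\beta\equiv f/b$. The snippet below (our illustration of this textbook result, not code from this paper) checks the monopole boost numerically:

```python
import numpy as np

def kaiser_boost(beta, mu):
    """Linear Kaiser enhancement of the power spectrum relative to b^2 P(k):
    delta_s(k) = (1 + beta mu^2) b delta(k), with mu = k_par / |k|."""
    return (1.0 + beta * mu**2)**2

def monopole_boost(beta, n=200001):
    """Angle average of the Kaiser factor over mu in [-1, 1] (trapezoid rule);
    analytically this is 1 + 2 beta / 3 + beta^2 / 5."""
    mu = np.linspace(-1.0, 1.0, n)
    y = kaiser_boost(beta, mu)
    h = mu[1] - mu[0]
    integral = h * (y.sum() - 0.5 * (y[0] + y[-1]))
    return integral / 2.0

beta = 0.5
assert np.isclose(monopole_boost(beta), 1 + 2*beta/3 + beta**2/5, rtol=1e-6)
```

The same $\mu$-dependence is what sources the anisotropy of excursion-set structures that the Minkowski tensors are designed to measure.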
This work is a continuation of a series of papers by the authors, in which we consider the impact of redshift space distortion on the tensor Minkowski functionals. In \cite{Appleby:2018tzk}, the authors described a numerical algorithm used to extract the Minkowski functionals and Cartesian tensors from any three-dimensional field. In \cite{Appleby_2019} we constructed the ensemble expectation value of the Minkowski tensors in redshift space, in the linearized, plane-parallel Kaiser limit and for Gaussian random fields. The latter paper used the so-called `distant observer' approximation, making the simplifying assumption that the field is sufficiently remote from the observer and localised in direction, so that each point in the field practically shares a common line-of-sight vector along which the redshift space distortion operator acts. This, in conjunction with periodic boundary conditions, renders the field anisotropic but homogeneous, and the sky flat for computational purposes. In reality, the radial nature of the RSD signal generates an inhomogeneous field.
The purpose of this work is two-fold. First, we generalise the calculation in \cite{Appleby_2019} to account for the radial nature of the RSD signal. We calculate the ensemble average of the Minkowski tensors in spherical coordinates, for a field that has been subjected to a radial RSD correction. The calculation requires a careful reappraisal of the Cartesian tensor analysis of \cite{Appleby_2019} to account for the vagaries of curvilinear coordinate systems. In addition, a radial signal is inherently inhomogeneous, and this will have consequences for the assumption of ergodicity that is frequently applied to cosmological fields. Second, we use gravitationally evolved dark matter N-body simulations to construct mildly non-Gaussian density fields by smoothing over large scales $15 \, {\rm Mpc} < R_{\rm G} < 45 \, {\rm Mpc}$. We compare the extracted Minkowski tensor statistics to their Gaussian expectation values, to determine the scale at which the analytic prediction can be used. This analysis serves as a precursor to a forthcoming paper, in which we will extract these statistics from the BOSS galaxy data and infer the growth rate from the RSD signal.
The paper will proceed as follows. We review the definition of the rank-2 Minkowski tensors in Section \ref{sec:theory}, and also provide details on our approach to ensemble averaging. In Section \ref{sec:pp} we re-state the main results of \cite{Appleby_2019}; the ensemble average of the Minkowski tensors in globally plane-parallel redshift space. In Section \ref{sec:sph} we expand the analysis and derive the expectation value of the MTs in a spherical coordinate system for a field with radial anisotropy relative to a central observer. We repeat this analysis in a Cartesian coordinate system in Section \ref{sec:cart}. In Section \ref{sec:num} we extract the Minkowski tensors from dark matter particle snapshot boxes after applying a radial redshift space distortion correction, to test the scale at which the Gaussian limit is approached and the magnitude of the non-Gaussian corrections. We also compare plane parallel and radial anisotropic signals. We discuss our results in Section \ref{sec:dis}.
Throughout this work, in the main body of the text we focus on the particular Minkowski tensor $W^{0,2}_{1}$, because it is computationally simpler and we expect that it will provide superior constraining power \citep{Appleby_2019}. A second linearly independent, translation invariant Minkowski tensor in three dimensions $W^{0,2}_{2}$ has some additional complications because it is a function of the second derivative of the field. For completeness we include a brief analysis of $W^{0,2}_{2}$ in Appendix \ref{sec:appen1}. The rotation of the spherical basis vectors relative to a great arc tangent vector is presented in Appendix \ref{sec:appen3} and finally some useful identities regarding spherical harmonics and Bessel functions are provided in Appendix \ref{sec:appen2}.
\section{Translation Invariant Minkowski Tensors in Three-Dimensions}
\label{sec:theory}
The Minkowski Tensors (MTs) have been elucidated in numerous papers, and we direct the reader to \cite{1367-2630-15-8-083028} for details on the quantities used in this work. Briefly, in three dimensions we define an excursion set $Q$ for a field $\delta(x)$ on a manifold $\Mspace$ as
\begin{equation}
\label{intro:Au:equn}
Q =\ \{x\in \Mspace: \delta(x)\geq \delta_{t}\},
\end{equation}
\noindent where $\delta_{t}$ is a chosen density threshold value. Initially we take the manifold $\Mspace$ to be three-dimensional Euclidean space $\Rspace^3$. We then define two translation invariant, rank-two tensors as
\begin{eqnarray}\label{eq:eq2} & & W_{1}^{0,2} \equiv \frac{1}{6V} \int_{\partial Q} {\bf \hat{n}}^{2} \textrm{dA} ,\\
\label{eq:eq3} & & W_{2}^{0,2} \equiv {1 \over 3\pi V} \int_{\partial Q} G_{2} {\bf \hat{n}}^{2} \textrm{dA}
\end{eqnarray}
\noindent where the boundary $\partial Q$ of $Q$ is a two-dimensional iso-field surface defined by $\delta(x) = \delta_{t}$. The vector ${\bf \hat{n}}$ is the unit normal vector and $G_{2}$ is the mean curvature at each point of the surface $\partial Q$. We define the symmetric tensor product as ${\bf \hat{n}^{2}} = {\bf \hat{n}} \otimes {\bf \hat{n}} = (\hat{n}_{i} \hat{n}_{j} + \hat{n}_{j} \hat{n}_{i})/2$. The vector ${\bf \hat n}$ is an element of the cotangent space at each point on $\Rspace^3$. Since addition is defined only for vectors or tensors that belong to the same vector space, in order to perform these integrals we must transport all normal vectors to a fiducial point, and addition is then carried out in the cotangent space at that point. This is a trivial step when the manifold is flat space. $W_{1}^{0,2}$ and $W_{2}^{0,2}$ are invariant under translation of the coordinates, which ensures that they are independent of the choice of fiducial point on $\Rspace^{3}$. If the manifold is curved, then the integrals defined in the expressions ($\ref{eq:eq2}$), ($\ref{eq:eq3}$) require a fiducial point at which the average is taken to be specified, as well as the choice of transport path. These details will be important later and considered in Section \ref{sec:volav}.
We will measure $W_1^{0,2}$ and $W_2^{0,2}$ from dark matter point distributions, which are smoothed with a Gaussian kernel to generate a continuous matter field with background density $\rho_{m}$ and fluctuations $\delta(x)$. The fluctuations satisfy $\langle \delta \rangle = 0$, where $\langle ... \rangle$ represents the ensemble average of this random field. When smoothed on large scales, $\delta(x)$ is assumed to be well approximated as a Gaussian random field but on small scales non-Gaussianities are present due to the mode coupling arising from the non-linear nature of gravitational collapse. In this work we are chiefly concerned with the large scale limit of the density field, where Gaussian statistics can be applied. The non-Gaussian corrections require further study and are beyond the scope of this work. For the remainder of the paper, we will focus specifically on the Minkowski tensor $W_{1}^{0,2}$, and consign the more complicated $W_{2}^{0,2}$ statistic to Appendix \ref{sec:appen1}.
Following \citet{Schmalzing:1997aj,Schmalzing:1997uc}, we perform a surface to volume integral transform and use $\hat{n}_{i} = \delta_{i}/|\nabla \delta|$ to re-write equation (\ref{eq:eq2}) as
\begin{eqnarray}
W_1^{0,2}|_{i}{}^{j} &=& \frac{1}{6V} \int_{V} \textrm{dV} \, \delta_{D}\left( \delta - \delta_{t} \right) \frac{\delta_{i} \delta^{j}}{\left| \nabla \delta \right|} ,
\label{eqn:W_delta}
\end{eqnarray}
\noindent where we use shorthand notation for the gradients of the field $\delta_{i} = \nabla_{i}\delta$, and $\delta_D$ is the Dirac delta function. Given that $\delta$ is assumed to be a smooth random field, its derivatives, and in particular the vector $\delta_{i}/|\nabla \delta|$, are well defined at all points over the volume $V$. The right hand side of equation ($\ref{eqn:W_delta}$) is the volume average of the rank-$(1,1)$ tensor
\begin{equation}\label{eq:t1} w_{i}{}^{j} = {1 \over 6 } {\delta _{i}\delta^{j} \over |\nabla \delta|} \delta_{D}(\delta-\delta_{t}), \end{equation}
\noindent where the delta function $\delta_{D}(\delta-\delta_{t})$ can be defined in a distributional sense when constructing the ensemble average or approximately discretized when taking the volume average \citep{Schmalzing:1997aj}. We denote the volume average of this tensor as $\bar{w}_{i}{}^{j} \equiv W^{0,2}_{1}|_{i}{}^{j}$.
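The discretized volume average described above can be sketched numerically. The following Python fragment is an illustrative sketch only, not the pipeline of \cite{Appleby:2018tzk}; the grid size, smoothing scale and bin width are arbitrary choices. It estimates $W^{0,2}_{1}$ from a smoothed Gaussian random field on a periodic grid, approximating the Dirac delta function by a narrow bin around the threshold:

```python
import numpy as np

def minkowski_W1_02(delta, delta_t, dx=1.0, bin_width=0.2):
    """Volume-average estimate of W_1^{0,2} on a grid.

    The Dirac delta selecting the iso-surface delta = delta_t is
    discretized as a bin of width `bin_width` around the threshold."""
    grads = np.gradient(delta, dx)                 # (d_x, d_y, d_z) of delta
    grad_mag = np.sqrt(sum(g ** 2 for g in grads))
    weight = (np.abs(delta - delta_t) < bin_width / 2) / bin_width
    V = delta.size * dx ** 3
    safe = grad_mag > 0                            # avoid 0/0 at stationary points
    W = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            integrand = np.zeros_like(delta)
            integrand[safe] = (weight[safe] * grads[i][safe]
                               * grads[j][safe] / grad_mag[safe])
            W[i, j] = integrand.sum() * dx ** 3 / (6 * V)
    return W

# toy Gaussian random field, smoothed in Fourier space (R_G = 4 grid cells)
rng = np.random.default_rng(42)
n, R_G = 64, 4.0
white = rng.standard_normal((n, n, n))
kx = np.fft.fftfreq(n) * 2 * np.pi
kz = np.fft.rfftfreq(n) * 2 * np.pi
KX, KY, KZ = np.meshgrid(kx, kx, kz, indexing="ij")
k2 = KX ** 2 + KY ** 2 + KZ ** 2
field = np.fft.irfftn(np.fft.rfftn(white) * np.exp(-k2 * R_G ** 2 / 2),
                      s=white.shape)
field /= field.std()

W = minkowski_W1_02(field, delta_t=0.0)
```

By construction the trace of the resulting matrix reproduces the scalar functional $W_{1}$ computed with the same discretization, since $\sum_{i} \delta_{i}\delta^{i}/|\nabla \delta| = |\nabla \delta|$, which provides a simple internal consistency check.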
\subsection{Ensemble Average and Ergodicity}
\label{sec:ens}
First, we review the steps made in calculating the ensemble average of $w_{i}{}^{j}$, because there are some subtleties that will become important later. The purpose of this subsection is to highlight the assumptions that are made when deriving the ensemble average of $w_{i}{}^{j}$, and then equating this quantity to the volume average that we measure from cosmological data.
The ensemble average $\langle ... \rangle$ is the linear sum over possible states of the quantity within the brackets, weighted by the probability of that state --
\begin{equation} \label{eq:ens} \langle w_{i}{}^{j} \rangle = \frac{1}{6} \int \Phi(X,\Sigma) \, \delta_{D} \left( \delta - \delta_{t} \right) \frac{\delta_{i} \delta^{j}}{\left| \nabla \delta \right|} \textrm{dX} ,
\end{equation}
\noindent where $X = (\delta,\delta_{i})$ is shorthand for an array of the field and components of its first derivatives and $\Phi(X,\Sigma)$ is the underlying probability distribution function (PDF) for $X$. Here $w_{i}{}^{j}$ is defined at a point on the manifold, so $\Phi(X,\Sigma)$ is the PDF describing the field and its derivatives at a single location. For a Gaussian random field we have $\Phi(X, \Sigma) \propto \exp[-X^{T} \Sigma^{-1} X/2]$, where $\Sigma$ denotes the covariance between the component fields of $X$. When integrating over $X$, all physical information is contained within the inverse covariance matrix $\Sigma^{-1}$ in $\Phi(X, \Sigma)$. To estimate the ensemble average of $w_{i}{}^{j}$, we require the covariance matrix $\Sigma$.
In cosmological applications, we measure $\bar{w}_{i}{}^{j}$ from a data set and then equate this quantity to the theoretically predicted $\langle w_{i}{}^{j} \rangle$. That is, we invoke ergodicity to impose $\langle w_{i}{}^{j} \rangle \simeq \bar{w}_{i}{}^{j}$. Ergodicity is known to be exact if the field is homogeneous and Gaussian, its two-point correlation $\zeta$ satisfies $\zeta(r)|_{r\to \infty} = 0$, and we take the limit $V \to \infty$~(\citet{Adler}, p145). In reality, cosmological fields occupy a finite volume and have finite resolution, and ergodicity is never exactly realised. We tacitly interpret the volume average of a quantity over a finite domain as providing an unbiased estimate of the ensemble average, with an associated uncertainty related to the finite sampling of the probability distribution.
If the covariance $\Sigma$ between the fields $\delta, \delta_{i}$ contains explicit coordinate dependence, then the ensemble average $\langle w_{i}{}^{j} \rangle$ is sensitive to the position ${\bf x}$ on the manifold at which we take this average -- $\Phi=\Phi(X, \Sigma({\bf x}))$. In this case, it is clear that the ensemble average at any given point cannot be equated to the volume average of the same tensor over the entire manifold. Constancy of $\Sigma$ is a consequence of the fields being homogeneous (see e.g. \citep{Adler,Chingangbam:2021kov}), so when the fields are inhomogeneous we cannot invoke ergodicity and generically $\langle w_{i}{}^{j}\rangle \neq \bar{w}_{i}{}^{j}$. In such a situation, the question of whether we can invoke ergodicity -- even approximately -- depends on the physical properties of the field, manifold and coordinate system adopted. In what follows we will present an example for which $\bar{w}_{i}{}^{j} \simeq \langle w_{i}{}^{j} \rangle$ is an excellent approximation despite the field being inhomogeneous, and a second example for which $\bar{w}_{i}{}^{j}$ completely fails to encapsulate the properties of the ensemble average.
For a homogeneous field, $\Sigma$ and hence $\langle w_{i}{}^{j}\rangle$ are constant over the entire manifold and ergodicity is more naturally realised. Ambiguity remains in the definition of the volume average of a tensor, which is discussed further in Section \ref{sec:volav}.
\section{Review : Plane Parallel Redshift Space Distortions}
\label{sec:pp}
In Section \ref{sec:sph} we will calculate the ensemble average of $w_{i}{}^{j}$ for a Gaussian field that has been subjected to a spherically symmetric redshift space distortion operator, but before doing so we briefly review the plane parallel result derived in \cite{Appleby_2019}, aided by earlier work on the Minkowski functionals \citep{1970Ap......6..320D,Adler,Gott:1986uz,10.1143/PTP.76.952,Hamilton:1986,Ryden:1988rk,1987ApJ...319....1G,1987ApJ...321....2W,Matsubara:1994wn,Matsubara:1994we,Matsubara:1995dv,Gay:2011wz,2000astro.ph..6269M,10.1111/j.1365-2966.2008.12944.x}\footnote{See \citet{Buchert:2017uup} for a model-independent approach applying Minkowski functionals to the CMB and using general Hermite expansions of the discrepancy functions with respect to the analytical Gaussian predictions.}.
We take an isotropic and homogeneous Gaussian random field in a periodic box, adopt a Cartesian coordinate system $x,y,z$ with basis vectors ${\bf e}_{x}$, ${\bf e}_{y}$, ${\bf e}_{z}$, and then apply the plane parallel redshift space distortion operator aligned with one of the coordinate axes taken arbitrarily to be ${\bf e}_{z}$. We preserve periodicity in ${\bf e}_{z}$, so that the field is homogeneous but anisotropic. We simply re-state the main results of \cite{Appleby_2019}, and direct the reader to that work for details of the calculation and \citet{Matsubara:1995wj,Codis:2013exa} for a detailed analysis of the RSD effect on the scalar functionals.
To linear order in the density fluctuation, the relation between the true position of a tracer particle ${\bf x}$ and its redshift space position ${\bf s}$ is given by
\begin{equation} {\bf s} = {\bf x} + f {\bf e}_{z} \, ({\bf u} . {\bf e}_{z}) \end{equation}
\noindent where $f=d\ln D/d\ln a$ and $D$ is the linear growth factor, ${\bf u} = {\bf v}/(aHf)$, ${\bf v}$ is the peculiar velocity and $H$ is the Hubble parameter. We have assumed that every tracer particle is subject to a single, parallel line of sight. The density field in redshift space $\tilde{\delta}$ can be related to its real space counterpart $\delta$ according to
\begin{equation} \label{eq:pp1} \tilde{\delta}({\bf k}) = (1 + f \mu^{2}) \delta ({\bf k}) , \end{equation}
\noindent where $\mu = {\bf k} . {\bf e}_{z}/|{\bf k}|$ is the cosine of the angle between the line of sight and wavenumber ${\bf k}$. The cumulants of the field $\tilde{\delta}$ and its gradient are given by \citep{Matsubara:1995wj}
\begin{eqnarray} \label{eq:rc0} \langle \tilde{\delta}({\bf x'}) \tilde{\delta}({\bf x}) \rangle|_{{\bf x'} \to {\bf x}} &=& \sigma_{0}^{2} \left[ 1 + {2 \over 3}f + {1 \over 5} f^{2} \right] \\
\langle \tilde{\delta}_{x}({\bf x'}) \tilde{\delta}_{x}({\bf x}) \rangle|_{{\bf x'} \to {\bf x}} = \langle \tilde{\delta}_{y}({\bf x'}) \tilde{\delta}_{y}({\bf x}) \rangle|_{{\bf x'} \to {\bf x}} &=& \sigma_{1}^{2} \left[ {1 \over 3} + {2 \over 15}f + {1 \over 35} f^{2}\right] \\
\label{eq:rc3} \langle \tilde{\delta}_{z}({\bf x'}) \tilde{\delta}_{z}({\bf x}) \rangle|_{{\bf x'} \to {\bf x}} &=& \sigma_{1}^{2} \left[ {1 \over 3} + {2 \over 5}f + {1 \over 7}f^{2} \right] \\
\label{eq:rc4} \langle \tilde{\delta}({\bf x'}) \tilde{\delta}_{i}({\bf x}) \rangle|_{{\bf x'} \to {\bf x}} &=& 0
\end{eqnarray}
\noindent where we have defined the $i^{\rm th}$ isotropic cumulant as
\begin{equation} \sigma_{i}^{2} = {1 \over 2\pi^{2}} \int k^{2i+2} P(k, R_{\rm G}) dk , \end{equation}
\noindent and have introduced a Gaussian-smoothed power spectrum $P(k,R_{\rm G}) = W^{2}(kR_{\rm G})P(k)$ with $W(k R_{\rm G}) \propto e^{-k^{2}R_{\rm G}^{2}/2}$ for some comoving smoothing scale $R_{\rm G}$. The ensemble expectation value of the components of the Minkowski tensor $W^{0,2}_{1}$ in this particular Cartesian coordinate system, assuming the field is Gaussian, are then \cite{Appleby_2019}
\begin{eqnarray}\label{eq:m1} & & \langle W^{0,2}_{1}|_{xx} \rangle = {A_{0} \over 4}\left[ {(2\lambda^{2}-1)\cosh^{-1}\left(2\lambda^{2}-1\right) \over (\lambda^{2}-1)^{3/2}} - {2\lambda \over \lambda^{2}-1} \right] e^{-\nu^{2}/2} , \\
& & \langle W^{0,2}_{1}|_{yy} \rangle = \langle W^{0,2}_{1}|_{xx} \rangle , \\
\label{eq:m2} & & \langle W^{0,2}_{1}|_{zz} \rangle = A_{0}\left({\lambda^{2} \over \lambda^{2}-1}\right) \left( \lambda - {\cosh^{-1} \lambda \over \sqrt{\lambda^{2}-1}}\right) e^{-\nu^{2}/2} , \\
\label{eq:m3} & & \langle W^{0,2}_{1}|_{xy} \rangle = \langle W^{0,2}_{1}|_{xz} \rangle = \langle W^{0,2}_{1}|_{yz} \rangle = 0 ,
\end{eqnarray}
\noindent where the constant $A_{0}$ is given by
\begin{equation}\label{eq:a0} A_{0} = {\sigma_{1} \over 6\sqrt{3}\pi\sigma_{0}}\sqrt{105 + 42 f + 9 f^{2} \over 105 + 70 f + 21f^{2}} ,
\end{equation}
\noindent and
\begin{equation}\label{eq:lam} \lambda^{2} = {35 + 42 f + 15 f^{2} \over 35 + 14 f + 3 f^{2}} , \end{equation}
\noindent and we have introduced the normalised threshold $\nu = \delta_{t}/\tilde{\sigma}_{0}$, where $\tilde{\sigma}_{0}^{2} = \langle \tilde{\delta}({\bf x'}) \tilde{\delta}({\bf x}) \rangle|_{{\bf x'} \to {\bf x}}$. The Minkowski tensor is diagonal in this coordinate system, with discrepant values in the directions perpendicular and parallel to the `line of sight' $z$. A coordinate transform will generate off-diagonal terms, but the eigenvalues remain invariant. Modulo a noise component due to finite sampling, the eigenvalues are equal to the diagonal elements of the MT in this coordinate system. The properties of the field dictate the form of the MT; anisotropy is represented by unequal eigenvalues, and homogeneity is manifested by the constancy of the cumulants ($\ref{eq:rc0}$-$\ref{eq:rc4}$) over the domain on which the field is defined.
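The plane parallel results above are straightforward to evaluate numerically. The following sketch uses the illustrative values $f = 0.53$, $\sigma_{0} = \sigma_{1} = 1$ and $\nu = 0$ (none of which are taken from this work); it verifies the Kaiser prefactors in ($\ref{eq:rc0}$-$\ref{eq:rc3}$) as angular averages of $(1 + f\mu^{2})^{2}$, and evaluates $A_{0}$, $\lambda^{2}$ and the diagonal components ($\ref{eq:m1}$), ($\ref{eq:m2}$):

```python
import numpy as np
from scipy.integrate import quad

f = 0.53                       # illustrative growth rate (Omega_m ~ 0.3)

# Kaiser prefactors as angular averages of (1 + f mu^2)^2 over mu in [0, 1]
kaiser = lambda mu: (1 + f * mu ** 2) ** 2
var_field = quad(kaiser, 0, 1)[0]                        # 1 + 2f/3 + f^2/5
var_par = quad(lambda mu: mu ** 2 * kaiser(mu), 0, 1)[0] # 1/3 + 2f/5 + f^2/7
var_perp = quad(lambda mu: 0.5 * (1 - mu ** 2) * kaiser(mu),
                0, 1)[0]                                 # 1/3 + 2f/15 + f^2/35

# plane-parallel Minkowski tensor components (sigma_0 = sigma_1 = 1, nu = 0)
sigma0 = sigma1 = 1.0
nu = 0.0
A0 = sigma1 / (6 * np.sqrt(3) * np.pi * sigma0) * np.sqrt(
    (105 + 42 * f + 9 * f ** 2) / (105 + 70 * f + 21 * f ** 2))
lam2 = (35 + 42 * f + 15 * f ** 2) / (35 + 14 * f + 3 * f ** 2)
lam = np.sqrt(lam2)

W_xx = A0 / 4 * ((2 * lam2 - 1) * np.arccosh(2 * lam2 - 1) / (lam2 - 1) ** 1.5
                 - 2 * lam / (lam2 - 1)) * np.exp(-nu ** 2 / 2)
W_zz = A0 * lam2 / (lam2 - 1) * (lam - np.arccosh(lam) / np.sqrt(lam2 - 1)) \
       * np.exp(-nu ** 2 / 2)
```

As $f \to 0$ one finds $\lambda \to 1$, and both components approach the isotropic value $2A_{0}/3 \cdot e^{-\nu^{2}/2}$.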
\section{Minkowski Tensors -- Spherical Redshift Space Distortion}
\label{sec:sph}
The plane parallel limit reviewed in the previous section is an approximation where the observed patch of the density field is sufficiently distant from the observer and localised on the sky so that the line of sight can be approximately aligned with one of the Cartesian axes. Now we generalise and calculate the Minkowski tensors without the plane parallel approximation. Since redshift space distortion acts along the line of sight, we choose to work with the spherical coordinate system with the observer at the origin. The radial and angular basis vectors in this system are denoted ${\bf e}_{r}$, ${\bf e}_{\theta}$, ${\bf e}_{\phi}$, and ${\bf e}_{r}$ is aligned with the line of sight. The redshift space distortion operator is spherically symmetric and applied to an otherwise isotropic and homogeneous Gaussian random field. Under the assumption that the average number density of tracer particles is constant over the manifold, the relation between the density field in real ($\delta$) and redshift ($\tilde{\delta})$ space is given by \citep{Hamilton:1997zq}
\begin{equation}\label{eq:sphd} \tilde{\delta}({\bf r}) = \left[ 1 + f \left( {\partial^{2} \over \partial r^{2}} + {2 \over r} {\partial \over \partial r}\right) \nabla^{-2} \right] \delta({\bf r}) , \end{equation}
\noindent to linear order in the fields. Here $f$ is the linear growth rate, which we assume to be constant, neglecting its redshift dependence. The redshift space distortion operator in square brackets is now radial relative to a central observer located at $r=0$. There is no longer a uniformly parallel line of sight vector over the entire manifold -- the line of sight is now aligned with the radial basis vector ${\bf e}_{r}$. The redshift space field is sensitive to this vector, because tracer particles that are used to define $\tilde{\delta}$ are perturbed according to the component of their velocity parallel to the corresponding line of sight direction. The radial nature of the signal renders the redshift space distorted field inhomogeneous, and the two-point correlation function of $\tilde{\delta}$ is no longer solely a function of the separation between two tracer particles, but now depends on the triangle formed by the observer and the two points. Translation invariance is broken, but residual rotational symmetry around the observer and azimuthal symmetry about the line of sight persist.
\subsection{Ensemble Average $\langle w_{i}{}^{j} \rangle$}
\label{sec:ensav}
The goal of this section is to derive the ensemble average of the tensor $w_{i}{}^{j}$ for the field $\tilde{\delta}$ defined in equation ($\ref{eq:sphd}$), in a spherical coordinate system. The first step is to derive the cumulants $\langle \tilde{\delta}^{2} \rangle$, $\langle \tilde{\delta} \tilde{\delta}_{i} \rangle$ and $\langle \tilde{\delta}_{i}\tilde{\delta}^{j} \rangle$. The variance of the field $\langle \tilde{\delta}^{2} \rangle$ is a scalar quantity and hence invariant under coordinate transformations, but $\langle \tilde{\delta}_{i}\tilde{\delta}^{j}\rangle$ is a rank-$(1,1)$ tensor and $\langle \tilde{\delta}\tilde{\delta}_{i}\rangle$ is a vector, both of which transform non-trivially. Spherical redshift space two-point statistics have been extensively studied in the literature, and we direct the reader to \citet{1992ApJ...385L...5H,1996MNRAS.278...73H, 1996ApJ...462...25Z,1998ApJ...498L...1S, Szapudi:2004gh,Shaw:2008aa,Bonvin:2011bg, Raccanelli:2013gja, 10.1093/mnras/stu2491,Reimberg_2016,Paul:2022xfx} and references therein for details.
Starting with the scalar cumulant, following \citet{Castorina:2017inr} we define the density field in terms of angular coefficients as
\begin{equation} \tilde{\delta}({\bf r}) = \sum_{\ell m} a_{\ell m}(r) Y^{*}_{\ell m}(\hat{r}) . \end{equation}
\noindent Then the two-point function is given by
\begin{eqnarray} \langle \tilde{\delta}({\bf r'}) \tilde{\delta}({\bf r}) \rangle = \zeta({\bf r}, {\bf r'}) &=& \sum_{\ell m} \langle a_{\ell m}(r) a_{\ell m}(r') \rangle Y_{\ell m}(\hat{r}) Y^{*}_{\ell m}(\hat{r}') \\
&=& \sum_{\ell} {2\ell + 1 \over 4\pi} C_{\ell}(r,r') {\cal L}_{\ell} (\hat{r} . \hat{r}') , \end{eqnarray}
\noindent where
\begin{equation} C_{\ell}(r,r') = {2 \over \pi} \int dk k^{2} P(k, R_{\rm G}) \left[ j_{\ell}(kr) - f \left( j''_{\ell}(kr) + {2 \over kr}j'_{\ell}(kr) \right) \right] \left[ j_{\ell}(kr') - f \left( j''_{\ell}(kr') + {2 \over kr'}j'_{\ell}(kr') \right) \right] ,\end{equation}
\noindent where primes on the spherical Bessel function $j_{\ell}$ denote differentiation with respect to the argument $kr$ or $kr'$ and ${\cal L}_{\ell}$ are Legendre polynomials. The cumulant is defined as the field correlation in the limit ${\bf r} \to {\bf r}'$, which is
\begin{eqnarray}\nonumber \tilde{\sigma}_{0}^{2} \equiv \langle \tilde{\delta}({\bf r'}) \tilde{\delta}({\bf r}) \rangle|_{{\bf r'} \to {\bf r}} &=& {1 \over 2 \pi^{2}} \int dk k^{2} P(k, R_{\rm G}) \sum_{\ell=0}^{\infty} (2\ell+1) \left[ j_{\ell}^{2}(kr) - 2 f j_{\ell}(kr)j''_{\ell}(kr) + f^{2}\left[j''_{\ell}(kr)\right]^{2} + \vphantom{\frac{1}{2}} \right. \\ \nonumber & & \left. - {4 f \over kr} j_{\ell}(kr) j'_{\ell}(kr) + {4 f^{2} \over kr } j''_{\ell}(kr) j'_{\ell}(kr) + {4 f^{2} \over k^{2}r^{2}} \left[j'_{\ell}(kr)\right]^{2} \right] ,\\
\label{eq:sig0} &=& {1 \over 2 \pi^{2}} \int dk k^{2} P(k, R_{\rm G}) \left[ 1 + {2 \over 3}f + {1 \over 5} f^{2} \right] + {4 f^{2} \over 3r^{2}} {1 \over 2\pi^{2}} \int dk P(k, R_{\rm G}) .
\end{eqnarray}
\noindent The first term on the right hand side of ($\ref{eq:sig0}$) is the cumulant in the plane parallel limit. The second term is divergent as $r \to 0$ but falls off at large distances from the central observer. The divergent behaviour at $r=0$ is not physical, and can be subtracted via a suitable correction to the redshift space distortion operator in ($\ref{eq:sphd}$). Practically, cosmological data will always occupy a domain excluding the observer and for computational purposes we will excise the $r=0$ point from the manifold in redshift space. Hence the manifold on which the RSD field $\tilde{\delta}$ is defined is not $\mathbb{R}^{3}$, but rather $\mathbb{S}^{2} \times \mathbb{R}_{> 0}$.
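The collapse of the multipole sums in ($\ref{eq:sig0}$) rests on closure identities for spherical Bessel functions, which can be checked numerically. A brief sketch (the argument $x = kr$ and the truncation $\ell_{\rm max}$ are arbitrary, with $\ell_{\rm max}$ taken well above $x$ for convergence):

```python
import numpy as np
from scipy.special import spherical_jn

x = 7.3                       # arbitrary argument x = k r
ells = np.arange(0, 60)       # truncate well above x
j = spherical_jn(ells, x)
jp = spherical_jn(ells, x, derivative=True)

s0 = np.sum((2 * ells + 1) * j ** 2)                       # -> 1
s1 = np.sum((2 * ells + 1) * jp ** 2)                      # -> 1/3
s2 = np.sum(ells * (ells + 1) * (2 * ells + 1) * j ** 2)   # -> 2 x^2 / 3
```

These are the identities that convert, for example, $\sum_{\ell}(2\ell+1)[j'_{\ell}(kr)]^{2}$ into the constant $1/3$ appearing in the plane parallel part of the radial gradient cumulant.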
Similarly the radial and angular derivative cumulants can be calculated --
\begin{eqnarray} \nonumber \langle \tilde{\delta}_{r}({\bf r'}) \tilde{\delta}^{r}({\bf r}) \rangle|_{{\bf r'} \to {\bf r}} &=& {1 \over 2 \pi^{2}} \int dk k^{4} P(k, R_{\rm G}) \sum_{\ell=0}^{\infty} (2\ell+1) \left[ [j'_{\ell}(kr)]^{2} - 2 f j'_{\ell}(kr)j'''_{\ell}(kr) + f^{2}[j'''_{\ell}(kr)]^{2} \vphantom{\frac{1}{2}} \right. \\
\nonumber & & + \left. {4 f^{2} \over k^{2} r^{2}} j''_{\ell}(kr) j''_{\ell}(kr) + {4 f\over k^{2}r^{2}} j'_{\ell}(kr) j'_{\ell}(kr) - {4 f^{2} \over k^{2}r^{2}} j'_{\ell}(kr) j'''_{\ell}(kr) + {4 f^{2} \over k^{4} r^{4}} j'_{\ell}(kr) j'_{\ell}(kr) \right] \\
\nonumber &=& {1 \over 2 \pi^{2}} \int dk k^{4} P(k, R_{\rm G}) \left[ {1 \over 3} + {2 \over 5}f + {1 \over 7}f^{2} \right] + {1 \over 2\pi^{2}r^{2}} \int dk k^{2} P(k, R_{\rm G}) \left[ {4 f \over 3 } + {8 f^{2} \over 5 } \right] + \\
& & +{1 \over 2\pi^{2}r^{4}} \int dk P(k, R_{\rm G}) {4 f^{2} \over 3}
\label{eq:sigr} \end{eqnarray}
\noindent and
\begin{eqnarray} \nonumber \langle \tilde{\delta}_{\phi}({\bf r'}) \tilde{\delta}^{\phi}({\bf r}) \rangle|_{{\bf r'} \to {\bf r}} &=& {1 \over 4\pi^{2}r^{2}} \int dk k^{2}P(k, R_{\rm G}) \sum_{\ell=0}^{\infty} \ell (\ell+1) (2\ell +1) \left[ j_{\ell}^{2}(kr) - 2 f j_{\ell}(kr)j''_{\ell}(kr) + f^{2}j''_{\ell}(kr)j''_{\ell}(kr) + \vphantom{\frac{1}{2}} \right. \\
\nonumber & & \left. - {4 f \over kr} j_{\ell}(kr) j'_{\ell}(kr) + {4 f^{2} \over kr } j''_{\ell}(kr) j'_{\ell}(kr) + {4 f^{2} \over k^{2}r^{2}} j'_{\ell}(kr) j'_{\ell}(kr) \right] \\
\nonumber &=& {1 \over 2\pi^{2}} \int dk k^{4}P(k, R_{\rm G}) \left[ {1 \over 3} + {2 \over 15}f + {1 \over 35} f^{2}\right] + {1 \over 2\pi^{2}r^{2}} \int dk k^{2}P(k, R_{\rm G}) \left[-{4 \over 3} f + {8 \over 15} f^{2}\right] + \\
& & + {1 \over 2\pi^{2}r^{4}} \int dk P(k, R_{\rm G}) {4 f^{2} \over 3} \label{eq:sigphi} \\
\nonumber \langle \tilde{\delta}_{\theta}({\bf r'}) \tilde{\delta}^{\theta}({\bf r}) \rangle|_{{\bf r'} \to {\bf r}} &=& {1 \over 2\pi^{2}} \int dk k^{4}P(k, R_{\rm G}) \left[ {1 \over 3} + {2 \over 15}f + {1 \over 35} f^{2}\right] + {1 \over 2\pi^{2}r^{2}} \int dk k^{2}P(k, R_{\rm G}) \left[-{4 \over 3} f + {8 \over 15} f^{2}\right] + \\
& & +{1 \over 2\pi^{2}r^{4}} \int dk P(k, R_{\rm G}) {4 f^{2} \over 3} \label{eq:sigthe}\end{eqnarray}
\noindent The cross covariance terms are zero in this coordinate system -- for example
\begin{eqnarray} & & \langle \tilde{\delta}_{r}({\bf r'}) \tilde{\delta}^{\phi}({\bf r}) \rangle|_{{\bf r'} \to {\bf r}} = {2 \over \pi} \int dk k^{3}P(k, R_{\rm G}) \sum_{\ell=0}^{\infty} j_{\ell}(kr) j'_{\ell}(kr) \sum_{m=-\ell}^{\ell} (im) Y_{\ell m}(\hat{r}) Y^{*}_{\ell m}(\hat{r}) = 0 . \end{eqnarray}
\noindent Similarly
\begin{equation} \langle \tilde{\delta}_{\theta}({\bf r'}) \tilde{\delta}^{\phi}({\bf r}) \rangle|_{{\bf r'} \to {\bf r}} = \langle \tilde{\delta}_{r}({\bf r'}) \tilde{\delta}^{\theta}({\bf r}) \rangle|_{{\bf r'} \to {\bf r}} = 0 .\end{equation}
\noindent Hence in this coordinate system, the gradient cumulant tensor $\langle \tilde{\delta}_{i}\tilde{\delta}^{j} \rangle$ is diagonal. There is an additional correlation not present for a homogeneous field -- the vector $\langle \tilde{\delta} \tilde{\delta}_{i}\rangle$ has a single non-zero component
\begin{eqnarray} \nonumber \langle \tilde{\delta}({\bf r'}) \tilde{\delta}_{r}({\bf r}) \rangle|_{{\bf r'} \to {\bf r}} &=& {1 \over 2 \pi^{2}} \int dk k^{3} P(k, R_{\rm G}) \sum_{\ell} (2\ell+1) \left[ j_{\ell}(kr) - f \left( j''_{\ell}(kr) + {2 \over kr} j'_{\ell}(kr) \right) \right] \\
\nonumber & & \hspace{10mm} \times \left[ j'_{\ell}(kr) - f \left( j'''_{\ell}(kr) + {2 \over kr} j''_{\ell}(kr) - {2 \over k^{2}r^{2}}j'_{\ell}(kr) \right) \right] \\
&=& - {4 f^{2} \over 3 r^{3}} {1 \over 2\pi^{2}} \int dk P(k, R_{\rm G})
\end{eqnarray}
\noindent There are two crucial differences between this scenario and the previous plane parallel calculation in \cite{Appleby_2019} -- the cumulants are now explicitly functions of the position on the manifold at which they are estimated and they are no longer defined over $\Rspace^{3}$ since we excise the $r=0$ point. Both are consequences of the inhomogeneous nature of the redshift space distortion signal. In each of the cumulants ($\ref{eq:sig0}$-$\ref{eq:sigthe}$), the first term on the right hand side corresponds to the plane parallel limit, and the remaining terms are corrections that are fractionally suppressed by $\sigma_{0}^{2}/(\sigma_{1}^{2} r^{2})$ and $\sigma_{-1}^{2}/(\sigma_{1}^{2}r^{4})$ at large distances from the observer. Similarly the vector $\langle \tilde{\delta} \tilde{\delta}_{i} \rangle$ has asymptotic behaviour $\langle \tilde{\delta}\tilde{\delta}_{i}\rangle \to 0$ as $\sigma_{-1}^{2}/(\sigma_{0}\sigma_{1}r^{3}) \to 0$. Hence at large distances from the observer the cumulants approach their constant, plane parallel limits.
To quantify the departure of the cumulants from the plane parallel limit, we numerically evaluate ($\ref{eq:sigr}$) for a typical cold dark matter density field in the linearized limit. Taking cosmological parameters from Table \ref{tab:1}, we generate a linear $\Lambda$CDM matter power spectrum $P(k, R_{\rm G})$ at $z=0$ and use this and $f \simeq \Omega_{\rm m}^{\gamma}$, $\gamma = 6/11$ to numerically reconstruct the plane parallel limit $\tilde{\sigma}_{r, \parallelsum}^{2}$ and radial-dependent correction $\Delta_{r}^{2}$ to the cumulant $\langle \tilde{\delta}_{r}({\bf r}) \tilde{\delta}^{r}({\bf r}) \rangle = \tilde{\sigma}_{r, \parallelsum}^{2} + \Delta_{r}^{2}$, defined as --
\begin{eqnarray} & & \tilde{\sigma}_{r, \parallelsum}^{2} \equiv {1 \over 2 \pi^{2}} \int dk k^{4} P(k, R_{\rm G}) \left[ {1 \over 3} + {2 \over 5}f + {1 \over 7}f^{2} \right] \label{eqn:delta_pp} \\
& & \Delta_{r}^{2}(r) \equiv {1 \over 2\pi^{2}r^{2}} \int dk k^{2} P(k, R_{\rm G}) \left[ {4 f \over 3 } + {8 f^{2} \over 5 } \right] + {1 \over 2\pi^{2}r^{4}} \int dk P(k, R_{\rm G}) {4 f^{2} \over 3} \label{eqn:delta_a}
\end{eqnarray}
\noindent In Figure \ref{fig:1} we present the dimensionless fraction $\Delta_{r}^{2}/\tilde{\sigma}_{r,\parallelsum}^{2}$ as a function of comoving distance $r$ from an observer at $r=0$, and the corresponding redshift (top axis) using the standard $\Lambda$CDM distance-redshift relation with parameters given in Table \ref{tab:1}. We select Gaussian smoothing scales $R_{\rm G} = 15, 35 \, {\rm Mpc}$ (blue, green lines). We only present $\langle \tilde{\delta}_{r}({\bf r}) \tilde{\delta}^{r}({\bf r}) \rangle$, as this is representative of the other cumulants.
The figure shows that the coordinate dependent corrections to the cumulant are negligible for $r \gg R_{\rm G}$, and that for cosmological density fields that occupy a redshift domain $z > 0.05$ the radial cumulant is practically equal to its plane parallel limit $\langle \tilde{\delta}_{r}({\bf r}) \tilde{\delta}^{r}({\bf r}) \rangle \simeq \tilde{\sigma}_{r,\parallelsum}^{2}$. Conversely, for $r \lesssim R_{\rm G}$ the $\Delta_{r}^{2}$ term is the dominant contribution to the cumulant, rendering it strongly position dependent. In this regime the cumulants grow without bound as $r \to 0$, so there is always a region for which the field $\tilde{\delta}$ cannot be considered perturbatively small. However, the region $r \lesssim R_{\rm G}$ is not typically included in cosmological density field reconstructions, and the plane parallel limit of the cumulants is very accurate for our purposes.
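The decay of the radial correction can be sketched with a few lines of quadrature. The following uses a toy power spectrum $P(k) \propto k$ with Gaussian smoothing, an illustrative stand-in for the $\Lambda$CDM spectrum used in Figure \ref{fig:1} (the values of $f$ and $R_{\rm G}$ are likewise illustrative), and evaluates ($\ref{eqn:delta_pp}$) and ($\ref{eqn:delta_a}$):

```python
import numpy as np
from scipy.integrate import quad

f, R_G = 0.53, 15.0            # illustrative growth rate and smoothing scale [Mpc]
P = lambda k: k * np.exp(-k ** 2 * R_G ** 2)   # toy smoothed spectrum W^2(k R_G) P(k)

# (1 / 2 pi^2) * integral of k^n P(k, R_G) dk
moment = lambda n: quad(lambda k: k ** n * P(k), 0, np.inf)[0] / (2 * np.pi ** 2)

# plane-parallel cumulant sigma_{r,pp}^2
sigma_pp2 = moment(4) * (1.0 / 3 + 2 * f / 5 + f ** 2 / 7)

def delta_r2(r):
    """Radial correction Delta_r^2(r) to <delta_r delta^r> at distance r."""
    return (moment(2) * (4 * f / 3 + 8 * f ** 2 / 5) / r ** 2
            + moment(0) * (4 * f ** 2 / 3) / r ** 4)

for r in (15.0, 45.0, 150.0):
    print(r, delta_r2(r) / sigma_pp2)   # correction decays as r^{-2} at large r
```

Consistent with the figure, the fractional correction is of order unity for $r \sim R_{\rm G}$ and falls below the percent level by $r \sim 10 R_{\rm G}$.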
After calculating the cumulants $\langle\tilde{\delta}_{i} \tilde{\delta}^{j} \rangle$, $\langle \tilde{\delta} \tilde{\delta}_{i} \rangle$, $\langle \tilde{\delta}^{2} \rangle$, we can now estimate the ensemble average $\langle w_{i}{}^{j} \rangle$ --
\begin{equation}\label{eq:fens} \langle w_{i}{}^{j} \rangle = {1 \over 6} \int \Phi(X,\Sigma(r)) {\tilde{\delta}_{i}\tilde{\delta}^{j} \over |\nabla \tilde{\delta}|} \delta_{D}(\tilde{\delta}-\delta_{t}) dX
\end{equation}
\noindent where $\Phi(X,\Sigma(r))$ is the probability distribution of the variables $X$. The array $X$ denotes any combination of the stochastic fields ($\tilde{\delta},\tilde{\delta}_r,\tilde{\delta}_{\theta},\tilde{\delta}_{\phi}$) to which $w_{i}{}^{j}$ is sensitive. $\Sigma$ is a square matrix whose dimension is given by the number of components of $X$.
We use the fact that $\tilde{\delta}_{\theta}$ and $\tilde{\delta}_{\phi}$ are uncorrelated with $\tilde{\delta}$ and $\tilde{\delta}_{r}$ and one another, and their variances are equal as given by equations ($\ref{eq:sigphi},\ref{eq:sigthe}$). Furthermore, if the density field is statistically isotropic on the two-sphere it suffices to calculate $\langle w_{\theta}{}^{\theta} + w_{\phi}{}^{\phi} \rangle$, and then halve this value to obtain the individual elements. To estimate $\langle w_{\theta}{}^{\theta} + w_{\phi}{}^{\phi} \rangle$ and $\langle w_{r}{}^{r} \rangle$ we can use the variables $X = (\tilde{\delta}, \tilde{\delta}_{r}, y)$, where $y=\sqrt{\tilde{\delta}_{\theta}\tilde{\delta}^{\theta}+ \tilde{\delta}_{\phi}\tilde{\delta}^{\phi}}$ and the variances $\langle \tilde{\delta}_{\phi}\tilde{\delta}^{\phi} \rangle$ and $\langle \tilde{\delta}_{\theta}\tilde{\delta}^{\theta} \rangle$ are given by equations (\ref{eq:sigphi}) and (\ref{eq:sigthe}) respectively. The quantity $y$ is Rayleigh distributed and uncorrelated with $\tilde{\delta}$ and $\tilde{\delta}_{r}$. The fields $\tilde{\delta}$ and $\tilde{\delta}_{r}$ are Gaussian random variables with non-zero correlations
\begin{equation} \hat{\Sigma}(r) \equiv \left( \begin{tabular}{cc}
$\langle \tilde{\delta}^{2} \rangle$ & $\langle \tilde{\delta} \tilde{\delta}_{r} \rangle$ \\
$\langle \tilde{\delta} \tilde{\delta}_{r} \rangle$ & $\langle \tilde{\delta}_{r} \tilde{\delta}^{r} \rangle$
\end{tabular} \right)
\end{equation}
\noindent Each term in $\hat{\Sigma}$ is non-zero and a function of $r$, but in the limits $r \gg \sigma_{0}/\sigma_{1}$ and $r \gg \sqrt{\sigma_{-1}/\sigma_{1}}$, $\hat{\Sigma}$ approaches a diagonal form with constant components -- the plane parallel limit of \cite{Appleby_2019}. In the same limit the Rayleigh distribution for $y$ becomes independent of the radial coordinate. Defining ${\bf d} = (\tilde{\delta}, \tilde{\delta}_{r})$ and $X = ({\bf d}, y)$, the probability distribution is
\begin{equation}\label{eq:pdf} \Phi({\bf d},y,\Sigma (r)) = {y \over \sigma_{y}^{2}\sqrt{(2\pi)^{2}|\hat{\Sigma}|}} \exp\left[- {1 \over 2} {\bf d}^{T} \hat{\Sigma}^{-1} {\bf d} - {y^{2} \over 2\sigma_{y}^{2}} \right]
\end{equation}
\noindent where $\sigma_{y}^{2} = \langle \tilde{\delta}_{\theta}\tilde{\delta}^{\theta}\rangle = \langle \tilde{\delta}_{\phi}\tilde{\delta}^{\phi} \rangle$. Although we cannot perform the integral in ($\ref{eq:fens}$) analytically, $\langle w_{i}{}^{j} \rangle$ can be numerically estimated for any $r$. In the regime $r^{2} \gg \sigma_{0}^{2}/\sigma_{1}^{2}$ and $r^{4} \gg \sigma_{-1}^{2}/\sigma_{1}^{2}$ we can use the plane parallel limit calculated in \citet{Appleby_2019} as an excellent approximation.
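Although the integral must be evaluated numerically, it is straightforward to estimate by Monte Carlo in the plane parallel limit (diagonal $\hat{\Sigma}$). The following Python sketch draws correlated Gaussian samples $(\tilde{\delta}, \tilde{\delta}_{r})$ and a Rayleigh-distributed $y$, and applies a discretized Dirac delta of width $\epsilon$; the variances and correlation passed in are placeholder inputs, not the cumulants computed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_w(nu, sig_d=1.0, sig_r=1.0, sig_y=1.0, rho=0.0, n=2_000_000, eps=0.05):
    """Monte Carlo estimate of <w_r^r> and <w_th^th + w_ph^ph> at threshold nu.

    In the plane parallel limit (delta, delta_r) are Gaussian with correlation
    rho, and y = sqrt(dth^2 + dph^2) is Rayleigh distributed with scale sig_y.
    The inputs are placeholder cumulants for illustration."""
    d = sig_d * rng.standard_normal(n)
    dr = rho * sig_r * d / sig_d + np.sqrt(1.0 - rho**2) * sig_r * rng.standard_normal(n)
    y = rng.rayleigh(sig_y, n)
    grad = np.sqrt(dr**2 + y**2)
    sel = np.abs(d - nu * sig_d) < eps / 2          # discretized Dirac delta
    w_rr = np.where(sel, dr**2 / grad, 0.0).mean() / (6.0 * eps)
    w_ang = np.where(sel, y**2 / grad, 0.0).mean() / (6.0 * eps)
    return w_rr, w_ang
```

For an isotropic field ($\sigma_{r} = \sigma_{y}$, $\rho = 0$) the angular sum is twice the radial component, and a non-zero $\rho$ modifies the $\nu$ dependence away from the pure $e^{-\nu^{2}/2}$ form, as discussed below.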
In Figure \ref{fig:ens} we present the ensemble average ($\ref{eq:fens}$) using the probability distribution ($\ref{eq:pdf}$) for fixed $R_{\rm G} = 20 \, {\rm Mpc}$, using the radially dependent cumulants in $\hat{\Sigma}(r)$ and $\sigma_{y}^{2}(r)$. The yellow/blue/green curves correspond to the value of the ensemble average at $r=10, 25, 50 \, {\rm Mpc}$ respectively, and the solid/dashed curves are the $(r,r)$ and $(\theta,\theta)$ components. The $(\phi,\phi)$ components are always equal to the $(\theta,\theta)$ elements and so are not plotted. The grey lines correspond to the plane parallel limit of the ensemble average, obtained by taking $r$ to be an arbitrarily large value $r = 10^{3} \, {\rm Mpc}$. For $r < R_{\rm G}$, the ensemble average significantly departs from the standard Gaussian expectation value (cf. yellow and blue curves). This departure arises because the $r$ dependent terms in the cumulants dominate for $r < R_{\rm G}$; the shape change in the $(r,r)$ component is due to the cross correlation contribution $\langle \tilde{\delta}\tilde{\delta}_{r} \rangle \neq 0$. For $r > R_{\rm G}$, the components approach the plane parallel limit (cf. green curves).
The shape of the $\langle w_{i}{}^{j} \rangle$ curves depends on the correlation properties of the field. When $\langle \delta \delta_{i} \rangle = 0$, the components of $\langle w_{i}{}^{j} \rangle$ possess the well-known $\nu$ functional dependence $e^{-\nu^{2}/2}$. Any correlation between the field and its gradient will modify the shape of these statistics, even for a Gaussian random field. Practically, it would not be feasible to extract the extremely non-standard $\nu$ dependence presented in Figure \ref{fig:ens} for $r < R_{\rm G}$ from large scale structure catalogs, because we measure the volume average $\bar{w}_{i}{}^{j}$, and for $r < R_{\rm G}$ the volume is insufficient to obtain the fair sample required to estimate $\langle w_{i}{}^{j} \rangle$. Still, we can potentially probe small perturbations to the shape of the Minkowski functionals and tensors arising from the $\langle \delta \delta_{i}\rangle$ field correlation.
\subsection{Volume Average $\bar{w}_{i}{}^{j}$}
\label{sec:volav}
Next we consider what is actually extracted from cosmological data -- the volume average of $w_{i}{}^{j}$. We assume that the continuous field $\tilde{\delta}$ has been sampled at a finite set of points; specifically we take $\tilde{\delta}$ evaluated on a Cartesian grid in a cubic volume. The volume of the cube is $L^{3} \, {\rm Mpc}^{3}$ and each pixel occupies volume $\Delta^{3} \, {\rm Mpc}^{3}$. We denote discretized fields with subscript brackets $\{...\}$ indicating pixel dependence, so $\tilde{\delta}_{\{m,n,p\}}$ is the value of the field in the $(m,n,p)$ pixel in $(x,y,z)$ coordinates. We define the Cartesian basis vectors as ${\bf e}_{x}$, ${\bf e}_{y}$, ${\bf e}_{z}$, and spherical basis vectors ${\bf e}_{r}$, ${\bf e}_{\theta}$, ${\bf e}_{\phi}$ in a coordinate system with respect to an observer located at the center of the cube. At each grid point we construct the gradient of the field in Cartesian coordinates using a second order accurate finite difference scheme. Then $w_{i}{}^{j}$ at each point on the grid is given by
\begin{equation} w_{i}{}^{j}{}_{\{m,n,p\}} = {1 \over 6 } {\tilde{\delta}_{i}{}_{\{m,n,p\}}\tilde{\delta}^{j}{}_{\{m,n,p\}} \over |\nabla \tilde{\delta}_{\{m,n,p\}}|} \delta_{D}(\tilde{\delta}_{\{m,n,p\}}-\delta_{t}) ,
\end{equation}
\noindent where the Dirac delta function is also discretized \citep{Schmalzing:1997aj}
\begin{equation}\delta_{D}(\tilde{\delta}_{\{m,n,p\}} - \delta_{t}) = \begin{cases}
\epsilon^{-1} & {\rm if} \quad |\tilde{\delta}_{\{m,n,p\}}-\delta_{t}| < \epsilon/ 2 \\
0 & {\rm otherwise} .
\end{cases}
\end{equation}
\noindent $\epsilon$ is a small parameter that we fix as $\epsilon = 10^{-2}$ in what follows. There is a discretization error that comes with this approximation \citep{Lim_2012, Chingangbam:2017uqv}, but we neglect this subtlety. The function $\delta_{D}(\tilde{\delta}_{\{m,n,p\}} - \delta_{t})$ selects a subset of pixels of roughly equal field value which are the points on $\Mspace$ at which we sample the vector field $\tilde{\delta}_{i}{}_{\{m,n,p\}}$ for each threshold $\delta_{t}$. Since the gradient of the field $\tilde{\delta}_{i}$ is approximately uncorrelated with $\tilde{\delta}$ point-wise on the manifold, this sampling generates an unbiased estimate of the underlying vector field $\tilde{\delta}_{i}$ for every $\delta_{t}$. The only caveat is that in spherical redshift space, $\tilde{\delta}_{r}$ is weakly correlated with $\tilde{\delta}$ but the correlation is negligible for $r^{4} \gg \sigma_{-1}^{2}/\sigma_{1}^{2}$. The quantity $w_{i}{}^{j}{}_{\{m,n,p\}}$ is a tensor evaluated at a particular point on the manifold (specified by the ${}_{\{m,n,p\}}$ pixel), and $\bar{w}_{i}{}^{j}$ is their volume average.
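As an illustration, the per-pixel tensor and its sum over the selected pixels can be written in a few lines of Python. This is a sketch, not the analysis code: `np.gradient` stands in for the second order accurate finite difference scheme, Euclidean transport is implicit (Cartesian components are summed directly), and the pixel volume factors cancel.

```python
import numpy as np

def w_tensor_volume_average(delta, delta_t, eps=1e-2, spacing=1.0):
    """Sketch of the per-pixel tensor w_i^j and its naive (Euclidean) volume
    average on a Cartesian grid. The discretized Dirac delta selects pixels
    within eps/2 of the threshold delta_t."""
    gx, gy, gz = np.gradient(delta, spacing)      # second order central differences
    grad = np.sqrt(gx**2 + gy**2 + gz**2)
    sel = (np.abs(delta - delta_t) < eps / 2) & (grad > 0)  # discretized delta_D
    g = np.stack([gx[sel], gy[sel], gz[sel]])     # shape (3, N_selected)
    # sum of g_i g_j / |grad| over selected pixels, normalized by 6 eps V
    w = (g[:, None, :] * g[None, :, :] / grad[sel]).sum(axis=-1)
    return w / (6.0 * eps * delta.size)
```

For a statistically isotropic field the output is approximately diagonal with equal diagonal elements, which provides a quick sanity check of any implementation.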
The concept of a volume average of non-Cartesian tensors defined at different points on a manifold is ambiguous. To proceed, we should define a fiducial pixel $\{a,b,c\}$ at which we take the volume average, and a choice of path by which we transport each $w_{i}{}^{j}{}_{\{m,n,p\}}$ to $\{a,b,c\}$. We write the volume average as
\begin{equation}\label{eq:barf} \bar{w}_{i}{}^{j} (\gamma, a,b,c) = {1 \over 6V} \sum_{m,n,p} \Delta^{3} \delta_{D}(\tilde{\delta}_{\{m,n,p\}}-\delta_{t}) {{}^{\gamma}\tilde{\delta}_{i}{}_{\{m,n,p\}}{}^{\gamma}\tilde{\delta}^{j}{}_{\{m,n,p\}} \over |\nabla \tilde{\delta}_{\{m,n,p\}}|}
\end{equation}
\noindent where the $\gamma$ superscript ${}^{\gamma}\tilde{\delta}_{i}$ denotes the transport of $\tilde{\delta}_{i}$ from $\{m,n,p\}$ to $\{a,b,c\}$ along a path $\gamma$ and
\begin{equation} V = \sum_{m,n,p} \Delta^{3} \end{equation}
\noindent We do not use all pixels in the cubic volume, but rather $\sum_{m,n,p}$ represents all pixels that lie in some radial range $r_{\rm min} \leq r \leq r_{\rm max}$, where $r_{\rm min} > R_{\rm G}$ and $r_{\rm max} < L/2$ are selected to ensure that we cut pixels close to the central observer and in the vicinity of the boundary of the box.
The choice of path $\gamma$ is completely arbitrary. However, the manifold on which the field is defined is $\mathbb{S}^{2} \times \mathbb{R}_{> 0}$ which is geodesically incomplete with respect to Euclidean paths. Since we are adopting a spherical coordinate system and anticipate a preferred signal in the radial direction, it behooves us to select a transport that preserves the radial basis vector. A natural choice that achieves this is great arc transport on the two-sphere from the angular location of $\{m,n,p\}$ to $\{a,b,c\}$ followed by a radial translation to the same distance from the central observer. Great arc transport from $\{m,n,p\}$ to $\{a,b,c\}$ rotates the spherical basis vectors ${\bf e}_{r} \to {\bf e}'_{r}$, ${\bf e}_{\theta} \to {\bf e}'_{\theta}$, ${\bf e}_{\phi} \to {\bf e}'_{\phi}$ such that ${\bf e'}_{r} = {\bf e}_{r}$ but ${\bf e'}_{\theta}$ and ${\bf e'}_{\phi}$ become mixed relative to ${\bf e}_{\theta}$, ${\bf e}_{\phi}$\footnote{Parallel transport along geodesics on $\mathbb{S}^{2}$ preserves the orientation of the tangent space relative to the tangent vector of the transport. The mixing described here arises due to the fact that the angle subtending a great arc tangent vector and the basis vectors ${\bf e}_{\theta}$, ${\bf e}_{\phi}$ varies continuously along the path.}. The mixing of spherical components is unimportant, because we are assuming that the field is isotropic on $\mathbb{S}^{2}$. We explicitly present the rotation of the spherical basis vectors -- relative to great arc tangents -- in Appendix \ref{sec:appen3}.
To perform this transport for all pixels that satisfy $\delta_{D}(\tilde{\delta}_{\{m,n,p\}} - \delta_{t}) \neq 0$, we define ${\bf \hat{T}_1}$ and ${\bf \hat{T}_2}$ as unit vectors pointing to pixels $\{m,n,p\}$ and $\{a,b,c\}$ from the `observer' at $r=0$, and rotate the vector $\tilde{\delta}_{i \, \{m,n,p\}}$ by the angle $\theta$ satisfying $\cos\theta = {\bf \hat{T}_1} \cdot {\bf \hat{T}_2}$ about the axis defined by ${\bf \hat{T}_1} \times {\bf \hat{T}_2}$. Such a rotation can be used to describe great arc transport. The second stage of transport, along ${\bf e}_{r}$, is trivial and undertaken implicitly. Finally the transported, Cartesian gradient $\tilde{\delta}'_{i \, \{m,n,p\}}$, now defined at $\{a,b,c\}$, is converted into the spherical basis via a coordinate transformation. Note that we used a Cartesian basis to define $\tilde{\delta}_{i}$ and performed a coordinate transformation as a final step, but one could instead define $\tilde{\delta}_{i}$ in a spherical basis then rotate from $\{m,n,p\}$ to $\{a,b,c\}$. The final result will not depend on the ordering of these operations.
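This rotation is a standard construction and can be realised with Rodrigues' formula; a minimal Python sketch, including the degenerate antipodal case handled with an arbitrary perpendicular axis, is:

```python
import numpy as np

def great_arc_transport(v, T1, T2):
    """Rotate vector v from direction T1 to direction T2 about T1 x T2, by the
    angle theta with cos(theta) = T1 . T2 (Rodrigues' rotation formula).
    This realises great arc transport: it maps e_r(T1) onto e_r(T2)."""
    T1 = T1 / np.linalg.norm(T1)
    T2 = T2 / np.linalg.norm(T2)
    axis = np.cross(T1, T2)
    s, c = np.linalg.norm(axis), np.dot(T1, T2)   # sin(theta), cos(theta)
    if s < 1e-12:
        if c > 0:
            return v.copy()                       # T1 = T2: nothing to do
        # Antipodal points: the axis is ambiguous. Any axis k perpendicular
        # to T1 gives the pi rotation R(v) = 2 (k.v) k - v.
        k = np.cross(T1, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(k) < 1e-12:
            k = np.cross(T1, np.array([0.0, 1.0, 0.0]))
        k /= np.linalg.norm(k)
        return 2.0 * np.dot(k, v) * k - v
    k = axis / s
    return v * c + np.cross(k, v) * s + k * np.dot(k, v) * (1.0 - c)
```

The map preserves the norm of $v$ and its radial component (the projection onto ${\bf \hat{T}_1}$ equals the transported projection onto ${\bf \hat{T}_2}$), while the angular components mix exactly as described in the footnote.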
If we used Euclidean paths to transport $\tilde{\delta}_{i}$ to a common point on the manifold (ignoring the geodesic incompleteness), then we would obtain a completely different result. In this case, all three spherical basis vectors ${\bf e}_{r}$, ${\bf e}_{\theta}$, ${\bf e}_{\phi}$ would mix, and $\bar{w}_{i}{}^{j}$ would depend entirely on the volume over which the field is defined.
The fact that the choice of transport affects the volume average is troubling, because the ensemble average is defined at a point on the manifold and requires no notion of transport. However, we expect that our choice is appropriate for the very specific physical scenario considered in this work. With our path selection, the radial basis vector is preserved and although the angular derivatives become mixed, we are working with a field that is isotropic on $\Sspace^{2}$.
Other choices of path could be used instead -- for example transport along lines of latitude and longitude. This choice is not angle preserving -- lines of latitude are not generally geodesics. Ultimately there is no unique path definition, but for a field that is isotropic on $\Sspace^{2}$ these details are not important. Also the point on $\Sspace^{2}$ at which we take the average will not impact the volume average for an idealised field that is isotropic on $\Sspace^{2}$. Anisotropic fields on $\Sspace^{2}$ will be considered elsewhere, as many of these subtleties are likely to become problematic in the absence of this symmetry.
We would like to equate the volume and ensemble averages of $w_{i}{}^{j}$, defined in equations ($\ref{eq:barf}$) and ($\ref{eq:fens}$) respectively\footnote{Since we measure $\bar{w}_{i}{}^{j} \equiv W^{0,2}_{1}$ from a cosmological density field, we should not compare the measurement to the ensemble average of the Minkowski tensor $\langle W^{0,2}_{1} \rangle$ but rather $\langle w_{i}{}^{j} \rangle$. When the field is statistically homogeneous, we can write $\langle w_{i}{}^{j} \rangle = \langle W^{0,2}_{1} \rangle$ and there is no distinction to be made.}. As justification, we can appeal to the weak law of large numbers -- for a sequence of identically distributed variables $X_{n}$ we define an average
\begin{equation} \label{eq:sm} \bar{X} = {1 \over N} \sum_{n=1}^{N} X_{n} .\end{equation}
\noindent Then if the covariance between variables $(X_{n},X_{n+m})$ asymptotes to zero as $m \to \infty$, the sample mean $\bar{X}$ in ($\ref{eq:sm}$) approaches the underlying expectation value $E(X)$ in the limit $N \to \infty$. In our example, the summed variables are the combination on the right hand side of equation ($\ref{eq:barf}$). As we take the volume $V \to \infty$, we expect that the pixels will provide a fair sample and the correlation functions of $\tilde{\delta}$ and its gradient satisfy $\zeta(r) \to 0$ as $r \to \infty$. This suggests that the ensemble and sample averages will converge, but in realistic scenarios we deal with finite volumes, and furthermore the fields $\tilde{\delta}$, $\tilde{\delta}_{i}$ are non-Gaussian in the low redshift universe. It is not clear that the sample and ensemble averages converge when higher point correlations are present, and if so how quickly they do as the volume increases \citep{10.1046/j.1365-8711.2003.06130.x}. With our choice of transport, we do not expect the volume average to be sensitive to the point on the sphere at which we take the average, and we will take density fields located at $r \gg R_{\rm G}$ so the radial dependence of the cumulants should be irrelevant. We therefore expect that for this particular physical scenario, our choice of coordinate system and transport will allow us to use the approximation $\bar{w}_{i}{}^{j} \simeq \langle w_{i}{}^{j} \rangle$. We confirm this numerically in Section \ref{sec:num}. However, before moving on to the numerics we present a counter example in Section \ref{sec:cart} for which the notion of ergodicity (even approximate) fails completely.
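As a toy illustration of this appeal to the weak law of large numbers (separate from the analysis above), the sample mean of an AR(1) sequence, whose covariance decays geometrically with separation, converges to the true expectation value:

```python
import numpy as np

def ar1_sample_mean(n, phi=0.9, seed=4):
    """Sample mean of x_{t+1} = phi * x_t + noise. The covariance between
    x_t and x_{t+m} decays as phi^m, so the weak law of large numbers
    applies and the sample mean approaches E(x) = 0 as n grows."""
    rng = np.random.default_rng(seed)
    x = 0.0
    total = 0.0
    for eps in rng.standard_normal(n):
        x = phi * x + eps
        total += x
    return total / n
```

The correlations slow the convergence relative to independent samples (the effective sample size is reduced by a factor $(1-\phi)/(1+\phi)$), which mirrors the role of the field correlation length relative to the survey volume.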
\section{Spherical Redshift Space, Cartesian Coordinate System}
\label{sec:cart}
In this section, we calculate the cumulants of the spherically redshift space distorted field in Cartesian coordinates $(x,y,z)$, following the methodology of \citet{Castorina:2017inr}. The calculation is extremely tedious, and we simply state some important steps and the results in the main body of the text. The density field in redshift space $\tilde{\delta}({\bf r})$ can be written in terms of its real-space counterpart $\delta({\bf r})$ via
\begin{equation}
\tilde{\delta}({\bf r}) = \left(1+\frac{f}{3}\right)\int \frac{d^3k}{(2\pi)^3}\mathcal{L}_0(\hat{k}\cdot\hat{r}) e^{i{\bf k}\cdot{\bf r}}\delta({\bf k}) + \frac{2f}{r}\int \frac{d^3k}{(2\pi)^3}\frac{\mathcal{L}_1(\hat{k}\cdot\hat{r})}{ik} e^{i{\bf k}\cdot{\bf r}}\delta({\bf k}) + \frac{2f}{3}\int \frac{d^3k}{(2\pi)^3}\mathcal{L}_2(\hat{k}\cdot\hat{r}) e^{i{\bf k}\cdot{\bf r}}\delta({\bf k}),
\label{eqn:del_rsd_exp}
\end{equation}
\noindent where the $\mathcal{L}_{\ell}$ are Legendre polynomials.
We express $\textbf{k}$ as $k_x {\bf e}_x + k_y {\bf e}_y + k_z {\bf e}_z$ in Cartesian coordinates. The differentiation of the first term on the right hand side in equation (\ref{eqn:del_rsd_exp}) with respect to $x$ gives,
\begin{eqnarray}
\nonumber \left(1+\frac{f}{3}\right)\int \frac{d^3k}{(2\pi)^3} \frac{\partial}{\partial x}\mathcal{L}_0(\hat{k}\cdot\hat{r}) e^{i{\bf k}\cdot{\bf r}}\delta({\bf k}) &=& \left(1+\frac{f}{3}\right)\int \frac{d^3k}{(2\pi)^3}\delta({\bf k}) \frac{\partial}{\partial x} e^{i{\bf k}\cdot{\bf r}} \\
&=& \left(1+\frac{f}{3}\right)\int \frac{d^3k}{(2\pi)^3} (ik\sin\theta\cos\phi) e^{i{\bf k}\cdot{\bf r}} \delta({\bf k}).
\end{eqnarray}
We treat the other terms in the right hand side of equation (\ref{eqn:del_rsd_exp}) in a similar way and substitute the results into $\left< \frac{\partial}{\partial x}\tilde{\delta}({\bf r}) \frac{\partial}{\partial x'}\tilde{\delta}({\bf r'}) \right>\bigg\rvert_{{\bf r'} \to {\bf r}}$. We then use the relation (\ref{app:b2}) and ${\cal L}_{\ell}({\bf \hat{x}} \cdot {\bf \hat{x}})=1$, along with the result
\begin{equation}
\left<\delta({\bf k_1})\delta({\bf k_2})\right> = \left( 2\pi \right)^3 \delta_D^{(3)}({\bf k_1}+{\bf k_2})P(k) ,
\end{equation}
\noindent to get
\begin{eqnarray}
\nonumber \left< \frac{\partial}{\partial x}\tilde{\delta}({\bf r}) \frac{\partial}{\partial x'}\tilde{\delta}({\bf r'}) \right>\bigg\rvert_{{\bf r'} \to {\bf r}} &=& \frac{4f^2}{3r^4}\int \frac{\textrm{d}k P(k, R_{\rm G})}{2\pi^2}+\left[ \frac{4f}{3r^4}(2x^2-r^2) +\frac{4f^2}{5r^4}(r^2+x^2) \right] \int \frac{k^2\textrm{d}kP(k, R_{\rm G})}{2\pi^2} \\
& & +\left[ \frac{1}{3} + \frac{2f}{15r^2}(r^2+2x^2) +\frac{f^2}{35r^2}(r^2+4x^2) \right]\int \frac{k^4\textrm{d}kP(k, R_{\rm G})}{2\pi^2} .
\end{eqnarray}
\noindent Similarly,
\begin{eqnarray}
\nonumber \left< \frac{\partial}{\partial y}\tilde{\delta}({\bf r}) \frac{\partial}{\partial y'}\tilde{\delta}({\bf r'}) \right>\bigg\rvert_{{\bf r'} \to {\bf r}} &=& \frac{4f^2}{3r^4}\int \frac{\textrm{d}kP(k, R_{\rm G})}{2\pi^2}+\left[ \frac{4f}{3r^4}(2y^2-r^2) +\frac{4f^2}{5r^4}(r^2+y^2) \right] \int \frac{k^2\textrm{d}kP(k, R_{\rm G})}{2\pi^2} \\
& & +\left[ \frac{1}{3} + \frac{2f}{15r^2}(r^2+2y^2) +\frac{f^2}{35r^2}(r^2+4y^2) \right]\int \frac{k^4\textrm{d}kP(k, R_{\rm G})}{2\pi^2} ,
\end{eqnarray}
\begin{eqnarray}
\nonumber \left< \frac{\partial}{\partial z}\tilde{\delta}({\bf r}) \frac{\partial}{\partial z'}\tilde{\delta}({\bf r'}) \right>\bigg\rvert_{{\bf r'} \to {\bf r}} &=& \frac{4f^2}{3r^4}\int \frac{\textrm{d}kP(k, R_{\rm G})}{2\pi^2}+\left[ \frac{4f}{3r^4}(2z^2-r^2) +\frac{4f^2}{5r^4}(r^2+z^2) \right] \int \frac{k^2\textrm{d}kP(k, R_{\rm G})}{2\pi^2} \\
& & +\left[ \frac{1}{3} + \frac{2f}{15r^2}(r^2+2z^2) +\frac{f^2}{35r^2}(r^2+4z^2) \right]\int \frac{k^4\textrm{d}kP(k, R_{\rm G})}{2\pi^2} ,
\end{eqnarray}
\begin{eqnarray}
\left< \frac{\partial}{\partial x}\tilde{\delta}({\bf r}) \frac{\partial}{\partial y'}\tilde{\delta}({\bf r'}) \right>\bigg\rvert_{{\bf r'} \to {\bf r}} &=& \left( \frac{8f}{3} + \frac{4f^2}{5} \right)\frac{xy}{r^4}\int \frac{k^2\textrm{d}kP(k, R_{\rm G})}{2\pi^2} + \left( \frac{4f}{15}+\frac{4f^2}{35} \right)\frac{xy}{r^2}\int \frac{k^4\textrm{d}kP(k, R_{\rm G})}{2\pi^2} ,
\end{eqnarray}
\begin{eqnarray}
\left< \frac{\partial}{\partial x}\tilde{\delta}({\bf r}) \frac{\partial}{\partial z'}\tilde{\delta}({\bf r'}) \right>\bigg\rvert_{{\bf r'} \to {\bf r}} &=& \left( \frac{8f}{3} + \frac{4f^2}{5} \right)\frac{xz}{r^4}\int \frac{k^2\textrm{d}kP(k, R_{\rm G})}{2\pi^2} + \left( \frac{4f}{15}+\frac{4f^2}{35} \right)\frac{xz}{r^2}\int \frac{k^4\textrm{d}kP(k, R_{\rm G})}{2\pi^2} .
\end{eqnarray}
\noindent In this coordinate system, the cumulant tensor $\langle \tilde{\delta}_{i}\tilde{\delta}^{j} \rangle$ is not diagonal. We visualize the Cartesian cumulants in Figure \ref{fig:4a}. We smooth the power spectrum with a Gaussian kernel of scale $R_{\rm G} = 20 \, {\rm Mpc}$, select a fixed radial distance $r=200 \, {\rm Mpc}$ from the central observer and present Mollweide projections of the dimensionless quantity $\langle \tilde{\delta}_{i}\tilde{\delta}^{j} \rangle/\sigma_{1}^{2}$ on the sphere. The top row panels show the diagonal $(x,x)$, $(y,y)$ and $(z,z)$ components (left to right), while the bottom row panels show the $(x,y)$, $(x,z)$ and $(y,z)$ components (left to right). The diagonal elements present a series of dipoles on the sphere, and the off-diagonal elements are quadrupolar. All elements are generically non-zero and vary significantly with spatial position. This is in contrast with the cumulants in a spherical coordinate system, which are isotropic on the sphere and vary only weakly with $r$.
The spatial dependence of $\langle \tilde{\delta}_{i}\tilde{\delta}^{j} \rangle$ means that the vectors $\tilde{\delta}_{i}$ located at different points of $\Rspace^{3}$ are not identically distributed. Given $\langle \tilde{\delta}^{2} \rangle$, $\langle \tilde{\delta}\tilde{\delta}_{j}\rangle$ and $\langle \tilde{\delta}_{i}\tilde{\delta}^{j}\rangle$, we can construct the ensemble average $\langle w_{i}{}^{j}\rangle$ in this coordinate system
\begin{equation}\label{eq:hhm} \langle w_{i}{}^{j} \rangle = {1 \over 6} \int \Phi(X,\Sigma(x,y,z)) {\tilde{\delta}_{i}\tilde{\delta}^{j} \over |\nabla \tilde{\delta}|} \delta_{D}(\tilde{\delta}-\delta_{t}) dX ,
\end{equation}
\noindent where $\Sigma(x,y,z)$ is given by
\begin{equation} \Sigma(x,y,z) = \left( \begin{tabular}{cc}
$\langle \tilde{\delta}^{2} \rangle$ & $\langle \tilde{\delta} \tilde{\delta}_{i} \rangle$ \\
$\langle \tilde{\delta} \tilde{\delta}^{j} \rangle$ & $\langle \tilde{\delta}_{i} \tilde{\delta}^{j} \rangle$
\end{tabular} \right) .
\end{equation}
It is clear that the volume average $\bar{w}_{i}{}^{j}$ will not generically be representative of the ensemble average $\langle w_{i}{}^{j} \rangle$ in this coordinate system, due to the coordinate dependence of $\langle w_{i}{}^{j} \rangle$. For example, taking the all-sky spatial average of $w_{i}{}^{j}$ extracted from a field with the particular cumulant pattern in Figure \ref{fig:4a} will yield an isotropic result $\bar{w}_{i}{}^{j} \propto \delta_{i}{}^{j}$ -- we confirm this in the following section\footnote{Simply adding Cartesian components of $w_{i}{}^{j}$ at different points of the manifold to obtain $\bar{w}_{i}{}^{j}$ implicitly assumes Euclidean path transport, but neglects the geodesic incompleteness of the manifold. Regardless, we do not use the Cartesian coordinate system other than to provide an example for which ergodicity fails.}. The volume average in this particular case would incorrectly identify the field as isotropic, because the spatial dependence of the signal in this coordinate system would be washed out by the averaging. The volume and ensemble averages cannot be equated even approximately in this example. This conclusion is not in contradiction with the plane parallel limit, because here we are considering an all-sky average. If we instead took a small patch on the sky and aligned the Cartesian coordinate system with one axis pointing to the patch, then the plane parallel limit could be approximately realised.
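The washing out can be checked directly with a toy pattern: averaging the position-dependent tensor $\hat{r}_{i}\hat{r}_{j}$ (a stand-in for the dipolar/quadrupolar cumulant patterns of Figure \ref{fig:4a}) over the full sky gives the isotropic result $\delta_{i}{}^{j}/3$, erasing all angular structure. A minimal Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw points uniformly on the two-sphere (normalized Gaussian vectors).
u = rng.standard_normal((200_000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)

# All-sky average of the position dependent pattern r^_i r^_j.
# The angular structure is washed out: the result is delta_ij / 3.
M = (u[:, :, None] * u[:, None, :]).mean(axis=0)
```

Any tensor pattern built from even powers of the direction vector averages to an isotropic combination in the same way, which is why the Cartesian volume average misrepresents the anisotropic ensemble average.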
The underlying point is that for tensors and inhomogeneous fields, the volume average can reasonably approximate the ensemble average or completely misrepresent it, depending on the properties of the field and choice of coordinate system, volume and transport path.
\section{Numerical Extraction of Minkowski Tensors in Spherical Redshift Space}
\label{sec:num}
We now confirm numerically some of the results of the previous sections, and furthermore study the conditions under which we can faithfully extract the Kaiser signal from a redshift space distorted, non-Gaussian matter field in the low redshift Universe. The matter density field is assumed to be Gaussian in the early Universe, but the non-linear nature of gravitational collapse couples Fourier modes. This is a scale dependent statement, and by smoothing the late time density field over sufficiently large scales, the standard model of cosmology posits that the density field is perturbatively non-Gaussian. We attempt to extract the Kaiser redshift space distortion signal from the large-scale-averaged density field.
In this work we do not pursue the computational challenges that come with real data, such as galaxy bias, shot noise, complex survey geometries and Malmquist bias -- these issues will be considered elsewhere. When galaxies are scattered radially, the relative volume difference along the line of sight can introduce a spurious radial gradient in the mean density, which must be carefully subtracted. Neglecting these subtleties, we focus specifically on two questions -- can we use the volume average constructed in Section \ref{sec:volav} as an unbiased estimate of the ensemble average derived in Section \ref{sec:ensav}, and over what scales must we smooth the non-Gaussian dark matter field to reproduce the Gaussian limit of these statistics? We also compare the MTs extracted from plane parallel and spherical redshift space distorted fields and confirm that they are indistinguishable for fields occupying cosmological volumes.
To perform these tests, we use two data sets -- initially Gaussian random fields and then dark matter particle distributions that have been gravitationally evolved to $z=0$.
\subsection{Gaussian Random Fields}
\label{sec:grf}
For Gaussian random fields, we start by generating an isotropic and homogeneous field $\delta$ in a periodic cube of side length $L = 1490 \, {\rm Mpc}$ ( $= 1000 \, h^{-1} {\rm Mpc}$), using an input linear $\Lambda$CDM matter power spectrum $P(k,R_{\rm G})$ at $z=0$ with cosmological parameters given in Table \ref{tab:1}. We smooth the field with Gaussian kernel $W(k R_{\rm G}) \propto e^{-k^{2}R_{\rm G}^{2}/2}$. The field is sampled on a Cartesian grid with $N_{\rm p} = 512$ pixels per side, with resolution $\Delta = 2.9 \, {\rm Mpc}$. We then create plane parallel and spherical redshift space distorted fields. For the plane parallel case, we apply the standard operator (cf. equation ($\ref{eq:pp1}$)) to $\delta$, in Fourier space, using $f = \Omega_{\rm m}^{6/11}$ and aligning the RSD correction with the ${\bf e}_{z}$ axis of the box.
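The field generation and plane parallel operator can be sketched as follows. This is illustrative only: the power spectrum is a toy stand-in ($P(k) \propto k\,e^{-k/k_{*}}$, not the $\Lambda$CDM spectrum of Table \ref{tab:1}), and the grid is reduced relative to the analysis.

```python
import numpy as np

def make_fields(npix=64, L=1490.0, Rg=20.0, f=0.52, seed=0):
    """Gaussian random field from a toy power spectrum (stand-in for LCDM),
    smoothed with W = exp(-k^2 Rg^2 / 2), and its plane parallel redshift
    space counterpart delta~(k) = (1 + f kz^2/k^2) delta(k), RSD along e_z."""
    rng = np.random.default_rng(seed)
    k1 = 2.0 * np.pi * np.fft.fftfreq(npix, d=L / npix)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k = np.sqrt(k2)
    Pk = k * np.exp(-k / 0.05)                    # toy spectrum, k in 1/Mpc
    amp = np.sqrt(Pk) * np.exp(-k2 * Rg**2 / 2.0)
    dk = amp * np.fft.fftn(rng.standard_normal((npix,) * 3))
    mu2 = np.divide(kz**2, k2, out=np.zeros_like(k2), where=k2 > 0)
    dk_pp = (1.0 + f * mu2) * dk                  # plane parallel RSD operator
    dk[0, 0, 0] = dk_pp[0, 0, 0] = 0.0            # enforce zero mean
    delta = np.fft.ifftn(dk).real
    delta_pp = np.fft.ifftn(dk_pp).real
    return delta / delta.std(), delta_pp / delta_pp.std()
```

The distorted field is anisotropic: gradient variances along ${\bf e}_{z}$ are enhanced relative to the transverse directions, which is the signal the Minkowski tensors are designed to extract.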
To construct a spherically redshift space distorted field, we generate a second isotropic field $\Omega \equiv \nabla^{-2}\delta$ on the grid, and construct the gradient $\nabla_{i} \Omega$ in the Cartesian coordinate system. Then we infer the radial derivative $\partial_{r}\Omega$ using a standard transformation (we provide our angle conventions explicitly in equation (\ref{eq:other1})). We repeat this procedure on $\partial_{r}\Omega$ to obtain the second derivative $\partial_{rr}\Omega$, and finally define the spherically redshift space distorted density field as
\begin{equation} \tilde{\delta}_{\{m,n,p\}} = \delta_{\{m,n,p\}} + f \left( \partial_{rr}\Omega_{\{m,n,p\}} + {2 \over r}\partial_{r}\Omega_{\{m,n,p\}} \right) .
\end{equation}
This field is masked: $\tilde{\delta}_{\{m,n,p\}}$ is assigned zero value and not used in our analysis if the radial distance of the pixel $\{m,n,p\}$ from the `observer' at the center of the box lies outside the range $100 < r \le 630$ in Mpc units.
We use `all-sky' data, taking the complete $4\pi$ solid angle on $\mathbb{S}^{2}$ relative to the central observer.
\begin{table}[tb]
\begin{center}
\begin{tabular}{||c c ||}
\hline
Parameter \, & Fiducial Value \\ [0.5ex]
\hline\hline
$\Omega_{\rm m}$ & $0.318$ \\
$h$ & $0.671$ \\
$w_{\rm de}$ & $-1$ \\
$n_{\rm s}$ & $0.962$ \\
$\sigma_{8}$ & $0.834$ \\
\hline
\end{tabular}
\caption{\label{tab:1}Fiducial cosmological parameters used in this work, selected to match the fiducial cosmology of the Quijote simulations \cite{Villaescusa-Navarro:2019bje}. }
\end{center}
\end{table}
For each dataset, we calculate the mean $\bar{\delta}$ and variance $\tilde{\sigma}_{0}^{2}$ of the unmasked pixels, and define the zero mean, unit variance field $(\tilde{\delta}_{\{m,n,p\}} - \bar{\delta})/\tilde{\sigma}_{0}$. The volume average $\bar{w}_{i}{}^{j}$ is calculated for each of the three datasets -- isotropic, plane parallel and spherical redshift space distorted. For the isotropic and plane parallel fields, we use the entire box with periodic boundary conditions, and $\bar{w}_{i}{}^{j}$ is defined in the Cartesian coordinate system of the box. From the Cartesian lattice we use a simple second order accurate finite difference scheme to construct the gradients $\delta_{i}$ and $\tilde{\delta}_{i}$, and since we use Euclidean paths to collect tensors in $\Rspace^{3}$ we can simply take a sum of $w_{i}{}^{j}_{\{m,n,p\}}$ pixels without any explicit transport transformation. Hence the volume averages are
\begin{eqnarray} {}^{\rm re}\bar{w}_{i}{}^{j} &=& {1 \over 6V} \sum_{m,n,p} \Delta^{3} \delta_{D}(\delta_{\{m,n,p\}}-\nu) {\delta_{i}{}_{\{m,n,p\}} \delta^{j}{}_{\{m,n,p\}} \over |\nabla \delta_{\{m,n,p\}}|} ,\\
{}^{\rm pp}\bar{w}_{i}{}^{j} &=& {1 \over 6V} \sum_{m,n,p} \Delta^{3} \delta_{D}(\tilde{\delta}_{\{m,n,p\}}-\nu) {\tilde{\delta}_{i}{}_{\{m,n,p\}}\tilde{\delta}^{j}{}_{\{m,n,p\}} \over |\nabla \tilde{\delta}_{\{m,n,p\}}|} ,
\end{eqnarray}
\noindent where the superscripts denote `real space' (re) and `plane parallel' (pp) and $\nu$ is the root mean square normalised threshold $\nu = \delta_{t}/\sigma_{0}$ or $\nu = \delta_{t}/\tilde{\sigma}_{0}$, respectively.
For the spherically distorted field, we follow the procedure outlined in Section \ref{sec:volav} -- we randomly select an unmasked pixel $\{a,b,c\}$ as the fiducial point at which we take the spatial average, with unit vector pointing to the pixel denoted ${\bf \hat{T}_2}$. Then for each pixel selected by the discretized delta function $\delta_{D}(d_{\{m,n,p\}}-\nu) \neq 0$, where $d$ is either $\tilde\delta$ or $\delta$, we define the unit vector pointing to this pixel as ${\bf \hat{T}_1}$, and use ${\bf \hat{T}_1}$ and ${\bf \hat{T}_2}$ to construct a unit quaternion $q$ which is used to rotate the Cartesian gradient vector $\tilde{\delta}'_{i} = q \tilde{\delta}_{i} q^{*}$, reflecting its change of orientation when transported from ${\bf \hat{T}_1}$ to ${\bf \hat{T}_2}$. The components of the quaternion are given in Appendix \ref{sec:appen2}. At $\{a,b,c\}$, the rotated Cartesian gradient is transformed to the spherical coordinate basis ${\bf e}_{r}$, ${\bf e}_{\theta}$, ${\bf e}_{\phi}$. Note that there is no unique rotation/great arc transport for pixels antipodal to $\{a,b,c\}$ on the sphere; for these we select a random rotation axis in the plane perpendicular to ${\bf \hat{T}_2}$ (we have confirmed that different choices do not affect our numerical results). The volume average for the spherically redshift-space distorted case (superscript ${}^{\rm sp}$) is
\begin{equation} {}^{\rm sp}\bar{w}_{i}{}^{j} = {1 \over 6V} \sum_{m,n,p} \Delta^{3} \delta_{D}(\tilde{\delta}_{\{m,n,p\}}-\nu) {{}^{\gamma}\tilde{\delta}_{i}{}_{\{m,n,p\}} {}^{\gamma}\tilde{\delta}^{j}{}_{\{m,n,p\}} \over |\nabla \tilde{\delta}_{\{m,n,p\}}|} ,
\end{equation}
\noindent where ${}^{\gamma}$ denotes great arc transport, and the tensor is defined in a spherical basis. We measure $\bar{w}_{i}{}^{j}$ over $N_{\nu} = 51$ threshold values $\nu$ equi-spaced over the range $-3.8 \leq \nu \leq 3.8$, for $N_{\rm real} = 50$ realisations of a Gaussian random field. We repeat the measurements for fields smoothed with scale $R_{\rm G}$ over the range $15 \, {\rm Mpc} \leq R_{\rm G} \leq 45 \, {\rm Mpc}$.
Before presenting the numerical results, we discuss a way to check the Gaussian nature of a random field. For a general weakly non-Gaussian field we can expand the components of the Minkowski tensors as a series of Hermite polynomials\footnote{$H_{n}(\nu)$ are the probabilist's Hermite polynomials, the first few of which are given by $H_{0}(\nu)=1$, $H_{1}(\nu) = \nu$, $H_{2}(\nu) = \nu^{2} - 1$.}, as follows,
\begin{equation}
\bar{w}_{i}{}^{j} = e^{-\nu^2/2}\left(A|_i^j H_0(\nu) + a_1|_i^j H_1(\nu) + a_2|_i^j H_2(\nu)+\ldots\right).
\end{equation}
This expansion is equivalent to Matsubara's perturbative expansion for the scalar Minkowski functionals \citep{2003ApJ...584....1M}, although here the expansion coefficients are assigned to each Hermite polynomial and not to powers of the variance. The coefficients contain information about the generalized skewness, kurtosis and higher moments of the field. The coefficients $A|_i^j, a_1|_i^j, a_2|_i^j$ can be computed using the orthogonality properties of the Hermite polynomials:
\begin{eqnarray} A|_{i}{}^{j} &=& {1 \over \sqrt{2\pi}} \int_{-\nu_{{\rm max}}}^{\nu_{\rm max}} \bar{w}_{i}{}^{j}(\nu) H_{0}(\nu) d\nu ,
\label{eqn:Aij}\\
a_{1}|_{i}{}^{j} &=& {1 \over \sqrt{2\pi}} \int_{-\nu_{{\rm max}}}^{\nu_{{\rm max}}} \bar{w}_{i}{}^{j}(\nu) H_{1}(\nu) d\nu ,
\label{eqn:a1ij}\\
a_{2}|_{i}{}^{j} &=& {1 \over 2\sqrt{2\pi}} \int_{-\nu_{{\rm max}}}^{\nu_{{\rm max}}} \bar{w}_{i}{}^{j}(\nu) H_{2}(\nu) d\nu,
\label{eqn:a2ij}
\end{eqnarray}
where $\nu_{\rm max} \to \infty$. For our analysis we take $\nu_{{\rm max}} = 3.8$, after checking that our results are not sensitive to reasonable variations of this value\footnote{The Hermite polynomials are exactly orthogonal only in the limit $\nu_{{\rm max}}\to \infty$. However, since $\bar{w}_{i}{}^{j}$ is exponentially damped at large thresholds, it suffices to choose finite $\nu_{{\rm max}}$. Taking $\nu_{{\rm max}}$ to be too large in a finite volume dataset can generate biased values of the Hermite polynomial coefficients (cf Appendix A-4 of \citet{Appleby:2021lfq}).}. For Gaussian random fields, the coefficients of the higher order terms in the expansion ($a_1$, $a_2$, etc.) should be consistent with zero in both real and redshift space, so we refer to the coefficient of $H_{0}(\nu)$ as the `amplitude' of the MT components.
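As an illustration, the projection integrals ($\ref{eqn:Aij}$--$\ref{eqn:a2ij}$) can be evaluated from a sampled curve $\bar{w}(\nu)$ by simple quadrature; the following is a minimal sketch (function name ours):

```python
import numpy as np

def hermite_coefficients(nu, w, nu_max=3.8):
    """Project a sampled curve w(nu) onto the first three probabilist's
    Hermite polynomials over |nu| <= nu_max, returning estimates of the
    amplitude A and the non-Gaussian coefficients a_1, a_2."""
    keep = np.abs(nu) <= nu_max
    nu, w = nu[keep], w[keep]
    def trap(y):                                  # composite trapezoid rule
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(nu))
    norm = np.sqrt(2.0 * np.pi)
    A  = trap(w) / norm                           # H_0 = 1
    a1 = trap(w * nu) / norm                      # H_1 = nu
    a2 = trap(w * (nu**2 - 1.0)) / (2.0 * norm)   # H_2 = nu^2 - 1
    return A, a1, a2
```

For a purely Gaussian-shaped input $\bar{w}(\nu) = A\,e^{-\nu^{2}/2}$ the routine recovers $A$ and returns $a_{1}, a_{2}$ consistent with zero, which is the check applied to the Gaussian random fields in this subsection.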
Using the above way of representing weakly non-Gaussian random fields, in the Gaussian and plane parallel limits we have \citep{Appleby_2019}
\begin{equation} {}^{\rm pp}\bar{w}_{i}{}^{j} = {}^{\rm pp}A_{G}|_{i}{}^{j} H_{0}(\nu) e^{-\nu^{2}/2}
\end{equation}
\noindent with
\begin{eqnarray}\label{eq:amp1} & & {}^{\rm pp}A_{G}|_{x}{}^{x} = {A_{0} \over 4}\left[ {(2\lambda^{2}-1)\cosh^{-1}\left(2\lambda^{2}-1\right) \over (\lambda^{2}-1)^{3/2}} - {2\lambda \over \lambda^{2}-1} \right] , \\
\label{eq:amp2} & & {}^{\rm pp}A_{G}|_{y}{}^{y} = {}^{\rm pp}A_{G}|_{x}{}^{x} , \\
\label{eq:amp3} & & {}^{\rm pp}A_{G}|_{z}{}^{z} = A_{0}\left({\lambda^{2} \over \lambda^{2}-1}\right) \left( \lambda - {\cosh^{-1} \lambda \over \sqrt{\lambda^{2}-1}}\right) ,
\end{eqnarray}
\noindent where $A_{0}$ and $\lambda$ are defined in equations ($\ref{eq:a0}$) and ($\ref{eq:lam}$). In real space we have \citep{Appleby:2018tzk}
\begin{equation}\label{eq:Agau} {}^{\rm re}A_{\rm G}|_{x}{}^{x} = {}^{\rm re}A_{\rm G}|_{y}{}^{y} = {}^{\rm re}A_{\rm G}|_{z}{}^{z} = {\sigma_{1} \over 9\sqrt{3} \pi \sigma_{0}} .
\end{equation}
\noindent For the Minkowski functional $W_{1}$, the coefficient $a_{1}$ of $H_{1}(\nu)$ is one of two terms induced as a leading order non-Gaussian correction and $a_{2}$ is one of several higher order contributions. Hence we use these terms as proxies to study the non-Gaussian corrections of the MTs that are induced by gravitational collapse. As mentioned above, for the Gaussian random fields considered in this subsection, $a_{1}$ and $a_{2}$ should be consistent with zero in real, plane parallel and spherical redshift space. The redshift space distortion operator does not change the Gaussian nature of the field. We check that the numerically computed values of $a_{1}$ and $a_{2}$ are consistent with zero in our calculations, when measuring the Minkowski tensor of Gaussian random fields.
In Figure \ref{fig:grf} (top left panel) we present the diagonal and off-diagonal components of ${}^{\rm re}\bar{w}_{i}{}^{j}$ and ${}^{\rm sp}\bar{w}_{i}{}^{j}$ extracted from the fields smoothed with $R_{\rm G} = 20 \, {\rm Mpc}$. The points/error bars correspond to the mean and root-mean-square (rms) of the realisations, hence we are presenting the ensemble average of the volume average. The filled/open diamonds are measurements in spherical redshift and real space respectively. The diagonal components in real space are equal, modulo a noise component (cf light green/blue/red open diamonds). The real-space volume average satisfies ${}^{\rm re}\bar{w}_{i}{}^{j} \propto \delta_{i}{}^{j}$ in every coordinate system. In redshift space, the radial component of $\bar{w}_{i}{}^{j}$ is significantly larger than the angular components -- this is the Kaiser signal. The off-diagonal components of ${}^{\rm re}\bar{w}_{i}{}^{j}$, ${}^{\rm pp}\bar{w}_{i}{}^{j}$ and ${}^{\rm sp}\bar{w}_{i}{}^{j}$ are all consistent with zero.
In Figure \ref{fig:grf} we present the values of $A|_{i}{}^{j}$ (top right panel), $a_{1}|_{i}{}^{j}$ (bottom left panel) and $a_{2}|_{i}{}^{j}$ (bottom right panel) for $\bar{w}_{i}{}^{j}$ extracted from the real and spherical redshift space distorted fields as a function of smoothing scale $R_{\rm G}$. In the top right panel, the solid/dashed gold lines are the corresponding plane parallel Kaiser limits given in equations ($\ref{eq:amp1}-\ref{eq:amp3}$) and the solid silver line is the isotropic expectation value in equation ($\ref{eq:Agau}$).
\noindent The volume averages ${}^{\rm re}\bar{w}_{i}{}^{j}$ and ${}^{\rm sp}\bar{w}_{i}{}^{j}$ extracted from the spherical RSD and real space data sets match the ensemble averages derived in \cite{Appleby:2018tzk, Appleby_2019}. Similarly the coefficients $a_{1}$, $a_{2}$ are consistent with zero at all scales probed (cf bottom panels). This is expected: we generated Gaussian random fields, and the application of the linear redshift space distortion operator preserves Gaussianity. This provides a check on the ergodicity condition $\langle w_{i}{}^{j} \rangle \simeq \bar{w}_{i}{}^{j}$, and indicates that our definition of the volume average can be used to reproduce the ensemble average.
Finally, in Figure \ref{fig:sp_pp_grf} we present the fractional differences $({}^{\rm sp}A|_{i}{}^{j} - {}^{\rm pp}A|_{i}{}^{j})/{}^{\rm pp}A|_{i}{}^{j}$ (left panel), $({}^{\rm sp}a_{1}|_{i}{}^{j} - {}^{\rm pp}a_{1}|_{i}{}^{j})/{}^{\rm pp}A_{\rm G}|_{i}{}^{j}$ (central panel) and $({}^{\rm sp}a_{2}|_{i}{}^{j} - {}^{\rm pp}a_{2}|_{i}{}^{j})/{}^{\rm pp}A_{\rm G}|_{i}{}^{j}$ (right panel) as a function of smoothing scale $R_{\rm G}$. These quantities are all consistent with zero at all scales probed, confirming that the plane parallel and spherical redshift space distorted fields are statistically indistinguishable for data at cosmological distances $> 100 \, {\rm Mpc}$ from the observer.
\subsection{Non-Gaussian Dark Matter Fields}
\label{sec:ngrf}
To study the gravitationally evolved non-Gaussian dark matter density field, we use $z=0$ snapshot boxes from the Quijote simulations \citep{Villaescusa-Navarro:2019bje}. These are a suite of cosmological scale dark matter simulations in which $\sim 44,000$ realisations of $512^{3}$ particles are gravitationally evolved in boxes of size $L = 1490 \, {\rm Mpc}$ ($= 1000 \, h^{-1} {\rm Mpc}$) from $z=127$ to $z=0$. We take $N_{\rm real} = 50$, $z=0$ snapshot boxes and generate real space density fields by binning the dark matter particles into a regular $512^3$ Cartesian grid of resolution $\Delta = 2.9 \, {\rm Mpc}$ using a cloud-in-cell scheme. We define the number density contrast $\delta_{\{i,j,k\}} = (n_{\{i,j,k\}} - \bar{n})/\bar{n}$, where $n_{\{i,j,k\}}$ is the number of particles in the $\{i,j,k\}$ pixel and $\bar{n}$ is the mean number of particles per pixel. We smooth this field with a Gaussian kernel $W(kR_{\rm G}) \propto e^{-k^{2}R_{G}^{2}/2}$ in Fourier space.
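A minimal sketch of the cloud-in-cell deposit follows (our own implementation, using a cell-corner convention on a periodic grid; production codes may differ in details such as the placement of cell centres):

```python
import numpy as np

def cic_deposit(pos, ngrid, boxsize):
    """Deposit particles onto a periodic ngrid^3 mesh with cloud-in-cell
    weights and return the density contrast delta = n/nbar - 1."""
    grid = np.zeros((ngrid,) * 3)
    u = pos / boxsize * ngrid            # positions in cell units
    i0 = np.floor(u).astype(int)         # lower cell index
    f = u - i0                           # fractional offset in [0, 1)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # weight (1-f) for the lower cell, f for the upper cell, per axis
                w = (np.abs(1 - dx - f[:, 0]) *
                     np.abs(1 - dy - f[:, 1]) *
                     np.abs(1 - dz - f[:, 2]))
                np.add.at(grid,
                          ((i0[:, 0] + dx) % ngrid,
                           (i0[:, 1] + dy) % ngrid,
                           (i0[:, 2] + dz) % ngrid),
                          w)
    nbar = len(pos) / ngrid**3
    return grid / nbar - 1.0
```

Each particle contributes unit mass split linearly over its eight neighbouring cells, so the total mass is conserved and the mean of $\delta$ vanishes by construction.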
To generate the plane parallel and spherical redshift space distorted fields, we take the real-space positions of the particles ${\bf x}$ and perturb them according to
\begin{eqnarray} & & {\bf s} = {\bf x} + {\bf e}_{z} ({\bf v}.{\bf e}_{z}) {(1+z) \over H(z)} , \\
& & {\bf s} = {\bf x} + {\bf e}_{r} ({\bf v}.{\bf e}_{r} ) {(1+z) \over H(z)} , \end{eqnarray}
\noindent respectively, where ${\bf v}$ is the velocity of the particle, ${\bf e}_{z}$ is the unit vector aligned with the $z$ direction of the Cartesian grid and ${\bf e}_{r}$ is the radial basis vector to the particle from an observer at the center of the box. We take redshift zero snapshot boxes, so we fix $z=0$ and $H(z) = H_{0}$.
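The two mappings can be sketched in a single routine (function name ours; the Hubble rate below is a placeholder value in consistent velocity/distance units):

```python
import numpy as np

def rsd_shift(x, v, H, z=0.0, observer=None):
    """Map real-space positions x to redshift-space positions s.
    observer=None gives the plane-parallel (z-axis) mapping; otherwise the
    line of sight is radial from the given observer position."""
    x = np.asarray(x, float)
    v = np.asarray(v, float)
    if observer is None:
        los = np.zeros_like(x)
        los[:, 2] = 1.0                  # e_z for the plane-parallel case
    else:
        r = x - observer
        los = r / np.linalg.norm(r, axis=1, keepdims=True)   # e_r
    vlos = np.sum(v * los, axis=1, keepdims=True)
    return x + los * vlos * (1.0 + z) / H
```

At $z=0$ the displacement reduces to $({\bf v}.{\bf \hat e})/H_{0}$ along the chosen line of sight, and a velocity perpendicular to the radial direction produces no shift in the spherical mapping.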
For the redshift space distorted fields we bin the particles into pixels with the cloud-in-cell scheme according to their redshift space position, using the same $\Delta = 2.90 \, {\rm Mpc}$ Cartesian grid. We apply periodic boundary conditions for the plane parallel corrected box along ${\bf e}_{z}$, which renders the field homogeneous but anisotropic. The spherical redshift space distortion operator is incompatible with periodicity, so we exclude all pixels that lie at distances $r \leq 50 \, {\rm Mpc}$ or $r \geq 700 \, {\rm Mpc}$ from the central observer in our calculations of $\bar{w}_{i}{}^{j}$.
The outer boundary of the shell is at least $50 \, {\rm Mpc}$ from the edges of the box, so all particles affected by the periodic boundary are excluded. Finally, we smooth these pixel boxes with Gaussian kernel $W(kR_{\rm G}) \propto e^{-k^{2}R_{G}^{2}/2}$ in Fourier space, and then further exclude all pixels that lie at distances $r \leq 100 \, {\rm Mpc}$ or $r \geq 670 \, {\rm Mpc}$ from the central observer. This last step eliminates pixels that are affected by sampling near the boundary. The end result is a set of three fields from which we extract ${}^{\rm re}\bar{w}_{i}{}^{j}$, ${}^{\rm pp}\bar{w}_{i}{}^{j}$ and ${}^{\rm sp}\bar{w}_{i}{}^{j}$.
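The radial masking step can be sketched as follows (function name ours; the grid size and shell radii in the test are illustrative, not the paper's $512^3$ and $100$--$670 \, {\rm Mpc}$ values):

```python
import numpy as np

def shell_mask(ngrid, spacing, rmin, rmax):
    """Boolean mask selecting pixels whose distance r from an observer at
    the box centre satisfies rmin < r < rmax."""
    ax = (np.arange(ngrid) - (ngrid - 1) / 2) * spacing   # pixel coordinates
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2)
    return (r > rmin) & (r < rmax)
```

The fraction of unmasked pixels approaches the analytic shell volume fraction $\tfrac{4\pi}{3}(r_{\rm max}^{3}-r_{\rm min}^{3})/L^{3}$ as the grid resolution increases.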
We calculate the mean $\bar{\delta}$ and variance $\tilde{\sigma}_{0}^{2}$ of the unmasked pixels for each field, and define the zero mean, unit variance quantity $(\tilde{\delta}_{\{m,n,p\}} - \bar{\delta})/\tilde{\sigma}_{0}$. The quantities ${}^{\rm re}\bar{w}_{i}{}^{j}$, ${}^{\rm pp}\bar{w}_{i}{}^{j}$ and ${}^{\rm sp}\bar{w}_{i}{}^{j}$ are measured over $n_{\nu} = 301$ values of threshold density $\nu$ between the minimum and maximum values of the field in each simulation. We then re-scale the iso-density threshold $\nu$ to $\nu_{\rm A}$, where $\nu_{\rm A}$ is the threshold for which the excursion set has the same volume fraction as a corresponding Gaussian field:
\begin{equation}\label{eq:afrac}
f_{\rm A} = {1 \over \sqrt{2\pi}} \int^{\infty}_{\nu_{A}} e^{-t^{2}/2} \, dt ,
\end{equation}
where $f_{\rm A}$ is the fractional volume of the field above $\nu_{\rm A}$. Expressing the MTs as a function of $\nu_{\rm A}$ as opposed to $\nu$ partially Gaussianizes the statistics \citep{1987ApJ...319....1G,1987ApJ...321....2W,1988ApJ...328...50M}. To perform this re-scaling, we use spline interpolation on the $W^{0,2}_{1}$ versus $\nu$ calculated data and construct $W^{0,2}_{1}$ versus $\nu_A$ at $n_{\nu_{A}}=41$ values equi-spaced over the range $-3.8 < \nu_{\rm A} < 3.8$.
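A sketch of the $\nu \to \nu_{\rm A}$ rescaling, inverting equation ($\ref{eq:afrac}$) with the standard normal quantile function (function name ours):

```python
import numpy as np
from statistics import NormalDist

def rescale_thresholds(field, nu_values):
    """For each raw threshold nu, measure the fractional volume f_A of the
    excursion set {field > nu} and return the Gaussian-equivalent threshold
    nu_A satisfying f_A = (1/sqrt(2 pi)) * int_{nu_A}^{inf} exp(-t^2/2) dt."""
    flat = np.ravel(field)
    n = flat.size
    norm = NormalDist()
    nu_A = []
    for nu in np.atleast_1d(nu_values):
        f_A = np.count_nonzero(flat > nu) / n
        # clip so the quantile stays finite on a finite pixel sample
        f_A = min(max(f_A, 0.5 / n), 1.0 - 0.5 / n)
        nu_A.append(norm.inv_cdf(1.0 - f_A))
    return np.array(nu_A)
```

For a field that is already Gaussian, $\nu_{\rm A} \simeq \nu$ up to sampling noise, which is the sense in which the rescaling partially Gaussianizes the statistics of a non-Gaussian field.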
In Figure \ref{fig:2a} we present the components of the Minkowski tensor ${}^{\rm sp}\bar{w}_{i}{}^{j}$ as a function of $\nu$ (top-left panel) and $\nu_{\rm A}$ (top-middle panel) for the fields smoothed with comoving scale $R_{\rm G} = 20 \, {\rm Mpc}$. The off-diagonal components are presented in the top-right panel and are consistent with zero. The same is true for all smoothing scales tested in this work. The top panels represent the components of ${}^{\rm sp}\bar{w}_{i}{}^{j}$ in the spherical basis.
In the bottom panels of Figure \ref{fig:2a} we present the components of ${}^{\rm sp}\bar{w}_{i}{}^{j}$ in a Cartesian basis, calculated using Euclidean paths to transport tensors to a common location on the manifold. We plot the $(x,x)$, $(y,y)$, $(z,z)$ components as a function of $\nu$ (left), $\nu_{A}$ (middle) and the off-diagonal elements (right panel). The Minkowski tensors in the top and bottom panels are both extracted from the same spherical redshift space distorted density field, only the coordinate systems and choice of transport paths differ. In the bottom panels, we observe that the diagonal elements of the Minkowski tensor are statistically equivalent, and the off-diagonal elements consistent with zero. Hence ${}^{\rm sp}\bar{w}_{i}{}^{j} \propto \delta_{i}{}^{j}$, and the volume average incorrectly infers that the field is isotropic. As discussed in Section \ref{sec:cart}, in a Cartesian basis the spherical redshift space distortion operator generates spatially dependent cumulants, and taking the volume average washes out the anisotropic signal.
Next we explore the information contained in the coefficients $A,\, a_1$ and $a_2$ defined in section~\ref{sec:grf}. In the top left panel of Figure \ref{fig:2} we present the components $A|_{i}{}^{j}$ with $(i,j)=(r,r), (\theta, \theta), (\phi,\phi)$ (dark red, blue, green filled diamonds respectively) in redshift space. The points/error bars are the mean and rms values of the $N=50$ snapshot boxes and points that overlap have been slightly perturbed for visual clarity. For comparison we also show the expectation values for ${}^{\rm sp}A_{G}|_{r}{}^{r}$ (solid gold line) and ${}^{\rm sp}A_{G}|_{\theta}{}^{\theta} = {}^{\rm sp}A_{G}|_{\phi}{}^{\phi}$ (dashed gold line) in the limit $r \to \infty$ for a Gaussian random field with a linear $\Lambda$CDM power spectrum and the same cosmological parameters as the Quijote simulations.
In the top right panel we exhibit the ratio of ${}^{\rm sp}A|_{i}{}^{j}$ extracted from the Quijote simulations and the Gaussian plane parallel expectation values ($\ref{eq:amp1}-\ref{eq:amp3}$). We also present ${}^{\rm re}A|_{i}{}^{j}$ divided by the isotropic expectation value ($\ref{eq:Agau}$), with $(i,j)=(x,x), (y, y), (z,z)$ (light red, blue, green open diamonds respectively) extracted from the corresponding real space snapshot boxes without any velocity correction applied to the particle positions.
The results for the isotropic field (light open diamonds) present no surprises. The amplitudes of the $(x,x)$, $(y,y)$, $(z,z)$ components are statistically indistinguishable, and the Gaussian limit is an excellent approximation at quasi-linear scales $R_{G} \gtrsim 30 \, {\rm Mpc}$ (cf. top panels). Below this scale, the amplitude of the Minkowski tensor components starts to drop relative to the Gaussian expectation (cf top right panel). This is due to the `gravitational smoothing' effect first observed in \citet{1988ApJ...328...50M} for the scalar functionals. The $a_{1}$ component (cf bottom left) is consistent with zero on large scales, but is ${\cal O}(0.01)$ at quasi-linear scales $R_{G} \sim 25 \, {\rm Mpc}$. The $a_{2}$ term (cf bottom right), which we expect to be induced at higher order in a $\sigma_{0}$ expansion of non-Gaussianity, is consistent with zero at all scales probed.
In redshift space (dark filled diamonds), the picture changes considerably. The most striking difference is the strong departure of ${}^{\rm sp}A|_{r}{}^{r}$ from its Gaussian expectation value (cf red filled diamonds, top panels). Even on large scales $R_{\rm G} \sim 40 \, {\rm Mpc}$, the Gaussian, Kaiser formula ($\ref{eq:amp1}$) is not a particularly good approximation. In contrast, the Kaiser approximation ($\ref{eq:amp2},\ref{eq:amp3}$) is excellent for the perpendicular components (green/blue filled diamonds, top panels). It was noted in \cite{Kim:2014axe} that the Gaussian, Kaiser limit is only a good approximation for the scalar Minkowski functionals when the density field is smoothed on very large scales. Our results support this statement, and further show that the radial component of the field is the origin of the breakdown. In addition to the decrease in $A|_{r}{}^{r}$, the non-Gaussian terms $a_{1,2}|_{i}{}^{j}$ are larger for the $(r,r)$ component in redshift space, but remain small at the scales probed. The fact that $a_{2}$ is induced at a statistically significant level on scales $R_{G} \leq 20 \, {\rm Mpc}$ suggests that novel non-Gaussian contributions are induced in redshift space (cf red filled diamonds, lower right panel).
In \citet{Appleby_2019} it was noted that the ratio of parallel and perpendicular components of the Minkowski tensor would provide a relatively pure measurement of $f$ (or $\beta = f/b$ for biased tracers). However, it is clear that ${}^{\rm sp}A|_{r}{}^{r}$ strays far from the Kaiser limit. The perpendicular components ${}^{\rm sp}A|_{\theta}{}^{\theta}$, ${}^{\rm sp}A|_{\phi}{}^{\phi}$ remain closer to their Gaussian expectation values on small scales, but their values are not sensitive to $f$ alone. Specifically, each individual component of the Minkowski tensors are sensitive to $n_{s}$, $\Omega_{c}h^{2}$ and $f$. Measuring the ratios ${}^{\rm sp}A|_{\theta}{}^{\theta}/{}^{\rm sp}A|_{r}{}^{r}$, ${}^{\rm sp}A|_{\phi}{}^{\phi}/{}^{\rm sp}A|_{r}{}^{r}$ would potentially break these degeneracies, but only after we have resolved the origin of the ${}^{\rm sp}A|_{r}{}^{r}$ behaviour.
The large departure of ${}^{\rm sp}A|_{r}{}^{r}$ from the Kaiser limit is not due to the imposition of the spherical redshift space distortion operator. To highlight this, in Figure \ref{fig:sp_pp} we present the fractional differences $({}^{\rm sp}A|_{i}{}^{j} - {}^{\rm pp}A|_{i}{}^{j})/{}^{\rm pp}A|_{i}{}^{j}$ (left panel), $({}^{\rm sp}a_{1}|_{i}{}^{j} - {}^{\rm pp}a_{1}|_{i}{}^{j})/{}^{\rm pp}A_{\rm G}|_{i}{}^{j}$ (central panel) and $({}^{\rm sp}a_{2}|_{i}{}^{j} - {}^{\rm pp}a_{2}|_{i}{}^{j})/{}^{\rm pp}A_{\rm G}|_{i}{}^{j}$ (right panel). All three fractional differences are consistent with zero over all scales probed in this work, meaning that the spherical and plane parallel redshift space fields possess statistically indistinguishable Minkowski tensor functionals, similar to the Gaussian random fields in the previous subsection.
\subsection{Non-Gaussian Effects along the line of sight}
The significant drop in the amplitude of the Minkowski tensor component parallel to the line of sight on small scales observed in the previous subsection can be interpreted as the Finger of God effect, which scatters particle positions over megaparsec scales due to the large peculiar velocity dispersion $\sigma_{\rm pec}$ associated with bound structures \citep{1972MNRAS.156P...1J}. The dominant effect of $\sigma_{\rm pec}$ is an amplitude decrease in $A|_{r}{}^{r}$ which is consistent with an additional, anisotropic damping factor acting on the power spectrum. The Finger of God effect has a long history within theoretical and observational cosmology \cite{1972MNRAS.156P...1J,1994ApJ...431..569P,1995ApJ...448..494F}, and it is well known that its effect on the power spectrum is imprinted even on relatively large scales \citep{Juszkiewicz:1998em,Hikage:2013yja,10.1093/mnras/stu1051,10.1093/mnras/stu1391,Tonegawa:2020wuh,Okumura:2015fga}. Observations of the two-point functions indicate that the Kaiser limit is only accurate on the largest scales \citep{PhysRevD.70.083007,10.1111/j.1365-2966.2010.17581.x,Jennings_2010, Okumura_2010,Kwan_2012,10.1093/mnras/stu2460}.
Our analysis provides two new insights into this phenomenon in the context of the Minkowski statistics. First, the components of the Minkowski tensor perpendicular to the line of sight remain well described by the Kaiser approximation, even on relatively small scales $R_{\rm G }\gtrsim 25 \, {\rm Mpc}$. Second, on ``small scales'' $R_{\rm G} \lesssim 20 \, {\rm Mpc}$ the non-Gaussianity of the components $\bar{w}_{r}{}^{r}$ and $\bar{w}_{\theta}{}^{\theta}$ differ with considerable statistical significance; this can be observed in the $a_{2}|_{i}{}^{j}$ coefficient in Figure \ref{fig:2} (bottom right panel). This indicates that additional non-Gaussian effects are induced in redshift space parallel to the line of sight.
Regarding the amplitude decrease in the $A|_{r}{}^{r}$ component, we can attempt to model this effect using the standard approach in the literature -- following \citet{1976Ap&SS..45....3P,Peacock:1993xg,1994ApJ...431..569P,Desjacques:2009kt,PhysRevD.70.083007} we introduce an additional damping kernel $P(k,R_{\rm G}) \to P(k,R_{\rm G}) e^{-k_{\parallel}^{2}\sigma_{\rm pec}^{2}}$ into the power spectrum that is used in defining the cumulants. Returning to the plane parallel limit, we can write the cumulants in redshift space as
\begin{eqnarray} & & \tilde{\sigma}_{0}^{2} \equiv \langle \tilde{\delta}({\bf x'}) \tilde{\delta}({\bf x}) \rangle|_{{\bf x'} \to {\bf x}} = {1 \over (2\pi)^{2}} \int_{-1}^{1} d\mu \int_{0}^{\infty} dk k^{2} (1 + f\mu^{2})^{2} P(k,R_{\rm G})e^{-k^{2}R_{\rm G}^{2}} e^{-k^{2}\mu^{2} \sigma^{2}_{\rm pec}} , \\
& & \tilde{\sigma}_{z}^{2} \equiv \langle \tilde{\delta}_{z}({\bf x'}) \tilde{\delta}_{z}({\bf x}) \rangle|_{{\bf x'} \to {\bf x}} = {1 \over (2\pi)^{2}} \int_{-1}^{1} d\mu \int_{0}^{\infty} dk k^{4} \mu^{2} (1 + f\mu^{2})^{2} P(k,R_{\rm G})e^{-k^{2}R_{\rm G}^{2}} e^{-k^{2}\mu^{2} \sigma^{2}_{\rm pec}} , \\
& & \tilde{\sigma}_{x}^{2} \equiv \langle \tilde{\delta}_{x}({\bf x'}) \tilde{\delta}_{x}({\bf x}) \rangle|_{{\bf x'} \to {\bf x}} = {1 \over 2(2\pi)^{2}} \int_{-1}^{1} d\mu \int_{0}^{\infty} dk k^{4} (1-\mu^{2}) (1 + f\mu^{2})^{2} P(k,R_{\rm G})e^{-k^{2}R_{\rm G}^{2}} e^{-k^{2}\mu^{2} \sigma^{2}_{\rm pec}} , \\
& & \tilde{\sigma}_{y}^{2} = \tilde{\sigma}_{x}^{2} ,
\end{eqnarray}
\noindent where $\sigma_{\rm pec}$ is a free parameter that describes the velocity dispersion of tracer particles within bound structures and $\mu^{2} = k_{z}^{2}/k^{2}$. If we use these cumulants to derive the ensemble average $\langle {}^{\rm pp}\bar{w}_{i}{}^{j} \rangle$, the additional anisotropic exponential damping term due to the Finger of God effect introduces a significant drop in the $(z, z)$ component, but does not have a large effect on the perpendicular $(x, x)$, $(y, y)$ elements.

The amplitudes ${}^{\rm pp}A_{i}{}^{j}$ as a function of $R_{\rm G}$ are presented in the left panel of Figure \ref{fig:FoG}, keeping all parameters fixed and varying $\sigma_{\rm pec} = 0, 4, 6, 8 \, {\rm Mpc}$ (yellow, green, blue, red lines respectively). The components parallel and perpendicular to the line of sight are presented as solid/dashed lines respectively, and we have included the isotropic limit $f = \sigma_{\rm pec} = 0$ (silver lines) and Kaiser limit $\sigma_{\rm pec}=0$ (yellow lines). The right panel exhibits the ratio of the Finger-of-God affected ensemble averages to the Kaiser limit. The large decrease in the parallel cumulant is clearly observed on all scales and the result is in qualitative agreement with the dark matter snapshot results (cf top right panel of Figure \ref{fig:2}). The perpendicular components increase by $\sim 1-2\%$ relative to the Kaiser approximation. We also observe this effect in the dark matter data -- in the top right panel of Figure \ref{fig:2} the $(\theta,\theta)$, $(\phi,\phi)$ components in redshift space are marginally higher than the isotropic components (top right panel, blue/green filled diamonds and light blue/green/red open diamonds respectively). However, in the dark matter snapshot case, all components in real and redshift space have a systematically lower amplitude relative to the Gaussian limit due to the non-Gaussianity of the field (cf top right panel, Figure \ref{fig:2}) which requires further modelling.
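The damped cumulant integrals above can be evaluated numerically. The sketch below uses a toy exponential power spectrum rather than the $\Lambda$CDM $P(k)$, so only the qualitative behaviour carries over: the Kaiser factor boosts $\tilde{\sigma}_{z}^{2}$ relative to $\tilde{\sigma}_{x}^{2}$, and the Finger-of-God damping suppresses the parallel component more strongly:

```python
import numpy as np

def _trap(y, x):
    """Composite trapezoid rule along the last axis."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

def damped_cumulants(Pk, R_G, f, sigma_pec, kmax=10.0, nk=1000, nmu=201):
    """Evaluate sigma_0^2, sigma_z^2, sigma_x^2 with the Kaiser factor
    (1 + f mu^2)^2, Gaussian smoothing exp(-k^2 R_G^2) and Finger-of-God
    damping exp(-k^2 mu^2 sigma_pec^2), by double trapezoidal quadrature."""
    k = np.linspace(1e-4, kmax, nk)[:, None]
    mu = np.linspace(-1.0, 1.0, nmu)[None, :]
    common = ((1 + f * mu**2)**2 * Pk(k) * np.exp(-k**2 * R_G**2)
              * np.exp(-k**2 * mu**2 * sigma_pec**2) / (2 * np.pi)**2)
    s0 = _trap(_trap(k**2 * common, mu[0]), k[:, 0])
    sz = _trap(_trap(k**4 * mu**2 * common, mu[0]), k[:, 0])
    sx = _trap(_trap(0.5 * k**4 * (1 - mu**2) * common, mu[0]), k[:, 0])
    return s0, sz, sx
```

In the isotropic limit $f = \sigma_{\rm pec} = 0$ the angular weights $\mu^{2}$ and $(1-\mu^{2})/2$ both integrate to $2/3$, so $\tilde{\sigma}_{z} = \tilde{\sigma}_{x}$, which provides a useful sanity check.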
Attempting to simultaneously constrain $n_{s}$, $\Omega_{c}h^{2}$, $f$ and $\sigma_{\rm pec}$ from the Minkowski tensors will yield strong degeneracies. Potentially some of these can be broken since the Finger of God contribution is scale dependent (cf Figure \ref{fig:FoG}), while the Kaiser signal is independent of our choice of $R_{\rm G}$. Hence measuring the MTs at multiple scales will provide simultaneous constraints on $\sigma_{\rm pec}$ and $f$. We must be careful to check for additional, non-Gaussian effects since these will also be scale dependent. A study of perturbative non-Gaussianity in redshift space is beyond the scope of this work and will be conducted elsewhere.
An alternative approach to mitigating the Finger-of-God effect is to iteratively correct galaxy positions using some higher order prescription \citep{1991ApJ...379....6N,1993ApJ...405..449G,Narayanan_1998,2010ApJ...714..207P}, to reduce the large scatter induced by stochastic velocities within bound structures. This method attempts to reconstruct the galaxy density field in redshift space, but with non-linear effects removed. Many such reconstruction methods rely on the plane parallel approximation, so this approach requires further development to be applied to radial redshift space distortion. A comparison of these different approaches will be a direction of future study.
\section{Discussion}
\label{sec:dis}
We have presented an analysis of the rank-two tensor Minkowski functionals for an anisotropic and inhomogeneous Gaussian random field; specifically, an initially isotropic and homogeneous field that has been subjected to the spherically symmetric redshift space distortion operator. Anisotropy here means that the structures in the field share a common alignment along the radial direction, leading to an inequality in the diagonal components of the Minkowski tensors parallel and perpendicular to the line of sight. The inhomogeneity of the field introduces some significant pitfalls -- the ensemble average is now a function of position on the manifold, and the volume average of the statistics will not necessarily be representative of the ensemble average. This statement depends on the coordinate system selected, the volume occupied by the field and also the choice of path transport used to define a volume average of the tensors.
For the spherically redshift space distorted field, there is a singularity in the cumulants at $r=0$ which indicates that this point must be excised from the manifold. This fact, in conjunction with the assumed symmetry properties of the field -- isotropic on $\mathbb{S}^{2}$ -- suggests that spherical coordinates and great arc transport provide a natural framework to measure the Minkowski tensors. We constructed the cumulants of the density field and the gradient in this system and found that they are only weakly coordinate dependent at large distances from a central observer at $r=0$, and furthermore are insensitive to angular position on $\mathbb{S}^{2}$ perpendicular to the line of sight. Similarly the volume average is insensitive to the specifics of how we transport vectors on $\mathbb{S}^{2}$. Of course, we are free to adopt any coordinate system that we want. However, we cannot naively equate volume and ensemble averages when the field is inhomogeneous. We have presented evidence that a spherical coordinate system allows us to extract the Kaiser signal from the components of the volume average ${}^{\rm sp}\bar{w}_{i}{}^{j}$. In contrast, the volume average in Cartesian coordinates does not necessarily replicate the ensemble average due to the non-trivial coordinate dependence of the cumulants. It is important to stress that the volume average of a tensor is generically ambiguous and our choice of coordinates and transport determines the properties of $\bar{w}_{i}{}^{j}$. We can choose a definition that approximately respects the properties of the ensemble average $\langle w_{i}{}^{j} \rangle$, but ergodicity is not exactly realised except in highly idealised scenarios: Gaussian and isotropic fields on Euclidean manifolds. We have argued that it can be approximately realised for anisotropic and inhomogeneous fields, but only with careful contrivance.
We extracted the Minkowski tensors from Gaussian random fields and gravitationally evolved dark matter snapshot boxes at $z=0$, for three different fields (isotropic, plane parallel and spherical redshift space distorted). We found that the plane parallel and spherical redshift space fields are statistically indistinguishable if the data is sufficiently distant from a central observer at $r=0$. At cosmological distances, the inhomogeneous nature of the cumulants in spherical coordinates is negligible.
The effect of non-Gaussianity on the MTs is a $\sim 1\% - 3\%$ effect for the isotropic fields over the range $15 \, {\rm Mpc} \leq R_{\rm G} \leq 45 \, {\rm Mpc}$, manifesting as a decrease in the amplitude of the diagonal elements, and inducing a non-zero value of the coefficient of the $H_{1}(\nu_{\rm A})$ Hermite polynomial that mildly skews the MT as a function of $\nu_{\rm A}$. However, in redshift space the component of the MT parallel to the line of sight for the non-Gaussian dark matter field significantly departs from the Kaiser limit, even for large smoothing scales. The most significant effect is an amplitude decrease that is approximately $\sim 12\%$ on scales $R_{\rm G} \sim 15 \, {\rm Mpc}$. This signal is due to large peculiar velocities along the line of sight from nonlinear regions of the density field, which can scatter particle positions over megaparsec scales. To extract the Kaiser signal from the data, we must model the non-linear velocity component and account for this additional signal. The non-Gaussianity of the redshift space field is also observed in the dark matter data, which indicates that on scales $R_{\rm G} \lesssim 20 \, {\rm Mpc}$, treating the Finger-of-God effect purely in terms of a suppression of the power spectrum is insufficient. Perturbative non-Gaussianity in redshift space is an important area of future study, and the Minkowski tensors are necessary for studying the directional dependence of the non-Gaussian signal. The scalar Minkowski functionals, which are proportional to the trace of these quantities, contain only directionally averaged information.
Although we have focused on the radial anisotropy generated by redshift space distortion, even in real space we can expect a radially anisotropic signal. This is due to the fact that we observe tracer particles on the lightcone, and the density field evolves significantly from the beginning of the matter dominated epoch to the present. At the level of linearized perturbations the evolution can be absorbed into a $z$-dependent galaxy bias, amplitude of the matter power spectrum and the growth rate $f(z)$ in the redshift space distortion signal. In reality, the picture is more complicated on small scales and the Minkowski functionals and tensors will exhibit systematic evolution when measured at different epochs due to non-Gaussianity induced by gravitational collapse. The non-Gaussian evolution can be potentially measured and quantified, and this will be the focus of future work. In this work we have neglected the time dependence of $f(z)$, as this effect is tied to the evolution of the field and hence beyond the scope of our analysis.
The Minkowski functionals and tensors provide a method to test the fundamental assumptions on which the standard model of cosmology is based. Without the need for {\it a priori} assumptions, the Minkowski functionals provide a measure of the non-Gaussianity of the field as a function of scale, agnostic of the nature of non-Gaussianity. Similarly the eigenvalues of the Minkowski tensors can be used to quantify the isotropy of a field without assuming the presence or absence of this symmetry property. A test of statistical homogeneity is more difficult to engineer, but coordinate dependent cumulants are a smoking gun for inhomogeneous signals. Constructing a test of statistical homogeneity using the tensor transformation properties of the MTs is an interesting direction for future study.
\section*{Acknowledgment}{SA and JK are supported by an appointment to the JRG Program at the APCTP through the Science and Technology Promotion Fund and Lottery Fund of the Korean Government, and were also supported by the Korean Local Governments in Gyeongsangbuk-do Province and Pohang City. This work is supported by Korea Institute for Advanced Study (KIAS) grant funded by the Korea government.}
\bibliography{biblio}{}
\appendix
\section{Minkowski Tensor $W^{0,2}_{2}$}
\label{sec:appen1}
The second independent, translation invariant Minkowski tensor considered in \citet{Appleby_2019} is given by
\begin{eqnarray}\label{app1:w22} W_{2}^{0,2}|_{i}{}^{j} &\equiv& {1 \over 3\pi V} \int_{\partial Q} G_{2} \hat{n}_{i} \hat{n}^{j} \textrm{dA} \\
\label{app:w22_2}&=& \frac{1}{3\pi V} \int_{V} \textrm{dV} \, \delta_{D}\left( \delta - \delta_{t} \right) G_{2} \frac{\delta_{i} \delta^{j}}{\left| \nabla \delta \right|} ,
\end{eqnarray}
\noindent where the scalar quantity $G_{2}$ is the mean curvature at each point of the iso-field surface, and can be written as
\begin{equation}\label{eq:g2} G_{2} = -{1 \over 2} \nabla . {\bf \hat{n}} = -{1 \over 2} \nabla . \left( {\nabla \delta \over |\nabla \delta|} \right) .
\end{equation}
\noindent Similarly to the $W^{0,2}_{1}$ case in the main body of the text, the quantity $W^{0,2}_{2}|_{i}{}^{j}$ can be interpreted as the volume average of the following tensor
\begin{equation} v_{i}{}^{j} = {1 \over 3\pi} \delta_{D}\left( \delta - \delta_{t} \right) G_{2} \frac{\delta_{i} \delta^{j}}{\left| \nabla \delta \right|} .
\end{equation}
$G_{2}$ is a function of both first and second derivatives of the field. Hence when constructing the ensemble average $\langle v_{i}{}^{j} \rangle$, we must use a ten-dimensional multivariate probability distribution involving the field and its first and second derivatives -- $X = (\delta, \delta_{i},\delta_{jk})$. If the field is homogeneous and isotropic or plane parallel redshift space distorted, then the dependence of $G_{2}$ on the second derivatives $\delta_{jk}$ does not contribute to the ensemble average, hence $\langle v_{i}{}^{j} \rangle$ reduces to an integral over the joint probability distribution of $\delta$ and $\delta_{i}$. For an inhomogeneous field we cannot assume this remains true and must construct the corresponding full $10 \times 10$ covariance matrix, $\Sigma$, for $\delta$, $\delta_{i}$ and $\delta_{jk}$. For a spherically redshift space distorted field, due to the assumed residual isotropy on the two-sphere, many off-diagonal terms are zero. The expression for $\Sigma$ takes the form
\begin{equation}
\Sigma =
\begin{pmatrix}
\langle \delta^{2} \rangle & \langle \delta\delta_{r} \rangle & 0 & 0 & \langle \delta\delta_{rr} \rangle & 0 & 0 & \langle \delta\delta_{\theta\theta} \rangle & \langle \delta\delta_{\phi\phi} \rangle & 0 \\
\langle \delta_{r}\delta \rangle & \langle \delta_{r}^{2} \rangle & 0 & 0 & \langle \delta_{r}\delta_{rr} \rangle & 0 & 0 & \langle \delta_{r}\delta_{\theta\theta} \rangle & \langle \delta_{r}\delta_{\phi\phi} \rangle & 0 \\
0 & 0 & \langle \delta_{\theta}^{2} \rangle & 0 & 0 & \langle \delta_{\theta}\delta_{r\theta} \rangle & 0 & 0 & \langle \delta_{\theta}\delta_{\phi\phi} \rangle & 0 \\
0 & 0 & 0 & \langle \delta_{\phi}^{2} \rangle & \langle \delta_{\phi}\delta_{rr} \rangle & 0 & \langle \delta_{\phi}\delta_{r\phi} \rangle & 0 & 0 & \langle \delta_{\phi}\delta_{\theta\phi} \rangle \\
\langle \delta_{rr}\delta \rangle & \langle \delta_{rr}\delta_{r} \rangle & 0 & 0 & \langle \delta_{rr}^{2} \rangle & 0 & 0 & \langle \delta_{rr}\delta_{\theta\theta} \rangle & \langle \delta_{rr}\delta_{\phi\phi} \rangle & 0 \\
0 & 0 & \langle \delta_{r\theta}\delta_{\theta} \rangle & 0 & 0 & \langle \delta_{r\theta}^{2} \rangle & 0 & 0 & \langle \delta_{r\theta}\delta_{\phi\phi} \rangle & 0 \\
0 & 0 & 0 & \langle \delta_{r\phi}\delta_{\phi} \rangle & 0 & 0 & \langle \delta_{r\phi}^{2} \rangle & 0 & 0 & \langle \delta_{r\phi}\delta_{\theta\phi} \rangle \\
\langle \delta_{\theta\theta}\delta \rangle & \langle \delta_{\theta\theta}\delta_{r} \rangle & 0 & 0 & \langle \delta_{\theta\theta}\delta_{rr} \rangle & 0 & 0 & \langle \delta_{\theta\theta}^{2} \rangle & \langle \delta_{\theta\theta}\delta_{\phi\phi} \rangle & 0 \\
\langle \delta_{\phi\phi}\delta \rangle & \langle \delta_{\phi\phi}\delta_{r} \rangle & \langle \delta_{\phi\phi}\delta_{\theta} \rangle & 0 & \langle \delta_{\phi\phi}\delta_{rr} \rangle & \langle \delta_{\phi\phi}\delta_{r\theta} \rangle & 0 & \langle \delta_{\phi\phi}\delta_{\theta\theta} \rangle & \langle \delta_{\phi\phi}^{2} \rangle & 0 \\
0 & 0 & 0 & \langle \delta_{\theta\phi}\delta_{\phi} \rangle & 0 & 0 & \langle \delta_{\theta\phi}\delta_{r\phi} \rangle & 0 & 0 & \langle \delta_{\theta\phi}^{2} \rangle
\end{pmatrix}
\end{equation}
\noindent This is the covariance matrix of the partial derivatives. If one uses covariant derivatives as random variables, then different correlations will be present. The $4\times 4$ block in the top left corner has been calculated in the main body of the text. In this appendix we calculate the other terms as follows --
\begin{eqnarray}
\nonumber \left<\delta_{rr}^2\right> &=& \left(\frac{1}{\pi^2}\right)\left(\frac{1}{10} + \frac{f}{7} + \frac{f^2}{18}\right)\int k^6 P(k,R_{\rm G})dk + \left(\frac{1}{\pi^2r^2}\right)\left(\frac{4f}{5} + \frac{6f^2}{7}\right)\int k^4 P(k,R_{\rm G})dk + \left(\frac{8f^2}{3\pi^2r^6}\right)\int P(k,R_{\rm G}) dk \\
& & \\
\nonumber \left<\delta_{\phi\phi}^2\right> &=& \left(\frac{r^4\sin^4\theta}{\pi^2}\right)\left(\frac{1}{10} + \frac{f}{35} + \frac{f^2}{210}\right)\int k^6P(k,R_{\rm G})dk + \\
\nonumber & & \left(\frac{r^2}{\pi^2}\right)\left[\frac{\sin^2\theta}{6} + \left(\frac{\sin^2\theta}{15}-\frac{6\sin^4\theta}{5}\right)f + \left(\frac{\sin^2\theta}{70} + \frac{12\sin^4\theta}{35}\right)f^2\right]\int k^4P(k,R_{\rm G})dk \\
& & +\left(\frac{1}{\pi^2}\right)\left[\frac{-2f\sin^2\theta}{3} + \left(\frac{2\sin^2\theta}{5} + \frac{18\sin^4\theta}{5}\right)f^2\right]\int k^2P(k,R_{\rm G})dk + \frac{2f^2\sin^2\theta}{3\pi^2r^2}\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\nonumber \left<\delta_{\theta\theta}^2\right> & = & \left(\frac{r^4}{\pi^2}\right)\left(\frac{1}{10} + \frac{f}{35} + \frac{f^2}{210}\right)\int k^6P(k,R_{\rm G})dk + \left(\frac{r^2}{\pi^2}\right)\left(\frac{1}{6} - \frac{17f}{15} + \frac{5f^2}{14}\right)\int k^4P(k,R_{\rm G})dk \\
& & +\left(\frac{1}{\pi^2}\right)\left(-\frac{2f}{3} + 4f^2\right)\int k^2P(k,R_{\rm G})dk + \frac{2f^2}{3\pi^2r^2}\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\nonumber \left<\delta_{r\phi}^2\right> & = & \left(\frac{r^2\sin^2\theta}{\pi^2}\right)\left(\frac{1}{30} + \frac{f}{35} + \frac{f^2}{126}\right)\int k^6P(k,R_{\rm G})dk + \left(\frac{\sin^2\theta}{\pi^2}\right)\left(\frac{1}{6} + \frac{f}{5} + \frac{3f^2}{10}\right)\int k^4P(k,R_{\rm G})dk \\
& & +\left(\frac{\sin^2\theta}{\pi^2r^2}\right)\left(\frac{2f}{3} + \frac{4f^2}{5}\right)\int k^2P(k,R_{\rm G})dk + \frac{2f^2\sin^2\theta}{3\pi^2r^4}\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\nonumber \left<\delta_{r\theta}^2\right> & = & \left(\frac{r^2}{\pi^2}\right)\left(\frac{1}{30} + \frac{f}{35} + \frac{f^2}{126}\right)\int k^6P(k,R_{\rm G})dk + \left(\frac{1}{\pi^2}\right)\left(\frac{1}{6} + \frac{f}{5} + \frac{3f^2}{10}\right)\int k^4P(k,R_{\rm G})dk \\
& & +\left(\frac{1}{\pi^2r^2}\right)\left(\frac{2f}{3} + \frac{4f^2}{5}\right)\int k^2P(k,R_{\rm G})dk + \frac{2f^2}{3\pi^2r^4}\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\nonumber \left<\delta_{\theta\phi}^2\right> & = & \left(\frac{r^4\sin^2\theta}{\pi^2}\right)\left(\frac{1}{30}+\frac{f}{105}+\frac{f^2}{630}\right)\int k^6 P(k,R_{\rm G})dk \\
\nonumber & & + \left(\frac{r^2}{\pi^2}\right)\left[\left(\frac{1}{6}-\frac{\sin^2\theta}{6}\right)+\left(\frac{1}{15}-\frac{7\sin^2\theta}{15}\right)f + \left(\frac{1}{70}+\frac{\sin^2\theta}{10}\right)f^2\right]\int k^4 P(k,R_{\rm G})dk \\
\nonumber & & + \left(\frac{1}{\pi^2}\right)\left[\left(-\frac{2}{3}+\frac{2\sin^2\theta}{3}\right)f + \left(\frac{2}{5}+\frac{4\sin^2\theta}{5}\right)f^2\right]\int k^2 P(k,R_{\rm G})dk \\
& & + \left(\frac{f^2}{\pi^2r^2}\right)\left(\frac{2}{3}-\frac{2\sin^2\theta}{3}\right)\int P(k,R_{\rm G}) dk \\
\nonumber & & \\
\nonumber \left<\delta_{rr}\delta_{\phi\phi}\right> & = & \left(\frac{r^2\sin^2\theta}{\pi^2}\right)\left(\frac{1}{30} + \frac{f}{35} + \frac{f^2}{126}\right)\int k^6 P(k,R_{\rm G})dk \\
\nonumber & & + \left(\frac{\sin^2\theta}{\pi^2}\right)\left(\frac{2f}{15} + \frac{2f^2}{7}\right)\int k^4 P(k,R_{\rm G})dk +\left(\frac{\sin^2\theta}{\pi^2r^2}\right)\left(\frac{2f}{3} + \frac{4f^2}{5}\right)\int k^2P(k,R_{\rm G})dk \\
& & - \left(\frac{4f^2\sin^2\theta}{3\pi^2r^4}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\nonumber \left<\delta_{rr}\delta_{\theta\theta}\right> & = & \left(\frac{r^2}{\pi^2}\right)\left(\frac{1}{30} + \frac{f}{35} + \frac{f^2}{126}\right)\int k^6 P(k,R_{\rm G})dk + \left(\frac{1}{\pi^2}\right)\left(\frac{2f}{15} + \frac{2f^2}{7}\right)\int k^4 P(k,R_{\rm G})dk + \\ & & \left(\frac{1}{\pi^2r^2}\right)\left(\frac{2f}{3} + \frac{4f^2}{5}\right)\int k^2P(k,R_{\rm G})dk
- \left(\frac{4f^2}{3\pi^2r^4}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\left<\delta_{r\theta}\delta_{\phi\phi}\right> & = & \left(\frac{r\sin 2\theta}{\pi^2}\right)\left(\frac{-1}{12} - \frac{f}{30} - \frac{f^2}{140}\right)\int k^4 P(k,R_{\rm G})dk + \left(\frac{f^2\sin 2\theta}{3\pi^2r^3}\right)\int P(k,R_{\rm G})dk \\
\nonumber & &
\end{eqnarray}
\begin{eqnarray}
\left<\delta_{r\phi}\delta_{\theta\phi}\right> & = & \left(\frac{r\sin 2\theta}{\pi^2}\right)\left(\frac{1}{12} + \frac{f}{30} + \frac{f^2}{140}\right)\int k^4 P(k,R_{\rm G})dk - \left(\frac{f^2 \sin 2\theta}{3\pi^2r^3}\right)\int P(k,R_{\rm G}) dk \\
\nonumber & & \\
\left<\delta\delta_r\right> & = & \left(\frac{-2f^2}{3\pi^2r^3}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\nonumber \left<\delta\delta_{rr}\right> & = & \left(\frac{1}{\pi^2}\right)\left(-\frac{1}{6}-\frac{f}{5}-\frac{f^2}{14}\right)\int k^4 P(k,R_{\rm G})dk + \left(\frac{1}{\pi^2r^2}\right)\left(-\frac{2f}{3}-\frac{4f^2}{5}\right)\int k^2P(k,R_{\rm G})dk \\
& & + \left(\frac{4f^2}{3\pi^2r^4}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\left<\delta_r\delta_{rr}\right> & = & \left(\frac{1}{\pi^2r^3}\right)\left(-\frac{2f}{3}-\frac{4f^2}{5}\right)\int k^2 P(k,R_{\rm G})dk - \left(\frac{4f^2}{3\pi^2r^5}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\left<\delta_{r\theta}\delta_{\theta}\right> & = & \left(\frac{r}{\pi^2}\right)\left(\frac{1}{6}+\frac{f}{15} + \frac{f^2}{70}\right)\int k^4 P(k,R_{\rm G})dk - \left(\frac{2f^2}{3\pi^2r^3}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\left<\delta_{r\phi}\delta_{\phi}\right> & = & \left(\frac{r\sin^2\theta}{\pi^2}\right)\left(\frac{1}{6}+\frac{f}{15} + \frac{f^2}{70}\right)\int k^4 P(k,R_{\rm G})dk - \left(\frac{2f^2\sin^2\theta}{3\pi^2r^3}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\nonumber \left<\delta_{\theta\theta}\delta\right> & = & \left(\frac{r^2}{\pi^2}\right)\left(-\frac{1}{6}-\frac{f}{15} - \frac{f^2}{70}\right)\int k^4 P(k,R_{\rm G})dk + \left(\frac{1}{\pi^2}\right)\left(\frac{2f}{3} - \frac{2f^2}{5}\right)\int k^2P(k,R_{\rm G})dk \\
& & - \left(\frac{2f^2}{3\pi^2r^2}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\nonumber \left<\delta_{\phi\phi}\delta\right> & = & \left(\frac{r^2\sin^2\theta}{\pi^2}\right)\left(-\frac{1}{6}-\frac{f}{15} - \frac{f^2}{70}\right)\int k^4 P(k,R_{\rm G})dk + \left(\frac{\sin^2\theta}{\pi^2}\right)\left(\frac{2f}{3} - \frac{2f^2}{5}\right)\int k^2P(k,R_{\rm G})dk \\
& & - \left(\frac{2f^2\sin^2\theta}{3\pi^2r^2}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\left<\delta_{\theta\theta}\delta_r\right> & = & \left(\frac{r}{\pi^2}\right)\left(-\frac{1}{6}-\frac{f}{15} - \frac{f^2}{70}\right)\int k^4 P(k,R_{\rm G})dk + \left(\frac{2f^2}{3\pi^2r^3}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\left<\delta_{\phi\phi}\delta_r\right> & = & \left(\frac{r\sin^2\theta}{\pi^2}\right)\left(-\frac{1}{6}-\frac{f}{15} - \frac{f^2}{70}\right)\int k^4 P(k,R_{\rm G})dk + \left(\frac{2f^2\sin^2\theta}{3\pi^2r^3}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\nonumber \left<\delta_{\phi\phi}\delta_{\theta}\right> & = & \left(\frac{r^2\sin 2\theta}{\pi^2}\right)\left(-\frac{1}{12}-\frac{f}{30} - \frac{f^2}{140}\right)\int k^4 P(k,R_{\rm G})dk + \left(\frac{\sin 2\theta}{\pi^2}\right)\left(\frac{f}{3} - \frac{f^2}{5}\right)\int k^2P(k,R_{\rm G})dk \\
& & - \left(\frac{f^2\sin 2\theta}{3\pi^2r^2}\right)\int P(k,R_{\rm G})dk \\
\nonumber & & \\
\nonumber \left<\delta_{\theta\phi}\delta_{\phi}\right> & = & \left(\frac{r^2\sin 2\theta}{\pi^2}\right)\left(\frac{1}{12}+\frac{f}{30} + \frac{f^2}{140}\right)\int k^4 P(k,R_{\rm G})dk + \left(\frac{\sin 2\theta}{\pi^2}\right)\left(-\frac{f}{3} + \frac{f^2}{5}\right)\int k^2P(k,R_{\rm G})dk \\
& & + \left(\frac{f^2\sin 2\theta}{3\pi^2r^2}\right)\int P(k,R_{\rm G})dk
\end{eqnarray}
These are the correlations between partial derivatives of the field, although covariant derivatives can be used instead. If we take the limit $r \to \infty$, then the covariance matrix reduces to the plane parallel limit. Hence, as in the main body of the text, if the field is sufficiently distant from the observer at $r=0$, the ensemble average $\langle v_{i}{}^{j}\rangle$ is well approximated by the plane parallel result in \citet{Appleby_2019}. More concretely, the dimensionless terms $\sigma_{-1}^{2} /(r^{6} \sigma_{2}^{2})$, $\sigma_{1}^{2}/(r^{2}\sigma_{2}^{2})$ and $\sigma_{0}^{2}/(r^{4}\sigma_{2}^{2})$ must all be negligible for the plane parallel limit to hold.
The volume average of $v_{i}{}^{j}$ in a spherical basis is given by
\begin{equation} {}^{\rm sp}\bar{v}_{i}{}^{j} = {1 \over 3\pi V} \sum_{m,n,p} \Delta^{3} \delta_{D}(\tilde{\delta}_{\{m,n,p\}}-\delta_{t}) G_{2\, \{m,n,p\}} {{}^{\gamma}\tilde{\delta}_{i}{}_{\{m,n,p\}} {}^{\gamma}\tilde{\delta}^{j}{}_{\{m,n,p\}} \over |\nabla \tilde{\delta}_{\{m,n,p\}}|} .
\end{equation}
\noindent This is straightforward to construct; the only complication beyond ${}^{\rm sp}\bar{w}_{i}{}^{j}$ is that we must additionally estimate $G_{2}$ at each pixel using ($\ref{eq:g2}$). Since $G_{2}$ is a scalar quantity, we use Cartesian coordinates and a simple second order accurate finite difference scheme for the first and second derivatives to reconstruct $G_{2\, \{m,n,p\}}$ at each pixel.
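As an illustration of this reconstruction step (a minimal numpy sketch, not the authors' code; the function name and test field are ours), $G_{2}$ can be estimated on a gridded field with second-order central differences via \texttt{numpy.gradient}:

```python
import numpy as np

def mean_curvature_G2(delta, h):
    """Estimate G2 = -(1/2) div( grad(delta)/|grad(delta)| ) on a 3D grid.

    numpy.gradient uses second-order accurate central differences in the
    interior, matching the scheme described in the text."""
    dx, dy, dz = np.gradient(delta, h)
    norm = np.sqrt(dx**2 + dy**2 + dz**2)
    norm = np.where(norm > 0, norm, 1.0)          # guard against |grad| = 0
    div_n = (np.gradient(dx / norm, h)[0]
             + np.gradient(dy / norm, h)[1]
             + np.gradient(dz / norm, h)[2])
    return -0.5 * div_n

# Sanity check on delta = x^2 + y^2 + z^2: the iso-surfaces are spheres
# of radius r, for which G2 = -1/r with this sign convention.
h = 0.05
ax = np.arange(1.0, 2.0, h)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
g2 = mean_curvature_G2(x**2 + y**2 + z**2, h)
i = (10, 10, 10)                                  # an interior pixel
r = np.sqrt(x[i]**2 + y[i]**2 + z[i]**2)
```

Away from the box edges the estimate agrees with the analytic mean curvature to a few parts in $10^{4}$ for this smooth test field.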
In Figure \ref{fig:app1} we present the components of the Minkowski tensor $\bar{v}_{i}{}^{j}$ from the Quijote $z=0$ snapshot boxes smoothed with scale $R_{\rm G} = 20 \, {\rm Mpc}$ as a function of $\nu$ (left panel) and $\nu_{\rm A}$ (right panel). The color scheme matches Figure \ref{fig:2a} in the main body of the text. All off-diagonal components are consistent with zero and not plotted. The redshift space distortion signal is present, with the $(\theta,\theta)$, $(\phi,\phi)$ components systematically lower in amplitude compared to the real space statistics (cf light hollow diamonds). The radial component (red filled diamonds) is only marginally higher than the isotropic components -- this is due to the same Finger-of-God effect observed in the main body of the text (see Section \ref{sec:ngrf}).
In Figure \ref{fig:app2} we present the amplitude of $\bar{v}_{i}{}^{j}$ defined as the coefficient of the $H_{1}$ Hermite polynomial --
\begin{equation} B|_{i}{}^{j} = {1 \over \sqrt{2\pi}} \int_{-\nu_{{\rm max}}}^{\nu_{\rm max}} \bar{v}_{i}{}^{j}(\nu_{\rm A}) H_{1}(\nu_{\rm A}) d\nu_{\rm A} ,
\label{eqn:Bij}
\end{equation}
\noindent and the additional Hermite polynomial coefficients
\begin{eqnarray} b_{0}|_{i}{}^{j} &=& {1 \over \sqrt{2\pi}} \int_{-\nu_{{\rm max}}}^{\nu_{{\rm max}}} \bar{v}_{i}{}^{j}(\nu_{\rm A}) H_{0}(\nu_{\rm A}) d\nu_{\rm A} ,
\label{eqn:b1ij}\\
b_{2}|_{i}{}^{j} &=& {1 \over 2\sqrt{2\pi}} \int_{-\nu_{{\rm max}}}^{\nu_{{\rm max}}} \bar{v}_{i}{}^{j}(\nu_{\rm A}) H_{2}(\nu_{\rm A}) d\nu_{\rm A} .
\label{eqn:b2ij}
\end{eqnarray}
\noindent
The color scheme is the same as in Figure $\ref{fig:2}$. Qualitatively we observe the same behaviour for $W^{0,2}_{2}$ as for $W^{0,2}_{1}$, but here it is more pronounced. Both the isotropic and spherically redshift space distorted fields are significantly affected by the non-Gaussianity of the Quijote fields for scales $R_{\rm G} \leq 35 \, {\rm Mpc}$, and the $(r,r)$ component in redshift space most significantly departs from the Gaussian limit (cf top panels, red diamonds/error bars). The redshift space Gaussian and plane parallel limit of the amplitudes are given by
\begin{eqnarray}\label{eq:ampB1} & & {}^{\rm pp}B_{G}|_{x}{}^{x} = {3 \over 2}\sqrt{\pi \over 2} A_{0}^{2}\left[ {(\lambda^{2}-2)(\lambda^{2}-1)^{1/2} + \lambda^{4}\tan^{-1}\sqrt{\lambda^{2}-1} \over (\lambda^{2}-1)^{3/2} } \right] , \\
\label{eq:ampB2} & & {}^{\rm pp}B_{G}|_{y}{}^{y} = {}^{\rm pp}B_{G}|_{x}{}^{x} , \\
\label{eq:ampB3} & & {}^{\rm pp}B_{G}|_{z}{}^{z} = 3 \sqrt{\pi \over 2}A_{0}^{2}\left[{\lambda^{2} \left[ \left(\lambda^{2}-1\right)^{1/2} + (\lambda^{2}-2) \tan^{-1}\sqrt{\lambda^{2}-1} \right] \over (\lambda^{2}-1)^{3/2} }\right] ,
\end{eqnarray}
\noindent and the isotropic Gaussian limit is
\begin{equation}\label{eq:Bgau} {}^{\rm re}B_{\rm G}|_{x}{}^{x} = {}^{\rm re}B_{\rm G}|_{y}{}^{y} = {}^{\rm re}B_{\rm G}|_{z}{}^{z} = {\sigma_{1}^{2} \over 27\pi \sqrt{2\pi} \sigma_{0}^{2}} .
\end{equation}
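As a consistency check, the plane parallel amplitudes (\ref{eq:ampB1}) and (\ref{eq:ampB3}) must coincide as $\lambda \to 1$, where isotropy is restored; both approach $4\sqrt{\pi/2}\,A_{0}^{2}$. A quick numerical verification (with $A_{0}=1$ purely for illustration):

```python
import numpy as np

def B_xx(lam, A0=1.0):
    # eq. (ampB1); valid for lam > 1
    s = np.sqrt(lam**2 - 1)
    return 1.5 * np.sqrt(np.pi / 2) * A0**2 * \
        ((lam**2 - 2) * s + lam**4 * np.arctan(s)) / s**3

def B_zz(lam, A0=1.0):
    # eq. (ampB3)
    s = np.sqrt(lam**2 - 1)
    return 3 * np.sqrt(np.pi / 2) * A0**2 * \
        lam**2 * (s + (lam**2 - 2) * np.arctan(s)) / s**3

lam = 1.0001                      # just above the isotropic point
iso = 4 * np.sqrt(np.pi / 2)      # common lambda -> 1 limit for A0 = 1
```

Both functions agree with the limiting value to better than a part in $10^{3}$ at $\lambda = 1.0001$.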
The non-Gaussian coefficients $b_{0}|_{i}{}^{j}$ and $b_{2}|_{i}{}^{j}$ remain small down to relatively small scales, $R_{\rm G} \gtrsim 15 \, {\rm Mpc}$.
\section{Rotation of Basis Vectors Relative to a Great Arc}
\label{sec:appen3}
In the main body of the paper we constructed an algorithm to describe the rotation of a vector under geodesic transport to a different location on the two-sphere. In this appendix we present the rotation of the spherical basis vectors explicitly using a simple geometric prescription. Starting with the unit sphere, we select two points on the sphere defined with $\mathbb{R}^{3}$ unit vectors $\hat{u}$ and $\hat{v}$. To parameterize the great arc that passes through these two points we introduce the vectors $\hat{m} = \hat{u} \times \hat{v}/|\hat{u}\times \hat{v}|$ and $\hat{n} = \hat{m} \times \hat{u}/|\hat{m}\times \hat{u}|$. The unit vectors $\hat{u}$, $\hat{m}$ and $\hat{n}$ are mutually orthogonal, and $\hat{u}$, $\hat{n}$ form a basis in the plane in which the great circle is defined. Any position on the great arc can then be represented parametrically with the vector
\begin{equation} \hat{R} = \hat{u}\cos t + \hat{n} \sin t \end{equation}
\noindent for $0 < t \leq 2\pi$. The tangent vector to the great arc is
\begin{equation} \hat{T} = -\hat{u} \sin t + \hat{n}\cos t \end{equation}
\noindent Each point on the great arc, specified by the vector $\hat{R}$, can be described using the angles $\theta,\phi$ in a spherical coordinate system, and we can then define the spherical basis vectors in the usual way
\begin{eqnarray} & & {\bf e}_{r} = \sin\theta \cos\phi \, {\bf e}_{x} + \sin\theta \sin\phi \, {\bf e}_{y} + \cos\theta \, {\bf e}_{z} \\
& & {\bf e}_{\theta} = \cos\theta \cos\phi \, {\bf e}_{x} + \cos\theta \sin\phi \, {\bf e}_{y} - \sin\theta \, {\bf e}_{z} \\
& & {\bf e}_{\phi} = - \sin\phi \, {\bf e}_{x} + \cos\phi \, {\bf e}_{y}
\end{eqnarray}
The dot products $\hat{T} . {\bf e}_{\theta}$ and $\hat{T} . {\bf e}_{\phi}$ then encode the rotation of the spherical basis vectors relative to the great arc tangent along the path. This is the rotation that is accounted for in the main body of the text when summing vector fields at different locations on the manifold. Parallel transport preserves the orientation of a tangent space relative to $\hat{T}$, so after geodesic transport the components of a vector in the basis ${\bf e}_{\theta}$, ${\bf e}_{\phi}$ are rotated. Conversely, the dot product $\hat{T} . {\bf e}_{r}$ is always zero, and components of a vector parallel to ${\bf e}_{r}$ are preserved. If the great arc lies on the equator of the sphere then the basis vectors do not rotate with respect to the great arc tangent vector.
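The geometric prescription above is compact enough to transcribe directly (our own sketch; the function name is ours). The equatorial arc provides a check: there $\hat{T}$ coincides with ${\bf e}_{\phi}$ everywhere, so $\alpha_{\phi}=0$ and $\alpha_{\theta}=\pi/2$ at every $t$.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def arc_rotation_angles(u, v, t):
    """Angles between the tangent of the great arc through unit vectors
    u, v and the spherical basis vectors at arc parameter t."""
    m = unit(np.cross(u, v))                  # normal to the arc plane
    n = unit(np.cross(m, u))
    R = u * np.cos(t) + n * np.sin(t)         # point on the arc
    T = -u * np.sin(t) + n * np.cos(t)        # tangent vector
    theta, phi = np.arccos(R[2]), np.arctan2(R[1], R[0])
    e_th = np.array([np.cos(theta) * np.cos(phi),
                     np.cos(theta) * np.sin(phi), -np.sin(theta)])
    e_ph = np.array([-np.sin(phi), np.cos(phi), 0.0])
    a_th = np.arccos(np.clip(T @ e_th, -1.0, 1.0))
    a_ph = np.arccos(np.clip(T @ e_ph, -1.0, 1.0))
    return a_th, a_ph

# Equatorial arc: u = x-hat, v = y-hat
a_th, a_ph = arc_rotation_angles(np.array([1.0, 0.0, 0.0]),
                                 np.array([0.0, 1.0, 0.0]), 0.7)
```

Sampling $t$ over $(0, 2\pi]$ for random $\hat{u}$, $\hat{v}$ reproduces the curves shown in the right panel of Figure \ref{fig:app3a}.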
We present $N=10$ great arcs defined by selecting $\hat{u}$, $\hat{v}$ randomly in the left panel of Figure \ref{fig:app3a}. The thick gold arc lies on the equator. The corresponding rotation angles $\alpha_{\theta} = \cos^{-1}(\hat{T}.{\bf e}_{\theta})$ and $\alpha_{\phi} = \cos^{-1}(\hat{T}.{\bf e}_{\phi})$, as a function of the arc parameter $t$, are presented in the right panel of Figure \ref{fig:app3a}. Only when the great arc aligns with the coordinate basis is there no relative rotation of the tangent space (cf gold lines). There are two points on each great arc at which the path is perpendicular to ${\bf e}_{\theta}$ and hence $\hat{T}$ either aligns or anti-aligns with ${\bf e}_{\phi}$, depending on the direction of the arc tangent vector. This is the origin of the dichotomy observed in the $\alpha_{\phi}$ panel. Note that the vectors return to their original orientation if transported along the entire great arc.
\section{Useful Relations}
\label{sec:appen2}
We provide some useful identities regarding the spherical Bessel functions and other functions that are used in the paper. Some of these can be found in standard textbooks \citep{mabramowitz64:handbook} --
\begin{eqnarray}
\label{app:b2} & & \sum_{m=-\ell}^{\ell} Y_{\ell m}(\hat{{\bf s}}_{1}) Y_{\ell m}^{*}(\hat{{\bf s}}_{2}) = {2 \ell + 1 \over 4\pi}{\cal L}_{\ell}(\hat{{\bf s}}_{1} . \hat{{\bf s}}_{2}), \\
\label{app:b3} & & \sum_{m=-\ell}^{\ell} {\partial Y_{\ell m}(\theta,\phi) \over \partial \phi} {\partial Y_{\ell m}^{*}(\theta,\phi) \over \partial \phi} = {(2 \ell + 1)\ell (\ell + 1 ) \over 8\pi}\sin^{2}\theta, \\
\label{app:b4} & & \sum_{m = -\ell}^{\ell} {\partial Y^{m *}_{\ell}(\theta,\phi) \over \partial \theta} {\partial Y^{m }_{\ell}(\theta,\phi) \over \partial \theta} = {(2 \ell + 1) \ell (\ell+1) \over 8\pi}, \\
\label{app:b5} & & \sum_{m = -\ell}^{\ell} {\partial^2 Y^{m *}_{\ell}(\theta,\phi) \over \partial \phi^2} {\partial^2 Y^{m }_{\ell}(\theta,\phi) \over \partial \phi^2} = {(2 \ell + 1) \ell (\ell+1) \over 8\pi}\sin^2\theta\left[ \frac{3\sin^2\theta}{4}\left(\ell(\ell+1) + \left(1-\frac{3\sin^2\theta}{2}\right) \right) \right], \\
\label{app:b6} & & \sum_{m = -\ell}^{\ell} {\partial^2 Y^{m *}_{\ell}(\theta,\phi) \over \partial \theta^2} {\partial^2 Y^{m }_{\ell}(\theta,\phi) \over \partial \theta^2} = {(2 \ell + 1) \ell (\ell+1) \over 8\pi}\left[ \frac{3}{4}\ell(\ell+1) - \frac{1}{2} \right].
\end{eqnarray}
\noindent These can be derived using the general result
\begin{equation}\label{eq:n1} P_{\ell}(\cos\gamma) = {4\pi \over 2\ell + 1} \sum_{m = -\ell}^{\ell} Y^{m *}_{\ell}(\theta',\phi') Y^{m}_{\ell}(\theta,\phi), \end{equation}
\noindent where
\begin{equation}\label{eq:n2} \cos\gamma = \cos\theta \cos\theta' + \sin\theta \sin\theta' \cos(\phi -\phi'), \end{equation}
\noindent along with the differential equation that the Legendre polynomial solves --
\begin{equation}\label{eq:n3} (1-x^2)P''_{\ell}(x) - 2x P'_{\ell}(x) + \ell (\ell + 1) P_{\ell}(x) = 0, \end{equation}
\noindent and the normalisation $P_{\ell}(1) = 1$. Taking derivatives of eq.~($\ref{eq:n1}$) w.r.t. $\phi$, $\phi'$, $\theta$, $\theta'$, and then taking the limit $\theta \to \theta'$, $\phi \to \phi'$ and $x = \cos\gamma \to 1$ yields results such as ($\ref{app:b3}, \ref{app:b4}$).
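The identities above can be spot-checked numerically. The sketch below builds $Y^{m}_{\ell}$ from scipy's associated Legendre routine (the helper \texttt{Ylm} is ours; scipy's \texttt{lpmv} includes the Condon--Shortley phase) and verifies (\ref{app:b2}) and (\ref{app:b3}) for $\ell = 3$ at arbitrary directions:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv, eval_legendre

def Ylm(l, m, theta, phi):
    """Spherical harmonic from the associated Legendre function, using
    Y_{l,-m} = (-1)^m conj(Y_{l,m}) for negative m."""
    ma = abs(m)
    N = np.sqrt((2 * l + 1) / (4 * np.pi)
                * factorial(l - ma) / factorial(l + ma))
    Y = N * lpmv(ma, l, np.cos(theta)) * np.exp(1j * ma * phi)
    return (-1) ** ma * np.conj(Y) if m < 0 else Y

ell = 3
th1, ph1, th2, ph2 = 0.7, 1.1, 2.0, 5.3       # arbitrary directions
m = np.arange(-ell, ell + 1)
Y1 = np.array([Ylm(ell, mm, th1, ph1) for mm in m])
Y2 = np.array([Ylm(ell, mm, th2, ph2) for mm in m])

# addition theorem, eq. (b2)
lhs = np.sum(Y1 * np.conj(Y2))
cosg = (np.cos(th1) * np.cos(th2)
        + np.sin(th1) * np.sin(th2) * np.cos(ph1 - ph2))
rhs = (2 * ell + 1) / (4 * np.pi) * eval_legendre(ell, cosg)

# eq. (b3): since dY/dphi = i m Y, the sum reduces to sum_m m^2 |Y|^2
lhs3 = np.sum(m**2 * np.abs(Y1) ** 2)
rhs3 = (2 * ell + 1) * ell * (ell + 1) / (8 * np.pi) * np.sin(th1) ** 2
```

Both checks agree to machine precision.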
We also have the following relation for the spherical Bessel function of the first kind, $j_{\ell}$,
\begin{equation}\label{eq:rel1} \sum_{\ell=0}^{\infty} (2\ell + 1)\left[j^{(p)}_{\ell}(x)\right]^{2} = {1 \over 2p + 1}, \end{equation}
\noindent where the $(p)$ superscript denotes the $p^{\rm th}$ derivative of the spherical Bessel function with respect to its argument. Also important are the following relations
\begin{eqnarray}\label{app:b13} & & \sum_{\ell=0}^{\infty} (2\ell+1) \ell (\ell + 1) j^{2}_{\ell}(x) = {2x^{2} \over 3}, \\
\label{app:b14} & & \sum_{\ell=0}^{\infty} (2\ell + 1) \ell (\ell + 1) \left[j'_{\ell}(x)\right]^{2} = {2 \over 3} + {2 \over 15}x^{2}, \\
\label{app:b15} & & \sum_{\ell=0}^{\infty} (2\ell+1) \ell(\ell+1) \left[j''_{\ell}(x) \right]^{2} = {8 \over 15} + {2x^{2} \over 35} .
\end{eqnarray}
\noindent The $j_\ell$ functions satisfy the equation,
\begin{equation}
\ell\left(\ell+1\right)j_\ell(x) = x^2j_{\ell}''(x) + 2xj_{\ell}'(x) + x^2j_{\ell}(x),
\label{app:b16}
\end{equation}
which can be differentiated w.r.t. $x$ twice to give,
\begin{equation}
\ell\left(\ell+1\right)j_{\ell}''(x) = x^2j_{\ell}''''(x) +6xj_{\ell}'''(x) + \left(x^2+6\right)j_{\ell}''(x) + 4xj_{\ell}'(x) + 2j_{\ell}(x).
\label{app:b17}
\end{equation}
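The sums (\ref{eq:rel1}) and (\ref{app:b13})--(\ref{app:b15}) converge rapidly once $\ell \gg x$ and can be verified numerically, using (\ref{app:b16}) to obtain $j''_{\ell}$ from $j_{\ell}$ and $j'_{\ell}$ (a sketch of ours, not part of the original analysis):

```python
import numpy as np
from scipy.special import spherical_jn

x, L = 2.0, 60                      # argument and truncation order
l = np.arange(L + 1)
j = spherical_jn(l, x)
jp = spherical_jn(l, x, derivative=True)
# j'' from the defining ODE, eq. (b16)
jpp = l * (l + 1) * j / x**2 - 2 * jp / x - j

w = 2 * l + 1
s0 = np.sum(w * j**2)                    # eq. (rel1), p = 0: -> 1
s1 = np.sum(w * jp**2)                   # eq. (rel1), p = 1: -> 1/3
s13 = np.sum(w * l * (l + 1) * j**2)     # eq. (b13): -> 2 x^2 / 3
s15 = np.sum(w * l * (l + 1) * jpp**2)   # eq. (b15): -> 8/15 + 2 x^2 / 35
```

At $x = 2$ the truncation at $L = 60$ is far beyond the turnover of $j_{\ell}(x)$ and all four sums match the closed forms to machine precision.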
In Section \ref{sec:num}, when reconstructing the Minkowski tensors numerically, we transform between Cartesian and spherical coordinate systems. We adopt the standard angle conventions such that the conversion from Cartesian to radial gradients is
\begin{equation}\label{eq:other1} {\partial \Omega \over \partial r} = \sin\theta \cos\phi {\partial \Omega \over \partial x} + \sin\theta \sin\phi {\partial \Omega \over \partial y} + \cos\theta {\partial \Omega \over \partial z} .
\end{equation}
\noindent To define volume averages in Section \ref{sec:num}, we rotate vectors on the two-sphere. To do so we define ${\bf \hat{m}}$ as the unit vector pointing to a position on the manifold at which $\delta_{D}(\tilde{\delta}_{[m,n,p]}-\delta_{t}) \neq 0$, and ${\bf \hat{a}}$ as the unit vector pointing to a fiducial point at which we take the volume average of $w_{i}{}^{j}$; then the unit quaternion $q = q_{0} + {\bf q}$ with elements
\begin{equation} q_{0} = \cos{\theta \over 2} , \qquad {\bf q} = { {\bf \hat{m}} \times {\bf \hat{a}} \over |{\bf \hat{m}} \times {\bf \hat{a}}|} \sin{\theta \over 2} , \end{equation}
\noindent is used to rotate the gradient vector sampled at $[m,n,p]$ to $[a,b,c]$, where $\cos\theta = {\bf \hat{m}} . {\bf \hat{a}}$. The complex conjugate is $q^{*} = q_{0} - {\bf q}$ and the rotation operator acting on an arbitrary vector ${\bf v}$ can be written as
\begin{equation} q {\bf v} q^{*} = (q_{0}^{2} - |{\bf q}|^{2}){\bf v} + 2 ({\bf q}.{\bf v}){\bf q} + 2 q_{0} ({\bf q} \times {\bf v}). \end{equation}
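A direct transcription of this rotation (our own sketch, assuming ${\bf \hat{m}} \neq \pm{\bf \hat{a}}$ so the rotation axis is well defined) confirms that the quaternion built from ${\bf \hat{m}}$ and ${\bf \hat{a}}$ maps ${\bf \hat{m}}$ onto ${\bf \hat{a}}$:

```python
import numpy as np

def quat_rotate(v, m_hat, a_hat):
    """Rotate v with the unit quaternion taking m_hat onto a_hat, via
    q v q* = (q0^2 - |q|^2) v + 2 (q.v) q + 2 q0 (q x v)."""
    ct = float(np.dot(m_hat, a_hat))
    half = 0.5 * np.arccos(np.clip(ct, -1.0, 1.0))
    axis = np.cross(m_hat, a_hat)            # assumes m_hat not parallel to a_hat
    q = axis / np.linalg.norm(axis) * np.sin(half)
    q0 = np.cos(half)
    return (q0**2 - q @ q) * v + 2 * (q @ v) * q + 2 * q0 * np.cross(q, v)

m_hat = np.array([1.0, 0.0, 0.0])
a_hat = np.array([0.0, 0.6, 0.8])
r = quat_rotate(m_hat, m_hat, a_hat)         # should recover a_hat itself
```

The same call rotates the sampled gradient vectors from pixel $[m,n,p]$ to the fiducial pixel, as used in the volume averages of Section \ref{sec:num}.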
|
Title:
The Fermi Large Area Telescope |
Abstract: The Large Area Telescope, the primary instrument on the Fermi Gamma-ray Space
Telescope, is an imaging, wide field-of-view gamma-ray telescope. After many
improvements to the data acquisition and event analysis procedures, it now
covers the broad energy range from $\sim 20$ MeV to $\sim 2$ TeV. After more
than 13 years of operation since its launch on June 11, 2008, it has provided
the best-resolved and deepest portrait of the gamma-ray sky. In this chapter we
review the design of the instrument, the data acquisition system, calibration,
and performance.
| https://export.arxiv.org/pdf/2208.13635 |
\title*{The Fermi Large Area Telescope}
\author{Riccardo Rando}
\institute{R. Rando \at
University of Padova and I.N.F.N. Padova\\
via Marzolo 8, I-35131 Padova, Italy\\
\email{riccardo.rando@pd.infn.it}
}
\abstract{
The Large Area Telescope, the primary instrument on the Fermi Gamma-ray Space Telescope, is an imaging, wide field-of-view \gammaRayHyph\ telescope. After many improvements to the data acquisition and event analysis procedures, it now covers the broad energy range from $\sim 20$~MeV to $\sim 2$~TeV. After more than 13 years of operation since its launch in \mydate{2008}{June}{11}, it has provided the best-resolved and deepest portrait of the \gammaRayHyph\ sky. In this chapter we review the design of the instrument, the data acquisition system, calibration, and performance.
}
\section{Keywords}
gamma-ray telescope, calibration, silicon microstrip detector, electromagnetic calorimeter, plastic scintillator
\section{Introduction}
The birth of \emph{multi-wavelength} astronomy dates back to the 1960s, when radio astronomy reached maturity.
Most of the electromagnetic (EM) spectrum, however, was still out of reach due to atmospheric absorption outside the radio and optical windows. To cover most of the infrared band, X rays and soft \gammaRays, it was necessary to wait until the early space missions of the 1960s and 1970s. In the 1980s the first Imaging Atmospheric Cherenkov Telescopes (IACT) were built on the ground to reach even higher energies. Almost simultaneously, instruments capable of astronomical observations with probes other than photons appeared: the observations of cosmic neutrinos signaled the birth of \emph{multi-messenger} astrophysics. For a review see e.g. \citet{multimessenger,multimessenger2}.
The Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope (\fermi) was designed to cover, with excellent performance, the energy range from a few MeV to several hundred GeV, overlapping with ground-based IACT observatories. Deployed at a very favorable time, it plays a major role in the multi-messenger revolution currently underway.
In this chapter we describe the design, operation and performance of the LAT instrument. The interested reader can find a description of the \fermi\ mission and an overview of the scientific results in the dedicated chapter of this Handbook.
Firstly, we give a brief summary of the topic of \gammaRayHyph\ astrophysics from space, and how the scientific requirements and the technological advances shaped the design of the instrument. Then we describe in greater detail the three main subsystems of the LAT (tracker, calorimeter and anticoincidence detector), how the data acquisition system operates and how data are processed. We give some details on the operation of the LAT, with particular attention to the procedures required to keep the telescope in good health. Lastly, we describe the calibration of the detector and the scientific performance.
\section{\textit{A space-based MeV--GeV \gammaRayHyph\ observatory}}
Photons with energies above a few tens of MeV interact almost exclusively through the process of electron--positron pair production. Since the original photon disappears in the interaction, its properties must be derived from measurements performed on the two daughter particles. Standard optical approaches (reflection and refraction) are not applicable, and the photons are too penetrating for collimators. An MeV--GeV telescope is, in fact, a high energy physics (HEP) detector, based on technologies developed for use at accelerator facilities. Additional constraints are imposed by the need to place the detector in orbit, to escape the opacity of the atmosphere to high-energy photons (short observations can be performed from a balloon).
Looking at the history of \gammaRayHyph\ observatories in space, it is apparent how much of the progress in the field is related to the technological advancement of HEP particle detectors \citep{history,egret-to-lat}.
In the pioneering years (1960s) space-based \gammaRayHyph\ detectors were typically based on a stack of scintillators, restricting the acceptance to a small angle in lieu of measuring the direction of the incoming photons \citep{oso3}. The later generation of instruments (1970s) included a gas-filled spark chamber, where the original direction of the photon could be reconstructed from the ionization tracks left by the secondaries, leading to a great increase in field of view (FOV) and sensitivity \citep{sas2,cosb}. The spark chamber was at the core of the very successful EGRET instrument, on board the Compton Gamma-Ray Observatory (CGRO) that operated from 1991 to 2000 \citep{egret}.
In the 1990s the most important development in particle tracking since EGRET was the advent of large-area silicon strip trackers, so it was natural to design a successor around them. The silicon (micro-)strip detector (SSD) was developed in 1980 \citep{hejne80} and used with success in major HEP facilities all around the world. The advantages with respect to the existing technologies were many: very good spatial resolution ($\sim 50$~\um), excellent signal-to-noise ratio, good radiation hardness, robustness, absence of consumables, self triggering, etc. In parallel to tracking detectors, Application Specific Integrated Circuits (ASICs) were undergoing a rapid evolution from their appearance at the end of the 1960s, enabling the readout of an unprecedented number of channels ($\sim 1$ million) with excellent performance and low power consumption. Manufacturing and design techniques were developed to enable ASICs to withstand severe radiation environments. SSDs and ASICs have been instrumental to the success of the LAT.
Towards the end of the EGRET mission, the general constraints for the next space-based \gammaRayHyph\ observatory were defined in a two-step process. Firstly, the major science targets were identified, including Active Galactic Nuclei, isotropic background radiation, Gamma Ray Bursts (GRBs), sites of cosmic ray acceleration, neutron stars and black holes, dark matter. Secondly, an estimate of the performance necessary to reach the aforementioned science goals was drafted \citep{scireq-site}. NASA released an Announcement of Opportunity in August 1999, detailing the baseline scientific objectives and soliciting proposals for the development of a large-area \gammaRayHyph\ telescope \citep{ao-site} originally called Gamma-ray Large Area Space Telescope (GLAST).
The proposal describing the LAT as we know it was submitted in November 1999: a design based on ``(i) a precision tracker, based on proven Silicon-strip detector technology, (ii) a finely segmented Cesium Iodide calorimeter for energy measurement, and (iii) a segmented anticoincidence that covers the tracker'' \citep{resp-ao-site}.
Let us see in some detail how the scientific and operational constraints defined the main characteristics of the instrument.
\begin{itemize}
\item The large energy coverage, up to hundreds of GeV, sets a lower limit on the thickness, and in particular on the mass, of the calorimeter, which must contain a large part of the induced electromagnetic shower.
\item An anticoincidence shield is needed to veto the abundant charged particles found in space; it must be thick enough to ensure a high detection efficiency, yet transparent to \gammaRays.
\item A massive calorimeter near a veto scintillator requires a way to alleviate the self-veto, caused by the low energy photons produced in the calorimeter in high-energy events going through the anticoincidence. Segmentation addresses the issue, making it possible to correlate the shower development inside the instrument with the location of the energy depositions in the surrounding veto detector.
\item For a full-sky observatory a squat aspect ratio is preferred, since the geometrical shape defines the FOV. On the other hand, a time-of-flight detector between tracker and calorimeter is very useful for trigger and background rejection, but requires a tall instrument. The choice of a self-triggering tracker made it possible to do without one.
\item The size of the instrument was limited by the available launcher: for the LAT, the Delta-II ``Heavy'' launch vehicle constrained the lateral size to $\sim 2$~m and the mass to $\sim 3,000$~kg.
\item High efficiency and excellent spatial resolution in the tracker were considered key parameters. A silicon tracker providing the necessary conversion thickness on its own would have been too costly and would have required too many readout channels: conversion foils of high-Z material were interleaved instead, balancing the conversion efficiency against the loss of resolution due to multiple scattering in the foils.
\item The design downlink budget of $<300$~kbps limited the number of readout channels, the number of bits available per channel, and the average trigger rate, and mandated the presence of a powerful background filter on board.
\item The required mission time of 5 years (with a goal of 10 years) required no consumables, substantial radiation hardness, and a highly modular structure to limit the impact of any damage or failure.
\end{itemize}
The final design divided the instrument into 16 identical towers, each containing a Silicon tracker on top of a Cesium Iodide (CsI) calorimeter and including the necessary readout electronics. All towers, arranged in a $4\times 4$ array on the support structure, were enclosed in the segmented plastic anti-coincidence detector (ACD), and wrapped into a micrometeoroid shield and thermal blanket. An artist's cutout view is given in \figref{lat}, to be compared with the pictures in \figref{lat-real}.
The resulting design can be compared with AGILE, a lighter instrument with a similar purpose and design but with the additional inclusion of a hard X-ray detector. Launched in April 2007, with about 1/10th of the LAT mass, AGILE was optimized for operation in the energy ranges 20--60~keV and 400~keV--30~GeV \citep{agile,agile10y}.
The Gamma-ray Burst Monitor (GBM) was selected as a secondary instrument on \fermi\ \citep{gbm}: comprising 12 sodium iodide (NaI) low-energy detectors and 2 bismuth germanate (BGO) high-energy detectors, it covers the entire sky not occulted by the Earth, in the energy range from $\sim 8$~keV to $\sim 20$~MeV; see \figref{lat-real}, right. The \fermi -GBM complements the capabilities of the LAT: together the 2 instruments are sensitive across more than 7 decades of energy, enabling the joint analysis of spectra and time histories of transient events, including GRBs and the electromagnetic counterparts of gravitational wave events \citep{grb-cat}.
\fermi\ was launched on June 11, 2008, and after the initial commissioning, configuration and calibration phase, the LAT began nominal science operations on August 4, 2008.
\subsection{The tracker (TKR)}\label{sec:tkr}
The LAT TKR is the game changer with respect to its predecessors. SSDs replace wire chambers, each detector giving the transverse coordinate of the crossing point of an ionizing particle, with the readout pitch defining the spatial resolution. The energy needed to create an ionization carrier pair is significantly lower in solid-state detectors than in gas-based ones: $\sim 3$~eV for semiconductors versus $\sim 30$~eV for gas. As a consequence, more carriers are generated per ionizing particle, improving the signal-to-noise ratio. SSDs can be operated at a relatively low voltage ($\sim 100$~V) with a small dark current, limiting the power consumption. The inherent radiation hardness of SSDs ensures this figure will not grow too much over the duration of the mission, even taking into account the expected radiation damage in space. On the other hand, SSDs of very large size are impractical: the TKR planes need to be assembled out of several smaller detectors. However, too much granularity is detrimental, since the required support structures and readout electronics take up space at the expense of active area, so an optimal compromise is necessary. In HEP trackers, multiple scattering in the silicon volume can have a severe impact on the angular resolution of the detector, and double-sided detectors are often advantageous, enabling the readout of both transverse coordinates in a single detector and thus limiting the mass at the expense of additional complexity and cost. This is not the case for the LAT TKR, where a large mass is required and, in fact, the mass budget is dominated by the conversion foils, so single-sided silicon detectors are adequate.
The TKR is divided into a matrix of $4\times 4$ identical towers; see \figref{tracker}, left. The basic TKR element is the \emph{tray}; see \figref{tracker}, right. Individual SSDs are assembled longitudinally along the direction of the strips and micro-bonded in ladders of 4; then, 4 ladders are placed side by side to form a Silicon plane. Two SSD planes are located at the top and bottom of a tray, with the strips oriented in the same direction and the readout electronics placed on circuit boards (Multi-Chip Module, MCM) at 2 opposite edges. Tungsten (W) conversion foils are placed in between the active volumes, as close as possible to the bottom SSD plane. A TKR tower is assembled by stacking trays, each one rotated $90^\circ$ with respect to the previous one. This way the top plane of a tray and the bottom plane of the next one form an $x-y$ SSD layer, measuring both coordinates of a passing ionizing particle, and located immediately after the W conversion foil of the top tray. Stacking 19 trays creates 18 $x-y$ planes; the two SSD planes at the very top and bottom of a tower are not necessary and they are not included. Some parameters for the TKR are listed in \tabref{tkrsensor}. For more details on the design of the sensors see also \citet{tkr-ssd-design}.
As mentioned, most of the Coulomb scattering in the TKR occurs in the W foils, affecting the track development and reconstruction. To balance angular resolution against conversion efficiency, the TKR is divided into two sections. In the \emph{front} (or \emph{thin}) section, the 12 tracking layers are preceded by tungsten foils 0.1~mm thick (0.03 radiation lengths), giving relatively lower efficiency but better angular resolution. In the \emph{back} (or \emph{thick}) section, the first 4 tungsten foils are 6 times thicker, with opposite effects, and the 2 last layers have no conversion foils; we will discuss the reason for the missing last 2 foils when describing the trigger and data acquisition system. The performance of the front and back sections of the TKR differs significantly and will be addressed separately in the section about instrument performance.
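As a quick consistency check of the quoted foil thicknesses, the conversion-foil budget can be expressed in radiation lengths; the tungsten radiation length used below ($X_0 \simeq 3.5$~mm) is an assumed reference value, not a number taken from the text.

```python
# Back-of-the-envelope check (not from the source) of the tungsten foil
# thicknesses in radiation lengths, assuming X0(W) ~ 3.50 mm.
X0_W_MM = 3.50                    # tungsten radiation length, mm (assumed)

front_foil_mm = 0.1               # front-section foil thickness (from the text)
back_foil_mm = 6 * front_foil_mm  # back-section foils are 6 times thicker

front_rl = front_foil_mm / X0_W_MM  # ~0.03 r.l., as quoted in the text
back_rl = back_foil_mm / X0_W_MM    # ~0.17 r.l. per foil

# Total converter budget: 12 front foils + 4 back foils (last 2 layers have none)
total_rl = 12 * front_rl + 4 * back_rl
print(f"front: {front_rl:.3f} r.l., back: {back_rl:.3f} r.l., total: {total_rl:.2f} r.l.")
```

The result, roughly one radiation length of converter in total, illustrates why the foils, and not the silicon, dominate both the conversion probability and the scattering budget.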
\begin{table}[htb]
\centering
\begin{tabular}{lc}
\hline
SSD outer size & $8.95 \times 8.95$~\cmsq \\
Strip pitch & 228~\um \\
Floating strips & no \\
SSD thickness & 400~\um \\
Depletion voltage & $<120$~V \\
Leakage current & $\sim 1$~nA/\cmsq\ at 150~V \\
Breakdown voltage & $>175$~V \\
Fraction of bad channels & $\sim 0.01$\% \\
Number of SSDs tested & 12500 \\
Number of single strip tests & $\sim 30$M \\
Rejected SSDs & 0.6\% \\
\hline
\end{tabular}
\caption{Some parameters for the TKR, including figures from the assembly and test. Adapted from \citet{tkr-perf-2-years}.}
\label{tab:tkrsensor}
\end{table}
The choice of a readout pitch in SSDs is limited by manufacturing considerations and, in particular, by the development of the electric field in the detector. If needed, a fraction of the strips can be left floating, i.e. not connected to a readout, to properly bias the detector while limiting the number of channels. The readout pitch determines the hit spatial resolution, and therefore the angular resolution of the reconstructed tracks, in addition to the number of channels to be read. At lower energies the resulting angular resolution is dominated by multiple Coulomb scattering, leading to an expected $1/E$ energy dependence, while at high energies one reaches the intrinsic resolution given by the strip pitch and lever arm. The chosen value for the SSD strip pitch of 228~\um\ enables the LAT to reach the intrinsic limit of $0.1^\circ$ for normal incidence at a few tens of GeV, close to the design energy upper limit of 100~GeV. For reference, decreasing the pitch by 40\% would improve the angular resolution at 10~GeV by 12\%, with the drawback of 24\% more channels in the TKR and consequently an even tighter power budget per channel \citep{tkr-2003}. Due to this high spatial resolution, the internal alignment of the active elements and external alignment to the spacecraft are critical. We will mention the alignment of the TKR elements again when describing the calibration procedures.
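The two regimes described above can be illustrated by adding the scattering-dominated ($\propto 1/E$) and intrinsic terms in quadrature; the normalization of the scattering term below is an illustrative assumption, not the official LAT point-spread function.

```python
import math

# Illustrative two-term model (NOT the official LAT PSF) of the angular
# resolution: multiple-scattering-dominated (~1/E) at low energy, flattening
# to the intrinsic limit set by strip pitch and lever arm at high energy.
A_MS = 0.35      # deg*GeV, assumed normalization of the scattering term
INTRINSIC = 0.1  # deg, intrinsic limit quoted in the text

def angular_resolution_deg(energy_gev):
    """Quadrature sum of the multiple-scattering and intrinsic terms."""
    return math.hypot(A_MS / energy_gev, INTRINSIC)

for e in (0.1, 1.0, 10.0, 100.0):
    print(f"{e:6.1f} GeV -> {angular_resolution_deg(e):.3f} deg")
```

With these assumed numbers the resolution degrades as $1/E$ below a few GeV and saturates near $0.1^\circ$ at a few tens of GeV, reproducing the trend described in the text.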
The TKR readout electronics manage 885,000 channels, each with a cumulative strip length of $\sim 35$~cm and a strip capacitance of $\sim 40$~pF. Given the number of channels and the power budget reserved for the TKR, the readout must employ $<300$~\uW/chn. The charge collected by each SSD strip enters an amplifier-shaper-comparator chain in one of the 64-channel front-end ASICs. The amplified signal in a channel is discriminated by a single threshold and the pulse height is not measured; this limits the power consumption to $\sim 180$~\uW/chn \citep{tkr-perf-2-years}. Neighboring front ends are connected in a daisy chain to instrument an entire TKR plane; digital controller ASICs at both ends control the front-end electronics and interface with the tower electronics. A simple trigger primitive is built from the logical OR of all the comparator outputs in a TKR plane, going high when at least one strip is above threshold. The signal-to-noise ratio is excellent: a minimum ionizing particle (MIP) crossing the SSD vertically deposits $\sim 5$~fC; for the nominal threshold value of 1.4~fC the hit efficiency is $>99.5\%$, while the noise occupancy is only $\sim 5\cdot 10^{-7}$ \citep{tkr-ro}. This electronic noise is one order of magnitude lower than the strip occupancy in orbit, caused by pulse tails of off-time cosmic-ray tracks. The shaping time is set to 1.5~\us: speed is not an issue, and in any case the scintillating detectors (CAL, ACD) impose an event timescale of the order of \us. Even so, when a MIP crosses a TKR plane the output remains high for $\sim 10$~\us\ due to details in the implementation of the baseline-restoration circuit. Pile-up with the abundant background events can occur and is managed in the event analysis phase \citep{p7paper,pass8}.
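The $\sim 5$~fC MIP signal quoted above can be checked with a back-of-the-envelope calculation; the ionization density used below ($\sim 80$ electron-hole pairs per micron, a typical most-probable value for a MIP in silicon) is an assumption, not a number from the text.

```python
# Rough check of the ~5 fC MIP signal in a 400-um SSD, assuming ~80
# electron-hole pairs per micron (typical most-probable value in silicon).
E_CHARGE_FC = 1.602e-4   # electron charge in fC
PAIRS_PER_UM = 80        # assumed most-probable ionization density
THICKNESS_UM = 400       # SSD thickness quoted in the table

mip_charge_fc = PAIRS_PER_UM * THICKNESS_UM * E_CHARGE_FC
threshold_fc = 1.4       # nominal discriminator threshold
print(f"MIP signal ~ {mip_charge_fc:.1f} fC, "
      f"threshold/signal = {threshold_fc / mip_charge_fc:.2f}")
```

The nominal threshold sits at less than a third of the most-probable signal, consistent with the quoted $>99.5\%$ hit efficiency at negligible noise occupancy.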
At the low end of the energy range, a \gammaRayHyph\ event deposits a significant fraction of its energy in the TKR ($\sim 50\%$ at 100~MeV). In addition, knowledge of the energy deposition profile in the TKR is useful for the event analysis (e.g. for the background rejection). With no pulse-height capabilities, a simple estimate of the energy deposited in one SSD is obtained by measuring the time the channel remains above the discriminator threshold, the time resolution being defined by the internal 20~MHz clock. The $\sim 5$~fC deposited by a normal-incidence MIP correspond to a time-over-threshold (TOT) of $\sim 7.5$~\us; TOT is counted up to 50~\us\ (6 MIPs) to limit the readout delays, while linearity was shown to be good well beyond this limit \citep{tkr-design}.
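The TOT-based energy estimate can be sketched as follows; the linear TOT-to-charge relation is an assumption motivated by the good linearity reported in the text, and the function name is hypothetical.

```python
# Toy time-over-threshold (TOT) energy estimator, assuming TOT scales roughly
# linearly with deposited charge (the text reports good linearity well beyond
# the counting cap).
TICK_US = 1 / 20.0     # 20 MHz counting clock -> 0.05 us per tick
TOT_PER_MIP_US = 7.5   # TOT of a normal-incidence MIP (~5 fC), from the text
TOT_CAP_US = 50.0      # counter saturates here (~6 MIPs)

def tot_to_mips(ticks):
    """Convert a raw TOT tick count to an approximate signal in MIP units."""
    tot_us = min(ticks * TICK_US, TOT_CAP_US)
    return tot_us / TOT_PER_MIP_US

print(f"150 ticks -> {tot_to_mips(150):.1f} MIP")  # 7.5 us, i.e. one MIP
```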
The TKR front-end ASICs include a charge injection system, for calibration purposes: a capacitor is connected to each input and a step voltage, generated by a digital-to-analog converter, can be applied to selected channels \citep{tkr-ro}.
When a \gammaRay\ converts inside the TKR, it usually does so inside a tungsten foil: each $x-y$ SSD layer contributes only $\sim 0.01$~\rl. If the secondaries are detected in the very first silicon layers, the uncertainty on the conversion location caused by Coulomb scattering and lever arm is $\sim 10$~\um, below the intrinsic resolution of the SSDs; this degrades by more than an order of magnitude if the hits in the first layers are missed, leading to a loss in angular resolution of up to a factor of $\sim 2$ at 100~MeV. It is therefore critical to maintain a very high hit efficiency. In particular, \gammaRayHyph\ trigger and hit efficiencies would be severely affected if a sizable number of the channels in the TKR were not functional. In \figref{strips}, left, the MIP detection efficiency and the fraction of bad channels are shown for the 18 TKR tower modules that were produced. Module A was the first ever built, and the experience gained in the process is evident from the improvement in the following modules; despite being slightly worse than the rest, Module A is still within the requirements (MIP efficiency $>98\%$), so it qualified for inclusion in the LAT. Module 16 was built with non-flight components, and together with Module 8 was set aside as a spare and later included in a so-called ``calibration unit'', discussed later.
In \figref{strips}, right, the evolution in the number of bad channels is shown as a function of time elapsed during the mission. After 10 years of operation in orbit, the number of bad strips amounts to 4087 (0.46\%), a small increase with respect to the 3661 bad channels at launch. Bad channels are broken down into several categories: \emph{dead} channels appear to have a dead preamplifier, showing no signal and zero noise; \emph{disconnected} channels give no signal and very low noise, compatible with a floating input not connected to the SSD; \emph{partially disconnected} strips have intermediate noise levels, indicating a broken connection somewhere along the ladders; \emph{noisy} channels are due to some unspecified problem and have to be masked to prevent them from generating trigger requests and data hits. Remarkably, while the total number of bad channels increases slowly with time, the number of dead and disconnected channels has diminished slightly, indicating some unknown kind of reversible damage \citep{instr10y}.
\subsection{The calorimeter (CAL)}\label{sec:cal}
Inorganic scintillators, and Thallium-doped CsI in particular, are well suited for the construction of a large, segmented calorimeter \citep{knoll}. CsI(Tl) is very bright (54 photons/keV) but relatively slow (decay time of about 1~\us\ for \gammaRays), so it is well suited to applications where the particle rates are not too high. The maximum of the light emission occurs at around 550~nm, well matched to silicon photodiode readout. Being only mildly hygroscopic, it does not need to be sealed in passive materials of low EM stopping power. It is quite robust, with no cleavage planes, and reasonably radiation hard; it is therefore widely used for space applications.
The CAL, in order to be placed below the TKR, must have the same modularity (16 tower modules) and lateral dimensions, while the vertical dimension, or thickness, is defined by balancing three goals: a wide energy range (the thicker the better), a wide FOV (squat aspect ratio), and an acceptable total mass. With the goal of pushing the energy range above 100~GeV, the optimal thickness was set at 8.6~\rl\ of CsI, or 16~cm. Part of the EM shower will escape from the bottom, the sides, and along gaps in the modular structure, especially at high energies. To correct for this lost fraction, the shower development must be reconstructed. Several solutions to improve the imaging power were investigated (scintillating fibers, sampling calorimeter, pre-shower) and were discarded due to the small improvement in performance, often accompanied by a loss in energy resolution at the low end of the design energy range \citep{cal-letter}.
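The quoted depth and physical thickness are consistent, as a one-line check shows (using the CsI radiation length of 1.86~cm given later in the text).

```python
# Consistency check of the quoted CAL depth: 8.6 radiation lengths of CsI
# (X0 = 1.86 cm, from the text) should correspond to the stated 16 cm.
X0_CSI_CM = 1.86
depth_rl = 8.6
depth_cm = depth_rl * X0_CSI_CM
print(f"{depth_rl} r.l. of CsI = {depth_cm:.1f} cm")
```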
\begin{table}[htb]
\centering
\begin{tabular}{lc}
\hline
Total mass & 1376 kg \\
Scintillator material & CsI(Tl) \\
Crystal dimensions & $26.7$~mm$\times 20$~mm$\times 326$~mm \\
Crystal mechanical tolerance & 0.3~mm \\
Wrapping & aluminized Mylar \\
Number of crystals & 1536 \\
Electronics channels & 6144 \\
Readout dynamic range & $\sim 5\cdot10^5$ \\
Required longitudinal position & $<1.5$~cm \\
\ resolution (1$\sigma$) & \\
Required energy resolution & $<20\%$ ($<100$~MeV, CAL only) \\
\ (on axis, $1\sigma$) & $<10\%$ (100~MeV--10~GeV) \\
& $<20\%$ (10--300~GeV)\\
\hline
\end{tabular}
\caption{Some parameters for the CAL, including figures from the assembly and test. Compiled from \citet{latpaper} and \citet{cal-thesis}.}
\label{tab:calsensor}
\end{table}
The CAL is divided into 16 identical modules, one per tower; some parameters and requirements are listed in \tabref{calsensor}. In each module, there are 8 layers of 12 parallel crystals in hodoscopic arrangement, each layer rotated $90^\circ$ with respect to the previous one. Each crystal is individually wrapped in reflective foil, and two photodiodes are glued at each end: a large one (area 147~mm$^2$, energy range $2$~MeV--$1.6$~GeV) and a small one (area 25~mm$^2$, energy range $100$~MeV--$70$~GeV). See \figref{cal}, left, for an artist's rendition of the structure.
The lateral dimensions of the crystals make it possible to sample the shower development: lateral size and thickness are close to the Moli\`ere radius (3.53~cm) and the radiation length of CsI (1.86~cm). The position along the longitudinal direction is obtained from the asymmetry in light collection at the two ends, caused by attenuation inside the crystal. The asymmetry, and hence the spatial resolution, is optimized by controlling the surface treatment of the longitudinal surfaces: polishing improves the yield but impairs the spatial resolution, while roughening (e.g. lightly scratching with an abrasive material) causes the opposite effects. The longitudinal position resolution in the CAL depends on the deposited energy, ranging from a few mm at 1~MeV of deposited energy to less than 1~mm at $>10$~GeV \citep{cal-thesis}. In \figref{cal}, right, the logarithm of the asymmetry in light collection is plotted against longitudinal position for normally incident muons (deposited energy $\sim 11$~MeV).
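The logarithmic dependence of the light asymmetry on position can be illustrated with a toy model; exponential attenuation and the attenuation length below are assumptions for illustration, not the flight calibration.

```python
import math

# Toy model (assumed exponential light attenuation; NOT the flight
# calibration) of how the longitudinal position is recovered from the
# end-to-end light asymmetry in a CAL crystal.
L_CM = 32.6       # crystal length, from the table above
LAMBDA_CM = 30.0  # assumed effective attenuation length

def end_signals(x_cm, light_yield=1.0):
    """Light reaching the two ends for a deposit at position x (center at 0)."""
    s_plus = light_yield * math.exp(-(L_CM / 2 - x_cm) / LAMBDA_CM)
    s_minus = light_yield * math.exp(-(L_CM / 2 + x_cm) / LAMBDA_CM)
    return s_plus, s_minus

def position_from_asymmetry(s_plus, s_minus):
    """Invert log(S+/S-) = 2x/lambda: linear in x, as in the figure."""
    return 0.5 * LAMBDA_CM * math.log(s_plus / s_minus)

s1, s2 = end_signals(5.0)
print(f"reconstructed x = {position_from_asymmetry(s1, s2):.2f} cm")
```

In this model the log-asymmetry is exactly linear in position, which is why the measured relation in \figref{cal}, right, is plotted on a logarithmic scale.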
More than 2000 crystals were procured from the manufacturer with quality slowly improving as the manufacturing process was refined. Overall, $\sim 80\%$ passed all the mechanical and optical requirements without changes, and most of the rest were barely outside specifications and were adjusted with an additional surface treatment during the testing phase.
The dose the crystals were expected to receive was estimated before launch to be $\sim 40$~Gy per 10 years of mission. Radiation hardness was evaluated in the acceptance tests: crystal samples for each production batch were irradiated with $100$~Gy of $1$-MeV \gammaRays, leading to an average loss of light yield of $12\%$ (maximum loss $27\%$), well within the requirement of $<50\%$ \citep{cal-rad}. Accounting for the 2.5 safety factor, the average loss is in good agreement with the observed $\sim 6\%$ yield loss in 10 years, see \figref{calevol}.
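The agreement mentioned above follows from a simple linear rescaling of the acceptance-test result to the expected in-orbit dose; linearity of the damage with dose is an assumption here.

```python
# Rough linear scaling (an assumption; radiation damage need not be exactly
# linear in dose) of the acceptance-test result to the expected in-orbit dose.
test_dose_gy = 100.0       # irradiation dose in the acceptance tests
test_loss = 0.12           # average light-yield loss observed in the tests
expected_dose_gy = 40.0    # expected dose per 10 years of mission

predicted_loss = test_loss * expected_dose_gy / test_dose_gy
print(f"predicted 10-year light-yield loss ~ {predicted_loss:.1%}")  # vs ~6% observed
```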
The readout electronics, placed on the sides of the CAL modules, must fulfill the demanding task of operating across a wide dynamic range with low power consumption and minimal dead time. The overall scheme is similar to that of the TKR: ASIC readout electronics, a front-end analog chip with an amplifier-shaper-comparator chain (shaping time $\sim 3.5$~\us) for each crystal end, and separate digital readout controller ASICs. The output of each diode is split into two track-and-hold circuits with gains differing nominally by a factor of 8. This enables the coverage of the large dynamic range with 4 readout ranges chosen automatically, in the ratios 1:8:64:512. In the front-end an additional fast shaping amplifier ($\sim 0.5$~\us) is included for trigger discrimination. Two threshold discriminators at each crystal end, one per photodiode, generate two trigger requests indicating a low- or high-energy deposition; nominal settings are 100~MeV and 1~GeV, respectively. See \citet{latpaper,cal-design} for more details. To decrease the readout deadtime and the data volume, an ``accept'' threshold is set for each crystal end (nominally 2~MeV): signals below threshold are suppressed; dead time per event is less than $20$~\us.
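The automatic choice among the four gain ranges can be sketched as follows; the ADC full scale and the function name are hypothetical, only the 1:8:64:512 gain ratios come from the text.

```python
# Sketch (simplified; ADC full scale is a hypothetical value) of the
# automatic range selection over the four gain ranges in ratios 1:8:64:512.
GAINS = (512, 64, 8, 1)  # most sensitive range first
FULL_SCALE = 4095        # assumed ADC full scale

def select_range(signal):
    """Return (range index, ADC value) for the most sensitive unsaturated range."""
    for i, gain in enumerate(GAINS):
        adc = signal * gain
        if adc <= FULL_SCALE:
            return i, adc
    return len(GAINS) - 1, FULL_SCALE  # saturated even in the least sensitive range

print(select_range(2.0))    # small signal -> most sensitive range
print(select_range(500.0))  # large signal -> a less sensitive range
```

Adjacent ranges overlap substantially with these ratios, which is what makes the cross-calibration between ranges, mentioned below, possible.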
Similarly to the TKR front-end electronics, a charge injection system is implemented to calibrate the input channels individually. In addition, the significant overlap between the readout energy ranges makes it possible to cross-calibrate the channels, and simultaneous readout of the four ranges is available and used in calibration runs.
Twenty CAL modules were assembled, of which 16 were integrated into the LAT. Three additional modules are used in the so-called ``calibration unit'', while one engineering module, not completely identical to the others, was used in the beam-tests, which are discussed in the calibration section.
\subsection{The anti-coincidence detector (ACD)}\label{sec:acd}
The lack of a time-of-flight detector on board to tag unwanted upward-moving particles makes the performance of the anti-coincidence detector critical for the success of the instrument \citep{acd-design}. In the \fermi\ orbit, charged particles outnumber \gammaRays\ by five orders of magnitude. Under these conditions, the rate of trigger requests from the TKR alone averages to several kHz, and most of these trigger requests should be rejected in order to limit the dead time to a reasonable figure. Even so, the triggered events are mostly background that must be discarded on board to bring the bit rate within the available downlink bandwidth. As mentioned, the presence of the CAL complicates the matter: in high-energy showers, secondaries will escape and reach the lower parts of the ACD, potentially causing a veto (calorimeter backsplash). In fact, this caused a reduction of the high-energy efficiency for EGRET, by a factor $\sim 2$ at 10~GeV, with respect to the efficiency at 1~GeV \citep{egret-calib}.
Plastic scintillators are well suited for an anti-coincidence detector: they are sturdy, inexpensive, can be machined into complex shapes and easily cover a large surface, and can achieve a hit efficiency greater than 0.999 \citep{knoll}. Embedding wavelength-shifting (WLS) fibers in the material enables the light signal to be brought to the optical readout, which can be conveniently moved away from the FOV.
The ACD is a square hat covering the top and sides of the LAT, extending down to cover the entire TKR, with a total surface of $8.3$~m$^2$ \citep{acd-scint-wsf}. It is segmented into 89 tiles of various shapes and sizes, with areas ranging from $\sim 450$ to $\sim 2500$~\cmsq\ and a thickness of 1~cm (the five tiles in the top middle row are 1.2~cm thick, to compensate for the greater distance from the readout with a slightly larger signal); see \figref{acd}. Each tile is a polyvinyl toluene plastic scintillator with 64 grooves machined uniformly on the surface, where 1~mm diameter WLS fibers are embedded. The fibers from a tile are split into two symmetric bundles and run to 2 photomultiplier tubes (PMT) for readout. Light yield uniformity is typically $>95\%$ across most of the tile surface, except for the 2~cm-wide region around the borders, where it remains $>75\%$. Finally, each tile is individually wrapped with light-reflective foils and then a black light-tight envelope.
In addition to the plastic tiles, cable-like ribbons of scintillating fibers are used to improve the seal along directions where tile overlap was not possible; the ribbons themselves have detection efficiency $>90\%$.
The modular structure improves the robustness of the whole system: a puncture in the ACD would disable one tile only, with a limited impact on the overall performance. In addition, segmentation allows for a significant improvement over the monolithic predecessors: the reconstructed direction of the primary and the shower development can be correlated to the location of the hits in the ACD, and the simple veto condition ``hit in the ACD'' can be expanded to mitigate the calorimeter backsplash. A possible disadvantage is leakage through the borders of the tiles, where the hit efficiency is lower. This is alleviated in two ways: misalignment and overlap. The ACD is built with a $5\times 5$ structure, so its edges do not match the gaps between the towers, where the tracking uncertainty is highest, helping with the correlation of tracks in the TKR and hits in the ACD. In addition, the top tiles overlap along one direction, and bent tiles are used for the top edges. The remaining gaps are sealed from the inside with the ACD ribbons. Notably, the lowest tiles, closest to the CAL and outside the design FOV, are not segmented: there is no need to try to recover \gammaRayHyph\ events passing through these.
The front-end electronics are located on two opposite sides at the base of the LAT, divided into 12 circuit boards, each managing 18 channels. The electronics assemblies contain the redundant HV power supplies, the analog front end ASICs and the digital readout controllers. The ACD readout generates the fast trigger signals and a sample-and-hold signal for pulse-height analysis (PHA).
Setting the veto threshold defines the efficiency for charged-particle detection, with a requirement of $>0.9997$ for a MIP. In the ACD design the tile readout has two thresholds: on board, a threshold of $\sim 0.45$~MIP is used for the initial rejection of charged particles, while on ground an analysis threshold of $\sim 0.3$~MIP brings the detection efficiency to $\sim 0.9999$.
To appreciate the overall efficiency of the ACD, we can consider the background events remaining after the background rejection is performed on the ground. Detection efficiency is not perfectly uniform over the ACD, causing some regions with relatively lower efficiency to act as entry paths for background events. In \figref{acd-bkg}, the minimal distance of the intersection point of the extrapolated TKR track from the edge of the traversed ACD tile is shown for a sample of real and simulated events, after the level of background rejection recommended for the observation of point sources is applied. An excess can be seen at $\sim 40$~mm from the edge, corresponding to the location of many mounting holes; a significant number of the background particles remaining in the dataset pass through these. This region of phase space can be eliminated in the event analysis process at the cost of a small loss in effective area, and a tighter background rejection can thus be achieved.
Other background populations are visible in the Monte Carlo simulations but hard or impossible to address in reality, e.g. anything producing secondary \gammaRays\ outside the ACD. As an example, protons can undergo inelastic scattering in the passive materials surrounding the ACD, producing low-energy \gammaRays; a clear association of the events with the surroundings of the ACD is problematic, not least because of the relatively large uncertainties in the reconstructed direction at low energy.
\subsection{Data acquisition and event analysis}\label{sec:daq}
LAT data taking is organized in \emph{runs}, each usually spanning one orbit or the time between exiting and entering the South Atlantic Anomaly (SAA). The signals from the subsystems must be collected, processed and sent to the ground for further analysis. This process can be divided into two phases: a hardware phase, in which the channels are latched and read and the data are collected by the LAT Event Processing Unit (EPU), and a software phase, in which the data are processed by the EPU and stored in the Solid State Recorder (SSR), ready for downlink.
At the core of data collection is the \emph{event trigger}, which ultimately determines the dead time of the instrument. The LAT is, by design, a \gammaRayHyph\ detector, so the main role of the trigger is to fire on \mygamma -like events. On the other hand, other kinds of events are necessary in order to calibrate and monitor the subsystems (e.g. MIPs and heavy ions). The LAT operates with a flexible trigger system, where different \textit{trigger engines} run at the same time, based on several \textit{trigger primitives} (or \textit{trigger requests}). Most trigger primitives are generated by the subsystems (TKR, CAL, ACD) when a suitable energy deposition occurs in the active volumes of the detectors; a few are generated internally by the DAQ system. %
\begin{itemize}
\item \techname{TKR} is issued when three consecutive $x-y$ layers in the tracker have a signal above threshold (nominally $0.25$~MIP), indicating a possible particle track.
\item \techname{CAL\_LO} is issued when a calorimeter crystal has a signal above the low-energy threshold (nominally 100~MeV).
\item \techname{CAL\_HI} is issued when a calorimeter crystal has a signal above the high-energy threshold (nominally 1~GeV).
\item \techname{VETO} is issued when an anticoincidence tile has a signal above the low-energy threshold (nominally 0.3~MIP).
\item \techname{ROI} is issued when a \techname{TKR} primitive happens in coincidence with a \techname{VETO}: each tower has a list of associated anticoincidence tiles to check for coincidence.
\item \techname{CNO} is issued when an anticoincidence tile has a signal above the high-energy threshold (nominally 25~MIP).
\item \techname{PERIODIC} is the only special primitive affecting normal operation. It is issued at a constant frequency (nominally 2~Hz).
\end{itemize}
The ``three'' in the definition of the \techname{TKR} primitive is one of the few non-configurable numerical parameters in an extremely versatile system. We also note that \gammaRays\ converting just before either of the two last TKR layers would not cause a trigger request, which is the reason why there are no tungsten foils in the bottom two trays.
Trigger engines are built with the above primitives, with the relevant ones during normal operation in orbit listed in \tabref{triggers}. To limit bandwidth usage, some are \emph{prescaled} by a factor $n$, i.e. only 1 trigger request in $n$ is acknowledged. The basic \gammaRayHyph\ trigger is number 7, corresponding to a track candidate in the TKR, no veto from the ACD, no large energy deposit in the CAL, and no prescaling. The resulting cumulative trigger rate averages $\sim 1.5$~kHz, see \figref{rates}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{cccccccrr}
Engine & \techname{PERIODIC}& \techname{CAL\_HI}& \techname{CAL\_LO}& \techname{TKR}& \techname{ROI} & \techname{CNO} & Avg. rate [Hz] & \\
\hline
3 & \isreq & & & & & & 2 & \\
4 & \isexc & & \isreq & \isreq & \isreq & \isreq & 200 & \\
5 & \isexc & & & & & \isreq & 5 & $\dagger$ \\
6 & \isexc & \isreq & & & & \isexc & 100 & \\
7 & \isexc & \isexc & & \isreq & \isexc & \isexc & 1500 & \\
8 & \isexc & \isexc & \isreq & \isexc & \isexc & \isexc & 400 & $\star$\\
9 & \isexc & \isexc & \isreq & \isreq & \isreq & \isexc & 700 & \\
10 & \isexc & \isexc & \isexc & \isreq & \isreq & \isexc & 100 & $\ddagger$ \\
\end{tabular}
\caption{Definition of the standard trigger engines: primitives used
(\isreq: required, \isexc: excluded) and
average rates; adapted from \citet{p7paper}.
Trigger engines 0 to 2 are not relevant for science operation. $\dagger$: prescaled by 250. $\ddagger$: prescaled by 50. $\star$: currently disabled.}
\label{tab:triggers}
\end{center}
\end{table}
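The engine logic above can be sketched in software as a match against required and excluded primitives plus a prescale counter. This is a minimal illustration only; the class and attribute names are hypothetical, and the flight logic is implemented in trigger hardware, not Python.

```python
# Sketch of trigger-engine matching with prescaling (hypothetical data
# structures; the real logic lives in the LAT trigger electronics).

class TriggerEngine:
    def __init__(self, required, excluded, prescale=1):
        self.required = set(required)   # primitives that must be present
        self.excluded = set(excluded)   # primitives that must be absent
        self.prescale = prescale        # acknowledge 1 request in `prescale`
        self._count = 0

    def matches(self, primitives):
        p = set(primitives)
        return self.required <= p and not (self.excluded & p)

    def acknowledge(self, primitives):
        """Return True if this event's trigger request is accepted."""
        if not self.matches(primitives):
            return False
        self._count += 1
        return self._count % self.prescale == 0

# Engine 7 from the table: TKR track required; PERIODIC, CAL_HI, ROI, CNO
# excluded; CAL_LO is a "don't care"; no prescaling.
engine7 = TriggerEngine(required={"TKR"},
                        excluded={"PERIODIC", "CAL_HI", "ROI", "CNO"})
```

A prescaled engine such as number 5 would simply be constructed with `prescale=250`, so only every 250th matching request is acknowledged.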
The next step is the \emph{event filter}, fulfilling the role of reducing the event rate until the data stream is compatible with the available downlink bandwidth. This system is also configurable, allowing several independent filters to run at the same time; three filters are active during nominal operation.
\begin{itemize}
\item \techname{GAMMA} is designed to accept \gammaRayHyph\ candidates;
\item \techname{MIP} is designed to select heavy-ion candidates;
\item \techname{DIAGNOSTIC} accepts all events taken by the \techname{PERIODIC} trigger and an unbiased sample of all other trigger engines, prescaled by a set factor (nominally 250).
\end{itemize}
Most of the \gammaRay\ events in the science datasets passed the \techname{GAMMA} filter: a sequence of conditions is evaluated for each event, in order of increasing complexity, including a rudimentary track reconstruction in the later stages. After this filter, the event rate is reduced to an average of $\sim 400$~Hz, see \figref{rates}. This translates to an average of 1.5~Mbps sent to the SSR.
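The filter's ordered, early-rejection evaluation can be sketched as a short-circuiting pipeline in which cheap tests run first and the expensive track reconstruction runs last. All stage names and event fields below are illustrative, not the actual \techname{GAMMA} filter conditions.

```python
# Sketch of an early-rejection filter pipeline: conditions are ordered
# from cheapest to most expensive (the last stage standing in for the
# rudimentary onboard track reconstruction). Names are illustrative.

def gamma_filter(event, stages):
    """Run the stages in order; reject on the first failing test."""
    for name, test in stages:
        if not test(event):
            return False, name   # rejected at this stage
    return True, None            # accepted as a gamma-ray candidate

stages = [
    ("veto-check",  lambda e: not e.get("acd_veto", False)),
    ("energy-cut",  lambda e: e.get("cal_energy", 0.0) > 0.0),
    ("track-recon", lambda e: e.get("has_track_candidate", False)),
]

accepted, rejected_at = gamma_filter({"cal_energy": 250.0,
                                      "has_track_candidate": True}, stages)
```

Ordering by cost matters onboard: most background events are rejected by the cheap early stages, so the expensive reconstruction runs on only a small fraction of the trigger stream.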
Each event is timestamped and the detector dead time is recorded, with the intrinsic time resolution of 50~ns set by the LAT internal clock, operating at 20~MHz. Timestamping relies on the absolute time provided by the internal GPS receiver, plus a precision 20-MHz scaler synchronized to the GPS Pulse-Per-Second signal. Tests before launch indicated that LAT GPS times are maintained within 20~ns of UTC time, and event timestamps are accurate within 1~\us\ \citep{onorbitcalib}. Timing is monitored during the mission with an accuracy of a few $\mu$s by measuring the period of millisecond pulsars \citep{instr10y}. Instrumental dead time is dominated by the time required to latch the front ends and read out the event, with a minimum of $26.5$~\us\ per event. Dead time is, on average, $\sim 8$\% outside the SAA, with small variations correlated to the trigger rate \citep{onorbitcalib}. The value is downlinked with the event data, since it must be accounted for in order to calculate the fluxes of \gammaRayHyph\ sources. Additional dead time due to data loss (on board or on ground) is well below $1$\%.
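Since fluxes must be computed from the livetime rather than the elapsed time, the dead-time bookkeeping amounts to subtracting the accumulated per-event dead time before dividing. A minimal sketch, using the quoted minimum of $26.5$~\us\ per event (the numbers in the example are illustrative):

```python
# Dead-time bookkeeping sketch. Each accepted event costs at least
# 26.5 us of dead time; the source rate is recovered by dividing
# counts by the livetime rather than the elapsed time.

DEADTIME_PER_EVENT_S = 26.5e-6

def corrected_rate(n_events, elapsed_s, extra_deadtime_s=0.0):
    dead = n_events * DEADTIME_PER_EVENT_S + extra_deadtime_s
    live = elapsed_s - dead
    if live <= 0:
        raise ValueError("dead time exceeds elapsed time")
    return n_events / live

# ~1.5 kHz trigger rate over 100 s: readout alone gives ~4% dead time,
# consistent with readout dominating the quoted ~8% average.
rate = corrected_rate(150_000, 100.0)
```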
In addition to the aforementioned nominal science operation mode, the LAT can be operated in \emph{dedicated mode}: in this mode the detector electronics and the trigger system are configured to acquire data for calibration and synchronization purposes. While in dedicated mode the \techname{PERIODIC} trigger can be put in charge-injection mode, instructing the front end electronics to feed charge pulses directly to the preamplifiers, making it possible to calibrate all readout channels individually. Since this mode is incompatible with science data taking, dedicated mode must be periodically enabled during \emph{calibration runs}.
The LAT transmits to the ground an average of 16~GB of compressed data every day. Data are processed at the computer farm at the SLAC National Accelerator Laboratory (SLAC): over 3000 CPU cores are available to promptly reconstruct and analyze the event data and publicly release them as soon as possible. Another 1500 CPU cores are available at the IN2P3/CNRS facilities in Lyon, France, and are commonly used for Monte Carlo simulations.
Calibration datasets undergo a separate, dedicated analysis. In addition, the abundant background events are the target of dedicated analyses to evaluate the fluxes of cosmic-ray electrons, positrons, and protons \citep{electrons,elepos,eleanyso,protons}. From here onward, we focus on the \gammaRays, examining in some detail the process on the ground that leads to the creation of the \gammaRayHyph\ dataset.
\emph{Reconstruction} is the process of translating calibrated data from the detectors (deposited energy, hit locations, \dots) into a description of the physics behind them (tracks, energy of particles, volumes crossed, \dots). \emph{Event analysis} is the assembly of such information into a \gammaRayHyph\ event, with an incoming direction, energy, time, and ancillary quantities (quality of the energy and direction estimates, probability of being a background event, \dots). Finally, a set of \emph{cuts} on the available variables defines an \emph{event class} and, in practice, an event set.
Releasing an improved event processing procedure is a major task: in addition to the effort in designing and validating the new algorithm, after deploying the new version to the real-time data analysis pipeline, all the past data in the archives must be reprocessed. From the beginning of the nominal operations in August 2008 to November 2013, the \Psix\ reconstruction and event analysis scheme was in place, developed prior to launch. From November 2013 to June 2015 \Pseven\ was employed, featuring the same reconstruction as the predecessor but a significantly improved event analysis scheme. The reconstruction and analysis procedure, and the several validation procedures that are performed on LAT data are described in great detail in \citet{p7paper,latpaper}. Since June 2015, \Peight\ is operational, featuring novel reconstruction algorithms and a new event analysis \citep{pass8}; the version currently in use is the third release, featuring a slightly improved background rejection \citep{p8v3}. Thus, in order to identify an event set, one must name the reconstruction and analysis procedure (e.g. \techname{P8R3} for \Peight\ release \techname{3}) and the selection cuts, usually described with a byname suggestive of the intended use or strictness of the background rejection (e.g. \techname{SOURCE} for the analysis of point sources). %
The instrument performance differs for each event class, with more stringent cuts decreasing efficiency and generally improving the quality, e.g. in terms of resolution, residual background level, etc; see the description of the LAT performance in the dedicated section. In all data releases, photon events have been partitioned into two conversion types (\emph{front} and \emph{back}), given the significant difference in performance, see \figref{perfplots}. Since \Peight, this has been expanded and generalized into the concept of event types, event subsets for which performance is evaluated and provided, and which can be included or excluded from the scientific analysis. On top of the conversion type partition, two new event type partitions are available: \emph{psf} event types, indicating increasing quality of the reconstructed direction, and \emph{edisp} event types, indicating increasing quality of the energy reconstruction \citep{p8data-site}. If the angular and/or energy resolution are critical, one can select only the better event types for the analysis, at the price of a smaller effective area.
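Event-type selection is naturally expressed as bit flags that can be combined and tested with bitwise operations. The bit values below are purely illustrative placeholders, not the actual Fermi Science Tools `evtype` assignments.

```python
# Schematic event-type selection via bit flags (bit values illustrative;
# the real Pass 8 event-type encoding may differ).
FRONT, BACK = 1, 2
PSF0, PSF1, PSF2, PSF3 = 4, 8, 16, 32   # increasing direction quality

def select(events, type_mask):
    """Keep events whose event-type bits overlap the requested mask."""
    return [e for e in events if e["evtype"] & type_mask]

events = [{"evtype": FRONT | PSF3}, {"evtype": BACK | PSF0}]
# Keep only the best PSF quartiles, trading effective area for resolution.
best_direction = select(events, PSF2 | PSF3)
```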
\subsection{Operation}\label{sec:operation}
The latitude of the launch
site, at Cape Canaveral Air Force Station Space Launch Complex 17-B, set the initial orbit inclination at $28.5^\circ$. Once a circular orbit at $550$~km above sea level was reached, the remaining fuel was used to reduce the orbit inclination to $25.6^\circ$, thus reducing the time spent in the SAA. In this region, the HV power supplies in the ACD are turned off to protect the PMTs, so regular data taking is disabled. For the operation of the LAT, the SAA is defined by a 12-vertex polygon; stop and start commands are issued 30 seconds before entry and after exit. Notably, the SAA polygons for the LAT and GBM differ, see e.g. the case of GRB170817A \citep{grb17a-gbm,grb17a-lat}: at the time of the GRB the GBM was outside its SAA polygon and taking data, but the LAT was already inoperative.
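Deciding whether the spacecraft is inside the SAA polygon is a standard point-in-polygon test on the (longitude, latitude) ground track. A minimal ray-casting sketch follows; the polygon vertices are hypothetical placeholders, not the real LAT SAA boundary.

```python
# Ray-casting point-in-polygon test, as could be used to decide whether
# the spacecraft's (longitude, latitude) falls inside the 12-vertex SAA
# polygon. The vertices below are hypothetical, not the LAT boundary.

def in_polygon(lon, lat, poly):
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):          # edge crosses this latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside           # toggle on each crossing
    return inside

saa_poly = [(-85, -30), (-85, 5), (-60, 5), (-30, 0),
            (-10, -10), (10, -20), (20, -30), (10, -40),
            (-20, -45), (-50, -45), (-75, -40), (-85, -35)]
```

In an operational setting one would evaluate this along the predicted ground track and issue the stop command 30 seconds before the first crossing into the polygon.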
The orbit of \fermi\ is slowly decaying, with an altitude loss more pronounced during the solar maximum: in 10 years, the altitude decreased by about 20~km, with a corresponding change in orbital period from 95.7 to 95.3~minutes.
The attitude profile is determined in order to maximize the uniformity of exposure across the sky, leveraging the wide FOV of 2.4~sr. The Earth is a very bright source of \gammaRays\ \citep{limb09} and can cause excessive dead time saturating the trigger system, so it is best kept outside the FOV. In the nominal observation profile, in survey mode (``scanning''), the spacecraft rocks north and south of the orbital plane on alternate orbits. As a consequence, the LAT boresight is offset from the zenith
toward either the north or south orbital poles by a characteristic
rocking angle. Initially, the rocking angle was set at $35^\circ$; this value was later changed a few times to optimize operation and performance, before being finally set at $50^\circ$. After the anomaly of March 2018, discussed later, the pointing strategy has been more complex due to operational constraints. In addition to the uniformity of exposure across the sky, the rocking angle affects the average temperature of the spacecraft batteries: the larger the rocking angle, the more time is spent tilted away from the relatively warmer Earth, improving the cooling and increasing the mission lifetime. Within a given orbit, \fermi\ also executes a slow roll about the boresight to maintain an optimal orientation of the solar panels with respect to the Sun.
\fermi\ can be set in pointed observation mode, so that the LAT points in the direction of a target. This can be requested from ground (e.g. for Target Of Opportunity observations, transients, etc.) or initiated autonomously (e.g. when the onboard analysis detects a GRB candidate satisfying given requirements). In pointing mode, the target is kept close to the boresight, but when the Earth limb approaches the FOV, \fermi\ rotates to maintain a fixed Earth avoidance angle (nominally set to $20^\circ$) between LAT axis and Earth limb. In case an occultation occurs, the telescope switches to a roll along the Earth limb at a set angle (nominally $50^\circ$) with the appropriate speed to catch the target as it rises from occultation.
Calibration runs are performed routinely to evaluate and monitor the instrument performance. These require running the instrument with a special configuration and/or with a specific attitude profile.
Maintaining an optimal temperature is critical for the survival of the instrument and for ensuring the quality of data: thermal radiators and active heater elements operate to keep all subsystems within the allowed temperature range. %
If a major problem arises, the LAT is automatically powered off by the \fermi\ spacecraft. The temperature is maintained in a survival temperature range by the survival heaters on board, controlled by the spacecraft: the LAT can survive for an indefinite period of time in this state, while the Fermi Mission Operations Center at NASA's Goddard Space Flight Center and the LAT experts within the LAT Collaboration act to solve the problem.
On \mydate{2008}{July}{31}, an intermittent short in the wiring caused several temperature readouts to falsely appear too cold, outside the safe range, causing a power-off. The affected alarms have been disabled; in any case, the large thermal mass of the LAT allows ground operators to see any real temperature changes and react before the temperatures change too much.
On \mydate{2009}{March}{11}, a software error occurred in the LAT computer, causing a chain of other errors that ultimately caused the spacecraft to go into its safe-mode and powering off the LAT. The LAT was restarted by ground commanding and the errors in the computers were identified in the diagnostic data and fixed in a subsequent flight software update.
On \mydate{2018}{March}{16}, \fermi\ went into safe-mode and powered off the LAT, because the -Y solar panel stopped moving. The LAT remained powered off for
over 17 days, as the solar panel problem was investigated.
Power-up proceeded without problems, and science data taking resumed on April 2. Due to the large thermal inertia of the CAL and its temperature-dependent performance, the data were flagged as unfit for science analysis for a few days, until nominal operation resumed 23 days after the shutdown, the longest interruption since launch.
The affected solar panel has remained stuck since March 2018, and the rocking profile for all-sky survey was replaced with periods of various alternating rocking angles, keeping the \fermi\ power system operating nominally with minimal changes to the LAT sky exposure. Autonomous repointing has also been disabled.
Other rare issues require direct intervention by the Fermi Mission Operations Center. On \mydate{2012}{April}{03} the \fermi\ thrusters were fired briefly to avoid a predicted close approach with another satellite, with minimal impact on the orbital parameters. A similar maneuver was considered for a few days in April 2010 for another close approach and was canceled once the probability of a collision dropped to acceptable levels (orbit predictions become more accurate the closer in time they are to the conjunction).
The performance of the LAT subsystems and of the ground infrastructure are constantly monitored through the LAT Data Quality Monitor (DQM) system \citep{instr10y}. During the data processing on the ground, histograms are generated and made available to the scientists on duty, while automated alarms are issued if a quantity deviates from the allowed range. Monitored quantities include temperatures, pedestals and gain of the detectors, channel rates, and event rates. About 12,000 parameters are monitored and the DQM system makes about 4100 checks on parameter ranges. Quantities that vary significantly during an orbit due to dependence on the attitude or to the geomagnetic coordinates are parameterized as a function of the relevant variables, leading to normalized quantities that are easier to monitor. If the DQM system identifies a problem that can potentially influence the data quality, a bad time interval (BTI) is marked in the data file. As of the end of 2021 the cumulative bad time amounted to less than 10 days, in large part due to the aforementioned shutdowns and the following temperature stabilizing periods after restart ($\sim 70\%$); almost all the remaining BTIs are caused by solar flare activity. During solar flares, X rays hitting the ACD can cause excessive veto signals, reducing the sensitivity to \gammaRays. Under these conditions the LAT performance could deviate significantly from the parameterization provided for scientific analysis, so the affected time intervals are flagged as bad.
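The DQM strategy described above, normalizing a quantity by its parameterized expectation and alarming on large deviations, can be sketched as follows. The tolerance and all sample values are illustrative assumptions, not actual DQM thresholds.

```python
# DQM-style range check sketch (thresholds illustrative). A quantity
# with a known orbital dependence is normalized by its parameterized
# expectation; deviations beyond the allowed band raise an alarm.

def check_quantity(measured, expected, tolerance=0.10):
    """Return the normalized deviation and whether it triggers an alarm."""
    deviation = (measured - expected) / expected
    return deviation, abs(deviation) > tolerance

def flag_bad_time_intervals(samples, tolerance=0.10):
    """samples: list of (t_start, t_stop, measured, expected) tuples.
    Return the intervals whose deviation exceeds the tolerance (BTIs)."""
    btis = []
    for t0, t1, meas, exp in samples:
        _, alarm = check_quantity(meas, exp, tolerance)
        if alarm:
            btis.append((t0, t1))
    return btis
```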
\subsection{Calibration}\label{sec:calib}
Fully calibrating an instrument the size of the LAT with a beam test is generally not feasible: the full detector is too big for the beam facilities, and transport and irradiation are, in any case, too risky. In the case of the LAT, structures and modules were tested with radiation sources at each stage of production and assembly \citep{tkr-ssd-design,tkr-ssd-rad,cal-rad}, including one full CAL module (known as the ``Engineering Model'', slightly different from the flight models) \citep{cal-ions}. In this section, we focus only on the larger assemblies, containing sensors for all three LAT subsystems.
The basic concept of the LAT was validated early in 1997, using a test structure built with simple versions of the planned flight sensors (a few SSDs, a hodoscopic arrangement of CsI crystals, plastic scintillator tiles for anti-coincidence) \citep{beamtest-slac}. The outcome of the campaign was the validation of the design choices, verifying the expected performance. Most importantly, comparison of the results with the Monte Carlo simulations confirmed that the software tools accurately reproduced the instrument performance. A follow-up beam test of a structure resembling one LAT tower module (the Beam Test Engineering Model, BTEM) was performed in 1999/2000 with similar results \citep{beamtest-slac2}. A balloon flight was performed in 2001 using a detector similar to the BTEM; it confirmed that the LAT design could operate in a space-like background environment \citep{balloon}.
The ``Calibration Unit'' is a scaled-down instrument, assembled using the two spare towers (including TKR and CAL), 1 additional CAL module, and 5 ACD tiles. Its main purpose is to provide a ground-based platform to perform tests and replicate issues that may arise in space. A final on-ground calibration campaign was performed \citep{beamtest-cern} to verify the calibration procedures and to tune the parameters of the Monte Carlo simulation: backsplash from the calorimeter, energy leakage corrections, background rejection techniques, etc.
The calibration procedures, validated via the beam tests on-ground, are routinely carried out in space \citep{onorbitcalib}. Many parameters must be evaluated and finely tuned: gains, thresholds, alignment, time delays, live time, etc.
Data needed for calibration are constantly collected during normal science operation, in particular thanks to the dedicated trigger engines. \tabref{triggers}\ shows one such example, engine 4, requiring a high energy deposition in the ACD, a track in an associated TKR tower, and some energy deposition in the CAL. This engine is very effective at collecting heavy ions crossing both TKR and CAL.
In addition, calibration runs in \emph{dedicated mode} are scheduled periodically to collect data that cannot be obtained during nominal operation, as discussed in the section about data acquisition. The LAT spends approximately 2.5~h every 3~months in dedicated mode.
Monitoring the calibration parameters as a function of time is also an effective way of monitoring the health and stability of the instrument. After more than 12 years of operation the instrument remains in excellent condition. We mention only a few examples here; for a detailed discussion see \citet{onorbitcalib,instr10y}.
The most evident aging effect is the increase in power consumption in the TKR. In \figref{tkrevol}, right, the current drawn by the TKR is plotted as a function of time; the same plot shows the current for all 16 tower modules, multiplied by 10 to fit on the same scale. The slow increase is attributed to the expected radiation damage in the SSDs. In \figref{tkrevol}, left, the corresponding increase in the noise level of the TKR readout is shown. While non-negligible, the increase has no practical consequence: the noise has reached $\sim 1325$ equivalent electrons, or $0.21$~fC, to be compared with the average signal for a MIP at $\sim 5$~fC. In particular, there has been no need to raise the TKR noise discriminator threshold from its initial value of $1/4$ of a MIP to $1/3$.
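The quoted figures are easy to cross-check: converting 1325 equivalent electrons to charge via the elementary charge reproduces $0.21$~fC, about 4\% of the $\sim 5$~fC MIP signal.

```python
# Cross-check of the quoted numbers: 1325 equivalent electrons expressed
# in femtocoulombs, and the noise as a fraction of the ~5 fC MIP signal.
E_CHARGE_C = 1.602176634e-19          # elementary charge [C]

noise_fc = 1325 * E_CHARGE_C * 1e15   # charge in femtocoulombs (~0.212 fC)
noise_over_mip = noise_fc / 5.0       # ~4% of the average MIP signal
```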
The second most evident effect is the loss of light yield in the CAL due to radiation exposure of the CsI crystals: a degradation of $\lesssim 1\%$ per year is observed, see \figref{calevol}, easily managed with a corresponding change in the CAL energy calibration. We mentioned that light absorption in the crystals causes yield loss but improves position resolution. The same applies to radiation damage: the light asymmetry along the crystals increases very slightly with time, leading to a small improvement in position resolution.
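A time-dependent calibration correction for such a slow, steady yield loss can be sketched as a simple exponential-like rescaling. The assumed constant 1\%-per-year rate is the quoted upper bound, used here only for illustration; the actual calibration is derived from measured per-crystal gains.

```python
# Sketch of a time-dependent CAL light-yield correction, assuming a
# steady degradation of ~1% per year (an illustrative assumption).

LOSS_PER_YEAR = 0.01

def relative_light_yield(years_on_orbit, loss_per_year=LOSS_PER_YEAR):
    """Fraction of the at-launch light yield remaining after some years."""
    return (1.0 - loss_per_year) ** years_on_orbit

def corrected_energy(raw_energy, years_on_orbit):
    """Scale a raw energy estimate up to compensate for yield loss."""
    return raw_energy / relative_light_yield(years_on_orbit)
```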
The relative mechanical alignment of the detectors within the LAT directly affects the track reconstruction and the angular resolution, while the alignment between the LAT and the spacecraft affects the conversion between internal LAT coordinates and sky locations.
Intra-tower and inter-tower TKR alignments use reconstructed events recorded during nominal science operations with no special selection,
minimizing the possibility of selection bias. Residuals of the track-fitting procedure are converted into geometrical displacements of the SSD planes (three translations and three rotations) and included in the calibration. The sensitivity of the measurement is better than 2~\um\ for translations and better than 0.02~mrad for rotations around the coordinate axes, roughly an order of magnitude below what would affect the angular resolution at high energies. No significant time evolution is observed.
The \fermi\ reference system is determined by the optical star tracker system in the spacecraft. The relative alignment of the LAT to the spacecraft is determined by
minimizing the residuals of the measured locations of known
\gammaRayHyph\ sources in the sky: an accuracy better than $5^{\prime\prime}$ on the three rotation angles is achieved, again with no significant time evolution.
\subsection{Performance}\label{sec:perf}
An accurate description of the instrument performance is necessary to reconstruct astrophysical quantities (e.g. \gammaRayHyph\ flux, source extension, \dots) from the observed event counts. The instrument design, reconstruction, and event analysis determine the overall performance. In particular, candidate \gammaRays\ may be assigned to several different event classes, and a class selection defines an event set and a corresponding performance.
The \emph{Instrument Response Function} (IRF) is the map converting the incoming photon flux into detected events; the IRF is canonically factorized into \emph{effective area} (i.e. geometric area times efficiency), \emph{point spread function} (angular resolution) and \emph{energy dispersion} (energy resolution). For each event class, the corresponding quantities, as a function of the \emph{true} energy and direction in the instrument frame of reference, are evaluated with Monte Carlo simulations. Since launch, such simulations have been tuned to best replicate the behavior and quirks observed in real data \citep{p7paper}. %
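Schematically (neglecting livetime and attitude dependence), the expected distribution of detected events follows from convolving the incident flux with the three IRF factors:
\[
\frac{dN}{dE'\,d\hat{p}'} = \int dE\, d\hat{p}\; A_{\rm eff}(E,\hat{p})\; P(\hat{p}'\,|\,E,\hat{p})\; D(E'\,|\,E,\hat{p})\; \frac{dF}{dE\,d\hat{p}},
\]
where unprimed quantities denote the true energy and direction, primed quantities the measured ones, $A_{\rm eff}$ is the effective area, $P$ the point spread function, $D$ the energy dispersion, and $F$ the incident photon flux. This schematic form is given for orientation only; the precise definitions are those of the LAT analysis framework.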
An example of LAT performance plots is shown in \figref{perfplots}. The name of the event class (``\techname{P8R3\_SOURCE\_V2}'') indicates that this refers to the current \Peight\ event analysis, release 3. The ``\techname{SOURCE}'' event class is tuned for the analysis of non-transient point sources; other event classes are available, optimized for the study of transients, diffuse components, etc. The final number refers to the version of the IRF parametrization; the progressive version numbering includes test versions and release candidates, so not all versions are released publicly.
The top left, top right, and bottom left plots in \figref{perfplots} show example values of effective area, point spread function, and energy resolution, respectively. The events converting in the \emph{front} and \emph{back} part of the TKR are shown separately, in addition to the total, or average, value. All figures of merit are described in terms of \emph{true} (i.e. Monte Carlo) energy and inclination bins. The first plot shows the effective area as a function of energy for on-axis incident \gammaRays; an additional (few \%) dependence on the azimuthal angle around the LAT axis, due to the square shape and the alignment of the gaps parallel to the sides of the instrument, is averaged out to produce the plot. The remaining plots account for the dependence on the off-axis angle, by integrating the effective area over the solid angle.
The details of the IRF implementation and the procedure to generate the plots above are described in great detail in \citet{p7paper}.
The bottom right plot in \figref{perfplots} shows the sensitivity for point source detection at different locations in the sky, derived semi-analytically from the instrument performance and from a background model (10 years of observation, $5\sigma$ sensitivity) \citep{p7paper,perf-site}.
\subsection{Conclusion}\label{sec:end}
With no consumables limiting its lifetime, after more than 13 years of operation, the LAT remains in excellent operating condition.
Considering the continuing good performance, the LAT can play a major role in the new era of multi-messenger astrophysics alongside the existing and future instruments \citep{neutrinos,gravitational,cta,rubin}.
Since the time of the LAT design phase, no game-changing new technology in the field has appeared, at least nothing comparable to the shift from gas spark chambers to Silicon microstrip trackers. An important, but not revolutionary, improvement in detector technology is the advent of Silicon photomultipliers (SiPMs) as a replacement for photomultiplier tubes \citep{sipm}, and the good performance of scintillating fiber trackers is also worth mentioning \citep{fibers}. All things considered, the design of the LAT remains very close to the state of the art for imaging \gammaRayHyph\ observatories. A different optimization can be sought, e.g. a thicker calorimeter for better performance at high energy at the cost of a smaller FOV; see e.g. the \gammaRayHyph\ performance of CALET \citep{calet}. On the other hand, the heritage of the LAT is evident in the design of the proposed future space observatories in the MeV regime \citep{amego,astrogam}.
Until a new breakthrough in HEP detectors occurs, the LAT will remain the best all-purpose, wide FOV \gammaRayHyph\ instrument covering the energy range from the onset of pair production at a few tens of MeV to a few hundred GeV energies with excellent performance, delivering invaluable scientific data.
\bibliographystyle{spbasic}
\section{Cross-references}\label{sec:crossref}
\begin{enumerate}
\item[] Thompson, D and Wilson-Hodge, CA (2021) Fermi Gamma-ray Space Telescope, in this volume.
\end{enumerate}
\bibliography{fermilat}
|
Title:
Radio observations of the tidal disruption event AT2020opy: a luminous non-relativistic outflow encountering a dense circumnuclear medium |
Abstract: Tidal disruption events (TDEs) occur when a star passes too close to a
supermassive black hole and is destroyed by tidal gravitational forces. Radio
observations of TDEs trace synchrotron emission from outflowing material that
may be ejected from the inner regions of the accretion flow around the SMBH or
by the tidal debris stream. Radio detections of tidal disruption events are
rare, but provide crucial information about the launching of jets and outflows
from supermassive black holes and the circumnuclear environment in galaxies.
Here we present the radio detection of the TDE AT2020opy, including three
epochs of radio observations taken with the Karl G. Jansky Very Large Array
(VLA), MeerKAT, and the upgraded Giant Metrewave Radio Telescope. AT2020opy is the
most distant thermal TDE with radio emission reported to date, and from
modelling the evolving synchrotron spectra we deduce that the host galaxy has a
more dense circumnuclear medium than other thermal TDEs detected in the radio
band. Based on an equipartition analysis of the synchrotron spectral properties
of the event, we conclude that the radio-emitting outflow was likely launched
approximately at the time of, or just after, the initial optical flare. We find
no evidence for relativistic motion of the outflow. The high luminosity of this
event supports that a dense circumnuclear medium of the host galaxy produces
brighter radio emission that rises to a peak more quickly than in galaxies with
lower central densities.
| https://export.arxiv.org/pdf/2208.13967 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
transients: tidal disruption events -- radio continuum: transients
\end{keywords}
\section{Introduction}
When a star passes within the tidal radius of a black hole, the star can be destroyed, producing a bright flare of electromagnetic radiation visible from radio to X-ray wavelengths \citep[e.g.][]{Rees1988}.
The afterglow emission from such tidal disruption events (TDEs) at different wavelengths gives insight into the process by which the star was destroyed, the formation of accretion disks around black holes, the magnetic and gravitational fields of the central black hole, and the nuclear environment of the host galaxy \citep[e.g.][]{Lodato2011}.
TDEs show diverse optical, X-ray, and radio properties, thought to be explained by the circumstances of the stellar disruption and the subsequent behaviour of the debris, such as the supermassive black hole (SMBH) mass, viewing angle, and impact parameter, as well as the circumnuclear environment of the host galaxy. Simulations have shown that the bound stellar debris may circularise to form an accretion disk \citep[e.g.][]{Bonnerot2016,Hayasaki2016,Liptai2019,Bonnerot2020,Mummery2020}, emitting X-ray radiation from accretion onto the SMBH, and optical radiation, from either re-processing of the X-rays in the accretion disk or stream-stream collisions of the tidal debris %
\citep[e.g. see][for a review]{Roth2020}.
The time taken for the stellar debris to circularise and accretion to begin onto the SMBH is a matter of debate, with physical system properties such as the SMBH mass, stellar orbit, and stellar properties thought to affect the organisation of the debris \citep{Hayasaki2016,Liptai2019,Lu2019}. Observationally, X-ray properties of TDEs are extremely diverse \citep{Auchettl2017}, with some events never detected in X-rays, others detected immediately \citep[e.g.][]{Miller2015}, and others showing delayed onset of bright X-ray emission \citep[e.g.][]{Hinkle2021}. The intersection of debris streams and the circularisation process could be an important factor driving the diversity in the observational properties of many TDEs \citep{Lu2019}.
Radio emission from TDEs is rare; only $\sim10\%$ of TDEs discovered have reported radio detections. Radio observations of TDEs probe the outflowing material ejected during the stellar destruction, including any jets or wind-induced outflows, as well as their interactions with the circumnuclear medium \citep[CNM; see][for a review]{Alexander2020}. Recent radio observations of TDEs have identified two distinct populations: relativistic, non-thermal, jetted events \citep[e.g. Swift J1644+57;][]{Bloom2011}, and the more common non-relativistic, thermal events \citep[e.g. ASASSN-14li;][]{Alexander2016,vanVelzen2016}, as well as highlighting the diverse characteristics of individual events within these populations.
Non-thermal TDEs are thought to produce a relativistic jet, giving rise to bright radio emission with luminosities $>10^{40}$\,erg\,s$^{-1}$ \citep{Levan2011,Burrows2011,Zauderer2011,Bloom2011}. In contrast, thermal TDEs exhibit radio emission with luminosities $<10^{40}$\,erg\,s$^{-1}$ that is often observed within months of the initial optical flare and rises to a peak within a couple of years, depending on the frequency \citep[e.g.][]{Alexander2016,Anderson2020,Cendes2021,Goodwin2022}. Recently, it has been suggested that delayed radio flares are common in TDEs \citep{Horesh2021,Horesh2021b,Cendes2022,Perlman2022}. However, without continuous radio coverage of the TDE lightcurve, it cannot be determined whether these are ``flares'' or simply a slow rise to the radio peak with a structured radio lightcurve, as was the case for AT2019azh \citep{Goodwin2022,Sfaradi2022}. There are two strong cases in which there is evidence that a delayed, mildly relativistic jet was produced $>500$\,d after the initial disruption: ASASSN-15oi and AT2018hyz \citep{Horesh2021,Cendes2022}.
The radio emission from thermal TDEs is thought to arise from either a mildly-collimated, sub-relativistic jet \citep[e.g.][]{vanVelzen2016}, a spherical accretion-induced wind outflow \citep[e.g.][]{Alexander2016}, the unbound debris stream \citep[e.g.][]{Krolik2016}, or a spherical outflow from stream-stream collisions during the circularisation of the stellar debris \citep[e.g.][]{Lu2019}. Existing radio observations of thermal TDEs have been unable to convincingly discern the mechanism behind the non-relativistic outflows that have been observed, and new observations are crucial in identifying if there is a single mechanism behind all radio outflows from TDEs, or if it differs from system to system.
In this work we present the radio detection of AT2020opy, including three epochs of radio spectral observations of the event over 8 months. In Section \ref{sec:observations} we describe the radio observations and the data processing. In Section \ref{sec:results} we present the results and synchrotron modelling of the outflow. In Section \ref{sec:discussion} we discuss the implications of the results and provide a comparison of AT2020opy with other TDEs, and finally in Section \ref{sec:conclusion} we summarise this work and provide concluding remarks.
\section{Observations}\label{sec:observations}
The TDE AT2020opy (ZTF20abjwvae) was first detected on 2020 July 08 by the Zwicky Transient Facility (ZTF) as a transient coincident with the nucleus of the galaxy SDSS J155625.72+232220.6 \citep{ZTF_ATATel}. The source rose slowly to a peak optical flux of $g=18.9$\,mag on 2020 August 02 and showed a featureless blue continuum in spectral observations taken with the Palomar 60in SED Machine. \textit{Swift} follow-up observations on 2020 August 09 revealed bright UV emission from the event but no associated X-ray source, motivating \citet{ZTF_ATATel} to classify the transient as a TDE. Based on the ZTF observations and optical spectral properties, \citet{Hammerstein2022} classified AT2020opy as an H+He TDE at a redshift of $z=0.159$ due to broad H$\alpha$ and H$\beta$ emission lines, as well as a complex of He II emission lines.
\subsection{VLA}
We observed the optical position of AT2020opy on four occasions with the Karl G. Jansky Very Large Array (VLA; Proposal ID: 20A-392, PI: Van Velzen) between 2020 October 06 and 2021 June 03. Our initial observation on 2020 October 06 was taken at 8--12\,GHz to search for radio emission from the event. We discovered a point source consistent with the optical position of the galaxy with a flux density of 65$\pm$7\,$\mu$Jy at 10\,GHz. We subsequently triggered three epochs of radio spectral observations of the source spanning 2--18\,GHz over 8 months. The radio observations are summarised in Table~\ref{tab:observations}.
\begin{table}
\centering
\caption{Dedicated radio observations of AT2020opy. $\nu$ is the central frequency of each sub-band (with a bandwidth of 1\,GHz at S- and C-band, 2\,GHz at X-band, 3\,GHz at Ku-band, 0.856\,GHz for the MeerKAT L-band, and 0.46 and 0.3\,GHz for uGMRT bands 5 and 4, respectively), $F_{\nu}$ is the measured flux density of the source, and ``Array'' gives the telescope and, for the VLA, the array configuration.}
\label{tab:observations}
\begin{tabular}{lcccr} %
\hline
Date (UTC) & Array & Band & $\nu$ (GHz) & $F_{\nu}$ ($\mu$Jy)\\
\hline
\hline
06-Oct-2020 17:19:14 & VLA-B & X & 10 & 65$\pm$7 \\
\hline
15-Oct-2020 23:15:51 & VLA-B & X & 11 & 61$\pm$6\\
& & X & 9 & 68$\pm$10 \\
& & C & 4.55 & 40$\pm$13 \\
& & C & 5.04 & 47$\pm$16 \\
& & C & 6.13 & 51$\pm$63 \\
& & C & 7.6 & 63$\pm$12 \\
\hline
17-Dec-2020 13:35:07 & VLA-A & Ku & 16.5 & 98.8$\pm$10 \\
& & Ku & 13.5 & 137$\pm$10 \\
& & X & 11 & 137.0$\pm$9.5 \\
& & X & 9 & 134$\pm$8 \\
& & C & 7.5 & 139$\pm$11 \\
& & C & 6.5 & 134$\pm$15 \\
& & C & 5.5 & 91$\pm$12\\
& & C & 4.5 & 75$\pm$11\\
& & S & 3.76 & 60$\pm$13\\
& & S & 3.24 & 53$\pm$25 \\
\hline
03-Jun-2021 00:52:34 & VLA-C$\rightarrow$D & X & 11 & 139$\pm$11\\
& & X & 9 & 198$\pm$9 \\
& & C & 7.5 & 175$\pm$14 \\
& & C & 6.5 & 241$\pm$16 \\
& & C & 5.5 & 256$\pm$17 \\
& & C & 4.5 & 252$\pm$17 \\
& & S & 3.5 & 228$\pm$27\\
 & & S & 2.5 & 251$\pm$100 \\
\hline
14-Aug-2021 17:42:51 & MeerKAT & L & 1.28 & 94$\pm$15 \\
\hline
11-May-2022 22:09:16 & MeerKAT & L & 1.28 & 141$\pm$18 \\
\hline
23-Jun-2022 14:20:56 & uGMRT & L & 1.26 & 141$\pm$29\\
& uGMRT & P & 0.65 & $<261$\\
\hline
\end{tabular}
\end{table}
All VLA data were reduced in the Common Astronomy Software Applications package \citep[CASA 5.6.3,][]{McMullin2007} following standard procedures using the VLA pipeline. For all observations, 3C 286 was used for flux density calibration, J1609+2641 was used for phase calibration for frequency ranges 2--12\,GHz (S, C, and X-band), and J1619+2247 was used for phase calibration for 12--18\,GHz (Ku-band). To extract the source flux density, images of the target field were made using the CASA task {\sc tclean} and the flux density was measured in the image plane by fitting an elliptical Gaussian point source fixed to the size of the synthesised beam using the CASA task {\sc imfit}. We split each frequency band into 2 or 4 sub-bands depending on the bandwidth available after radio frequency interference (RFI) flagging.
\subsection{MeerKAT}
\label{sMKTobs}
We observed AT2020opy with MeerKAT during observing runs on 2021 August 14 and 2022 May 11. We used the ``4K'' (4096-channel) wideband continuum mode
and observed with a bandwidth of 856~MHz around a central frequency of
1.28~GHz, over a total time of about 3.7\,hr, of which $\sim$1~hr was
spent on-source for AT2020opy.
The data were reduced using the OxKAT scripts \citep{Heywood2020}. We
used observations of 3C~286 (ICRF J133108.2+303032) to set the flux
density scale and calibrate the bandpass, and PKS J1609+2641 (ICRF
J160913.3+264129) as a secondary calibrator. The final images were
made using the WSClean ($w$-stacking CLEAN) imager
\citep{Offringa+2014, OffringaS2017}, and had a restoring
beam of $12.9\arcsec \times 5.2\arcsec$ at a position angle of $-20\degr$.
We obtained the final flux densities by fitting elliptical Gaussians
to the image. Since there is a relatively nearby confusing source
with a flux density similar to that of AT2020opy, about 8\arcsec\ to
the southwest, we simultaneously fitted two elliptical Gaussians, one
for AT2020opy and one for the confusing source, along with a
zero-level to account for any constant offsets in the flux of the image. Our value
for the flux density of AT2020opy is the flux density of the fitted
Gaussian, and the uncertainty includes both the statistical uncertainty
and a systematic one due to the uncertainty in the flux-density
bootstrapping, estimated at 5\%.
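The deblending strategy described above can be sketched numerically. The snippet below is a synthetic illustration (not the real MeerKAT image): two circular Gaussians separated by several pixels, plus a constant zero-level, fitted simultaneously with least squares; all positions, widths, and amplitudes are made-up values for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two circular Gaussians plus a constant zero-level, fitted simultaneously.
# All values here are synthetic, chosen only to illustrate the deblending.
def model(coords, a1, x1, y1, a2, x2, y2, sigma, zero):
    x, y = coords
    g1 = a1 * np.exp(-((x - x1) ** 2 + (y - y1) ** 2) / (2 * sigma ** 2))
    g2 = a2 * np.exp(-((x - x2) ** 2 + (y - y2) ** 2) / (2 * sigma ** 2))
    return (g1 + g2 + zero).ravel()

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(64.0), np.arange(64.0))   # pixel grid
truth = (94.0, 36.0, 36.0, 90.0, 30.0, 31.0, 3.0, 5.0)  # amplitudes in uJy/beam
image = model((x, y), *truth).reshape(64, 64) + rng.normal(0.0, 2.0, (64, 64))

p0 = (80.0, 37.0, 37.0, 80.0, 29.0, 30.0, 4.0, 0.0)     # rough initial guess
popt, pcov = curve_fit(model, (x, y), image.ravel(), p0=p0)
print(f"target: {popt[0]:.1f} uJy/beam, confusing source: {popt[3]:.1f} uJy/beam")
```

Fitting both components (and the zero-level) in a single least-squares problem avoids the bias that would arise from fitting the target alone on top of the wing of the nearby source.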
\subsection{uGMRT}
We observed AT2020opy with the upgraded Giant Metrewave Radio Telescope (uGMRT) on 2022 June 23 at band 4 (total bandwidth of 300\,MHz with a central frequency of 0.65\,GHz) and band 5 (total bandwidth of 460\,MHz with a central frequency of 1.26\,GHz) over a total time of 3\,hr, with 51\,min on target at band 4 and 34\,min on target at band 5. Each frequency band was broken into 2048 spectral channels.
Data reduction was carried out in CASA using standard procedures, including flux and bandpass calibration with 3C 286 and phase calibration with ICRF J160913.3+264129. Images of the target field were created using the CASA task \texttt{tclean}. Two phase-only and three phase-and-amplitude rounds of self-calibration were carried out on the band 4 data. As with the VLA images, the target flux density was extracted in the image plane at both bands using the CASA task \texttt{imfit}. No detection of the source was obtained in the band 4 observation because a nearby bright source on the edge of the primary beam raised the image rms; we instead report the 3$\sigma$ upper limit obtained at 0.65\,GHz. A 5$\sigma$ detection of the source was obtained at 1.26\,GHz (band 5) and is reported in Table~\ref{tab:observations}.
\subsection{Archival radio observations}
To explore the possibility of previous AGN activity in the host galaxy, we searched the radio archives for observations covering the coordinates of AT2020opy. The VLA Sky Survey \citep[VLASS,][]{Lacy2020} observed the coordinates of AT2020opy at 3\,GHz on 2020 July 16 and 2017 September 25 (35 months pre-optical flare). There was no detection of the host galaxy in either observation, with 3$\sigma$ upper limits of 490$\,\mu$Jy and 340$\,\mu$Jy respectively. The NRAO VLA Sky Survey \citep[NVSS,][]{Condon1998} also observed the coordinates of AT2020opy at 1.4\,GHz on 1995 February 28, but did not detect the host galaxy with a 3$\sigma$ upper limit of 2.1\,mJy. These observations rule out the possibility of bright ($>300\,\mu$Jy) AGN activity in the host galaxy in the past 20\,yr, but we cannot eliminate the possibility of the galaxy hosting a low luminosity AGN.
\section{Results} \label{sec:results}
The VLA radio lightcurve at 5.5\,GHz for AT2020opy compared to other radio-bright thermal TDEs is shown in Figure \ref{fig:LC_comparison} and the broadband radio spectra for each of our three epochs are plotted in Figure \ref{fig:spectra}.
AT2020opy appears brighter than other thermal TDEs at early times relative to the outflow launch date, with luminosity $\nu L_{\nu}\approx5\times10^{38}$\,erg\,s$^{-1}$, but is not as luminous as the relativistic event Swift J1644+57 \citep[$\nu L_{\nu}\approx2\times10^{45}$\,erg\,s$^{-1}$,][]{Zauderer2011}. The radio emission from AT2020opy is well described by a peaked synchrotron spectrum that evolves on timescales of months, consistent with an outflow travelling through the circumnuclear medium surrounding the SMBH and accelerating electrons along magnetic field lines in the resulting shock front.
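The quoted luminosity can be cross-checked directly from the first 10\,GHz detection. The short sketch below assumes a luminosity distance of roughly 765\,Mpc for $z=0.159$ (a round number for a standard flat cosmology with $H_0\approx70$\,km\,s$^{-1}$\,Mpc$^{-1}$) and neglects the K-correction.

```python
import numpy as np

# Order-of-magnitude check of nu*L_nu at 10 GHz from the first VLA epoch.
# d_L ~ 765 Mpc for z = 0.159 is an assumed round value; K-correction neglected.
d_L = 765.0 * 3.086e24          # luminosity distance [cm]
F_nu = 65e-6 * 1e-23            # 65 uJy in erg s^-1 cm^-2 Hz^-1
nu = 10e9                       # observing frequency [Hz]
nu_L_nu = 4.0 * np.pi * d_L**2 * nu * F_nu
print(f"nu L_nu ~ {nu_L_nu:.1e} erg/s")   # ~5e38 erg/s, as quoted
```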
\subsection{Synchrotron spectral fitting}
We fit the synchrotron spectra of AT2020opy using the same approach outlined in \citet{Goodwin2022}. We apply the \citet{Granot2002} model, assuming the synchrotron self-absorption frequency is associated with the peak of the spectrum and that $\nu_{\rm{m}} < \nu_{\rm{a}} < \nu_{\rm{c}}$, where $\nu_{\rm{m}}$ is the synchrotron minimum frequency, $\nu_{\rm{a}}$ is the synchrotron self-absorption frequency, and $\nu_{\rm{c}}$ is the synchrotron cooling frequency. This approach models the total spectral flux density as a function of frequency in order to constrain the break frequencies and the electron energy index, $p$. We assume no contribution to the radio emission from the host galaxy: there are no previous radio detections of the host in archival observations, and the earlier observations of the transient were significantly fainter than the later ones, indicating that the transient synchrotron component dominates.
As in \citet{Goodwin2022}, we use a Python implementation of Markov Chain Monte Carlo (MCMC), \texttt{emcee} \citep{emcee}, to marginalise over the synchrotron model parameters and determine the best-fitting parameters and uncertainties. Due to the paucity of data at high frequencies, we fix the electron energy index to $p=2.7$ \citep[e.g.][]{Cendes2021}, but note that the derived parameters do not deviate significantly from the 1$\sigma$ uncertainty ranges if we instead choose other reasonable values, such as $p=2.5$ or $p=3$. Furthermore, $p=2.7$ is the best-fitting value when $p$ is left as a free parameter in the fit to the third epoch, in which the optically thin slope is best constrained by the data.
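The fitted spectral shape can be sketched as a smoothly broken power law: the optically thick $\nu^{5/2}$ rise below the self-absorption peak joined to the optically thin $\nu^{-(p-1)/2}$ decline above it. The smoothing index and normalisation convention below are illustrative choices, not the exact \citet{Granot2002} parameterisation used in the fits.

```python
import numpy as np

# Schematic self-absorbed synchrotron spectrum: nu^{5/2} below the peak,
# nu^{-(p-1)/2} above, joined by a smoothly broken power law. The smoothing
# index s is an illustrative choice, not the fitted value.
def ssa_spectrum(nu, F_peak, nu_peak, p=2.7, s=1.25):
    x = nu / nu_peak
    b_thick, b_thin = 2.5, -(p - 1.0) / 2.0
    return F_peak * (0.5 * (x ** (-s * b_thick) + x ** (-s * b_thin))) ** (-1.0 / s)

nu = np.logspace(8.5, 11.0, 400)                      # ~0.3--100 GHz
F = ssa_spectrum(nu, F_peak=0.28, nu_peak=4.2e9)      # third-epoch fit values
slope = np.gradient(np.log10(F), np.log10(nu))
print(f"low-frequency slope ~ {slope[0]:.2f}, high-frequency slope ~ {slope[-1]:.2f}")
```

The asymptotic slopes recover the optically thick index of $5/2$ and the optically thin index of $-(p-1)/2 = -0.85$ for $p = 2.7$.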
The observed and modelled synchrotron spectra for AT2020opy are plotted in Figure \ref{fig:spectra}, and the best-fitting peak flux density and peak frequency for each epoch are listed in Table \ref{tab:spectralfits}. The synchrotron peak flux density rose consistently between the three epochs and the peak frequency decreased between the epochs.
\subsection{Outflow modelling}
We model the radio outflow based on the inferred synchrotron emission properties using the same approach outlined in \citet{Goodwin2022}, in which following the model of \citet{BarniolDuran2013} we assume the ambient electrons are accelerated into a power-law distribution by the blastwave from the outflow, $N(\gamma) \propto \gamma^{-p}$, where $\gamma$ is the electron Lorentz factor. In order to estimate the outflow radius, energy, magnetic field strength, and velocity, we assume equipartition between the electron and magnetic field energy densities, which enables the derivation of an equipartition radius and energy. Once the equipartition radius and energy are obtained, we then parameterise the deviation from equipartition to derive the total energy and radius, from which other parameters can be derived. We refer the reader to equations 4--13 of \citet{Goodwin2022} for the specific equations also used in this work. To account for different outflow mechanisms, we model two geometries of the outflow: a spherical outflow and a mildly collimated conical outflow with a half opening angle of 30\degr. We note that we model the outflow as non-relativistic (bulk Lorentz factor $\Gamma=1$), as a relativistic outflow is only possible for very small ($\lesssim0.1$\,deg) opening angles. The estimated physical outflow properties for AT2020opy are plotted in Figure \ref{fig:outflowmodel} and listed in Table \ref{tab:spectralfits} for each of these geometries.
\begin{table*}
\centering
\caption{Synchrotron spectral fitting parameters and outflow model predictions for AT2020opy.}
\label{tab:spectralfits}
\begin{tabular}{lcccccccc}
\hline
& $\delta t$ (d) & $\nu_{\mathrm{peak}}$ (GHz) & $F_{\rm peak}$ (mJy) & log$_{10}$\,$R$ (cm) & log$_{10}$\,$E$ (erg) & $\beta$ & log$_{10}$\,$B$ (G) & log$_{10}$\,$n_e$ (cm$^{-3}$) \\
\hline
\hline
& 50 & $9.6\pm1.2$ & $0.070\pm0.007$ & $16.2\pm0.1$ & $48.9\pm0.3$ & $0.12\pm0.04$ & $-0.4\pm0.9$ & $4.7\pm1.3$\\
Spherical & 116 & $9.4\pm0.5$ & $0.141\pm0.006$ & $16.3\pm0.1$ & $49.3\pm0.3$ & $0.08\pm0.02$ & $-0.5\pm0.7$ & $4.6\pm1.2$ \\
& 281 & $4.2\pm0.4$ & $0.28\pm0.02$ & $16.8\pm0.1$ & $50.0\pm0.3$ & $0.10\pm0.03$ & $-0.9\pm0.7$ & $3.8\pm1.2$ \\
\hline
& 50 & $9.6\pm1.2$ & $0.070\pm0.007$ & $16.6\pm0.1$ & $49.4\pm0.3$ & $0.24\pm0.08$ & $-0.7\pm0.9$ & $4.1\pm1.3$ \\
Conical & 116 & $9.4\pm0.5$ & $0.141\pm0.006$ & $16.7\pm0.1$ & $49.8\pm0.3$ & $0.16\pm0.05$ & $-0.7\pm0.7$ & $4.0\pm1.2$ \\
& 281 & $4.2\pm0.4$ & $0.28\pm0.02$ & $17.2\pm0.1$ & $50.5\pm0.3$ & $0.2\pm0.06$ & $-1.1\pm0.7$ & $3.3\pm1.2$ \\
\hline
\end{tabular}
\textit{Note:} $\delta t$ is reported with reference to $t_0$, the estimated outflow launch date of MJD 59087.
\end{table*}
The radius increased approximately linearly with time, indicating an approximately constant outflow velocity. A simple linear fit to the radius (Figure \ref{fig:outflowmodel}) gives an outflow launch date of MJD 59087$\pm$41 or MJD 59088$\pm$43 for the spherical and conical geometries, respectively, approximately 50\,d after the optical flare was first observed. This predicted outflow launch date is coincident with the optical peak on MJD 59070, and is also consistent within 2$\sigma$ with the initial optical flare on MJD 59038.
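The linear extrapolation can be reproduced approximately from the spherical-geometry radii in Table~\ref{tab:spectralfits}; the sketch below uses the central values only, ignoring the logarithmic uncertainties that the full fit propagates.

```python
import numpy as np

# Unweighted straight-line fit to the spherical-geometry radii of Table 2.
# delta-t is measured from the quoted launch epoch MJD 59087, so an
# extrapolated launch time near zero is consistent with that estimate.
t = np.array([50.0, 116.0, 281.0])         # days since MJD 59087
R = 10.0 ** np.array([16.2, 16.3, 16.8])   # outflow radius [cm]
slope, intercept = np.polyfit(t, R, 1)
beta = slope / 86400.0 / 3e10              # mean expansion speed in units of c
t_launch = -intercept / slope              # R = 0 epoch, days relative to MJD 59087
print(f"beta ~ {beta:.2f}, launch offset ~ {t_launch:.0f} d")
```

The recovered speed ($\beta \approx 0.08$) and launch offset (a few days, well inside the quoted $\pm41$\,d uncertainty) agree with the values in the text.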
The energy of the outflow increased approximately linearly with time, as expected for an increasing synchrotron peak flux density, which could be indicative of constant energy injection into the outflow. The velocity and magnetic field strength remained approximately constant at 0.1\,$c$ (0.2\,$c$) and 0.28\,G (0.15\,G) for the spherical (conical) geometry, with no sign of relativistic motion of the outflow. We note that the inferred radius, energy, and ambient density are consistent with having remained constant between the first and second epochs of observations at 50 and 116\,d post outflow launch. However, due to the paucity of data in the first epoch and the large uncertainties in the resulting spectral fits, we deduce that it is more likely that the outflow was evolving between these two epochs, as there is significant evolution between the first and last epochs.
\section{Discussion}\label{sec:discussion}
The radio properties of the TDE AT2020opy indicate that a non-relativistic outflow was launched at the time of, or just after, the initial optical flare. We deduce that the outflow has an approximately constant velocity of $\beta\approx0.1$ and energy $\sim10^{49}$\,erg at radii $\sim10^{16}$\,cm. Between 2020 October and 2021 June the radio emission from AT2020opy increased in peak flux density while the peak frequency of the synchrotron spectrum decreased, consistent with a constant-velocity outflow moving through the CNM and sweeping up material.
\citet{Hammerstein2022} analysed the optical spectra of AT2020opy and classified the event as a TDE with broad H$\alpha$ and H$\beta$ emission lines as well as a complex of He II emission lines (H+He spectral class) and a structured optical lightcurve with some flaring activity. In an analysis of 30 TDEs, \citet{Hammerstein2022} found some evidence that TDEs with structured lightcurves tend to occur in galaxies with lower total mass, and thus could occur around lower mass SMBHs. We note that this finding is strongly dependent on an assumption of a relationship between total stellar mass in the host galaxy and SMBH mass. \citet{Hammerstein2022} propose that the structured flaring activity seen in the lightcurves of these TDEs could be due to longer circularisation times of the lower mass SMBHs. A longer circularisation time of the disk for AT2020opy could explain the lack of early X-ray emission from the event \citep{ZTF_ATATel}, as well as a radio outflow that was launched after the initial optical flare.
\subsection{The outflow mechanism}
The radio measurements constrain the physical properties of the outflow produced in AT2020opy, enabling some discrimination between current models of non-relativistic outflows in TDEs. Firstly, the data indicate that the outflow in this event was likely launched approximately 50\,d after the initial optical flare; however, we cannot rule out a contemporaneous launch with a high degree of confidence (at 1$\sigma$ we infer the outflow was launched at least 8\,d after the flare). In comparison, the radio outflows observed for the thermal TDEs AT2019azh and ASASSN-14li were inferred to have been launched at the time of the initial optical flare \citep{Goodwin2022,Alexander2016,vanVelzen2016}.
Secondly, the observed velocity of the outflow is approximately constant (under the assumption of ballistic motion) and the radio emission cannot be explained by a relativistic outflow unless the jet has an unphysically small opening angle.
\citet{Hammerstein2022} found some evidence that TDEs with structured lightcurves occur in lower-mass host galaxies, which could translate to lower-mass central black holes with longer circularisation times, making stream-stream collisions more important. A long circularisation time of the stellar debris for AT2020opy could explain a delayed onset of the radio outflow if the outflow was produced by either a disk wind or debris collisions. Finally, the lack of X-ray emission from the event at early times \citep{ZTF_ATATel} could be due either to intrinsically low X-ray activity, because the accretion disk took a long time to form (which would make a super-Eddington accretion-induced wind outflow unlikely), or to the large distance to the source.
We thus conclude that the outflow from AT2020opy is more likely to be explained by a spherical outflow from stream-stream collisions of the circularising stellar debris \citep{Lu2019} than by an accretion induced wind outflow from accretion onto the SMBH \citep[e.g.][]{Alexander2016}. We deduce that the unbound debris stream is unlikely to explain the radio properties of this outflow due to the low mass predicted in the outflow ($\lesssim 10^{-2}M_{\odot}$) and predicted small opening angle of the unbound debris stream \citep[e.g.][]{Guillochon2014}.
Under the assumption that the outflow was produced by a collision induced outflow (CIO) or accretion induced wind, the increasing energy with constant velocity of the radio outflow is consistent with a single injection of energy into the outflow that is sweeping up material from the CNM \citep{Lu2019}. Under such an assumption, we can approximately infer the deceleration time of the outflow using the model from \citet{Lu2019}, where for $k=2$, the deceleration radius is given by
\begin{equation}
r_{\mathrm{dec}}= \frac{1}{\Omega} \frac{2 E_{\rm k}}{N m_{\rm p} v_0^2}
\end{equation}
where the outflow is assumed to be a thin shell covering solid angle $\Omega\sim2\pi$, $E_{\rm k}$ is the kinetic energy in the outflow, $m_{\rm p}$ is the proton mass, $N$ is the electron density, and $v_0$ is the average outflow velocity.
For $E_{\rm k}\sim5\times10^{51}$ ($10^{52}$)\,erg, $v_0=0.1\,c$, and $N=10^{3}$\,cm$^{-3}$, we infer $r_{\mathrm{dec}}\approx10^{17}$\,cm ($2\times10^{17}$\,cm), and thus $t_{\mathrm{dec}}\approx390$\,d (780\,d). At 281\,d we observed the radio outflow to still be expanding and increasing in luminosity and energy (Figure \ref{fig:outflowmodel}), and at 623--666\,d we observed the radio emission to still be rising or constant in luminosity at $\sim$1.3\,GHz (Figure \ref{fig:lightcurve}). We thus predict that the outflow could still be increasing in luminosity until 2022 November, depending on the kinetic energy available in the outflow. Further radio observations of the event would help constrain the deceleration radius and time of the outflow and enable further discrimination between outflow models. Importantly, we note that for this kind of outflow the radio emission can continue to increase for years after the initial event, depending on the energy available in the outflow and the density of the CNM. For AT2020opy, the radio emission is likely more luminous due to a denser CNM, and may therefore peak earlier than in other thermal TDEs.
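The quoted deceleration times follow directly from the deceleration radii at the measured (assumed constant) outflow speed:

```python
# Deceleration times implied by r_dec at a constant outflow speed v0 = 0.1c,
# for the two bracketing kinetic energies quoted in the text.
c = 3e10                                           # speed of light [cm/s]
v0 = 0.1 * c
t_dec = [r / v0 / 86400.0 for r in (1e17, 2e17)]   # r_dec -> days
print(f"t_dec ~ {t_dec[0]:.0f} d and {t_dec[1]:.0f} d")
```

This reproduces the $\approx$390\,d and $\approx$780\,d timescales quoted above to within a few per cent.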
\subsection{Comparison with other TDEs}
A comparison of the inferred outflow properties for AT2020opy with other radio-bright TDEs is shown in Figure \ref{fig:comparison}. AT2020opy clearly fits into the population of non-relativistic events in terms of energy, velocity, and radius from the central black hole.
The ambient density scales approximately as $n\propto R^{-1.5}$--$R^{-2.5}$ for AT2020opy, similar to other thermal TDEs. The CNM of AT2020opy appears to be $\sim30\%$ denser than that of any other TDE observed to date, which could explain the higher luminosity of the radio emission from the event despite its larger distance. Interestingly, AT2020opy is also the most distant of the thermal TDEs with reported radio detections, implying that radio emission from more energetic events may be observed from further away. The inferred CNM density and radio luminosity of AT2020opy further confirm that in galaxies with higher CNM densities, outflow emission rises more quickly and is brighter than in galaxies with less-dense CNMs \citep{Lu2019}.
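The quoted density profile can be read off the spherical-geometry values in Table~\ref{tab:spectralfits} with a straight-line fit in log space (central values only; the per-epoch uncertainties are large):

```python
import numpy as np

# Power-law index of the ambient density profile from the spherical-geometry
# model (Table 2), fitting log n_e against log R.
logR = np.array([16.2, 16.3, 16.8])   # log10 radius [cm]
logn = np.array([4.7, 4.6, 3.8])      # log10 electron density [cm^-3]
k = np.polyfit(logR, logn, 1)[0]
print(f"n_e ~ R^{k:.1f}")
```

The recovered index of roughly $-1.5$ sits at the shallow end of the quoted $R^{-1.5}$--$R^{-2.5}$ range.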
The delay of the radio emission relative to the optical flare observed from AT2020opy was not large, similar to the two other thermal TDEs that have been observed in the radio-rise phase, AT2019azh \citep{Goodwin2022} and AT2019dsg \citep{Stein2021,Cendes2021}, and in contrast to the late-time radio flare observed from ASASSN-15oi \citep{Horesh2021}. Our modelling constrains the onset of the radio outflow in AT2020opy to be consistent with the time of, or just after, the optical flare.
\section{Conclusions} \label{sec:conclusion}
We followed the radio evolution of the tidal disruption event AT2020opy for 20 months with the VLA, MeerKAT, and uGMRT radio telescopes. Based on modelling of the observed synchrotron emission, we find that the radio emission is likely due to a non-relativistic outflow, which could take the form of a spherical wind, a collision-induced outflow, or a mildly collimated jet. Based on modelling of the evolution of the radius, we find the outflow was launched around or just after the time that the initial optical flare was observed. Through synchrotron spectral modelling of the radio emission, we deduce that the circumnuclear medium of the host galaxy is denser than that inferred for other TDE hosts, producing brighter, more quickly rising radio emission from the outflow.
Follow-up observations of this event are encouraged to trace the long-term decay of the radio emission, which we predict will reach peak luminosity 390--780\,d post outflow launch (up to 2022 November).
\section*{Acknowledgements}
The authors thank K. Alexander, N. Blagorodnova, P. Woudt, M. Bottcher, R. Fender, J. Bright, and S. Kulkarni for their contributions to the observing proposals that were instrumental to this work. This work was supported by the Australian government through the Australian Research
Council's Discovery Projects funding scheme (DP200102471). A.H. is grateful for the support by the I-Core Program of the Planning and Budgeting Committee and the Israel Science Foundation, and support by ISF grant 647/18. A.H. is grateful for support by the Zelman Cowen Academic Initiatives. GRS is supported by NSERC Discovery Grants RGPIN-2016-06569 and RGPIN-2021-0400.
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility
of the National Research Foundation, an agency of the Department of Science and Innovation. We thank the staff of the GMRT that made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research.
\section*{Data Availability}
The spectral fitting and equipartition modelling software used in this work is publicly available on Github at \url{https://github.com/adellej/tde_spectra_fit}.
\section*{Software}
This research made use of Matplotlib, a community-developed \texttt{Python} package \citep{Hunter2007}, NASA's Astrophysics Data System Bibliographic Services, the Common Astronomy Software Applications package \texttt{CASA} \citep{McMullin2007}, the Cube Analysis and Rendering Tool for Astronomy \citep[CARTA,][]{Comrie2021}, and the \texttt{Python} packages cmasher \citep{cmasher} and emcee \citep{emcee}.
\bibliographystyle{mnras}
\bibliography{bibfile} %
\bsp %
\label{lastpage} |
Title:
Spinning Nanoparticles Impacted by C-shock: Implications for Radio-millimeter Emission from Star-forming Regions |
Abstract: We investigate the impact of anomalous microwave emission (AME) on the
radio-millimeter spectral energy distribution for three typical interstellar
medium (ISM) conditions surrounding star-forming regions -- cold neutral
medium, warm neutral medium, and photodissociation region -- by comparing the
emissivities of three major contributors: free-free, thermal dust emission, and
AME. In particular, for spinning nanoparticles (i.e., potential carriers of
AME), we consider a known grain destruction mechanism due to a centrifugal
force from spin-up processes caused by collisions between dust grains and
supersonic neutral streams in a magnetized shock (C-shock). We demonstrate
that, if the ISM in a magnetic field is impacted by a C-shock developed by a
supernova explosion in the early phase of massive star-formation ($\lesssim 10$
Myr), AME can be significantly or almost entirely suppressed relative to
free-free and thermal dust continuum emission if the grain tensile strength is
small enough. This study may shed light on explaining the rare observations of
AME from extragalactic star-forming regions preferentially observed from
massive star clusters and suggest a scenario of "the rise and fall of AME" in
accordance with the temporal evolution of star-forming regions.
| https://export.arxiv.org/pdf/2208.05510 |
\newcommand{\vdag}{(v)^\dagger}
\newcommand\aastex{AAS\TeX}
\newcommand\latex{La\TeX}
\usepackage{amsmath}
\usepackage{amsbsy}
\let\oldAA\AA
\newcommand\angst{\text{\normalfont\oldAA}}
\newcommand\nh{\ifmmode{n_{\tiny \mbox{H}}}\else{$n_{\tiny \mbox{H}}$}\fi}
\newcommand\ngr{\ifmmode{n_{\tiny \mbox{gr}}}\else{$n_{\tiny \mbox{gr}}$}\fi}
\newcommand\jvsp{\ifmmode{j_{\tiny \nu,sp}}\else{$j_{\tiny \nu, sp}$}\fi}
\newcommand\jvff{\ifmmode{j_{\tiny \nu,ff}}\else{$j_{\tiny \nu,ff}$}\fi}
\newcommand\jvbb{\ifmmode{j_{\tiny \nu,bb}}\else{$j_{\tiny \nu,bb}$}\fi}
\newcommand\amax{\ifmmode{a_{\tiny \mbox{max}}}\else{$a_{\tiny \mbox{max}}$}\fi}
\newcommand\amin{\ifmmode{a_{\tiny \mbox{min}}}\else{$a_{\tiny \mbox{min}}$}\fi}
\newcommand\cmvol{\ifmmode{\mbox{cm}^{-3}}\else{$\mbox{cm}^{-3}$}\fi}
\newcommand\HII{$\textrm{H} \scriptstyle\mathrm{II}$}
\received{September 22, 2021}
\revised{June 29, 2022}
\accepted{July 25, 2022}
\submitjournal{ApJ}
\shorttitle{Radio-\textit{mm} SED of SF region with AME}
\shortauthors{Yoon}
\graphicspath{{./}{figures/}}
\begin{document}
\title{Spinning Nanoparticles Impacted by C-shock: Implications for Radio-millimeter Emission from Star-forming Regions}
\correspondingauthor{Ilsang Yoon}
\email{iyoon@nrao.edu}
\author[0000-0001-9163-0064]{Ilsang Yoon}
\affiliation{National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903, USA}
\keywords{submillimeter: ISM -- radio continuum: ISM -- galaxies: ISM -- ISM:dust -- radiation mechanisms: non-thermal -- radiation mechanisms: thermal}
\section{Introduction} \label{sec:intro}
Thermal free-free emission is a good tracer of star formation and is widely used to estimate the star formation rate (SFR) of galaxies using radio continuum measurements at 10--33\,GHz, where the free-free emission is dominant \citep[][]{murphy_etal_2011}. However, anomalous microwave emission (AME), explained by dipole radiation from spinning nanoparticles \citep{draine_and_lazarian_1998}, is also bright in this frequency range, and its peak frequency can be as high as 100\,GHz \citep[][]{spdust2009}, which implies that (1) AME can be a significant contaminant if the SFR is measured from the thermal free-free radio continuum at a single frequency \citep[e.g.,][]{murphy_etal_2011} and (2) AME can affect measurements of molecular gas mass from a submillimeter (e.g., 850\,$\mu$m or 350\,GHz) flux density \citep[e.g.,][]{scoville_etal_2016}.
Therefore, the observational study of AME in Galactic and extragalactic star-forming regions is important for understanding the impact of AME on the radio-millimeter continuum spectral energy distribution (SED). The current estimate of the AME fraction of the total emission near $\approx 30$\,GHz in our Galaxy is as large as 50\% \citep{dickinson_etal_2018}. In other galaxies, however, AME does not seem to be strong, as shown by observations of two galaxies, NGC 6946 \citep{murphy_etal_2010,scaife_etal_2010b,hensley_etal_2015} and NGC 4725B \citep{murphy_etal_2018}, although the flux integrated over the entire galaxy M31 shows a significant enhancement of AME \citep{battistelli_etal_2019}.
Extragalactic AME detections have been reported for only two galaxies (NGC 6946 and NGC 4725B) with spatially resolved emission and one galaxy (M31) with spatially integrated emission. It is not clear whether this discrepancy is due to different interstellar medium (ISM) environments associated with the AME regions in our Galaxy and in other galaxies, to a bias in the Galactic AME estimate based on observations of the solar neighborhood, or to a combination of both \citep{dickinson_etal_2018}. If the observation of AME in our Milky Way is biased, and AME is indeed weak and rare in star-forming regions in galaxies in general, it is interesting to understand the possible reasons for the weakness and rareness of AME in extragalactic star-forming regions.
In general, a star-forming region is complex and consists of a mixture of different phases of ISM from the fully ionized phase (\HII\ region) to the cold phase of gas and dust shielding strong UV radiation, as illustrated in Figure~\ref{fig:sfregion}. As illustrated by \cite{spdust2009} using models of spinning dust emission, the types of ISM from which one can expect both thermal free-free emission and significant AME are cold neutral medium (CNM), warm neutral medium (WNM), and photodissociation region (PDR) that are usually associated with nearby \HII\ regions (Figure~\ref{fig:sfregion}).
Although our current understanding of a connection between the physical ISM conditions of star-forming regions and the AME detection is still incomplete and evolving, the emerging impression is that the lower-density star-forming regions with a high interstellar radiation field (ISRF; like WNM) provide the most promising environment for detecting AME, while a very high ISRF may destroy the grain population \citep[see ][for a review]{scaife_2013}.
Recent observations of the resolved star-forming clouds suggest that AME is correlated with polycyclic aromatic hydrocarbon (PAH) emission associated with the PDR \citep[][]{bell_etal_2019,arcetord_etal_2020,casassus_etal_2021}, which is consistent with the previous observations suggesting a correlation between AME and PAH tracers \citep{scaife_etal_2010a,ysard_etal_2010,tibbs_etal_2011,battistelli_etal_2015}.
The AME may also be detected in \HII\ regions \citep[e.g., RCW 175;][]{tibbs_etal_2012,battistelli_etal_2015} if dust is present in the interior of the \HII\ region \citep{paladini_etal_2012,watson_etal_2008} or in the \HII\ bubble \citep{anderson_etal_2012,flagey_etal_2011}. In general, however, AME is unlikely to dominate over free-free emission in \HII\ regions: the abundance of dust particles (e.g., PAHs) is suppressed there by strong UV radiation \citep[e.g.,][]{peeters_etal_2004,binder_and_povich_2018,povich_etal_2007} and supernova shocks \citep[e.g.,][]{jones_etal_1996}, and even in high-density \HII\ regions with increased dust abundance \citep[e.g.,][]{draine_2011}, free-free emission is likely to dominate because its emissivity scales as $n^2_e$.
In the current working models of AME \citep[e.g.,][]{draine_and_lazarian_1998,spdust2009}, several model parameters -- the grain size distribution, charge, dipole moment, strength of the ambient ISRF, gas temperature, and hydrogen density \citep[e.g.,][]{spdust2009,hensley_and_draine_2017} -- can change the shape and intensity of the AME emissivity function such that AME is suppressed. However, the AME emissivity from the CNM, WNM, and PDR is still significantly larger than the free-free emissivity over the frequency range 10--100 GHz for typical parameter values of the ISM and dust \citep{spdust2009}. Furthermore, the recently proposed dust destruction by radiative torques in a strong radiation field efficiently breaks large grains \citep{hoang_etal_rat_2019} and increases the small-grain population, which implies that AME could be even stronger in star-forming regions with strong UV radiation from OB associations (see Section~\ref{sec:riseame}).
Another possible (and probably more plausible) way of suppressing AME from these star-forming regions (CNM, WNM, and PDR) is to reduce the abundance of small nanoparticles by destroying them, which has not been discussed much. Recently, \cite{hoang_etal_2019} suggested a new disruption mechanism for very small grains (nanoparticles with size $\sim 10^{-9}$ m) by the increased centrifugal force due to suprathermal rotation driven by stochastic mechanical torques \citep[originally proposed by][]{gold_1952} in a magnetized shock (C-shock). The magnetic field is ubiquitous, and, in ISM with a low ionization fraction, the supersonic drift of neutral particles in a C-type shock \citep{draine_2011book} can impact small nanoparticles coupled to the magnetic field and increase their angular velocity \citep{hoang_etal_2019}. Indeed, relative to other grain destruction mechanisms (thermal sputtering, nonthermal sputtering, and grain--grain collisions), this rotational disruption appears to be the fastest mechanism for destroying nanoparticles in C-shocks, as suggested by a comparison of the timescales of each process \citep{hoang_etal_2019}.
In this work, we compute the emissivity of AME by incorporating the disruption of nanoparticles impacted by C-shocks and investigate the emissivity of the composite of AME, thermal free-free, and dust continuum
emission in the radio-millimeter wavelength (10--100 GHz) from the ISM associated with star-forming regions, to characterize how much AME contributes to the radio-millimeter SED.
In Section~\ref{sec:ame}, we briefly review the emission mechanism of spinning nanoparticles and the characteristic properties of the ISM associated with star-forming regions. In Section~\ref{sec:dynamics}, we discuss the properties of C-shocks and the rotational dynamics of nanoparticles in regions affected by them. In Section~\ref{sec:destroy}, we discuss the suprathermal rotation of spinning nanoparticles and the critical angular velocity above which they cannot resist the centrifugal force, as well as how the assumed dust size distribution impacts the AME emissivity. In Section~\ref{sec:emission}, we show the impact of C-shocks on the radio-millimeter emissivity by considering the grain destruction mechanisms discussed in Section~\ref{sec:destroy}. In Section~\ref{sec:discuss}, we propose a possible scenario for the rise and fall of AME and discuss issues relevant to the current work. Finally, we summarize our work in Section~\ref{sec:summary}.
\section{AME from Star-forming Regions} \label{sec:ame}
In this section, we provide a brief overview of the grain rotational dynamics. More details are found in \cite{draine_and_lazarian_1998} and \cite{spdust2009}.
\subsection{Basics of Dipole Radiation from Spinning Nanoparticles}\label{sec:ame_instro}
The radiation power at the frequency $\nu=\omega/2\pi$ from spinning particles with an angular velocity $\boldsymbol{\omega}$ of the electric dipole moment $\boldsymbol{\mu}$, with a component $\boldsymbol{\mu}_{\perp}$ perpendicular to $\boldsymbol{\omega}$, is
\begin{equation}
P = \frac{2}{3}\frac{\mu^2_\perp \omega^4}{c^3},
\end{equation} and the emissivity of the spinning particles per H atom in erg s$^{-1}$ sr$^{-1}$ Hz$^{-1}$ (H atom)$^{-1}$ is
\begin{equation}\label{eq:emissi}
\frac{\jvsp}{\nh} = \frac{1}{4\pi} \int^{a_{\tiny \mbox{max}}}_{a_{\tiny \mbox{min}}} da \frac{1}{\nh} \frac{d\ngr}{da} 4\pi\omega^2 f_a(\omega) 2\pi \frac{2}{3}\frac{\mu^2_\perp \omega^4}{c^3}
\end{equation} where \amin\ and \amax\ are the minimum and maximum size of the dust grain, and $\frac{1}{\nh} \frac{d\ngr}{da}$ is the number of dust grains per unit size per H atom.
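As a quick order-of-magnitude check of the dipole power formula, the following Python sketch evaluates $P$ for illustrative values; the dipole moment $\mu_\perp \sim 1$ debye and the rotation frequency of 30 GHz are assumptions for illustration, not values quoted in the text:

```python
import math

def dipole_power(mu_perp, omega):
    """Electric dipole radiation power P = (2/3) mu_perp^2 omega^4 / c^3 (CGS)."""
    c = 2.9979e10  # speed of light [cm/s]
    return (2.0 / 3.0) * mu_perp**2 * omega**4 / c**3

# Illustrative (assumed) values: mu_perp ~ 1 debye = 1e-18 esu cm, nu = 30 GHz
P = dipole_power(1e-18, 2.0 * math.pi * 30e9)  # a few 1e-23 erg/s
```

A single nanoparticle thus radiates only $\sim 10^{-23}$ erg s$^{-1}$ under these assumptions; the observable emissivity comes from integrating over the grain size and angular velocity distributions as in Equation~\ref{eq:emissi}.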
The angular velocity distribution function for grain size $a$, $f_a(\omega)$, can be obtained by solving a stationary Fokker--Planck equation \citep{spdust2009},
\begin{equation}\label{eq:FP}
\frac{df_a(\omega)}{d\omega}+\left[\frac{I\omega}{kT}\frac{F}{G} + \frac{\tau_{\tiny{\mbox{H}}}}{\tau_{\tiny{\mbox{ed}}}}\frac{1}{3G}\frac{I^2\omega^3}{(kT)^2}\right] f_a(\omega) = 0
\end{equation}
where $I$ is the moment of inertia of the dust particle, $\tau_{\tiny{\mbox{H}}}$ is the characteristic rotational damping timescale for collisions with neutral H atoms, and $\tau_{\tiny{\mbox{ed}}}$ is the characteristic damping timescale for electric dipole radiation. Here $\tau_{\tiny{\mbox{H}}}$ and $\tau_{\tiny{\mbox{ed}}}$ are
\begin{eqnarray}\label{eq:tscale}
\tau_{\tiny{\mbox{H}}} & = & \left[\nh m_{\tiny \mbox{H}} \left(\frac{2kT}{\pi m_{\tiny \mbox{H}}}\right)^{1/2}\frac{4\pi a^4_{\tiny \mbox{cx}}}{3I} \right]^{-1} \\
\tau_{\tiny{\mbox{ed}}} & = & \frac{I^2c^3}{2\mu_\perp^2kT}
\end{eqnarray}
where $a_{\tiny \mbox{cx}}$ is the ``cylindrical excitation-equivalent'' radius, defined as $4\pi a^4_{\tiny \mbox{cx}} \equiv \frac{3}{2}\oint\rho^2d\mbox{S}$ \citep{spdust2009}.
In Equation~\ref{eq:FP}, $F$ and $G$ are the dimensionless damping and excitation coefficients of spinning dust particles for various interaction processes (e.g., collisions with ions and neutrals, plasma drag, infrared radiation, photoelectric emission, H$_2$ formation) and need to be computed for each interaction process \citep[e.g.,][]{draine_and_lazarian_1998,spdust2009}.
Although the small grains may be sheetlike \citep{draine_and_lazarian_1998,spdust2009}, we adopt a spherical geometry for the grains when computing the damping and excitation coefficients for the impact of C-shocks, as many previous works do when computing $F$ and $G$ \citep[e.g.,][]{draine_and_lazarian_1998,spdust2009,silsbee_etal_2011,hoang_etal_2019}.
For computing \jvsp, we use the publicly available code \texttt{SpDust} \citep{spdust2009,silsbee_etal_2011}, which solves Equation~\ref{eq:FP} numerically to obtain $f_a(\omega)$ and computes $\frac{\jvsp}{\nh}$ for an assumed grain size distribution $\frac{1}{\nh} \frac{d\ngr}{da}$, a dipole moment $\mu$, and ISM environment parameters (\nh, $T$, $x_{\mbox{\tiny H}}$, $x_{\mbox{\tiny M}}$ and $y$ in Table~\ref{tab:ismparam}).
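Equation~\ref{eq:FP} is a first-order linear ODE, so the stationary distribution can also be written in closed form, $f_a(\omega) \propto \exp\left[-\frac{F}{G}\frac{I\omega^2}{2kT} - \frac{\tau_{\tiny{\mbox{H}}}}{\tau_{\tiny{\mbox{ed}}}}\frac{I^2\omega^4}{12G(kT)^2}\right]$. The Python sketch below evaluates this solution in the dimensionless variable $x=\omega\sqrt{I/kT}$ for illustrative (assumed) values of $F$, $G$, and $\tau_{\tiny{\mbox{H}}}/\tau_{\tiny{\mbox{ed}}}$; when the quartic term vanishes, the distribution is Maxwellian with $\langle x^2\rangle = 3$:

```python
import numpy as np
from scipy.integrate import quad

def f_stationary(x, F=1.0, G=1.0, tau_ratio=0.1):
    """Unnormalized stationary solution of the Fokker-Planck equation in the
    dimensionless variable x = omega * sqrt(I / kT); tau_ratio = tau_H / tau_ed."""
    return np.exp(-(F / G) * x**2 / 2.0 - (tau_ratio / (12.0 * G)) * x**4)

def mean_x2(F=1.0, G=1.0, tau_ratio=0.1):
    """<x^2> over the isotropic distribution 4*pi*x^2 f(x) dx."""
    norm, _ = quad(lambda x: 4 * np.pi * x**2 * f_stationary(x, F, G, tau_ratio),
                   0, np.inf)
    m2, _ = quad(lambda x: 4 * np.pi * x**4 * f_stationary(x, F, G, tau_ratio),
                 0, np.inf)
    return m2 / norm
```

With $\tau_{\tiny{\mbox{H}}}/\tau_{\tiny{\mbox{ed}}}>0$, electric dipole damping pushes $\langle\omega^2\rangle$ below the Maxwellian value, which is the qualitative content of Equation~\ref{eq:wrot} below.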
\subsection{ISM Associated with Star-forming Regions}\label{sec:sfregion}
The classical multiphase model of the ISM proposed by \cite{mckee_and_ostriker_1977} is a simplified but still relevant picture of the ISM in star-forming regions.
In Figure~\ref{fig:sfregion}, we show an illustrative picture of a star-forming region where the ``magnetized'' molecular cloud is associated with an \HII\ region. The molecular cloud consists of a multiphase ISM (WNM and CNM) and is exposed to UV radiation from the nearby \HII\ region where the ISM is fully ionized at $T\approx10^4$K by OB stars. A small part of the high-density molecular cloud forms a PDR where dust is present in abundance as both large grains and PAHs, as shown by the far-IR/submillimeter emission (from large grains) and near-IR emission (e.g. 8 $\mu$m from PAHs). A low-velocity shock wave propagating into the ``magnetized'' molecular cloud far from the \HII\ region creates a ``C-type'' shock.
In this study, we assume that a high-velocity strong supernova shock wave (J-type shock) or strong UV radiation destroys the dust grains (including AME carriers) in the \HII\ region; therefore, AME does not rise from the \HII\ region itself, and the radio SED from the \HII\ region is dominated by free-free emission. We study the condition for AME formation in the CNM, WNM, and PDR in molecular clouds and investigate the relative strength of AME compared with free-free and thermal dust emissions that are also from the molecular cloud.
The approximate volume filling factors $f_{\tiny V}$ of the hot ionized medium (HIM), warm ionized medium (WIM), WNM, and CNM in our Galaxy are 0.5, 0.1, 0.4, and 0.01, respectively \citep{draine_2011book}. Although the WNM fills a significant fraction of the volume of the Galactic disk, the effective volume densities $n_{\tiny \mbox{H}} f_{\tiny V}$ of the WNM and CNM are similar (0.2 and 0.3 \cmvol); therefore, the contributions to the observed AME from the CNM and WNM are likely comparable for a given observing beam.
If OB stars are forming and creating the \HII\ region, the clouds are exposed to strong UV radiation, and some of them form a PDR. The significance of AME from a high-density PDR ($n_{\tiny \mbox{H}}\approx 10^5~\cmvol$) is difficult to assess due to its uncertain, and probably very small, volume filling factor; however, it could be an important contribution when the next-generation Very Large Array \citep[ngVLA;][]{ngvlabook} starts to resolve individual star-forming regions in galaxies at high angular resolution (0.5--50 mas).
Within a few tens of Myr, supernovae explode and their shock waves propagate into the clouds. For a low-ionization medium of average or higher density, like the CNM, WNM, and PDR, the shock is C-type: the shock velocities of the ionized and neutral particles differ due to the magnetic field, and the neutral particles can therefore have a supersonic drift velocity relative to the ionized particles \citep{draine_2011book}.
In Figure~\ref{fig:shocktube}, we show a schematic picture of the propagation of a C-shock into ISM containing dust particles of different sizes. The small orange circles in the pre-shock region on the left-hand side of the shock layer (hatched red lines in the middle) represent the nanoparticles that source AME, and the large dust grains existing in both pre- and post-shock regions are represented by larger blue circles. The velocity, temperature, and density profiles of the ISM in the shock layer are shown in the zoom-in panel. If a charged dust grain is coupled to the magnetic field as the C-shock propagates (to the left), the neutral particles, which have a high drift velocity ($V_n-V_i$) relative to the ions, collide with the dust particles via stochastic bombardment and introduce additional damping and excitation terms, destroying the small dust grains (small orange circles in the pre-shock region). We discuss this process in detail in Sections~\ref{sec:dynamics} and~\ref{sec:destroy}.
\begin{deluxetable}{lccc}[ht!]
\centering
\tablecaption{ISM Environment Parameters from \cite{draine_and_lazarian_1998}}\label{tab:ismparam}
\tablehead{\colhead{Parameter} &
\colhead{CNM} &
\colhead{WNM} &
\colhead{PDR} }
\startdata
\nh [\cmvol] & 30 & 0.4 & $10^5$\\
$T$ [K] & 100 & 6000 & 300\\
$T_d$ [K] & 20 & 20 & 50\\
$\chi$ & 1 & 1 & 3000\\
$x_{\mbox{\tiny H}}\equiv n(\mbox{H}^+)/$\nh & 0.0012 & 0.1 & 0.0001\\
$x_{\mbox{\tiny M}}\equiv n(\mbox{M}^+)/$\nh & 0.0003 & 0.0003 & 0.0002\\
$y\equiv 2n(\mbox{H}_2)/$\nh & 0 & 0 & 0.5\\
\enddata
\end{deluxetable}
\section{Rotational Dynamics of Spinning Nanoparticles in C-shocks} \label{sec:dynamics}
In the ISM associated with star-forming regions (CNM, WNM, and PDR), from which we expect to observe both free-free emission and AME, a large abundance of small nanoparticles produced by the shattering of large grains by supernova winds \citep{jones_etal_1996} is expected to increase the strength of AME. On the other hand, one can also expect the high spin angular velocities of the nanoparticles to create a strong centrifugal force that disrupts the particles themselves. This rotational disruption can decrease the abundance of the smallest nanoparticles, thereby decreasing the resulting AME.
Recently, \cite{hoang_etal_2019} introduced a model of spinning dust in C-shocks, accounting for this destruction effect. We will incorporate this process in the \texttt{SpDust} code \citep{spdust2009,silsbee_etal_2011} and compute the emissivity of spinning nanoparticles in the presence of C-shocks.
\subsection{Structure of C-shocks in CNM, WNM, and PDR}
Shocks in molecular clouds where H$_2$ rovibrational cooling is efficient are common and expected to often be C-type \citep{draine_2011book}. The energy dissipation in C-shocks with two fluids (ions and neutrals) is continuous rather than impulsive, and efficient radiative cooling keeps the gas cool \citep{draine_2011book}.
We use the Paris--Durham shock code \citep{flower_etal_2003,lesaffre_etal_2013,godard_etal_2019} to compute the velocity profiles of neutral and ion particles in C-shocks for the ISM with typical environmental parameters (Table~\ref{tab:ismparam}) for CNM, WNM, and PDR \citep{draine_and_lazarian_1998}.
The simulated shock is static. We assume that ISM parameters like temperature and density are continuous and not drastically different in the pre- and post-shock regions because of the continuous nature of C-shocks. For the pre-shock parameters, we use the default values in the code, except for the hydrogen density, gas temperature, and dust temperature ($n_{\tiny \mbox{H}}$, $T$, and $T_d$), which are chosen for the CNM, WNM, and PDR from Table~\ref{tab:ismparam}. For the initial shock velocity, we note that C-type shocks are generally slow (5--25 km/s), whereas high-velocity shocks are J-type \citep{godard_etal_2019}; we therefore choose 20 km/s. For the magnetic field strength, the Paris--Durham shock code uses the following parameterization \citep{godard_etal_2019}:
\begin{equation}\label{eq:bcode}
B=b\sqrt{\frac{\nh}{1~\mbox{cm}^{-3}}}~\mu \mbox{G}
\end{equation} where $b=1$ by default in the shock code.
In the upper panels of the plots in Figure~\ref{fig:shockprofile}, we show the velocity profiles of neutrals ($v_n$, blue dashed line) and ions ($v_i$, red dotted-dashed line) in the shock frame for C-shocks in the magnetized ISM, for the physical conditions of CNM, WNM, and PDR in Table~\ref{tab:ismparam} and a 20 km/s shock velocity. The solid black line is the thermal velocity profile ($v_{th}$). In the lower panels of the plots in Figure~\ref{fig:shockprofile}, we show the profile of the ratio between drift velocity ($v_n-v_i$) and thermal velocity ($v_{th}$):
\begin{equation}
s_d=\frac{v_n-v_i}{v_{th}}
\end{equation}
Although the detailed shapes of the profiles vary depending on the pre-shock density, temperature, and strength of the irradiating radiation \citep[e.g.,][]{godard_etal_2019}, the neutrals and ions have different velocity profiles for all three ISM conditions (CNM, WNM, and PDR) because the ions are decelerated by the magnetic field, and the drift velocity becomes supersonic ($s_d\gg1$) in the shock layer (Figure~\ref{fig:shockprofile}). Note that the computation of the velocity profiles stops when the neutrals and ions are about to recouple; however, a full profile is not necessary to confirm that supersonic drift develops in the C-shock.
The Mach number of the neutral drift velocity, $s_d$, depends on the shock velocity and sound speed, and the C-shock develops supersonic drift for the typical shock velocities (a few tens of kilometers per second) found in the ISM velocity--density plane \citep[e.g.,][]{draine_mckee_1993}. Supersonic drift ($s_d\gg1$) affects the calculation of the damping and excitation coefficients ($F$ and $G$) described in the following section (Section~\ref{sec:rotdyn}).
\subsection{Rotational Damping and Excitation in C-shocks}\label{sec:rotdyn}
Various damping and excitation processes in the rotational dynamics of grain particles are captured by the dimensionless damping and excitation coefficients $F$ and $G$ in Equation~\ref{eq:FP}. In this work, we consider additional damping and excitation processes due to the presence of supersonic neutral drift relative to charged grains \citep{hoang_etal_2019}.
As discussed in Section~\ref{sec:ame_instro}, we assume a spherical geometry when computing damping and excitation coefficients. For the damping and excitation coefficients for the supersonic ($s_d \gg 1$) and transonic cases ($s_d \sim 1$), we follow \cite{roberge_etal_1995} and adopt their results, as \cite{hoang_etal_2019} does.
Let $\hat{\boldsymbol{x}}\hat{\boldsymbol{y}}\hat{\boldsymbol{z}}$ be the reference frame fixed to the gas, such that the $\hat{\boldsymbol{z}}$-axis is directed along the magnetic field and the drift velocity $v_d$ lies in the $\hat{\boldsymbol{y}}\hat{\boldsymbol{z}}$-plane at an angle $\alpha$ to $\hat{\boldsymbol{z}}$. We consider a perpendicular shock ($\alpha=90^\circ$).
From \cite{roberge_etal_1995}, the dimensionless damping coefficients $\langle\Delta j_i\rangle$ are
\begin{equation}\label{eq:F}
\langle\Delta j_i\rangle =
\begin{cases}
-\left[\delta + M_0(s_d)\right]j_i, & i=x,y, \\
-M_0(s_d)j_i, & i=z.
\end{cases}
\end{equation}
In this equation,
\begin{equation}
M_0(s_d) = \frac{\sqrt{\pi}}{4s_d}\left[2(1+s^2_d)\mbox{erf}(s_d) - P\left(\frac{3}{2},s^2_d\right)\right]
\end{equation} where $P(a,x)$ is the regularized lower incomplete gamma function, and $\delta$ is a magnetic damping parameter measuring the relative efficiency of magnetic versus gas damping, in the sense that a large $\delta$ corresponds to efficient magnetic alignment \citep{roberge_etal_1995}; it is written as \citep{roberge_etal_1993}
\begin{equation}
\delta = \frac{3KVB^2}{4\sqrt{\pi}\nh m_{\tiny \mbox{H}}v_{th}b^4\Gamma_{\|}}.
\end{equation} where $K=10^{-13}\left(\frac{T_d}{18\mbox{\tiny K}}\right)^{-1}$ \citep{draine_2011book}, $V=\frac{4\pi}{3}a_{\mbox{\tiny cx}}^3$, $v_{th} = \sqrt{\frac{2kT}{m_{\tiny \mbox{H}}}}$, $b\approx a_{\mbox{\tiny cx}}$, and $\Gamma_{\|}=1.0$ for spherical grains \citep{roberge_etal_1993}.
For the magnetic field strength ($B$), we use Equation~\ref{eq:bcode}. Although $\delta$ is not well constrained due to the highly uncertain material parameter $K$ \citep{roberge_etal_1993}, we find that $\delta \ll M_0(s_d)$ for the ISM conditions that we consider (i.e., the high temperatures in the C-shock region).
The dimensionless angular momentum $j_i (i=x,y,z)$ is $j_i\equiv\frac{J_i}{\sqrt{IkT}}$ and $j_i\approx1$ for a spherical grain rotating with the equipartition energy \citep{roberge_etal_1995}. The dimensionless damping coefficient $F_{sd}$ for spherical grains due to supersonic neutral drift in the C-shock becomes
\begin{equation}
F_{sd} = \frac{1}{3}\displaystyle\sum_{i=x,y,z} \langle\Delta j_i\rangle.
\end{equation}
Also from \cite{roberge_etal_1995}, the dimensionless excitation coefficients $\langle (\Delta j_i)^2 \rangle$ are
\begin{equation}
\langle (\Delta j_x)^2 \rangle = D_T + \frac{T_d}{T} \left[2\delta+M_0(s_d)\right]\\
\end{equation}
\begin{equation}
\langle (\Delta j_y)^2 \rangle = D_P \sin^2\alpha + D_T \cos^2\alpha + \frac{T_d}{T} \left[2\delta+M_0(s_d)\right]\\
\end{equation}
\begin{equation}
\langle (\Delta j_z)^2 \rangle = D_P \cos^2\alpha + D_T \sin^2\alpha + \frac{T_d}{T} M_0(s_d)\\
\end{equation}
The quantities
\begin{equation}
D_T(s_d)=\frac{3}{4}\left[\left(1+2s_d^2\right) M_0(s_d) + \left( 1-2s_d^2 \right) M_2(s_d)\right]
\end{equation}
and
\begin{equation}
D_P(s_d)=\frac{3}{2}\left[M_0(s_d)-M_2(s_d)\right]
\end{equation} where
\begin{equation}
\begin{aligned}
M_2(s_d) = \frac{\sqrt{\pi}}{4}s_d\mbox{erf}(s_d) - \frac{3\sqrt{\pi}}{16}s_d^{-3}P\left(5/2,s_d^2\right) \\ + \frac{\sqrt{\pi}}{4}s_d^{-3}P\left(3/2,s_d^2\right)
\end{aligned}
\end{equation}
are dimensionless, monotonically increasing functions of $s_d$ that have the limiting values $D_T(0)=D_P(0)=1$ and satisfy the inequality $D_T(s_d) > D_P(s_d)$ for $s_d>0$ \citep{roberge_etal_1995}. The dimensionless excitation coefficient $G_{sd}$ for spherical grains due to supersonic neutral drift in C-shocks becomes
\begin{equation}
G_{sd} = \frac{1}{3}\displaystyle\sum_{i=x,y,z} \langle \left(\Delta j_i\right)^2\rangle.
\end{equation}
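The coefficients above are straightforward to implement. The Python sketch below, which interprets $P(a,x)$ as the regularized lower incomplete gamma function (\texttt{scipy.special.gammainc}, an assumption consistent with the quoted limits), reproduces the limiting values $D_T(0)=D_P(0)=1$ and the inequality $D_T>D_P$ for $s_d>0$:

```python
import numpy as np
from scipy.special import erf, gammainc  # gammainc(a, x) = regularized lower P(a, x)

def M0(s):
    """Damping function M_0(s_d)."""
    return (np.sqrt(np.pi) / (4 * s)) * (2 * (1 + s**2) * erf(s)
                                         - gammainc(1.5, s**2))

def M2(s):
    """Auxiliary function M_2(s_d)."""
    return ((np.sqrt(np.pi) / 4) * s * erf(s)
            - (3 * np.sqrt(np.pi) / 16) * s**-3 * gammainc(2.5, s**2)
            + (np.sqrt(np.pi) / 4) * s**-3 * gammainc(1.5, s**2))

def D_T(s):
    """Excitation function D_T(s_d)."""
    return 0.75 * ((1 + 2 * s**2) * M0(s) + (1 - 2 * s**2) * M2(s))

def D_P(s):
    """Excitation function D_P(s_d)."""
    return 1.5 * (M0(s) - M2(s))
```

Both $D_T$ and $D_P$ grow monotonically with $s_d$, so supersonic drift strongly enhances rotational excitation relative to the thermal case.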
\section{Rotational Disruption of Spinning Nanoparticles}\label{sec:destroy}
\subsection{Critical Angular Velocity for Disruption}
The tensile stress on a spherical grain with mass density $\rho$, radius $a$, and angular velocity $\omega$ is $S = \frac{1}{4}\rho\omega^2 a^2$, which can be converted into a relation between the critical angular velocity for grain disruption and the maximum grain tensile strength, $S_{\mbox{\tiny max}}$ \citep{hoang_2020},
\begin{equation}\label{eq:wcri}
\frac{\omega_{cri}}{2\pi} \simeq 5.72\times10^{10} a^{-1}_{-7} S^{1/2}_{\mbox{\tiny max,9}} \hat{\rho}^{-1/2}~~[\mbox{Hz}]
\end{equation} where $a_{-7}=a/(10^{-7}\mbox{cm})$, $S_{\mbox{\tiny max,9}}=S_{\mbox{\tiny max}}/(10^{9}~\mbox{erg cm}^{-3})$ and $\hat{\rho}=\rho/(3~\mbox{g cm}^{-3})$.
The exact value of the maximum tensile strength $S_{\mbox{\tiny max}}$ depends on the composition and structure of the dust grain and is largely unknown; compact grains can have higher tensile strengths than porous/composite grains \citep[e.g.,][]{hoang_etal_2019}. For example, a polycrystalline bulk solid has $S_{\mbox{\tiny max}}\sim10^9$--$10^{10}$ erg cm$^{-3}$, while an ideal material like diamond can have $S_{\mbox{\tiny max}}\gtrsim10^{11}$ erg cm$^{-3}$ \citep[][]{hoang_2020}. Since the smallest grains may be sheetlike, with approximately 100 carbon atoms, corresponding to the size of a large PAH \citep{draine_and_lazarian_1998,spdust2009}, it is reasonable to assume that the tensile strength of the nanoparticles considered in this study is not very large because they are not compact. Therefore, we vary $S_{\mbox{\tiny max}}$ over the range $10^8$--$10^{10}$ erg cm$^{-3}$. We emphasize, however, that the grain destruction model in this study depends on this poorly understood maximum tensile strength parameter.
\subsection{Spin Angular Velocity of Nanoparticles}
If one assumes a Maxwellian distribution for $f_a(\omega)$ with gas temperature $T$, one can solve Equation~\ref{eq:FP} to obtain $\langle\omega^2\rangle$ \citep{draine_and_lazarian_1998}:
\begin{equation}\label{eq:wrot}
\langle\omega^2\rangle = \frac{2}{1+\left[1+(G/F^2)(20\tau_{\tiny{\mbox{H}}}/\tau_{\tiny{\mbox{ed}}})\right]^{1/2}}\left(\frac{G}{F}\right)\left(\frac{3kT}{I}\right)
\end{equation}
This analytic expression lets us compute the angular velocity $\omega_{rot} \equiv \sqrt{\langle\omega^2\rangle}$ of grain particles with a rotational temperature $T_{rot}$ for a given grain size $a$ \citep{hoang_etal_2019},
\begin{eqnarray}\label{eq:wrot2}
\frac{\omega_{rot}}{2\pi} & = & \frac{1}{2\pi}\left(\frac{3kT_{rot}}{I}\right)^{1/2} \nonumber \\
& \simeq & 1.4\times 10^{10} a^{-5/2}_{-7} \left(\frac{T_{rot}}{10^3\mbox{K}}\right)^{1/2} \hat{\rho}^{-1/2}~[\mbox{Hz}]
\end{eqnarray}
which will be compared to the critical angular velocity $\omega_{cri}$ in Equation~\ref{eq:wcri} for centrifugal disruption of grain particles. The rotational temperature $T_{rot}$ is related to the gas temperature $T$ by
\begin{equation}
\frac{T_{rot}}{T} = \frac{2}{1+\left[1+(G/F^2)(20\tau_{\tiny{\mbox{H}}}/\tau_{\tiny{\mbox{ed}}})\right]^{1/2}}\left(\frac{G}{F}\right)
\end{equation} and if the nanoparticles are in suprathermal rotation, $T_{rot}>T$.
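Comparing Equations~\ref{eq:wcri} and~\ref{eq:wrot2} gives the range of grain sizes subject to rotational disruption. A minimal Python sketch using the numerical coefficients above, with the illustrative assumptions $T_{rot}=10^3$~K and $\hat{\rho}=1$:

```python
import numpy as np

def nu_cri(a_m7, Smax9=1.0, rho_hat=1.0):
    """Critical rotation frequency for disruption, Eq. (wcri) [Hz];
    a_m7 = a / (1e-7 cm)."""
    return 5.72e10 / a_m7 * np.sqrt(Smax9 / rho_hat)

def nu_rot(a_m7, Trot3=1.0, rho_hat=1.0):
    """Grain rotation frequency, Eq. (wrot2) [Hz]; Trot3 = T_rot / (1e3 K)."""
    return 1.4e10 * a_m7**-2.5 * np.sqrt(Trot3 / rho_hat)

def a_disrupt(Smax9=1.0, Trot3=1.0):
    """Largest size (in units of 1e-7 cm) with nu_rot > nu_cri,
    obtained by equating the two scaling relations."""
    return (1.4e10 * np.sqrt(Trot3) / (5.72e10 * np.sqrt(Smax9)))**(2.0 / 3.0)
```

For $S_{\mbox{\tiny max}}=10^9$ erg cm$^{-3}$ this gives $a \lesssim 0.4 \times 10^{-7}$ cm ($\approx 4$ \AA), so only the very smallest, AME-dominating grains are disrupted under these assumptions; a weaker $S_{\mbox{\tiny max}}=10^8$ erg cm$^{-3}$ pushes the threshold to $\approx 8$ \AA.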
\subsection{Grain Size Distribution}
The AME emissivity is determined by the ensemble of electric dipole emission from nanoparticles with a distribution of sizes $a$ ($\frac{1}{\nh} \frac{d\ngr}{da}$ in Equation~\ref{eq:emissi}), and most of the AME emissivity comes from the smallest particles ($a_{-7}\lesssim0.5$), as shown in \cite{spdust2009}. The shape of the grain size distribution, especially for small grains, is therefore a very important component of AME models \citep{hensley_and_draine_2017}; however, it is not well understood for different ISM environments. For carbonaceous grains, we assume that the grain size distribution follows the commonly used composite of log-normal and power-law distributions of \cite{weingartner_and_draine_2001} for grain radii $a_{\mbox{\tiny min}}=3.5$\AA\ $<a<a_{\mbox{\tiny max}}=100$\AA, as adopted in the \texttt{SpDust} code \citep{spdust2009},
\begin{equation}\label{eq:asize}
\begin{aligned}
\frac{1}{\nh} \frac{d\ngr}{da} = D(a) + \frac{C}{a}\left(\frac{a}{a_t}\right)^{\alpha} F(a;\beta;a_t)\\
\times
\begin{cases}
1 & a_{\mbox{\tiny min}}<a<a_t, \\
e^{-\left[(a-a_t)/a_c\right]^3}& a>a_t
\end{cases}
\end{aligned}
\end{equation}
where
\begin{equation}
F(a;\beta;a_t)=
\begin{cases}
1+\beta\frac{a}{a_t} & \beta \geq 0, \\
\left(1-\beta\frac{a}{a_t}\right)^{-1} & \beta < 0
\end{cases}
\end{equation}
and the log-normal distribution $D(a)$ is
\begin{equation}
D(a)=\displaystyle\sum^{2}_{i=1}\frac{B_i}{a}\exp\left\{ -\frac{1}{2}\left[\frac{\ln(a/a_{0,i})}{\sigma}\right]^2\right\},
\end{equation} with the normalization $B_i$ defined to place a total number $b_{C,i}$ of carbon atoms per H atom in the $i$th log-normal distribution. Here $b_{C,1}=0.75b_C$ and $b_{C,2}=0.25b_C$, with $b_C$ the total carbon abundance per H atom, $a_{0,1}=3.5$\angst, $a_{0,2}=30$\angst, and $\sigma=0.4$. This size distribution has a total of six parameters ($b_C,C,a_t,a_c,\alpha$, and $\beta$).
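The functional form of Equation~\ref{eq:asize} can be sketched in Python as follows; the parameter values ($C$, $a_t$, $a_c$, $\alpha$, $\beta$, and the log-normal normalizations $B_i$) are placeholders for illustration, not the \cite{weingartner_and_draine_2001} best-fit values:

```python
import numpy as np

ANG = 1e-8  # cm per angstrom

def F_curv(a, beta, a_t):
    """Curvature term F(a; beta, a_t)."""
    return 1 + beta * a / a_t if beta >= 0 else (1 - beta * a / a_t)**-1

def lognormal_D(a, B=(1.0, 1.0), a0=(3.5 * ANG, 30 * ANG), sigma=0.4):
    """Log-normal term D(a); B are placeholder normalizations
    (in WD01 they are set by the carbon abundances b_C,i)."""
    return sum(Bi / a * np.exp(-0.5 * (np.log(a / a0i) / sigma)**2)
               for Bi, a0i in zip(B, a0))

def dnda(a, C=1.0, a_t=10 * ANG, a_c=30 * ANG, alpha=-2.0, beta=0.0):
    """Size distribution of Eq. (asize); C, a_t, a_c, alpha, beta are
    illustrative placeholders."""
    cut = 1.0 if a < a_t else np.exp(-((a - a_t) / a_c)**3)
    return lognormal_D(a) + (C / a) * (a / a_t)**alpha * F_curv(a, beta, a_t) * cut
```

The exponential factor equals unity at $a=a_t$, so the piecewise cutoff joins continuously onto the power law.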
In Figure~\ref{fig:adist}, we plot the grain volume per H atom per logarithmic interval in dust size, $(4\pi a^3/3)d\ngr/d\ln a$, for three different models of \cite{weingartner_and_draine_2001} that match the extinction curve of the diffuse ISM. The blue dotted-dashed, green solid, and red dashed lines are the models with the parameters in lines 1, 4, and 7 of Table 1 of \cite{weingartner_and_draine_2001}, and the gray vertical line indicates the minimum dust size, $a_{\mbox{\tiny min}}=3.5$\AA, used in \texttt{SpDust}. In Figure~\ref{fig:adist}, we label these models ``case1'', ``case2'', and ``case3'', respectively.
The exact shape of the dust grain size distribution varies depending on the ISM conditions and is not well known for ISM in star-forming regions exposed to extreme physical conditions (strong radiation and shocks). In this study, we use the three labeled models in Figure~\ref{fig:adist} to span a range of dust size distributions, from the case without additional enhancement of small grains to the case with a significant enhancement of small grains, represented by the log-normal component, which leads to stronger AME. In principle, strong UV radiation from a massive star-forming region can exert radiative torques on dust grains, increasing the fraction of small grains by disrupting large grains \citep{hoang_etal_rat_2019}; this may increase the AME in an early stage of massive star formation, before the C-shock impacts the ISM and destroys the small nanoparticles. However, the grain size distribution is poorly constrained, and an extreme ISRF may be required to destroy large grains (see the discussion in Section~\ref{sec:riseame}).
\section{Radio-Millimeter Emission from Star-forming Regions}\label{sec:emission}
Recent observations of AME from star-forming regions challenge the widely accepted notion that the radio-millimeter SED of star-forming regions in the frequency range $\approx 10$--$100$ GHz is dominated by free-free and thermal dust emission. Since the observed SED is based on flux measurements with a finite beam, unknown ``environmental'' factors that are not directly related to the physics of the emission, such as the geometry and volume filling factor of the ISM, affect the SED. Therefore, we focus on the emissivity of each radiation process in order to isolate these environmental factors and characterize the significance of AME relative to free-free and thermal dust emission.
The emissivity $j_{\nu}$ is conventionally defined as the energy emitted at a frequency $\nu$ per unit volume, time, and solid angle (erg s$^{-1}$ sr$^{-1}$ cm$^{-3}$ Hz$^{-1}$). In this work, however, we use the emissivity per H atom, $\frac{j_\nu}{\nh}$, in erg s$^{-1}$ sr$^{-1}$ Hz$^{-1}$ (H atom)$^{-1}$ (see Equation~\ref{eq:emissi}), following previous works \citep{draine_and_lazarian_1998,spdust2009}. The emissivities of free-free and thermal dust emission are computed as a function of frequency $\nu$ for the CNM, WNM, and PDR conditions (Table~\ref{tab:ismparam}), and the emissivity of AME (Equation~\ref{eq:emissi}) before and after the impact of the C-shock is computed following Section~\ref{sec:dynamics} and compared with them.
\subsection{Calculating the Emissivities of Free-Free Emission, Thermal Dust Emission, and AME}
\subsubsection{Free-Free Emission}
The emissivity per H atom of free-free emission with electrons of temperature $T$ is \citep{radibook}
\begin{equation}\label{eq:jvff}
\frac{\jvff}{\nh} = \frac{1}{4\pi}2^5\pi\left(\frac{e^6}{3m_e c^3}\right)\left(\frac{2\pi}{3k m_e}\right)^{1/2}g_{ff}\frac{\nh}{\sqrt{T}}e^{-\frac{h\nu}{kT}}
\end{equation} where $g_{ff}$ is the Gaunt factor for free-free transitions. The free-free emissivity is implemented in \texttt{SpDust} \citep{spdust2009,silsbee_etal_2011} using $g_{ff}$ tabulated from \cite{sutherland_1998}.
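A minimal numerical sketch of Equation~\ref{eq:jvff} in Python (CGS units); the constant Gaunt factor $g_{ff}\approx5$ is a rough illustrative assumption, whereas the calculation in this work uses the tabulated values:

```python
import numpy as np

# CGS constants
e_ch, m_e, c = 4.8032e-10, 9.1094e-28, 2.9979e10  # esu, g, cm/s
k_B, h_pl = 1.3807e-16, 6.6261e-27                # erg/K, erg s

def j_ff_per_H(nu, n_H, T, g_ff=5.0):
    """Free-free emissivity per H atom, Eq. (jvff),
    in erg s^-1 sr^-1 Hz^-1 (H atom)^-1 (fully ionized gas assumed)."""
    pref = (1.0 / (4 * np.pi)) * 2**5 * np.pi * (e_ch**6 / (3 * m_e * c**3)) \
           * (2 * np.pi / (3 * k_B * m_e))**0.5
    return pref * g_ff * n_H / np.sqrt(T) * np.exp(-h_pl * nu / (k_B * T))
```

At radio frequencies $h\nu \ll kT$, so the spectrum is nearly flat in $\nu$, and the emissivity per H atom scales linearly with \nh\ (the volume emissivity scales as $n_e^2$).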
\subsubsection{Thermal Dust Emission}
The emissivity per H atom of thermal dust emission with dust temperature $T_d$ is \citep{radibook}
\begin{equation}\label{eq:jvbb1}
\begin{aligned}
\frac{\jvbb}{\nh} = \frac{\kappa_0}{\nh}\left(\frac{\nu}{\nu_0}\right)^{\beta}\frac{2h\nu^3}{c^2}\frac{1}{e^{\frac{h\nu}{k T_d}}-1}
\end{aligned}
\end{equation}
where $\kappa_0$ is the dust volume absorption coefficient (i.e., cross section per unit volume), and $\beta$ is the emissivity spectral index. More frequently used than $\kappa_0$ is the mass absorption coefficient $\kappa_0^{\prime}$ (cm$^2$ g$^{-1}$), with $\kappa_0=\kappa_0^{\prime}\rho_d$ for dust mass density $\rho_d$ (total dust mass per ISM volume). If gas and dust are well mixed in the same volume, $\rho_d$ can be inferred from the gas mass density ($\rho_{\mbox{\tiny gas}}=\nh m_{\mbox{\tiny H}}$) and the dust-to-gas mass ratio $M_d/M_g=0.0083\,\rho_{\mbox{\tiny ref}}/(3~\mbox{g cm}^{-3})$ \citep{draine_2011book},
\begin{equation}
\begin{aligned}
\rho_d =\nh m_{\mbox{\tiny H}}\times0.0083\left(\frac{\rho_{\mbox{\tiny ref}}}{3~\mbox{g cm}^{-3}}\right)
\end{aligned}
\end{equation}
where the reference density $\rho_{\mbox{\tiny ref}}$ is the solid density of a grain particle; it differs between dust types, with an intermediate value of $3~\mbox{g cm}^{-3}$ \citep{draine_2011book}. Both $\kappa_0^{\prime}$ and $\beta$ vary depending on the type of dust and wavelength. In this work, we adopt $\kappa_0^{\prime}=1.8$ cm$^2$ g$^{-1}$ at $\nu_0=599.98$ GHz from \cite{clark_etal_2016} and $\beta=2.0$ \citep[e.g.,][]{schnee_etal_2010}. Then \jvbb\ can be written as
\begin{equation}\label{eq:jvbb}
\begin{aligned}
\frac{\jvbb}{\nh} = 0.015\left(\frac{\rho_{\mbox{\tiny ref}}}{3~\mbox{g cm}^{-3}}\right) m_{\mbox{\tiny H}} \left(\frac{\nu}{\nu_0}\right)^{2} \frac{2h\nu^3}{c^2}\frac{1}{e^{\frac{h\nu}{k T_d}}-1}
\end{aligned}
\end{equation}
We assume that even though the small nanoparticles (i.e., the source of AME) are destroyed by shocks, the bulk of the grain volume is still dominated by large grains (Figure~\ref{fig:adist}), so the thermal dust emission is not much affected by the disruption of the small nanoparticles. Therefore, we use the same \jvbb\ before and after the shock. We implemented the emissivity of thermal dust emission in the \texttt{SpDust} code.
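A minimal sketch of Equation~\ref{eq:jvbb}, using the adopted $\kappa_0^{\prime}=1.8$ cm$^2$ g$^{-1}$, $\nu_0=599.98$ GHz, $\beta=2$, and $\rho_{\mbox{\tiny ref}}=3$ g cm$^{-3}$ (so that $1.8\times0.0083\approx0.015$); the function and argument names are illustrative, not from \texttt{SpDust}.

```python
import math

h_pl = 6.6261e-27    # Planck constant [erg s]
c    = 2.9979e10     # speed of light [cm/s]
k_B  = 1.3807e-16    # Boltzmann constant [erg/K]
m_H  = 1.6726e-24    # hydrogen mass [g]

def jbb_per_H(nu, T_d, kappa0p=1.8, nu0=599.98e9, beta=2.0, d2g=0.0083):
    """Thermal dust (modified blackbody) emissivity per H atom
    [erg s^-1 sr^-1 Hz^-1 (H atom)^-1], assuming rho_ref = 3 g/cm^3."""
    kappa_per_H = kappa0p * d2g * m_H              # cm^2 per H atom (~0.015 m_H)
    planck = 2.0 * h_pl * nu**3 / c**2 / math.expm1(h_pl * nu / (k_B * T_d))
    return kappa_per_H * (nu / nu0)**beta * planck
```

For $T_d\approx20$ K the emissivity rises steeply from 100 GHz toward $\nu_0$, as expected for a $\beta=2$ modified blackbody.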
\subsubsection{AME}
The emissivity per H atom of AME is shown in Equation~\ref{eq:emissi}. We modify the \texttt{SpDust} code and add the damping and excitation coefficients $F_{sd}$ and $G_{sd}$, due to the interaction with supersonic neutral drift in C-shocks (Section~\ref{sec:dynamics}), to the other damping and excitation coefficients, $F_i$ and $G_i$, when computing the total damping and excitation coefficients:
\begin{equation}
\begin{aligned}\label{eq:coeffs}
F=\displaystyle\sum_{i}F_i + F_{sd}\\
G=\displaystyle\sum_{i}G_i + G_{sd}.
\end{aligned}
\end{equation}
Since grains might not spin around the axis of their greatest moment of inertia, \cite{silsbee_etal_2011} introduced correction terms to the damping coefficient for incoming particles. We likewise apply correction terms to the damping coefficient $F_{sd}$ in order to account for the randomized orientation of grains relative to their angular momentum vectors. Because the C-shock impact involves collisions between neutrals (H, H$_2$, and He) and charged grains, we use the correction terms for charged grains with neutral impactors (Equation (140) in \cite{silsbee_etal_2011}).
\subsubsection{Total Emissivity in Radio-millimeter Wavelength}
Using \texttt{SpDust}, we compute the total emissivity per H atom in the radio-millimeter wavelength range, including the free-free, thermal dust, and AME contributions, using Equations~\ref{eq:jvff}, \ref{eq:jvbb}, and \ref{eq:emissi}:
\begin{equation}
\frac{j_\nu}{\nh} =
\left(\frac{\jvff}{\nh} + \frac{\jvbb}{\nh} + \frac{\jvsp}{\nh} \right)
\end{equation}
The original input parameters to run \texttt{SpDust} are the ISM environment parameters (Table~\ref{tab:ismparam}). Two additional parameters that we add to these standard \texttt{SpDust} parameters are the dust emissivity spectral index ($\beta$ in Equation~\ref{eq:jvbb1}) and the minimum size of nanoparticles ($a_{\mbox{\tiny min}}$ in Equation~\ref{eq:asize}).
The total emissivity of the radio-millimeter emission is computed for three different ISM environments, CNM, WNM, and PDR, before and after the C-shock impacts the ISM.
\subsection{AME before the Impact of the C-shock}
Using \texttt{SpDust}, we compute the AME emissivity \jvsp~(Equation~\ref{eq:emissi}) without including the damping and excitation coefficients due to the supersonic neutral drift ($F_{sd}$ and $G_{sd}$ in Equation~\ref{eq:coeffs}). In Figure~\ref{fig:wrot_before}, we show the angular velocity $\omega_{rot}$ (connected blue dots) in Equation~\ref{eq:wrot2} as a function of grain size $a$, based on the computed damping and excitation coefficients for CNM, WNM, and PDR, together with the critical angular velocity $\omega_{cri}$ (red lines) in Equation~\ref{eq:wcri} for a range of maximum tensile strength, $S_{\mbox{\tiny max}}=10^8$ -- $10^{10}$ erg cm$^{-3}$. As the grain size decreases, both $\omega_{rot}$ and $\omega_{cri}$ increase. If $\omega_{rot}$ exceeded $\omega_{cri}$ below some grain size (the size at which $\omega_{rot}=\omega_{cri}$), grains smaller than that size would be broken apart by centrifugal force; however, this does not occur for the given range of $S_{\mbox{\tiny max}}$ when the impact of the C-shock is not considered (Figure~\ref{fig:wrot_before}).
Therefore, the integration over grain size in Equation~\ref{eq:emissi} is performed over the full range (3.5--100\angst). The resulting emissivities, including AME (dotted-dashed, solid, and dashed blue lines for cases 1, 2, and 3, respectively), free-free (red dashed line), and thermal dust emission (green dashed line) for CNM, WNM, and PDR, are shown in Figure~\ref{fig:emiss_before}. While the free-free emissivity increases from CNM to PDR, the AME emissivity remains significantly larger than the free-free emissivity for all three conditions. In particular, for PDR, the peak frequency moves to higher frequencies ($>100$ GHz) and affects the emissivity at millimeter wavelengths, which would otherwise be dominated by thermal dust emission.
\subsection{AME after the Impact of the C-shock}
We compute the AME emissivity \jvsp~(Equation~\ref{eq:emissi}) including the damping and excitation coefficients due to the supersonic neutral drift ($F_{sd}$ and $G_{sd}$ in Equation~\ref{eq:coeffs}) by following the formalism described in Section~\ref{sec:dynamics}. The drift parameter $s_d$ varies with the exact shock conditions in the ISM; we adopt $s_d=10$ based on Figure~\ref{fig:shockprofile}.
In Figure~\ref{fig:wrot_after}, we show the angular velocity $\omega_{rot}$ (connected blue dots) as a function of grain size $a$, based on the computed damping and excitation coefficients including $F_{sd}$ and $G_{sd}$, for CNM, WNM, and PDR, together with the critical angular velocity $\omega_{cri}$ (red lines). Owing to the excitation by supersonic neutral drift, $\omega_{rot}$ increases significantly, and the critical grain size below which destruction occurs for grains with maximum tensile strength $S_{\mbox{\tiny max}}=10^8$~erg cm$^{-3}$ becomes larger ($a>1$ nm, or $10$\angst). For each ISM condition in Figure~\ref{fig:wrot_after}, we find the critical grain size at which $\omega_{rot}$ equals $\omega_{cri}$ for $S_{\mbox{\tiny max}}=10^8$~erg cm$^{-3}$ and use this size as \amin\ when integrating Equation~\ref{eq:emissi}, which reduces the AME emissivity.
The resulting emissivities, including AME (dotted-dashed, solid, and dashed blue lines for cases 1, 2, and 3, respectively) impacted by the C-shock, free-free (red dashed line), and thermal dust emission (green dashed line) for CNM, WNM, and PDR, are shown in Figure~\ref{fig:emiss_after}. Compared to Figure~\ref{fig:emiss_before}, we find that the AME emissivity is significantly (for CNM) or almost entirely (for WNM and PDR) suppressed because the small grains are destroyed by the centrifugal force arising from their increased spin angular velocity.
\section{Discussion}\label{sec:discuss}
In Section~\ref{sec:emission}, we demonstrate that the impact of C-shocks can suppress the AME emissivity for typical conditions of the ISM (CNM, WNM, and PDR). Although progress has been made \citep[for a review, see][]{dickinson_etal_2018}, the ISM conditions of star-forming regions with strong AME are still not well known; many objects without AME detections have physical conditions similar to those with detections \citep{scaife_2013}. In this section, we propose a hypothesis for the rise and fall of AME in star-forming regions according to their evolutionary stage (Sections~\ref{sec:riseame} and \ref{sec:fallame}), consider the `observed' SED predicted from the volume-integrated composite emissivity for monotonically or mildly varying ISM physical parameters (Section~\ref{sec:obsame}), and discuss other rotational disruption processes (Section~\ref{sec:met}), dust fragmentation (Section~\ref{sec:frag}), and the difficulty and prospects of extragalactic AME detection in the era of high-resolution radio observations (Section~\ref{sec:ngvla}).
\subsection{The Rise of AME in the Early Stage of Star Formation}\label{sec:riseame}
A typical lifetime of OB stars is 1--10 Myr. In the early stage of star formation ($\lesssim 10$ Myr), the strong radiation from young stars exerts a radiative torque on dust particles and increases their angular velocity \citep{draine_and_weingartner_1997,lazarian_and_hoang_2007}. The radiative torque spins up large grains more efficiently, such that grains larger than $a_{\mbox{\tiny disr}}$ are disrupted by the centrifugal force due to the increased angular velocity \citep{hoang_etal_rat_2019},
\begin{equation}
\left(\frac{a_{\mbox{\tiny disr}}}{0.1\mu m}\right)^{2.7} \approx 0.046\gamma^{-1} \bar{\lambda}_{0.5}^{-1.7} \left(\frac{U_6}{1.2}\right)^{-1/3} S_{\mbox{\tiny max,9}}^{1/2}
\end{equation} where $\gamma$ is the anisotropy parameter of the radiation field ($0<\gamma<1$), and $\bar{\lambda}_{0.5}=(\bar{\lambda}/0.5\mu m)$ for the mean wavelength of the radiation field $\bar{\lambda}$. Here $U_6 = U/10^6$, where $U$ is the radiation energy density normalized by the ISRF energy density in the solar neighborhood ($u_{\mbox{\tiny ISRF}}=5.29\times10^{-14}$ erg cm$^{-3}$; \citealt{draine_2011book}). This radiative torque disruption is more efficient than other disruption mechanisms (thermal sublimation, thermal sputtering, grain shattering) in destroying large ($a\gtrsim 0.1\mu m$) grains \citep{hoang_etal_rat_2019}.
Since the radiation field energy density is $u_{\mbox{\tiny rad}} = F/c$, where $F=L/(4\pi R^2)$ is the flux at a distance $R$ from a source with bolometric luminosity $L$, we can write $U$ as follows:
\begin{equation}\label{eq:radfield}
U = \frac{u_{\mbox{\tiny rad}}}{u_{\mbox{\tiny ISRF}}} \approx 2100 \left(\frac{L}{10^7 L_\odot}\right)\left(\frac{R}{10\mbox{pc}}\right)^{-2}
\end{equation}
If a massive star-forming region contains 100 O stars of bolometric luminosity $\sim 10^5 L_{\odot}$ each, with anisotropic illumination ($\gamma=0.1$) and $\bar{\lambda}=0.5\mu m$, then $a_{\mbox{\tiny disr}}\approx 0.14\mu m$ at 10 pc and $a_{\mbox{\tiny disr}}\approx 0.25\mu m$ at 100 pc from the center of the star-forming region for grains with maximum tensile strength $S_{\mbox{\tiny max}}=10^9$ erg cm$^{-3}$. So, in typical extragalactic star-forming regions with massive star formation, grains larger than 0.2$\mu m$ (or $2\times 10^{-5}$ cm) will be disrupted by the radiative torque, while small nanoparticles ($a\sim10^{-7}$ cm) remain resilient against the centrifugal force.
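Under these assumptions, Equation~\ref{eq:radfield} and the $a_{\mbox{\tiny disr}}$ scaling can be combined into a short script; it reproduces the quoted thresholds to within $\sim$20\%, the residual difference presumably coming from rounding in the prefactors.

```python
import math

def U_norm(L_Lsun, R_pc):
    """Radiation density relative to the ISRF (Eq. radfield)."""
    return 2100.0 * (L_Lsun / 1.0e7) / (R_pc / 10.0)**2

def a_disr_um(U, gamma=0.1, lam_bar_um=0.5, S_max9=1.0):
    """Rotational-disruption size threshold [um] from the quoted scaling."""
    U6 = U / 1.0e6
    rhs = 0.046 / gamma * (lam_bar_um / 0.5)**(-1.7) \
        * (U6 / 1.2)**(-1.0 / 3.0) * math.sqrt(S_max9)
    return 0.1 * rhs**(1.0 / 2.7)

# 100 O stars of ~1e5 Lsun each -> L = 1e7 Lsun; the text quotes
# a_disr ~ 0.14 um at 10 pc and ~ 0.25 um at 100 pc
```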
Detailed dust grain modeling by \cite{silsbee_draine_2016} suggests that `fluffy' grains exposed to solar radiation can be disrupted by increased spin. However, in order to make the radiation pressure an efficient mechanism for breaking large grains, it may require an extreme radiation field (Draine 2021 private communication), which is much larger than the value from Equation~\ref{eq:radfield}.
The disrupted large grains are likely to be added as small grains and change the shape of the dust size distribution by steepening the power-law distribution in Equation~\ref{eq:asize}. This implies that the AME might be stronger in young (10 Myr or less) and massive ($\sim 100 M_\odot$) star-forming regions before a supernova explosion drives a shock wave into the ISM, which is consistent with the interpretation of the extragalactic AME observation by \cite{murphy_etal_2018}, where the best AME model for the source suggests that AME is from a nascent star-forming region with very young ($<3$ Myr) massive stars.
A compilation of observations of AME from Galactic star-forming regions associated with \HII\ regions is reported in \cite{dickinson_2013}. It is difficult to compare the reported significance of AME detections for individual regions because the observations are made at different angular scales and the exact definitions of detection significance differ. However, among the observed star-forming regions, those that show a significant flux enhancement over the thermal free-free emission at $\sim 30$ GHz are powered by OB stars \citep{finkbeiner_etal_2004,dickinson_etal_2009,planck_2011,shuping_etal_2012,tibbs_etal_2012}. On the other hand, AME is not detected from star-forming regions that are relatively older and were previously impacted by shocks, like the Orion Nebula \citep{planck_2011}, or from supernova remnants like 3C 396 \citep{cruciani_etal_2016}. These observations suggest that another important environmental factor for understanding the observed AME might be whether or not the ISM has been impacted by a shock, which is correlated with the age of the star-forming region.
\subsection{The Fall of AME in the Later Stage of Star Formation}\label{sec:fallame}
As we show in Section~\ref{sec:emission}, the C-shock in a magnetized medium can suppress AME by disrupting small nanoparticles ($a \sim 10^{-7}$ cm). This implies that if a supernova explodes within 10--100 Myr, when massive stars become old enough to die, the star-forming region will be impacted by the C-shock and may no longer be a source of strong AME. Here we argue that this scenario is plausible by comparing several important timescales: the shock propagation time, neutral flow time, rotational disruption time, and dust reformation time.
Shock propagation time is the time for the shock front with speed $v_{s}$ to propagate a distance $l$ into the ISM and can be written as
\begin{equation}
\tau_{\mbox{\tiny prop}} = 4.8\times10^5\left(\frac{l}{10\mbox{pc}}\right)\left(\frac{20\mbox{km/s}}{v_s}\right) \mbox{yr}.
\end{equation}
Neutral flow time is the time for neutrals with drift velocity $v_{\mbox{\tiny drift}}$ to cross the shock length scale $l_s$ and can be written as \citep{hoang_etal_2019}
\begin{equation}
\tau_{\mbox{\tiny flow}}=30\left(\frac{l_s}{10^{15}\mbox{cm}}\right)\left(\frac{10\mbox{km/s}}{v_{\mbox{\tiny drift}}}\right) \mbox{yr}.
\end{equation}
Rotational disruption time is the time required to spin up nanoparticles with maximum tensile strength $S_{\mbox{\tiny max}}$ to $\omega_{cri}$ and can be written as \citep{hoang_etal_2019}
\begin{equation}
\tau_{\mbox{\tiny disr}}\simeq0.005a_{-7}^4\left(\frac{\nh}{10^5\mbox{cm}^{-3}}\right)^{-1}S_{\mbox{\tiny max,9}}\left(\frac{v_{\mbox{\tiny drift}}}{10\mbox{km/s}}\right)^{-3} \mbox{yr}.
\end{equation}
Dust reformation time is the time required to replenish the disrupted nanoparticles, most likely via fresh dust formation in stellar winds from low- and intermediate-mass stars evolved off the main sequence, because dust growth via coagulation is a very slow process with gigayear timescales \citep{asano_etal_2013}. In particular, the thermally pulsating asymptotic giant branch (AGB) phase of intermediate-mass stars is likely the most promising site for dust formation \citep{morgan_and_edmunds_2003}. Given that the main-sequence lifetime of intermediate-mass ($M \sim 3-5M_{\odot}$) stars is $\gtrsim 100$ Myr \citep{schaller_etal_1992}, the dust reformation time via stellar winds from evolved stars is at least $\sim$100 Myr, $\tau_{\mbox{\tiny reform}}\gtrsim100~\mbox{Myr}$, even though this is probably the fastest way of replenishing the nanoparticles.
To understand the effect of gas bombardment on the disruption of small nanoparticles in C-shocks, we need to compare $\tau_{\mbox{\tiny disr}}$ with $\tau_{\mbox{\tiny flow}}$:
\begin{equation}
\begin{aligned}
\frac{\tau_{\mbox{\tiny flow}}}{\tau_{\mbox{\tiny disr}}}\simeq6000a^{-4}_{-7}\left(\frac{l_s}{10^{15}\mbox{cm}}\right) \left(\frac{\nh}{10^5\mbox{cm}^{-3}}\right)S_{\mbox{\tiny max,9}}^{-1}\\\times\left(\frac{v_{\mbox{\tiny drift}}}{10\mbox{km/s}}\right)^{2}.
\end{aligned}
\end{equation}
In the shock layer ($l_s\approx 10^{15}$ cm), $\frac{\tau_{\mbox{\tiny flow}}}{\tau_{\mbox{\tiny disr}}}$ for CNM and PDR is significantly larger than 1 for typical values of nanoparticle size $a \approx 10^{-7}$ cm, drift velocity $v_{\mbox{\tiny drift}}\approx 10$ km/s, and maximum tensile strength $S_{\mbox{\tiny max}} \approx 10^9$ erg cm$^{-3}$. For WNM, $\frac{\tau_{\mbox{\tiny flow}}}{\tau_{\mbox{\tiny disr}}}$ is less than 1 ($\sim0.024$) for the same values of $a$, $v_{\mbox{\tiny drift}}$, and $S_{\mbox{\tiny max}}$, but $\tau_{\mbox{\tiny flow}}$ and $\tau_{\mbox{\tiny disr}}$ become comparable as the particle size decreases ($a_{-7}<1$) and $v_{\mbox{\tiny drift}}$ increases (for a high-velocity shock). Wherever $\tau_{\mbox{\tiny flow}}$ is much larger than $\tau_{\mbox{\tiny disr}}$, the grain disruption process is therefore almost instantaneous once the shock sweeps the ISM.
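These scalings are easy to tabulate. In the sketch below, the WNM value reproduces the quoted ratio of $\sim$0.024 only if one assumes $\nh\approx0.4$ cm$^{-3}$ for the WNM; that density is an assumption here, since the Table~\ref{tab:ismparam} values are not restated in this section.

```python
def tau_flow_yr(l_s_cm=1.0e15, v_drift_kms=10.0):
    """Neutral flow time [yr]."""
    return 30.0 * (l_s_cm / 1.0e15) * (10.0 / v_drift_kms)

def tau_disr_yr(a_e7=1.0, n_H=1.0e5, S_max9=1.0, v_drift_kms=10.0):
    """Rotational disruption time [yr]; a_e7 = a / (1e-7 cm)."""
    return 0.005 * a_e7**4 / (n_H / 1.0e5) * S_max9 / (v_drift_kms / 10.0)**3

def flow_over_disr(l_s_cm=1.0e15, a_e7=1.0, n_H=1.0e5, S_max9=1.0, v_drift_kms=10.0):
    """Ratio tau_flow / tau_disr; defaults reproduce the coefficient 6000."""
    return tau_flow_yr(l_s_cm, v_drift_kms) / tau_disr_yr(a_e7, n_H, S_max9, v_drift_kms)
```

With the default (PDR-like) density the ratio is 6000; scaling linearly with $\nh$, an assumed WNM density of 0.4 cm$^{-3}$ gives 0.024.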
Even though the nanoparticles are quickly disrupted in the shock layer ($\frac{\tau_{\mbox{\tiny flow}}}{\tau_{\mbox{\tiny disr}}}\gg 1$), AME would not remain suppressed if the small nanoparticles were reformed quickly after the shock passage via dust formation in AGB stellar winds. However, the shock propagation timescale $\tau_{\mbox{\tiny prop}}$ is much shorter than the dust reformation time $\tau_{\mbox{\tiny reform}}\gtrsim100~\mbox{Myr}$, implying that the shock sweeps the ISM and disrupts the nanoparticles long before AGB stars can replenish the dust grains.
Dust grains are also produced in the supernova explosion \citep[for a review, see][]{sarangi_etal_2018}. But in such high-temperature gas ($T>10^6$K), a 0.1$\mu$m size grain survives for $<0.1$Myr \citep{draine_2011book}, and the theoretical modeling works of dust formation and evolution in the supernova explosion suggest that the dust produced in the supernova explosion is mostly large grains \citep[$>1\mu$m;][]{brooker_etal_2021,sarangi_etal_2018}. Also, \cite{gall_etal_2014} reported that only grain size distributions with grain radii larger than 0.25$\mu$m with a lower limit of 0.7$\mu$m can reproduce the observed supernova extinction curves. The large dust grains formed in the high-temperature supernova explosion do not survive long and are not likely to be the source of AME.
\subsection{Observed Radio-millimeter SED}\label{sec:obsame}
What we compute is the emissivity, not the observed flux density, as we note in Section~\ref{sec:emission}. However, we can infer the significance of the observed AME relative to the free-free and thermal dust emission by assuming a simple profile of the physical parameters.
In a simple spherical geometry, the observed SED of the star-forming region at distance $D$ is given by the flux density $F_\nu$ at frequency $\nu$,
\begin{equation}\label{eq:obsemiss}
F_\nu = \frac{1}{4\pi D^2}\int^{R_f}_{R_i}dr4\pi r^2\nh \left(\frac{4\pi j_\nu}{\nh}\right)
\end{equation}
where
\begin{equation}
\frac{4\pi j_\nu}{\nh} =
\left(\frac{4\pi \jvff}{\nh} + \frac{4\pi \jvbb}{\nh} + \frac{4\pi \jvsp}{\nh} \right)
\end{equation}
and the volume integration is performed from the inner radius $R_i$ to the outer radius $R_f$ of the star-forming region.
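The shell integral in Equation~\ref{eq:obsemiss} can be sketched with a simple midpoint rule; the function names and the constant test profiles below are illustrative only.

```python
import math

def flux_density(fourpi_j_over_nH, n_H_of_r, R_i, R_f, D, n_steps=1000):
    """F_nu = (1 / 4 pi D^2) * int_{R_i}^{R_f} 4 pi r^2 n_H(r) (4 pi j_nu / n_H) dr,
    evaluated with the midpoint rule; all lengths in consistent (e.g. cgs) units.
    fourpi_j_over_nH(r) returns 4*pi*j_nu/n_H at radius r."""
    dr = (R_f - R_i) / n_steps
    total = 0.0
    for i in range(n_steps):
        r = R_i + (i + 0.5) * dr
        total += 4.0 * math.pi * r**2 * n_H_of_r(r) * fourpi_j_over_nH(r) * dr
    return total / (4.0 * math.pi * D**2)
```

For constant $\nh$ and emissivity the integral reduces to $\nh\,(4\pi j_\nu/\nh)\,(R_f^3-R_i^3)/(3D^2)$, a convenient analytic check.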
From Equation~\ref{eq:obsemiss} and the emissivity equations (Equations~\ref{eq:jvff}, \ref{eq:jvbb}, and \ref{eq:emissi}), we can infer that the observed flux density of free-free emission depends on the radial distributions of \nh\ and $T$, the flux density of thermal dust emission depends on the radial distribution of \nh, and the flux density of AME depends on the radial distributions of \nh, $T$, $\chi$, and $x_{\mbox{\tiny H}}$.
In general, for each ISM phase (CNM, WNM, and PDR) around star-forming regions, $\nh(r)$, $T(r)$, $\chi(r)$, and $x_{\mbox{\tiny H}}(r)$ decrease monotonically and are not likely to vary strongly. Therefore, the relative strength of the free-free, thermal dust, and AME emissivities will not change much with $r$. To verify this, for each ISM condition we lowered $\nh(r)$, $T(r)$, $\chi(r)$, and $x_{\mbox{\tiny H}}(r)$ by an order of magnitude from their typical values and recomputed the total emissivity. We find that although the peak of the AME emissivity shifts, the relative strength of each emission component changes little under these variations.
This implies that the observed radio-millimeter SED resulting from the volume integration of the emissivity reflects the relative strength of each emission component and is not much altered by the spatial variation of the ISM physical parameters. This justifies our argument: for smooth and monotonic variation of the ISM parameters, the shape of the observed radio-millimeter SED, a result of the volume integration of the free-free, thermal dust, and AME emissivities, is determined by the relative contribution of each component to the total emissivity.
\subsection{Regular Mechanical Torque by Subsonic Drift}\label{sec:met}
In this study, we assume spherical dust grains affected by a stochastic mechanical torque from the supersonic drift of dust relative to the neutral atoms \citep{hoang_etal_2019}. However, realistic nanoparticles are expected to be irregular and to have helicity, which increases the spin angular momentum through interaction with even subsonic astrophysical flows \citep{lazarian_and_hoang_2007a,hoang_etal_2018}. This mechanical torque (called MET) was originally proposed for grain alignment \citep{lazarian_and_hoang_2007a} and is known to be much more efficient at spinning up nanoparticles than the stochastic mechanical torque \citep{lazarian_and_hoang_2021}. Our understanding of the impact of MET is primarily based on qualitative guidance from a simple model \citep{lazarian_and_hoang_2007a}; however, more complicated models and analyses \citep[e.g.,][]{das_and_weingartner_2016,hoang_etal_2018,reissl_etal_2022} support the idea that irregular grains exhibit helicity while interacting with gaseous flows, and the corresponding regular torques dominate the stochastic mechanical torques \citep{lazarian_and_hoang_2021}. The MET is difficult to calculate and depends strongly on grain shape \citep{hoang_lazarian_2018}, and no numerical tool is yet available to characterize its performance in astrophysical situations \citep{lazarian_and_hoang_2021}. Although it is hard to make a quantitative argument about the impact of MET on dust disruption, rotational disruption by the centrifugal force from mechanical spin-up (regular, stochastic, or both) may well be even more efficient in real astronomical situations \citep{hoang_etal_2019}.
\subsection{Grain Fragmentation}\label{sec:frag}
In this study, we assume that small nanoparticles with angular velocities higher than the critical angular velocity for centrifugal disruption are instantaneously and completely destroyed. However, because the rotational disruption process cannot break grains down to atoms, one can expect a continuous hierarchical disruption in which large grains become small grains, changing the grain size distribution \citep[e.g.,][]{guillet_etal_2011}. Our study does not model the fragmentation of grains. If grains are disrupted into smaller grains and those small grains survive for a long enough time, one can expect increased AME. However, we argue that smaller grains fragmented from large grains with angular momentum larger than the critical value have higher angular velocities and will be disrupted quickly.
Let us suppose that one large spherical grain with angular velocity $\omega$ and moment of inertia $I=\frac{2}{5}MR^2$ for mass $M$ and radius $R$ has angular momentum $J=I\omega$ and fragments into two equal-sized spherical grains. Each of the two small grains then has angular momentum
$J^{\prime}=\frac{2}{5}\left(\frac{M}{2}\right){R^\prime}^2{\omega^\prime}$. If one assumes that the total angular momentum is conserved ($J=2J^\prime$), then $\omega^\prime = \omega \left(\frac{R}{R^\prime}\right)^2$. In this simple fragmentation scenario, the fragment angular velocity is inversely proportional to the square of the dust size ($\omega\sim\frac{1}{a^2}$), which increases faster than the critical angular velocity $\omega_{cri}$ in Equation~\ref{eq:wcri} as the dust size decreases. Although this simple scenario does not fully represent reality, it is likely that small grains fragmented from large grains have angular velocities higher than $\omega_{cri}$ and therefore are disrupted easily.
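The scaling argument above generalizes to $n$ equal fragments of a uniform sphere (equal volumes, total angular momentum conserved and shared equally); the closing comment assumes $\omega_{cri}\propto a^{-1}$, the standard tensile-strength scaling.

```python
def omega_after_split(omega, n_frag=2):
    """Fragment spin after splitting a uniform sphere (angular velocity omega)
    into n_frag equal spheres sharing the angular momentum equally.
    J = (2/5) M R^2 omega = n * (2/5)(M/n) R_f^2 omega'  with  R_f = R / n^(1/3),
    so omega' = omega * (R / R_f)^2 = omega * n^(2/3)."""
    return omega * n_frag**(2.0 / 3.0)

# two fragments spin ~1.59x faster; since R_f < R, repeated splitting gives
# omega ~ 1/a^2, outpacing omega_cri ~ 1/a (assumed scaling)
```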
\subsection{AME from Extragalactic Star-forming Region in the Era of ngVLA}\label{sec:ngvla}
Detection of AME from extragalactic star-forming regions is rare (reported for only two galaxies). Assuming that the average ISM conditions in other galaxies do not differ from those in our Milky Way, the following observational selection effects on the detection of \HII\ regions and AME may explain this rarity.
(i) AME detection experiments for other galaxies start from existing radio maps at frequencies where thermal free-free emission is dominant \citep[e.g.,][]{eric_sfr_2018}. For the radio sources detected in the map, follow-up multifrequency observations are made if the radio SED of the source is unusually shallow or suggests increased flux at higher frequency. In principle, this approach should not be biased toward extreme systems. However, detecting thermal free-free emission from extragalactic \HII\ regions suffers systematic selection effects: beam dilution, confusion with disk emission, and confusion with nonthermal discrete sources \citep{israel_1980}. These effects make the detection of \HII\ regions difficult because the contrast between an individual \HII\ region and the underlying extended disk emission of the galaxy becomes small. As a result, the extragalactic \HII\ regions detected via radio free-free emission may be biased toward giant \HII\ regions formed by massive stars, as illustrated by \cite{israel_etal_1975}, who showed that, at better resolution, \HII\ regions detected with larger beams break up into groups or chains of smaller clumps. Therefore, fewer extragalactic \HII\ regions with free-free emission are detected than in our Milky Way, where we see many individual (small and large) \HII\ regions, and this limits our ability to find AME candidates.
(ii) The observations suggest that AME is likely to be a local phenomenon. For example, recent studies of the spatially resolved AME region show that the 31GHz emission (as a proxy for AME) is related to the local PAH column density \citep{arcetord_etal_2020}, and the ``excess'' emission at 15 GHz (interpreted as AME) larger than the ``expected'' radio synchrotron and free-free emission varies locally \citep{battistelli_etal_2015}. If the AME is governed by local ISM conditions on small scales, one expects to see a smaller number of AME sources in other galaxies than our Milky Way because the same radio beam encompasses a much larger area ($\gtrsim 100$pc) in nearby galaxies, and the emission in the beam including \HII\ regions is dominated by free-free emission from the \HII\ regions, not by AME from the surrounding molecular clouds.
In addition to the dust destruction mechanism proposed in this study, these systematic effects in radio observations make the detection of extragalactic AME difficult. Therefore, if detected, extragalactic AME is likely to originate from a large molecular cloud around a young massive star cluster before a supernova explodes ($<10$ Myr), which also explains why extragalactic AME detections are rare compared to our Milky Way.
When the ngVLA operates with high angular resolution and sensitivity \citep[][]{selina_etal_2018}, we will be able to locate individually resolved extragalactic star-forming regions (at a similar scale as our Milky Way) and find AME candidates. We can follow up on them with multifrequency observations to confirm the SED shape of AME, which will enable us to test the AME formation hypothesis based on large samples.
\section{Summary}\label{sec:summary}
We consider the impact of C-shocks on the rotational grain destruction process for small nanoparticles emitting AME \citep{hoang_etal_2019} and compute the emissivity of AME, free-free, and thermal dust continuum emission at radio-millimeter wavelengths for the typical CNM, WNM, and PDR ISM conditions surrounding star-forming regions. Our model of the rotational destruction process for AME suppression assumes a spherical dust shape and a specific range of the maximum tensile strength against centrifugal stress, which is very poorly constrained. With these caveats in mind, our study can be summarized as follows.
\begin{enumerate}
\item[$\bullet$] A magnetized shock (C-shock) under typical CNM, WNM, and PDR conditions can create a supersonic neutral drift whose collisions with grains increase their spin angular momentum.
\item[$\bullet$] If the spin angular velocity exceeds the critical angular velocity that a grain of given maximum tensile strength can withstand against centrifugal force, the AME-emitting grains break up, and the AME emissivity is suppressed relative to the free-free and thermal dust continuum emissivities.
\item[$\bullet$] If a C-shock destroys the small nanoparticles in the ISM surrounding a star-forming region, the AME that may be prominent in the early stage of star formation ($\lesssim 10$ Myr) becomes significantly suppressed and might not be detectable after $\sim 10$ Myr, when the shock from a supernova explosion develops and impacts the surrounding ISM of a massive star-forming region. This might explain the rarity of extragalactic AME observations.
\item[$\bullet$] This study suggests that the presence of AME can be indicative of an early stage of star formation, and the strength of AME depends on the local conditions of the ISM, which implies that spatially resolved high-resolution radio observations are required to understand the detailed physics of AME and its connection to the ISM.
\end{enumerate}
\acknowledgments
I.Y. is grateful to the referee who provided valuable comments that greatly improved the current work.
I.Y. thanks Bruce Draine for his valuable comment on the \HII\ region and the grain destruction process and Antoine Gusdorf and Sylvie Cabrit for answering questions regarding the shock code. I.Y. also thanks Eric Murphy and Sean Linden for useful conversations regarding the observations of extragalactic AME. I.Y. acknowledges kind financial support from the National Radio Astronomy Observatory for the publication of this work. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
\software
{\texttt{SpDust} \citep{spdust2009,silsbee_etal_2011}, \texttt{Paris-Durham} shock code \citep{flower_etal_2003,lesaffre_etal_2013,godard_etal_2019}, \texttt{numpy} \citep{harris_etal_2020}, \texttt{matplotlib} \citep{hunter_2007}}
\vspace{1mm}
\bibliography{ame}{}
\bibliographystyle{aasjournal}
|
Title:
Magnetic field effects on nucleosynthesis and kilonovae from neutron star merger remnants |
Abstract: We investigate the influence of parametric magnetic field configurations of a
hypermassive neutron star (HMNS) on electromagnetic (EM) observables,
specifically the kilonova lightcurves and nucleosynthesis yields. We perform
three-dimensional (3D) dynamical-spacetime general-relativistic
magnetohydrodynamic (GRMHD) simulations, including a neutrino leakage scheme,
microphysical finite-temperature equation of state (EOS), and an initial
poloidal magnetic field. We find that varying the magnetic field strength and
falloff impacts the formation of magnetized winds or mildly-relativistic jets,
which in turn has profound effects on the outflow properties. All of the
evolved configurations collapse to a black hole (BH) $\sim 21-23$ ms after the
onset of the simulations; however, the ones forming jets may be considerably
more effective at transporting angular momentum out of the system, resulting in
earlier collapse times. Larger mass ejecta rates and radial velocities of
unbound material characterise the systems that form jets. The bolometric light
curves of the kilonovae and $r$-process yields change considerably with
different magnetic field parameters. We conclude that the magnetic field
strength and falloff have robust effects on the outflow properties and
electromagnetic observables. This can be particularly important as the total
ejecta mass from our simulations ($\simeq 10^{-3}\;M_{\odot}$) makes the ejecta
from HMNSs a compelling source to power kilonovae through radioactive decay of
$r$-process elements.
| https://export.arxiv.org/pdf/2208.05330 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
stars: magnetars -- (magnetohydrodynamics) MHD -- methods: numerical -- nuclear reactions, nucleosynthesis, abundances
\end{keywords}
\section{Introduction}
Multi-messenger observations of GW170817 have confirmed that binary neutron
star (BNS) merger remnants can launch short gamma-ray bursts \citep[sGRB, e.g.,][]{2017ApJ...848L..12A,2017ApJ...848L..13A,2017ApJ...848L..15S}. Moreover,
the UV, optical and (near-)infrared observations of the BNS merger show that the
radioactive decay of rapid neutron-capture process ($r$-process) elements is
taking place in the ejecta \citep[e.g.,][]{2017Sci...358.1574S,2017Natur.551...75S,2017ApJ...848L..19C,2017Natur.551...67P}.
Different engine models have been proposed; however, late-time kilonova
emission and sGRB observations have placed constraints on their characterization;
a consensus on whether the remnant was a black hole or a magnetar is yet to be
reached \citep[e.g.,][]{2017ApJ...850L..19M,2017PhRvD..96l3012S,2018ApJ...856..101M}. \citet{2020ApJ...901L..37M}
showed that magnetars formed in BNS mergers are,
indeed, viable
candidates for powering sGRB.
$r$-process nucleosynthesis in the BNS merger ejecta
produces large amounts of radioactive material, powering kilonova transients
while producing the heaviest elements in the universe
\citep[e.g.,][]{2011ApJ...738L..32G,2017Natur.551...80K,2021RvMP...93a5002C}.
The extensively studied kilonova related to GW170817, AT2017gfo, displayed a two-component emission.
The ``blue'' component is associated with the early phase of the
BNS merger with an emission peak in the UV/optical bands, while
the ``red'' component peaks in the (near-)infrared
frequencies
on the order of a few days post-merger
\citep[e.g.,][]{2017Sci...358.1574S,2017Natur.551...75S}. The blue component
is thought to arise from lanthanide- and neutron-poor ejecta with the majority
of emission originating from light elements \citep[with mass number $A < 140$
and particularly large amounts of iron,
e.g.,][]{2017ApJ...848L..19C,2017ApJ...848L..18N}. The red component would
then be dominated by emission from heavily synthesized material as a result of
$r$-process nucleosynthesis (nuclei with $A > 140$), therefore being
lanthanide- and neutron-rich
\citep[e.g.,][]{2017Natur.551...67P,2017ApJ...848L..27T,2017ApJ...848L..19C}.
Furthermore, analysis of a large electromagnetic (EM) data set conducted by
\citet{2017ApJ...851L..21V} implied that for the red component, a delayed
outflow from the remnant accretion disk is the most likely dominant origin of
emission, in combination with an emission component from the dynamical ejecta.
The origin of the blue component is not as well understood, as it has proven
difficult to reproduce the inferred outflow properties with simulations
\citep{2018ApJ...869L...3F}. Among the suggested possibilities are shock-heated
polar dynamical ejecta \citep[e.g.,][]{2017arXiv171005931M}, neutrino-driven winds from the HMNS remnant,
magnetized winds from the HMNS remnant \citep[see also][]{2018ApJ...856..101M}
and remnant winds from spiral density waves \citep{2019ApJ...886L..30N}, where the final two seem the most
promising. Furthermore, the EM data analysed by \citet{2017ApJ...851L..21V} implies a
blue kilonova component with an ejecta mass $M_{\rm ejecta}$ of $\approx 2.0
\times 10^{-2} M_{\odot}$ and ejecta speed $v_{\rm ejecta} \approx 0.27c$ and
a red component with $M_{\rm ejecta} \approx 1.1 \times 10^{-2} M_{\odot}$ and
$v_{\rm ejecta} \approx 0.14c$.
BNS post-merger remnants may be highly magnetized following an amplification stage as a
result of magnetic instabilities, such as the Kelvin-Helmholtz instability in
the shear layer between two streams of matter during the pre-merger phase
\citep[e.g.,][]{2013ApJ...769L..29Z,2015PhRvD..92l4034K}. The strong magnetic
field that is generated likely has profound effects on the remnant system.
Therefore, simulations of BNS mergers increasingly account for magnetic field
effects by implementing general-relativistic magnetohydrodynamic (GRMHD) methods
\citep[e.g.,][]{2009MNRAS.399L.164G,2015PhRvD..92l4034K,2015PhRvD..92h4064D,2019PhRvD.100b3005C}.
Comparisons between GRMHD and purely GRHD simulations of BNS mergers have
implied robust effects of the magnetic field on outflow properties
\citep[e.g.,][]{2008PhRvL.100s1101A,2008PhRvD..78b4012L,2018PhRvD..97l4039K}.
Namely, it may cause the formation of mildly-relativistic jets and result in
considerably larger mass ejecta rates and ejecta velocities \citep{2020ApJ...901L..37M}.
As GRMHD and GRHD simulations of BNS mergers imply strong magnetic field
effects on outflow properties, it is interesting to parametrically explore the
influence of the magnetic field by varying its strength and configuration.
\citet{2014ApJ...785L...6S} investigated the influence of field configuration in the context of BNS merger remnants, using three different magnetic field geometries to determine their effect on the X-ray afterglow of the sGRB. They evolved an initially isolated, axisymmetric HMNS with a polytropic equation of state and an ad hoc magnetic field, rather than the direct outcome of a BNS merger evolution.
In this work, we perform seven dynamical-spacetime GRMHD simulations of (post-merger)
hypermassive neutron star (HMNS) systems including a parameterized magnetic
field with different field strengths and configurations, to investigate
the influence of these magnetic field parameters on the HMNS outflows and kilonova.
We map a snapshot
of BNS post-merger data, at $t_{\rm map} = 17$ ms after coalescence, from a
GRHD simulation performed by \citet{2018ApJ...869..130R} and use it as initial
data for all the simulations. We post-process the HMNS ejecta using Lagrangian
tracer particles to compute the $r$-process yields, and use a
spherically-symmetric radiation-hydrodynamics code to compute bolometric
light curves of the kilonovae. Both magnetic field parameters show profound effects on the computed
outflow properties, nucleosynthesis yields and kilonova light curves. All simulations
collapse to a BH $\sim 38 - 40$ ms after coalescence of the two neutron stars.
Two of the seven simulations show the emergence of
mildly-relativistic jets, while displaying significantly earlier BH collapse
times compared to the other simulations (by $\sim 1.6$ ms). This may imply that
jets are more effective at transporting angular momentum out of the remnant
system compared to magnetized winds. Furthermore, the two simulations that
exhibit jet formation contain significantly larger mass ejecta rates and
radial velocities of unbound material. We find that the total ejecta mass of the
HMNS system is in the $2.4 \times 10^{-4}\,M_{\odot} < M_{\rm ejecta} < 8.3 \times
10^{-3}\,M_{\odot}$ range for all seven simulations. Finally, we show that the magnetic field has significant implications on the nucleosynthesis yields and kilonova light curves even for the weaker magnetic field range explored, thus making this a robust feature for magnetized HMNS remnants.
The paper is organized as follows. In Section \ref{sec:methods}, we describe
our simulation setup, numerical methods and the procedure for obtaining the
$r$-process yields and kilonova light curves.
In Section~\ref{sec:outflowproperties}, we discuss the various black hole collapse times
and outflow properties of the HMNS system, followed by the evolution of the
magnetic vector field in Section \ref{sec:magnetic-properties}. We discuss the
nucleosynthesis yields and bolometric light curves of the kilonovae in Section
\ref{sec:nucleo-and-kilonovae}. We summarize and discuss our conclusions in
Section \ref{ch:summary-conclusions}.
\section{Numerical Methods and Setup}
\label{sec:methods}
The simulations performed in this work make use of the Einstein toolkit framework
\citep{2012CQGra..29k5001L}, which is a publicly-available infrastructure for relativistic astrophysics and gravitational physics simulations (\url{http://einsteintoolkit.org}).
The code is based on multiple components, including the \texttt{Carpet} thorn that is
responsible for adaptive mesh refinement (AMR) \citep{2004CQGra..21.1465S}, the
GRMHD code \texttt{GRHydro} \citep{2014CQGra..31a5005M} and
the \texttt{McLachlan} module that handles the spacetime evolution
\citep{2009PhRvD..79d4023B,2011PhRvD..83f4008R}. We use finite-volume high-resolution
shock capturing (HRSC) methods to evolve the system in time and adopt 5th-order
weighted-ENO (WENO5) reconstruction
\citep{2007MNRAS.379..469T,2013PhRvD..87f4023R}
and the HLLE approximate Riemann solver
\citep{1983JCoPh..49..357H,1988SJNA...25..294E}. We enforce the divergence-free constraint on the magnetic field, $\vec{\nabla} \cdot \vec{B} = 0$, through a constrained transport scheme.
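As a concrete illustration of the quantity being controlled, the divergence of $\vec{B}$ can be monitored with a centered finite-difference stencil; the sketch below is a schematic Python check on a uniform Cartesian grid, not the Einstein Toolkit's staggered constrained-transport implementation:

```python
import numpy as np

def div_B(Bx, By, Bz, dx):
    """Second-order centered-difference divergence of B on the grid interior.

    For a field initialized as the same-stencil numerical curl of a vector
    potential, this vanishes to machine precision; in an evolution code it
    serves as a diagnostic of constraint violation.
    """
    return (
        (Bx[2:, 1:-1, 1:-1] - Bx[:-2, 1:-1, 1:-1])
        + (By[1:-1, 2:, 1:-1] - By[1:-1, :-2, 1:-1])
        + (Bz[1:-1, 1:-1, 2:] - Bz[1:-1, 1:-1, :-2])
    ) / (2.0 * dx)
```

A constrained-transport scheme goes further: it updates face-centered fields from edge-centered electric fields so that the discrete divergence is conserved exactly, rather than merely monitored or damped.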
\subsection{Equation of state and neutrino treatment}
For the simulations performed in this work we adopt a microphysical, finite-temperature
equation of state (EOS) in tabulated form. Specifically, we use the $K_0 = 220$ MeV
variant of the EOS from \citet{1991NuPhA.535..331L} (where $K_0$ is the nuclear compression modulus), which is the
so-called LS220 EOS.
The simulations include a neutrino treatment through a scheme that adopts
neutrino heating and leakage approximations, based on
\citet{2010CQGra..27k4103O} and \citet{2013ApJ...768..115O} (which in turn are
based on \citet{2003MNRAS.342..673R} and \citet{1996A&A...311..532R}). The
scheme tracks three different neutrino species: electron neutrinos $\nu_e$,
electron anti-neutrinos $\bar{\nu}_e$, and the heavy-lepton muon and tau
(anti-)neutrinos, which are grouped into a single species $\nu_x = \{
\nu_{\mu}, \nu_{\tau}, \bar{\nu}_{\mu}, \bar{\nu}_{\tau} \}$. This grouping is
reasonable because these neutrinos interact only through neutral-current processes in
the post-merger environment, with similar cross sections. The following
interactions are included in the estimates for the neutrino energy and number
emission rates: the charged-current capture processes
\begin{equation}
p + e^{-} \leftrightarrow n + \nu_e\:,
\label{eq:electronneutrino}
\end{equation}
\vspace{-.5cm}
\begin{equation}
n + e^{+} \leftrightarrow p + \bar{\nu}_e\:,
\label{eq:electronantineutrino}
\end{equation}
plasmon decay,
\begin{equation}
\gamma \leftrightarrow \nu + \bar{\nu}\,,
\end{equation}
electron and positron pair annihilation/creation,
\begin{equation}
e^{-} + e^{+} \leftrightarrow \nu + \bar{\nu}\,,
\end{equation}
and nucleon-nucleon Bremsstrahlung,
\begin{equation}
N + N \leftrightarrow N + N + \nu + \bar{\nu}\,,
\end{equation}
where the approximate neutrino energy and number emission rates from the above
processes depend on local thermodynamics and the energy-averaged optical depth.
Estimates for the neutrino optical depth are based on non-local calculations,
which have been implemented using a ray-by-ray approach. The scheme solves the
neutrino optical depth along radial rays that cover the simulation domain using
the $\theta$ and $\psi$ directions. Tri-linear interpolation is then used in
spherical coordinates $(r,\theta,\psi)$ for determining the optical depth at
Cartesian grid cell centers. For the simulations, 20 rays in $\theta$ are
employed that cover $[0,\pi/2]$ and 40 rays in $\psi$, covering $[0,2\pi]$. The
rays contain 800 equidistant points each up to a distance of 120 km, after
which 200 logarithmically spaced points are adopted to account for the
remainder of the domain.
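The ray geometry described above can be written out explicitly. The counts below are those quoted in the text; the $\sim 355$ km outer radius matches the simulation domain described later, and the endpoint conventions are our assumption:

```python
import numpy as np

def build_ray_grid(n_theta=20, n_psi=40, n_lin=800, n_log=200,
                   r_lin_max=120.0, r_max=355.0):
    """Radial rays used to integrate the neutrino optical depth (distances in km).

    20 rays in theta over [0, pi/2] and 40 rays in psi over [0, 2*pi);
    each ray has 800 equidistant points out to 120 km, followed by 200
    logarithmically spaced points covering the remainder of the domain.
    """
    theta = np.linspace(0.0, np.pi / 2.0, n_theta)
    psi = np.linspace(0.0, 2.0 * np.pi, n_psi, endpoint=False)
    r_lin = np.linspace(0.0, r_lin_max, n_lin, endpoint=False)
    r_log = np.geomspace(r_lin_max, r_max, n_log)
    return np.concatenate([r_lin, r_log]), theta, psi
```

The optical depth solved along these rays would then be interpolated tri-linearly in $(r,\theta,\psi)$ back to Cartesian cell centers, as described above.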
The approximated local neutrino heating function is based on the charged-current absorption of $\nu_e$ and $\bar{\nu}_e$, \eqref{eq:electronneutrino} and \eqref{eq:electronantineutrino}, and is given by
\begin{equation}
\mathcal{Q}^{\rm heat}_{\nu_i} = f_{\rm heat} \frac{L_{\nu_i}(r)}{4 \pi r^2} \langle \epsilon^2_{\nu_i} \rangle S_{\nu} \frac{\rho}{m_n} X_i \biggl< \frac{1}{F_{\nu_i}} \biggr> e^{-2 \tau_{\nu_i}}\,,
\label{eq:neutrinoheating}
\end{equation}
where $f_{\rm heat}$ is the heating scale factor, $L_{\nu_i}(r)$ the
approximate neutrino luminosity that emerges radially from below as
interpolated by the ray-by-ray approach of the neutrino leakage scheme, and $S_{\nu} = \tfrac{1}{4} (1 + 3 \alpha^2)\, \sigma_0 / (m_e c^2)^2$, where $\alpha = 1.23$,
$\sigma_0 = 1.76 \times 10^{-44}\,\mathrm{cm}^{2}$, $m_e$ the electron mass and $c$
the speed of light. Additionally, $\langle \epsilon^2_{\nu_i} \rangle$ is the approximate
neutrino mean-squared energy, $m_n$ the neutron mass, $X_i$ is the neutron or
proton mass fraction for the electron neutrinos or anti-neutrinos,
respectively, $\langle \frac{1}{F_{\nu_i}} \rangle$ is the mean inverse flux
factor and $\tau_{\nu_i}$ is the approximate neutrino optical depth. More
specifically, $\langle \frac{1}{F_{\nu_i}} \rangle$ depends on neutrino
radiation field details and is parameterized as a function of $\tau_{\nu_i}$,
based on neutrino transport calculations from \cite{2008ApJ...685.1069O} and
given by $\langle \frac{1}{F_{\nu_i}} \rangle = 4.275 \tau_{\nu_i} + 1.15$.
Furthermore, the heating scale factor $f_{\rm heat}$ is a free parameter that
has been set to $f_{\rm heat} = 1.05$, which is consistent with heating in
core-collapse supernova simulations that adopt full neutrino transport schemes
\citep{2013ApJ...768..115O}. The above neutrino heating function was first
derived by \citet{2001A&A...368..527J}. Neutrino heating is turned off in
the simulations for densities $\rho < 6.18 \times 10^{10}$ g cm$^{-3}$, in
order to maintain numerical stability. The neutrino scheme correctly captures
the overall neutrino energetics up to a factor of a few when compared to the
full neutrino transport scheme of \citet{2010CQGra..27k4103O} for simulations
of core-collapse supernovae. The dependence on energy, the deposition of
momentum and the annihilation of neutrino pairs are not included in the scheme,
and consequently will likely affect our inferred composition properties of the
ejecta.
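For concreteness, equation \eqref{eq:neutrinoheating} can be transcribed directly. The sketch below is a schematic stand-alone evaluation with the constants quoted above (the input values are placeholders, not simulation data); we write $S_{\nu}$ with $(m_e c^2)^2$ in the denominator so that $\langle \epsilon^2_{\nu_i} \rangle S_{\nu}$ carries units of a cross section:

```python
import numpy as np

# Constants from the text
SIGMA_0 = 1.76e-44   # cm^2, reference weak-interaction cross section
ALPHA = 1.23
M_E_C2 = 0.511       # MeV, electron rest-mass energy
M_N = 1.675e-24      # g, neutron mass
F_HEAT = 1.05        # heating scale factor used in the simulations

def mean_inverse_flux_factor(tau):
    """Parameterization <1/F> = 4.275*tau + 1.15 quoted in the text."""
    return 4.275 * tau + 1.15

def q_heat(L_nu, r, eps_sq, rho, X_i, tau):
    """Local neutrino heating rate, equation (6), in erg s^-1 cm^-3.

    L_nu   : luminosity streaming radially from below [erg/s]
    r      : radius [cm]
    eps_sq : mean-squared neutrino energy [MeV^2]
    rho    : rest-mass density [g/cm^3]
    X_i    : neutron (nu_e) or proton (anti-nu_e) mass fraction
    tau    : energy-averaged optical depth
    """
    S_nu = 0.25 * (1.0 + 3.0 * ALPHA**2) * SIGMA_0 / M_E_C2**2  # cm^2/MeV^2
    return (F_HEAT * L_nu / (4.0 * np.pi * r**2) * eps_sq * S_nu
            * rho / M_N * X_i * mean_inverse_flux_factor(tau)
            * np.exp(-2.0 * tau))
```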
\subsection{Initial conditions of the simulations}
The initial data is mapped from a GRHD simulation of a BNS merger by
\citet{2018ApJ...869..130R}, covering both the pre-merger phase and a small
fraction of the post-merger phase. This simulation is based on the
\texttt{WhiskyTHC} code (model LS135135M0), and evolves an equal-mass binary NS with component masses at infinity of $1.35 M_{\odot}$, the same EOS
and similar neutrino treatment. The mapping of this simulation is done at a
time $t_{\rm map} - t_{\rm merger} = 17$ ms, thereby avoiding transient,
oscillatory effects caused by the NS remnant core in the early post-merger
phase.
Five different AMR levels are implemented, varying by a factor of two in resolution between consecutive levels. The highest refinement level region, covering the HMNS, has a resolution
$h_{\rm fine} = 185$ m, while for the coarsest region $h_{\rm coarse} = 3.55$
km. The structure of the AMR grid is made up of boxes that extend up
to 177.3 km, 118.2 km, 59.1 km and 29.6 km, while the outermost boundary of the
simulation domain extends to a distance of $\sim 355$ km.
At the onset of our HMNS simulations, we add a parameterized magnetic field to the simulations, which varies in strength and falloff between the different simulations. We initialize the parameterized magnetic field with the analytical prescription of the vector potential $\vec{A}$, where $\vec{B} = \nabla \times \vec{A}$, of the form
\begin{equation}
A_r = A_\theta = 0; \quad A_\phi = B_0\,r\,\sin(\theta) \frac{r_{\rm falloff}^3}{r_{\rm falloff}^3 + r^3}\:,
\label{eq:initialBfield}
\end{equation}
where $B_0$ is the initial magnetic field strength and $r_{\rm falloff}$
controls the range of the magnetic field. As we add this purely poloidal, large-scale magnetic field \emph{ad hoc}, we implicitly
assume that a dynamo process is present during the pre-merger (and possibly
also early post-merger) phase that is capable of producing such an ordered,
strong field. Even though previous research on proto-neutron stars formed in core-collapse supernovae implies the presence of such a
dynamo \citep[e.g.,][]{2015Natur.528..376M,2020SciA....6.2732R}, current
BNS merger simulations are not capable of fully resolving this
magnetic amplification process \citep[e.g.,][]{2018PhRvD..97l4039K}.
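To visualize the seed field, one can evaluate the vector potential of equation \eqref{eq:initialBfield} on a Cartesian grid and take a numerical curl; this is a standalone sketch, not the initialization code used in the simulations. Near the origin the field is nearly uniform and vertical with $B_z \simeq 2 B_0$, which follows from $B_z = \varpi^{-1}\partial_\varpi(\varpi A_\phi)$ and provides a quick consistency check:

```python
import numpy as np

def A_cartesian(x, y, z, B0=1.0, r_falloff=20.0):
    """Cartesian components of the toroidal vector potential
    A_phi = B0 * r * sin(theta) * r_f^3 / (r_f^3 + r^3).
    Note r*sin(theta) is the cylindrical radius."""
    r = np.sqrt(x**2 + y**2 + z**2)
    s = np.sqrt(x**2 + y**2)                 # cylindrical radius
    A_phi = B0 * s * r_falloff**3 / (r_falloff**3 + r**3)
    s_safe = np.maximum(s, 1e-30)
    return -A_phi * y / s_safe, A_phi * x / s_safe, np.zeros_like(A_phi)

def curl(Ax, Ay, Az, dx):
    """Centered-difference curl: B = nabla x A on a uniform grid."""
    dAz_dy = np.gradient(Az, dx, axis=1); dAy_dz = np.gradient(Ay, dx, axis=2)
    dAx_dz = np.gradient(Ax, dx, axis=2); dAz_dx = np.gradient(Az, dx, axis=0)
    dAy_dx = np.gradient(Ay, dx, axis=0); dAx_dy = np.gradient(Ax, dx, axis=1)
    return dAz_dy - dAy_dz, dAx_dz - dAz_dx, dAy_dx - dAx_dy
```

Feeding the resulting $\{B_x, B_z\}$ components to a streamplot reproduces the $t - t_{\rm map} = 0$ poloidal field morphology discussed in Section \ref{sec:magnetic-properties}.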
We perform a total of seven simulations. For the first three simulations, we
vary the magnetic field strength between $B_0 = \{10^{13},\, 10^{14},\, 10^{15}\}$ G while keeping the magnetic falloff parameter $r_{\rm falloff} = 20$
km fixed. For the next three simulations, we fix the magnetic field strength $B_0 = 10^{15}$ G while varying
$r_{\rm falloff}$ between $r_{\rm falloff} = \{5,\, 10,\,15\}$ km. For the final
simulation, we change both magnetic field parameters, explicitly, $B_0 = 5 \times
10^{15}$ G and $r_{\rm falloff} = 10$ km. We list the values of the magnetic
field parameters of the seven simulations in Table
\ref{table:sevensimulations}, and include corresponding nomenclature for the simulations.
\begin{table}
\centering
\begin{tabular}{ | c | c | c | }
Simulation name & $B_0$ [Gauss] & $r_{\rm falloff}$ [km] \\
\hline
B15-r20 & $10^{15}$ & 20 \\
B14-r20 & $10^{14}$ & 20 \\
B13-r20 & $10^{13}$ & 20 \\
B15-r5 & $10^{15}$ & 5 \\
B15-r10 & $10^{15}$ & 10 \\
B15-r15 & $10^{15}$ & 15 \\
B5-15-r10 & $5 \times 10^{15}$ & 10
\end{tabular}
\caption{\label{table:sevensimulations} Initial magnetic field configurations adopted in the seven simulations performed in this work. The parameter $B_0$ sets the magnetic field strength, while $r_{\rm falloff}$ sets the radial range of the magnetic field. For the mathematical form of the vector potential of the magnetic field, see equation \eqref{eq:initialBfield}.}
\end{table}
\subsection{Nucleosynthesis and kilonova analysis}
\label{sec:nucleo-analysis}
To calculate the nucleosynthesis yields, we use Lagrangian tracer particles to
determine the encountered neutrino luminosities and thermodynamic
quantities of the merger outflows. The tracer particles are spaced uniformly
and we extract the corresponding quantities once the tracers reach a distance
of $r = 150\: M_{\odot}$. We determine the composition of the merger ejecta
by post-processing the tracers using the nuclear reaction network \texttt{SkyNet}
\citep{2017ApJS..233...18L}. REACLIB is used to obtain the forward strong
rates, nuclear masses, partition functions and part of the weak rates
\citep{2010ApJS..189..240C}. The remaining weak rates are taken from
\citet{1982ApJS...48..279F}, \citet{1994ADNDT..56..231O} or \citet{2000NuPhA.673..481L}.
Note that we adopt an approximate neutrino leakage
scheme in the simulations, while the ejecta composition depends sensitively
on the neutrino transport performed by this scheme. This causes uncertainties in
our predictions of $Y_e$ distributions and $r$-process abundances. These uncertainties have been investigated by \citet{2021arXiv211200772C}, who adopted a range of neutrino luminosities to determine their influence on the $r$-process abundances and $Y_e$ distributions. They conclude that the $r$-process production of heavy elements is reduced by up to a factor of $\sim$10 when comparing the
two most extreme cases that bracket the entire adopted parameter space.
In order to compute the luminosity of the kilonova on a timescale of days,
we use a modification of \texttt{SNEC} \citep[SuperNova Explosion Code;][]{2015ApJ...814...63M},
which is a 1D Lagrangian equilibrium-diffusion radiation hydrodynamics code
that can simulate the evolution of merger outflows and consequent kilonova
emission. Modifications to \texttt{SNEC} are implemented to account for kilonova
as opposed to supernova modeling; for example, the nickel heating term is
replaced by radioactive heating from $r$-process nuclei. We follow the same procedure
as \citet{2021arXiv211200772C}, where more details on
the modifications and
methods of the kilonova modeling and on the post-processed
nucleosynthesis can be found.
\section{Results}
\label{sec:results}
\subsection{Black hole collapse and outflow properties}
\label{sec:outflowproperties}
In Fig. \ref{fig:rhomaxvstime}, we show the maximum density $\rho_{\rm max}$ as a function of time for all simulations. Simulations B15-r20 and B5-15-r10 collapse to a BH after $\sim 21.3$ ms, while the other simulations collapse on average $\sim 1.6$ ms later, at $\sim 22.9$ ms (all simulations display slight differences in the exact collapse times). This significant difference in collapse time may be explained by the formation of collimated, mildly-relativistic jets in B15-r20 and B5-15-r10. Even though all simulations launch magnetized winds along the rotation axis of the HMNS remnant \citep{2004ApJ...611..380T}, only in these two simulations is the magnetic field powerful enough to collimate part of the outflow from the HMNS into jets.
In order to evaluate the properties of unbound material exclusively, we apply the Bernoulli criterion $-h u_t > 1$, where $h = 1 + \epsilon + (p + b^2)/\rho$ is the fluid's relativistic specific enthalpy (including the magnetic contribution) and $u_t$ is the time component of the fluid four-velocity. If the Bernoulli criterion is satisfied, the corresponding material is unbound. In the upper row of Fig. \ref{fig:Vr-histograms-mdot}, we show histograms of the radial velocity component $v^r$ of unbound material with corresponding ejecta mass $M_{\rm ejecta}$ for simulations B13-r20, B14-r20 and B15-r20 at $t - t_{\rm map} = 5$ and 20 ms. In addition, we show the evolution of the sphere-averaged mass ejecta rates $\dot{M}_{\rm ejecta}$ as a function of time for the same simulations, which are computed using
\begin{equation}
\dot{M}_{\rm ejecta} = \frac{1}{r_2 - r_1} \int_{r_1}^{r_2} \sqrt{g}\, \rho W v^r \, dV\:,
\end{equation}
with $r_1 = 44.3$ km and $r_2 = 192.1$ km. Material is only included in this computation if the Bernoulli criterion is satisfied. We show B15-r20 \citep[which is almost identical to B15-low in][]{2020ApJ...901L..37M} as a reference case in black\footnote{We modified how tracer particles record neutrino luminosities in low-density regions.}.
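Schematically, the Bernoulli cut and the shell average above combine as in the sketch below: a flat-space stand-in on a uniform grid with $\sqrt{g} \rightarrow 1$ and $dV \rightarrow dx^3$ (the actual computation uses the full metric determinant), with illustrative array names:

```python
import numpy as np

def mdot_ejecta(rho, W, vr, minus_h_ut, x, y, z, dx, r1=44.3, r2=192.1):
    """Sphere-averaged mass ejection rate between radii r1 and r2 (km),
    counting only unbound material (Bernoulli criterion -h*u_t > 1).

    rho, W, vr : rest-mass density, Lorentz factor, radial velocity
    minus_h_ut : the Bernoulli quantity -h*u_t, precomputed per cell
    """
    r = np.sqrt(x**2 + y**2 + z**2)
    mask = (r >= r1) & (r <= r2) & (minus_h_ut > 1.0)
    return np.sum(rho[mask] * W[mask] * vr[mask]) * dx**3 / (r2 - r1)
```

With uniform cell data the sum reduces to (flux density) times (shell volume) over $(r_2 - r_1)$, which makes the normalization easy to verify.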
For the $v^r$ distributions at $t - t_{\rm map} = 5$ ms, B13-r20 and B14-r20 display very similar profiles with $v^r < 0.3c$. Simulation B15-r20 contains significantly larger ejecta masses at nearly all $v^r$, while also displaying ejecta in the $0.3c < v^r < 0.5c$ regime. By $t - t_{\rm map} = 20$ ms, the ejecta masses across all velocity bins have decreased significantly for all simulations. The $v^r$ profile of B14-r20 exhibits larger ejecta masses in the $v^r > 0.2c$ range, while B13-r20 loses all of its ejecta in this velocity regime. For B15-r20, the ejecta-mass peak has shifted to significantly lower velocities ($v^r \simeq 0.08c$).
Simulation B15-r20 shows considerably larger $\dot{M}_{\rm ejecta}$ during its evolution compared to B14-r20 and B13-r20. Simulations B14-r20 and B13-r20 exhibit very similar $\dot{M}_{\rm ejecta}$ patterns, while also displaying two short peaks at $t - t_{\rm map} \sim 6$ ms and $t - t_{\rm map} \sim 7.5$ ms. These $\dot{M}_{\rm ejecta}$ peaks are slightly enhanced for simulation B13-r20 compared to B14-r20, although the latter does generally display larger $\dot{M}_{\rm ejecta}$ values compared to the former.
In the lower row of Fig. \ref{fig:Vr-histograms-mdot}, we show $v^r$ histograms of unbound material with corresponding ejecta masses for the simulations with varying $r_{\mathrm{falloff}}$, namely B15-r5, B15-r10, B15-r15, B15-r20 and B5-15-r10. At $t - t_{\rm map} = 5$ ms, all displayed simulations exhibit apparent differences in their $v^r$ profiles: B5-15-r10 in particular contains large amounts of high-velocity ejecta with $0.3c < v^r < 0.66c$, while B15-r20 also shows some high-velocity outflows with $0.3c < v^r < 0.52c$. Simulations B15-r10 and B15-r15 exhibit less high-velocity ejecta, with $0.3c < v^r < 0.42c$ and $0.3c < v^r < 0.48c$, respectively, while B15-r5 only contains outflows with $v^r < 0.28c$.
At $t - t_{\rm map} = 20$ ms, the $v^r$ profiles of simulations B15-r5, B15-r10 and B15-r15 look reasonably similar, where B15-r10 and B15-r15 have lost the majority of their high-velocity ($v^r > 0.3c$) ejecta between $t - t_{\rm map} = 5$ and 20 ms. For B5-15-r10, nearly all material with $v^r > 0.5c$ has decelerated or disappeared in the same time interval, although it has retained significant $M_{\rm ejecta}$ in the $0.3c < v^r < 0.5c$ regime. Simulation B15-r20, by contrast, displays larger high-velocity mass fractions at $t - t_{\rm map} = 20$ ms compared to $t - t_{\rm map} = 5$ ms. Finally, we note that jet formation in simulations B15-r20 and B5-15-r10 leads to considerably larger $v^r$ values compared to their purely magnetized-wind-forming counterparts.
For the corresponding $\dot{M}_{\rm ejecta}$ panel, B5-15-r10 exhibits much larger $\dot{M}_{\rm ejecta}$ values compared to the other simulations including B15-r20, despite both simulations showing jet formation. Simulation B15-r20 does exhibit significantly larger mass ejecta rates throughout most of its evolution compared to B15-r15, B15-r10 and B15-r5. Furthermore, simulation B15-r15 exhibits considerably larger $\dot{M}_{\rm ejecta}$ compared to B15-r10 and B15-r5, even showing an increasing $\dot{M}_{\rm ejecta}$ trend towards the end of the simulation. Finally, simulation B15-r5 shows a very similar $\dot{M}_{\rm ejecta}$ evolution compared to B14-r20 and B13-r20, also displaying two short peaks at $t - t_{\rm map} \sim 6$ ms and $t - t_{\rm map} \sim 7.5$ ms.
In Fig. \ref{fig:volume-renderings}, we show volume renderings of the Bernoulli criterion (equivalent to the Lorentz factor) for the outflows (white-red colormap) and density for the accretion torus (white-blue colormap) of simulations B13-r20, B15-r20, B5-15-r10 and B15-r10. The magnetic field lines are also shown in the lower plane ($z < 0$, where $z$ is the vertical axis) in white. When comparing B13-r20 and B15-r20, the latter shows a more structured accretion torus and a considerably larger amount of ejecta, in addition to higher Lorentz factors. Simulation B15-r10 shows a narrower outflow structure and relatively disordered magnetic field geometry compared to B15-r20, though notably contains similar Lorentz factors. Simulation B5-15-r10, despite forming jets, displays lower Lorentz factors when compared to B15-r20.
The maximum Lorentz factor of B15-r20 is 3.94 whereas for B5-15-r10 it is 2.32. This is likely caused by the jet's radial velocities decreasing over time, as also implied by panels d and e of Fig. \ref{fig:Vr-histograms-mdot}.
\subsection{Evolution of the magnetic field}
\label{sec:magnetic-properties}
In Fig. \ref{fig:2d-Bvec-B13-14-15}, we show streamplots in the meridional ($xz$) plane of the magnetic field (that is, integrating the $\{B_x, B_z\}$ components) for simulations B13-r20, B14-r20 and B15-r20 at $t - t_{\rm map} = 0$ and 20 ms. We adopt three different values for the magnetic field strength $|\vec{B}|$ to highlight the characteristic features of each simulation. The $t - t_{\rm map} = 0$ panels represent the initial ordered magnetic field, which we compute from the vector potential $\vec{A}$ in equation \eqref{eq:initialBfield} with varying $B_0$ and $r_{\rm falloff} = 20$ km for each of the simulations. For $t - t_{\rm map} = 20$ ms, we compute the figures using simulation data, specifically the magnetic variables in the GRMHD evolution of the HMNS system. We infer the relation between the magnetic field parameters and the final field configuration by comparing the magnetic field structure at early and late times. This is especially apparent for simulations B13-r20 and B14-r20, which show extreme changes in the magnetic field morphology between $t - t_{\rm map} = 0$ and 20 ms as the field adapts to the underlying magnetohydrodynamical flow of the remnant system, thereby rapidly losing its large-scale structure. For simulation B15-r20, the field appears to be collimated in the polar region due to the development of large toroidal field components, seen in Fig.~\ref{fig:volume-renderings}.
In Fig. \ref{fig:2d-Bvec-falloff}, we similarly show streamplots in the meridional ($xz$) plane of the magnetic field for simulations B15-r5, B15-r10, B15-r15 and B5-15-r10. The $t - t_{\rm map} = 0$ magnetic vector fields display the initial magnetic field computed from the vector potential $\vec{A}$ in equation \eqref{eq:initialBfield} with varying $r_{\rm falloff}$ (and $B_0$ for B5-15-r10) for each of the simulations. All simulations, as before, adjust rapidly to the underlying magnetohydrodynamical flow, while showing different magnetic field morphologies and strengths throughout the displayed planes. For simulation B15-r5 at $t - t_{\rm map} = 20$ ms, the magnetic field is dominated by relatively low $|\vec{B}|$ values and disordered field configurations. Simulations B15-r10 and B15-r15, by contrast, display larger magnetic field strengths and a higher degree of order in their field structures, albeit also exhibiting disordered and/or low-$|\vec{B}|$ regions. Simulation B5-15-r10 exhibits the most ordered field in combination with high-$|\vec{B}|$ regions, although it notably shows a considerably different field morphology compared to B15-r20 in Fig. \ref{fig:2d-Bvec-B13-14-15}.
\subsection{Nucleosynthesis and kilonovae}
\label{sec:nucleo-and-kilonovae}
In panel a of Fig. \ref{fig:Ye5GK}, we show electron-fraction histograms of all tracer particles for simulations B13-r20, B14-r20 and B15-r20, evaluated when the temperature of each particle is last above 5 GK. As this is approximately the temperature at which $r$-process nucleosynthesis starts, the electron fractions at this temperature are the relevant quantities for setting the $r$-process yields. As mentioned, the approximate neutrino scheme of the simulations introduces uncertainties into our nucleosynthesis predictions; the $r$-process production of heavy elements may be reduced by up to a factor of $\sim$10 \citep[when comparing the most extreme cases;][]{2021arXiv211200772C}. We compute the $Y_e$ distributions using \texttt{SkyNet} \citep{2017ApJS..233...18L}. All simulations exhibit wide distributions in $Y_e$; B13-r20 in particular contains more low-$Y_e$ material, including some ejecta in the $0.1 < Y_e < 0.16$ range. Simulation B14-r20 contains significant $Y_e > 0.4$ ejecta and displays a larger average $Y_e$ than B13-r20. For B15-r20, a large amount of $0.24 < Y_e < 0.34$ material is ejected, alongside significant high-$Y_e$ material. These results tentatively imply that increasing $B_0$ generally shifts the $Y_e$ of the ejecta to larger values.
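The selection ``$Y_e$ when the temperature is last above 5 GK'' amounts to the following per-tracer operation (a schematic of the post-processing step, with hypothetical array names):

```python
import numpy as np

def ye_at_last_5gk(temps, yes, T_cut=5.0):
    """For each tracer, return Y_e at the last time its temperature
    exceeded T_cut (in GK); tracers that never reach T_cut are dropped.

    temps, yes : lists of 1D arrays (one per tracer), time-ordered.
    """
    out = []
    for T, Ye in zip(temps, yes):
        hot = np.nonzero(T > T_cut)[0]
        if hot.size:
            out.append(Ye[hot[-1]])
    return np.array(out)
```

The resulting values can then be binned to produce histograms like those in Fig. \ref{fig:Ye5GK}.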
In Panel b of Fig. \ref{fig:Ye5GK}, we show $Y_e$ distributions of all tracer particles when their temperature is last above 5 GK for simulations B15-r5, B15-r10, B15-r15, B15-r20 and B5-15-r10. Simulation B15-r5 mostly contains ejecta with $Y_e \sim 0.3$, albeit also showing both low- and high-$Y_e$ material, including an extremely low-$Y_e$ component at $0.1 < Y_e < 0.12$. Simulations B15-r10 and B15-r15 mostly show ejecta around $Y_e \sim 0.3$, although displaying significantly shallower distributions compared to the other simulations. Simulation B15-r20 contains similar ejecta masses around $Y_e \sim 0.3$ compared to B15-r10 and B15-r15, although showing a considerably wider distribution. For B5-15-r10, a lower peak around $Y_e \sim 0.24$ and significant low-$Y_e$ ejecta with $Y_e < 0.2$ are inferred. Although $r_{\mathrm{falloff}}$ enters Eq.~\eqref{eq:initialBfield} with a cubic power, the astrophysically relevant parameter range explored here is small, so a clear trend between $r_{\mathrm{falloff}}$ and $Y_e$ is harder to discern. Indeed, some of the histograms have broadly similar features, which is to be expected given that the changes introduced through $r_{\mathrm{falloff}}$ are slightly more subtle. The differences in the $Y_e$ distribution could arise from the variation of the falloff parameter and/or from differences in the flow structure that individual tracer particles advect along.
In Panel a of Fig. \ref{fig:abundancesall}, we show the fractional abundances as a function of mass number for simulations B13-r20, B14-r20 and B15-r20. We compute these abundances using the neutrino luminosity recorded by tracer particles for each simulation. As mentioned, the $Y_e$ distributions (see Fig. \ref{fig:Ye5GK}) for each simulation should be reflected in the inferred abundances, where $Y_e \lesssim 0.2$ ejecta causes a strong $r$-process, $0.25 \lesssim Y_e \lesssim 0.4$ results in insubstantial production of heavy nuclei ($A > 140$) and $Y_e \gtrsim 0.4 - 0.5$ causes a weak $r$-process \citep{2021arXiv211200772C}. Of main interest is the amount of heavy-nuclei production, for which B13-r20 shows the largest abundances for the majority of mass numbers. Simulation B14-r20 shows similar abundances in the heavy-nuclei regime, except in the range $140 < A < 155$. For B15-r20, considerably lower amounts of heavy nuclei are produced across nearly the entire $A > 140$ regime. The fractional abundances of these three simulations appear in line with the $Y_e$ distributions in Panel a of Fig. \ref{fig:Ye5GK}, as a larger $B_0$ leads to a decrease in heavy element production.
In Panel b of Fig. \ref{fig:abundancesall}, we show the fractional abundances as a function of mass number for simulations B15-r5, B15-r10, B15-r15, B15-r20 and B5-15-r10. We compute the abundances using the neutrino luminosities encountered by tracer particles. Notably, B15-r5 and B5-15-r10 display very similar abundances for $A > 140$, while also producing the largest fractions of heavy elements compared to the other simulations in this panel. However, the ejected material of simulation B15-r5 is sampled by only a small number of tracer particles, so its abundance computation rests on relatively low statistics, which may affect the relative abundances the tracers probe. For B15-r20 and B15-r15, similar heavy-nuclei production is inferred, albeit without forming significant amounts of $A > 140$ material. Simulation B15-r10 displays even fewer nuclei with $A > 140$, and its fractional abundance drops rapidly beyond $A \gtrsim 200$. The abundances and the $Y_e$ distribution are correlated as expected; however, a definitive trend between the abundances and $r_{\mathrm{falloff}}$ is hard to discern. This, similarly, could come down to the trajectories of tracer particles within each simulation or to the subtle impact of $r_{\mathrm{falloff}}$ on the outflow composition.
In Fig. \ref{fig:inferredkilonovae}, we show kilonova light curves in terms of the bolometric luminosities $L$ for all simulations, which we compute using outflow properties extracted at a radius of $r = 100 M_{\odot}$. In Panel a, we show the bolometric luminosities for simulations with varying $B_0$. Simulations B13-r20 and B14-r20 exhibit very similar light curves, where the latter shows a slightly brighter peak. Simulation B15-r20 contains significantly larger luminosity values throughout its evolution compared to B13-r20 and B14-r20.
In Panel b of Fig. \ref{fig:inferredkilonovae}, we show the bolometric luminosities obtained at $r = 100 M_{\odot}$, in this case for simulations B15-r5, B15-r10, B15-r15, B15-r20 and B5-15-r10. The brightest kilonova is produced by B5-15-r10, which shows both the largest luminosity peak and consistently larger $L$ compared to the other simulations, including B15-r20. For B15-r15 and B15-r20, very similar kilonova light curves and peak values are obtained. Simulations B15-r5 and B15-r10 also exhibit similar luminosity evolution, although the latter produces a significantly larger peak.
\section{Summary and conclusions}
\label{ch:summary-conclusions}
We have performed seven GRMHD simulations of an HMNS system with varying parameterized magnetic field strengths and configurations, to investigate their effects on the outflow properties, nucleosynthesis yields and kilonova light curves. Our simulations include an approximate neutrino treatment and a tabulated nuclear EOS.
Simulations B15-r20 and B5-15-r10, which contain the strongest magnetic fields, show the emergence of collimated, mildly-relativistic jets as opposed to magnetized winds only. Jets can emerge in the simulations as a result of the strong magnetic fields in addition to the incorporation of neutrino effects, as this reduces baryon pollution in the polar regions \citep[e.g.,][]{2020ApJ...901L..37M}. The jets are then collimated by hoop stresses from the strong toroidal magnetic field windup along the rotation axis of the remnant. For B5-15-r10 and B15-r20, we find multiple indications for the presence of mildly-relativistic jets. Most notably, these two simulations exhibit larger velocities of unbound material and mass ejecta rates (see Fig. \ref{fig:Vr-histograms-mdot}) compared to the other simulations. Moreover, the earlier collapse times of B5-15-r10 and B15-r20 (by $\sim 1.6$ ms, see Fig. \ref{fig:rhomaxvstime}) indicate that angular momentum is extracted more efficiently from the remnant, pointing towards jetted outflows, and the magnetic field morphologies are more structured in the polar region (see Fig. \ref{fig:2d-Bvec-B13-14-15} and Fig. \ref{fig:2d-Bvec-falloff}).
In order to estimate the total ejected mass during the simulations, we integrate the mass ejecta rate over the phase of quasi-steady state evolution. We then multiply by the ratio of the total simulation time to the duration of quasi-steady state evolution, to account for the full evolution of the HMNS system. We integrate over the phase of quasi-steady state evolution only, to exclude the variable mass ejecta rate behaviour in the early stages of the simulation. The quasi-steady state phase of $\dot{M}_{\rm ejecta}$ is different for each simulation (see Panels a and b in Fig. \ref{fig:Vr-histograms-mdot}); in all cases, however, we integrate from 10 ms up to the end of the simulation. This captures most or all of the quasi-steady state phase for the majority of simulations and allows for comparison between the estimated total ejecta masses, although for B5-15-r10 and B15-r20 the integration interval then (partly) covers a non-quasi-steady state phase. We also compute the average of the mass ejecta rates over the same time interval. We list the results in Table \ref{table:massejecta} for all seven simulations. The averaged ejecta mass and mass ejecta rates for B5-15-r10 are considerably larger than for all other simulations. This simulation, however, exhibits varying $\dot{M}_{\rm ejecta}$ behaviour throughout the evolution, meaning it does not reach a phase of quasi-steady state evolution before collapse. Despite simulations B15-r20 and B5-15-r10 both forming jets, we find much lower averaged $\dot{M}_{\rm ejecta}$ and $M_{\rm ejecta}$ values for the former, largely because its $\dot{M}_{\rm ejecta}$ decreases rapidly after $\sim 11$ ms. The averaged ejecta values, in combination with the $\dot{M}_{\rm ejecta}$ evolution and larger $v^r$ velocities of unbound material for B5-15-r10 compared to B15-r20 (see Fig. \ref{fig:Vr-histograms-mdot}), indicate that a considerably more powerful jet or magnetized wind emerges in the former simulation.
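Schematically, this integrate-and-rescale estimate can be written as follows (a sketch with a hypothetical function name and input arrays, not code from this work):

```python
import numpy as np

def estimate_total_ejecta(t_ms, mdot, t_start=10.0):
    """Integrate the mass ejecta rate (Msun/s) over the quasi-steady
    window [t_start, end] (times in ms), then rescale by the ratio of
    total simulation time to the window duration, as described above.
    Returns the estimated total ejecta mass (Msun) and the
    time-averaged ejecta rate (Msun/s) over the window."""
    t_ms, mdot = np.asarray(t_ms), np.asarray(mdot)
    mask = t_ms >= t_start
    t_s, md = t_ms[mask] * 1e-3, mdot[mask]   # window times in seconds
    # trapezoidal integral of Mdot over the window
    m_window = np.sum(0.5 * (md[1:] + md[:-1]) * np.diff(t_s))
    t_window = t_ms[mask][-1] - t_ms[mask][0]  # window duration, ms
    m_ejecta = m_window * (t_ms[-1] - t_ms[0]) / t_window
    mdot_avg = m_window / (t_window * 1e-3)
    return m_ejecta, mdot_avg
```

For a constant rate the rescaling simply extrapolates the window integral over the full simulation time.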
Except for B15-r15, all other simulations show significantly lower averaged $\dot{M}_{\rm ejecta}$ and $M_{\rm ejecta}$ compared to the jet-forming simulations. However, as we infer $M_{\rm ejecta} > 10^{-4} M_{\odot}$ for all simulations, even without jet formation the contribution of mass ejected from the HMNS is relevant when compared to the dynamical ejecta, for which $10^{-4} M_{\odot} < M_{\rm ejecta} < 10^{-2} M_{\odot}$ has been inferred \citep{2013PhRvD..87b4001H}. Furthermore, the results in Table \ref{table:massejecta} and Fig. \ref{fig:Vr-histograms-mdot} clearly show that for larger $B_0$ and $r_{\rm falloff}$, the total ejecta mass and mass ejecta rates increase considerably. Similarly, the radial velocity of unbound material, shown in Fig. \ref{fig:Vr-histograms-mdot}, increases significantly for larger values of the initial magnetic field parameters of the simulations.
\begin{table}
\centering
\begin{tabular}{ | c | c | c | }
Simulation & $M_{\rm ejecta}$ [$10^{-4}\:M_{\odot}$] & $\dot{M}_{\rm ejecta}$ [$10^{-2}\:M_{\odot}$ s$^{-1}$] \\
\hline
B15-r20 & 26.8 & 12.7 \\
B14-r20 & 5.2 & 2.3 \\
B13-r20 & 2.4 & 1.1 \\
B15-r5 & 3.4 & 1.6 \\
B15-r10 & 6.1 & 2.8 \\
B15-r15 & 17.1 & 7.8 \\
B5-15-r10 & 83.1 & 39.8
\end{tabular}
\caption{\label{table:massejecta} Total ejecta mass $M_{\rm ejecta}$ and averaged mass ejecta rates $\dot{M}_{\rm ejecta}$ from the HMNS outflows. For all simulations, both values are computed from 10 ms up to the end of the simulation time.}
\end{table}
In the absence of jet formation, changing the $r_{\rm falloff}$ and $B_0$ parameters of the simulations has similar effects. Namely, the non-jet-forming simulations B13-r20, B14-r20, B15-r5, B15-r10 and B15-r15 exhibit remarkably similar collapse times, differing by only $\sim 0.1 - 0.2$ ms or less (see Fig. \ref{fig:rhomaxvstime}). They also display reasonably similar mass ejecta rate evolutions (see Panels a and b in Fig. \ref{fig:Vr-histograms-mdot}). Such similarities could imply that magnetic field effects on outflow properties are small in the absence of jet formation. However, other quantities show that the magnetic field parameters considerably affect the outflow even when jets do not form. Firstly, the radial velocities of unbound material differ significantly between the five aforementioned simulations (see Fig. \ref{fig:Vr-histograms-mdot}). The $Y_e$ distributions and fractional abundances also show apparent dissimilarities. Another indication that the magnetic fields of these simulations considerably affect the outflow properties is that their averaged mass ejecta rates and total ejecta masses are significantly larger than in the purely hydrodynamical case without magnetic field conducted by \citet{2020ApJ...901L..37M} (based on a nearly identical simulation code to that of this work). They find a total ejected mass of $5.8 \times 10^{-5} M_{\odot}$ and an averaged mass ejecta rate of $2.4 \times 10^{-3} M_{\odot}$ s$^{-1}$ during quasi-steady state evolution. Even the lowest values of both quantities in Table \ref{table:massejecta}, those of B13-r20, are larger by factors of $\sim 4$ and $\sim 4.5$ for $M_{\rm ejecta}$ and $\dot{M}_{\rm ejecta}$, respectively.
The purely hydrodynamical simulation of \citet{2020ApJ...901L..37M} does show a BH collapse time of $\sim 23$ ms, very similar to that of the simulations in this work that form only magnetized winds. As mentioned, collapse times are partially dictated by the transport of angular momentum-carrying material out of the remnant system, and might therefore be expected to correlate with the total ejecta mass; however, while the ejecta mass differs considerably between the HD and MHD simulations, the collapse time does not. This implies that magnetized winds may be ineffective at transporting angular momentum out of the HMNS system compared to the mildly-relativistic jets, as the difference in collapse time between simulations with and without such magnetized winds is insignificant. Moreover, the jet-forming simulations B15-r20 and B5-15-r10 do contain larger $M_{\rm ejecta}$ while displaying significantly smaller collapse times. These results further strengthen the case that mildly-relativistic jets are more effective than magnetized winds at transporting angular momentum out of the remnant system.
Increasing $B_0$ by an order of magnitude seems to have significant effects on the $Y_e$ distributions of the ejecta (when the temperature is last above 5 GK, see Fig. \ref{fig:Ye5GK}) and on the $r$-process yields (see Fig. \ref{fig:abundancesall}). Namely, when increasing $B_0$, the $Y_e$ distribution seems to shift to larger values while the fractional abundances exhibit lower amounts of heavy element production. Such a trend does not seem to exist for $r_{\rm falloff}$, which is especially clear when comparing the fractional abundances. However, as mentioned, this may be caused by lower statistics for simulation B15-r5 (and possibly also B15-r10) due to the relatively small number of tracer particles, rather than being a consequence of a physical feature.
We have shown that the strength and specific configuration of the magnetic field in post-merger magnetars can lead to robust and sizeable effects in outflow properties, such as the mass ejecta rate and radial velocity of unbound material. Indeed, in two of the seven performed simulations, the larger values of the initial magnetic field strength and falloff result in the launching of mildly-relativistic jets, thus providing characteristic electromagnetic observables. Furthermore, the change in magnetic field parameters leads to profound effects on the abundance patterns and electron fractions, and hence on the kilonova light curves. We conclude, then, that the magnetic field strength and falloff have a significant imprint on the electromagnetic observables.
\section*{Acknowledgements}
The authors thank Luciano Rezzolla for helpful discussions regarding magnetic field configurations on merger remnants and Roland Haas for his insights regarding technical aspects of the BlueWaters supercomputer. The simulations were carried out on NCSA’s BlueWaters under allocation ILL\_baws, and TACC’s Frontera under allocation DD FTA-Moesta. The analysis of the simulations was carried out on SurfSara's Spider under the allocation EINF-2585.
\section*{Data Availability}
Simulation data is available upon reasonable request.
\bibliographystyle{mnras}
\bibliography{sources} %
\bsp %
\label{lastpage}
Title: A Mass-Magnitude Relation for Low-mass Stars Based on Dynamical Measurements of Thousands of Binary Star Systems
Abstract: Stellar mass is a fundamental parameter that is key to our understanding of
stellar formation and evolution, as well as the characterization of nearby
exoplanet companions. Historically, stellar masses have been derived from
long-term observations of visual or spectroscopic binary star systems. While
advances in high-resolution imaging have enabled observations of systems with
shorter orbital periods, stellar mass measurements remain challenging, and
relatively few have been precisely measured. We present a new statistical
approach to measuring masses for populations of stars. Using Gaia astrometry,
we analyze the relative orbital motion of $>3,800$ wide binary systems
comprising low-mass stars to establish a Mass-Magnitude relation in the Gaia
$G_\mathrm{RP}$ band spanning the absolute magnitude range
$14.5>M_{G_\mathrm{RP}}>4.0$, corresponding to a mass range of
$0.08$~M$_{\odot}\lesssim M\lesssim1.0$~M$_{\odot}$. This relation is directly
applicable to $>30$ million stars in the Gaia catalog. Based on comparison to
existing Mass-Magnitude relations calibrated for 2MASS $K_{s}$ magnitudes, we
estimate that the internal precision of our mass estimates is $\sim$10$\%$. We
use this relation to estimate masses for a volume-limited sample of
$\sim$18,200 stars within 50~pc of the Sun and the present-day field mass
function for stars with $M\lesssim 1.0$~M$_{\odot}$, which we find peaks at
0.16~M$_{\odot}$. We investigate a volume-limited sample of wide binary systems
with early K dwarf primaries, complete for binary mass ratios $q>0.2$, and
measure the distribution of $q$ at separations $>100$~au. We find that our
distribution of $q$ is not uniformly distributed, rather decreasing towards
$q=1.0$.
https://export.arxiv.org/pdf/2208.12112
\begin{document}
\title{A Mass-Magnitude Relation for Low-mass Stars Based on \\
Dynamical Measurements of Thousands of Binary Star Systems }
\author[0000-0002-0078-5288]{Mark R. Giovinazzi}
\affiliation{Department of Physics and Astronomy, University of Pennsylvania \\
209 South 33rd Street, Philadelphia, PA 19104 USA}
\author[0000-0002-6096-1749]{Cullen H. Blake}
\affiliation{Department of Physics and Astronomy, University of Pennsylvania \\
209 South 33rd Street, Philadelphia, PA 19104 USA}
\keywords{Binary Stars, Stellar Masses, Astrostatistics}
\section{Introduction} \label{sec:intro}
The measurement of stellar mass is important for addressing a wide range of scientific questions. The evolution of an isolated star is primarily determined by its initial stellar mass. While metallicity and angular momentum play a role, it is mass that dictates the path the star will take through the Hertzsprung-Russell (H-R) diagram as it evolves, the timescale of that evolution, and what will remain of the star after its fusion-powered lifetime. The Initial Mass Function is a central prediction of star formation theories, so measuring it is crucial to constraining them. The total stellar mass of the Milky Way and other galaxies is important for understanding the dynamics and evolution of the galactic environment. As the pace of exoplanet discovery has increased over the last decade, and the precision of the stellar measurements that enable the indirect detection of exoplanets has improved, precise knowledge of the host star mass is becoming an important limitation in efforts to characterize the bulk properties of these exoplanets.
Despite all these compelling scientific reasons to measure stellar mass, even today relatively few stellar masses have been measured directly in a model-independent way. This is because stellar mass remains extremely difficult to measure, with no known observation of a single star that directly and precisely constrains mass (see \citealt{Serenelli2021} for a review of the methods of measuring stellar mass across the H-R diagram). Direct, model-independent stellar masses can be measured with observations of stellar binary systems. Both detached, double-lined eclipsing binary systems and fully resolved visual binaries combined with stellar radial velocity (RV) measurements can, in principle, produce individual stellar mass estimates with $2\%$ accuracy or better (\citealt{Serenelli2021}). Particularly when extremely precise RVs and astrometry from Gaia can be combined, impressive precision in mass measurements can be obtained (see, for example, \citealt{Brandt2019}). However, there are precious few cases where stellar mass is directly measured using dynamical techniques with accuracy at the $<2\%$ level. Fewer than 600 stars in 300 eclipsing binary systems meet this criterion today in the catalog described by \cite{Southworth2015}. The cases where stellar age and metallicity are well-estimated in addition to mass, the so-called benchmark systems that can be directly compared to stellar evolutionary models, are far fewer in number. Other, usually less precise, model-independent methods for estimating stellar mass have also been demonstrated, including the use of photometric ``flicker" to infer surface gravity and hence mass (see, for example, \citealt{Stassun2018}), and in rare cases gravitational lensing effects (\citealt{Kluter202}).
Low-mass stars evolve relatively slowly, so it becomes possible to estimate mass directly from luminosity without degeneracies related to stellar evolution off the main sequence. For example, according to the MIST evolutionary tracks, a 0.7~M$_{\sun}$ star will change brightness in absolute Gaia $G_\mathrm{RP}$ magnitude by only 0.25 magnitudes between ages of 1 and 10~Gyr (\citealt{MIST2016}). A number of authors have developed empirical Mass-Magnitude relations that use precise, directly measured low-mass star masses to relate observed absolute magnitude to mass for stars on the lower main sequence (see, for example, \citealt{Delfosse2000}, \citealt{Mann_2019}, and references therein). Since infrared magnitudes are expected to be less sensitive to metallicity than optical magnitudes, \cite{Mann_2019} used precise dynamical mass measurements of 62 low-mass stars to develop a Mass-Magnitude relation using 2MASS $K_s$ magnitudes having an impressive internal precision of approximately $2\%$ across a mass range of $0.075\,\mathrm{M}_{\sun}<M<0.7\,\mathrm{M}_{\sun}$. \citet{Mann_2019} demonstrated that this relation is insensitive to metallicity over the typical range of metallicities found in the solar neighborhood. However, the applicability of this relation is somewhat limited by the modest depth of 2MASS ($K_s<14.0$, in general) and the relatively small number of reference mass measurements used to calibrate the relation.
The Gaia satellite provides photometric and astrometric measurements of tens of millions of low-mass stars \citep{Gaia2016, Gaia2021}. Statistically, many of these stars are expected to be in gravitationally bound systems (\citealt{Raghavan2010}), some of which will have projected separations large enough to be easily resolved by Gaia. In these cases, the projected \textit{relative} orbital motion of the system may be directly measured. Given the astrometric precision of Gaia (approximately 70~$\mu$as yr$^{-1}$ in proper motion at $G=17$), this relative orbital motion is well-measured for a large number of wide binary systems. The Gaia eDR3 release includes only linear components of the projected stellar motion, so it is not possible to solve for Keplerian orbits or directly estimate the total mass of the system with the current public Gaia data alone. However, given the large number of wide binary systems observed by Gaia, it does become possible to consider the overall properties of this relative orbital motion to directly constrain stellar mass in a statistical sense. A similar approach has been taken by \citet{Tokovinin2016} and \citet{Hwang2022} to constrain the overall eccentricity distribution of populations of binary stars.
Beginning with the catalog of Gaia binary systems assembled by \citet{Elbadry2021}, we carry out an analysis of relative orbital motion for 3,846 low-mass star systems to directly calibrate a modified linear Mass-Magnitude relation in the Gaia $G_\mathrm{RP}$ band that extends from the bottom of the main sequence up to 1.0~M$_{\sun}$ with typical errors of $<10\%$ on a per-object basis. In our analysis, we marginalize over the population of binaries that appear to have non-physical relative orbital motion under the assumption that the majority of our systems are gravitationally bound and contain no additional components beyond the binary pair. Our analysis is model-dependent only through an assumption about the underlying binary eccentricity distribution, which we find has a negligible impact on the resulting Mass-Magnitude relation. We apply our Mass-Magnitude relation to Gaia observations of 18,187 single low-mass stars to estimate the field mass function over a volume-limited sample. We also use a volume-limited sample of wide binaries having an early K-type primary star to estimate the distribution of the mass ratio $q$ and find that it decreases towards $q=1.0$ for our sample, which is complete for $q>0.2$.
In Section \ref{sec:muprime}, we discuss the relative astrometric orbits of resolved binary star systems in the context of precise proper motion measurements from Gaia. In Sections \ref{sec:samp_select} and \ref{sec:measure}, we describe the selection of a sample of wide binary systems from the Gaia eDR3 database following the work of \citet{Elbadry2021}. In Section \ref{sec:LLH}, we detail the statistical framework we use to infer stellar mass within subsamples of the larger wide binary sample. In Section \ref{sec:results}, we compare our mass estimates to others in the literature and estimate the internal precision of our Mass-Magnitude relation. In Section \ref{sec:apps}, we apply the modified linear Mass-Magnitude relation to estimate the mass function of field stars with masses $<1.0~\mathrm{M}_{\odot}$ within 50~pc of the Sun and the mass ratio distribution of a sample of wide binaries.
\section{Relative Binary Orbits} \label{sec:muprime}
For visual binary systems, where the orbital motion of one star relative to the other can be measured, observations spanning a significant portion of the orbital period can yield a direct, model-independent estimate of the total mass of the system through Kepler's third law. Since the barycentric motion of the system as it orbits the Milky Way is usually not known independently, and absolute positions of the stars are more difficult to measure than relative positions, it is most often this relative orbit that is considered. In some cases, observations of relative orbits over even relatively short observational baselines can yield excellent mass estimates (e.g. \citealt{lucy2014}, \citealt{Brandt2019}, \citealt{Pearce_2020}). This fundamental dynamical technique has been used for more than a century and forms the basis of our understanding of stellar mass across the H-R diagram (e.g. \citealt{80Tau}).
A full astrometric description of a relative Keplerian orbit requires six free parameters, so a single measurement of the projected (on the sky) relative motion and separation of two stars in a binary system does not enable a unique solution for the system (see, for example, \citealt{Pearce_2020}). However, measurements of the relative proper motions of spatially resolved binaries can be used to constrain the likely orbital parameters of the system in a statistical sense. For example, \citet{tokovinin1998}, \citet{shatsky2001}, \citet{Tokovinin2016}, and \cite{Hwang2022}, investigated the angle of the relative orbital motion in a binary system, which is denoted $\gamma$, and found that the distribution of this angle can be used to infer the properties of the eccentricity distribution of the overall binary population, or generate broad posteriors on the eccentricities of individual systems.
In addition to the angle of relative motion, the projected speed of the relative orbital motion, which we denote $\mu$, can be measured as the difference in the proper motions of the two stars. For a circular, equal-mass, face-on binary system of mass $M_\mathrm{tot}$, semi-major axis $a$, and distance $d$, the relative orbital motion of the system is given by
\begin{equation}
\mu = 2.0~\left(\frac{M_\mathrm{tot}}{\rm{M}_{\sun}}\frac{1000~\mathrm{au}}{a}\right)^{0.5}\left(\frac{100~\rm{pc}}{d}\right) \rm{mas}~\rm{yr}^{-1}.
\label{eqn:muscale}
\end{equation}
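As a quick numerical check of Eq.~\eqref{eqn:muscale}, the scaling can be evaluated directly (the helper name below is hypothetical):

```python
import math

def mu_circular_faceon(m_tot_msun, a_au, d_pc):
    """Projected relative orbital motion, in mas/yr, of a circular,
    face-on, equal-mass binary, per the scaling relation above:
    mu = 2.0 * sqrt(M_tot/Msun * 1000 au / a) * (100 pc / d) mas/yr."""
    return 2.0 * math.sqrt(m_tot_msun * 1000.0 / a_au) * (100.0 / d_pc)

# Fiducial case: a 1 Msun system with a = 1000 au at 100 pc moves at
# 2 mas/yr, comfortably above Gaia's proper-motion uncertainties.
print(mu_circular_faceon(1.0, 1000.0, 100.0))  # prints 2.0
```

The prefactor of 2.0 follows from Kepler's third law: the orbital speed is $2\pi\sqrt{M_\mathrm{tot}/a}$ in au~yr$^{-1}$, and 1 au subtends 10 mas at 100 pc.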
Given Gaia's reported uncertainties in proper motion, which are approximately $0.07$~mas~yr$^{-1}$ at $G=17$ and $0.5$~mas~yr$^{-1}$ at $G=20$ (\citealt{Gaia2021}), the relative orbital motion for wide binary systems can be robustly measured by Gaia across a range of distances and total system masses, even assuming that the \textit{relative} proper motion uncertainties are at least a factor of $\sqrt{2}$ higher than the values above.
In terms of quantities reported by Gaia, the relative proper motion of the binary system in au~yr$^{-1}$ is
\begin{equation}
\mu=\sqrt{\left[\Delta\left({\pi^{-1}\dot{\alpha}\cos\delta}\right)\right]^{2} + \left[\Delta\left(\pi^{-1}\dot{\delta}\right)\right]^2}
\label{eqn:mu}
\end{equation}
assuming known parallax $\pi$ and proper motions $\dot{\alpha}\cos{\delta}$ and $\dot{\delta}$ for both stars in the binary. This is the projection on the sky of the three-dimensional motion of one star relative to another. In general, this projected speed is a complex function of the eccentricity, phase, and orientation of the orbit to our line of sight (see Appendix A of \citealt{Pearce_2020} for a derivation of the projected relative orbital motion in terms of Keplerian orbital elements). For a circular, face-on orbit, $\mu$ will be constant, but in the general case $\mu$ is not a good estimate of the actual relative orbital velocity of the binary system. However, just as with $\gamma$, the statistical properties of $\mu$ for a population of systems can be used to make inferences about the overall properties of those systems, including total mass and eccentricity.
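In code, Eq.~\eqref{eqn:mu} amounts to scaling each star's proper motion by its own parallax and differencing the components in quadrature; a minimal sketch (hypothetical function name; \texttt{pmra} denotes $\dot{\alpha}\cos\delta$, with parallaxes in mas and proper motions in mas~yr$^{-1}$):

```python
import math

def relative_motion_au_per_yr(plx1, pmra1, pmdec1, plx2, pmra2, pmdec2):
    """Projected relative orbital speed mu, in au/yr, of a resolved
    binary: dividing a proper motion (mas/yr) by the star's parallax
    (mas) gives its transverse motion in au/yr, and the two stars'
    components are then differenced and added in quadrature."""
    d_ra = pmra1 / plx1 - pmra2 / plx2     # Delta(pi^-1 * pmra), au/yr
    d_dec = pmdec1 / plx1 - pmdec2 / plx2  # Delta(pi^-1 * pmdec), au/yr
    return math.hypot(d_ra, d_dec)
```

In practice one would also propagate the (correlated) Gaia uncertainties on both parallaxes and proper motions.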
\cite{Tokovinin2016} described a quantity $\mu'$ that is the ratio of $\mu$ to the expected orbital motion assuming that the system is circular and observed face-on, which we denote $\mu^*$ ($\mu'=\mu/\mu^{*}$). Following from Kepler's third law, $\mu^*=2\pi s^{-1/2}~M_\mathrm{tot}^{1/2}$, where $s$ is the current projected separation of the binary pair, in au, and $M_\mathrm{tot}$ is the total system mass, in solar masses, and the resulting units are au~yr$^{-1}$ assuming distance is known directly through a parallax measurement. A gravitationally bound binary star system with a Keplerian orbit will have $0\leq\mu'<\sqrt{2}$. For a given system, the distribution of $\mu'$ across all orbital phases can be simulated. Or, for a population of systems assumed to have the same $M_\mathrm{tot}$ and eccentricities drawn from the same distribution, the distribution we define as $D\equiv\mu' M_\mathrm{tot}^{1/2}$ (which is what is measured using astrometric data from Gaia), can be used to constrain mass and eccentricity for the population.
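The normalization just described can be sketched as follows (hypothetical helper names; separations in au, masses in M$_{\sun}$, motions in au~yr$^{-1}$):

```python
import math

def mu_star(s_au, m_tot_msun):
    """Expected motion (au/yr) of a circular, face-on binary with
    projected separation s (au) and total mass M_tot (Msun):
    mu* = 2 pi s^(-1/2) M_tot^(1/2), from Kepler's third law."""
    return 2.0 * math.pi * math.sqrt(m_tot_msun / s_au)

def mu_prime(mu_au_yr, s_au, m_tot_msun):
    """Normalized relative motion mu' = mu / mu*; a gravitationally
    bound Keplerian orbit satisfies 0 <= mu' < sqrt(2)."""
    return mu_au_yr / mu_star(s_au, m_tot_msun)
```

The $\sqrt{2}$ bound is simply the ratio of escape speed to circular speed, which is why $\mu' \geq \sqrt{2}$ flags non-physical (or unbound) relative motion.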
We simulate the distribution of $\mu'$, or $P(\mu'|\alpha)$ for known $M_\mathrm{tot}$, where $\alpha$ is a parameter that describes the overall distribution of orbital eccentricities as
\begin{equation}
P\left(e|\alpha\right)=\left(1+\alpha\right)e^\alpha.
\label{eq:e_given_alpha}
\end{equation}
We simulate $\mu'$ for $10^6$ random Keplerian systems with orbital eccentricities drawn from $P(e|\alpha)$, inclinations drawn from a uniform distribution in $\cos{i}$, argument of periastron, $\omega$, drawn from a uniform distribution between 0 and $\pi$, and orbital phase drawn uniformly between 0 and $2\pi$. Following Appendix A of \citet{Pearce_2020}, we find that $\mu'$ does not explicitly depend on the longitude of the ascending node, $\Omega$. We assume a log-normal period distribution from \citet{Raghavan2010}, though we note that $P(\mu'|\alpha)$ does not actually depend on the distribution of orbital periods. \cite{Hwang2022} used the relationship between eccentricity and the relative motion angle $\gamma$ to investigate the eccentricity distribution for wide binary systems observed by Gaia. They found that the best-fit eccentricity distribution varies from being consistent with uniform ($\alpha=0$) at small binary separations to super-thermal ($\alpha>1.0$) at larger separations of $>10^3$~au. We therefore consider a grid of $0.0\leq\alpha\leq2.0$ in steps of 0.1. Our simulation assumes that $M_\mathrm{tot}$ is known, so the simulated distribution of $\mu'$ also does not depend on the distribution of $M_\mathrm{tot}$. The distribution $P(\mu'|\alpha)$ is the distribution of observed $\mu'$ for a population of binary systems if $M_\mathrm{tot}$ and $\alpha$ are known \textit{a priori}. Some examples of $P(\mu'|\alpha)$ are shown in Figure \ref{fig:muprime_alpha}. Changing $\alpha$ changes the overall shape, as well as the median and mean, of the distribution of $\mu'$. Comparisons of $\mu'$ distributions for varying eccentricity parameterizations have been used in previous analyses as a probe to study observed populations of binary star systems (e.g., \citealt{2019MNRAS.488.4740P, 2022arXiv220502846P}).
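A minimal Monte Carlo sketch of this simulation, under the sampling assumptions just listed (hypothetical function names; eccentricities are clipped at $0.999$ for numerical stability of the Kepler solver, a sketch-level simplification):

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_kepler(M, e, n_iter=50):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration,
    starting from Danby's guess (vectorized over arrays M, e)."""
    E = M + 0.85 * e * np.sign(np.sin(M))
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def sample_mu_prime(alpha, n=100_000):
    """Draw mu' for n random Keplerian orbits with eccentricities from
    P(e|alpha) = (1+alpha) e^alpha, uniform cos(i), uniform omega in
    [0, pi], and uniform orbital phase. Units: a = 1 au, M_tot = 1 Msun,
    so P = 1 yr and velocities are in au/yr."""
    e = rng.uniform(size=n) ** (1.0 / (1.0 + alpha))  # inverse-CDF draw
    e = np.minimum(e, 0.999)       # clip extreme e (numerical stability)
    cosi = rng.uniform(size=n)
    w = rng.uniform(0.0, np.pi, size=n)
    E = solve_kepler(rng.uniform(0.0, 2.0 * np.pi, size=n), e)
    # perifocal position (au) and velocity (au/yr)
    x, y = np.cos(E) - e, np.sqrt(1.0 - e**2) * np.sin(E)
    fac = 2.0 * np.pi / (1.0 - e * np.cos(E))
    vx, vy = -fac * np.sin(E), fac * np.sqrt(1.0 - e**2) * np.cos(E)
    # rotate by omega in the plane, then foreshorten one axis by cos(i)
    cw, sw = np.cos(w), np.sin(w)
    X, Y = cw * x - sw * y, (sw * x + cw * y) * cosi
    VX, VY = cw * vx - sw * vy, (sw * vx + cw * vy) * cosi
    s = np.hypot(X, Y)             # projected separation (au)
    v = np.hypot(VX, VY)           # projected speed (au/yr)
    return v / (2.0 * np.pi / np.sqrt(s))  # mu' = v / mu*(s)
```

Histogramming `sample_mu_prime(alpha)` for each grid value of $\alpha$ yields the family of distributions $P(\mu'|\alpha)$; all bound draws satisfy $0 \leq \mu' < \sqrt{2}$.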
Despite the various cuts we make in selecting our sample, it is possible that undiagnosed systematic effects in eDR3 proper motions could bias our relative orbital motion measurements. Based on previous investigations of these issues in the literature \citep{cantat2021, Elbadry2021}, it is plausible that binary star systems that have tighter orbits or are closer to us may be more susceptible to these biases. We investigate possible separation-dependent systematic biases in our $\mu'$ distributions by comparing subsamples selected by physical separation and heliocentric distance. In Figure \ref{fig:muprime_alpha_twopanel}, we show the $\mu'$ distributions for our systems broken into bins of current physical separation (on either side of the median separation $s=1083$~au) and distance (on either side of the median distance $d=135$~pc) and find that the distributions of $\mu'$ are broadly consistent across these bins. Comparing the means of these distributions using a t-test, we find that the means (after excluding values $\mu'>1.5$) of the $s<1083$~au and $s>1083$~au $\mu'$ distributions, as well as the $d<135$~pc and $d>135$~pc $\mu'$ distributions, are consistent (null hypothesis of identical means has $p>0.05$ in both cases). We note that relative orbital motion biases in high-contrast systems may also be a concern, but these systems are excluded from our total system mass inference using our ``twin" sampling further described in Section \ref{sec:samp_select}.
By comparing the observed distribution of $\mu'$, which is derived from $D$ as a function of $M_\mathrm{tot}$, to $P(\mu'|\alpha)$ for a population of binary systems that are assumed to have the same $M_\mathrm{tot}$, we aim to directly estimate $M_\mathrm{tot}$ while marginalizing over $\alpha$. We will apply this technique to observations of binary star systems from Gaia to estimate an empirical Mass-Magnitude relation valid across the lower main sequence, $0.08~\mathrm{M_\odot}<M<1.0~\mathrm{M_\odot}$.
\section{Gaia Binaries - Sample Selection} \label{sec:samp_select}
With high-precision astrometry and photometry for nearly two billion stars, Gaia offers an unprecedented opportunity to study multiple star systems. \cite{Elbadry2021} carried out an analysis of the Gaia eDR3 database to identify pairs of stars having a high probability of being gravitationally bound based on common proper motion and proximity in three dimensions to produce a catalog of more than $10^6$ spatially resolved binaries. \cite{Elbadry2021} pare down the Gaia data set by considering only the systems that have parallaxes $>1~\mathrm{mas}$, fractional parallax uncertainties $<20\%$, absolute parallax uncertainties $<2~\mathrm{mas}$, and those where both stars have valid Gaia apparent $G$-band magnitudes. They also perform a robust analysis to estimate the likelihood that each system is a chance alignment (denoted as $\mathcal{R}$) rather than a true gravitationally bound system.
From the \cite{Elbadry2021} catalog, we keep systems that satisfy the recommended threshold for having ``high bound probability'' $\left(\mathcal{R} < 0.1\right)$, therefore removing pairs that have a non-negligible likelihood of being chance alignments. We follow \cite{Fabricius2021} and \cite{daltio2021} by removing systems where either star has re-normalized unit-weight error \texttt{ruwe}~$>1.4$, \texttt{ipd\_frac\_multi\_peak}~$>10$, \texttt{ipd\_gof\_harmonic\_amplitude}~$>0.1$, or \texttt{astrometric\_excess\_noise}~$>1.0$, all of which are known to be indicative of poor astrometric solutions. We exclude regions of high galactic extinction by requiring that $|b|>10^{\circ}$. We exclude white dwarfs by requiring that both stars in a pair have absolute magnitude $3<M_G<3.1\left(G_\mathrm{BP}-G_\mathrm{RP}\right)+5$. We also remove objects with apparent magnitude $G<5$ to avoid bright stars that may cause saturation. Since stars evolve with time, a Mass-Magnitude relation is generally more reliable for stars that evolve very slowly. For example, in the Gaia H-R diagrams shown in \citet{gaiahr2018} and \citet{daltio2021}, stars with absolute $G$ magnitude $M_G=2.0$ could be massive main sequence stars or lower mass stars that have evolved onto the giant branch. However, the lowest mass main sequence stars evolve on timescales comparable to the age of the universe. For this reason, we select binary systems where both stars have absolute magnitudes $M_G>4.0$, corresponding approximately to $M<1.0~\mathrm{M}_{\sun}$.
We exclude binaries with angular separations $s~<~4~\arcsec$ from our analysis to mitigate issues with astrometric or photometric contamination. While this is larger than the formal Gaia confusion limit for systems with small magnitude differences, in binaries with larger magnitude differences the measurements of the fainter companion could be biased. \citet{brandeker2019} simulated source detection effects and found that Gaia should have good sensitivity to systems with contrasts as large as $\Delta G=8.0$ at separations of $4\arcsec$. We apply a cut on the photometric measurement error by requiring that both stars in a pair have \texttt{phot\_rp\_mean\_flux\_over\_error}~$>10$. We enforce this photometric signal-to-noise cut only in the $G_\mathrm{RP}$ passband because that is the band in which we develop our Mass-Magnitude relation. We choose the red Gaia $G_\mathrm{RP}$ band, instead of the broad $G$ band, for three reasons. First, the lower main sequence stars we will investigate are relatively cool and have spectral energy distributions that peak at redder wavelengths ($G_{\rm{BP}}-G_{\rm{RP}}>1.0$). Second, the interstellar medium is more transparent to photons with redder wavelengths, so the $G_\mathrm{RP}$ photometry is less impacted by the effects of galactic extinction. Finally, we expect that the impact of metallicity on stellar flux is less pronounced at redder wavelengths (\citealt{MIST2016}). \citet{cantat2021} investigate magnitude-dependent biases in proper motions due to frame rotation in eDR3 and find a significant effect at $G=13$ that could bias our estimated relative orbital motion if the apparent magnitudes of the components straddle this boundary. To mitigate this effect, we exclude from our analyses any system where one component has $G<13.0$ while the other has $G>13.0$.
Since we ultimately will analyze twin systems where both components have similar magnitudes, this cut only removes a small number of systems.
Following all of the astrometric and photometric cuts described in this section, we have a catalog of 12,096 wide binary systems comprising lower main sequence stars $\left(M\lesssim1.0~\mathrm{M}_{\sun}\right)$ where both stars have high-quality Gaia measurements. For completeness, we re-simulate the $P(\mu'|\alpha)$ grid, assigning random distances drawn from the actual distance distribution of these remaining Gaia wide binary systems and applying both the angular separation cut at $s>4\arcsec$ and a physical separation cut at $s<5\times10^4$~au to the simulated systems.
As we will discuss in Section \ref{sec:LLH}, to simplify the statistical framework for inferring the total system masses of the wide binaries in our sample, in some of the subsequent analyses we will further restrict our sample to systems where the stars have similar absolute magnitudes, with $\Delta M_{G_\mathrm{RP}} < 0.5$. These systems will also be restricted to a binning scheme in which only systems where both stars fall within a predefined set of arbitrary half-magnitude boundaries are kept. This cut further reduces our sample to 3,846 twin systems that will be used for the total mass inference, while also helping to reduce the impact of possible separation- or magnitude-dependent systematics of the type discussed above.
\section{Gaia Binaries - Measurements}\label{sec:measure}
In the general case where the total system mass is not known, we directly measure a quantity from the Gaia position, parallax, and proper motion measurements that we denote $D$, defined as
\begin{equation}
D\equiv\mu' M_\mathrm{tot}^{1/2}=\frac{\mu}{2\pi} s^{1/2}.
\label{eqn:D}
\end{equation}
For each system, we estimate the uncertainty on $D$ through a Monte Carlo simulation, drawing $10^4$ samples from the reported Gaia measurements and uncertainties for right ascension, declination, the corresponding proper motion terms ($\dot{\alpha}\cos{\delta}$~and~$\dot{\delta}$), and the variance-weighted average parallax of the two stars to create an individual $D$ posterior for each system according to Equations \ref{eqn:mu} and \ref{eqn:D}. A small number of our systems have large angular separations, up to 30$'$, in which case projection effects in the observed angular motions of a physically co-moving pair of stars could be important. Following \citet{butkevish2014} we estimate the size of this effect given the actual locations and distances to the systems in our sample, and find that the differences in the observed proper motions are small, $<~1~\mathrm{\mu as}$~yr$^{-1}$ in almost all cases. Since the formal errors reported in eDR3 may be underestimated, we apply inflation factors to both the proper motions and parallaxes before calculating the statistical uncertainty on $D$ for each system. Following Figure 15 (right column) of \citet{Elbadry2021}, we inflate the reported parallax uncertainties by a factor of up to $\sim$1.3, depending on magnitude. For objects with apparent magnitudes $G<11$ we assume an inflation factor of 1.28, linearly decreasing to no error inflation at $G=18.0$. Following \citet{Brandt2021}, we apply an inflation factor of 1.37 to all the eDR3 proper motion uncertainties, independent of magnitude.
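As a concrete illustration of this measurement step, the following sketch propagates Gaia-like inputs through Equation \ref{eqn:D} with the inflation factors quoted above. The function name, the single per-component proper-motion uncertainty, and the unit conventions (relative proper motion in mas~yr$^{-1}$, separation in arcsec, parallax in mas, $\mu$ converted to au~yr$^{-1}$) are simplifying assumptions for illustration.

```python
import math
import random

MAS_PER_YR_TO_AU_PER_YR = 1.0e-3   # (mu [mas/yr]) * (d [pc]) -> au/yr

def sample_D(dmu_ra_mas, dmu_dec_mas, sigma_pm_mas, theta_arcsec,
             plx_mas, sigma_plx_mas, g_mag, n_draws=4000,
             rng=random.Random(0)):
    """Monte Carlo posterior for D = (mu / 2 pi) s^(1/2) given the relative
    proper motion (mas/yr), angular separation (arcsec), and parallax (mas).
    Applies the inflation factors quoted in the text: 1.37 on proper-motion
    errors; 1.28 on parallax errors at G < 11, falling linearly to 1 at G = 18."""
    pm_infl = 1.37
    if g_mag <= 11.0:
        plx_infl = 1.28
    else:
        plx_infl = max(1.0, 1.28 - 0.28 * (g_mag - 11.0) / 7.0)
    out = []
    for _ in range(n_draws):
        mra = rng.gauss(dmu_ra_mas, pm_infl * sigma_pm_mas)
        mdec = rng.gauss(dmu_dec_mas, pm_infl * sigma_pm_mas)
        plx = rng.gauss(plx_mas, plx_infl * sigma_plx_mas)
        d_pc = 1.0e3 / plx                          # distance from parallax
        mu_au_yr = math.hypot(mra, mdec) * MAS_PER_YR_TO_AU_PER_YR * d_pc
        s_au = theta_arcsec * d_pc                  # projected separation
        out.append(mu_au_yr / (2.0 * math.pi) * math.sqrt(s_au))
    return out
```

For example, a pair at 100~pc ($\varpi = 10$~mas) with $\theta = 10\arcsec$ and relative proper motion 0.5~mas~yr$^{-1}$ has $s = 1000$~au, $\mu = 0.05$~au~yr$^{-1}$, and a $D$ posterior peaked near $0.05\sqrt{1000}/2\pi \approx 0.25$.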
The uncertainties in $D$ for an individual system may be large given the magnitude of $\mu$ and the uncertainties in the Gaia proper motion and parallax measurements. For the purpose of selecting systems where $D$ is well-measured, we apply a cut on $\bar{\mu'}$, the expectation value of $\mu'$, and $\sigma_{\mu'}$, the uncertainty on $\mu'$, calculated from $D$ initially assuming $M_\mathrm{tot}=1~\mathrm{M_\odot}$. We exclude systems from our sample that have $\bar{\mu'}>3$, which is non-physical for bound binary systems, or $\sigma_{\mu'}>0.1$. This is effectively a selection on distance and apparent magnitude, which primarily determine the proper motion and parallax errors. Since $\sigma_{\mu'}$ only depends on Gaia astrometric uncertainties, and not on system orbital parameters, this cut does not bias the Mass-Magnitude relation we wish to derive. After making the cuts described in Sections \ref{sec:samp_select} and \ref{sec:measure}, we are left with a sample of 12,096 binary star systems, of which 3,846 have $\Delta M_{G_\mathrm{RP}}<0.5$, are within the bins described in Section \ref{sec:samp_select}, and are therefore in our twin subsample.
\section{Mass Inference} \label{sec:LLH}
An initial investigation of the measured values of $\mu'$ for the subset of our Gaia sample that is bright enough to have 2MASS $K_s$ magnitudes, and therefore potential mass estimates through the \citet{Mann_2019} Mass-Magnitude relation, revealed a significant population of systems (at least 10$\%$) with $\mu'$ significantly larger than the $\mu'=\sqrt{2}$ upper limit for bound, Keplerian orbits as seen in the simulations described in Section \ref{sec:muprime} and shown in Figure~\ref{fig:muprime_alpha}. It is possible that these systems are not actually bound, though given the relative proximity of these systems to the Sun and the detailed analyses carried out in \citet{Elbadry2021}, this seems unlikely to be a significant source of high $\mu'$ values. Alternatively, these systems could be higher-order multiples built from one or more unresolved binaries. Similarly, Jupiter-mass companions with semi-major axes of a few au orbiting either or both stars in a wide binary could induce apparent relative proper motion of a similar magnitude to the Keplerian motion of the wide binary itself. In these cases, the photometric mass estimate will in general be underestimated since, given the slope of the mass-luminosity relation, a relative increase in mass results in a much smaller relative increase in luminosity. \citet{Raghavan2010} explored multiplicity in a nearly volume-limited sample of stars and estimated that up to 25\% of apparent wide binary systems may in fact be triple or quadruple systems (see Table 16 in \citealt{Raghavan2010}). From long-term Doppler exoplanet surveys, we know that Jupiter-mass companions with semi-major axes of a few au are somewhat rare, occurring around approximately 6\% of Sun-like stars in the survey presented by \citet{wittenmyer2020}. 
While a tail of systems with $\mu'>1.4$ is therefore expected (cases where mass is \textit{underestimated} based on absolute magnitude), it remains difficult to explain systems with $\mu'\sim3.0$, which in this scenario would require the \citet{Mann_2019} photometric mass to be low by a factor of at least four. Independent of the physical explanation for this tail, it is clear that we will need to include this in our model of the observed $\mu'$ distribution.
By examining the observed $\mu'$ distribution for the cases where both stars in a binary have mass estimates from the \citet{Mann_2019} relation, we found that the overall distribution is well-described by the sum of our simulated $P(\mu'|\alpha)$ and an additional half-Gaussian centered at zero. Following the results of \citet{Hwang2022} for wide binary systems ($s>1000$~au for almost all of our systems), we began by assuming that $\alpha=1.3$. This introduces two new free parameters into our model: $N$, the overall normalization of the tail population, and $\sigma$, the width of the additional half-Gaussian. The overall model is then
\begin{equation}
\label{eq:model}
\label{eqn:model}
P(\mu'|\alpha, N, \sigma) = (1-N)\, P(\mu'|\alpha)+
N\frac{\sqrt{2}}{\sigma\sqrt{\pi}}\,e^{-\frac{\mu'^{2}}{2\sigma^2}}.
\end{equation}
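In code, this mixture is a one-liner. In the sketch below, `p_kepler` stands in for the simulated $P(\mu'|\alpha)$ (a hypothetical interface; any normalized density on $\mu'\geq 0$ works), and the half-Gaussian prefactor $\sqrt{2}/(\sigma\sqrt{\pi})$ guarantees the tail integrates to unity on $\mu'\geq 0$, so the mixture stays normalized for any $N$.

```python
import math

def p_model(mu_p, p_kepler, N, sigma):
    """Mixture density for mu': a (1 - N) weight on the simulated Keplerian
    density p_kepler plus an N-weighted half-Gaussian tail centered at zero.
    p_kepler must be a normalized density on mu' >= 0 (illustrative sketch)."""
    tail = (math.sqrt(2.0) / (sigma * math.sqrt(math.pi))) \
        * math.exp(-mu_p ** 2 / (2.0 * sigma ** 2))
    return (1.0 - N) * p_kepler(mu_p) + N * tail
```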
As discussed in Section \ref{sec:samp_select}, we focus our analyses on absolute $G_\mathrm{RP}$ magnitudes. To simplify our analysis, we sort the binary catalog to identify twin systems, which we define to have similar magnitudes ($\Delta G_\mathrm{RP}~<~0.5$~mag) and fall within the confines of our magnitude bins described in Section \ref{sec:samp_select}. Given the slope of the \citet{Mann_2019} $K_s$-band Mass-Magnitude relation, the variation in mass over a bin is expected to be small, less than $0.1$~M$_{\sun}$ per 0.5 magnitudes at 0.5~M$_{\sun}$. Our analysis could also be carried out by inferring the parameters of an empirical function that maps absolute $G_{\rm{RP}}$ magnitude to mass, therefore increasing the total number of systems in our sample by alleviating the half-magnitude binning constraint. However, in this approach the parameters of the Mass-Magnitude relation are effectively latent variables. This, combined with the fact that the masses of the two stars in a binary system may be correlated (that is to say the system mass ratio, $q$, is not necessarily uniformly distributed; see \citealt{elbadryq} and Section \ref{sec:apps} below), significantly complicates the statistical inference framework. Given the large total number of systems in our sample, the twin approach provides a straightforward path to estimating a Mass-Magnitude relation.
Following Bayes' Theorem, we wish to infer the posterior probability of the model given the data (we directly measure $D$ with Gaia, so that is the data), which in this case is
\begin{equation}
\label{eq:Bayes}
\resizebox{.905\hsize}{!}{$P(M_\mathrm{tot},\alpha, N, \sigma | D) \propto P(D|M_\mathrm{tot},\alpha, N, \sigma) P(M_\mathrm{tot},\alpha, N, \sigma)$.}
\end{equation}
Here, the second term on the right side of the proportionality contains our priors on the free parameters of the model. The first term on the right side of the proportionality is a likelihood that can be calculated directly from our measurements of $D$, as well as our simulations of $\mu'$ described in Section \ref{sec:muprime} and the model in Equation \ref{eq:model}. Given that, for fixed values of $\alpha$, $N$, and $\sigma$, $P(D|M_{\mathrm{tot}})=P(\mu'| M_\mathrm{tot}^{1/2})$, the log-likelihood can be expressed as the sum over the individual $D_i$ measurements within an $M_{G_{\rm{RP}}}$ bin as
\begin{equation}
\begin{aligned}
\mathcal{L} = \sum_{i}\ln \left[P(D_{i}|M_\mathrm{tot}^{1/2}, \alpha, N, \sigma) \right].
\end{aligned}
\end{equation}
We divide our binary twin sample into 16 equally spaced bins across the range $12.0>M_{G_\mathrm{RP}}>4.0$ and carry out a Markov Chain Monte Carlo analysis to sample from the posterior probabilities on $M_\mathrm{tot}$, $\alpha$, $N$, and $\sigma$ given the systems in each bin. Here, we are implicitly assuming that within each bin $M_\mathrm{tot}$ is twice the mass of either individual star. We employ a Metropolis-Hastings algorithm and generate chains with $10^5$ steps, removing the initial $10^3$ steps as burn-in. We linearly interpolate within the simulated grid of $P(\mu'|\alpha)$ described in Section \ref{sec:muprime} to evaluate $P(\mu'|\alpha)$ for specific values of $\alpha$ and enforce normalization of the model from Equation \ref{eq:model} for each set of trial parameters. We assume uniform priors on all free parameters as follows: $0.0<\alpha<2.0$, $0.0~\mathrm{M_\odot}<M_\mathrm{tot}<3.0~\mathrm{M_\odot}$, $0.0<N<0.5$, and $0.1<\sigma<3.0$. Examples of fits to individual $M_{G_\mathrm{RP}}$ bins are shown in Figure \ref{fig:P_D}. For each bin, we calculate the modal value of the $M_\mathrm{tot}$ posterior along with a 68\% highest density credible interval. The resulting individual mass estimates are hereafter denoted as $M_\mathrm{GORP}~=~M_\mathrm{tot}/2$.
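The sampler described above can be sketched as a generic Metropolis-Hastings loop over $(M_\mathrm{tot}, \alpha, N, \sigma)$ with uniform box priors. This is a minimal illustration under stated assumptions (Gaussian proposals with a fixed fraction of each prior range; the likelihood interface is hypothetical), not the authors' implementation.

```python
import math
import random

def metropolis(log_like, bounds, n_steps=20000, burn_in=1000,
               step_frac=0.05, rng=random.Random(1)):
    """Metropolis-Hastings sampler with uniform priors on the box `bounds`
    (a list of (lo, hi) pairs, e.g. for M_tot, alpha, N, sigma). `log_like`
    maps a parameter vector to the summed log-likelihood within a bin."""
    theta = [0.5 * (lo + hi) for lo, hi in bounds]
    lp = log_like(theta)
    chain = []
    for _ in range(n_steps):
        prop = [t + rng.gauss(0.0, step_frac * (hi - lo))
                for t, (lo, hi) in zip(theta, bounds)]
        # Uniform priors: reject any proposal that leaves the box
        if all(lo <= p <= hi for p, (lo, hi) in zip(prop, bounds)):
            lp_prop = log_like(prop)
            if math.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
        chain.append(list(theta))
    return chain[burn_in:]    # discard initial steps as burn-in
```

In the analysis described above, `log_like` would interpolate the simulated $P(\mu'|\alpha)$ grid, form the mixture model, and sum the log-probabilities of the $D_i$ in a bin; marginal posteriors then follow from histograms of the chain.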
For the lowest two mass bins we consider, $11.5~>~M_{G_\mathrm{RP}}~>~11.0$ and $12.0~>~M_{G_\mathrm{RP}}~>~11.5$, there are too few twin systems in our wide binary sample for our analysis to work, so instead we consider cases where one star has absolute magnitude within one of these low-luminosity bins and the other has $10.5>M_{G_\mathrm{RP}}>10.0$, fixing the mass of the more massive star as determined above. For these systems, we include the statistical uncertainty derived from the posterior on stellar mass for the more massive stars in the estimation of the credible interval for the mass of the less massive star.
We evaluate the robustness of our modeling framework to the influence of unseen companions, either stellar or sub-stellar, through a simulation that injects random additional relative motion into our $\mu$ measurements to assess the impact on the resulting estimates of $M_\mathrm{tot}$. As discussed earlier in this section, an unseen companion will tend to bias our mass estimates by increasing the measured relative orbital motion. We find that for additional stellar motion at the level of up to 0.04 au yr$^{-1}$, our mass estimates are not significantly biased at the level of our statistical uncertainties.
\section{Results} \label{sec:results}
In Figure \ref{fig:mass_mag} we compare our individual estimates of $M_\mathrm{GORP}$ based on the twin sample across 16 bins in $M_{G_{\rm{RP}}}$ to mass estimates based on the \citet{Mann_2019} $K_s$ Mass-Magnitude relation, MIST evolutionary models at 5~Gyr from \citet{MIST2016}, and 1,617 isochrone-based mass estimates from \citet{brewer2016}. We find excellent agreement with our $M_\mathrm{GORP}$ mass estimates, given typical statistical mass uncertainties derived from the posteriors on $M_\mathrm{GORP}$ of $\sim10\%$ for $M>0.2~\mathrm{M}_\odot$, rising to $>25\%$ at lower masses. Examples of individual posterior distributions and their 68\% highest density credible intervals for three of our absolute magnitude bins are shown in Figure \ref{fig:MGRP_posteriors}.
For the parameter $N$, the normalization of the tail component of our model, we find consistent values of $N=0.22\pm0.07$ across the mass range we consider. While the physical interpretation of this is difficult, we note that this value is consistent with the relative frequencies of double and triple systems found in \citet{Raghavan2010}. Our data do not place strong constraints on $\alpha$, but across our mass range we find that $\alpha$ is consistent with $1.19\pm0.56$. While \cite{Hwang2022} found in their analyses that $\alpha$ was constrained to approximately $1.25\pm0.25$ for wide ($>1000$~au) systems, we note that their analysis involved a much broader range of binary types and was not broken down by stellar spectral type, mass, or evolutionary state. Given this, our results on the eccentricity distribution appear broadly consistent with \cite{Hwang2022}. For the parameter $\sigma$, the width of the tail distribution, we find $\sigma=0.85\pm0.30$ with no correlation with $M_\mathrm{GORP}$.
We evaluate analytic relations between absolute $G_\mathrm{RP}$ magnitude and the $M_\mathrm{GORP}$ masses. Because of the small number of faint binary systems where both stars pass our photometric and astrometric cuts, our sample only includes systems with $M_{G_\mathrm{RP}}<12.0$, but hydrogen-burning stars can be significantly fainter. For the purposes of these fits, we follow \citet{rayle2018} and fix the magnitude of an object near the hydrogen burning limit with a mass of $0.08\pm0.01$~M$_{\sun}$ at absolute magnitude $M_{G_\mathrm{RP}}=14.5$. There is a relatively sharp break in our estimated masses around $M_{G_\mathrm{RP}}=9.5$ that makes it difficult to fit our results with a single low-order polynomial. Empirically, we find that our data are well-fitted by the following modified linear function, which enforces a smooth transition to lower masses at $M_{G_\mathrm{RP}}>9.5$ by using the error function, $\mathrm{erf}(x)$:
\begin{equation}
\resizebox{.905\hsize}{!}{$
\log_{10}(M_\mathrm{GORP}) = a + b M_{G_\mathrm{RP}}-c \left[1+\mathrm{erf}\left(M_{G_\mathrm{RP}}-9.5\right)\right]$}
\label{eqn:modlinear}
\end{equation}
While this functional form is not physically motivated, it describes the data well with fewer degrees of freedom than a comparable polynomial relation. Based on a $\Delta \rm{AIC}>1.1$, this modified linear model is slightly preferred over a polynomial fit of first order alone (and strongly preferred over higher-order polynomial fits; see \citealt{ivezic2019}). The posteriors on $\log_{10}\left(M_\mathrm{GORP}\right)$ are approximately Gaussian, so we use least-squares optimization to estimate the best-fit values for the three model parameters in Equation~\ref{eqn:modlinear}. We use a Markov Chain Monte Carlo approach to generate 10$^{5}$ samples from the posteriors of the three parameters using a Metropolis-Hastings sampler. Based on the modal values of the posteriors we find $a=0.445$, $b=-0.097$, and $c=0.075$, valid over the range $14.5>M_{G_\mathrm{RP}}>4.0$. Given the covariances between these parameters, the uncertainty in the Mass-Magnitude relation is not well-described by the uncertainties on the parameters as derived from the individual posteriors. Point estimates and corresponding uncertainties based on Monte Carlo simulations are given in Table \ref{table:1}. These values can be interpolated to provide point estimates for mass within the range $14.5>M_{G_\mathrm{RP}}>4.0$.
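Evaluating Equation \ref{eqn:modlinear} with the modal parameter values quoted above is straightforward; the sketch below reproduces the tabulated point estimates at the endpoints to within a few thousandths of a dex.

```python
import math

# Modal parameter values quoted in the text
A, B, C = 0.445, -0.097, 0.075

def log10_mass_gorp(m_grp):
    """Modified linear Mass-Magnitude relation, valid for
    4.0 < M_GRP < 14.5; returns log10 of the mass in solar masses."""
    return A + B * m_grp - C * (1.0 + math.erf(m_grp - 9.5))
```

The $\mathrm{erf}$ term is negligible for $M_{G_\mathrm{RP}} \ll 9.5$ (the relation is linear there) and asymptotes to a constant offset of $-2c$ for $M_{G_\mathrm{RP}} \gg 9.5$, which is how the fit accommodates the break without a higher-order polynomial.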
\begin{deluxetable}{lcc}
\tablecaption{Mass-Magnitude Relation}
\label{table:1}
\tablehead{\colhead{$M_{G_\mathrm{RP}}$} & \colhead{$\log_{10}M$ (M$_{\sun}$)} & \colhead{$\sigma_{\log_{10}M}$}}
\startdata
4.0 & 0.058 & 0.028 \\
5.0 & -0.039 & 0.019 \\
6.0 & -0.136 & 0.015 \\
7.0 & -0.233 & 0.015 \\
8.0 & -0.333 & 0.020 \\
9.0 & -0.463 & 0.016 \\
10.0 & -0.637 & 0.030 \\
11.0 & -0.778 & 0.040 \\
12.0 & -0.867 & 0.037 \\
13.0 & -0.964 & 0.036 \\
14.5 & -1.110 & 0.039 \\
\enddata
\tablecomments{Point estimates and uncertainties for $\log_{10}{M}$ based on the modified linear Mass-Magnitude relation described in Section \ref{sec:results}.}
\end{deluxetable}
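Since the tabulated values are intended for interpolation, a minimal helper (illustrative; simple linear interpolation in $\log_{10} M$ between the tabulated rows) is:

```python
import bisect

# (M_GRP, log10 M, sigma_log10 M) rows of the Mass-Magnitude table
TABLE = [
    (4.0, 0.058, 0.028), (5.0, -0.039, 0.019), (6.0, -0.136, 0.015),
    (7.0, -0.233, 0.015), (8.0, -0.333, 0.020), (9.0, -0.463, 0.016),
    (10.0, -0.637, 0.030), (11.0, -0.778, 0.040), (12.0, -0.867, 0.037),
    (13.0, -0.964, 0.036), (14.5, -1.110, 0.039),
]
MAGS = [row[0] for row in TABLE]

def log10_mass_interp(m_grp):
    """Linearly interpolate log10(M/Msun) between tabulated rows,
    for 4.0 <= M_GRP <= 14.5."""
    if not MAGS[0] <= m_grp <= MAGS[-1]:
        raise ValueError("outside the calibrated range 4.0 <= M_GRP <= 14.5")
    i = max(1, bisect.bisect_left(MAGS, m_grp))
    (x0, y0, _), (x1, y1, _) = TABLE[i - 1], TABLE[i]
    return y0 + (y1 - y0) * (m_grp - x0) / (x1 - x0)
```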
In \citet{Mann_2019} and other examples of Mass-Magnitude relations in the literature (e.g. \citealt{1993AJ....106..773H, Delfosse2000}), photometric mass estimates are typically calibrated against a small set of stars (62 in the case of \citealt{Mann_2019}) that have precise dynamical mass estimates. We searched the literature for visual binary systems that are wide enough to be well-resolved by Gaia and have precise (better than $5\%$ precision) mass measurements, either total for the system or for individual components. Unfortunately, we identified fewer than a dozen such systems.
The strong agreement between $M_\mathrm{GORP}$ and $M_\mathrm{Mann}$ is clear from Figure \ref{fig:mass_mag}. Since the \citet{Mann_2019} masses are expected to have very small internal errors, at the level of $\sim2\%$, by directly comparing $M_\mathrm{GORP}$ and $M_\mathrm{Mann}$ we can assess the internal errors of the $M_\mathrm{GORP}$ estimates derived here. Using the same photometric and astrometric quality cuts described in Section \ref{sec:samp_select}, we identified a sample of individual Gaia stars that also had 2MASS photometry in the absolute magnitude range recommended by \cite{Mann_2019} for making reliable mass estimates $\left(10.5>M_{K_{s}}>4.5\right)$. Additionally, we only selected stars with $K_s<14.0$ to mitigate the effects of photometric errors on the resulting mass estimates. In total, we selected $\sim$30,000 stars and calculated the distribution of the quantity $\Delta M \equiv M_\mathrm{GORP}-M_\mathrm{Mann}$ across 12 absolute $K_{s}$ magnitude bins spanning the aforementioned range, each 0.5 magnitude in width. The distribution of $\Delta M$ within each bin is approximately Gaussian, so its standard deviation is equivalent to the sum in quadrature of $\sigma_\mathrm{GORP}$ and $\sigma_\mathrm{Mann}$. Therefore, we estimate the internal uncertainty in our relation as $\sigma_\mathrm{GORP}=\left[\sigma_{\Delta M}^2 - \sigma_\mathrm{Mann}^2\right]^{1/2}$. As shown in Figure \ref{fig:gorp_mann_rel_err}, we find that the uncertainty in our mass estimates relative to \citet{Mann_2019}, $\sigma_\mathrm{GORP}/M_\mathrm{Mann}$, is approximately $5-10\%$ over most of our mass range, though it increases below 0.1~M$_{\sun}$. These values are consistent with the mass posteriors derived in Section \ref{sec:LLH}. For $M\lesssim 0.1~\mathrm{M}_\odot$, our $M_\mathrm{GORP}$ mass estimates are somewhat biased high, at the $\sim15\%$ level, relative to \citet{Mann_2019}.
\section{Applications}\label{sec:apps}
The substantially greater depth of Gaia compared to 2MASS allows us to extend an accurate Mass-Magnitude relation for low-mass stars over a much larger volume than the \citet{Mann_2019} relation alone, particularly for the lowest mass stars. There are many potential applications of this relation, including estimating the field mass function and studying the relationship between stellar mass and orbital parameters in binary systems.
We apply the same astrometric and photometric cuts described in Section \ref{sec:samp_select} to select a volume-limited sample of (apparently) single lower main sequence stars with $10~\mathrm{pc}<d<50~\mathrm{pc}$. The inner limit was chosen to avoid issues related to saturation for the brightest stars in our sample, and the outer limit was chosen so that the faintest main sequence stars, down to $M_{G_\mathrm{RP}}=14.5$, will be relatively bright and detected with \texttt{phot\_rp\_mean\_flux\_over\_error}~$>10$, and the parallaxes for all of the objects will be well-measured. We use our derived Mass-Magnitude relation (Equation \ref{eqn:modlinear}) and associated parameter uncertainties to estimate the field mass function between $0.1~\mathrm{M}_{\sun}<M<1.0~\mathrm{M}_{\sun}$ in units of stars~pc$^{-3}$~M$_{\sun}^{-1}$ as shown in Figure \ref{fig:mass_function}. In this calculation, we ignore the photometric and parallax uncertainties for individual stars, which for the subsample considered here are small compared to the uncertainty in the Mass-Magnitude relation. We find that the field mass function peaks at approximately 0.16~M$_{\sun}$, broadly consistent with other results in the literature including \citet{kroupa93}, \citet{rana1987}, and \citet{sollima2019}.
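The mass-function estimate described above amounts to a histogram of inferred masses divided by the survey volume and bin width. The sketch below is schematic: it assumes a complete spherical shell over $10\,\mathrm{pc}<d<50\,\mathrm{pc}$ and ignores the $|b|>10^{\circ}$ footprint cut, which in practice reduces the surveyed volume.

```python
import math

def field_mass_function(masses, edges, d_min_pc=10.0, d_max_pc=50.0):
    """Number density of stars per unit mass (stars pc^-3 Msun^-1) from a
    volume-limited sample: a histogram of masses divided by the
    spherical-shell volume and by each bin's width (schematic sketch)."""
    volume = 4.0 / 3.0 * math.pi * (d_max_pc ** 3 - d_min_pc ** 3)
    counts = [0] * (len(edges) - 1)
    for m in masses:
        for j in range(len(counts)):
            if edges[j] <= m < edges[j + 1]:
                counts[j] += 1
                break
    return [c / (volume * (edges[j + 1] - edges[j]))
            for j, c in enumerate(counts)]
```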
We have not made an attempt to correct this mass function for the effects of unresolved binaries. Following the results of \citet{Raghavan2010} by assuming an overall multiple fraction of 40\% in the stellar mass range considered here, integrating over a log-normal distribution of period (see Figure 13 of \citealt{Raghavan2010}), and assuming a typical total system mass of $M_\mathrm{tot}=1.0$~M$_\odot$, approximately 25\% of our stars could be unresolved binaries given the Gaia small-separation detection limits described in \citet{zeigler2018} and \citet{brandeker2019}. The effect of unresolved binaries on our mass function is difficult to quantify. While there is some evidence that wide binaries may be preferentially of equal mass ( \citealt{elbadryq}), the unresolved systems in our sample will on average have much smaller semi-major axes than the resolved binary systems. At the same time, on the lower main sequence a companion near or below the hydrogen-burning limit will contribute negligibly to the total flux of the system. So, in our analysis an unresolved binary will have an overestimated mass, by up to 25\% for an equal-mass system given the slope of our Mass-Magnitude relation, and will result in counting one slightly more massive star instead of two lower mass stars. \citet{sollima2019} approached estimating the mass function of a similar sample of lower main sequence stars using isochrones and a stellar multiplicity model to forward-model the observed Gaia color-magnitude diagram. The results presented here are consistent with the much more sophisticated analysis of \citet{sollima2019} in the limit of an unresolved binary fraction less than $40\%$, which is what we expect based on previous binarity surveys (see Figure 4 in \citealt{sollima2019}). 
As noted by \citet{sollima2019}, it is also possible that some unresolved binaries with photocentric motion or complex point spread functions are excluded from our sample by the various astrometric quality cuts we apply.
Our Mass-Magnitude relation can also be used to study the properties of the wide binary systems in our sample. For example, the distribution of binary mass ratios could be used to constrain theories and simulations of star formation. While some authors have found that the binary mass ratio, $q$, is approximately uniformly distributed for certain populations of binary systems (for example, \citealt{Raghavan2010} or \citealt{tokovinin2014}), others have found evidence for a distribution that peaks at smaller values of $q$ (see \citealt{DM1991}, \citealt{gullikson2016}, \citealt{moe2017}), or a distribution that has a clear excess of systems at $q=1.0$ (see \citealt{elbadryq}). In each of these cases, different populations of systems, covering different total system masses and separations and with different selection biases, are being studied.
In an effort to study mass ratios in a sample of wide binaries that is relatively free from selection biases, we select binaries from our catalog that have projected separations $>100$~au, are within 210~pc, and have a primary (more massive) star in the mass range $0.8~\mathrm{M}_\odot<M<0.9~\mathrm{M}_\odot$ (early-K spectral type) using all of the photometric and astrometric quality cuts described in Section \ref{sec:samp_select}, but without the cut on $\sigma_{\mu'}<0.1$. Following \citet{brandeker2019}, we further select systems with angular separations $s>8\arcsec$ to avoid detection biases in high-contrast systems. By selecting distance less than 210~pc and primary stars in this mass range, we ensure that, given our photometric quality cuts, we are sensitive to all systems with mass ratios $q>0.2$. Based on 1,506 wide binary systems passing our selection criteria, we calculate the distribution of $q$ as shown in Figure \ref{fig:q_sep}. We find that the distribution of mass ratios peaks at $q=0.2$ and decreases towards $q=1.0$, which for K-type primaries is consistent with \cite{2013MNRAS.430L...6G}, though we note that our sample is not necessarily volume-limited for mass ratios below this. Since our sample is approximately volume-limited down to $M_{G_\mathrm{RP}}=10.75$, we do not expect the angular separation cut at $s>8\arcsec$ to produce different average total (or primary) stellar masses across our range in projected separation.
\section{Conclusion} \label{sec:concl}
Precise estimates of stellar masses are key to our understanding of a range of scientific topics from the physical characteristics of exoplanets to the formation and evolution of stars. However, these measurements remain difficult to make even today and typically rely on time-consuming observations of short-period binary star benchmark systems. On the lower main sequence, where stars evolve relatively slowly, it is possible to infer mass from luminosity alone. We derive a Mass-Magnitude relation using the Gaia $G_\mathrm{RP}$ passband. Compared to previously published Mass-Magnitude relations based on 2MASS or $V$-band photometry of relatively bright stars and tied to a small number of precise dynamical measurements, our approach is internally calibrated using the orbital motions of a large number of wide binary systems and is applicable over the large volume probed by Gaia.
Starting from the catalog of wide binary star systems derived from Gaia eDR3 by \citet{Elbadry2021}, we apply a number of cuts to ensure high-quality photometric and astrometric measurements, selecting a sample of 12,096 binary systems across the lower main sequence, and analyze the subset of 3,846 pairs having similar magnitudes ($\Delta M_{G_\mathrm{RP}}<0.5$). For these systems, we measure the projected orbital motion of one star relative to the other as the difference in the measured proper motions, and compare this to simulations of projected orbital motion to infer the total mass of the twin systems (assumed to be twice the mass of the individual stars) in bins of absolute $G_\mathrm{RP}$ magnitude while marginalizing over the eccentricity distribution of the population. Using a Bayesian framework, we estimate stellar mass as a function of absolute $G_\mathrm{RP}$ magnitude for $12.0>M_{G_\mathrm{RP}}>4.0$ with typical internal uncertainties of $5-25\%$ depending on the mass. Assuming a literature value for the absolute magnitude of a star at the hydrogen-burning limit, we fit for a modified linear relation between $M_{G_\mathrm{RP}}$ and $\log_{10}({M_\mathrm{GORP}})$ across the range $14.5>M_{G_\mathrm{RP}}>4.0$.
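The relative proper motion underlying this measurement converts to a plane-of-sky velocity difference through the standard relation $v_t \simeq 4.74\,\Delta\mu\,d$ (with $\Delta\mu$ in arcsec/yr and $d$ in pc). A minimal sketch of that conversion; the function name and the mas/yr input convention are our own choices, not taken from the paper:

```python
AU_KM = 1.495978707e8     # kilometres per astronomical unit
YEAR_S = 3.15576e7        # seconds per Julian year
K = AU_KM / YEAR_S        # ~4.7405 km/s per (arcsec/yr * pc)

def projected_velocity_kms(dmu_mas_yr: float, distance_pc: float) -> float:
    """Plane-of-sky velocity difference (km/s) implied by a proper-motion
    difference dmu (mas/yr) between the two components at distance d (pc)."""
    return K * (dmu_mas_yr / 1000.0) * distance_pc
```

A 1 mas/yr proper-motion difference at 100 pc thus corresponds to roughly 0.47 km/s of projected relative velocity, the scale of orbital motion in wide binaries.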
Our $M_{G_\mathrm{RP}}$ range encompasses the absolute magnitude range of the Mass-Magnitude relation presented by \cite{Mann_2019}, which allows us to directly compare our $M_\mathrm{GORP}$ masses to $M_\mathrm{Mann}$ masses with a sample of stars having measured $K_s$ magnitudes from 2MASS within the magnitude range recommended by \citet{Mann_2019}. To do this, we cross-match Gaia and 2MASS to find all well-measured stars with $10.5>M_{K_s}>4.5$. We find that our mass estimates for individual objects are consistent with those from \cite{Mann_2019} at the $5-10\%$ level for $M>0.1~\mathrm{M}_{\sun}$, though we note that our masses appear to be somewhat biased relative to \citet{Mann_2019} in the $M<0.2~\mathrm{M}_{\sun}$ range. The Mass-Magnitude relation presented here extends the reach of photometric mass estimation to more than 30 million individual stars in the Gaia eDR3 database, nearly an order of magnitude more objects than in the sample where the \citet{Mann_2019} relation can be directly applied. We use our results to estimate the field stellar mass function in the solar neighborhood for stars within $10-50$~pc of the Sun down to the hydrogen-burning limit, and find that the mass function peaks at a mass of $0.16~\mathrm{M}_{\sun}$, consistent with previous results in the literature. We also use our Mass-Magnitude relation to study the mass ratios of wide binary systems with early K-type dwarf primaries. For this population, we find that the distribution of $q$ is not consistent with uniform, instead decreasing toward $q=1.0$.
\begin{acknowledgments}
We thank the referee, Tim Brandt, for insightful comments that helped to improve this manuscript. We also thank Gary Bernstein, Mike Jarvis, Bhuvnesh Jain, Cyrille Doux, and Marco Raveri for helpful conversations that improved this work. MRG would like to acknowledge support from the NSF through a Graduate Research Fellowship.
\end{acknowledgments}
\vspace{5mm}
\facilities{Gaia, 2MASS}
\software{\texttt{astropy} \citep{astropy2013, astropy2018}}
\bibliography{sample631}{}
\bibliographystyle{aasjournal}
|
Title:
Muons in showers with energy $E_{0} \geq$ 5 EeV and QGSjetII-04 and EPOS LHC models of hadronic interactions. Is there a muon deficit in the models? |
Abstract: The paper presents data on the muon component with a threshold
\(\varepsilon_{thr} \geq\) 1 GeV. Air showers were registered at the Yakutsk
array during almost 50 years of continuous air shower observations. The
characteristics of muons are compared with calculations of QGSjetII-04 and EPOS
LHC models for a proton and an iron nucleus. There is a muon deficit in the
models at energies greater than 5 EeV. To bring the models into agreement with
the experimental muon data, further tuning is required.
| https://export.arxiv.org/pdf/2208.00606 |
\begin{center}{\Large \textbf{
Muons in showers with energy \(E_{0} \geq \) 5 EeV and QGSjetII-04 and EPOS LHC models of hadronic interactions. Is there a muon deficit in the models?\\
}}\end{center}
\begin{center}
S.\,P. Knurenko and
I.\,S. Petrov\textsuperscript{$\star$}
\end{center}
\begin{center}
Yu. G. Shafer Institute of Cosmophysical Research and Aeronomy, Yakutsk, Russia
\\
* igor.petrov@mail.ysn.ru
\end{center}
\begin{center}
\today
\end{center}
\definecolor{palegray}{gray}{0.95}
\begin{center}
\colorbox{palegray}{
\begin{tabular}{rr}
\begin{minipage}{0.1\textwidth}
\includegraphics[width=30mm]{TIFR.png}
\end{minipage}
&
\begin{minipage}{0.85\textwidth}
\begin{center}
{\it 21st International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2022)}\\
{\it Online, 23-27 May 2022} \\
\doi{10.21468/SciPostPhysProc.?}\\
\end{center}
\end{minipage}
\end{tabular}
}
\end{center}
\section*{Abstract}
{\bf
The paper presents data on the muon component with a threshold \(\varepsilon_{thr} \geq\) 1 GeV. Air showers were registered at the Yakutsk array during almost 50 years of continuous air shower observations. The characteristics of muons are compared with calculations of the QGSjetII-04 and EPOS LHC models for a proton and an iron nucleus. There is a muon deficit in the models at energies greater than 5 EeV. To bring the models into agreement with the experimental muon data, further tuning is required.
}
\section{Introduction}
\label{sec:intro}
The study of cosmic rays (CR) at the highest energies, above 5 EeV, is important for understanding the nature of their sources, their propagation, and their interaction with matter and magnetic fields in outer space~\cite{Ginzburg1970192,Ginzburg199942}. The properties of CR in this energy range are not well known and, for this reason, are the subject of research at large air shower experiments.
There are two aspects of CR properties: astrophysical and nuclear physics. The astrophysical aspect includes the study of the spectrum, mass composition and anisotropy of CR. The recently observed irregularity in the CR spectrum at an energy of \(\sim10^{17}\) eV~\cite{Knurenko201982732738, Abbasi201886574, Abbasi201680}, which is associated with the rigidity of particles of galactic origin, is of particular interest because it offers the possibility of determining the boundary of the transition from galactic to extragalactic cosmic rays. Knowledge of the mass composition of primary particles in the region of \(10^{16}-10^{19}\)~eV can help to establish this boundary~\cite{Berezhko201231, Kampert201235, Thoudam2016595, Knurenko201964}.
The nuclear physics aspect includes the study of the interaction of primary particles with the nuclei of air atoms. The first task is obtaining the characteristics of an elementary event: the inelastic interaction cross section \(\sigma_{A-air}\), the average inelasticity coefficient \(\langle K_{in}\rangle\) and the multiplicity \(n_{ch}\) of secondary hadrons, according to air shower registration data in the ultrahigh energy region~\cite{Dyakonov19934,Knurenko19991372375,Ivanov2010main341, Knurenko20135307006}. The second is the study of the interaction and decay processes of high-energy hadrons and the influence of these processes on the muon and electromagnetic cascades during air shower development. Determining most of the listed characteristics, including the estimate of the atomic weight of the primary particle, requires a theoretical model of hadronic interactions that describes the development of the nuclear component of a real air shower. The available models, as comparison with experimental muon data shows, cannot yet quantitatively describe the excess of muons measured in air showers. Therefore, the aim of this work is to test modern models of hadronic interactions using data on muons in individual showers.
\section{Lateral distribution of charged particles and muons in showers with energy greater than 10 EeV }
At the Yakutsk array, the fluxes of charged particles and muons are measured by scintillation detectors with an area of 2 m\(^{2}\). The energy thresholds of the detectors are 1.8 and 10 MeV.
Observation stations within the detector array are located in such a way that showers always contain information about muons within distances of 100–1300 m from the shower axis. After mathematical processing of the data, the air shower lateral distribution functions (LDF) are plotted from the calibrated signals. As an example, Fig.~\ref{fig:ykt_1} shows the LDF of three individual showers with energies greater than 10 EeV and different zenith angles \(\theta\). The curved lines in the figures are approximations of each of the air shower components~\cite{DyakonovMono, Glushkov201982669}.
\subsection{Correlation of \(\rho_\mu(600)\) with air shower energy}
Fig.~\ref{fig:ykt_2} shows the averaged \rhom{} values as a function of energy for two zenith angle bins, $\cos \theta$ = (0.667-0.834) and $\cos \theta$ = (0.834-1.000). Here \(\rho_{\mu}(600)\) is the muon flux density at a distance of 600 m from the shower axis. Calculations with the QGSjetII-04 and EPOS LHC models for a proton and an iron nucleus are also plotted. The shower samples are consistent with each other within the statistical errors, which indicates the absence of systematic errors due to the inclusion of showers with different zenith angles in the total sample. Comparison of the experimental data with the calculations shows that in the range 1-10 EeV the data points lie on the proton curves, while starting from an energy of 20 EeV they approach the calculations for the iron nucleus, although the errors in this energy range are large and do not exclude agreement with the proton calculations. The agreement between the models and the experimental data on \(\rho_\mu(600)\), taking into account the mass composition of cosmic rays via the parameter z~\cite{Dembinski201921002004}, is given below. For a correct comparison of the z-parameter analyses obtained at different experiments, it is necessary to cross-calibrate the energy estimation between the experiments. In our opinion, this can be done using formula~(\ref{eq:eq_1}), where \(E_{0}\) is determined from the parameter \(\rho_\mu(600)\)~\cite{Knurenko2020102023036}.
\begin{equation}
\label{eq:eq_1}
\lg E_0 = 18.33 + 1.12\cdot \lg(\rho_\mu(R=600))
\end{equation}
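As a quick numerical check, Eq.~(\ref{eq:eq_1}) can be coded directly. The unit convention for \(\rho_\mu(600)\) (particles per m\(^2\)) is our assumption, since the text does not state it explicitly:

```python
import math

def lg_E0(rho_mu_600: float) -> float:
    """Eq. (1): log10 of the shower energy E0 (in eV) from the muon flux
    density rho_mu(600) (assumed particles/m^2) at 600 m from the axis."""
    return 18.33 + 1.12 * math.log10(rho_mu_600)
```

For \(\rho_\mu(600)=1\) this gives \(\lg E_0 = 18.33\), i.e. a normalization energy of about 2 EeV.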
\subsection{\(\rho_\mu(600) / \rho_{\mu+e}(600)\) correlation with depth of maximum of electron-photon component of air shower}
Longitudinal development of ultrahigh-energy air showers is reconstructed at the Yakutsk array from measurements of the Cherenkov light LDF~\cite{Glushkov199357,Dyakonov19934} and directly using track Cherenkov detectors~\cite{Knurenko20011157, Egorov2018301}. These methods make it possible to determine the characteristics of the air shower development cascade curve \xmax{}.
We selected individual air showers with measured Cherenkov light, charged-particle and muon data. We reconstructed the depth of shower maximum \xmax{} from the Cherenkov light data and estimated \fracmuon{} from the charged-particle and muon data. The data were then binned by muon fraction for three zenith angles, and for each bin we determined \(\langle X_{max} \rangle\). Fig.~\ref{fig:ykt_3} shows the correlation of the parameter \fracmuon{} with \xmax{} for the three zenith angles \(\langle\theta\rangle = 18^\circ\), \(\langle\theta\rangle = 32^\circ\) and \(\langle\theta\rangle = 58^\circ\).
Using the obtained data~(Fig.~\ref{fig:ykt_3}) and an exponential function, an empirical relationship between \fracmuon{} and \xmax{} was found:
\begin{equation}
\label{eq:eq_2}
\begin{split}
X_{max} = (745+413\cdot(\sec\theta - 1)) \cdot
\exp{\left(
-\frac{\rho_{\mu}(600)/\rho_{\mu+e}(600)}{-0.818-0.037\cdot(\sec\theta - 1)}
\right)} \\
+ (172+132\cdot(\sec\theta - 1))
\end{split}
\end{equation}
Coefficients in equation~(\ref{eq:eq_2}) were determined by a fit to the experimental data.
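Equation~(\ref{eq:eq_2}) is straightforward to evaluate numerically. A sketch using the printed coefficients; the units of g/cm\(^2\) for \xmax{} are our assumption, as is the function name:

```python
import math

def xmax_from_muon_fraction(frac_mu: float, theta_deg: float) -> float:
    """Eq. (2): empirical X_max (assumed g/cm^2) from the muon fraction
    rho_mu(600)/rho_(mu+e)(600) at zenith angle theta (degrees)."""
    s = 1.0 / math.cos(math.radians(theta_deg)) - 1.0   # sec(theta) - 1
    amplitude = 745.0 + 413.0 * s
    scale = -0.818 - 0.037 * s
    offset = 172.0 + 132.0 * s
    return amplitude * math.exp(-frac_mu / scale) + offset
```

At vertical incidence (\(\theta = 0\)) the zenith-dependent corrections vanish and only the base coefficients 745, \(-0.818\) and 172 remain.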
Further, formula~(\ref{eq:eq_2}) was used to calculate \xmax{} in individual showers from the parameter \fracmuon{}. This method increased the statistics of showers with a determined \xmax{} in this work: the duty cycle for muon measurements at the Yakutsk array is (50-60)\% of the total charged-particle observation time, whereas for Cherenkov light it is only \(\sim\)(6-10)\%. In addition, this technique does not depend on weather conditions, while the registration of Cherenkov light depends on the transparency of the atmosphere, the presence of the Moon in the sky, auroras, and other factors.
\subsection{Dependence of \xmax{} on air shower energy. Analysis of \(\rho_\mu(600) - E_0\) and \xmax{}\(- E_0\) correlation in the frame of QGSjetII-04, EPOS LHC}
Fig.~\ref{fig:ykt_4} compares the average \xmax{} values obtained from the muon component with the \xmax{} results obtained from the other components at the Yakutsk~\cite{Knurenko201964} and other experiments: PAO~\cite{Bellido2018506}, TA~\cite{Abbasi201886574}, LOFAR~\cite{Corstanje2021103} and Tunka~\cite{Prosin201612103004}.
The muon component of the air shower is considered the most sensitive to hadronic interactions. For this reason, muons are usually used to test different models in order to select one that describes the development of real air showers. Comparison of the number of muons detected in experiments with earlier calculations~\cite{Kalmykov199356, Ostapchenko2006151,Ostapchenko201183} indicated a deficit of muons at energies greater than \(10^{17}\)~eV in almost all models. The discrepancy increases with energy, reaching up to \(\sim\)30\%. Improvement of the models~\cite{Ostapchenko20135202001, Pierog201592,Riehn236558} brought them closer to experiment, but the difference could not be completely eliminated~\cite{Soldin2022}. In the present work, we compare the muon component detected at the Yakutsk array with the modern QGSjetII-04 and EPOS LHC models for energies greater than 5 EeV. The z parameter was used to test the models:
\begin{equation}
\label{eq:eq_3}
z = \frac{\ln\rho_{\mu}^{exp} - \ln\rho_{\mu}^{p}}{\ln\rho_{\mu}^{Fe} - \ln\rho_{\mu}^{p}}
\end{equation}
where \(\rho_{\mu}^{exp}\) is the measured muon flux density at 600 m from the air shower axis, and \(\rho_{\mu}^{p}\) and \(\rho_{\mu}^{Fe}\) are the densities calculated with QGSjetII-04 and EPOS LHC for a proton and an iron nucleus, respectively~\cite{Dembinski201921002004}. The z-value is computed relative to a given hadronic interaction model.
In formula~(\ref{eq:eq_3}), the parameter z is expected to lie between 0 and 1, where 0 corresponds to pure proton showers and 1 to pure iron showers.
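The z parameter of Eq.~(\ref{eq:eq_3}), together with the mass-derived expectation \(z_{mass} = \langle \ln A \rangle / \ln 56\) used below, reduces to a few lines of code. A sketch with function names of our own choosing:

```python
import math

def z_parameter(rho_exp: float, rho_p: float, rho_fe: float) -> float:
    """Eq. (3): 0 corresponds to pure proton showers, 1 to pure iron,
    for the hadronic model that produced rho_p and rho_fe."""
    return (math.log(rho_exp) - math.log(rho_p)) / \
           (math.log(rho_fe) - math.log(rho_p))

def z_mass(mean_ln_A: float) -> float:
    """Expected z from a mean-log-mass estimate: z_mass = <ln A> / ln 56."""
    return mean_ln_A / math.log(56.0)
```

A measured density equal to the proton prediction gives \(z = 0\); one equal to the iron prediction gives \(z = 1\), independently of the absolute normalization of the model.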
The obtained values of z are shown in Fig.~\ref{fig:ykt_5} for the two models QGSjetII-04~\cite{Ostapchenko20135202001} and EPOS LHC~\cite{Pierog201592}. Contour lines show the error boundaries of the determined parameter z. In addition, the expected \(z_{mass}\) value, estimated from \xmax{} of Cherenkov light measurements~\cite{Knurenko201964}, is shown as a gray contour; \(z_{mass}=\frac{\langle \ln A \rangle}{\ln56}\) according to~\cite{Dembinski201921002004}.
To assess the accuracy of the result, we used the error functional for measuring the muon flux density~(\ref{eq:eq_4}), which was established empirically in the course of analyzing the operation of adjacent scintillation detectors, followed by verification of the result obtained by the Monte Carlo method~\cite{Krasilnikov1983117, Ivanov200911}:
\begin{equation}
\label{eq:eq_4}
D_\mu = \sigma^2(\rho_\mu^{exp}) = (\rho_{\mu}^{exp})^2\cdot
\left(
\beta^2 + \frac{1+\alpha^2}{s\cdot \rho_\mu^{exp}\cos\theta}
\right)
\end{equation}
where \(\beta^2\) is the relative error of the amplitude measurements, reflecting the instrumental fluctuations of the scintillation detector response, and \(\alpha^2\) is the statistical error, following a Poisson distribution.
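Equation~(\ref{eq:eq_4}) can be evaluated as follows. The detector area \(s = 2\) m\(^2\) is taken from Section 2, while the numerical values of \(\alpha^2\) and \(\beta^2\) are not quoted in the text and must be supplied by the user:

```python
import math

def muon_density_variance(rho_exp: float, theta_deg: float,
                          beta2: float, alpha2: float,
                          s_m2: float = 2.0) -> float:
    """Eq. (4): variance of the measured muon flux density rho_mu^exp.
    beta2  = squared relative error of amplitude measurements (instrumental),
    alpha2 = statistical (Poisson) error term,
    s_m2   = scintillation detector area in m^2 (2 m^2 at Yakutsk)."""
    return rho_exp**2 * (beta2 + (1.0 + alpha2) /
                         (s_m2 * rho_exp * math.cos(math.radians(theta_deg))))
```

The first term scales as \(\rho^2\) (instrumental), while the second scales as \(\rho\) and grows with zenith angle through \(1/\cos\theta\), as expected for counting statistics over an inclined detector.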
The results of testing the QGSjetII-04 and EPOS-LHC models using muon data are shown in Fig.~\ref{fig:ykt_5}. As can be seen from the figure, the results agree within the experimental errors with the predictions of the QGSjetII-04 and EPOS LHC models up to energies of 10 EeV. Above 10 EeV, there is a trend toward increasing z-values. The total error (systematic and statistical) of the z parameter for the Yakutsk array data was determined by differentiating expression~(\ref{eq:eq_3}) with respect to its three parameters. It turned out to be \(\sim\)17\% and is shown in Fig.~\ref{fig:ykt_5} by a filled contour for the two models.
The origin of this behaviour of the z-value in the experimental data is not clearly known. It could be due to a slowing of air shower development in the atmosphere at energies greater than 5 EeV, as the \xmax{} value shifts toward sea level more slowly than in the energy range (0.1-5) EeV~\cite{Knurenko201964}, or to a difference between the absorption lengths \(\Lambda_{\mu}\) of muons with threshold \(\varepsilon_{thr} \geq 1~\text{GeV}\) in the models and those found from experimental data~\cite{Glushkov199357}. The latter is possibly related to the problem of describing the energy spectra of muons at the beginning of the nuclear cascade of air shower development~\cite{Espadanal20178634, Cazon2019358005}. It is also possible that a sharp change in the mass composition leads to such behavior of the z function. As can be seen from Fig.~\ref{fig:ykt_4}, the experimental data indicate a change in the mass composition from light at lower energies to heavier starting from an energy of 5 EeV. In any case, solving this problem requires further research at experiments studying air showers of the highest energies, including additional experiments at the LHC, such as oxygen beam collisions~\cite{Albrecht202236727}.
\section{Conclusion}
The Yakutsk array has been a testing ground for the study of cosmic radiation at ultra-high and the highest energies for 50 years. The complex registration of charged particles, muons, Cherenkov light and air shower radio emission, together with the developed software for the preliminary and subsequent analysis of showers, made it possible to study the radial and longitudinal development of showers and determine their main characteristics~\cite{Ivanov2010main341, Ivanov200911, Glushkov199357, KnurenkoNikolashkin, Knurenko20149292}. Large shower statistics with good precision made it possible to create a database in the energy range \(10^{15}-10^{20}\) eV for the study of air shower physics.
Long-term registration of the muon component at the Yakutsk array has shown that the muon flux density at a distance of 600 m from the shower axis, \(\rho_{\mu}(600)\), is proportional to the shower energy \(E_{0}\), and that the ratio \fracmuon{} is related to the longitudinal development of air showers through \xmax{}. An analysis of individual showers based on the muon component at energies above 5 EeV showed that, within the framework of the QGSjetII-04 and EPOS LHC models, the composition of cosmic rays begins to change slowly towards medium nuclei and, at energies above 20 EeV, becomes heavier relative to the energy range 0.1-2 EeV.
From a comparison of the z-value with \(z_{mass}\) estimated from \xmax{} measurements (Fig.~\ref{fig:ykt_5}), we can assume that up to energies of 10 EeV there is no muon deficit. For energies greater than 10 EeV there is a trend toward increasing z-values, but since these remain within the pure-iron expectation and the systematic uncertainties are high, we cannot confirm a muon deficit. If one assumes that there is a deficit of muons in the models~\cite{Dembinski2017301533}, this fact requires further analysis and explanation, for example, a direct comparison of individual air showers with simulations.
\paragraph{Funding information}
This work was carried out in the framework of research project No. AAAA-A21-121011990011-8 by the Ministry of Science and Higher Education of the Russian Federation.
\bibliography{bib_muon.bib}
\nolinenumbers
|
Title:
LIDA - The Leiden Ice Database for Astrochemistry |
Abstract: High quality vibrational spectra of solid-phase molecules in ice mixtures and
for temperatures of astrophysical relevance are needed to interpret infrared
observations toward protostars and background stars. Over the last 25 years,
the Laboratory for Astrophysics at Leiden Observatory has provided more than
1100 spectra of diverse ice samples. Timely with the recent launch of the James
Webb Space Telescope, we have fully upgraded the Leiden Ice Database for
Astrochemistry (LIDA) adding recently measured spectra. The goal of this
manuscript is to describe what options exist to get access to and work with a
large collection of IR spectra, and the UV/vis to mid-infrared refractive index
of H2O ice and astronomy-oriented online tools to support the interpretation of
IR ice observations. LIDA uses Flask and Bokeh for generating the web pages and
graph visualization, respectively, SQL for searching ice analogues within the
database and Jmol for 3D molecule visualization. The infrared data in the
database are recorded via transmission spectroscopy of ice films condensed on
cryogenic substrates. The real UV/vis refractive indices of H2O ice are derived
from interference fringes created from the simultaneous use of a monochromatic
HeNe laser beam and a broadband Xe-arc lamp, whereas the real and imaginary
mid-IR values are theoretically calculated. LIDA also offers online tools. The
first tool, SPECFY, is used to create a synthetic spectrum of ices towards
protostars. The second tool calculates mid-infrared refractive index values.
LIDA allows users to search, download and visualize experimental data of
astrophysically relevant molecules in the solid phase, and offers
the means to support astronomical observations. As an example, we analyse the
spectrum of the protostar AFGL 989 using the resources available in LIDA and
derive the column densities of H2O, CO and CO2 ices.
| https://export.arxiv.org/pdf/2208.12211 |
\title{LIDA - The Leiden Ice Database for Astrochemistry}
\author{W. R. M. Rocha\inst{1,3},
M. G. Rachid\inst{1},
B. Olsthoorn\inst{2},
E. F. van Dishoeck\inst{3},
M. K. McClure\inst{3},
\and
H. Linnartz
\inst{1}
}
\institute{Laboratory for Astrophysics, Leiden Observatory, Leiden University, P.O. Box 9513, NL 2300 RA Leiden, The Netherlands.\\
\email{rocha@strw.leidenuniv.nl}
\and
Nordita, KTH Royal Institute of Technology and Stockholm University, Hannes Alfv{\'e}ns v{\"a}g 12, SE-114 21 Stockholm, Sweden
\and
Leiden Observatory, Leiden University, PO Box 9513, NL 2300 RA Leiden, The Netherlands
}
\date{Received ZZZZ; accepted YYYY}
\abstract
{High quality vibrational spectra of solid-phase molecules in ice mixtures and for temperatures of astrophysical relevance are needed to interpret infrared observations toward protostars and background stars. Such data are collected worldwide, by several laboratory groups, in support of existing and upcoming astronomical observations. Over the last 25 years, the Laboratory for Astrophysics at Leiden Observatory has provided more than 1100 (high resolution) spectra of diverse ice samples.}
{Timely with the recent launch of the {\it James Webb} Space Telescope, we have fully upgraded the Leiden Ice Database for Astrochemistry (LIDA) adding recently measured spectra. The goal of this manuscript is to describe what options exist to get access to and work with a large collection of IR spectra, and the UV/vis to mid-infrared refractive index of H$_2$O ice. This also includes astronomy-oriented online tools to support the interpretation of IR ice observations.}
{LIDA is based on open-source Python software, such as {\texttt{Flask}} and {\texttt{Bokeh}} for generating the web pages and graph visualization, respectively, Structured Query Language (SQL) for searching ice analogues within the database and {\texttt{Jmol}} for three-dimensional molecule visualization. The database provides the vibrational modes of molecules known and expected to exist as ice in space. These modes are characterized by using density functional theory with the \texttt{ORCA} software. The infrared data in the database are recorded via transmission spectroscopy of ice films condensed on cryogenic substrates. The real UV/vis refractive indices of H$_2$O ice are derived from interference fringes created from the simultaneous use of a monochromatic HeNe laser beam and a broadband Xe-arc lamp, whereas the real and imaginary mid-IR values are theoretically calculated. LIDA not only provides information on fundamental ice properties but also offers online tools. The first tool, SPECFY, is directly linked to the data in the database to create a synthetic spectrum of ices towards protostars. The second tool allows the upload of external files and the calculation of mid-infrared refractive index values.}
{LIDA provides an open-access and user-friendly platform to search, download and visualize experimental data of astrophysically relevant molecules in the solid phase, and offers the means to support astronomical observations, in particular those that will be obtained with the {\it James Webb} Space Telescope. As an example, we analyse the spectrum of the protostar AFGL~989 using the resources available in LIDA and derive the column densities of H$_2$O, CO and CO$_2$ ices.}
{}
\keywords{Astrochemistry --
solid-state: volatile --
Astronomical databases: miscellaneous
}
\authorrunning{Rocha et al.}
\section{Introduction}
Infrared (IR) spectroscopy is a diagnostic tool used to characterize chemical structures of molecules, and distinguish their functional groups \citep[e.g.,][]{Coblentz1905, BALKANSKI1989729}. For this reason a number of laboratories around the world have been focusing on providing laboratory based IR data of interstellar ice analogues for a range of different ice compositions and temperatures \citep[e.g.,][]{Hagen1979, Strazzulla1984, Schmitt1989, Grim1989, Hudgins1993, Boudin1998, Palumbo1998, Schutte1999, Caro2002, Oberg2009, Pilling2010, Vinogradoff2015, Scheltinga2018, Urso2020, Scheltinga2021, Rachid2020, Rachid2021, Potapov2021}. IR spectra directly reflect the molecular geometry of a molecule and as such can act as a molecular fingerprint. In the gas phase and at very high resolution, such rovibrationally resolved spectra are unique, although overlap may still occur. In the solid state, however, interactions with the ice matrix prevent molecules from (freely) rotating, and cause spectra to broaden and shift with respect to the unperturbed gas phase values. Additionally, spectral overlaps are more common. The amount of broadening and shifting depends on the ice composition (both ice constituents and concentration) and ice temperature, as well as other parameters such as the level of ice porosity. In dedicated laboratory studies all these parameters can be derived under fully controlled conditions. Examples can be found in \citet{Oberg2007}.
IR spectroscopy is also the technique widely used to detect solid-phase molecules in the interstellar medium \citep[ISM, e.g.,][]{Gillett1973, Schutte1996, Pontoppidan2003, Gibb2004, Boogert2008, Zasowski2009, Bottinelli2010, Boogert2013, Penteado2015, Perotti2020, Rocha2021, Onaka2021}. The light of a protostar, edge-on disk or background star passes through the circumstellar material, and absorption features in the IR are seen in the protostellar spectral energy distribution (SED). The correct interpretation of those absorption bands is only possible upon comparison with the spectra of ice analogues measured in the laboratory. With this methodology, important discoveries have been made through observations with space- and ground-based telescopes, such as the {\it Infrared Space Observatory} (ISO), the {\it Spitzer} Space Telescope/Infrared Spectrograph (IRS) and the Infrared Spectrometer And Array Camera mounted on the Very Large Telescope (VLT/ISAAC). To date, the molecules securely identified in ices are H$_2$O, CO, CO$_2$, NH$_3$, CH$_4$ and CH$_3$OH \citep{Oberg2011, Boogert2015}, and the isotopologues $^{13}$CO and $^{13}$CO$_2$ \citep{Boogert2002_isotoplogue}. Except in the cases of CO and $^{13}$CO, which have only one vibrational mode, these molecules were identified in astrophysical ices by the detection of multiple absorption bands across the IR spectrum. These identifications allowed the study of solid-phase chemistry in different astrophysical environments. For example, amorphous water ice is predominantly found towards background stars and low-mass protostars \citep[][]{Smith1989, Boogert2008}, whereas some fraction of crystalline water ice was found in the circumstellar material of high-mass protostars \citep{Dartois2002}.
CO is also an important discriminator of the ice environment, and astronomical observations indicate that it does not only exist in the pure form, but can also be mixed with CO$_2$, H$_2$O or CH$_3$OH \citep[e.g.,][]{Pontoppidan2003, Cuppen2011}. In the case of CO$_2$ ice, the bending mode around 15~$\mu$m provides a diagnostic of heating and segregation of polar and apolar molecules in ices \citep[e.g.,][]{Ehrenfreund1996, Pontoppidan2008, Isokoski2013}. Among the list of molecules identified in ices, CH$_3$OH (methanol) belongs to the group of the so-called complex organic molecules (COMs), which in astrochemistry is defined as organic molecules containing six or more atoms \citep[e.g., C$_x$H$_y$Y$_z$, with Y = O, N, P, S;][]{Herbst2009}. A number of small molecules have been tentatively identified in ices, for which only one vibrational mode could be assigned from astronomical observations. This list also includes sulfur-bearing molecules \citep[notably, SO$_2$;][]{Boogert1997} and ions \citep[notably, OCN$^-$;][]{Schutte1997}.
Many different COMs have been identified in the gas phase through radio and submillimeter surveys \citep[e.g.,][]{Blake1987, Jorgensen2012, McGuire2016, Belloche2020, vanGelder2020, McGuire2021, Jorgensen2020, Nazari2021, Rivilla2021, Brunken2022}, but astronomical observations have not been able to unambiguously identify frozen COMs larger than CH$_3$OH due to low spectral resolution or sensitivity. Nevertheless, tentative detections of CH$_3$CHO (acetaldehyde) and CH$_3$CH$_2$OH (ethanol) ice have been reported in the literature \citep{Schutte1999_weak, Oberg2011, Scheltinga2018, Rocha2015, Rocha2021}. Consistent with these tentative detections, several laboratory experiments have shown that such molecules can be formed in ices. Some examples are interstellar ice analogues processed by UV radiation \citep[e.g.,][]{Bernstein1995, MunozCaro2003, Oberg2009, Meinert2016, Oberg2016, Nuevo2018, Ishibashi2021, Bulak2021}, electron bombardment \citep[e.g.,][]{Brown1982, Materese2015, Mifsud2021}, X-rays \citep[e.g.,][]{Pilling2015, Ciaravella2019}, cosmic-rays \citep[e.g.,][]{Hudson2001, Domaracka2010, Pilling2010}, and via thermal processing \citep[e.g.,][]{Danger2011, theule2013thermal}. Other mechanisms excluding the presence of energetic triggers, such as atom addition reactions that are more representative for dark clouds conditions, have also been shown to result in the formation of COMs \citep{Watanabe2002, Fuchs2009, theule2013thermal, Linnartz2015, Fedoseev2017, Ioppolo2021}.
Apart from IR spectroscopy, the complex refractive index (CRI) of ice samples is important for the interpretation of astronomical observations. The CRI is a complex number, $\tilde{m} = n + ik$, where the real and imaginary parts $n$ and $k$ are associated with scattering and absorption effects, respectively. In protostellar environments, the CRI has been used to evaluate the effect of icy grain sizes and shapes on the spectral features of ices \citep[e.g.,][]{Ehrenfreund1997, Boogert2002, Pontoppidan2005, Boogert2008, Rocha2015, Perotti2020, Dartois2022}. For example, \citet{Boogert2008} observed a dependence of the H$_2$O-ice libration mode peak position on the size of spherical grains; better fits of this band are obtained when small grains are adopted in the models. Similarly, CRI values have been used to interpret the absorption band at 3~$\mu$m, associated with the O$-$H stretching mode of H$_2$O \citep[e.g.,][]{Smith1989, Dartois2001}. In the solar system, the CRI also plays a crucial role in simulating the light reflected by icy surfaces to interpret spectral observations \citep[e.g.,][]{Clark2012, dalle2015}. Finally, the CRI may be used to construct opacities for a dust grain size distribution model \citep{Weingartner2001}, which can be combined with a radiative transfer code to calculate self-consistently the temperature and density distributions of dusty astronomical objects, e.g., protoplanetary disks \citep{Dalessio2006}.
The advances in the identification of molecules in both gas and solid-phase have been strongly supported by atomic and molecular data in open-access databases. In fact, electronic databases have become an essential tool in the context of astrochemistry, given the large amount of data that is produced by laboratory experiments. In particular, the astrochemical community targeting gas-phase chemical species is well served with multiple databases. For example, the Cologne Database for Molecular Spectroscopy\footnote{\url{https://cdms.astro.uni-koeln.de/}} \citep[CDMS;][]{Muller2001, Muller2005, Endres2016} and the Jet Propulsion Laboratory\footnote{\url{https://spec.jpl.nasa.gov/}} \citep[JPL;][]{Pickett1998, Pearson2010} databases provide catalogues with transition frequencies, energy levels and line strengths for atoms and molecules in the gas-phase of astrophysical and atmospheric interest. Collisional rate coefficients are available through the Leiden Atomic and Molecular Database (LAMDA)\footnote{\url{https://home.strw.leidenuniv.nl/~moldata/}} for non-LTE excitation \citep{Schoier2005, Tak2020}. Similarly, BASECOL contains a repository of collisional ro-vibrational excitation data of molecules by colliding with different agents such as atoms, ions, molecules or electrons \citep{Dubernet2006, Dubernet2013}. More oriented to chemical reactions, the UMIST Database for Astrochemistry\footnote{\url{http://udfa.ajmarkwick.net/}} \citep[UDfA;][]{McElroy2013} contains the reaction rates of more than 6000 gas-phase reactions. In a similar vein, the Kinetic Database for Astrochemistry\footnote{\url{https://kida.astrochem-tools.org/}} \citep[KIDA;][]{Wakelam2012} has provided reaction rate coefficients for a massive number of chemical species for astrochemical studies. 
The photodissociation and photoionization values of gas-phase molecules relevant for astrophysics are available online\footnote{\url{https://home.strw.leidenuniv.nl/~ewine/photo/index.html}}, and described by \citet{Heays2017, vanDishoeck2006, vanDishoeck1988}. The properties of gas-phase polycyclic aromatic hydrocarbons (PAHs) are widely available through the NASA Ames PAH IR Spectroscopy Database\footnote{\url{https://www.astrochemistry.org/pahdb/}} \citep{Bauschlicher2010, Boersma2014, Mattioda2020}.
The astrochemistry community working with solid-phase materials has also been served with databases. The refractive index of refractory materials is available via the Database of Optical Constants for Cosmic Dust\footnote{\url{https://www.astro.uni-jena.de/Laboratory/OCDB/index.html}} \citep{Henning1999, Jager2003}. Likewise, IR spectra of binary ice mixtures and refractive indices of pure ices can be found at the webpage of the Cosmic Ice Laboratory\footnote{\url{https://science.gsfc.nasa.gov/691/cosmicice/}} from NASA \citep[e.g.,][]{Moore2010, Knez2012, Gerakines2020} and at the Databases of the Astrophysics \& Astrochemistry Laboratory\footnote{\url{http://www.astrochem.org/databases.php}}, which contain measurements by \citet{Hudgins1993}. A database of refractive indices of ice samples irradiated by heavy ions is also available at the LASA (Laboratório de Astroquímica e Astrobiologia da Univap) webpage\footnote{\url{https://www1.univap.br/gaa/nkabs-database/data.htm}} with calculations performed by \citet{Rocha2014}, \citet{Rocha2018}, and \citet{Rocha2020}. Infrared refractive indices of CO and CO$_2$ ices are available from the Experimental Astrophysics Laboratory at the Catania Astrophysical Observatory website\footnote{\url{http://www.ct.astro.it/lasp/optico.html}} \citep{Baratta1998}. Finally, we also mention the Solid Spectroscopy Hosting Architecture of Databases and Expertise\footnote{\url{https://www.sshade.eu/}} \citep[SSHADE;][]{Schmitt2018}, which contains a compilation of spectral and photometric data obtained by various spectroscopic techniques over the whole electromagnetic spectrum, from gamma to radio wavelengths, through X-rays, UV, Vis, IR, and millimeter ranges. The data are not limited to ices, but also include measurements of liquids, minerals, rocks, and organic and carbonaceous materials.
Similarly to many of the databases mentioned above, the Leiden Database for Ices has served the astronomical community since the '90s, but until recently no COM spectra were included, and the spectral resolution of the data was around 1$-$2~cm$^{-1}$ \citep[e.g.,][]{Gerakines1996, Ehrenfreund1996, Ehrenfreund1997}. Additionally, the data were fragmented into several databases targeting specific ice samples. To continue supporting the interpretation of ice observations with current and future telescopes, in particular the {\it James Webb} Space Telescope (JWST), we have fully upgraded the Leiden Ice Database for Astrochemistry (LIDA; \url{https://icedb.strw.leidenuniv.nl/}). In particular, LIDA is a deliverable of the Early Release Science program ICE AGE\footnote{\url{http://jwst-iceage.org/}} (PI: Melissa McClure; Co-PIs: Adwin Boogert, Harold Linnartz). In LIDA all data are now available at one central location, and appealing features are included, such as a search capability and dynamic data visualization. Additionally, online tools are included in LIDA to support JWST data analysis or to prepare observing blocks, by deriving integration times based on expected column densities. LIDA covers the most abundant solid-phase species observed in the ISM, which are listed in Table~\ref{icedb_list}, along with information about their secure, tentative or non-identification in the solid-phase in astrophysical environments from previous observations. JWST has the technical potential to enlarge the inventory of ice identifications in space, and several programs (ERS, Guaranteed Time Observations - GTO and General Observer - GO) will search for new ice features toward protostars and background stars, using high spatial and spectral resolution observing modes. Moreover, JWST will shed light on the conundrum of the formation of COMs in ices. For this purpose, comparison with spectra of COMs in astrophysically relevant ice matrices at high spectral resolution is needed. 
Such data are required for a range of different physical conditions, e.g., mixing ratios, temperatures and porosity levels, as these differences affect the spectral appearance of the ice absorption bands. %
This manuscript systematically guides (new) users through LIDA. Section~\ref{data_db} and the Appendix give an overview of the data available in the 2022 version of LIDA and describe the type of data and how they were acquired. Section~\ref{struct_db} provides information about the database structure, namely the relational design, web interface, and visualization tools. In Section~\ref{on_tools} we introduce the computational online tools dedicated to supporting JWST data analysis; an application illustrating the potential of LIDA is also shown. Section~\ref{future} points out the upgrades of LIDA planned for the coming years. A summary of this work is provided in Section~\ref{summary}.
\begin{table*}
\caption{\label{icedb_list} List of molecules with relevant data on LIDA and their solid-phase (tentative or non) detection in the ISM.}
\renewcommand{\arraystretch}{1.0}
\centering %
\begin{tabular}{lllccccc}
\hline\hline
Chemical & Chemical & Name & & Notes on LIDA & & & Solid-phase$^{b,c}$\\
\cline{4-7}
structure$^a$ & formula & & IR spectrum & UV/vis-mid IR & Heating & UV irr. & detection/Ref.\\
\hline
& & & &\\
\vspace{0.1cm}
\raisebox{-.5\height}{\includegraphics[height=0.25in]{Figures/H2O.pdf}} & H$_2$O & Water & yes & yes & yes & yes & \textcolor{dgreen}\faCheckCircle $\hspace{0.1cm}$ / [1]\\
\raisebox{-.5\height}{\includegraphics[height=0.25in]{Figures/CO_new.pdf}} & CO & Carbon monoxide & yes & no & yes & yes & \textcolor{dgreen}\faCheckCircle $\hspace{0.1cm}$ / [2]\\
\raisebox{-.5\height}{\includegraphics[height=0.25in]{Figures/CO2.pdf}} & CO$_2$ & Carbon dioxide & yes & no & yes & yes & \textcolor{dgreen}\faCheckCircle $\hspace{0.1cm}$ / [3]\\
\raisebox{-.5\height}{\includegraphics[height=0.45in]{Figures/CH4v0.pdf}} & CH$_4$ & Methane & yes & no & yes & yes & \textcolor{dgreen}\faCheckCircle $\hspace{0.1cm}$ / [4]\\
\raisebox{-.5\height}{\includegraphics[height=0.35in]{Figures/NH3.pdf}} & NH$_3$ & Ammonia & yes & no & yes & yes & \textcolor{dgreen}\faCheckCircle $\hspace{0.1cm}$ / [5]\\
\raisebox{-.5\height}{\includegraphics[height=0.5in]{Figures/CH3OH.pdf}} & CH$_3$OH & Methanol & yes & no & yes & yes & \textcolor{dgreen}\faCheckCircle $\hspace{0.1cm}$ / [6]\\
\raisebox{-.5\height}{\includegraphics[height=0.52in]{Figures/NH4.pdf}} & NH$_4^+$ & Ammonium ion & yes & no & no & no & \textcolor{orange}\faExclamationTriangle $\hspace{0.1cm}$ / [7]\\
\raisebox{-.5\height}{\includegraphics[height=0.55in]{Figures/H2CO.pdf}} & H$_2$CO & Formaldehyde & yes & no & no & no & \textcolor{orange}\faExclamationTriangle $\hspace{0.1cm}$ / [8]\\
\raisebox{-.5\height}{\includegraphics[height=0.25in]{Figures/OCS.pdf}} & OCS & Carbonyl sulfide & yes & no & yes & no & \textcolor{orange}\faExclamationTriangle $\hspace{0.1cm}$ / [9]\\
\raisebox{-.5\height}{\includegraphics[height=0.3in]{Figures/SO2.pdf}} & SO$_2$ & Sulfur dioxide & yes & no & no & no & \textcolor{orange}\faExclamationTriangle $\hspace{0.1cm}$ / [10]\\
\raisebox{-.5\height}{\includegraphics[height=0.23in]{Figures/OCN.pdf}} & OCN$^-$ & Cyanate ion & yes & no & no & no & \textcolor{orange}\faExclamationTriangle $\hspace{0.1cm}$ / [11]\\
\raisebox{-.5\height}{\includegraphics[height=0.6in]{Figures/HCOOH.pdf}} & HCOOH & Formic acid & yes & no & yes & no & \textcolor{orange}\faExclamationTriangle $\hspace{0.1cm}$ / [7]\\
\raisebox{-.5\height}{\includegraphics[height=0.5in]{Figures/CH3CHO.pdf}} & CH$_3$CHO & Acetaldehyde & yes & no & yes & no & \textcolor{orange}\faExclamationTriangle $\hspace{0.1cm}$ / [12]\\
\raisebox{-.5\height}{\includegraphics[height=0.5in]{Figures/CH3CH2OH.pdf}} & CH$_3$CH$_2$OH & Ethanol & yes & no & yes & no & \textcolor{orange}\faExclamationTriangle $\hspace{0.1cm}$ / [13]\\
\raisebox{-.5\height}{\includegraphics[height=0.5in]{Figures/HCOOCH3.pdf}} & CH$_3$OCHO & Methyl formate & yes & no & yes & no & \textcolor{orange}\faExclamationTriangle $\hspace{0.1cm}$ / [14]\\
\raisebox{-.5\height}{\includegraphics[height=0.4in]{Figures/CH3NH2.pdf}} & CH$_3$NH$_2$ & Methylamine & yes & no & yes & no & \textcolor{orange}\faExclamationTriangle $\hspace{0.1cm}$ / [15]\\
\raisebox{-.5\height}{\includegraphics[height=0.4in]{Figures/CH3CN.pdf}} & CH$_3$CN & Acetonitrile & yes & no & yes & no & \textcolor{orange}\faExclamationTriangle $\hspace{0.1cm}$ / [16]\\
\raisebox{-.5\height}{\includegraphics[height=0.35in]{Figures/CH3OCH3.pdf}} & CH$_3$OCH$_3$ & Dimethyl Ether & yes & no & yes & no & \textcolor{red}\faTimesCircleO $\hspace{0.1cm}$ / ...\\
\raisebox{-.5\height}{\includegraphics[height=0.5in]{Figures/CH3COCH3.pdf}} & CH$_3$COCH$_3$ & Acetone & yes & no & yes & no & \textcolor{red}\faTimesCircleO $\hspace{0.1cm}$ / ...\\
\raisebox{-.5\height}{\includegraphics[height=0.23in]{Figures/N2.pdf}} & N$_2$ & Nitrogen & yes & no & yes & no & \textcolor{red}\faTimesCircleO $\hspace{0.1cm}$ / ...\\
\raisebox{-.5\height}{\includegraphics[height=0.20in]{Figures/O2.pdf}} & O$_2$ & Oxygen & yes & no & yes & no & \textcolor{red}\faTimesCircleO $\hspace{0.1cm}$ / ...\\
\hline
\end{tabular}
\tablefoot{\footnotesize
$^a$Taken from \url{https://pubchem.ncbi.nlm.nih.gov/}.
$^b$ Symbols for detection $-$ \textcolor{dgreen}\faCheckCircle: secure; \textcolor{orange}\faExclamationTriangle: tentative; \textcolor{red}\faTimesCircleO: no. $^c$ All these molecules have also been detected in the gas-phase, except the ions NH$_4^+$ and OCN$^-$ \citep[see 2021 census in][]{McGuire2021}. References of first observations: [1] \citet{Gillett1973}, [2] \citet{Lacy1984}, [3] \citet{deGraauw1996CO2}, [4] \citet{Lacy1991}, [5] \citet{Lacy1998}, [6] \citet{Grim1989}, [7] \citet{Schutte1996}, [8] \citet{Keane2001}, [9] \citet{Palumbo1995}, [10] \citet{Boogert1997}, [11] \citet{Schutte1997}, [12] \citet{Schutte1999_weak}, [13] \citet{Oberg2011}, [14] \citet{Scheltinga2018}, [15] \citet{Rachid2021}, [16] \citet{Rachid2022}
}
\end{table*}
\section{Data in the database}
\label{data_db}
In this section, we provide details on the experimental techniques used to measure the data available through LIDA. In summary, LIDA contains mid-infrared (mid-IR) spectra of ice samples ($\sim$4000$-$500~cm$^{-1}$; 2.5$-$20~$\mu$m) and UV/visible to mid-IR refractive indices of water ice in the 0.25$-$20~$\mu$m range. The IR ice spectra are available for pure and mixed ices for different settings. The H$_2$O ice refractive indices ($n$ and $k$) are available for ices deposited at different temperatures.
\subsection{Experimental setups and ice growth techniques}
Between 1990 and 2020, the majority of the IR data in Leiden were recorded with our HV setup, a regular high-vacuum setup (10$^{-7}$~mbar) in which the broadband light of a Fourier transform IR spectrometer (best resolution 0.1~cm$^{-1}$) is transmitted through a cryogenically cooled substrate covered with ice that is grown under fully controlled laboratory conditions (Figure~\ref{exp_tecniques}a). The transmitted beam is focused onto a detector and processed with a Fourier transform to provide the transmitted light per wavelength. In the '90s, this setup was also equipped with a microwave-discharge hydrogen flow lamp that was used to irradiate ice samples with a flux of $\sim$10$^{15}$ photons cm$^{-2}$ s$^{-1}$ dominated by Lyman-$\alpha$ emission, in order to study radical or ionic ice constituents or species formed upon irradiation \citep{Gerakines1996}. The HV setup has been regularly upgraded, and details are available from \citet{Oberg2009}. Since 2020, a new setup has been in use, IRASIS (InfraRed Absorption Setup for Ice Spectroscopy), which uses the same measurement principle but operates at substantially lower pressures (10$^{-9}$~mbar) to minimize contamination. Moreover, laser interferometry has been incorporated to perform thickness measurements in order to derive experimental absorption cross-sections. All these data are recorded in transmission. Details about IRASIS are available from \citet{Rachid2021}. In the near future, a quartz crystal microbalance will also be incorporated. A few spectra available from LIDA have been recorded using RAIRS (Reflection Absorption InfraRed Spectroscopy; Figure~\ref{exp_tecniques}b), but they are not included in the collection of ice spectra presented in this paper.
The reader can find more details about those experiments in \citet{vanBroekhuizen2005phd}, \citet{Fuchs2006}, \citet{Oberg2009}, and \citet{Fayolle2011, Ligterink2018}, and about CRYOPAD (CRYOgenic Photoproduct Analysis Device), a setup that uses RAIRS and is dedicated to studying the impact of vacuum-UV irradiation on ice samples. %
Apart from ice spectra, we also present measurements of the real part of the UV/vis refractive index at cryogenic temperatures using our Optical Absorption Setup for Ice Spectroscopy \citep[OASIS;][]{kofman2019, He2022}. The base pressure of OASIS is around 6$\times$10$^{-8}$~mbar. In this setup, a light source impinges on the growing ice and the reflected beam creates an interference pattern (Figure~\ref{exp_tecniques}c). Specifically, a Xe arc lamp and a HeNe laser (632~nm) are the light sources for the interference technique (Figure~\ref{exp_tecniques}d). The light from the Xe arc lamp strikes the growing ice at 45 degrees and is reflected toward the aperture of an Andor 303i Shamrock spectrometer. In the spectrometer, the light is dispersed and collected onto a CCD (Andor iDus DV420 OE), which allows recording the interference pattern at different wavelengths in the 250$-$750~nm region. The HeNe beam strikes the ice at a small angle ($\sim$3$^{\circ}$) and is recorded by a photodetector. The interference pattern is later used to derive the refractive index of the ice (see Section~\ref{sec_uvvis}). So far, the experiments on OASIS have targeted pure ices. Next, this setup will be used to measure the refractive index of binary ice mixtures as well.
In IRASIS, OASIS and CRYOPAD, a single gas/vapour component or a gaseous mixture is introduced into the chamber through a controllable leak valve and deposited onto a cold substrate. Usually, the substrate used in transmission spectroscopy is one of the following materials: potassium bromide (KBr), zinc selenide (ZnSe), or germanium (Ge), whereas gold (Au) is used for RAIRS. A UV-enhanced aluminium mirror is used as a substrate in refractive index experiments. In most of the data available from LIDA, the ices are background deposited, which means that the gas inlet does not point toward the sample, allowing the molecules to impinge onto the substrate from random directions and stick on both sides of the substrate. This is more representative of the way molecules interact with an icy dust grain in space and generally causes the ice to be somewhat more porous. In the case of mixed ices, the samples can be prepared in a separate mixing system or by admitting the individual gas/vapour components into the chamber through different dosing lines. In either case, the molecules are considered to be homogeneously mixed before freezing out onto the cooled substrate. The ice thickness is often given in number of monolayers (ML) or Langmuir (L), where one monolayer corresponds to 10$^{15}$~molecules~cm$^{-2}$ \citep{Langmuir1938}. In IR spectroscopy experiments, the ice can be as thin as a few monolayers. In some cases, when ice mixtures are used, the ice has to be thicker ($\sim$3000 monolayers) in order to allow the detection of the less abundant molecular component in the sample or to guarantee that the deposition of background gases during the measurement is negligible \citep[see][]{Scheltinga2018}. In experiments to measure the ice refractive index, the ice is generally much thicker ($\sim$45,000~ML) because the technique requires recording several fringes in the interference pattern.
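To put these monolayer counts into perspective, a sketch of the conversion from a monolayer count to an approximate physical thickness, assuming 1~ML $=$ 10$^{15}$ molecules cm$^{-2}$; the bulk density and molar mass used here are illustrative values for compact water ice, not quantities taken from the database:

```python
AVOGADRO = 6.022e23  # molecules per mole

def thickness_um(n_monolayers, density_g_cm3, molar_mass_g):
    """Approximate ice thickness in microns, taking 1 ML = 1e15
    molecules cm^-2 and an assumed bulk density for the deposited ice."""
    column = n_monolayers * 1e15                        # molecules cm^-2
    n_volume = density_g_cm3 * AVOGADRO / molar_mass_g  # molecules cm^-3
    return column / n_volume * 1e4                      # cm -> micron

# ~45,000 ML of compact water ice (rho ~ 0.94 g cm^-3, M = 18.015 g mol^-1)
# comes out on the order of ten microns -- thick enough for several fringes.
print(round(thickness_um(45_000, 0.94, 18.015), 2))  # -> 14.32
```

A few-monolayer IR sample, by the same estimate, is only nanometers thick, which illustrates why interference-based thickness measurements need the much thicker deposits.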
It is worth mentioning that the shape and position of the IR bands are affected neither by the ice thickness nor by the underlying substrate used in transmission spectroscopy experiments.
\subsection{Absorbance spectrum}
\label{abs_sec}
The majority of the IR absorbance spectra in LIDA have been measured using transmission spectroscopy. The basic principle behind the absorbance spectrum is that the incident radiation is attenuated when crossing the ice sample, owing to the intrinsic properties of the material. The intensity of the transmitted light at each wavelength ($I_{\lambda}$) is given by the Lambert$-$Beer law:
\begin{equation}
I_{\lambda} = I_{\lambda}^0 \exp\left(-\alpha_{\lambda} r \ell \right)
\label{transmittance}
\end{equation}
where $I_{\lambda}^0$ is the incident light intensity, $\alpha_{\lambda}$ is the wavelength-dependent absorption coefficient, $r$ is the concentration in the sample, and $\ell$ is the effective radiation path within the ice. The absorbance is derived from Equation~\ref{transmittance} as shown below:
\begin{equation}
Abs_{\lambda} = -\log_{10}\left( \frac{I_{\lambda}}{I_{\lambda}^0} \right)
= 0.434\, \alpha_{\lambda} r \ell
\label{abs_eq}
\end{equation}
where the absorbance is directly proportional to the molecular concentration and the radiation path in the ice. In transmission spectroscopy, the substrate is transparent to the IR light, and the absorption bands observed in the IR spectra are due to the molecules in the ice sample.
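As a minimal numerical illustration of Equation~\ref{abs_eq} (a sketch in Python, not part of the LIDA tooling itself):

```python
import numpy as np

def absorbance(transmitted, incident):
    """Absorbance spectrum from transmitted (I) and incident (I0)
    intensities: Abs = -log10(I / I0)."""
    return -np.log10(np.asarray(transmitted) / np.asarray(incident))

# 50% transmission corresponds to Abs = log10(2), about 0.301
print(absorbance([0.5], [1.0])[0])
```

Note that the factor 0.434 in Equation~\ref{abs_eq} is simply $\log_{10} e$, converting the natural-log attenuation exponent to the base-10 absorbance scale.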
When RAIRS is used, the absorbance is no longer obtained from Equation~\ref{abs_eq}. The absorbance spectrum is calculated as a function of the reflected light, and depending on the ice thickness, the geometry of the light path, and the setup itself, the spectrum can change substantially. Briefly, in RAIRS, the IR light shines onto a reflective gold (Au) surface at grazing angles ($\sim$ 90$^{\circ}$ w.r.t. surface normal) and is reflected towards the detector (Figure \ref{exp_tecniques}b). Upon specular reflection, the s-polarized light becomes negligible, and only the p-polarized component interacts with the molecules. In this way, RAIRS has an additional selection rule for absorption, which imposes that the vibrational motions have a component orthogonal to the reflection surface \citep{palumbo2006}. RAIRS comes with the disadvantage that spectra cannot be directly compared with astronomical data, as in the case of spectra recorded in transmission. On the other hand, RAIRS has the advantage of increasing the signal-to-noise ratio of the data.
Either transmission or RAIRS can record data of pure or mixed ices before and after warm-up, or processed by UV radiation. The ice spectrum is taken by averaging a certain number of scans, allowing a higher signal-to-noise ratio, and typical spectral resolutions are within 0.5~cm$^{-1}$ and 2.0~cm$^{-1}$, whereas 0.1~cm$^{-1}$ spectra can be recorded if needed. Likewise, the absorbance accuracy is a characteristic of the IR spectrometer, and is around 1\%. Warm-up experiments are performed by depositing the ice at low temperature ($\sim$10~K), followed by a slow increase of the substrate temperature (e.g., 25~K per hour) while IR spectra are continuously taken. In the cases where experiments with UV ice processing are performed, the absorbance spectrum is taken after the irradiation process. The recorded IR absorbance spectrum often shows a curved baseline which needs to be corrected. Typically, a low-order polynomial function is used to flatten the ice spectrum and perform corrections to remove artefacts from the IR spectra. The baseline correction is made by interpolating a function for wavelengths where there is no absorption and subtracting it from the original signal. The spectra contained in the database have been previously baseline corrected using a polynomial or linear function to set the data to zero absorbance where there are no absorption features. When available, the non-baseline corrected spectrum is offered for download. The original spectra (raw data) are not offered for download to avoid publication of data in the literature that have not been treated correctly. This requires appropriate knowledge on how to deal with these datasets.
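The baseline-correction step described above can be sketched as follows; the polynomial order and the choice of feature-free windows are illustrative assumptions, not the exact routine used to prepare the database spectra:

```python
import numpy as np

def baseline_correct(wavenumber, absorbance, windows, order=2):
    """Fit a low-order polynomial to feature-free spectral windows and
    subtract it, so regions without absorption sit at zero absorbance.
    `windows` is a list of (nu_min, nu_max) ranges assumed band-free."""
    mask = np.zeros(wavenumber.shape, dtype=bool)
    for lo, hi in windows:
        mask |= (wavenumber >= lo) & (wavenumber <= hi)
    coeffs = np.polyfit(wavenumber[mask], absorbance[mask], order)
    return absorbance - np.polyval(coeffs, wavenumber)

# Synthetic spectrum: one Gaussian band on a sloped (linear) baseline.
nu = np.linspace(1000.0, 3000.0, 2001)  # cm^-1
spectrum = 2e-4 * nu + 0.1 + 0.3 * np.exp(-((nu - 2141.0) / 3.0) ** 2)
flat = baseline_correct(nu, spectrum,
                        windows=[(1000, 2000), (2300, 3000)], order=1)
```

Here an order-1 fit recovers the synthetic linear baseline exactly, leaving the band untouched; real spectra with curved baselines may need a somewhat higher order, at the cost of more care near broad features.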
For astrochemical applications, the absorbance spectrum is often converted to an optical depth scale. The optical depth of experimental data is given by \citet{dHendecourt1986}:
\begin{equation}
\tau_{\rm{\lambda}}^{\rm{lab}} = 2.3 \cdot Abs_{\lambda}.
\label{od_eq}
\end{equation}
Ultimately, $\tau_{\lambda}^{\rm{lab}}$ is used to calculate the column density of the ice sample from the equation below:
\begin{equation}
N_{\rm{ice}} = \frac{1}{\mathcal{A}} \int_{\nu_1}^{\nu_2} \tau_{\lambda}^{\rm{lab}} d\nu,
\label{CD_eq}
\end{equation}
in which $\mathcal{A}$ is the band strength of the vibrational mode associated with the absorption feature. Most of the $\mathcal{A}$ values in the literature have been derived for pure ice samples \citep[e.g.,][]{Gerakines1995, Gerakines1996, Kerkhof1999, Bouilloud2015, Hudson2017bs}. However, \citet{Oberg2007} and \citet{Bouwman2007} showed that variations in the chemical composition of the ice lead to changes in the band strengths of solid-phase molecules. These changes are often reported as values relative to the pure ice, because information such as the ice density is unknown when the molecular concentrations change within the ice. In Table~\ref{ice_bs}, we compile $\mathcal{A}$ values from the literature for pure ices, which were then used to derive the column densities of the ices in LIDA. These values are also used to derive the column densities of most of the ice mixtures. Otherwise, we use tabulated values from \citet{Oberg2007} to derive the column densities of H$_2$O:CO$_2$, and the values from \citet{Scheltinga2018}, \citet{Rachid2020}, \citet{Scheltinga2021}, and \citet{Rachid2021} to derive the column densities of COM-containing ices.
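Combining Equations~\ref{od_eq} and \ref{CD_eq}, a column-density estimate amounts to a short numerical integration. The sketch below uses a synthetic Gaussian CO stretch band (the peak absorbance and width are invented for illustration) together with the pure-CO band strength listed in Table~\ref{ice_bs}:

```python
import numpy as np

def column_density(wavenumber, absorbance, band_strength):
    """N_ice = (1/A) * integral of tau d(nu), with tau = 2.3 * Abs."""
    order = np.argsort(wavenumber)          # integrate over increasing nu
    nu = wavenumber[order]
    tau = 2.3 * absorbance[order]           # optical depth, Eq. (3)
    area = np.sum(0.5 * (tau[1:] + tau[:-1]) * np.diff(nu))  # trapezoid rule
    return area / band_strength             # molecules cm^-2, Eq. (4)

# Synthetic CO band at 2141 cm^-1 (Gaussian, sigma = 2 cm^-1, peak Abs = 0.1),
# with the pure-CO band strength A = 1.4e-17 cm molec^-1 from Table 2.
nu = np.linspace(2100.0, 2180.0, 801)
abs_co = 0.1 * np.exp(-((nu - 2141.0) ** 2) / (2 * 2.0 ** 2))
print(f"{column_density(nu, abs_co, 1.4e-17):.2e}")  # ~8e16 molecules cm^-2
```

The Gaussian integral can be checked analytically: $\int \tau \, d\nu = 2.3 \times 0.1 \times \sigma \sqrt{2\pi} \approx 1.15$, giving $N \approx 8 \times 10^{16}$ cm$^{-2}$, in line with the numerical result.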
\begin{table*}
\caption{\label{ice_bs} List of vibrational transitions and band strengths of the molecules in pure ices presented in the literature.}
\centering %
\begin{tabular}{lccccc}
\hline\hline
Molecule & $\lambda \; [\mu \mathrm{m}]$ & $\nu \; \mathrm{[cm^{-1}]}$ & Identification & $\mathcal{A} \; \mathrm{[cm \; molec^{-1}]}$ & References\\
\hline
H$_2$O & 3.01 & 3,322 & O$-$H stretch & $\mathrm{2.2 \times 10^{-16}}$ & \citet{Bouilloud2015}\\
H$_2$O & 6.00 & 1,666 & bend & $\mathrm{1.1 \times 10^{-17}}$ & \citet{Bouilloud2015} \\
H$_2$O & 13.20 & 760 & libration & $\mathrm{3.2 \times 10^{-17}}$ & \citet{Bouilloud2015} \\
CO$_2$ & 4.27 & 2,341 & CO stretch & $\mathrm{1.3 \times 10^{-16}}$ & \citet{Bouilloud2015}\\
CO$_2$ & 15.27 & 660, 665 & bend & $\mathrm{1.2 \times 10^{-17}}$ & \citet{Bouilloud2015}\\
CO & 4.67 & 2,141 & CO stretch & $\mathrm{1.4 \times 10^{-17}}$ & \citet{Bouilloud2015}\\
NH$_3$ & 2.96 & 3,376 & NH$_3$ asym-stretch & $\mathrm{2.3 \times 10^{-17}}$ & \citet{Bouilloud2015}\\
NH$_3$ & 6.15 & 1,624 & NH$_3$ def. & $\mathrm{5.6 \times 10^{-18}}$ & \citet{Bouilloud2015}\\
NH$_3$ & 9.01 & 1,109 & NH$_3$ umbrella & $\mathrm{2.1 \times 10^{-17}}$ & \citet{Bouilloud2015}\\
NH$_4^+$ & 6.85 & 1,460 & bend & $\mathrm{4.4 \times 10^{-17}}$ & \citet{Schutte2003}\\
OCN$^-$ & 4.62 & 2,165 & CN stretch & $\mathrm{1.3 \times 10^{-16}}$ & \citet{vanBroekhuizen2005}\\
SO$_2$ & 7.60 & 1,320 & SO$_2$ stretch & $\mathrm{3.4 \times 10^{-17}}$ & \citet{Boogert1997}\\
OCS & 4.93 & 2,025 & CO stretch & $\mathrm{3.4 \times 10^{-17}}$ & Rachid et al. (in prep.)\\
CH$_4$ & 3.32 & 3,010 & CH$_4$ stretch & $\mathrm{1.1 \times 10^{-17}}$ & \citet{Bouilloud2015}\\
CH$_4$ & 7.67 & 1,303 & CH$_4$ deformation & $\mathrm{8.4 \times 10^{-18}}$ & \citet{Bouilloud2015}\\
H$_2$CO & 3.45 & 2,891 & CH$_2$ a-stretch & $\mathrm{4.7 \times 10^{-18}}$ & \citet{Bouilloud2015}\\
H$_2$CO & 3.53 & 2,829 & CH$_2$ s-stretch & $\mathrm{1.3 \times 10^{-17}}$ & \citet{Bouilloud2015}\\
H$_2$CO & 5.79 & 1,725 & C$=$O stretch & $\mathrm{1.6 \times 10^{-17}}$ & \citet{Bouilloud2015}\\
CH$_3$OH & 3.55 & 2,816 & C$-$H stretch & $\mathrm{1.0 \times 10^{-16}}$ & \citet{Bouilloud2015}\\
CH$_3$OH & 6.85 & 1,460 & O$-$H bend & $\mathrm{6.5 \times 10^{-18}}$ & \citet{Bouilloud2015}\\
CH$_3$OH & 8.85 & 1,130 & CH$_3$ rock & $\mathrm{1.8 \times 10^{-18}}$ & \citet{Bouilloud2015}\\
CH$_3$OH & 9.74 & 1,027 & C$-$O stretch & $\mathrm{1.8 \times 10^{-17}}$ & \citet{Bouilloud2015}\\
HCOOH & 5.85 & 1,708 & C$=$O stretch & $\mathrm{5.4 \times 10^{-17}}$ & \citet{Bouilloud2015}\\
HCOOH & 7.25 & 1,384 & OH bend & $\mathrm{2.6 \times 10^{-18}}$ & \citet{Schutte1999_weak}\\
HCOOH & 8.22 & 1,216 & C$-$O stretch & $\mathrm{2.9 \times 10^{-17}}$ & \citet{Bouilloud2015}\\
HCOOH & 9.31 & 1,074 & CH bend & $\mathrm{3.1 \times 10^{-19}}$ & \citet{Bouilloud2015}\\
HCOOH & 10.76 & 929 & OH bend & $\mathrm{6.4 \times 10^{-17}}$ & \citet{Bouilloud2015}\\
CH$_3$CHO & 5.80 & 1,723 & C$=$O stretch & $\mathrm{1.3 \times 10^{-17}}$ & \citet{Schutte1999_weak}\\
CH$_3$CH$_2$OH & 9.17 & 1,090 & CH$_3$ rock & $\mathrm{7.3 \times 10^{-18}}$ & \citet{Hudson2017}\\
CH$_3$CH$_2$OH & 9.51 & 1,051 & CO stretch & $\mathrm{1.4 \times 10^{-17}}$ & \citet{Hudson2017}\\
CH$_3$CH$_2$OH & 11.36 & 880 & CC stretch & $\mathrm{3.2 \times 10^{-18}}$ & \citet{Hudson2017}\\
CH$_3$OCH$_3$ & 8.59 & 1,163 & COC stretch + CH$_3$ rock & $\mathrm{9.8 \times 10^{-17}}$ & \citet{Scheltinga2018}\\
CH$_3$OCH$_3$ & 10.85 & 921 & COC stretch & $\mathrm{5.0 \times 10^{-18}}$ & \citet{Scheltinga2018}\\
CH$_3$COCH$_3$ & 5.84 & 1,710 & C$=$O stretch & $\mathrm{2.7 \times 10^{-17}}$ & \citet{Hudson2018}\\
CH$_3$COCH$_3$ & 7.05 & 1,417 & CH$_3$ a-deformation & $\mathrm{9.2 \times 10^{-18}}$ & \citet{Hudson2018}\\
CH$_3$COCH$_3$ & 7.33 & 1,363 & CH$_3$ s-deformation & $\mathrm{1.4 \times 10^{-17}}$ & \citet{Hudson2018}\\
CH$_3$COCH$_3$ & 8.14 & 1,228 & CCC a-stretch & $\mathrm{7.3 \times 10^{-18}}$ & \citet{Hudson2018}\\
CH$_3$COCH$_3$ & 18.79 & 532 & CO deformation & $\mathrm{2.1 \times 10^{-18}}$ & \citet{Hudson2018}\\
CH$_3$OCHO & 5.80 & 1723 & C$=$O stretch & $\mathrm{5.0 \times 10^{-17}}$ & \citet{Modica2010}\\
CH$_3$OCHO & 8.25 & 1211 & C$-$O stretch & $\mathrm{2.9 \times 10^{-17}}$ & \citet{Modica2010}\\
CH$_3$OCHO & 8.58 & 1165 & CH$_3$ rock & $\mathrm{2.0 \times 10^{-17}}$ & \citet{Modica2010}\\
CH$_3$OCHO & 10.98 & 910 & O$-$CH$_3$ stretch & $\mathrm{4.8 \times 10^{-18}}$ & \citet{Modica2010}\\
CH$_3$OCHO & 13.02 & 768 & OCO deformation & $\mathrm{1.2 \times 10^{-18}}$ & \citet{Modica2010}\\
CH$_3$NH$_2$ & 3.47 & 2881 & CH$_3$ a-stretch & $\mathrm{2.6 \times 10^{-18}}$ & \citet{Rachid2021}\\
CH$_3$NH$_2$ & 3.58 & 2791 & CH$_3$ s-stretch & $\mathrm{3.8 \times 10^{-18}}$ & \citet{Rachid2021}\\
CH$_3$NH$_2$ & 6.76 & 1478 & CH$_3$ a-deformation & $\mathrm{1.1 \times 10^{-18}}$ & \citet{Rachid2021}\\
CH$_3$NH$_2$ & 6.87 & 1455 & CH$_3$ a-deformation & $\mathrm{7.0 \times 10^{-19}}$ & \citet{Rachid2021}\\
CH$_3$NH$_2$ & 7.024 & 1420 & CH$_3$ s-deformation & $\mathrm{2.0 \times 10^{-19}}$ & \citet{Rachid2021}\\
CH$_3$NH$_2$ & 8.63 & 1159 & CH$_3$ rock & $\mathrm{1.5 \times 10^{-18}}$ & \citet{Rachid2021}\\
CH$_3$CN & 3.33 & 3001 & CH$_3$ a-stretch & $\mathrm{1.5 \times 10^{-18}}$ & \citet{Rachid2022}\\
CH$_3$CN & 3.40 & 2940 & CH$_3$ s-stretch & $\mathrm{5.3 \times 10^{-19}}$ & \citet{Rachid2022}\\
CH$_3$CN & 4.44 & 2252 & CN stretch & $\mathrm{1.9 \times 10^{-18}}$ & \citet{Rachid2022}\\
CH$_3$CN & 7.09 & 1410 & CH$_3$ a-deformation & $\mathrm{1.90 \times 10^{-18}}$ & \citet{Rachid2022}\\
CH$_3$CN & 7.27 & 1374 & CH$_3$ s-deformation & $\mathrm{1.2 \times 10^{-18}}$ & \citet{Rachid2022}\\
CH$_3$CN & 9.60 & 1041 & CH$_3$ rock & $\mathrm{1.6 \times 10^{-18}}$ & \citet{Rachid2022}\\
CH$_3$CN & 10.88 & 919 & CC stretch & $\mathrm{3.5 \times 10^{-19}}$ & \citet{Rachid2022}\\
\hline
\end{tabular}
\end{table*}
When an ice column density is derived from RAIRS data, a correction must be performed on the band strength values from the literature. For spectra measured with RAIRS, the ice column density is given by:
\begin{equation}
N_{\rm{ice}} = \frac{1}{R\mathcal{A}} \int_{\nu_1}^{\nu_2} \tau_{\lambda}^{\rm{lab}} d\nu
\label{CD_eq_rairs}
\end{equation}
where $R$ is the correction factor. Specifically, in RAIRS experiments the path length of the light beam across the ice is longer than in transmission IR spectroscopy. Consequently, band strengths measured with RAIRS are no longer the same as those measured in transmission. This correction depends on the molecule, and individual calibration experiments are performed to determine it. Since this paper only presents results from transmission spectroscopy, the $R$ values are not provided. Future upgrades of LIDA will include RAIRS data and their respective correction factors for the ice column density calculation.
Spectra of mixed ices are shown in the database, with the ratio between the molecules given in the label. For example, in a H$_2$O:CO$_2$ (10:1) ice there are 10 molecules of H$_2$O for each molecule of CO$_2$ in the ice. Layered ices can also be made by depositing a certain number of monolayers (1~ML $\sim$ 10$^{15}$~molecules cm$^{-2}$) of a pure molecule on the substrate, followed by a number of ML of another pure or mixed ice. In this case, the sample is labelled, e.g., ``CO over CO$_2$'', which means that pure CO was deposited on top of a pure CO$_2$ ice. Similarly, ``CO under CO$_2$'' means that CO was deposited in the bottom layer, followed by CO$_2$ deposition in the top layer.
In Figure~\ref{piechart}, we show the fraction of pure and mixed samples hosted in LIDA. The majority of the IR spectra (52.4\%) are of binary ice samples. Most of these samples are mixtures of simple molecules (e.g., H$_2$O, CO, CO$_2$). Recently, binary ices containing COMs (e.g., CH$_3$CHO, CH$_3$CH$_2$OH, CH$_3$OCH$_3$, CH$_3$NH$_2$, CH$_3$OCHO, CH$_3$COCH$_3$, CH$_3$CN) have been included in the database. The next largest group (25.2\%) consists of ice samples containing three compounds, which may include a COM. Pure ice samples make up the third group (17.5\%) and contain simple and complex molecules, as well as ions (NH$_4^+$ and OCN$^-$). We note, however, that these ions are formed via warm-up of HNCO:NH$_3$ ice \citep{Novozamsky2001}. Moreover, some of the pure ice samples were exposed to UV radiation, with experimental details given in \citet{Gerakines1996}. Finally, the groups of quaternary and five-component ice samples are combined and account for 4.9\% of all spectra in the database. A full list with all ice analogues in the database is presented in Table~\ref{analogue_list} of Appendix~\ref{Laboratory_data_list}.
\subsection{UV/visible and mid-IR refractive index}
\label{sec_uvvis}
When light shines upon the ice surface, part of it refracts into the ice, and part is specularly reflected by the surface (Figure \ref{exp_tecniques}c,d). The refracted beam is reflected in the ice-substrate interface and eventually emerges back into the vacuum. The phase difference ($\Delta$) between the light rays that pass through the ice and the ones reflected by the surface is related to their optical path difference ($\delta$), which is given by:
\begin{subequations}
\begin{align}
\Delta &= \frac{2 \pi}{\lambda} \delta,\\
\delta &= 2 n d \cos(\theta_2),
\label{phase-difference}
\end{align}
\end{subequations}
where $\lambda$ is the wavelength of the incoming light, $n$ is the real part of the ice refractive index at wavelength $\lambda$, $d$ is the ice thickness, and $\theta_2$ is the refraction angle (see Figure~\ref{exp_tecniques}c), i.e., the angle between the refracted light and the normal to the ice surface. The incident angle $\theta_1$ is related to the refraction angle $\theta_2$ by Snell's law. When $\delta$ is an integer multiple of $\lambda$, $\Delta$ is a multiple of 2$\pi$, resulting in constructive interference of the light beams. Conversely, when $\delta$ is a half-integer multiple of $\lambda$, the interference is destructive. Consequently, as the ice grows, the intensity of the beam reflected by the ice surface traces an oscillation pattern with the form:
\begin{equation}
\begin{split}
I(t) & = A + B \cos[\Delta (t)] \\
&= A + B \cos\bigg( \frac{4 \pi n d(t) \cos(\theta_2)}{\lambda} \bigg ),
\end{split}
\label{intensity}
\end{equation}
where $A$ and $B$ are constants. Thus, the intensity of the interference pattern carries information about both the refractive index and the rate at which the ice thickness increases during deposition. Since both of these parameters are unknown, they cannot be derived from a single interference measurement. However, by recording the interference pattern of a growing ice at two different incident angles or wavelengths and applying Equation \ref{intensity}, both the ice refractive index and the growth rate can be derived. Specifically, for two light beams of the same wavelength ($\lambda$) but different incident angles ($\alpha$, $\beta$), the refractive index expression follows from the frequency of the oscillations (Equation \ref{intensity}) and Snell's law:
\begin{equation}
n_{uv-vis} = \sqrt{\frac{\sin^2 \alpha-(P_{\beta}/P_{\alpha})^2 \sin^2 \beta}{1 - (P_{\beta}/P_{\alpha})^2}},
\label{refractive}
\end{equation}
where $P_{\alpha}$ and $P_{\beta}$ are the periods of the interference patterns generated by the light beams striking the ice at angles $\alpha$ and $\beta$, respectively. For more details about the derivation of Equation~\ref{refractive}, see for example \citet{tempelmeyer1968refractive}, \citet{beltran2015double} and \citet{He2022}.
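Equation~\ref{refractive} can be transcribed directly into code. The sketch below (function name ours, for illustration only) recovers the refractive index from the two fringe periods:

```python
import math

def refractive_index(alpha_deg, beta_deg, p_alpha, p_beta):
    """UV/vis refractive index from two laser-interference patterns
    recorded at incidence angles alpha and beta; p_alpha and p_beta are
    the periods of the corresponding fringe oscillations."""
    sa = math.sin(math.radians(alpha_deg))
    sb = math.sin(math.radians(beta_deg))
    r2 = (p_beta / p_alpha) ** 2
    return math.sqrt((sa * sa - r2 * sb * sb) / (1.0 - r2))
```

As a consistency check: since the fringe period scales as $1/\cos\theta_2$ at fixed wavelength, periods generated from a known $n$ are inverted back to that same $n$.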
While Equation~\ref{refractive} provides the ice refractive index in the UV-vis range, the refractive index in the mid-IR can be calculated using the Kramers-Kronig relation \citep{Kronig1926, Kramers1927}, which is given by:
\begin{equation}
n(\nu) = n_{\rm{670nm}} + \frac{2}{\pi} \mathcal{P} \int_{\nu_1}^{\nu_2} \frac{\nu' k(\nu')}{\nu'^2 - \nu^2}d\nu'
\label{n-value}
\end{equation}
where $n_{\mathrm{670nm}}$ is the refractive index of the sample at 670~nm, a wavelength within the UV-vis range for which the refractive index was derived (through Equation~\ref{refractive}), $\nu$ is the wavenumber corresponding to the peak of the band, and $\nu'$ is the integration variable running over the wavenumbers around $\nu$. The Cauchy principal value $\mathcal{P}$ is used to overcome the singularity at $\nu = \nu'$. The term $k$ corresponds to the imaginary part of the CRI, and is given by:
\begin{equation}
k = \frac{1}{4\pi \nu d} \cdot \left( 2.3 \times Abs_{\nu} + \mathrm{ln} \left| \frac{\tilde{t}_{01}\tilde{t}_{02}}{1 + \tilde{r}_{01} \tilde{r}_{12} e^{2i\tilde{x}}} \right|^2 \right)
\label{k-value}
\end{equation}
where $Abs_{\nu}$ is the absorbance spectrum value (Equation~\ref{abs_eq}), $d$ is the thickness of the ice sample, and $\tilde{t}_{01}$, $\tilde{t}_{02}$, $\tilde{r}_{01}$, $\tilde{r}_{12}$ are the Fresnel coefficients. The subscripts 0, 1, and 2 refer to the vacuum, ice sample, and substrate regions, respectively. The refractive index of the substrate is implicit in the terms $\tilde{t}_{02}$ and $\tilde{r}_{12}$. Finally, the term $\tilde{x}$ is given by $\tilde{x} = 2\pi \nu d \tilde{m}$, where $\tilde{m}$ is the CRI.
To determine the real and imaginary refractive index in the mid-IR, LIDA provides tools to solve Equations~\ref{n-value} and \ref{k-value} numerically in an iterative procedure. Specifically, LIDA uses Maclaurin's formula as described in \citet{Ohta1988} to obtain the real refractive index, from which the imaginary refractive index is subsequently derived. This methodology has also been employed in other computational codes dedicated to calculating the CRI values of ice samples \citep{Rocha2014, Gerakines2020}.
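The Kramers-Kronig step alone can be illustrated with Maclaurin's formula: at each grid point, the principal-value integral of Equation~\ref{n-value} is approximated by summing only the grid points of opposite parity, which never coincide with the pole at $\nu' = \nu$. The sketch below is a deliberate simplification: it takes $k(\nu)$ as given on an evenly spaced wavenumber grid and omits both the Fresnel term of Equation~\ref{k-value} and the iteration between the two equations.

```python
import math

def kk_real_index(wavenumbers, k, n0):
    """Real refractive index n(nu) from the imaginary part k(nu) via the
    Kramers-Kronig relation, evaluated with Maclaurin's formula: for grid
    point j, sum only points of opposite parity (even j uses odd i and
    vice versa), so the singular point i == j is never sampled.
    Assumes an evenly spaced grid; n0 anchors the result to the measured
    visible-range index (n at 670 nm)."""
    h = wavenumbers[1] - wavenumbers[0]          # grid spacing (cm^-1)
    n = []
    for j, nu in enumerate(wavenumbers):
        start = 1 if j % 2 == 0 else 0           # opposite-parity points
        s = sum(wavenumbers[i] * k[i] / (wavenumbers[i] ** 2 - nu ** 2)
                for i in range(start, len(wavenumbers), 2))
        n.append(n0 + (2.0 / math.pi) * 2.0 * h * s)
    return n
```

With $k \equiv 0$ the routine returns $n = n_{\mathrm{670nm}}$ everywhere; a single absorption spike raises $n$ just below the resonance and lowers it just above, as expected for anomalous dispersion.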
In the current version of LIDA, we present the H$_2$O ice refractive index in the UV/vis, measured on the OASIS setup, and mid-IR optical constants calculated with the online tool available in LIDA (see Section~\ref{refrac_index}). In a follow-up paper, the refractive indices of pure H$_2$O shown in this database will be systematically compared with literature values (Rocha et al. {\it in prep.}). The refractive index values of other molecules (e.g., CO, CO$_2$, N$_2$, CH$_4$, CH$_3$OH) have already been measured in part and will be included in future LIDA upgrades. These will also include astronomically relevant ice mixtures.
\section{Features of the database}
\label{struct_db}
The upgraded LIDA is an extendable platform, designed to host IR spectra and UV/vis refractive indices of ice samples, as well as to support the upload of new datasets obtained in future experiments. Access to these data is provided through dynamic and interactive visualization software, which is also linked to online tools for astronomy-oriented calculations. Additionally, all data are available for download in a standard ascii format. In the next subsections, we provide details about different aspects of the database. More information describing the software and approaches used to construct the database is given in Appendix~\ref{DB_design}, and interactive documentation is available at \url{https://leiden-ice-database.readthedocs.io}. %
\subsection{User interface}
\label{us_interface}
The user interface of LIDA comprises four sub-modules, namely, (i) spectral data, (ii) optical constants, (iii) online tools, and (iv) further information and a contact form. Access to these sub-modules is provided via the navigation bar at the top of the web interface. All IR ice spectra are available in the sub-module named {\it Spectral data}, which currently contains more than 1100 ice spectra of over 150 different ice samples. In the future, new data will be added, and the option exists to add previously recorded data that are currently scattered over the literature. The {\it Optical constants} section currently contains only the real refractive index of H$_2$O ice at different temperatures. However, more data from ongoing experiments will be added, including measurements of N$_2$, CO, CO$_2$, CH$_4$ and CH$_3$OH. LIDA is also equipped with {\it Online tools} focused on astronomy-oriented calculations. Finally, the user can visualize the {\it credits} and {\it contact} the developers and scientific managers of the database. To render the database user interface, we used common web technologies such as HTML (HyperText Markup Language), CSS (Cascading Style Sheets) and JavaScript (JS). A list of all software used to develop LIDA is available at \url{https://icedb.strw.leidenuniv.nl/Credit}.
\subsection{Searching capability and metadata}
\label{searching_cap}
The IR spectra and the UV/vis optical constants of the ice analogues in LIDA can be searched via a {\it search box} by accessing the tabs {\it Spectral data} and {\it Optical constants}, respectively, in the navigation bar. The searching capability uses SQLAlchemy\footnote{\url{https://www.sqlalchemy.org/}}, a Python SQL (Structured Query Language) toolkit and Object Relational Mapper, to enable searching within \texttt{Flask} applications.
To find a specific analogue in the database, either the chemical formula or the molecule name can be used. For example, the user can type \texttt{water} or \texttt{H$_2$O} to search for water ice spectra in the database. Searching for ice mixtures is possible by providing a list of the chemical formulas separated by spaces (e.g., \texttt{H2O CO2 CH3OH}). LIDA can also be used to search for molecules sharing common chemical structures. For example, when the query is \texttt{CO}, a list of all the molecules containing a carbon-oxygen bond (both single and double) will be displayed on the web interface (e.g., CH$_3$OH, CH$_3$CHO, HCOOH). Searching for more specific structures, such as functional groups, is also possible. As an example, if the query is \texttt{COOH}, a list with samples containing molecules that carry a carboxylic acid functional group will be returned (e.g., HCOOH). LIDA also supports searching by the type of ice processing. For example, thermally processed water ice can be searched with \texttt{H$_2$O category=warm-up}. Similarly, an energetically processed ice can be searched with \texttt{H$_2$O category=irradiation}. Finally, the user can also search a spectrum by the author who published the data with the command \texttt{H$_2$O author={{\"O}berg}}.
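The search grammar described above, formulas as bare tokens plus optional \texttt{key=value} filters, is simple enough to sketch. The parser below is a toy illustration of that structure only; it is not the actual SQLAlchemy-backed implementation used by LIDA:

```python
def parse_query(query):
    """Split a LIDA-style search string into formula/name tokens and
    key=value filters (e.g. 'H2O CO2 category=warm-up')."""
    formulas, filters = [], {}
    for token in query.split():
        if "=" in token:
            key, _, value = token.partition("=")
            filters[key] = value  # e.g. category -> warm-up
        else:
            formulas.append(token)  # chemical formula or molecule name
    return formulas, filters
```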
By searching for a specific ice sample, the user can also visualize its metadata, such as the spectral resolution, deposition temperature, ice thickness and associated publication. All spectra hosted in LIDA are available for download in ascii format (e.g., \texttt{.txt} extension), a standard format that can be imported into many software packages and computational codes. This feature offers the download of either a single spectrum or all spectra related to an ice analogue.
\subsection{Data visualization}
\label{visual}
The data in LIDA are plotted interactively with \texttt{Bokeh}\footnote{\url{https://docs.bokeh.org/}}, a Python library for interactive visualization \citep{bokeh2018}. This software provides several control buttons by default to support the interactive inspection of the plots. More details can be accessed via the \texttt{Bokeh} documentation.
As an example of the data visualization in LIDA, Figure~\ref{spectrum} shows the IR absorbance spectrum of pure H$_2$O ice at temperatures of 15, 45, 75, 105, and 135~K \citep{Oberg2007}. The colour of each spectrum is linked to the temperature at which the data were recorded, i.e., blue and red are the lowest and highest temperatures, respectively. The spectral visualization in LIDA also contains annotations for the vibrational modes of the molecule. Four spectral features are indicated for H$_2$O ice. The feature around 3800~cm$^{-1}$ (2.63~$\mu$m) corresponds to the free OH stretching or dangling bond. This band is often observed in amorphous water ice, and decreases upon compaction after ion irradiation of the ice, as shown by \citet{Palumbo2006_oh}. However, \citet{Bossa2014,Bossa2015} suggest that the OH dangling bond is only a partially suitable tracer of ice porosity, as a non-detection does not fully exclude that an ice is still somewhat porous. After the ice is warmed up, the dangling bond is no longer observed in this water ice spectrum. The most prominent feature is the absorption band around 3300~cm$^{-1}$ (3~$\mu$m), which corresponds to the OH bulk stretching in the ice. This band is broad and relatively symmetric at low temperatures, whereas it becomes narrower and sharper at higher temperatures. This variation in the shape of the band is due to the phase transition of water ice from the amorphous to the crystalline structure. The water bending mode is observed at 1666~cm$^{-1}$ (6~$\mu$m); the effect of temperature on this feature is a flattening of the band during warm-up. Finally, the water libration band is observed around 800~cm$^{-1}$. The peak position of this band is also sensitive to the physical conditions of the ice, and is blue-shifted at higher temperatures.
In Figure~\ref{spectrum_refacindex}, we display the UV/vis and mid-IR refractive index (0.25$-$20~$\mu$m) of pure H$_2$O ice at 30, 50, 100 and 150~K. The UV/vis values were measured on the OASIS setup \citep{He2022}, whereas the mid-IR values were calculated using the refractive index calculator available via LIDA (see Section~\ref{refrac_index}). The water ice refractive index shows a clear dependence on temperature. In the literature, the real refractive index at 670~nm is adopted as 1.29 and 1.32 for the amorphous and crystalline phases, respectively \citep[e.g.,][]{Warren1984, Hudgins1993, Mastrapa2008, Mastrapa2009}, values that are higher than those presented in this paper.
\subsection{3D molecule viewer}
\label{3dview}
The 3D molecule viewer aims to provide complementary information about the molecules in the ice analogues available through LIDA. The viewer is built with \texttt{Jmol}\footnote{\url{http://jmol.sourceforge.net/}}, an open-source Java package for visualization of chemical structures in 3D \citep{jmol}. The web rendering of the viewer is done via \texttt{JSmol}, an interactive browser object that is written in JavaScript and utilizes \texttt{HTML5}.
\texttt{JSmol} has several built-in functions that are also available in this tool, such as measurements of distances and angles, visualization of vibrational modes, animations, orbitals, and surfaces. In the 3D viewer, the molecule can be rotated to different angles, and the bond rendering can be switched to wireframe. A few dedicated controls are available in the viewer of LIDA, for example, {\it spin} to rotate the molecule; {\it vibration} to show an animation of the vibrational modes; and {\it vectors} to show the direction of the vibrational modes of the functional groups. All these capabilities are important for a better understanding of the spectroscopic properties of the molecules available in LIDA. It should be noted that in the ice environment, molecular rotations are quenched and vibrations are hindered depending on the ice matrix. Furthermore, the ice geometry changes with temperature and upon irradiation, which also affects the molecular vibrations.
With \texttt{JSmol} linked to LIDA, one can animate the normal vibrational modes of the molecules when visualizing their IR ice spectra. This is performed by reading an ``.xyz'' file via \texttt{JSmol}, which contains information about the molecular geometry in Cartesian coordinates, as well as the normal frequencies of the vibrational modes. The default \texttt{JSmol} buttons that control the vibrational-mode animations are disabled when the ``.xyz'' file is not yet available in LIDA. In Section~\ref{vibmodes} we provide further details about the calculation of the vibrational modes used in the database. The viewer shows only one molecule per ice analogue. This means that for an ice mixture such as H$_2$O:CH$_3$CH$_2$OH, only the H$_2$O molecule is immediately displayed in the viewer. To allow the user to visualize other molecules (e.g., CH$_3$CH$_2$OH), the 3D Molecule Viewer is linked to PubChem\footnote{\url{https://pubchem.ncbi.nlm.nih.gov/}} \citep{Pubchem2011, Pubchem2021}, a comprehensive database of freely accessible chemical information maintained by the National Center for Biotechnology Information (NCBI). Searching for a molecule is as simple as typing \texttt{:ethanol} to visualize the 3D shape of CH$_3$CH$_2$OH; the colon symbol ``:'' provides the key to connect with the PubChem database. PubChem contains detailed information on many molecules, which can help the user to understand different aspects of the molecular properties.
Figure~\ref{mol3d} shows an example of the 3D molecule viewer, that displays a screenshot of the bending mode animation of the H$_2$O molecule.
\subsection{Vibrational modes calculation}
\label{vibmodes}
The vibrational modes of the molecules in LIDA are calculated with the ORCA\footnote{\url{https://orcaforum.kofo.mpg.de}} software \citep{Orca2012, Orca2018, Orca2020}, which contains a wide variety of quantum chemistry methods for different purposes. In the 2022 release of LIDA, the aim of these calculations is to show the animation of the vibrational modes; the focus is therefore not on the accuracy of the vibrational frequencies, which have to be taken from experimental values. For the calculations, it is assumed that a molecule is isolated, i.e., not in a matrix surrounded by other molecules, and in the electronic ground state. In addition, ORCA assumes that all vibrations are strictly harmonic. The consequence of these approximations is that the wavenumbers of some vibrational modes calculated with ORCA deviate from the wavenumbers of the absorption bands observed in experimental IR spectra, or the modes may even be absent. The numerical error in the calculation of vibrational frequencies with ORCA may be as large as 50~cm$^{-1}$, although it is considerably lower in most cases. Nonetheless, the vibrational mode assignments are correct and can be used as a tool to visualize the animation of the molecular motions.
For the molecule geometry optimization and calculation of the vibrational modes we adopt the Density Functional Theory (DFT) with the functional B3LYP that stands for ``Becke, 3$-$parameter, Lee-Yang-Parr'' \citep{Becke1993, Stephens1994}. The input geometry of the molecules is taken from the PubChem database. The vibrational frequencies calculated for the molecules in the database can be visualized in the 3D molecule viewer described in Section~\ref{3dview}. Additionally, the modes with calculated frequencies are indicated in green in the annotations of the spectrum visualization. Rotational transitions are not available in these files because they are quenched in the ice environment.
\section{Online tools and applications}
\label{on_tools}
In this section, we introduce two new online tools focused on the creation of synthetic spectra from the laboratory data in the database and on the derivation of the CRI of ice samples at IR wavelengths. These tools have an intuitive graphical user interface that makes them easy to use and allows the output results to be downloaded. Details are given in the subsections below.
\subsection{SPECFY}
\label{synt_spec}
\texttt{SPECFY} is an online tool available through LIDA to construct synthetic spectra of protostars containing ice absorption bands. This tool uses Python \texttt{Flask} for rendering the web page and \texttt{JavaScript} for showing the absorbance spectra in a drop-down menu to be used by \texttt{SPECFY}. The web interface of \texttt{SPECFY} is shown in Appendix~\ref{Specfy}. The next subsections describe the tool and show practical examples of how to use \texttt{SPECFY} to interpret astronomical observations.
\subsubsection{Synthetic spectra}
To construct a synthetic spectrum with multiple ice features, \texttt{SPECFY} performs a linear combination of experimental data in LIDA, selected via a drop-down menu in the web interface. The linear combination is given by:
\begin{equation}
\tau_{\lambda}^{\rm{tot}} = \sum_{i=0}^{n} w_i \tau_{\lambda,i}^{\rm{lab}},
\label{tot_tau}
\end{equation}
where $w_{i}$ is the weighting factor used to increase or decrease the intensity of the ice bands, and $\tau_{\lambda,i}^{\rm{lab}}$ is calculated with Equation~\ref{od_eq}. The weighting factor $w_{i}$ is calculated by the following equation:
\begin{equation}
w_i = \frac{N_{\rm{ice}}^{\rm{inp}}}{N_{\rm{ice}}^{\rm{lab}}},
\end{equation}
where $N_{\rm{ice}}^{\rm{inp}}$ is the input ice column density provided by the user in LIDA, and $N_{\rm{ice}}^{\rm{lab}}$ is the ice column density of the sample itself, which is calculated with Equation~\ref{CD_eq}. For example, if the user requires a column density of $10^{18}$~cm$^{-2}$ and the experimental spectrum has a column density of $10^{17}$~cm$^{-2}$, the selected spectrum is multiplied by a factor of 10 in Equation~\ref{tot_tau}. It is worth noting that all experimental data are interpolated onto a common grid during the linear combination to ensure consistency of the method and to avoid spectral-range variations of the input data.
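The weighted sum of Equation~\ref{tot_tau} can be sketched in a few lines of Python (illustrative only; \texttt{SPECFY} itself also handles the interpolation onto a common grid, which is assumed already done here):

```python
def specfy_combine(components):
    """Weighted sum of laboratory optical-depth spectra.

    `components` is a list of (tau_lab, n_lab, n_inp) tuples: a spectrum,
    its laboratory column density, and the user-requested column density.
    Each spectrum is scaled by w = n_inp / n_lab and added to the total;
    all spectra must share the same wavelength grid.
    """
    total = None
    for tau_lab, n_lab, n_inp in components:
        w = n_inp / n_lab  # weighting factor w_i
        scaled = [w * t for t in tau_lab]
        total = scaled if total is None else [a + b for a, b in zip(total, scaled)]
    return total
```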
Besides the ice spectra hosted in LIDA, the template amorphous silicate spectrum of the Galactic Center source GCS~3, taken from \citet{Kemper2004}, is also available to be combined with the ices. This spectrum was observed with ISO toward the Galactic Center, and has been used as a template to remove the silicate features observed toward protostars in previous works \citep[e.g.,][]{Boogert2008, Bottinelli2010}. In LIDA, this silicate spectrum is important for synthetic spectrum calculations because it makes it possible to check the effects of the Si$-$O bands when blended with ice absorption features. However, we stress that no mixing rule, such as the Maxwell Garnett \citep{Garnett1904, Garnett1906} or Bruggeman \citep{Bruggeman1935, Bruggeman1936} theories, is assumed in this procedure. In practice, \texttt{SPECFY} assumes isolated materials. Additionally, this tool does not include secondary effects of grain size and geometry, nor scattering processes that might affect the shape of the ice bands. These features will be included in future work dedicated to improving \texttt{SPECFY}.
The combined ice spectrum can be used to match observational data. As an example, we create a synthetic spectrum using the parameters described in Table~\ref{ss_par}. The results are shown in Figure~\ref{synhtetic_afgl} and the outputs in the web interface of \texttt{SPECFY} are displayed in Figure~\ref{synhtetic}. The LIDA model in optical depth scale is constructed with \texttt{SPECFY} by combining ice and silicate spectra with different input column densities. The ice components in this combination are composed of pure H$_2$O at 15~K, and the mixtures H$_2$O:CO$_2$ (10:1) and CO:CO$_2$ (2:1). These three ice samples comprise the most abundant ice molecules observed toward protostars \citep{Oberg2011, Boogert2015}. Superposed on the LIDA model, we display the spectrum of the protostar AFGL~989, observed with {\it ISO} \citep{Gibb2004}. The good agreement between the model and the strong bands in the observations shows that \texttt{SPECFY} is a useful tool to model astronomical data. This solution is not necessarily unique for the AFGL~989 spectrum, but the methodology provides the means to help quantify ice column densities and interpret astronomical observations.
The H$_2$O:CO$_2$ ice mixture dominates the absorption profile of the band at 3~$\mu$m, but it cannot entirely explain the absorption excess in the spectral red wing region of AFGL~989. The nature of this strong absorption profile is under debate, but it is often attributed to scattering by large grains \citep[e.g.,][]{Boogert2000} and ammonia hydrates \citep[H$_2$O:NH$_3$; e.g.,][]{Merrill1976, Dartois2002}. The water ice bending and libration modes are also observed around 6~$\mu$m and 13.6~$\mu$m. Likewise, the CO$_2$ bands at 4.27~$\mu$m and around 15~$\mu$m are not entirely modelled by the carbon dioxide fraction in the H$_2$O:CO$_2$ mixture. Additional CO$_2$ is added by the CO:CO$_2$ ice mixture. A fraction of carbon monoxide is expected to coexist in the same ice matrix as carbon dioxide, as indicated by astronomical observations \citep{Pontoppidan2008, Poteet2013}. Although this combination matches the two CO$_2$ bands relatively well, it results in a higher CO ice peak at 4.67~$\mu$m. Finally, the absorption profile of the silicate is relatively well reproduced with the amorphous silicate of GCS~3. Similar to the unclear origin of the absorption excess around 3.3~$\mu$m, other strong absorptions are observed at 6~$\mu$m, usually associated with organic refractory material \citep{Gibb2002, Boogert2008}, and at 6.85~$\mu$m, which has been attributed to CH$_3$OH \citep[e.g.,][]{Bottinelli2010} and NH$_4^+$ \citep[e.g.,][]{Keane2001, Schutte2003, Mate2009, Mate2012}. This exercise shows that the resources available in LIDA can be used to analyse the spectra of protostars and obtain ice column densities.
Next, the optical depth spectrum can be converted to a flux scale in Jy units by adopting the continuum SEDs of different protostars. We compiled and added to LIDA the continuum SEDs of seven protostars as calculated by \citet{Gibb2004} and \citet{Boogert2008}, which are listed in Table~\ref{SEDcont}. The sources are representative of Class I and Class II objects and have spectral data obtained with ground- and space-based telescopes. Except for Elias 29 and AFGL~989, which were observed with the ISO short-wavelength spectrometer (SWS) over the entire range between 2 and 30~$\mu$m, all sources have coverage of 2.5$-$5~$\mu$m (except 4.0$-$4.4~$\mu$m) and 5$-$30~$\mu$m. The former interval is based on the VLT/ISAAC observations summarized in \citet{Pontoppidan2003} and \citet{vanBroekhuizen2005} or Keck NIRSPEC \citep{McLean1998} observations. The latter range is constrained by space-based observations with the Infrared Spectrograph (IRS) of the {\it Spitzer} Space Telescope. Despite the careful SED determination by \citet{Gibb2004} and \citet{Boogert2008}, inaccuracies may still occur, and this must be taken into account when using these data. Once the continuum SED is known, it can be used to convert the ice experimental spectra from optical depth to a flux scale. The conversion to the synthetic spectrum in flux scale is performed by:
\begin{equation}
F_{\lambda}^{\rm{synth}} = F_{\lambda}^{\rm{cont}} \, \exp(-\tau_{\lambda}^{\rm{lab}}),
\label{flux_scale}
\end{equation}
where $F_{\lambda}^{\rm{cont}}$ is the continuum SED of the protostar.
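Equation~\ref{flux_scale} is a pointwise operation on the combined optical-depth spectrum; a minimal sketch (names ours):

```python
import math

def to_flux(tau_total, continuum_jy):
    """Convert a synthetic optical-depth spectrum to a flux scale (Jy):
    F_synth = F_cont * exp(-tau), applied point by point against the
    adopted protostar continuum SED."""
    return [f * math.exp(-t) for f, t in zip(continuum_jy, tau_total)]
```

An optical depth of $\ln 2$ halves the continuum flux, so deep ice bands appear as pronounced absorption dips against the SED.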
\begin{table*}
\caption{\label{SEDcont} Continuum SEDs available in the SPECFY tool compiled from \citet{Gibb2004} and \citet{Boogert2008}.}
\renewcommand{\arraystretch}{1.2}
\centering %
\begin{tabular}{lccc}
\hline\hline
Protostar & Continuum model & Continuum method$^a$ & Observations\\
\hline
\multicolumn{4}{c}{\bf{Class I (disk, envelope)}}\\
\hline
Elias~29 & B2000 & Blackbody & ISO/SWS\\
AFGL~989 & G2004 & Polynomial + Blackbody & ISO/SWS\\
CrA~IRS7~A & B2008 & Polynomial & ISAAC/VLT \& {\it Spitzer}/IRS\\
CrA~IRS7~B & B2008 & Polynomial & ISAAC/VLT \& {\it Spitzer}/IRS\\
IRAS~23238+7401 & B2008 & Polynomial & NIRSPEC/Keck \& {\it Spitzer}/IRS\\
L1014~IRS & B2008 & Polynomial & NIRSPEC/Keck \& {\it Spitzer}/IRS\\
\hline
\multicolumn{4}{c}{\bf{Transition from Class I to Class II (disk, tenuous envelope)}}\\
\hline
DG~Tau~B & B2008 & Polynomial & NIRSPEC/Keck \& {\it Spitzer}/IRS\\
CRBR~2422.8-3423 & B2008 & Polynomial & NIRSPEC/Keck \& {\it Spitzer}/IRS\\
\hline
\end{tabular}
\tablefoot{$^a$Polynomial: low-order ($\le$ 3) polynomial function. Blackbody: a single or multiple blackbody curves to fit the local continuum adjacent to the ice absorption features.}
\end{table*}
Figure~\ref{synhtetic_cont} shows three synthetic spectra using the continuum templates of AFGL~989, Elias~29 and DG Tau B, which represent three source categories, i.e., a high-mass protostar, a low-mass protostar, and a protoplanetary disk, respectively. The continuum applied to the optical depth model is displayed in Figure~\ref{synhtetic_afgl} and the output in the web interface in Figure~\ref{synhtetic}. The effect of the continuum in this example is to set the absolute flux level and the slope of the source SED. Additionally, Figure~\ref{synhtetic_cont} shows the sensitivity limits for the filters G235M and G395M of the JWST Near-Infrared Spectrometer integral field unit (NIRSpec/IFU) and all filters of the Mid-Infrared Instrument in Medium Resolution Spectroscopy mode (MIRI/MRS). These values represent the minimum detectable signal corresponding to a signal-to-noise ratio of 10 obtained with an on-source integration time of 10000 seconds \citep[][]{Glasse2015, Pontoppidan2016}. This comparison shows that ices can be easily detected with JWST toward sources with continuum SEDs similar to those of AFGL~989, Elias~29 and DG Tau B. With this feature in LIDA, one can generate input spectra for the JWST Exposure Time Calculator\footnote{\url{https://jwst.etc.stsci.edu/}} (ETC) that can be used in future proposal cycles.
\begin{table*}
\caption{Selected ice spectra and continuum model used to construct a synthetic protostar spectrum; see Figure~\ref{synhtetic_afgl} for an example.}
\label{ss_par} %
\centering
\setlength{\tabcolsep}{3pt} %
\renewcommand{\arraystretch}{1.3} %
\begin{tabular}{c c c c} %
\hline\hline %
\multicolumn{4}{c}{\underline{Spectrum selection}}\\
Analogue & $T$ (K) & $N_{\rm{ice}}^{\rm{inp}}$ (cm$^{-2}$) & Reference\\
\hline
Pure H$_2$O & 15 & 1.4 $\times$ $10^{17}$ & \citet{Oberg2007} \\
H$_2$O:CO$_2$ (10:1) & 10 & 5.3 $\times$ $10^{18}$ & \citet{Ehrenfreund1997}\\
CO:CO$_2$ (2:1) & 15 & 9.5 $\times$ $10^{17}$ & \citet{vanBroekhuizen2006}\\
Silicate GCS~3 & ... & 1.0 $\times$ $10^{20}$ & \citet{Kemper2004}\\
\hline
\multicolumn{4}{c}{\underline{Continuum selection}}\\
Object & Continuum model & Flux unit & Reference\\
\hline
Elias~29 & B2008 & Jansky & \citet{Boogert2008}\\
AFGL~989 & G2004 & Jansky & \citet{Gibb2004}\\
DG~Tau~B & B2008 & Jansky & \citet{Boogert2008}\\
\hline
\end{tabular}
\tablefoot{All data are interpolated in the range between 2.6 and 20~$\mu$m.}
\end{table*}
\subsubsection{Functional groups in protostellar spectra}
LIDA also has the capability of searching for molecules containing similar associations of atoms and certain functional groups, as described in Section~\ref{searching_cap}. Once found, these can be selected from the drop-down menu in \texttt{SPECFY} to construct a model spectrum for comparison with observations. A practical example is given in Figure~\ref{func_group}. We use two separate entries (\texttt{CO} and \texttt{CH}) in the ``Spectral data'' field of LIDA to search for molecules sharing carbon-oxygen bonds (e.g., carbonyl-bearing molecules, alcohols) and carbon-hydrogen bonds, as shown in the top panel of Figure~\ref{func_group}. The \texttt{CO} entry returns several ice analogues, including HCOOH, CH$_3$OH, CH$_3$CHO and CH$_3$COCH$_3$. Similarly, the \texttt{CH} entry returns the same molecules because they contain both CO and CH chemical bonds. In addition, LIDA also finds CH$_4$ based on this query.
The vibrational modes of functional groups containing a carbonyl group, C$-$O and C$-$H bonds have been assigned in the spectra of protostars \citep[e.g.,][]{Gibb2004, Boogert2015}. To illustrate how LIDA can further support astronomical data interpretation, we show in the middle panel of Figure~\ref{func_group} the experimental spectra of HCOOH, CH$_3$CHO, CH$_3$OH, CH$_3$COCH$_3$ and CH$_4$ scaled to the spectrum of the low-mass protostar HH46 \citep{Boogert2008}. The water ice and silicate contributions have been subtracted from the HH46 spectrum. The chemical bonds associated with the absorption bands are indicated in the green and blue shaded areas. The bottom panel of Figure~\ref{func_group} highlights the chemical bonds of the molecules contributing to the absorption bands towards HH46. The parameters used to scale the laboratory data to the observations are given in Table~\ref{hh46_par}. This exercise shows that LIDA can be used to identify the chemical bonds related to different absorption bands and to provide upper-limit column densities for ices. Figure~\ref{func_group} also highlights the blending of bands in different spectral regions. For example, the C$=$O stretching modes of HCOOH, CH$_3$CHO and CH$_3$COCH$_3$ lie at almost the same wavelength, which points to the need for the high-sensitivity, high-spectral-resolution observational data that will be provided by JWST. This makes LIDA an important tool to explore the contribution of different functional groups and chemical bonds to the overall absorption profile of features observed in interstellar ice spectra. It should be noted that such synthetic spectra can reproduce observed data, but do not necessarily provide a unique solution. Other public codes, such as the \texttt{ENIIGMA} fitting tool \citep{Rocha2021}, have the goal of quantifying the degeneracy of those fits when a large dataset of inputs is taken into account.
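The manual scaling of laboratory spectra to observations amounts to deriving an ice column density from the integrated optical depth of a band, $N_{\rm ice} = 2.303 \int {\rm Abs}(\nu)\, d\nu / A$, with $A$ the band strength. A minimal sketch of this standard relation (the Gaussian band and the band-strength value below are invented toy numbers, not LIDA or HH46 data):

```python
import numpy as np

def column_density(wavenumber, absorbance, band_strength):
    """N_ice = 2.303 * integral(Abs d nu) / A, in cm^-2.

    `band_strength` (A, cm molecule^-1) must come from the literature
    for the specific band; the value used below is only an
    order-of-magnitude placeholder.
    """
    tau = 2.303 * np.asarray(absorbance)        # optical depth from absorbance
    nu = np.asarray(wavenumber)
    integrated = np.sum(0.5 * (tau[1:] + tau[:-1]) * np.diff(nu))  # trapezoid rule
    return integrated / band_strength

# Toy Gaussian band: peak absorbance 0.1, sigma ~ 12.7 cm^-1.
nu = np.linspace(2000.0, 2200.0, 400)           # wavenumber (cm^-1)
absorbance = 0.1 * np.exp(-0.5 * ((nu - 2100.0) / 12.7) ** 2)
N = column_density(nu, absorbance, band_strength=1e-17)
```

For this toy band the result is of order $10^{17}$--$10^{18}$~cm$^{-2}$, comparable to the column densities listed in Table~\ref{hh46_par}.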
\begin{table*}
\caption{Ice spectra selected from the LIDA entries \texttt{CO} and \texttt{CH}, and their column densities after manual scaling to the HH46 spectrum shown in Figure~\ref{func_group}.}
\label{hh46_par} %
\centering
\setlength{\tabcolsep}{3pt} %
\renewcommand{\arraystretch}{1.5} %
\begin{tabular}{l c c c} %
\hline\hline %
Analogue & $T$ (K) & $N_{\rm{ice}}^{\rm{inp}}$ (cm$^{-2}$) & Reference\\
\hline
Pure HCOOH & 15 & 1.9 $\times$ $10^{17}$ & \citet{Bisschop2007}\\
Pure CH$_3$CHO & 15 & 6.1 $\times$ $10^{17}$ & \citet{Scheltinga2018}\\
Pure CH$_3$COCH$_3$ & 15 & 1.7 $\times$ $10^{17}$ & \citet{Rachid2020}\\
Pure CH$_3$OH & 15 & 7.7 $\times$ $10^{17}$ & \citet{Fraser2004}\\
Pure CH$_4$ & 15 & 1.4 $\times$ $10^{17}$ & \citet{Fraser2004}\\
\hline
\end{tabular}
\end{table*}
\subsection{Infrared refractive index calculator}
\label{refrac_index}
In this section, we introduce the refractive index online calculator which is publicly available through LIDA. The web interface of this tool is shown in Figure~\ref{icenk_page} of Appendix~\ref{app_specfy}. This tool uses the approach adopted in \citet{Rocha2014} for the \texttt{NKABS} code and is briefly described below.
The goal of the tool is to calculate the real ($n$) and imaginary ($k$) parts of the CRI ($\tilde{m}$) from the absorbance spectrum (Equation~\ref{abs_eq}) of the ice sample as a function of the wavenumber ($\nu$, in units of cm$^{-1}$). The input experimental data is the absorbance spectrum in \texttt{ascii} format. Additional input parameters are required before starting the calculation: the thickness of the ice sample ($d$) in $\mu$m, the refractive index of the sample at around 670~nm ($n_0$), or at the wavelength of the HeNe laser used in the experiments, the real refractive index of the substrate, and the mean absolute percentage error (MAPE), which is used as the stopping criterion.
Equations~\ref{n-value} and \ref{k-value} are solved iteratively: new values of $k$ are used to calculate new values of $n$, and the updated $n$ in turn improves $k$, until the convergence criterion is reached. The numerical implementation of these equations is described in \citet{Rocha2014}, and follows the procedure presented in \citet{Ohta1988} to solve the Kramers-Kronig equation. As an example, we calculate the CRI values of pure H$_2$O at 30, 75, 105 and 135~K. The H$_2$O ice IR spectra used as input are taken from \citet{Oberg2007}, namely, \texttt{Pure H$_2$O (3000~ML)}\footnote{\url{https://icedb.strw.leidenuniv.nl/data/14}}. Table~\ref{nkabs_par} lists the $n_0$ values, the number of iterations used by the tool and the final MAPE.
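Schematically, the iteration alternates a Lambert--Beer estimate of $k$ with a Kramers--Kronig evaluation of $n$. The sketch below is a strongly simplified stand-in for the \texttt{NKABS} scheme: it ignores interference within the film and the substrate correction terms, and the film parameters are invented toy values; only the structure of the calculation (Maclaurin evaluation of the principal-value integral, in the spirit of \citealt{Ohta1988}) follows the text.

```python
import numpy as np

def k_from_absorbance(nu, absorbance, d_cm):
    # Lambert-Beer estimate of k, ignoring interference in the film.
    return 2.303 * absorbance / (4.0 * np.pi * nu * d_cm)

def n_from_k(nu, k, n0):
    # Kramers-Kronig transform evaluated with Maclaurin's formula:
    # summing only opposite-parity grid points avoids the pole at nu' = nu.
    h = nu[1] - nu[0]
    idx = np.arange(nu.size)
    n = np.empty_like(nu)
    for i in idx:
        m = (idx % 2) != (i % 2)
        n[i] = n0 + (4.0 * h / np.pi) * np.sum(
            nu[m] * k[m] / (nu[m] ** 2 - nu[i] ** 2))
    return n

def mape(old, new):
    # Stopping criterion: mean absolute percentage error between iterations.
    return 100.0 * np.mean(np.abs((new - old) / new))

# Toy film: d = 1 micron, one Gaussian band near 3280 cm^-1.
nu = np.linspace(500.0, 4000.0, 2001)          # wavenumber grid (cm^-1)
absorbance = 0.05 * np.exp(-0.5 * ((nu - 3280.0) / 120.0) ** 2)
k = k_from_absorbance(nu, absorbance, d_cm=1.0e-4)
n = n_from_k(nu, k, n0=1.27)   # in the real tool, n feeds back into k
```

In the real tool the updated $n$ re-enters the $k$ equation and the loop repeats until the MAPE drops below the user-supplied threshold; here a single pass illustrates the anomalous-dispersion feature that $n$ develops around the band.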
The data from OASIS and from the theoretical calculation cover two spectral ranges, namely 0.25 to 0.7~$\mu$m and 2 to 20~$\mu$m, respectively. The interval between 0.7 and 2~$\mu$m is not available in the experimental spectrum. We handle this missing data using different approaches for $n$ and $k$. For $k$, we extrapolate the value of the imaginary refractive index at 2~$\mu$m (10$^{-4}$) down to 0.25~$\mu$m. For $n$, we use a low-order polynomial to link the water ice $n$ values from \citet{He2022} to the data starting at 2~$\mu$m. The caveat of this approach is that it does not take into account the water ice absorption bands in the interval between 0.7 and 2~$\mu$m. However, the absorption features between 1.4 and 1.8~$\mu$m for amorphous and crystalline water ice are very weak \citep{Mastrapa2008}. For example, the $k$ values calculated by \citet{Mastrapa2008} range from 10$^{-5}$ to 10$^{-3}$, which is close to the value used in our extrapolation (10$^{-4}$). Similarly, the variation in $n$ is only 0.2\% between the lowest and highest values.
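The gap treatment can be illustrated with a toy bridge: hold $k$ constant at $10^{-4}$ across the gap and connect the two $n$ branches with a low-order polynomial. All endpoint numbers below are illustrative placeholders, not the actual \citet{He2022} or OASIS values.

```python
import numpy as np

# Illustrative endpoint data (NOT the real He et al. 2022 / OASIS values):
# UV/vis branch ends at 0.7 micron, mid-IR branch starts at 2 micron.
wl_uv, n_uv = np.array([0.25, 0.55, 0.7]), np.array([1.32, 1.29, 1.285])
wl_ir, n_ir = np.array([2.0, 3.0]), np.array([1.26, 1.15])

# Low-order (quadratic) polynomial through the anchors bracketing the gap.
anchors_wl = np.concatenate([wl_uv[-2:], wl_ir])
anchors_n = np.concatenate([n_uv[-2:], n_ir])
coeff = np.polyfit(anchors_wl, anchors_n, 2)

wl_gap = np.linspace(0.7, 2.0, 66)       # the missing 0.7-2 micron interval
n_gap = np.polyval(coeff, wl_gap)        # polynomial bridge for n
k_gap = np.full_like(wl_gap, 1.0e-4)     # constant-k extrapolation
```

The smooth bridge remains adequate only because, as noted above, the true near-IR water ice features are very weak.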
The results are visualized in the ``Refractive index viewer'', shown in Figure~\ref{nk_viewer} for the data at 30~K. The data files and plots are also available for download. A terminal-based version of this tool is available for download in the GitHub\footnote{\url{https://github.com/leiden-laboratory-for-astrophysics/refractive-index}} repository of LIDA. Executables for both Linux and Windows platforms are provided, and are suited to highly resolved spectral data that demands high computational efficiency.
\begin{table}
\caption{Input parameters used to calculate the mid-IR water ice CRI.}
\label{nkabs_par} %
\centering
\setlength{\tabcolsep}{3pt} %
\renewcommand{\arraystretch}{1.5} %
\begin{tabular}{l c c} %
\hline\hline %
Parameters & Values & Reference\\
\hline
Thickness$^a$ (cm) & 1.0 $\times$ 10$^{-4}$ & \citet{Oberg2007}\\
$n_0^{30K}$ & 1.16 & \citet{He2022}\\
$n_0^{75K}$ & 1.21 & \citet{He2022}\\
$n_0^{105K}$ & 1.23 & \citet{He2022}\\
$n_0^{135K}$ & 1.25 & \citet{He2022}\\
$n_{\rm{substrate}}$ & 1.73 & \citet{Querry1987}\\
Initial MAPE & 0.1\% & ...\\
\hline
\multicolumn{3}{c}{After calculation}\\
\hline
Final MAPE & $\leq$ 0.03\% & ...\\
Iterations & 5 & ...\\
\hline
\end{tabular}
\tablefoot{$^a$Assuming 1~ML $\sim$ 3.4~\AA\ \citep{Gonzalez2019}.}
\end{table}
\section{Future upgrades}
\label{future}
LIDA already contains relevant data for all molecules securely or tentatively detected toward protostars (e.g., Tables~\ref{icedb_list} and \ref{analogue_list}), and covers the most abundant species for JWST, but more data are necessary to further boost the interpretation of upcoming observations. Table~\ref{missing_data} lists the molecules that are missing in the current version of LIDA, but are being measured or will be targeted in future experiments. Similarly, temperature- and wavelength-dependent refractive index values of CO, CO$_2$, NH$_3$ and CH$_3$OH were recently measured on the OASIS setup, and will be added to the database after data reduction (Rachid et al. {\it in prep.}). LIDA is also available to host data from other astrochemistry groups; we hope that a single central point for searching ice properties will prove helpful to the full ice community.
\begin{table}
\caption{Molecules currently missing from LIDA, to be included via new measurements or data shared by other laboratories.}
\label{missing_data}
\centering
\begin{tabular}{cc}
\hline\hline
\multicolumn{2}{c}{\bf{$<$ 6 atoms}}\\
Molecule & Name\\
\hline
C$_2$H$_2$ & Acetylene\\
H$_2$S & Hydrogen sulfide\\
HNCO & Isocyanic acid\\
HCN & Hydrogen cyanide\\
\hline
\multicolumn{2}{c}{\bf{$>$ 6 atoms}}\\
\hline
NH$_2$OH & Hydroxylamine\\
NH$_2$CHO & Formamide\\
H$_2$CO$_3$ & Carbonic acid\\
HOCH$_2$CHO & Glycolaldehyde\\
\hline
\end{tabular}
\end{table}
The online tools will also be further developed to support astronomical data interpretation. With this goal, the effect of grain shape will become available when simulating synthetic spectra of protostars. Additionally, the UV/vis $n$ and $k$ values of different ices and ice mixtures will be included in LIDA. Another forthcoming upgrade to LIDA is the inclusion of diagnostic plots relating the peak position and the full width at half maximum (FWHM) of ice features, which can be compared with similar information derived from astronomical observations. %
\section{Summary and outlook}
\label{summary}
The Leiden Ice Database has served the astronomical community for more than 20 years by providing IR spectra of ice samples. In 2015, all ice IR spectra were assembled on one server and visualization tools were developed. In this paper, we present the most recent version of LIDA, which includes over 1100 IR spectra of ice samples under astrophysically relevant conditions, as well as the UV/vis and mid-IR refractive indices of H$_2$O at different temperatures. In addition to the large ensemble of experimental data, the current upgrade includes astronomy-oriented online tools to aid the interpretation of observations with JWST, as well as of past ice observations. Both data and tools are offered in a user-friendly format to boost the usability of the database. It is worth mentioning that LIDA is a specific deliverable within ICE AGE, an ERS JWST program.
The database is under expansion, and spectra of several COMs and refractive index values of other ices will become publicly available in the coming months and years. It is also hoped that other laboratory groups will make their ice spectra available through LIDA. The online tools in the database will also be further developed to meet the needs of ice observation interpretation in the upcoming years with JWST, METIS (the Mid-Infrared Extremely Large Telescope Imager and Spectrograph) on the Extremely Large Telescope (ELT), and SPHEREx (the Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer). More information about LIDA can be found in the public-access online documentation: \url{https://leiden-ice-database.readthedocs.io}.
\begin{acknowledgements}
We thank an anonymous referee for thoughtful comments on both the manuscript and the LIDA website. WRMR thanks Leiden Observatory for financial support. We thank the many (under)graduates, postdocs and staff who have contributed over many years to the data available in LIDA. We furthermore acknowledge the ICE AGE team, whose JWST observing plans were the trigger for updating the ``old'' Leiden Ice Database. We specifically mention Dr. Adwin Boogert for many useful discussions. LIDA is currently also at the base of interpreting data from the MIRI GTO protostar program. We are grateful for continuing support through NOVA, the Netherlands Research School for Astronomy, the NWO through its Dutch Astrochemistry Program (DANII), and the NWO VICI grant ``Unlocking the chemistry of the heavens''. The present work is closely connected to ongoing research within INTERCAT, the Center for Interstellar Catalysis located in Aarhus, Denmark. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 291141 MOLDISK). We also acknowledge the technical support of the Computer group at Leiden Observatory.
\end{acknowledgements}
\bibliographystyle{aa}
\bibliography{References}
\appendix
\section{List of ice samples in LIDA.}
\label{Laboratory_data_list}
The current version of LIDA contains the IR spectra of over 1100 ice samples, which are listed in Table~\ref{analogue_list}. They are categorized as pure ices and as mixtures with two, three, four and five components; this also includes samples that were warmed up or processed by UV radiation.
\longtab[1]{
\begin{landscape}
\begin{longtable}{lccccrrrr}
\caption{\label{analogue_list} Ice analogues hosted in LIDA. Irradiated samples are indicated by the symbol $\leadsto$. ``s'' and ``h'' indicate seconds and hours, respectively.}\\
\hline
\hline
Sample & Thickness & $N_{\rm{ice}}$ $^a$ & Resolution & Ratios & Temperature (K)/ & Substrate/ & Reference\\
& (ML) & (cm$^{-2}$) & (cm$^{-1}$) & & UV radiation (time) & $n_{\rm{substrate}}$ & \\
\hline
\endfirsthead
\caption{Continued.}\\
\hline
Sample & Thickness & $N_{\rm{ice}}$ $^a$ & Resolution & Ratios & Temperature (K)/ & Substrate/ & Reference\\
& (ML) & (cm$^{-2}$) & (cm$^{-1}$) & & UV radiation (time) & $n_{\rm{substrate}}$ & \\
\hline
\endhead
\hline
\endfoot
\hline
\endlastfoot
\multicolumn{8}{c}{\bf{Pure ices}}\\
\hline
H$_2$O & 500 & 5e17 & 1.0 & ... & 10$-$160 & CsI/1.73 & \citet{Gerakines1996}\\
H$_2$O & 3000 & 1e17 & 2.0 & ... & 15$-$135 & CsI/1.73 & \citet{Oberg2007}\\
H$_2$O & 10000 & 3.5e17 & 2.0 & ... & 15$-$135 & CsI/1.73 & \citet{Oberg2007}\\
H$_2$O ($\leadsto$) & 500 & 4.4e17 & 1.0 & ... & 10/5s$-$1h & CsI/1.73 & \citet{Gerakines1996} \\
H$_2$O ($\leadsto$) & 500 & 4.4e17& 1.0 & ... & 25$-$160/1h & CsI/1.73 & \citet{Gerakines1996} \\
CO & 600 & 6e17 & 0.5 & ... & 15$-$45 & CsI/1.73 & \citet{vanBroekhuizen2006} \\
CO ($\leadsto$) & 500 & 5e17& 1.0 & ... & 10/5s$-$1h & CsI/1.73 & \citet{Gerakines1996} \\
CO ($\leadsto$) & 500 & 5e17& 1.0 & ... & 26$-$105/1h & CsI/1.73 & \citet{Gerakines1996} \\
CO$_2$ & 600 & 6e17 & 0.5 & ... & 15$-$130 & CsI/1.73 & \citet{vanBroekhuizen2006} \\
CO$_2$ ($\leadsto$) & 500 & 5e17 & 1.0 & ... & 10/5s$-$1h & CsI/1.73 & \citet{Gerakines1996} \\
CO$_2$ ($\leadsto$) & 500 & 5e17 & 1.0 & ... & 30,70/1h & CsI/1.73 & \citet{Gerakines1996} \\
CH$_4$ ($\leadsto$) & 500 & 5e17 & 1.0 & ... & 10/5s$-$1h & CsI/1.73 & \citet{Gerakines1996} \\
CH$_4$ ($\leadsto$) & 500 & 5e17 & 1.0 & ... & 25$-$280/1h & CsI/1.73 & \citet{Gerakines1996} \\
NH$_3$ & 500 & 5e17 & 1.0 & ... & 10 & CsI/1.73 & \citet{Gerakines1996} \\
NH$_3$ ($\leadsto$) & 500 & 5e17 & 1.0 & ... & 25$-$280/1h & CsI/1.73 & \citet{Gerakines1996} \\
NH$_3$ ($\leadsto$) & 500 & 5e17 & 1.0 & ... & 27,60/5s$-$1h & CsI/1.73 & \citet{Gerakines1996} \\
SO$_2$ & 4000 & 4.5e17 & 1.0 & ... & 10 & CsI/1.73 & \citet{Boogert1997} \\
H$_2$CO & 500 & 3.7e18 & 1.0 & ... & 10 & CsI/1.73 & \citet{Gerakines1996} \\
H$_2$CO ($\leadsto$) & 500 & 5.1e17 & 1.0 & ... & 10/5s$-$1h & CsI/1.73 & \citet{Gerakines1996} \\
H$_2$CO ($\leadsto$) & 500 & 1.1e17 & 1.0 & ... & 30$-$275/1h & CsI/1.73 & \citet{Gerakines1996} \\
NH$_4^+$ & <500 & 3e18 & 1.0 & ... & 80 & CsI/1.73 & \citet{Novozamsky2001}\\
OCN$^-$ & <500 & <5e17& 1.0 & ... & 80 & CsI/1.73 & \citet{Novozamsky2001}\\
OCS & 620 & 6.2e17 & 0.9 & ... & 10$-$170 & KBr/1.54 & Rachid et al. (in prep.)\\
CH$_3$OH & 3400 & 3.4e18 & 0.5 & ... & 15 & ZnSe/2.54 & \citet{Scheltinga2018}\\
CH$_3$OH & 4000 & 6.1e16 & 0.5 & ... & 15$-$160 & ... & \citet{Fraser2004}\\
CH$_3$OH ($\leadsto$) & 500 & 5e17& 1.0 & ... & 10/5s$-$1h & CsI/1.73 & \citet{Gerakines1996} \\
CH$_3$OH ($\leadsto$) & 500 & 5e17& 1.0 & ... & 25$-$230/1h & CsI/1.73 & \citet{Gerakines1996} \\
HCOOH & 900 & 1.4e17 & 1.0 & ... & 30$-$105 & CsI/1.73 & \citet{Bisschop2007}\\
HCOOH & 900 & 1.4e17& 1.0 & ... & 145 (deposition) & CsI/1.73 & \citet{Bisschop2007}\\
CH$_3$CHO & 4500 & 9.7e18 & 1.0 & ... & 30$-$120 & ZnSe/2.54 & \citet{Scheltinga2018} \\
CH$_3$OCH$_3$ & 4500 & 2.9e18 & 1.0 & ... & 30$-$100 & ZnSe/2.54 & \citet{Scheltinga2018}\\
CH$_3$CH$_2$OH & 4500 & 3e18 & 1.0 & ... & 30$-$150 & ZnSe/2.54 & \citet{Scheltinga2018}\\
CH$_3$COCH$_3$ & 2800 & 2.1e18 & 0.5 & ... & 15$-$140 & ZnSe/2.54 & \citet{Rachid2020}\\
CH$_3$OCHO & 2000 & 2.2e18 & 0.5 & ... & 15$-$120 & ZnSe/2.54 & \citet{Scheltinga2021}\\
CH$_3$NH$_2$ & 850 & 1.6e18 & 0.5 & ... & 15$-$140 & ZnSe/2.54 & \citet{Rachid2021}\\
CH$_3$CN & 5000 &3.0e18 & 1.0 & ... & 15$-$150 & KBr/1.54 & \citet{Rachid2022}\\
\hline
\multicolumn{8}{c}{\bf{Binary mixtures}}\\
\hline
H$_2$O:CO & 2500 & 5.1e17 & 1.0 & 1:100 & 15,30 & CsI/1.73 & \citet{Ehrenfreund1997}\\
H$_2$O:CO & 2500 & 4.2e17 & 1.0 & 100:14 & 10 & CsI/1.73 & \citet{Ehrenfreund1997}\\
CO:OCS & ... & ... & 0.9 & 20:1 & 11,20 & KBr/1.54 & Rachid et al. (in prep.)\\
CO$_2$:OCS & ... & ... & 0.9 & 24:1 & 11$-$70 & KBr/1.54 & Rachid et al. (in prep.)\\
H$_2$O:OCS & ... & ... & 0.9 & 20:1 & 11$-$120 & KBr/1.54 & Rachid et al. (in prep.)\\
CO:O$_2$ & 2500 & 1.2e17 & 1.0 & 100:50 & 10,35 & CsI/1.73 & \citet{Ehrenfreund1997}\\
CO:O$_2$ & 2500 & 1e17 & 1.0 & 100:70 & 10 & CsI/1.73 & \citet{Ehrenfreund1997}\\
H$_2$O:CO$_2$ & 2500 &3.9e17 & 1.0 & 1:100 & 10,30 & CsI/1.73 & \citet{Ehrenfreund1997}\\
H$_2$O:CO$_2$ & 2500 &1.7e17 & 1.0 & 1:10 & 10,80 & CsI/1.73 & \citet{Ehrenfreund1997}\\
H$_2$O:CO$_2$ & 2500 &1.2e17 & 1.0 & 1:6 & 10$-$75 &CsI/1.73 & \citet{Ehrenfreund1997}\\
H$_2$O:CO$_2$ & 2500 &4.6e17 & 1.0 & 100:14 & 10 & CsI/1.73& \citet{Ehrenfreund1997}\\
CO:CO$_2$ & 2500 &2e17 & 1.0 & 100:4 & 10,30 & CsI/1.73& \citet{Ehrenfreund1997}\\
CO:CO$_2$ & 2500 & 1.5e17& 1.0 & 100:8 & 10,30 & CsI/1.73& \citet{Ehrenfreund1997}\\
CO:CO$_2$ & 2500 & 1.4e17 & 1.0 & 100:16 & 10,30 &CsI/1.73 & \citet{Ehrenfreund1997}\\
CO:CO$_2$ & 2500 & 1.4e17 & 1.0 & 100:21 & 10,30 &CsI/1.73 & \citet{Ehrenfreund1997}\\
CO:CO$_2$ & 2500 & 1e17 & 1.0 & 100:23 & 10,30 &CsI/1.73 & \citet{Ehrenfreund1997}\\
CO:CO$_2$ & 2500 & 1.3e17 & 1.0 & 100:26 & 10,30 & CsI/1.73& \citet{Ehrenfreund1997}\\
CO$_2$:O$_2$ & 2500 &1e17 & 1.0 & 1:1 & 10 & CsI/1.73& \citet{Ehrenfreund1997}\\
HCOOH:CH$_3$OH & 1800 & 8.3e17 & 1.0 & 1:9 & 15$-$75 & CsI/1.73 & \citet{Bisschop2007}\\
HCOOH:CO & 1800 & 1.25e18 & 1.0 & 1:9 & 15$-$165 & CsI/1.73 & \citet{Bisschop2007}\\
HCOOH:H$_2$O & 900 & 5.5e17 & 1.0 & 0.25:1 & 15$-$165 & CsI/1.73 & \citet{Bisschop2007}\\
HCOOH:H$_2$O & 900 & 3.7e17 & 1.0 & 0.5:1 & 15$-$165 & CsI/1.73 & \citet{Bisschop2007}\\
HCOOH:H$_2$O & 900 & 5.5e17 & 1.0 & 1:1 & 15$-$165 & CsI/1.73 & \citet{Bisschop2007}\\
HCOOH:H$_2$O & 900 & 6e17 & 1.0 & 0.1:1 & 15$-$165 & CsI/1.73 & \citet{Bisschop2007}\\
HCOOH:CO$_2$ & 900 & 6e17 & 1.0 & 0.1:1 & 15$-$165 & CsI/1.73 & \citet{Bisschop2007}\\
H$_2$O:C$^{18}$O$_2$ & 2000 & 1.1e17 & 2.0 & 1:1 & 15$-$135 & CsI/1.73 & \citet{Oberg2007}\\
H$_2$O:C$^{18}$O$_2$ & 6000 & 1.95e17 & 2.0 & 1:1 & 15$-$135 & CsI/1.73 & \citet{Oberg2007}\\
H$_2$O:C$^{18}$O$_2$ & 20000 & 6.2e17 & 2.0 & 1:1 & 15$-$135 & CsI/1.73 & \citet{Oberg2007}\\
H$_2$O:C$^{18}$O$_2$ & 9000 & 1.3e17 & 2.0 & 1:2 & 15$-$135 & CsI/1.73 & \citet{Oberg2007}\\
H$_2$O:C$^{18}$O$_2$ & 30000 & 6.4e17 & 2.0 & 1:2 & 15$-$135 & CsI/1.73 & \citet{Oberg2007}\\
H$_2$O:C$^{18}$O$_2$ & 15000 & 1e18 & 2.0 & 2:1 & 15$-$135 & CsI/1.73 & \citet{Oberg2007}\\
H$_2$O:C$^{18}$O$_2$ & 4500 & 2.8e17 & 2.0 & 2:1 & 15$-$135 & CsI/1.73 & \citet{Oberg2007}\\
H$_2$O:C$^{18}$O$_2$ & 3750 & 1.3e17 & 2.0 & 4:1 & 15$-$135 & CsI/1.73 & \citet{Oberg2007}\\
H$_2$O:C$^{18}$O$_2$ & 15000 & 2.76e18 & 2.0 & 1:4 & 15$-$135 & CsI/1.73 & \citet{Oberg2007}\\
H$_2$O:CO$_2$ & 500 &1.65e18 & 1.0 & 10:1 & 10$-$185 & CsI/1.73 & \citet{Ehrenfreund1999} \\
H$_2$O:CO$_2$ & 500 & 5.75e17 & 1.0 & 1:1 & 10$-$187 & CsI/1.73 & \citet{Ehrenfreund1999} \\
CO$_2$:CH$_3$OH & 500 & 6e17 & 1.0 & 10:1 & 10$-$75 & CsI/1.73 & \citet{Ehrenfreund1999} \\
CO$_2$:CH$_3$OH & 500 & 4.6e17 & 1.0 & 3:1 & 10$-$130 & CsI/1.73 & \citet{Ehrenfreund1999} \\
CO$_2$:CH$_3$OH & 500 & 3.7e17 & 1.0 & 2:1 & 10$-$145 & CsI/1.73 & \citet{Ehrenfreund1999} \\
CO$_2$:CH$_3$OH & 500 & 2.5e17 & 1.0 & 1:1 & 10$-$145 & CsI/1.73 & \citet{Ehrenfreund1999} \\
CO$_2$:CH$_3$OH & 500 & 4.9e17 & 1.0 & 1:2 & 10$-$155 & CsI/1.73 & \citet{Ehrenfreund1999} \\
CO$_2$:CH$_3$OH & 500 & 5.75e17 & 1.0 & 1:3 & 10$-$160 & CsI/1.73 & \citet{Ehrenfreund1999} \\
CO$_2$:CH$_3$OH & 500 &6.4e17 & 1.0 & 1:10 & 10$-$180 & CsI/1.73 & \citet{Ehrenfreund1999} \\
CH$_3$OH:SO$_2$ & ... & 2.7e17 & 1.0 & 1:1 & 10 & CsI/1.73& \citet{Boogert1997} \\
CH$_3$OH:SO$_2$ & ... & 5e17 & 1.0 & 11:1 & 10 & CsI/1.73& \citet{Boogert1997} \\
CO:CO$_2$ & 8000 & 1e17 & 0.5 & 1:1 & 15$-$100 & ... & \citet{Fraser2004} \\
CO over HCOOH & 16000 & 9.7e16 & 0.5 & ... & 15$-$160 & ... & \citet{Fraser2004} \\
CO under HCOOH & 16000 & 9.7e16 & 0.5 & ... & 15$-$160 & ... & \citet{Fraser2004} \\
CO over CO$_2$ & 16000 & 3.1e17 & 0.5 & ... & 15$-$160 & ... & \citet{Fraser2004} \\
CO under CO$_2$ & 16000 & 4.3e17 & 0.5 & ... & 15$-$100 & ... & \citet{Fraser2004} \\
CO:HCOOH & 8000 & 1.2e17 & 0.5 & 1:1 & 15$-$160 & ... & \citet{Fraser2004} \\
CO under CH$_3$OH & 16000 & 7.4e17 & 0.5 & ... & 15$-$160 & ... & \citet{Fraser2004} \\
CO over CH$_3$OH & 16000 & 7.4e17 & 0.5 & ... & 15$-$160 & ... & \citet{Fraser2004} \\
CO over CH$_4$ & 8000 & 2.4e17 & 0.5 & ... & 15$-$40 & ... & \citet{Fraser2004} \\
CO under CH$_4$ & 8000 & 2.4e17 & 0.5 & ... & 15$-$40 & ... & \citet{Fraser2004} \\
CO over CO$_2$ & 1200 & 1e17 & 0.5 & 1:1 & 15$-$110 & CsI/1.73 & \citet{vanBroekhuizen2006} \\
CO over CO$_2$ & 1800 & 1.6e17 & 0.5 & 1:2 & 15$-$110 & CsI/1.73 & \citet{vanBroekhuizen2006} \\
CO$_2$ over CO & 1200 & 9.8e16 & 0.5 & 1:1 & 15$-$110 & CsI/1.73 & \citet{vanBroekhuizen2006} \\
CO$_2$ over CO & 1800 & 3e17 & 0.5 & 2:1 & 15$-$110 & CsI/1.73 & \citet{vanBroekhuizen2006} \\
CO$_2$ over CO & 2400 & 1.9e17 & 0.5 & 3:1 & 15$-$110 & CsI/1.73 & \citet{vanBroekhuizen2006} \\
CO$_2$ over CO & 6600 & 6.7e17 & 0.5 & 10:1 & 15$-$110 & CsI/1.73 & \citet{vanBroekhuizen2006} \\
CO:CO$_2$ & 1200 & 4.4e17 & 0.5 & 1:1 & 15$-$130 & CsI/1.73 & \citet{vanBroekhuizen2006} \\
CO:CO$_2$ & 1800 & 1.9e17 & 0.5 & 2:1 & 15$-$130 & CsI/1.73 & \citet{vanBroekhuizen2006} \\
CO:CO$_2$ & 6600 & 5.5e17 & 0.5 & 1:10 & 15$-$130 & CsI/1.73 & \citet{vanBroekhuizen2006} \\
H$_2$O:CH$_3$CHO & 2000 & 3.4e18 & 1.0 & 20:1 & 15$-$160 & ZnSe/2.54 & \citet{Scheltinga2018} \\
CO:CH$_3$CHO & 2000 & 3.1e18 & 1.0 & 20:1 & 15$-$160 & ZnSe/2.54 & \citet{Scheltinga2018} \\
CH$_3$OH:CH$_3$CHO & 2000 & 1.4e18 & 1.0 & 20:1 & 15$-$140 & ZnSe/2.54 & \citet{Scheltinga2018} \\
H$_2$O:CH$_3$CH$_2$OH & 2000 & 2.5e18 & 1.0 & 20:1 & 15$-$160 & ZnSe/2.54 & \citet{Scheltinga2018} \\
CO:CH$_3$CH$_2$OH & 2000 & 2.6e18 & 1.0 & 20:1 & 15, 30 & ZnSe/2.54 & \citet{Scheltinga2018} \\
CH$_3$OH:CH$_3$CH$_2$OH & 2000 & 4.6e18 & 1.0 & 20:1 & 15$-$150 & ZnSe/2.54 & \citet{Scheltinga2018} \\
H$_2$O:CH$_3$OCH$_3$ & 2000 & 2.3e18 & 1.0 & 20:1 & 15$-$160 & ZnSe/2.54 & \citet{Scheltinga2018} \\
CO:CH$_3$OCH$_3$ & 2000 & 2.7e18 & 1.0 & 20:1 & 15$-$120 & ZnSe/2.54 & \citet{Scheltinga2018} \\
CH$_3$OH:CH$_3$OCH$_3$ & 2000 & 3.8e18 & 1.0 & 20:1 & 15$-$120 & ZnSe/2.54 & \citet{Scheltinga2018} \\
H$_2$O:CH$_3$COCH$_3$ & 3500 &2e18 & 0.5 & 5:1 & 15$-$160 & ZnSe/2.54 & \citet{Rachid2020} \\
H$_2$O:CH$_3$COCH$_3$ & 3500 &2.6e18 & 0.5 & 20:1 & 15$-$160 & ZnSe/2.54 & \citet{Rachid2020} \\
CO:CH$_3$COCH$_3$ & 3500 & 1.9e18 & 0.5 & 5:1 & 15, 30 & ZnSe/2.54 & \citet{Rachid2020} \\
CO:CH$_3$COCH$_3$ & 3500 & 2e18 & 0.5 & 20:1 & 15, 30 & ZnSe/2.54 & \citet{Rachid2020} \\
CO$_2$:CH$_3$COCH$_3$ & 3500 & 1.3e18 & 0.5 & 5:1 & 15$-$100 & ZnSe/2.54 & \citet{Rachid2020} \\
CO$_2$:CH$_3$COCH$_3$ & 3500 & 1.4e18 & 0.5 & 20:1 & 15$-$100 & ZnSe/2.54 & \citet{Rachid2020} \\
CH$_3$OH:CH$_3$COCH$_3$ & 3500 & 2.6e18 & 0.5 & 5:1 & 15$-$140 & ZnSe/2.54 & \citet{Rachid2020} \\
CH$_3$OH:CH$_3$COCH$_3$ & 3500 & 5.5e18 & 0.5 & 20:1 & 15$-$140 & ZnSe/2.54 & \citet{Rachid2020} \\
CH$_3$NH$_2$:H$_2$O & 3500 & 2.2e18 & 0.5 & 1:5 & 15$-$150 & ZnSe/2.54 & \citet{Rachid2021} \\
CH$_3$NH$_2$:H$_2$O & 3500 & 2.5e18 & 0.5 & 1:10 & 15$-$150 & ZnSe/2.54 & \citet{Rachid2021} \\
CH$_3$NH$_2$:H$_2$O & 3500 &2.6e18 & 0.5 & 1:20 & 15$-$150 & ZnSe/2.54 & \citet{Rachid2021} \\
CH$_3$NH$_2$:CH$_4$ & 3500 &1.9e18 & 0.5 & 1:5 & 15$-$45 & ZnSe/2.54 & \citet{Rachid2021} \\
CH$_3$NH$_2$:CH$_4$ & 3500 &2e18 & 0.5 & 1:10 & 15$-$45 & ZnSe/2.54 & \citet{Rachid2021} \\
CH$_3$NH$_2$:CH$_4$ & 3500 &2e18 & 0.5 & 1:20 & 15$-$45 & ZnSe/2.54 & \citet{Rachid2021} \\
CH$_3$NH$_2$:NH$_3$ & 3500 &2.2e18 & 0.5 & 1:5 & 15$-$115 & ZnSe/2.54 & \citet{Rachid2021} \\
CH$_3$NH$_2$:NH$_3$ & 3500 &2.4e18 & 0.5 & 1:10 & 15$-$115 & ZnSe/2.54 & \citet{Rachid2021} \\
CH$_3$NH$_2$:NH$_3$ & 3500 &2.6e18 & 0.5 & 1:20 & 15$-$115 & ZnSe/2.54 & \citet{Rachid2021} \\
CH$_3$OCHO:CO & 2000 & 1.8e18 & 0.5 & 1:20 & 15$-$120 & ZnSe/2.54 & \citet{Scheltinga2021}\\
CH$_3$OCHO:H$_2$CO & 2000 & 2.2e18 & 0.5 & 1:20 & 15$-$120 & ZnSe/2.54 & \citet{Scheltinga2021}\\
CH$_3$OCHO:CH$_3$OH & 2000 & 1.6e18 & 0.5 & 1:20 & 15$-$120 & ZnSe/2.54 & \citet{Scheltinga2021}\\
CH$_3$OCHO:H$_2$O & 2000 & 1.5e18 & 0.5 & 1:20 & 15$-$120 & ZnSe/2.54 & \citet{Scheltinga2021}\\
CH$_3$CN:H$_2$O & 5000 & 1.7e18 & 1.0 & 1:5 & 15$-$150 & Ge/4.0& \citet{Rachid2022}\\
CH$_3$CN:H$_2$O & 5000 & 2.3e18 & 1.0 & 1:10 & 15$-$150 & Ge/4.0& \citet{Rachid2022}\\
CH$_3$CN:H$_2$O & 5000 & 1.6e18 & 1.0 & 1:20 & 15$-$150 & Ge/4.0& \citet{Rachid2022}\\
CH$_3$CN:CO & 5000 & 1.3e18 & 1.0 & 1:5 & 15,30 & Ge/4.0& \citet{Rachid2022}\\
CH$_3$CN:CO & 5000 & 1.6e18 & 1.0 & 1:10 & 15,30 & Ge/4.0& \citet{Rachid2022}\\
CH$_3$CN:CO$_2$ & 5000 & 1.2e18 & 1.0 & 1:5 & 15$-$150 & Ge/4.0& \citet{Rachid2022}\\
CH$_3$CN:CO$_2$ & 5000 & 1.8e18 & 1.0 & 1:10 & 15$-$150 &Ge/4.0 & \citet{Rachid2022}\\
CH$_3$CN:NH$_3$ & 5000 & 2e18 & 1.0 & 1:5 & 15$-$150 & Ge/4.0& \citet{Rachid2022}\\
CH$_3$CN:NH$_3$ & 5000 & 2.1e18 & 1.0 & 1:10 & 15$-$150 & Ge/4.0& \citet{Rachid2022}\\
CH$_3$CN:NH$_3$ & 5000 & 2.1e18 & 1.0 & 1:20 & 15$-$150 & Ge/4.0& \citet{Rachid2022}\\
\hline
\multicolumn{8}{c}{\bf{Tertiary mixtures}}\\
\hline
HCOOH:H$_2$O:CO$_2$ & 1800 & 1.5e18 &1.0 & 0.1:1:0.4 & 15 & CsI/1.73 & \citet{Bisschop2007}\\
HCOOH:H$_2$O:CH$_3$OH & 1800 & 1.6e18 & 1.0 & 0.1:1:0.4 & 15 & CsI/1.73 & \citet{Bisschop2007}\\
HCOOH:H$_2$O:CO & 1800 & 1.74e18 & 1.0 & 0.1:1:0.4 & 15 & CsI/1.73 & \citet{Bisschop2007}\\
H$_2$O:CO:O$_2$ & 2500 & 3.8e17 & 1.0 & 1:80:20 & 10,30 & CsI/1.73& \citet{Ehrenfreund1997}\\
H$_2$O:CO:CO$_2$ & 2500 & 3.3e17 & 1.0 & 1:50:50 & 10,30 & CsI/1.73& \citet{Ehrenfreund1997}\\
H$_2$O:CO:CO$_2$ & 2500 & 3.7e18 & 1.0 & 1:50:56 & 10,45 & CsI/1.73& \citet{Ehrenfreund1997}\\
CO:O$_2$:CO$_2$ & 2500 & 1.2e17 & 1.0 & 100:50:4 & 10,30 & CsI/1.73& \citet{Ehrenfreund1997}\\
CO:O$_2$:CO$_2$ & 2500 & 1.2e17 & 1.0 & 100:50:8 & 10 & CsI/1.73& \citet{Ehrenfreund1997}\\
CO:O$_2$:CO$_2$ & 2500 & 1.2e17 & 1.0 & 100:50:16 & 10,30 &CsI/1.73 & \citet{Ehrenfreund1997}\\
CO:O$_2$:CO$_2$ & 2500 & 1.2e17 & 1.0 & 100:50:21 & 10,30 &CsI/1.73 & \citet{Ehrenfreund1997}\\
CO:O$_2$:CO$_2$ & 2500 & 1.2e17 & 1.0 & 100:50:32 & 10 & CsI/1.73& \citet{Ehrenfreund1997}\\
CO:O$_2$:CO$_2$ & 2500 & 1.2e17 & 1.0 & 100:54:10 & 10,30 &CsI/1.73 & \citet{Ehrenfreund1997}\\
CO:O$_2$:CO$_2$ & 2500 &1.2e17 & 1.0 & 100:20:11 & 10,30 & CsI/1.73& \citet{Ehrenfreund1997}\\
CO:O$_2$:CO$_2$ & 2500 &1.2e17 & 1.0 & 100:11:20 & 10,30 & CsI/1.73& \citet{Ehrenfreund1997}\\
CO:O$_2$:CO$_2$ & 2500 & 1.2e17 & 1.0 & 100:10:23 & 10,30 & CsI/1.73& \citet{Ehrenfreund1997}\\
H$_2$O:CO:N$_2$ & 2500 & 2e17 & 1.0 & 1:40:50 & 10,30 & CsI/1.73& \citet{Ehrenfreund1997}\\
CO:N$_2$:CO$_2$ & 2500 &1.2e17 & 1.0 & 100:50:20 & 10,30 & CsI/1.73& \citet{Ehrenfreund1997}\\
H$_2$O:CO$_2$:CO & 2500 &2.7e17 & 1.0 & 100:20:3 & 20 & CsI/1.73& \citet{Ehrenfreund1997}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &1.2e18 & 1.0 & 9:1:2 & 10$-$185 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &2.8e17 & 1.0 & 0.2:0.6:1 & 10$-$140 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &2.8e17 & 1.0 & 0.4:0.6:1 & 10$-$140 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 & 6e17& 1.0 & 1:0.6:1 & 10$-$180 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &9.8e17 & 1.0 & 0.7:0.7:1 & 10$-$146 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &1.5e18 & 1.0 & 0.8:0.9:1 & 10$-$135 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &2.7e18 & 1.0 & 1:1:1 & 10$-$145 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &4.9e17 & 1.0 & 0.7:1:1 & 10$-$120 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &1.8e16 & 1.0 & 0.6:1:0.8 & 10$-$121 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &7.8e17 & 1.0 & 1.2:0.7:1.0 & 10$-$119 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &3e17 & 1.0 & 0.7:0.9:1.0 & 10$-$134 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &5e16 & 1.0 & 0.5:1:1 & 10 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &9.7e16 & 1.0 & 0.9:1.4:1 & 10$-$125 &CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &7.9e16 & 1.0 & 0.2:0.5:1 & 10, 98 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &7.7e16 & 1.0 & 0.3:0.5:1 & 10$-$95 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &7.1e16 & 1.0 & 0.3:0.7:1 & 10$-$82 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &1.2e18 & 1.0 & 1.1:1.2:1 & 10$-$131 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 & 3e17 & 1.0 & 0.7:0.9:1 & 10$-$134 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$ & 500 &7.2e17 & 1.0 & 0.9:0.3:1 & 10$-$115 & CsI/1.73 & \citet{Ehrenfreund1999}\\
CO:CH$_3$OH:CH$_3$CHO & 2000 &1.8e18 & 1.0 & 20:20:1 & 15$-$120 & ZnSe/2.54 & \citet{Scheltinga2018}\\
CO:CH$_3$OH:CH$_3$CH$_2$OH & 2000 &1.8e18 & 1.0 & 20:20:1 & 15$-$150 & ZnSe/2.54 & \citet{Scheltinga2018}\\
CO:CH$_3$OH:CH$_3$OCH$_3$ & 2000 &1.7e18 & 1.0 & 20:20:1 & 15$-$100 & ZnSe/2.54 & \citet{Scheltinga2018}\\
CH$_3$COCH$_3$:H$_2$O:CO$_2$ & 3500 & 7.2e17 & 0.5 & 1:2.5:2.5 & 15$-$160 & ZnSe/2.54 & \citet{Rachid2020}\\
CH$_3$COCH$_3$:H$_2$O:CO$_2$ & 3500 & 6.8e17 & 0.5 & 1:10:10 & 15$-$160 & ZnSe/2.54 & \citet{Rachid2020}\\
CH$_3$COCH$_3$:CO:CH$_3$OH & 3500 & 9.7e17 & 0.5 & 1:2.5:2.5 & 15$-$140 & ZnSe/2.54 & \citet{Rachid2020}\\
CH$_3$COCH$_3$:CO:CH$_3$OH & 3500 & 1e18 & 0.5 & 1:10:10 & 15$-$140 & ZnSe/2.54 & \citet{Rachid2020}\\
CH$_3$NH$_2$:H$_2$O:CH$_4$ & 2500 &1.1e18 & 0.5 & 1:5:5 & 15$-$150 & ZnSe/2.54 & \citet{Rachid2021}\\
CH$_3$NH$_2$:H$_2$O:CH$_4$ & 2500 &1.2e18 & 0.5 & 1:10:10 & 15$-$150 & ZnSe/2.54 & \citet{Rachid2021}\\
CH$_3$NH$_2$:H$_2$O:NH$_3$ & 2500 &1.2e18 & 0.5 & 1:5:5 & 15$-$150 & ZnSe/2.54 & \citet{Rachid2021}\\
CH$_3$NH$_2$:H$_2$O:NH$_3$ & 2500 &1.3e18 & 0.5 & 1:10:10 & 15$-$150 & ZnSe/2.54 & \citet{Rachid2021}\\
CH$_3$NH$_2$:CH$_4$:NH$_3$ & 2500 &1.1e18 & 0.5 & 1:5:5 & 15$-$115 & ZnSe/2.54 & \citet{Rachid2021}\\
CH$_3$NH$_2$:CH$_4$:NH$_3$ & 2500 &1.2e18 & 0.5 & 1:10:10 & 15$-$115 & ZnSe/2.54 & \citet{Rachid2021}\\
CH$_3$CN:H$_2$O:CO$_2$ & 5000 & 1.3e18 & 1.0 & 1:5:2 & 15$-$150 & Ge/4.0 & \citet{Rachid2022}\\
\hline
\multicolumn{8}{c}{\bf{Quaternary mixtures}}\\
\hline
H$_2$O:CO:O$_2$:N$_2$ & 2500 &2.6e17 & 1.0 & 1:40:40:15 & 10,30 &CsI/1.73 & \citet{Ehrenfreund1997}\\
H$_2$O:CH$_3$OH:CO$_2$:NH$_3$ & 500 &3.1e17 & 1.0 & 0.7:0.7:1:0.7 & 10$-$104 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$:CH$_4$ & 500 &3.1e17 & 1.0 & 0.6:0.7:1:0.1 & 10$-$119 & CsI/1.73 & \citet{Ehrenfreund1999}\\
H$_2$O:CH$_3$OH:CO$_2$:CH$_4$ & 500 & 1e17 & 1.0 & 0.4:0.6:1:0.23 & 10 & CsI/1.73 & \citet{Ehrenfreund1999}\\
CO:O$_2$:N$_2$:CO$_2$ & 2500 &4.4e18 & 1.0 & 1:50:25:32 & 10,30 & CsI/1.73 & \citet{Ehrenfreund1997}\\
CH$_3$OCHO:CO:H$_2$CO:CH$_3$OH & 2000 & 8.9e17 & 0.5 & 1:20:20:20 & 15$-$120 & ZnSe/2.54 & \citet{Scheltinga2021}\\
CH$_3$NH$_2$:H$_2$O:CH$_4$:NH$_3$ & 3400 & 9.3e17 & 0.5 & 3:10:10:10 & 15$-$120 & ZnSe/2.54& \citet{Rachid2021}\\
CH$_3$CN:H$_2$O:CH$_4$:NH$_3$ & 5000 & 2.3e18 & 1.0 & 1:20:2:2 & 15$-$150 & Ge/4.0& \citet{Rachid2022}\\
\hline
\multicolumn{8}{c}{\bf{Five-component mixtures}}\\
\hline
H$_2$O:CO:O$_2$:N$_2$:CO$_2$ & 2500 & 3.8e17 & 1.0 & 1:50:35:15:3 & 10 & CsI/1.73& \citet{Ehrenfreund1997}\\
H$_2$O:CO:O$_2$:N$_2$:CO$_2$ & 500 & 8e16 & 1.0 & 1:50:35:15:3 & 10 & CsI/1.73& \citet{Ehrenfreund1997}\\
\hline
\end{longtable}
\tablefoot{$^a$ Except in the case of pure ices, $N_{\rm{ice}}$ values correspond to the major ice component.
}
\end{landscape}
}
\twocolumn
\section{Database design and back-end information}
\label{DB_design}
The structure of LIDA is built with \texttt{Flask}\footnote{\url{https://flask.palletsprojects.com/en/2.0.x/}} \citep{flask2018}, an open-source web framework written in Python. \texttt{Flask} is widely extensible in the sense that external software can be embedded in the web application. LIDA has two major interfaces, serving administrators and users, respectively. The user interface is described in Section~\ref{us_interface}. Here, we provide details about the administrator interface, which is only accessible via login and is restricted to collaborators and developers.
The administrator interface provides access to all information hosted in the database, as well as the capability to add and modify data. In this module, the database is structured in a relational design between {\bf Analogues} and {\bf Spectrum}. An {\it Analogue} is the name of the ice sample (e.g., Pure H$_2$O), whereas a {\it Spectrum} is the IR spectrum of the analogue at a specific temperature (e.g., Pure H$_2$O at 15~K). Table~\ref{database_design} shows a scheme of the information contained in the database. All this information is also visible in the user interface, which is introduced in Section~\ref{us_interface}.
\begin{table*}
\caption{\label{database_design} Example of relational database for pure H$_2$O ice. All information is visible in the user interface.}
\renewcommand{\arraystretch}{1.1}
\scalebox{0.9}{
\centering %
\begin{tabular}{lccccccc}
\hline\hline
\multicolumn{7}{c}{\bf{Analogue}}\\
ID$^a$ & Name$^b$ & Deposition & Author & DOI & Upload date & Annotation$^b$\\
& & temperature (K)& & & &\\
\hline
14 & Pure H$_2$O~3000~ML & 15 & {\"O}berg et al. & 10.1051/0004-6361:20065881 & 2021-10-27 & Pure\_H2O\_3000\_ML.csv \\
\hline
\multicolumn{7}{c}{\bf{Spectrum}}\\
ID$^a$ & Temperature & Column density & Ice thickness & Resolution & Wavenumber range & File name$^c$\\
& (K) & (molec./cm$^{2}$) & ($\mu$m) & (cm$^{-1}$) & (cm$^{-1}$) & \\
\hline
14 & 15 & 1e17 & 3000 & 2.0 & 500$-$4000 & 96\_15.0K\\
\hline
\end{tabular}}
\tablefoot{$^a$ identifier number that relates the analogue to the spectrum.\\
$^b$ Annotation file containing the respective position and assignments of the vibration modes.\\
$^c$ Name of the file containing the wavenumber and absorbance of the ice sample when uploaded to the database. This file is stored in HDF5 format in LIDA, but available for download as \texttt{ascii} file to the user.\\
}
\end{table*}
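The one-to-many {\it Analogue}--{\it Spectrum} relation described above can be sketched in a few lines of SQL. The sketch below uses Python's built-in \texttt{sqlite3} rather than the \texttt{Flask} stack actually used by LIDA, and the column names are illustrative, modeled on Table~\ref{database_design}; it is not the database's real schema.

```python
import sqlite3

# Illustrative schema: one Analogue row owns many Spectrum rows,
# linked by the shared identifier shown in Table (database_design).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE analogue (
    id INTEGER PRIMARY KEY,
    name TEXT,
    deposition_temperature_K REAL
);
CREATE TABLE spectrum (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    analogue_id INTEGER REFERENCES analogue(id),
    temperature_K REAL,
    column_density TEXT,
    file_name TEXT
);
""")
con.execute("INSERT INTO analogue VALUES (14, 'Pure H2O 3000 ML', 15)")
con.execute(
    "INSERT INTO spectrum (analogue_id, temperature_K, column_density, file_name) "
    "VALUES (14, 15, '1e17', '96_15.0K')"
)

# Fetch every spectrum belonging to a given analogue.
rows = con.execute("""
SELECT a.name, s.temperature_K, s.file_name
FROM analogue a JOIN spectrum s ON s.analogue_id = a.id
WHERE a.id = 14
""").fetchall()
```

The join on the shared identifier is what lets the user interface list all temperature steps of one analogue together.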
In addition to the IR spectra of ice samples, the database hosts experimentally derived UV-vis refractive index values (optical constants) and calculated values for the mid-IR. The continuum spectral energy distribution (SED) of protostars is also hosted, but is only accessible via the online tool \texttt{SPECFY}. The database files containing this information are structured in the same relational design used for {\it Analogues} and {\it Spectrum}. The files containing the spectral data, optical constants, and continuum SEDs are all stored on the server in HDF5 (Hierarchical Data Format) format, which is designed to store large amounts of data. The web interface allows the administrator to upload the data as a simple two-column file. The first column (X-axis) is the wavenumber (cm$^{-1}$) for the absorbance and refractive index data, whereas wavelength ($\mu$m) is used for continuum SED data. The second column (Y-axis) gives the corresponding physical quantity: absorbance, refractive index, or flux in Jy, respectively. Python supports HDF5 through the H5PY package\footnote{\url{https://docs.h5py.org/en/stable/}}, which is used in LIDA to generate compressed files and improve the efficiency of the database. Although the files are stored in HDF5 format, they are available for download as ASCII files (\texttt{.txt} extension). For security reasons, LIDA performs a check when uploading data, which consists of validating the file extension, structure, and size. Absorbance data can be uploaded under the category of warm-up or irradiation time (exposition). Similarly, the refractive index is uploaded under the category of real or imaginary values. The continuum SED data is uploaded under the categories of polynomial or blackbody.
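The HDF5 round trip described above can be sketched with \texttt{h5py} and NumPy. This is an illustrative sketch, not LIDA's actual storage code: the dataset names, the toy absorbance band, and the file name are invented for the example.

```python
import os
import tempfile

import numpy as np
import h5py

# Two-column spectrum, as uploaded to the database:
# wavenumber (cm^-1) versus absorbance. The Gaussian band is a toy stand-in.
wavenumber = np.linspace(500.0, 4000.0, 1000)
absorbance = np.exp(-0.5 * ((wavenumber - 3280.0) / 150.0) ** 2)

path = os.path.join(tempfile.mkdtemp(), "96_15.0K.h5")
with h5py.File(path, "w") as f:
    # gzip compression mirrors LIDA's use of compressed HDF5 storage.
    f.create_dataset("wavenumber", data=wavenumber, compression="gzip")
    f.create_dataset("absorbance", data=absorbance, compression="gzip")

# Export the same data as the two-column ASCII file offered for download.
with h5py.File(path, "r") as f:
    data = np.column_stack([f["wavenumber"][:], f["absorbance"][:]])
np.savetxt(path.replace(".h5", ".txt"), data)
```

The compressed HDF5 file is what the server keeps; the \texttt{.txt} export is what the user receives.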
The administrator module also contains the {\it access information tracker}, which tracks the number of accesses and downloads over months and years. The goal of this feature is to assess the impact of LIDA in providing the astronomical community with essential and accurate data for interpreting telescope observations.
\section{Web interfaces of the online tools}
\label{app_specfy}
The web interface of \texttt{SPECFY} is shown in Figure~\ref{Specfy}. In ``Step 1'', \texttt{SPECFY} sets the wavelength range used to create the synthetic spectrum. This step is crucial because some absorbance spectra in the database cover different ranges. By setting the range, all the absorbance spectra selected in Step 2 are evenly interpolated to ensure that all spectral components share the same range. Next, in ``Step 2'', a laboratory ice spectrum from LIDA can be selected, converted to an optical depth scale, and combined with other ice spectra. This step can be repeated multiple times. Finally, in ``Step 3'', the optical depth spectrum is converted to a spectrum in flux units based on the object and the continuum SED model adopted by the user. Examples of the output files are shown in Figure~\ref{synhtetic}. The {\it top} panel shows the combined spectrum used to match the AFGL~989 {\it ISO} spectrum. The {\it bottom} panel displays the synthetic spectrum on a flux scale, adopting the continuum SED of the protostar Elias~29.
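The three \texttt{SPECFY} steps can be sketched numerically. The sketch below is not LIDA's code; it only assumes the standard conversions, namely optical depth $\tau = \ln(10)\,A$ for absorbance $A$, and an absorbed flux $F = F_{\rm cont}\,e^{-\tau}$. The function name and argument layout are invented for the example.

```python
import numpy as np

def combine_to_flux(components, continuum, wavelength_um):
    """Illustrative sketch of the three SPECFY steps.

    components:    list of (wavelength_um, absorbance) laboratory spectra
    continuum:     continuum SED flux (Jy) sampled on wavelength_um
    wavelength_um: the common wavelength grid chosen in Step 1
    """
    # Steps 1-2: interpolate every lab spectrum onto the common grid,
    # convert absorbance to optical depth (tau = ln(10) * A),
    # and combine the components by summing their optical depths.
    tau = np.zeros_like(wavelength_um)
    for wl, absorb in components:
        tau += np.log(10.0) * np.interp(wavelength_um, wl, absorb)
    # Step 3: apply the combined optical depth to the continuum SED.
    return continuum * np.exp(-tau)
```

With zero absorbance the continuum is returned unchanged, and a uniform absorbance of 1 attenuates it by exactly a factor of 10, as expected from the base-10 definition of absorbance.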
Figure~\ref{icenk_page} shows the web interface of the refractive index calculator. This tool requires the upload of an external file containing the absorbance spectrum, which is done via the ``Submit'' button. Next, the user is asked to provide the values of three physical parameters (ice thickness, $n_{670~\rm{nm}}$, $n_{\rm{subs}}$) and the stop criterion (MAPE). The calculations are started by clicking on the blue ``Start calculation'' button, and they take roughly 1$-$4 seconds for a spectrum with 18000 rows. The output data can be downloaded by clicking on the green ``Download the refractive index'' button. One of the outputs is called ``\texttt{lnk\_optool}''. This file contains the real and imaginary parts of the refractive index and is formatted to be used as input to the computational code \texttt{optool}\footnote{\url{https://github.com/cdominik/optool}} \citep{Dominik2021}, a command-line tool written in \texttt{Fortran} dedicated to deriving opacities of icy and bare grains.
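The MAPE stop criterion above is assumed here to be the standard mean absolute percentage error; the small sketch below is illustrative, not the calculator's implementation.

```python
import numpy as np

def mape(observed, model):
    """Mean absolute percentage error, in percent.

    Assumed standard definition: the iterative refractive-index fit
    would stop once mape(observed, model) drops below the user's value.
    """
    observed = np.asarray(observed, dtype=float)
    model = np.asarray(model, dtype=float)
    return 100.0 * np.mean(np.abs((observed - model) / observed))
```

A fit reproducing the data exactly gives a MAPE of zero; a uniform 10% residual gives a MAPE of 10.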
|
Title:
A Swift X-ray view of the SMS4 sample -- X-ray properties of 31 quasars and radio galaxies |
Abstract: We present Swift observations of 31 sources from the SMS4 catalog, a sample
of 137 bright radio sources in the Southern Hemisphere. All these sources had
no Chandra or XMM-Newton observations: 24 of these were observed with Swift
through a dedicated proposal in 2015, and data for the remaining seven were
retrieved from the Swift archive. The reduction and analysis of data collected
by the Swift X-ray Telescope (XRT) led to 20 detections in the 0.3--10 keV
band. We provide details of the X-ray emission in this band for these 20
detections, as well as upper limits for the remaining 11 SMS4 sources. When
statistics allowed, we investigated the extent of the X-ray emission and the
hardness ratio, and carried out a spectral analysis. We matched the 20 X-ray
detected sources with infrared (AllWISE, CatWISE2020) and optical (GSC 2.3.2,
DES DR2) catalogs to establish associations with infrared and optical sources,
and compared our results with previously published counterparts in these bands.
Requiring a detection in both the infrared and the optical bands to establish a
candidate counterpart for our X-ray detections, we obtain reliable counterparts
for 18 sources, while the remaining two sources need further investigation to
establish firm identifications. We find that ~35% of all the SMS4 sources lie
below the lower limit of 10.9 Jy for the flux density at 178 MHz. We present
the list of 56 SMS4 sources that in 2022 March remain to be observed in the
X-rays with narrow-field instruments.
| https://export.arxiv.org/pdf/2208.04763 | command.
\usepackage{tabularx,rotating}
\usepackage{xcolor}
\usepackage{tablefootnote}
\usepackage{ulem}
\newcommand{\vdag}{(v)^\dagger}
\newcommand\aastex{AAS\TeX}
\newcommand\latex{La\TeX}
\newcommand{\sw}{{Swift}}
\newcommand{\cha}{{Chandra}}
\newcommand{\xmm}{XMM-{Newton}}
\shorttitle{A \sw~X-ray view of the SMS4 sample}
\shortauthors{Maselli et al.}
\graphicspath{{./}{figures/}}
\begin{document}
|
Title:
Molecular Clouds as Gravitational Instabilities in Rotating Disks: A Modified Stability Criterion |
Abstract: Molecular gas disks are generally Toomre stable ($Q_T>$1) and yet clearly
gravitationally unstable to structure formation as evidenced by the existence
of molecular clouds and ongoing star formation. This paper adopts a 3D
perspective to obtain a general picture of instabilities in flattened rotating
disks, using the 3D dispersion relation to describe how disks evolve when
perturbed over their vertical extents. By explicitly adding a vertical
perturbation to an unperturbed equilibrium disk, stability is shown to vary
with height above the mid-plane. Near to $z$=0 where the equilibrium density is
roughly constant, instability takes on a Jeans-like quality, occurring on
scales larger than the Jeans length and subject to a threshold
$Q_M=\kappa^2/(4\pi G\rho)=1$ or roughly $Q_T\approx 2$. Far from the
mid-plane, on the other hand, stability is pervasive, and the threshold for the
total disk (out to $z=\pm\infty$) to be stabilized is lowered to $Q_T=1$ as a
consequence. In this new framework, gas disks are able to fragment through
partial 3D instability even where total 2D instability is suppressed. The
growth rates of the fragments formed via 3D instability are comparable to, or
faster than, Toomre instabilities. The rich structure in molecular disks on the
scale of 10s of pc can thus be viewed as a natural consequence of their 3D
nature and their exposure to a variety of vertical perturbations acting on
roughly a disk scale height, i.e. due to their situation within the more
extended galaxy potential, participation in the disk-halo flow, and exposure to
star formation feedback.
| https://export.arxiv.org/pdf/2208.01888 |
\defcitealias{GLB}{GLB}
\title{Molecular clouds as gravitational instabilities in rotating disks: a modified stability criterion
}
\author{Sharon E. Meidt}
\affiliation{Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, B-9000 Gent, Belgium}
\section{Introduction\label{sec:intro}}
\setcounter{footnote}{0}
One of the long-standing idiosyncrasies of star formation theory is that the molecular gas disks of galaxies where stars are formed -- and which are rich in multi-scale structure -- lie largely above the Toomre `Q' threshold \citep{toomre} used to predict instability and fragmentation in rotating disks \citep[see][]{leroy08,romeowiegert,romeomogotsi, elm11}. Yet there is no denying the appeal of a picture for star formation in which the initial stages involve passing the threshold for gravitational instability after which feedback from subsequent star formation returns the molecular medium to the brink of gravitational instability \citep[so-called `Q' regulation; e.g.][]{silk97, ko01, hopkins}.
The modern observation that molecular disks have $Q$$>$1 \citep[i.e.][]{kenn89, MK01, leroy08} has thus prompted a number of revisions of the criterion \citep[e.g.][]{romeo10,elm11,romeowiegert,grivgedalin, romeoagertz, agertz15}. These follow along the lines of studies that take into account magnetic fields \citep[e.g.][]{chandrasekhar54,balbus,elm87,elm94,gammie96,ko01}, cooling and the dissipative nature of turbulent gas \citep{elm89, gammie01}, and the role of non-axisymmetry on gas disk stability \citep{GLBb,JT66}. These all suggest, either phenomenologically, analytically or numerically, that instability and fragmentation may be possible above the $Q$=1 threshold.
In another school of thought, the Toomre instability in its traditional form is only indirectly relevant to the star formation process \citep{koda,elm11}, i.e. because the instability is really taking place in an inherently multi-component galaxy disk \citep{jogsolomon,bertinromeo,wangsilk94,ko07,romeowiegert} and the product of the instability under typical conditions in star-forming disks is large-scale structures that organize the molecular medium, rather than the molecular clouds \citep{elm11}. In this school, molecular cloud formation requires alternative channels \citep[see review by][and references therein]{dobbs14} and these must necessarily act on short timescales \citep{maclow17}, given the developing consensus from both simulations and observations that molecular clouds are rapidly destroyed by early stellar feedback together with galactic shear (see review by \citealt{dale15} and e.g. \citealt{kk018,meidt15,chevance20,chevance21,kim21}).
From another perspective, it may not be surprising that the Toomre criterion fails to be predictive of molecular structures that are 10s of parsecs in scale -- close to the disk scale height -- given that it was designed to describe stability to large-scale perturbations confined specifically to thin disks. Indeed, the perturbations that are typically considered are most often confined to two dimensions, approximating the disk's internal response (mediated by self-gravity) to some local impulse, again acting from within. These are the types of perturbations relevant for describing the stability of density wave perturbations \citep{linshu,toomre}.
In contrast to collisionless rotating stellar disks with smooth vertical profiles, however, molecular gas disks reveal their susceptibility to impulses and perturbations that are not necessarily 2D, restricted to the mid-plane, or tied to the disk's vertical extent, e.g. triggered by phase transitions, star formation feedback and the disk-halo flow \citep[e.g.][]{fraternalibinney06,walch2015,elm14}, or related to non-axisymmetric structures in the surrounding stellar disk or interaction with the local environment \citep[e.g. ram pressure stripping; ][]{vollmer,lee17}.
In this paper alternative vertical perturbations are proposed, designed with (molecular) gas disks embedded in thicker gas and stellar disks in mind, and used to derive an analytical condition for disk stability on scales near the disk scale height.
The paper centers on the 3D dispersion relation, which relates the evolution of the perturbation to the vertical and radial motions that develop from self-gravity, rotation and gas pressure. Readers primarily interested in the application of the new framework to the stability of molecular disks are pointed to $\S$~\ref{sec:3Dv2D}. The interested reader can find the details of the derivation of the 3D and 2D dispersion relations in $\S\S$~\ref{sec:framework} and \ref{sec:2dstability}. A summary of how disks are shown to behave in the presence of different types of perturbations (examined in detail in $\S\S$~\ref{sec:framework} and ~\ref{sec:2dstability}) is given at the end of $\S$~\ref{sec:2dstability}. There the reader can also obtain an overview of the two main modes of instability in 3D flattened rotating disks: 2D Toomre instability and the 3D instability endemic to the mid-plane identified in this work.
In more detail, after introducing the framework used to obtain solutions to the 3D linearized equations of motion in $\S\S$~\ref{sec:framework21} and \ref{sec:equationsofmotion}, the 3D dispersion relation is obtained and used to assess stability near to and far from the disk mid-plane in $\S$~\ref{sec:3ddispersionRelation}. %
Then in $\S$~\ref{sec:2dstability}, following \cite{toomre} and \cite{GLB} the 2D version of the dispersion relation is used to determine the conditions for instability (perturbation growth) in a number of scenarios. The threshold calculated by GLB in the case of infinite vertical perturbations with no phase variation is recovered in $\S$~\ref{sec:infGLB}. The impact of wave-like behavior on this threshold is considered in $\S$~\ref{sec:infwave}. Then in $\S$~\ref{sec:finWKB} the Toomre criterion is obtained using a vertical perturbation that is both wave-like, with a specific relation between the vertical and radial wavenumbers, and also extended relative to the disk scale height $h$. Finally, a modified, higher threshold is obtained for wave and non-wave perturbations near the mid-plane. To assess the prominence of fragments formed via gravitational instability in these different scenarios, in $\S$~\ref{sec:3Dv2D}
the growth rates of unstable 3D perturbations are calculated over a range of spatial scales.
\section{Three-dimensional instability in rotating disks}\label{sec:framework}
\subsection{The Basic Framework}\label{sec:framework21}
To examine the conditions that lead to gravitational instability in 3D rotating gas disks we adopt the idealized configuration proposed by \cite[][hereafter GLB]{GLB}, in which the disk is infinitely extended in the radial and vertical directions but significantly compressed in the vertical direction parallel to the axis of rotation. The gas in this disk is assumed to be approximately isothermal and to undergo non-uniform rotation at a rate $\Omega$ that depends on galactocentric radius.
With this framework,
we obtain the dispersion relation for density perturbations propagating in the gas disk by combining the continuity equation
\begin{equation}
\frac{\partial\rho}{\partial t}+\vect{\nabla}\cdot(\rho\vect{v})=0
\end{equation}
with solutions to the Euler equations of motion for the rotating disk plus a small perturbation,
\begin{equation}
\frac{\partial\vect{v}}{\partial t}+(\vect{v}\cdot\vect{\nabla})\vect{v} = -\frac{1}{\rho}\vect\nabla p-\vect{\nabla}\Phi. \label{eq:EOM}
\end{equation}
Here $\rho$ is the gas density, $p$ is the thermal plus turbulent gas pressure \citep[following][]{chandrasekhar51} and the gravitational potential $\Phi$ represents gas self-gravity together with a possible background potential defined by a surrounding distribution of gas, stars and dark matter.
Although it has become common to only consider the linearized equations of motion in 2D polar coordinates, adopting perturbations that involve no motion in the vertical direction, a full 3D treatment in cylindrical coordinates has also been previously considered \citep[i.e.][]{GLB}.
We follow the latter approach, and employ a number of the techniques common to 2D and 3D calculations. For one, the equations of motion are typically satisfied by adopting an $m$-mode perturbation of the form $\propto$ $\exp{i (m\phi-\omega t+\vect{k}\cdot\vect{r})}$ propagating in the direction $\vect{r}$ with wavenumber $\vect{k}$ where $\omega$ is the oscillation frequency of the mode \citep[e.g.][]{toomre,linshu,BT}.
The unstable growing modes can then be identified by the condition $\omega^2$$<$0.
The equations of motion are also typically simplified using the WKB (Wentzel-Kramers-Brillouin) approximation, in which the phase of the radial perturbation is assumed to be rapidly varying ($k R$$>>$1) so that the variation in the perturbation amplitude is negligible and terms of order $1/R$ are neglected in favor of those of order $k$.
The two distinguishing features of this work (described in more detail below) are a non-zero vertical velocity dispersion and the possibility of motion in the vertical direction described by vertical perturbations that explicitly include phase variation (wave-like behavior) and which are either infinite in extent or finite and described in the WKB approximation. Thus, the approach is most similar to \citetalias{GLB}, except that here a broader set of perturbations are considered and stability is examined from both the 2D and 3D perspectives. That is, before obtaining the 2D dispersion relation, this work uses the 3D dispersion relation
in a number of different regimes to make transparent predictions for the scales of instabilities and assess how instability varies with height above the mid-plane. Then, following \cite{toomre} and \citetalias{GLB}, the 2D version of the dispersion relation is obtained to identify the conditions for the overall stability of the disk.
\subsubsection{Vertical Motions in the Unperturbed Disk}
A major motivation for including the vertical dimension is to obtain a realistic description of molecular gas disks in which turbulent gas motions are three-dimensional and nearly isotropic and in which the embedded clouds are triaxial.
As will be shown in $\S$~\ref{sec:largeh1}, the 3D dispersion relation derived in this work approaches the 2D Lin-Shu dispersion relation in the limit $\sigma_z\rightarrow 0$.
For the more general scenario of interest here, both vertical and radial components of motion are allowed, each given the following 1D velocity dispersion \citep[i.e.][]{chandrasekhar51}
\begin{equation}
\sigma_{\rm i}^2=v_s^2+\sigma_{\rm turb,i}^2
\end{equation}
which combines the sound speed in the gas $v_s$ with turbulent motions $\sigma_{\rm turb,i}$ in direction $i$. Although we allow that $\sigma_z$$\neq$$\sigma_r$, velocity dispersions in the gas are generally assumed to be isotropic.
The turbulent motions in the gas are envisioned as arising from two main sources that combine to yield an effective (non-thermal) pressure that places the disk in dynamical equilibrium. Star formation feedback (plus turbulent dissipation) is assumed to set a base pressure $p_{\rm FB}$. This combines with the effective pressure $p_{\rm eff}$ set up by the averaged kinematic response of many individual fluid elements to the residual force, i.e. the difference between the gravitational force and the gradient of the baseline $p_{\rm FB}$. For a given $p_{\rm FB}$, $p_{\rm eff}$ is thus the pressure that maintains the gas disk in overall equilibrium. Thus, in what follows, dynamical equilibrium is applied even when feedback-driven turbulent pressure is zero or very small due to the absence of star formation.
For this equilibrium scenario, unless otherwise noted, the unperturbed vertical density distribution is envisioned as falling between the self-gravitating profile
\begin{equation}
\rho_0(z)=\rho_c \,\mathrm{sech}^2(z/h),\label{eq:vertdistrib1}
\end{equation}
where $h=\sigma_z/(2\pi G\rho_c)^{1/2}$, and
\begin{equation}
\rho_0(z)=\rho_c e^{-z^2/h^2}\label{eq:vertdistrib2}
\end{equation}
in the presence of a dominant external potential generated by the background distribution with density $\rho_b$, where $h=\sigma_z/(4\pi G\rho_b)^{1/2}$. The equilibrium vertical velocity dispersion associated with these profiles is constant (independent of $z$).
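As a quick numerical illustration of the two scale-height formulas above (not part of the paper; the input values are merely representative of a molecular disk), one can evaluate $h$ for a vertical dispersion of $8~\mathrm{km\,s^{-1}}$ and a mid-plane density of $1~M_\odot\,\mathrm{pc}^{-3}$:

```python
import numpy as np

# Gravitational constant in pc (km/s)^2 / Msun
G = 4.301e-3

def h_selfgrav(sigma_z, rho_c):
    """Scale height of the self-gravitating sech^2 profile,
    h = sigma_z / sqrt(2 pi G rho_c), in pc."""
    return sigma_z / np.sqrt(2.0 * np.pi * G * rho_c)

def h_background(sigma_z, rho_b):
    """Scale height in a dominant external potential,
    h = sigma_z / sqrt(4 pi G rho_b), in pc."""
    return sigma_z / np.sqrt(4.0 * np.pi * G * rho_b)

# Illustrative inputs: sigma_z = 8 km/s, rho = 1 Msun/pc^3
h_sg = h_selfgrav(8.0, 1.0)    # ~49 pc
h_bg = h_background(8.0, 1.0)  # smaller by a factor sqrt(2)
```

The self-gravitating estimate of roughly 50 pc is comparable to the 10s-of-pc structure scales discussed in the Introduction; the background-dominated profile is thinner by $\sqrt{2}$ at equal density.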
\subsubsection{Vertical and Radial Perturbations}\label{sec:perturbations}
Allowing the gas disk to have some thickness, we now wish to incorporate realistic perturbations that are compatible with the sorts of influences that a gas disk experiences. The goal with these perturbations is not to represent a specific process but rather to invoke a generic influence that is non-negligible away from the galactic mid-plane in a manner that satisfies the linearized equations of motion in 3D.
Thus, we adopt wave perturbations of the form
\begin{equation}
\Phi_1(R,\phi,z,t)=Re[\mathcal{F}(R,z)e^{i(m\phi-\omega t)}e^{ik R}e^{i k_z z}] \label{eq:pertex}
\end{equation}
or
\begin{equation}
\Phi_1(R,\phi,z,t)=i Im[\mathcal{F}(R,z)e^{i(m\phi-\omega t)}e^{ik R}e^{i k_z z}] \label{eq:pertex2}
\end{equation}
where the wavenumbers $k$=$2\pi/\lambda_r$ and $k_z$=$2\pi/\lambda_z$ describe the wavelengths of the perturbation in the radial and vertical directions, respectively.
As is typical, these perturbations are assumed to satisfy the WKB approximation in the radial direction, i.e. $\partial \mathcal{F}(R,z)/\partial R= \mathcal{F}(R,z)/R_{p}<<ik\mathcal{F}(R,z)$ where $R_{p}$ is the characteristic scale over which the amplitude of the perturbation varies in the radial direction. In practice this is adopted as the criterion $kR>>1$.
In the vertical direction, three different scenarios are considered. In the first two scenarios, the perturbations are assumed to be infinite, pervading every location in the disk, but either the wave nature is neglected in the vertical direction and so $k_z=0$ (the first scenario, as considered by GLB) or $k_z$ is assumed to be a constant and independent of height $z$ above the mid-plane (the second scenario). In both of these cases, the amplitude of the perturbation must vary faster in the vertical direction than the vertical density variation in the unperturbed disk, in order that the perturbation remains small with respect to $\rho_0$ everywhere ($\rho_1<<\rho_0$). Defining $Z_d=(d\textrm{ln}\Phi_0/dz)^{-1}$ and $Z_p$=$(d\textrm{ln} \mathcal{F}(R,z)/dz)^{-1}$, then for these infinite perturbations, $Z_p\lesssim Z_d$. As in $\S\S$ \ref{sec:infGLB} and \ref{sec:infwave} this can be cast in terms of the vertical gradients in the unperturbed and perturbed densities, i.e. $z_p=z_d$ with $z_d=(d\textrm{ln}\rho_0/dz)^{-1}$ and $z_p$=$(d\textrm{ln} \rho_1/dz)^{-1}$.
The wave type of these infinite perturbations is not examined in the WKB approximation, given that they entail rapid variation in the perturbed amplitude. Thus, any infinitely extended 3D perturbation considered in this work satisfies the most general form of the perturbed Poisson's equation
\begin{equation}
\Phi_1=\frac{4\pi G\rho_1}{-k_{\textrm{plane}}^2-k_z^2+T^2}
\end{equation}
where $k_{\textrm{plane}}^2\equiv k^2+m^2/R^2$ and
\begin{eqnarray}
T^2&\equiv&(\nabla_z^2\Phi_1)/\Phi_1+k_z^2\nonumber\\
&=&\nabla_z\left(\frac{1}{Z_p}\right)+\frac{1}{Z_p^2}+\frac{2ik_z}{Z_p}.
\end{eqnarray}
In the third scenario, wave perturbations are restricted to a finite extent above and below the mid-plane. Such perturbations can be studied without the requirement that $Z_p=Z_d$ (or $z_p=z_d$), since they are not at risk of becoming non-negligible as the unperturbed disk density drops towards $z\rightarrow\pm\infty$. These perturbations could vary arbitrarily in amplitude in the $z$ direction (as long as this variation is negligible with respect to the perturbation's phase variation, in the WKB approximation) and would thus not need to satisfy $k_z z_d>>1$, or necessarily share the overall variation of the unperturbed disk. The amplitude of the perturbation might instead vary much more slowly than $\rho_0$, perhaps tied to the density distribution of an embedding disk or a process active therein, for instance. In this scenario, the WKB approximation is invoked as $k_z z_p>>1$ (or $T\approx 0$).\footnote{These finite perturbations could be selected to also satisfy $k_z z_d>>1$ (although it would not be necessary by design). This might be equivalent to $k_z z>>1$, in the case of a flattened logarithmic potential $\Phi_0\propto \ln(R^2+z^2/q^2)$, for example, or $k_z h>>1$ in the case of an exponential vertical distribution with scale length $h$. }
In practice, finite (WKB) wave perturbations are described in what follows by introducing a truncation at some height $h_1$, above which the density becomes zero. (Note that, beyond $h$, $\rho_1$ is still assumed to be considerably less than $\rho_0$.) Such finite WKB perturbations are maximally flexible as they require only $k_z z_p>>1$ and not $k_z z_d>>1$ or $k_z h>>1$.
Introducing a truncation in the perturbation at some height $h_1$ comes with one important additional requirement.
To make the perturbation physical, it must satisfy both Poisson's equation and Laplace's equation beyond $h_1$. This introduces a strict boundary condition at the interface $\vert z\vert=h_1$, which places restrictions on the relationship between the vertical and radial perturbations. This condition is determined below by matching
the solution to Poisson's equation
with the solution to Laplace's equation at $z=h_1$ and requiring a similar matching of the gravitational force $\partial\Phi_1/\partial z$ at the interface (so that the gravitational force remains smooth). Here $h_1$ is taken to be the disk scale height or greater.
To satisfy Laplace's equation
\begin{equation}
-k^2\Phi_1-\frac{m^2}{R^2}\Phi_1+\frac{\partial^2\Phi_1}{\partial z^2}=0
\end{equation}
above and below the perturbation (at and beyond the vertical extent), the solution must have $k_z$=$i k_{\textrm{plane}}$. Outside the perturbed part of the disk the potential thus becomes a decaying function $\Phi_1$$\propto e^{-\vert k_{\textrm{plane}} z\vert}$.
Meanwhile, over the extent of the perturbation, Poisson's equation
\begin{equation}
-k_{\textrm{plane}}^2\Phi_1+\frac{\partial^2\Phi_1}{\partial z^2}=4\pi G\rho_1
\end{equation}
implies that
\begin{equation}
\Phi_1=\frac{-4\pi G\rho_1}{k_{\textrm{plane}}^2+k_z^2}\label{eq:pertpotential}
\end{equation}
in the WKB approximation, where again $k_{\textrm{plane}}^2=k^2+m^2/R^2$ and now $\partial^2\Phi_1/\partial z^2\approx -k_z^2\Phi_1$. For Lin-Shu perturbations confined to an infinitesimally thin sheet, in contrast, $\Phi_1=-2\pi G\Sigma_1/\vert k_\textrm{plane}\vert$ \citep{toomre}.
Below both even and odd density and potential wave perturbations are considered, although even perturbations are the focus of the remainder of the paper. (Odd wave perturbations can be shown to yield a consistent view of the main features of 3D disk instability.) As discussed later in $\S$~\ref{sec:largeh1}, the 2D Lin-Shu dispersion relation and Toomre criterion are retrieved adopting an even wave perturbation, which can be envisioned as an over-density in the galaxy mid-plane. This is the nominal perturbation for describing, e.g., the propagation of density waves in the disk. The amplitudes of all perturbations considered in this work (whether even or odd) are assumed to be even functions of distance from the mid-plane. Perturbations with amplitudes that are odd functions of $z$ have been shown to be stable by \citetalias{GLB}. \\
\noindent\underline{Even WKB Perturbations}\\
\vspace*{-.1in}
In the case that the potential perturbation in the disk is an even function $\Phi_1\propto\cos{k_zz}$ and symmetric about the mid-plane with extent $h_1$, we obtain the following matching conditions
\begin{eqnarray}
A e^{-k_{\textrm{plane}}h_1}&=&B \cos(k_z h_1)\nonumber\\
-k_{\textrm{plane}} A e^{-k_{\textrm{plane}}h_1}&=&-k_z B \sin(k_z h_1)\nonumber
\end{eqnarray}
at the interface $h_1$, where $A$ is the amplitude of the potential perturbation beyond $\vert z\vert=h_1$ and $B$ is the amplitude within $h_1$. %
These conditions yield the relation \citep[see also][]{grivgedalin}:
\begin{equation}
\arctan{\frac{k_{\textrm{plane}}}{k_z}} = k_z h_1 \label{eq:klinkeven}
\end{equation}
In the standard long wavelength scenario with
$k_{\textrm{plane}}<<k_z$, eq.(\ref{eq:klinkeven}) reduces to $k_{\textrm{plane}}/k_z\approx k_z h_1$. Taking $h_1$=$h$, this yields the approximation
\begin{equation}
\frac{\rho_0 k_{\textrm{plane}}^2}{k_{\textrm{plane}}^2+k_z^2}\approx\frac{\Sigma_0 k_{\textrm{plane}}}{2(1+k_{\textrm{plane}}h)}
\end{equation}
in terms of the unperturbed disk volume density $\rho_0$ and surface density $\Sigma_0\approx 2h\rho_0$.
This approximation is analogous to corrections incorporated into 2D dispersion relations (of single or multi-component disks) to account for weakened self-gravity when finite thickness is assumed \citep{toomre,vandervoort,jogsolomon,romeo92,ylin05,elm11}.
This paper is also interested in the short-wavelength regime where the above approximation is not valid. Here `short' is used in relation to the vertical wavelength, not the disk scale height. (Instability is indeed still restricted to below the Jeans length.) Most relevant in this scenario is the extent of the perturbation $h_1$. In the limit $k_{\textrm{plane}}\gg k_z$, the boundary condition in eq. (\ref{eq:klinkeven}) requires $k_z\approx(\pi/2) h_1^{-1}$ to lowest order, or that $\lambda_{R}\ll\lambda_z\approx h_1$. This scenario is consistent with $\lambda_z > h$
when we envision the perturbation's vertical edges at $h_1>h$, and it can thus be used to probe the instability regime in which $\lambda_R$ is brought down near the disk scale height. \\
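The transcendental matching condition of eq. (\ref{eq:klinkeven}) and both of its limiting behaviors can be checked numerically. The following sketch (an illustration in arbitrary units, not part of the derivation; the function name is ours) solves for $k_z$ by bisection:

```python
import math

def solve_even_kz(k_plane, h1):
    """Solve arctan(k_plane/k_z) = k_z*h1 for k_z by bisection.

    The left side decreases from pi/2 toward 0 as k_z grows while the
    right side increases linearly, so a unique root lies in (0, pi/(2*h1)).
    """
    f = lambda kz: math.atan(k_plane / kz) - kz * h1
    lo, hi = 1e-12, math.pi / (2 * h1)
    for _ in range(200):  # bisection: f(lo) > 0 > f(hi)
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h1 = 1.0

# Long-wavelength regime k_plane << k_z: the relation reduces to
# k_plane/k_z ~ k_z*h1, i.e. k_z ~ sqrt(k_plane/h1).
kz = solve_even_kz(1e-4, h1)
assert abs(kz - math.sqrt(1e-4 / h1)) / kz < 0.01

# Short-wavelength regime k_plane >> k_z: k_z approaches (pi/2)/h1.
kz_short = solve_even_kz(1e4, h1)
assert abs(kz_short - math.pi / (2 * h1)) < 1e-3
```

Both limiting behaviors quoted above are recovered to better than a percent.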
\noindent\underline{Odd WKB Perturbations}\\
\vspace*{-.1in}
In the case that the potential perturbation in the disk is an odd function, $\Phi_1\propto\sin{k_zz}$, the matching conditions at $h_1$ above the plane become
\begin{eqnarray}
A e^{-k_{\textrm{plane}}h_1}&=&B \sin(k_z h_1)\nonumber\\
-k_{\textrm{plane}} A e^{-k_{\textrm{plane}}h_1}&=&k_z B \cos(k_z h_1)\nonumber
\end{eqnarray}
requiring that
\begin{equation}
\arctan{\frac{-k_z}{k_{\textrm{plane}}}} = k_z h_1. \label{eq:klinkodd1}
\end{equation}
Similarly, below the plane at $-h_1$, the boundary condition requires
\begin{equation}
\arctan{\frac{k_z}{k_{\textrm{plane}}}} = k_z h_1. \label{eq:klinkodd2}
\end{equation}
Thus in the long-wavelength scenario $\vert k_{\textrm{plane}}\vert\ll\vert k_z\vert$, allowed perturbations have a vertical wavenumber $k_z\sim(\pi/2)h_1^{-1}$ and radial wavenumbers are restricted to $\vert k_{\textrm{plane}}\vert\ll 1/h_1$, or $\vert k_{\textrm{plane}}\vert<1/h$ when $h_1>h$. Perturbations in the short-wavelength limit $\vert k_{\textrm{plane}}\vert\gg\vert k_z\vert$ have $\vert k_{\textrm{plane}}\vert\approx1/h_1$ and $k_z\ll1/h_1$, which again can correspond to the case $\vert kh\vert\lesssim1$ and $\vert k_zh\vert\ll1$ where $h_1>h$. \\
\subsection{Obtaining the Conditions for Stability in 3D}\label{sec:equationsofmotion}
\subsubsection{Overview}
This section introduces the 3D perturbations from the previous section (corresponding to infinite non-periodic perturbations, infinite waves and finite WKB waves) into the linearized 3D equations of motion to solve for the perturbed motions in the radial, azimuthal and vertical directions. The 3D dispersion relation is then obtained using the continuity equation, which couples these motions to the time evolution of the perturbed density.
Using either the 3D version of the dispersion relation derived below ($\S$~\ref{sec:3ddispersionRelation}) or the 2D version ($\S$~\ref{sec:2dstability}), the conditions for stability can be easily determined
according to the expectation that
a stable, non-growing mode (with real $\omega$) must have $\omega^2>0$. (Thus the line of stability is usually taken to be $\omega^2=0$; e.g. \citealt{BT}.) In the interest of diagnosing the basic stability of disks to 3D perturbations, section~\ref{sec:3ddispersionRelation} and those that follow examine in detail the axisymmetric scenario with $m=0$, in which case $k_\textrm{plane}=k$. The calculations presented in what immediately follows, though, adopt an arbitrary $m$.
\subsubsection{Motions in the Plane and in the Vertical Direction}\label{sec:motions}
To describe motions in our rotating gas disk, we adopt the Euler equations of motion in cylindrical coordinates, with $z$ oriented parallel to the axis of rotation. We then introduce a small perturbation.
Writing all quantities as the sum of an unperturbed component and a small perturbation (i.e. $\rho=\rho_{0}+\epsilon\rho_{1}$ and $v_R=v_{R,0}+\epsilon v_{R,1}$, etc., where $\epsilon$ is small) and keeping only terms to first order in small quantities, the linearized versions of the equations of motion are obtained \citep[see][]{BT}. These are satisfied in this work by the perturbations introduced in $\S$~\ref{sec:perturbations} with the form $\Phi_1(R,\phi,z,t)=\Phi_a(R,z) e^{i(m\phi-\omega t)}$ where
$\Phi_a(R,z)=\mathcal{F}(R,z) e^{ikR+ik_z z}$ and
the radial gradient of $\mathcal{F}(R,z)$ is neglected in the WKB approximation. Through Poisson's equation the density perturbation has a similar dependence, i.e. $\rho_{1}=\rho_{a}(R,z) e^{i(m\phi-\omega t)}$ where $\rho_{a}(R,z)=\mathcal{R}(R,z)e^{ikR+ik_z z}$. Solutions to the linearized equations of motion (eq. (\ref{eq:EOM})) thus also have the form $v_{R,1}=v_{R,a}(R,z) e^{i(m\phi-\omega t)}$, $v_{\phi,1}=v_{\phi,a}(R,z) e^{i(m\phi-\omega t)}$ and $v_{z,1}=v_{z,a}(R,z) e^{i(m\phi-\omega t)}$, where $v_{R,a}(R,z)$, $v_{\phi,a}(R,z)$ and $v_{z,a}(R,z)$ are all $\propto e^{ikR+ik_z z}$.
Substituting the density and potential perturbations into the perturbed radial and azimuthal equations of motion, it can be shown \citep[adopting the convention in][]{BT} that
\begin{eqnarray}%
v_{R,a}&=&-\frac{(\Phi_a+\sigma^2\frac{\rho_a}{\rho_0})}{\Delta}
\left(k(\omega-m\Omega)+i\frac{2m\Omega}{R}\right)\nonumber\\
&-&v_{z,a}\frac{2\Omega}{\Delta} \frac{dV_c}{dz}\label{eq:radvelocity}
\end{eqnarray}
\begin{eqnarray}
v_{\phi,a}&=&-\frac{(\Phi_a+\sigma^2\frac{\rho_a}{\rho_0})}{\Delta}\left(2Bik+\frac{m(\omega-m\Omega)}{R}\right)\nonumber\\
&+&iv_{z,a}\frac{(\omega-m\Omega)}{\Delta}\frac{dV_c}{dz}\label{eq:phivelocity}
\end{eqnarray}
where $V_c=\Omega R$ and
\begin{eqnarray}
B&=&-\Omega-\frac{1}{2}R\frac{d\Omega}{dR}\\\nonumber
\Delta&=&\kappa^2-(m\Omega-\omega)^2\\\nonumber
\kappa^2&=&-4B\Omega.\nonumber
\end{eqnarray}
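As a quick consistency check on these definitions (an illustration only; the power-law rotation curve is our assumption), $\Omega\propto R^{-q}$ gives $B=-(1-q/2)\Omega$ and hence $\kappa^2=-4B\Omega=(4-2q)\Omega^2$, recovering the familiar $\kappa=\sqrt{2}\,\Omega$ for a flat rotation curve ($q=1$) and $\kappa=\Omega$ for a Keplerian one ($q=3/2$):

```python
import math

def oort_B(Omega, dOmega_dR, R):
    """B = -Omega - (R/2) dOmega/dR, the convention used in the text."""
    return -Omega - 0.5 * R * dOmega_dR

def kappa_sq(Omega, dOmega_dR, R):
    """Epicyclic frequency squared, kappa^2 = -4*B*Omega."""
    return -4.0 * oort_B(Omega, dOmega_dR, R) * Omega

# Power-law rotation curve Omega = Omega0*(R/R0)^(-q): kappa^2 = (4-2q)*Omega^2.
Omega0, R0, R = 1.0, 1.0, 2.0
for q, expect in [(1.0, 2.0), (1.5, 1.0), (0.0, 4.0)]:
    Omega = Omega0 * (R / R0) ** (-q)
    dOmega = -q * Omega / R
    assert abs(kappa_sq(Omega, dOmega, R) - expect * Omega**2) < 1e-12
# q=0 is solid-body rotation, for which kappa = 2*Omega as expected.
```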
In the vertical direction, the linearized equation of motion
\begin{equation}
i(-\omega+m\Omega)v_{z,1}=-\nabla\Phi_1-\sigma^2\left(\frac{\nabla\rho_1}{\rho_0}\right)
\end{equation}
implies
\begin{equation}
v_{z,a}=-\frac{k_z(\Phi_a+\sigma^2\frac{\rho_a}{\rho_0})}{(m\Omega-\omega)}+i\frac{(\nabla \mathcal{F}+\sigma^2\frac{\nabla \mathcal{R}}{\rho_0})}{(m\Omega-\omega)}e^{ikR+ik_z z},
\label{eq:vertvelocity}
\end{equation}
where, at this stage, no assumption has been made about the relative sizes of the vertical perturbation's phase and amplitude variations. Different choices for $k_z$ in relation to the perturbation amplitude and the unperturbed disk will be examined later in this work.
These expressions for $v_{R,a}$, $v_{\phi,a}$ and $v_{z,a}$ are based on the assumption of equilibrium in the unperturbed disk, such that $v_{R,0}=0$, $v_{z,0}=0$ and $v_{\phi,0}\approx(R\, d\Phi_0/dR)^{1/2}=\Omega R$, neglecting the pressure term $(\partial p_{0,\phi}/\partial R)/\rho_0$ since $\sigma_{\phi,0}\ll\Omega R$.
Adopting the WKB approximation in the radial direction leads to further simplification. This work focuses on scenarios in which the radial variation in the amplitude of the potential perturbation is comparable to (and no less than) the radial gradient in the unperturbed disk (as discussed in $\S$~\ref{sec:perturbations}). As is typical, then, the factors proportional to $1/R$ in eqs. (\ref{eq:radvelocity}) and (\ref{eq:phivelocity}) are neglected relative to those that are proportional to $k$ \citep[e.g.][]{BT}. The WKB condition $k R_p\gg1$ ($\S$~\ref{sec:perturbations}) is satisfied by $kR\gg1$ assuming $k$ increases towards small $R$.
The terms proportional to $v_{z,1}$ in the expressions for $v_{R,1}$ and $v_{\phi,1}$ are similarly neglected in the set-up of interest here, since the rotational lag
\begin{equation}
\frac{dV_c}{dz}\approx\frac{1}{2\Omega}\frac{d}{dz}\frac{d}{dR}\Phi_{0}
\end{equation}
(again assuming that the radial pressure gradient is negligible)
contains a factor considerably smaller than $k\Phi_a$.\footnote{Appendix \ref{sec:rotationallagappendix} identifies the precise set of perturbations for which the lag term is negligible.}
With an identical vertical perturbation specifically in the WKB approximation, \cite{grivgedalin} arrive at a different expression for the perturbed vertical velocity $v_{z,1}$, as the disk in their scenario of interest is out of hydrostatic equilibrium. This introduces factors proportional to $v_{z,0}$, such that the numerator in eq. (\ref{eq:vertvelocity}) includes a term proportional to $\nu$, the vertical epicyclic frequency.
\subsection{The 3D Dispersion Relation}\label{sec:3ddispersionRelation}
Next we consider the perturbed continuity equation in cylindrical coordinates
\begin{equation}
i(m\Omega-\omega)\rho_1+\frac{1}{R}\frac{d}{d R}(R\rho_0v_{R,1})+\frac{im\rho_0}{R}v_{\phi,1}+\frac{d}{dz} (\rho_0 v_{z,1})=0,\label{eq:fullcontinuity}
\end{equation}
including the vertical term, using the fact that $v_{z,0}$=0 for the continuity-obeying equilibrium unperturbed disk and keeping only terms lowest order in the perturbation.
Adopting the WKB approximation with the assumption that $kR\gg1$\footnote{This assumption is weakened in Appendix \ref{sec:nonaxisym} to examine the conditions for stability in the presence of perturbations that are non-axisymmetric in the plane.}
leads to the simplification %
\begin{equation}
i(m\Omega-\omega)\rho_1+\rho_0\frac{\partial v_{R,1}}{\partial R}+\rho_0\frac{\partial v_{z,1}}{\partial z}+v_{z,1}\frac{\partial \rho_0}{\partial z}=0.\label{eq:3ddispersion}%
\end{equation}
(The $v_{\phi,1}$ term is small compared to the other two velocity terms in the WKB approximation and is dropped.)
Before considering the generic case in which $k_z\neq0$ and $k\neq0$ ($\S$~\ref{sec:3dmidplane}), below we consider radial and vertical perturbations separately. \\
\subsubsection{Vertical-only Perturbations ($k_z\neq0$, $k=0$)}\label{sec:verticalonly}
Now taking $z_d=(\partial \ln\rho_0/\partial z)^{-1}$,
substituting in the expression for $v_{z,1}$ and setting $k=0$, eq. (\ref{eq:3ddispersion}) becomes
\begin{eqnarray}
0&=&(m\Omega-\omega)^2\rho_1\nonumber\\
&-&k_z^2\left(\Phi_1\rho_0+\sigma^2\rho_1\right)%
+i k_z\Phi_1\rho_0\left(\frac{1}{z_d}\right)\nonumber\\
&+&\mathcal{A}e^{i(m\Omega-\omega)t}\label{eq:vertonly2}
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{A}&=&e^{ikR+ik_zz}\Big[\rho_0\nabla^2\mathcal{F}+\sigma^2\nabla^2\mathcal{R}\nonumber\\
&+&\left(\frac{1}{z_d}\right)\rho_0\nabla \mathcal{F}\nonumber\\
&+&2ik_z\left(\rho_0\nabla \mathcal{F}+\sigma^2\nabla \mathcal{R}\right)\Big]
\end{eqnarray}
In the non-wave scenario ($k_z=0$) the dispersion relation reads
\begin{eqnarray}
0&=&(m\Omega-\omega)^2\rho_1+\Big[\rho_0\nabla^2\mathcal{F}+\sigma^2\nabla^2\mathcal{R}\nonumber\\
&+&\left(\frac{1}{z_d}\right)\rho_0\nabla \mathcal{F}\Big]e^{ikR+i(m\Omega-\omega)t}
\label{eq:vertGLB}
\end{eqnarray}
whereas in a WKB scenario,
\begin{eqnarray}
0&=&(m\Omega-\omega)^2\rho_1-k_z^2\left(\Phi_1\rho_0+\sigma^2\rho_1\right)\nonumber\\
&+&i k_z\Phi_1\rho_0\left(\frac{1}{z_d}\right)\label{eq:vertWKB}
\end{eqnarray}
The imaginary, out-of-phase term in eq. (\ref{eq:vertWKB}) is notably negligible when $k_z z_d\gg1$.
This would be equivalent to the condition required to keep an infinite perturbation consistent with the WKB approximation (using the requirement $z_p=z_d$ to keep the perturbation small with respect to the unperturbed disk as $\vert z\vert~\rightarrow~\infty$). Likewise, the second factor in the term in square brackets in eq. (\ref{eq:vertGLB}) drops when $k_z z_d\gg1$, considerably simplifying the expression. %
However, in the case of the unperturbed Gaussian vertical profile (for which $1/z_d=z/h^2$), $k_z z_d\gg1$ applies only very near the galactic mid-plane, i.e. where $z\ll(k_z h)h$. Thus, both the in-phase and out-of-phase terms are relevant for the overall evolution of extended perturbations.
Indeed, in the special case highlighted in the next section, in which the perturbation extends to $\pm\infty$ and tracks the decrease in $\rho_0$ with increasing $z$ (as in our equilibrium disks), integration over the vertical direction from $-\infty$ to $\infty$ yields zero when all terms in eqs. (\ref{eq:vertonly2}), (\ref{eq:vertGLB}) and (\ref{eq:vertWKB}) are included. In other words, $\rho_0v_{z,1}\vert_{-\infty}^{\infty}=0$. This is the `no mass flux at infinity' requirement invoked by GLB. As a result, the vertical-only 2D dispersion relation in this `no mass flux' scenario reads $(m\Omega-\omega)^2=0$, signifying that the vertical direction is neutrally stable to infinite axisymmetric perturbations and stable to all non-axisymmetric perturbations (since $\omega^2=m^2\Omega_p^2$ when $k=0$).
This neutral stability characteristic of the vertical direction is leveraged when calculating the 2D dispersion relation later in $\S$~\ref{sec:2dstability}, following GLB. Below, it will first be useful to examine how the terms in eqs. (\ref{eq:vertGLB}) and (\ref{eq:vertWKB}) proportional to $1/z_d$ contribute to this neutral vertical stability. \\
\noindent\underline{Stability Away from the Mid-plane}\\
\vspace*{-.1in}
In the case of the non-wave perturbation, the third term in eq. (\ref{eq:vertGLB}) dominates away from the mid-plane and
\begin{equation}
(-\omega+m\Omega)^2\rho_1\approx-\frac{\rho_0}{z_d}\nabla \Phi_1
\end{equation}
where $\Phi_1=\mathcal{F}$. This corresponds to stability ($\omega^2>0$) everywhere since the right-hand side is always positive when $\nabla \Phi_1>0$ and $z_d<0$, as it is in the equilibrium disks under consideration.
Stability far above the mid-plane is also a feature of periodic wave perturbations. Consider eq. (\ref{eq:vertWKB}) in a scenario in which the perturbation is extended but finite, for example.\footnote{The perturbation must be finite or it will not satisfy the WKB approximation assumed for the present exercise. This requirement is not invoked in other sections unless noted.}
In the regime $k_z z_d\ll1$, or at heights much larger than $h$ (far away from the mid-plane), the vertical-only continuity equation reads
\begin{equation}
-\frac{\omega^2 (k^2+k_z^2-T^2)h^2}{f_g\nu^2k_z z}=\cot(k_z z+kR+(m\Omega-\omega)t)
\end{equation}
adopting our Gaussian vertical density profile and letting $\nu^2=4\pi G\rho_0 f_g$.
Since the arccotangent of the left-hand side is $\pm\pi/2$ for all $z\gg h$, $\omega$ is always real. It remains real as $z$ approaches nearer to $h$, where the arccotangent of the left-hand side is a small positive or negative quantity. \\
\noindent\underline{(In)stability Near the Mid-plane}\\
\vspace*{-.1in}
The stability away from the mid-plane is in contrast to the situation very near the mid-plane. In the case of wave perturbations, the dispersion relation where $k_z z_d\gg1$ (and adopting $m=0$ for simplicity) becomes:
\begin{equation}
\omega^2=-4\pi G\rho_0\frac{k_z^2}{k_z^2-T^2}+k_z^2\sigma^2\label{eq:3dmid}
\end{equation}
or
\begin{equation}
\omega^2=-4\pi G\rho_0+k_z^2\sigma^2\label{eq:3dmidlim}
\end{equation}
in the limit $k_z\gg T$. (The stability of non-wave perturbations near the mid-plane is examined in $\S$~\ref{sec:3dmidplane}.)
In this situation, perturbations have the opportunity for growth as long as $k_z^2 <(4\pi G\rho_0)/\sigma_z^2=k_J^2$. In other words, very near to the roughly constant-density mid-plane,
instability in the vertical direction proceeds in a Jeans-like manner, unaffected by rotation \citep{chandrasekhar54} and restricted to similar scales. \\
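The Jeans-like character of this mid-plane instability can be illustrated numerically (arbitrary units; a sketch, not part of the text's derivation), using the limiting relation $\omega^2=-4\pi G\rho_0+k_z^2\sigma^2$:

```python
import math

G = 1.0  # units in which 4*pi*G*rho_0 sets the frequency scale

def omega_sq_vertical(kz, rho0, sigma):
    """Mid-plane WKB dispersion relation: omega^2 = -4*pi*G*rho0 + kz^2*sigma^2."""
    return -4 * math.pi * G * rho0 + kz**2 * sigma**2

rho0, sigma = 1.0, 1.0
kJ = math.sqrt(4 * math.pi * G * rho0) / sigma  # Jeans wavenumber

# Perturbations below the Jeans wavenumber grow (omega^2 < 0) ...
assert omega_sq_vertical(0.5 * kJ, rho0, sigma) < 0
# ... while shorter-wavelength ones are pressure-stabilized.
assert omega_sq_vertical(2.0 * kJ, rho0, sigma) > 0
# The fastest growth rate (at kz -> 0) is sqrt(4*pi*G*rho0), the usual
# free-fall-like rate, unaffected by rotation.
assert abs(math.sqrt(-omega_sq_vertical(0.0, rho0, sigma))
           - math.sqrt(4 * math.pi * G * rho0)) < 1e-12
```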
\noindent\underline{Total Neutral Stability}\\
\vspace*{-.1in}
As exemplified by the `no-mass-flux at infinity' case described above and considered in detail in $\S\S$ \ref{sec:infGLB} and \ref{sec:infwave}, the combination of instability near the plane with stability away from the plane results in a neutrally-stable disk.\footnote{It is worth noting that the vertical variation in $\omega$ discussed here has not been made explicit in writing eqs. (\ref{eq:vertonly2}), (\ref{eq:vertGLB}) and (\ref{eq:vertWKB}). This corresponds to the assumption that the perturbation's amplitude and/or phase variations are faster, i.e. $\vert T\vert \gg(d\omega/dz) t$ or $\vert k_z\vert \gg(d\omega/dz) t$ everywhere.} The disk is thus neither as unstable as predicted where $k_z z_d\gg1$ nor as stable as predicted where $k_z z_d\ll1$.
Still, eq. (\ref{eq:3dmid}) does suggest that an avenue to avoid stability would be to perturb the disk over a limited extent, very near the mid-plane.
As demonstrated later in $\S$ \ref{sec:finWKB}, perturbations extending a finite distance above and below $z=0$ are defined by the non-zero mass flux they entail over their extent, with the consequence that the 2D dispersion relation retains terms associated with the vertical direction. As the perturbation's height decreases, stability approaches the behavior predicted in the limit $k_z z_d>>1$ near the mid-plane.
\subsubsection{Radial-only Perturbations ($k\neq$0, $k_z$=0)}\label{sec:radialonly}
\indent Setting $m$=0 and $k_z$=0 in eq. (\ref{eq:3ddispersion}) and substituting in the expression for $v_{r,1}$, the axisymmetric radial-only dispersion relation reads
\begin{equation}
\omega^2=\kappa^2-4\pi G\rho_0\frac{k^2}{k^2-T^2} +\sigma_r^2k^2\label{eq:raddisp}
\end{equation}
or
\begin{equation}
\omega^2=\kappa^2-4\pi G\rho_0 +\sigma_r^2k^2\label{eq:raddisplim}
\end{equation}
in the limit that $k\gg T$.
This is a restatement of the finding that wave perturbations perpendicular to the axis of rotation (in this case, the radial direction) can be stabilized by rotation, since the scales of instability are pushed over the Jeans length. This was first found by \cite{chandrasekhar54} in the case of uniform rotation, then generalized by \citet{BelSchatzman} for non-uniform rotation (as considered here) and then confirmed to apply in the presence of vertical flattening \citep{safronov}.
This scenario resembles the case of `no vertical mass flux at infinity' perturbations and 3D perturbations near the mid-plane, and so a discussion of the instability scale is postponed until $\S\S$~\ref{sec:3dmidplane} and~\ref{sec:infGLB}. For now it should be noted that simply omitting a vertical perturbation very clearly does not retrieve the 2D Lin-Shu dispersion relation.\footnote{As discussed later in $\S$~\ref{sec:largeh1}, retrieving the Lin-Shu dispersion relation in the long-wavelength limit starting from the 3D dispersion relation requires taking the limit in which the disk is an infinitesimally thin sheet with $\sigma_z\rightarrow 0$.} Instabilities instead have a Jeans-like quality
even in the presence of rotation. (Eq. [\ref{eq:raddisp}] indeed approaches the condition for Jeans instability in the limit $\kappa\rightarrow0$.)
\cite{jogsolomon} pointed out this resemblance to Jeans instability by taking the 2D dispersion relation (see eq. (\ref{eq:linshu}), $\S$~\ref{sec:largeh1}) in the small wavelength (large $k$) limit, opposite to the standard long wavelength regime. As examined in $\S$~\ref{sec:3dmidplane} (and later in $\S$~\ref{sec:smallh1}), this 3D quality signifies a change in the disk stability threshold compared to the value required for stability to Lin-Shu density-wave perturbations.
\subsubsection{An Assessment of 3D Stability Near the Mid-plane}\label{sec:3dmidplane}
This section describes stability and fragmentation from a fully 3D perspective embedded within the gas disk, near to the galactic mid-plane. Later this view is traded for a 2D perspective that can be used to assess the overall stability of the disk (including all material out to $z=\pm\infty$).
Including both radial and vertical perturbations (with $k_z\neq0$ and $k\neq0$) and substituting in the expression for $v_{r,1}$, equation (\ref{eq:3ddispersion}) can be rewritten as
\begin{eqnarray}
&\rho_1&\bigg[(m\Omega-\omega)+\left(-\frac{4\pi G\rho_0}{k_\textrm{plane}^2+k_z^2-T^2}+\sigma_r^2\right)\frac{k^2(m\Omega-\omega)}{\Delta}\nonumber\\
&+&\frac{C_z}{(m\Omega-\omega)}\bigg]=0, \label{eq:3dstep1}
\end{eqnarray}
where $C_z$ represents vertical stability and is equated with either the second term on the right-hand side of eq. (\ref{eq:vertGLB}) or the sum of the last two terms on the right side of eq. (\ref{eq:vertWKB}), specifically including both in-phase and out-of-phase terms. Notice that when $C_z$ is positive (negative) in eqs. (\ref{eq:vertGLB}) or (\ref{eq:vertWKB}), the vertical direction is unstable (stable).
The 3D dispersion relation in eq. (\ref{eq:3dstep1}) is quadratic in $\omega^2$, with solutions
\begin{equation}
\omega^2=\frac{\omega_{min}^2}{2}\left(1\pm\sqrt{1+\frac{4C_z\kappa^2}{\omega_{min}^4}}\right)
\end{equation}
where
\begin{equation}
\omega_{min}^2=\kappa^2+\left(\frac{-4\pi G \rho}{k^2+k_z^2-T^2}+\sigma_r^2\right)k^2
-C_z
\end{equation}
in the case that $m$=0.
It is straightforward to show that the condition $\omega^2<0$ can be met when both
\begin{equation}
\omega_{min}^2<0 \label{eq:3dstability}
\end{equation}
and $C_z>0$, corresponding to vertical instability.
(When $\omega_{min}^2$ is positive, there is a limited range of conditions under which one of the two branches of $\omega^2$ can still become negative. But we neglect such a scenario here, considering that the criterion in eq. (\ref{eq:3dstability}) is readily met.)
Notice that when $\omega_{min}^2<0$ and $C_z>0$, the unstable branch satisfies $\omega^2\leq\omega_{min}^2$, so that $\omega_{min}^2$ provides a conservative bound on the growth rate.
In what follows, eq. (\ref{eq:3dstability}) is used as the condition for instability, with the understanding that growth may happen faster than indicated by $\omega_{min}$. Below, conditions on $k$ (and/or $k_z$) for instability specifically near the mid-plane are obtained from eq. (\ref{eq:3dstability}) in the case of both wave and non-wave 3D perturbations. \\
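The branch structure described above can be verified directly from the quadratic; the following sketch (with hypothetical dimensionless values for $\kappa^2$, $C_z$ and the pressure/self-gravity term) confirms that a growing mode arises when $C_z>0$ together with $\omega_{min}^2<0$, and that this mode can grow faster than $\omega_{min}$ indicates:

```python
import math

def omega_sq_branches(kappa2, Pk2, Cz):
    """Two branches of omega^2 (m=0) from the quadratic
    W^2 - W*omega_min^2 - Cz*kappa^2 = 0, where
    omega_min^2 = kappa^2 + Pk2 - Cz and Pk2 stands in for the combined
    pressure/self-gravity term (-4*pi*G*rho/(k^2+kz^2-T^2)+sigma_r^2)*k^2."""
    wmin2 = kappa2 + Pk2 - Cz
    disc = math.sqrt(wmin2**2 + 4 * Cz * kappa2)
    return 0.5 * (wmin2 - disc), 0.5 * (wmin2 + disc), wmin2

# Vertically unstable (Cz > 0) with omega_min^2 < 0: one branch of omega^2
# is negative (a growing mode) and lies below omega_min^2, so growth can be
# faster than the conservative omega_min estimate.
lo, hi, wmin2 = omega_sq_branches(kappa2=1.0, Pk2=-3.0, Cz=0.5)
assert wmin2 < 0 and lo < wmin2 < 0 < hi

# Vertically stable (Cz < 0) with omega_min^2 > 0: both branches are
# positive, i.e. purely oscillatory behavior.
lo2, hi2, _ = omega_sq_branches(kappa2=1.0, Pk2=1.0, Cz=-0.5)
assert lo2 > 0 and hi2 > 0
```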
\noindent\underline{Wave Perturbations}\\
\vspace*{-.1in}
For wave perturbations near the mid-plane (i.e. in the limit $k_z z_d\gg1$) that are also assumed to locally satisfy the WKB approximation ($k_z\gg T$) for illustration purposes, the 3D dispersion relation is written
\begin{equation}
\kappa^2+\left(\frac{-4\pi G \rho_c}{k^2+k_z^2}+\sigma_r^2\right)k^2
+\left(\frac{-4\pi G \rho_c}{k^2+k_z^2}
+\sigma_z^2\right)k_z^2<0. \label{eq:3dstabilitywave}
\end{equation}
using that
\begin{equation}
C_z=
-k_z^2\left(-\frac{4\pi G\rho_c}{k_\textrm{plane}^2+k_z^2}+\sigma_z^2\right)\label{eq:czmidplane}
\end{equation}
in this limit (see previous section) with $\rho_c=\rho(z\rightarrow 0)$.
Now, setting $k_S^2=k^2+k_z^2$ (with $S$ denoting `shell'), instability is found to require
\begin{equation}
\kappa^2-4\pi G \rho_c+\sigma^2k_S^2+k_z^2(\sigma_z^2-\sigma_r^2)<0 \label{eq:nearshellmidplane}
\end{equation}
or
\begin{equation}
k_S^2<k_J^2\left(1-\frac{\kappa^2}{4\pi G\rho_c}\right)
\end{equation}
assuming that the velocity dispersion is isotropic ($\sigma_z=\sigma_r=\sigma$).
Stability within the roughly constant-density region straddling the mid-plane
thus takes on a Jeans-like quality, though rotation succeeds in increasing the size of stable fragments.
Rotation can also eventually suppress instabilities above a threshold
\begin{equation}
Q_M\equiv\kappa^2/(4\pi G\rho_c)>1.
\end{equation}
It is notable that the form of this threshold resembles the 3D threshold $\kappa^2/(\pi G\rho_c)\approx 0.3$ determined for the overall disk by GLB more closely than it resembles $Q_T$. As discussed in detail later in $\S$~\ref{sec:infGLB}, the difference in the numerical value of the threshold is a consequence of the vertical extent of the perturbed region.
The threshold $Q_M=1$ also corresponds to a higher stability threshold than $Q_T=1$. In the case of weakly self-gravitating disks (with $\Sigma=\rho_c\sqrt{2\pi}h$),
\begin{equation}
Q_M=\frac{\pi\alpha^2f_gQ_T^2}{8}, \label{eq:QMQT}
\end{equation}
while in the case of fully self-gravitating disks (with $\Sigma=\rho_c2h$)
\begin{equation}
Q_M=\frac{\alpha^2f_gQ_T^2}{4}.
\end{equation}
Thus, $Q_M=1$ is equivalent to $Q_T\approx 2$, signifying that disks are more susceptible to partial 3D instability (endemic to the mid-plane) than to the total destabilization described by the 2D Toomre criterion, as discussed more in $\S$~\ref{sec:growthrates}.
It is also noteworthy that, as a criterion specifically on the radial $k$ wavenumber, eq. (\ref{eq:3dstabilitywave}) implies
\begin{equation}
k^2<k_J^2\left(1-Q_M-k_z^2h^2\right)\label{eq:midplane3Djeans}
\end{equation}
with a stability threshold $Q_M=1-k_z^2h^2$. Here the radial Jeans wavenumber is defined by $k_J^2=4\pi G\rho/\sigma_r^2$. Since $h$ is roughly equivalent to the effective Jeans length (applicable in the presence of thermal and non-thermal motion), it is only when the disk is perturbed on scales larger than the Jeans length that radial fragmentation is seeded. That is, the largest perturbations, with $k_zh\ll1$, correspond to the highest threshold and thus most easily seed radial fragmentation. \\
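The criterion of eq. (\ref{eq:midplane3Djeans}) can be evaluated numerically; this sketch (arbitrary units, function name ours) shows that the widest unstable radial range occurs for $k_zh\ll1$, and that instability shuts off once $Q_M\geq1-k_z^2h^2$:

```python
import math

def unstable_radial_range(kJ, QM, kz, h):
    """Largest unstable radial wavenumber from k^2 < kJ^2*(1 - QM - kz^2*h^2);
    returns 0 if the criterion admits no unstable k."""
    rhs = kJ**2 * (1.0 - QM - kz**2 * h**2)
    return math.sqrt(rhs) if rhs > 0 else 0.0

kJ, h = 1.0, 1.0

# Large vertical wavelength (kz*h << 1) gives the widest unstable range;
# shorter vertical wavelengths narrow it.
assert unstable_radial_range(kJ, QM=0.5, kz=0.01, h=h) > \
       unstable_radial_range(kJ, QM=0.5, kz=0.5, h=h) > 0

# Rotation shuts off mid-plane instability once QM >= 1 - kz^2*h^2.
assert unstable_radial_range(kJ, QM=1.0, kz=0.01, h=h) == 0.0
```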
From a more qualitative perspective, the onset of this `mid-plane' Jeans-like instability can be described as follows:
At the mid-plane where the density is approximately constant, gas pressure applies a negligible force and only the perturbed pressure force is left to compete with self-gravity. The vertical component of this force is negligible when the wavelength of the perturbation is large, i.e. $k_zh<1$ or when the disk is perturbed above the vertical (effective) Jeans length. As a result, the primary competition against self-gravity comes from the pressure force in the plane. Since the pressure force is scale-dependent while self-gravity at the mid-plane is not, the result is that the disk is able to destabilize, but only on scales larger than the radial Jeans length (lengthened by rotation).
It is worth noting that, although this instability is described as occurring `at the mid-plane', it is limited by pressure to scales larger than the vertical Jeans length. Thus the scale height sets the minimum vertical extent of the region that becomes unstable. Indeed, in either the radial or vertical directions, the disk is stabilized by pressure below the Jeans length. According to eq. (\ref{eq:midplane3Djeans}), rotation also contributes to stability on the largest scales. \\
\noindent\underline{Non-wave Perturbations}\\
\vspace*{-.1in}
Non-wave ($k_z\ll T$) vertical perturbations that are infinite (and satisfy $1/z_p=1/z_d$) exhibit almost identical behavior near the mid-plane. For these,
\begin{equation}
C_z=\rho_0\nabla_z^2\Phi_1+\sigma^2\nabla_z^2\rho_1+\left(\frac{1}{z_d}\right)\left(\nabla_z \mathcal{F}+\sigma^2\frac{\nabla_z \mathcal{R}}{\rho_0}\right)\label{eq:czmidnonwave}
\end{equation}
(see previous section). In the limit $z\ll h$, where the unperturbed and perturbed densities are roughly constant and $1/z_d=z/h^2\ll1/h$ (for our adopted Gaussian equilibrium vertical profile), the second and third (pressure) terms drop.
Thus, substituting eq. (\ref{eq:czmidnonwave}) into eq. (\ref{eq:3dstep1}), the condition for instability becomes
\begin{equation}
\kappa^2+\frac{\rho_0}{\rho_1}k^2\Phi_1+\sigma_r^2k^2
-\frac{\rho_0}{\rho_1}\nabla_z^2\Phi_1<0
\label{eq:3dstabilityGLBmidv0}
\end{equation}
or
\begin{equation}
\kappa^2-4\pi G \rho_0+\sigma_r^2k^2<0
\label{eq:3dstabilityGLBmid}
\end{equation}
using Poisson's equation for the substitution $\nabla_z^2\Phi_1=4\pi G\rho_1+k^2\Phi_1$.
Specifically near the mid-plane, instability in the presence of an arbitrary infinite perturbation is possible as long as
\begin{equation}
k^2<k_J^2(1-Q_M)
\end{equation}
with stability once again setting in above the threshold $Q_M=1$. The minimum instability scale is thus the radial Jeans length in the radial direction and effectively the scale height in the vertical direction (or, more precisely, the extent of the region where the disk density is approximately constant).
As discussed in $\S$~\ref{sec:verticalonly}, the factors proportional to $1/z_d$ that were neglected here become important away from the mid-plane and serve to lower the threshold for the overall stability of the disk to infinitely extended perturbations. This was previously determined by GLB, who calculated a total (non-wave) perturbation-weighted threshold $\bar{Q}_M<Q_M=1$ from the 2D dispersion relation derived by integrating over the vertical direction from $-\infty$ to $+\infty$. The next section examines this further, expanding the calculation to include the infinite and finite wave perturbations considered in this work.
\section{2D Stability criteria}\label{sec:2dstability}
\subsection{Overview}
The 3D dispersion relation encodes the evolution of the perturbed density in the presence of the radial and vertical motions that develop in response to gravity, rotation and gas pressure.
In the previous section, this evolution was shown to correspond to stability or growth in a manner that is sensitive to distance from the mid-plane ($\S$ \ref{sec:verticalonly});
near $z=0$, perturbations can be unstable above the radial and vertical Jeans lengths, while beyond $\vert z\vert \approx h$, the disk is characterized by stability. This has two important implications. First, the entire disk is neither as unstable as predicted near the mid-plane nor as stable as predicted far from it, and we can expect the overall stability threshold (determined from the 2D dispersion relation, after integration over the vertical direction) to be lower than the $Q_M=1$ predicted near $z=0$. Second, perturbations representing density enhancements with varying extents around the mid-plane will have different stability thresholds, with the most confined perturbations best able to avoid the stability at locations far beyond $h$.
To examine these implications further, the following sections derive the 2D dispersion relation in a number of scenarios. The first and second of these focus on the case of infinite non-wave and wave perturbations that satisfy the no-mass flux at infinity requirement. Like the unperturbed disk density, these perturbations fall slowly to zero with increasing $z$ by setting $1/z_p\gtrsim 1/z_d$ where $z_d$ captures the gradient in the equilibrium density. %
(This also keeps them small with respect to $\rho_0$ everywhere.) The infinite case is then compared with a scenario in which the perturbation is wave-like and allowed to have some finite extent $h_1$ above and below the mid-plane.
As discussed earlier, these wave perturbations can be studied using the WKB approximation (assuming some arbitrary amplitude variation), since their truncation prevents them from violating the required $\rho_1/\rho_0\ll1$ as $\rho_0\rightarrow 0$. By examining these finite perturbations in two main regimes, $h_1/h\ll1$ and $h_1/h\gg1$, bounds are placed on the possible range of stability thresholds that apply to 3D disks.
For illustration purposes, in what follows the Gaussian vertical distribution is specifically adopted, although vertical integration in the case of the $\textrm{sech}^2$ profile is also discussed. In addition, only even vertical perturbations are considered.
As a diagnostic of stability in general, the case of axisymmetry in the plane ($m=0$) is specifically highlighted, although non-axisymmetry is considered in Appendix \ref{sec:nonaxisym}.
\subsection{Zero Vertical Mass Flux Infinite Non-wave (GLB) Perturbations}\label{sec:infGLB}
To serve as a reference for stability thresholds calculated in this work, this section presents a derivation of the threshold implied by the 2D dispersion relation in the scenario examined by GLB. This involves a radial WKB wave perturbation and a generic infinite non-wave (non-periodic) vertical perturbation that satisfies the `no vertical mass flux at infinity' condition introduced by those authors.
For the perturbations under consideration, Poisson's equation reads
\begin{equation}
(-k^2+T^2)\Phi_1=4\pi G\rho_1
\end{equation}
where $T$ measures the amplitude variation, defined in the previous section.
These perturbations entail no mass flux at $z=\pm\infty$ when their amplitudes are even functions of $z$ and fall to zero as $\vert z\vert\rightarrow\infty$. In practice this amplitude variation has to be faster than the vertical variation of the density in the unperturbed (equilibrium) disk, in order that the perturbation remains small at all locations ($\rho_1/\rho_0\ll1$). In this case, the integral of the third (vertical) term in the continuity equation vanishes:
\begin{equation}
\int_{-\infty}^{\infty}\frac{d(\rho_0v_{z,1})}{dz}dz=\rho_0v_{z,1}\vert_{-\infty}^{\infty}=0.
\end{equation}
For this scenario, the 2D dispersion relation obtained by vertical integration of the continuity equation becomes
\begin{eqnarray}
&0&=\int_{-\infty}^{\infty}\Delta\rho_1dz-\int_{-\infty}^{\infty}\rho_0 \frac{4\pi G\rho_1}{k^2-T^2}k^2dz\nonumber\\&+&\int_{-\infty}^{\infty}\sigma_r^2\rho_1k^2dz+\int_{-\infty}^{\infty}\frac{\Delta}{(-\omega+m\Omega)}\frac{d (\rho_0 v_{z,1})}{dz}dz
\end{eqnarray}
which can be written as
\begin{equation}
\bar{\omega}^2=\kappa^2-\gamma_T 4\pi G\rho_c +\sigma_r^2k^2\label{eq:GLB2d}
\end{equation}
where the perturbation-weighted
\begin{equation}
\bar{\omega}^2=\frac{\int_{-\infty}^{\infty}\omega^2\rho_1dz}{\int_{-\infty}^{\infty}\rho_1dz}
\end{equation}
and the factor
\begin{equation}
\gamma_T=\frac{\int_{-\infty}^{\infty}\frac{(4\pi G\rho_0)k^2}{k^2-T^2}\rho_1dz}{4\pi G\rho_c\int_{-\infty}^{\infty}\rho_1dz}.\label{eq:gammaInt}
\end{equation}
(Note that vertical variation in $\kappa^2$ is neglected here but is considered in Appendix \ref{sec:rotationallagappendix}.)
For the overall disk to become unstable, $\bar{\omega}^2$ must be less than zero. This translates into the instability condition
\begin{equation}
k^2<\frac{4\pi G\bar{\rho}}{\sigma_r^2}(\sqrt{2}\gamma_T-\bar{Q}_M),
\end{equation}
which is associated with the stability threshold
\begin{equation}
\bar{Q}_M=\sqrt{2}\gamma_T
\end{equation}
in terms of the mean density
\begin{equation}
\bar{\rho}=\frac{\int\rho_0^2 dz}{\int\rho_0 dz}=\frac{\rho_c}{\sqrt{2}}
\end{equation}
for the Gaussian vertical profile\footnote{
(For the self-gravitating disk, $\bar{\rho}=(2/3)\rho_c$.)} and where $\bar{Q}_M=\kappa^2/(4\pi G\bar{\rho})$.
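As a quick numerical cross-check (a sketch, not part of the derivation), the two perturbation-weighted mean densities quoted above can be reproduced by direct quadrature; the unit normalizations ($\rho_c=h=1$) are illustrative assumptions:

```python
import math

def mean_density(rho0, zmax=20.0, n=20001):
    """Perturbation-weighted mean density: int rho0^2 dz / int rho0 dz."""
    dz = 2.0*zmax/(n - 1)
    zs = [-zmax + i*dz for i in range(n)]
    num = sum(rho0(z)**2 for z in zs)*dz
    den = sum(rho0(z) for z in zs)*dz
    return num/den

gauss = lambda z: math.exp(-z*z/2.0)    # Gaussian profile (rho_c = h = 1)
sech2 = lambda z: 1.0/math.cosh(z)**2   # self-gravitating sech^2 profile

print(mean_density(gauss))  # ~ 1/sqrt(2) = 0.7071...
print(mean_density(sech2))  # ~ 2/3
```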
The quantity $1/(4\sqrt{2}\gamma_T)$ is equivalent to the function $\mathscr{F}$ evaluated analytically (with great effort) by GLB in the case of the fully self-gravitating disk. (In their formalism, $\mathscr{F}$ sets the threshold on the quantity $\pi G\bar{\rho}/\kappa^2$.) A few simplifying assumptions make it possible to perform the integral with greater transparency while still capturing the main features of $\mathscr{F}$. In the estimate below, the Gaussian profile (which applies to the idealized weakly self-gravitating case) is adopted (rather than assuming that the gas is self-gravitating) and the quantity $\nabla^2\Phi_1/\Phi_1=T^2$ is approximated as $2/z^2$ in the present case where $z_p\approx z_d$.\footnote{From the perturbed vertical equation of motion, \begin{equation}
\nabla\Phi_1=-\frac{\sigma^2}{\rho_0}\nabla\rho_1+f(z)
\end{equation}
where $f(z)=-iv_{z,1}(-\omega+m\Omega)$, it can be shown that when $z_p=z_d$,
\begin{equation}
T^2=\frac{\nabla^2\Phi_1}{\Phi_1}=\frac{2}{z^2}\frac{(1+\nabla f(z))}{\left(1+\frac{(2/z^2)\int f(z)dz}{4\pi G\rho_{1,c}}\right)}.
\end{equation}
Below this is approximated as $2/z^2$, but the full expression for $T^2$ is handled in the derivation by GLB.
}
With these assumptions,
\begin{equation}
\gamma_T\approx 1/\sqrt{2} - i \frac{e^{-2/(k_zh)^2} \sqrt{\pi}}{k_zh} -
\frac{2 \textrm{Dawson}\left(\frac{\sqrt{2}}{k_zh}\right)}{k_zh}
\end{equation}
in terms of the Dawson integral
\begin{equation}
\textrm{Dawson}\left(y\right)=e^{-y^2}\int_0^y e^{t^2}dt.
\end{equation}
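The Dawson integral has no elementary closed form but is straightforward to evaluate numerically; the sketch below (an illustrative midpoint-rule quadrature, not part of the derivation) also evaluates the real part of the $\gamma_T$ approximation above as a function of $k_z h$:

```python
import math

def dawson(y, n=4000):
    """Dawson(y) = exp(-y^2) * int_0^y exp(t^2) dt, by the midpoint rule."""
    dt = y/n
    s = sum(math.exp(((i + 0.5)*dt)**2) for i in range(n))*dt
    return math.exp(-y*y)*s

def re_gamma_T(x):
    # real part of the gamma_T approximation, with x = k_z h
    return 1.0/math.sqrt(2.0) - 2.0*dawson(math.sqrt(2.0)/x)/x

print(dawson(1.0))      # ~ 0.5381
print(re_gamma_T(2.0))
```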
Although rough, this approximation comes close to the result of GLB (see Figure \ref{fig:gamma}) by capturing three main features: at small $k_z z$ the integrand in eq. (\ref{eq:gammaInt}) is proportional to $(k_z z)^2/2$ and negative; there is a singularity where $T=k_z$ (at $z=\sqrt{2}/k_z$); and at large $k_z z$ the integrand is independent of $k_z$ and positive. The similarity between $\mathscr{F}$ and $1/\gamma_T$ is also helped by the general similarity between Gaussian and $\mathrm{sech^2}$ profiles, especially near $z\approx 0$ where $T^2/k^2=2/(k z)^2$ is large.
Following GLB, from $\mathscr{F}$ (or $1/(4\sqrt{2}\gamma_T)$) we can identify the following characteristics in the stability behavior of disks overall (out to $\pm\infty$): there are two critical regimes, $kh\rightarrow0$ and $kh\rightarrow1$, and a critical most-unstable wavenumber $kh\sim 0.5-0.6$ where the minimum stability threshold of $\mathscr{F}\approx 0.6$ is reached, corresponding to $\bar{Q}_M=\sqrt{2}Q_M\approx0.45$. \footnote{The estimates for stability above $\bar{Q}_M=0.45$ or below $\pi G\bar{\rho}/\kappa^2=0.56$ due to rotation were determined by GLB in the case of a fully self-gravitating isothermal disk. (This is a recalculation of the threshold 0.73 determined by GLB, located by finding the minimum in their function $\mathscr{F}(m=kT)$ for the self-gravitating isothermal disk.) Note that, as determined by GLB, in the case of the steeper equation of state $P\propto\rho^2$, the threshold lowers to $\bar{Q}_M=0.27$ ($\pi G\bar{\rho}/\kappa^2=1.1$) and reduces still further to $\bar{Q}_M=0.14$ ($\pi G\bar{\rho}/\kappa^2=1.75$) for an incompressible disk. } The sign change above $kh>1$ indicates that the disk is always stable in this regime.
Since $\mathscr{F}$ (or $1/\gamma_T$) is relatively flat across the range $0\lesssim kh\lesssim 1$, in practice the condition for instability can be well approximated by
\begin{equation}
k^2<\frac{4\pi G\bar{\rho}}{\sigma_r^2}(0.45-\bar{Q}_M)
\end{equation}
with stability threshold
\begin{equation}
\bar{Q}_M=0.45.
\end{equation}
This corresponds to $Q_M=\bar{Q}_M/\sqrt{2}\approx0.3$ or, according to eq. (\ref{eq:QMQT}), roughly $Q_T\approx 1$,
assuming $\alpha\sim1$ and $f_g\sim1$.
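The conversion between the two thresholds amounts to simple arithmetic using the relation $Q_M=\alpha^2Q_T^2f_g/4$ (eq. \ref{eq:QMQT}), here sketched with the assumed $\alpha=f_g=1$:

```python
import math

# Q_M = alpha^2 Q_T^2 f_g / 4, inverted for Q_T (alpha ~ 1, f_g ~ 1 assumed)
def Q_T_from_Q_M(Q_M, alpha=1.0, f_g=1.0):
    return 2.0*math.sqrt(Q_M/f_g)/alpha

Q_M_bar = 0.45                   # whole-disk (GLB-like) threshold
Q_M = Q_M_bar/math.sqrt(2.0)     # ~ 0.32
print(round(Q_T_from_Q_M(Q_M), 2))  # ~ 1.13, i.e. roughly Q_T ~ 1
```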
Thus,
the entire disk has a lower stability threshold than found specifically near the mid-plane, where $Q_M=1$ applies regardless of the vertical density distribution and regardless of the type of perturbation ($\S$~\ref{sec:3dmidplane}).
As examined in the next section, the introduction of wave-like behavior ($k_z\neq0$) out to $z=\pm\infty$ modifies this threshold, but only significantly when $k_z>>k$ and the radial self-gravity force is weakened.
Thresholds for perturbations that are finite (and do not extend to $\pm\infty$), on the other hand, tend to be raised when either the velocity dispersion is highly non-isotropic $\alpha<<1$ or the perturbation is present only well inside $h$ ($h_1/h<<1$).
\subsection{Zero Vertical Mass Flux Infinite (Non-WKB) Wave Perturbations}\label{sec:infwave}
Now consider vertical wave perturbations that include phase variation ($k_z\neq 0$)
but still fall off with height above the mid-plane, to satisfy the GLB `no mass flux at infinity' condition. In this case, Poisson's equation implies %
\begin{equation}
\Phi_1=-\frac{4\pi G\rho_1}{k^2+k_z^2-T^2}
\end{equation}
with $T$ the same as in the previous section. The potential is thus weakened with the introduction of non-zero $k_z$.
This weakening is minimal in scenarios with $k>>k_z$, which are identical to the GLB scenario. But when $k<<k_z$, the radial self-gravity term in the dispersion relation is considerably smaller.
After integration, the 2D dispersion relation becomes
\begin{equation}
\bar{\omega}^2=
\begin{cases}
\kappa^2-4\pi G\rho_c\gamma_T \frac{k^2}{k_z^2}+\sigma_r^2k^2 &\text{$k<<k_z$}\\
\kappa^2- 4\pi G\rho_c\gamma_T +\sigma_r^2k^2 &\text{$k>>k_z$}
\end{cases}
\end{equation}
where %
$\gamma_T\approx 0.3$ (see previous section). %
These two regimes yield the stability condition ($\bar{\omega}^2<0$) that can be written as
\begin{equation}
k^2<\bar{k}_{J,r}^2(\zeta^2\sqrt{2}\gamma_T-\bar{Q}_M),
\end{equation}
assuming $k/k_z$ is a fixed ratio. Here $\zeta=1$ ($k>>k_z$) or $\zeta=k/k_z$ ($k<<k_z$), and the (radial) Jeans wavenumber is defined by $\bar{k}_{J,r}^2=4\pi G\bar{\rho}/\sigma_r^2$.
Rotation thus acts to stabilize above a threshold
\begin{equation}
\bar{Q}_M=
\begin{cases}
\gamma_T\sqrt{2}(k/k_z)^2& \text{$k<<k_z$}\\
\gamma_T\sqrt{2}& \text{$k>>k_z$}.
\end{cases}
\label{eq:waveQ}
\end{equation}
The behavior of $\mathscr{F}=(1/\gamma_T)$ also implies that disks are stable wherever $k_z h>>1$ (in the regime $k<<k_z$) or $kh>>1$ (in the regime $k>>k_z$).
Thus we see that the impact of the wave nature of the vertical perturbation is different in the two regimes. In the first ($k<<k_z$) scenario, the condition for instability can also be written as
\begin{equation}
k^2>\frac{\kappa^2}{\frac{4\pi G\bar{\rho}\sqrt{2}\gamma_T}{k_z^2}-\sigma_r^2}
\end{equation}
which can be solved as long as
\begin{equation}
k_z^2<\bar{k}_J^2\alpha^2\sqrt{2}\gamma_T
\end{equation}
in terms of $\bar{k}_J^2=\bar{k}_{J,r}^2/\alpha^2$.
Instability in the radial direction is thus unable to proceed without instability in the vertical direction.
In the opposite $k>>k_z$ scenario, instability is nearly insensitive to the vertical direction and possible as long as
\begin{equation}
k^2<\bar{k}_{J}^2\alpha^2(\sqrt{2}\gamma_T-\bar{Q}_M)\approx k_J^2\alpha^2(\gamma_T-Q_M)
\end{equation}
in terms of $Q_M=\kappa^2/(4\pi G\rho_c)$ and $k_J^2=4\pi G\rho_c/\sigma_z^2$.
Stability is indeed identical to the case of the generic vertical (non-WKB) perturbation considered by GLB and here in $\S$ \ref{sec:infGLB}.
It is notable that, in the long-wavelength regime $k<<k_z$, the $Q_M$ threshold in eq. (\ref{eq:waveQ}) is always lower than $\sqrt{2}\gamma_T\approx 0.45$, which makes it lower than the $Q_T$=1 threshold (see previous section).
As examined in the next section, $Q_T$=1 can be viewed as the highest threshold that applies to extended perturbations in the limit of very small $h$ (or small $\sigma_z$ or highly non-isotropic velocity dispersions).
Accounting for the 3D nature of the disk, the $Q_T$ threshold is lowered, as also indicated by the lowered $Q_M$ calculated in this section.
\subsection{Finite WKB-wave Perturbations}\label{sec:finWKB}
To illustrate how the threshold $Q_M=1$ endemic to the mid-plane transforms smoothly into the $Q_T$=1 threshold characteristic of the total destabilization of the disk, this section calculates the 2D dispersion relation for perturbations that extend a finite distance around the mid-plane.
As discussed in $\S$~\ref{sec:perturbations}, truncations represent an opportunity to describe perturbations with amplitudes that vary more slowly than $\rho_0$ in the vertical direction (since they drop to zero before the requirement $\rho_1/\rho_0<<1$ is violated). This has the practical advantage that perturbations can be examined using the WKB approximation, and the amplitude variation can be arbitrary with $z$ as long as it is slow, i.e. $k_z z_p>>1$.
More critically, since $k_z>>T$, these perturbations entail higher self-gravity than infinite wave perturbations, enhancing the possibility for growth.
However, unlike for the infinite perturbations with unrestricted $k$ and $k_z$, for finite perturbations the boundary condition couples $k$ to $k_z$ and ties them both to the perturbation extent $h_1$. This puts a strong limit on $k_z$ in the short regime in particular, preventing $k$ from dropping below $1/h$ (when $h_1<<h$). Thus we can expect perturbations in the short regime to be more readily stable than in the long regime, which is the reverse of the scenario predicted for infinite wave perturbations in $\S$~\ref{sec:infwave}.
Finite perturbations also have the influential property that they entail mass flux through the perturbed region; indeed, integration of the vertical terms does not necessarily yield zero even in the limit $h_1/h>>1$.
This is captured here by the 2D stability condition
\begin{equation}
\kappa^2+\left(\frac{-4\pi G\rho_c F_r(x)}{k^2+k_z^2}+\sigma_r^2\right)k^2 +\left(\frac{-4\pi G\rho_cF_z(x)}{k^2+k_z^2}+\sigma_z^2\right)k_z^2 <0
\end{equation}
calculated by integrating the 3D dispersion relation over the vertical direction with bounds $\pm h_1$ and then identifying when $\omega^2<0$ (see $\S$ \ref{sec:3dmidplane}).
Here $x=h_1/h$ and the factors $F_z(x)$ and $F_r(x)$ (see below) depend on the vertical density distribution of the unperturbed disk and the perturbation itself.
For demonstration purposes (and for the sake of analytical simplicity), the discussion below focuses on the basic scenario in which the perturbation amplitude is constant. This is a good approximation for any perturbation with an amplitude that varies more slowly with $z$ than $\rho_0$. Indeed, for most other `slow' choices, $F_r(x)$ and $F_z(x)$ recover essentially the same behavior in the limits $h_1/h<<1$ and $h_1/h>>1$ as in the constant-amplitude case, even if their functional forms differ in detail from what is presented below.
Combining this slow (constant) perturbation amplitude with the Gaussian vertical profile associated with the nominal weakly self-gravitating equilibrium disk, vertical integration yields
\begin{equation}
F_z(x)=e^{-x^2/2}\label{eq:fzofx}
\end{equation}
and
\begin{eqnarray}
F_r(x)&=&e^{-\frac{h^2k_z^2}{2}}\frac{k_zh}{2\sin{k_zh_1}}\sqrt{\frac{\pi}{2}}\label{eq:frofx}\\
&\hspace{1cm}&\times\left[\textrm{Erf}\left(\frac{x-ik_zh}{\sqrt{2}}\right)+\textrm{Erf}\left(\frac{x+ik_zh}{\sqrt{2}}\right)\right]\nonumber\\
&\approx& \frac{1}{x}\sqrt{\frac{\pi}{2}}\textrm{Erf}\left(\frac{x}{\sqrt{2}}\right)\textrm{\hspace{0.5cm}$k_z h<<1$ and $k<<k_z$}\nonumber\\
&\approx& k_z h\sqrt{\frac{\pi}{2}}\textrm{Erf}\left(\frac{x}{\sqrt{2}}\right)\textrm{\hspace{0.28cm}$k_z h<<1$ and $k>>k_z$}\nonumber
\end{eqnarray}
using that $\sin{k_zh_1}\sim k_z h_1$ in the limit $k_z h_1<<1$ appropriate for our adopted finite perturbations in the regime $k<<k_z$ or $\sin{k_zh_1}=1$ when $k_z=\pi/(2h_1)$ as required for $k>>k_z$. Note that, in the limit $k_zh>>1$, $F_r(x)\rightarrow 0$.
These factors simplify considerably in the limits $h_1/h<<1$ or $h_1/h>>1$, becoming
\begin{eqnarray}
F_z(x)&\approx&
\begin{cases}
1 & \text{$x<<1$}\\%$x\rightarrow 0$}\\
0& \text{$x>>1$}\\%$x\rightarrow \infty$}
\end{cases}\\
F_r(x)&\approx&
\begin{cases}
\begin{cases}
1 & \text{$k<<k_z$}\\%$x\rightarrow 0$}\\
\frac{\pi}{2} & \text{$k>>k_z$}\end{cases}&\text{$x<<1$}\\
\begin{cases}
\sqrt{\pi/2}(1/x)& \text{$k<<k_z$}\\
\sqrt{\pi/2}k_z h& \text{$k>>k_z$}%
\end{cases}&\text{$x>>1$}
\end{cases}
\end{eqnarray}
The behavior of $F_r(x)$ and $F_z(x)$ in these opposite limits is key to the recovery of both stability above $Q_M$=1 when $h_1/h<<1$ and stability above $Q_T=1$ when $h_1/h>>1$.
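These limits are easy to verify numerically from eqs. (\ref{eq:fzofx}) and (\ref{eq:frofx}); the sketch below checks the $k<<k_z$ branch of $F_r(x)$ at small and large $x$:

```python
import math

# Limiting forms of the vertical integration factors (Gaussian disk, x = h_1/h)
def F_z(x):
    return math.exp(-x*x/2.0)

def F_r_long(x):
    # k << k_z branch: sqrt(pi/2) * Erf(x/sqrt(2)) / x
    return math.sqrt(math.pi/2.0)*math.erf(x/math.sqrt(2.0))/x

print(F_z(0.01), F_r_long(0.01))  # both approach 1 for x << 1
print(F_r_long(50.0)*50.0)        # approaches sqrt(pi/2) ~ 1.2533 for x >> 1
```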
\subsubsection{$h_1/h>>1$: The Lin-Shu Dispersion Relation and $Q_T$=1 Threshold}\label{sec:largeh1}
In the limit $x>>1$, the 2D
stability condition reads
\begin{equation}
0>\kappa^2-\frac{4\pi G\rho_c k^2}{k^2+k_z^2}\sqrt{\frac{\pi}{2}}\frac{h}{h_1}+\sigma_r^2k^2+\sigma_z^2k_z^2,
\end{equation}
which can be used to predict the behavior of stability in the regimes $k<<k_z$ and $k>>k_z$. \\
\noindent\underline{The Long Wavelength Regime $k<<k_z$}\\
\vspace*{-.1in}
In the first case, inserting the specific relation between $k$ and $k_z$ appropriate for this scenario ($k=k_z^2h_1$), the factor
\begin{equation}
\frac{k^2}{(k^2+k_z^2)}\approx \frac{k^2/k_z^2}{(k^2/k_z^2+1)}\approx \frac{kh_1}{1+kh_1}
\end{equation}
which can be approximated by $kh_1$ to lowest order. (Note that in this regime, $k_zh_1<<1$ and $kh_1<<1$.)
The 2D dispersion relation thus yields the instability condition
\begin{equation}
0>\kappa^2-2\pi G\Sigma_c k+\sigma_r^2k^2+\sigma_z^2\frac{k}{h_1}\label{eq:linshu}
\end{equation}
where $\Sigma_c=\rho_c\sqrt{2\pi}h$ for our unperturbed disk.
In the limit $h_1>>h$ (or in the limit $\sigma_z\rightarrow 0$), the fourth term above
\begin{equation}
\sigma_z^2\frac{k}{h_1}=\frac{\sqrt{8\pi} G\Sigma}{f_g} \left(\frac{h}{h_1}\right)k
\end{equation}
and can be neglected, and the 2D dispersion relation becomes the axisymmetric Lin-Shu dispersion relation, which can be solved under the condition $\omega^2<0$ for the onset of instability to obtain the familiar requirement
\begin{equation}
k<\frac{k_T}{2}\left[1\pm (1-Q_T^2)^{1/2}\right]\label{eq:ktoomre}
\end{equation}
for unstable (growing) modes, in terms of
\begin{equation}
Q_T=\frac{\sigma_r\kappa}{\pi G\Sigma}
\end{equation}
and the wavenumber associated with the Toomre length
\begin{equation}
k_T=\frac{2\pi G\Sigma}{\sigma_r^2}.
\end{equation}
The inequality in eq. (\ref{eq:ktoomre}) gives us the well-known stability condition $Q_T>1$ that describes the suppression of long-wavelength instabilities by rotation.
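As a minimal illustration of this condition, the Lin-Shu quadratic can be evaluated at the most-unstable wavenumber $k_T/2$, in units where $G=\Sigma=\sigma_r=1$ (an illustrative normalization, so that $\kappa=\pi Q_T$ and $k_T=2\pi$):

```python
import math

# Lin-Shu relation omega^2 = kappa^2 - 2 pi G Sigma k + sigma_r^2 k^2,
# in units with G = Sigma = sigma_r = 1 (so kappa = pi Q_T and k_T = 2 pi)
def omega2(k, Q_T):
    return (math.pi*Q_T)**2 - 2.0*math.pi*k + k*k

k_most_unstable = math.pi  # = k_T/2, where omega^2 is minimized
print(omega2(k_most_unstable, 0.9) < 0.0)  # True:  unstable for Q_T < 1
print(omega2(k_most_unstable, 1.1) < 0.0)  # False: stable for Q_T > 1
```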
The threshold for stability is lowered when the disk is allowed to have some thickness (or non-negligible $\sigma_z$), weakening the perturbed gravitational force in the plane \citep[e.g.][]{toomre, jogsolomon,ghoshjog}.
This is evident here by letting $\alpha>0$, or keeping all terms to lowest order in $h/h_1$, such that the condition for instability becomes
\begin{equation}
0>\kappa^2-\left(2\pi G\Sigma_c-\frac{\sigma_z^2}{h_1}\right) k +\sigma_r^2k^2
\end{equation}
with the term in parentheses corresponding to weakened self-gravity.
This can be solved to yield
\begin{equation}
k<\left[\frac{k_T}{2}-\frac{\alpha^2}{2h_1}\right](1\pm \sqrt{1-Q_{T,t}})\label{eq:klongiso}
\end{equation}
in terms of the thickened $Q$ parameter %
\begin{equation}
Q_{T,t}=\frac{Q_T^2}{\left(1-\frac{\alpha^2}{k_Th_1}\right)^2}.
\end{equation}
In the limit $\alpha<<(k_T h_1)^{1/2}$, eq. (\ref{eq:klongiso}) implies that rotation suppresses the growth of 3D perturbations above a threshold
\begin{equation}
Q_T=\left(1-\frac{\alpha^2}{k_Th_1}\right)
\end{equation}
in terms of the velocity anisotropy parameter $\alpha=\sigma_z/\sigma_r$. This is approximately $Q_T=1-h/h_1$, since $k_T\approx\sqrt{\pi/2}/h$. Stability for a 3D disk with nearly isotropic velocity dispersion is thus predicted to set in above a threshold that is slightly lower than $Q_T=1$.
Note that this constitutes a higher threshold than calculated for infinite perturbations in the same regime, for which $Q_T\approx 2(\gamma_T)^{1/2}(k/k_z)\approx(k/k_z)$. This reflects the stronger self-gravity associated with the slowly varying amplitudes in the present case compared with fall-off required in the infinite case.
In the opposite limit $\alpha>>(k_T h_1)^{1/2}$, instability (which eq. [\ref{eq:klongiso}] implies would require $Q_{T,t}<0$) is entirely suppressed, since $Q_{T,t}$ is positive definite. \\
\noindent\underline{The Short Wavelength Regime $k>>k_z$}\\
\vspace*{-.1in}
In the opposite regime where $k>>k_z=\pi/(2h_1)$, the 2D dispersion relation implies that instability can occur as long as
\begin{equation}
0>\kappa^2-\frac{4\pi G\rho_c k^2}{k^2+k_z^2}\sqrt{\frac{\pi}{2}}k_z h+\sigma_r^2k^2+\sigma_z^2k_z^2
\end{equation}
or approximately
\begin{equation}
0>\kappa^2-2\pi G\Sigma_c k_z+\sigma_r^2k^2+\sigma_z^2k_z^2\label{eq:2Dshortbigh1}
\end{equation}
to lowest order in $k_z/k$.
This can be treated as a condition on $k_z$ (and $h_1$)
in a manner that parallels the condition for Toomre instability, i.e.
\begin{equation}
k_z<\frac{k_T}{2\alpha^2}(1\pm \sqrt{1-Q_{T,ep}})\label{eq:kzshort}
\end{equation}
in terms of %
\begin{equation}
Q_{T,ep}=Q_{T,z}^2\left(1+\frac{\sigma_r^2 k^2}{\kappa^2}\right)
\end{equation}
with $Q_{T,z}=Q_T(\sigma_z/\sigma_r)$.
This suggests the stability threshold
\begin{equation}
Q_{T,z}=\frac{1}{\left[1+(R_{ep}k)^2\right]^{1/2}}
\end{equation}
in terms of the epicyclic radius $R_{ep}=\sigma_r/\kappa$. When perturbations satisfy $R_{ep}k<<1$, the stability threshold remains at $Q_{T,z}\approx Q_T=1$. In the limit $h_1>>h$, $k$ in this regime will indeed always be smaller than $1/h$, which can be expected to be near $1/R_{ep}$.
\subsubsection{$h_1/h<<1$: The Mid-plane Dispersion Relation and $Q_M$=1 Threshold}\label{sec:smallh1}
In the limit $h_1/h<<1$, it is perhaps not surprising that %
the 2D dispersion relation is identical to the dispersion relation very near the mid-plane calculated in $\S$~\ref{sec:3dmidplane}. \\
\noindent\underline{The Long Wavelength Regime $k<<k_z$}\\
\vspace*{-.1in}
In the limit $k<<k_z$ and $h_1<<h$, instability is found to require
\begin{equation}
0>\kappa^2-4\pi G\rho_c +\sigma_r^2 k^2+\sigma_z^2k_z^2\label{eq:2dmid}.
\end{equation}
adopting the forms of $F_r(x)$ and $F_z(x)$ appropriate for the present scenario.
This suggests the general condition
\begin{equation}
k_S^2=k^2+k_z^2<k_J^2(1-Q_M)
\end{equation}
when the velocity dispersion is isotropic, and a stability threshold $Q_M=1$.
More precisely, eq. (\ref{eq:2dmid}) is quadratic in $k$ and yields the condition
\begin{equation}
k_z^2h_1\approx k<-\frac{\alpha^2}{2h_1}\left(1\pm\sqrt{1-Q_{2D,mid}}\right)
\end{equation}
where the parameter
\begin{equation}
Q_{2D,mid}=\left(\frac{2f_gh_1}{\alpha h}\right)^2(Q_M-1).
\end{equation}
For this to yield a positive solution for $k$ (and hence $k_z$), $Q_{2D,mid}<0$ is required, once again yielding the stability threshold $Q_M=1$. \\
\noindent\underline{The Short Wavelength Regime $k>>k_z$}\\
\vspace*{-.1in}
In the short limit $k>>k_z=\pi/(2h_1)$,
\begin{equation}
0>\kappa^2-\frac{4\pi G\rho_c k^2}{k^2+k_z^2}\left(\frac{\pi}{2}\right)+\sigma_r^2 k^2+\sigma_z^2k_z^2-\frac{4\pi G\rho_c k_z^2}{k^2+k_z^2}\label{eq:2dmidshortfull}
\end{equation}
which is approximately
\begin{equation}
0>\kappa^2-4\pi G\rho_c \frac{\pi}{2} +\sigma_r^2 k^2+\sigma_z^2k_z^2\label{eq:2dmidshort}
\end{equation}
to lowest order in $k_z/k$.
This yields the following condition on $k_z$ (or $h_1$)
\begin{equation}
k_z^2<k_J^2\left(\frac{\pi}{2}-Q_M-Q_M\left(R_{ep}k\right)^2\right)
\end{equation}
suggesting the stability threshold
\begin{equation}
Q_M=\frac{\frac{\pi}{2}}{1+\left(R_{ep}k\right)^2}.
\end{equation}
Since $k$ is not guaranteed to be smaller than $1/h$ (or $1/R_{ep}$) in the limit $h_1<<h$, however, perturbations are easily stabilized, with a stability threshold that is considerably lower than $Q_M=1$.
\subsection{Summary}
In the previous sections the stability threshold for 3D rotating disks (above which disks are stabilized) was found to be influenced by the presence of a vertical perturbation: whether it has wave- or non-wave traits, how strongly the amplitude varies, how far it extends in the vertical direction, and its relation to the scale height of the unperturbed disk. Lower predictions for the threshold are a mark of stability (since the threshold is more easily surpassed), while higher thresholds suggest that the disk is more unstable. The reference adopted for this study is the overall threshold $\bar{Q}_M=\kappa^2/(4\pi G\bar{\rho})=0.45$ (near $Q_T=1$) determined by GLB for stability out to $z=\pm\infty$ in the presence of a non-wave infinite vertical perturbation that falls off with $z$ like the unperturbed disk density.
For these infinite perturbations, adding phase variation (wave-like behavior) as a rule reduces the self-gravity of the perturbation and thus lowers the stability threshold (signifying a more easily stabilized disk), although the change is negligible when $k>>k_z$. The self-gravity can be increased again (even with wave-like behavior) when the perturbation has a more slowly varying amplitude than the infinite perturbations and is also necessarily finite (so as to avoid violating the requirement $\rho_1/\rho_0<<1$). In this manner, finite but extended ($h_1/h>>1$) long-wavelength WKB perturbations are shown to have a higher threshold than when they are infinite, with more rapidly varying amplitudes. The threshold in this case is exactly $Q_T=1$, signifying an increase back up near the level $\bar{Q}_M=0.45$. As ever, though, allowing for non-negligible thickness lowers this threshold (see $\S$ \ref{sec:largeh1}).
An even more consequential factor, capable of shifting the stability threshold {\it above} $Q_T\sim1$ (or $\bar{Q}_M=0.45$) -- and widening the avenue for instability -- is the extent of the perturbation and its relation to the scale height of the unperturbed disk. This is a consequence of the sensitivity of vertical stability to height above the mid-plane, which was identified in the case of generic wave or non-wave perturbations in $\S$ \ref{sec:3dmidplane} using the 3D dispersion relation. As a rule, perturbations near the mid-plane, or finite (WKB) perturbations extending only out to $h_1<<h$, are subject to a stability threshold $Q_M=1$, which corresponds to $Q_T=2/(\alpha f_g^{1/2})\approx 2$. As the perturbation extends vertically across more of the disk, its stability threshold is lowered back to $Q_T\sim1$.
In this light, disks are expected to be more stable to features that pervade the entire vertical extent of the disk than to perturbations local to the mid-plane. In other words, it is harder to prevent fragmentation near the mid-plane at a given $Q_T$ than it is to stop the whole disk from becoming unstable.
From this perspective, there are two stability regimes of consequence for the appearance of disks. These are referred to in what follows as either `partial 3D', in which the radial instability is localized around the mid-plane (but still limited to scales larger than the Jeans length), subject to threshold $Q_M=1$, or `total 2D', in which radial instability is present throughout the entire vertical extent of the disk and the relevant threshold is $Q_T$=1. The latter choice is meant to bring to mind that $Q_T=1$ is the threshold calculated for a 2D disk with perturbation restricted to the plane.
\section{The onset of partial 3D vs. total 2D instability}\label{sec:3Dv2D}
\subsection{Overview}\label{sec:longshortsummary}
In the previous sections the 2D and 3D dispersion relations were used to show that there are two relevant thresholds for describing 3D disk instability.
The first threshold -- the Toomre threshold
\begin{equation}
Q_T\equiv \frac{\sigma_r\kappa}{\pi G\Sigma}=1\nonumber
\end{equation}
(see $\S$~\ref{sec:largeh1}) -- applies to the disk's total ability to destabilize, across the entire disk out to $z\rightarrow\pm\infty$. The second threshold
\begin{equation}
Q_M\equiv\frac{\kappa^2}{4\pi G\rho_c}=1\nonumber
\end{equation}
(see $\S\S$~\ref{sec:3dmidplane} and \ref{sec:smallh1}) applies to the 3D instability at the mid-plane, which more closely resembles Jeans instability than Toomre instability.
Under most normal circumstances \citep[adopting the typical masses and rotational properties of disk galaxies; see e.g. ][]{meidt18} the $Q_M$ threshold that applies in 3D is higher than the 2D Toomre $Q_T$ threshold. The two additional degrees of freedom introduced by the vertical direction more than compensate for the stabilizing influence of disk thickness, similar to the role that a secondary (stellar) disk has been shown to play on gas stability \citep[e.g.][]{ko07}. The difference in 2D and 3D thresholds signifies that
fragmentation at the mid-plane should be possible even where the Toomre threshold is surpassed.
Turbulent dissipation and cooling also favor gravitational instability even where gas is Toomre stable \citep{gammie01,elm11}, i.e. once the gas velocity dispersion (and pressure support) is lowered through turbulent dissipation. The present work shows that (even without incorporating dissipation or cooling), pressure forces can be overcome by self-gravity preferentially at the disk mid-plane, where the gas density is approximately constant in equilibrium.
Considering only the basic equilibrium scenario discussed in this work (ignoring cooling, turbulent dissipation and magnetic forces), whether disk fragmentation is ultimately a partial 3D or total 2D process
can be expressed in terms of the growth rates of perturbations and proximity to the critical density associated with each stability threshold, as discussed below.
Before proceeding, it is worth noting that the onset of instability triggered by 3D perturbations is unaffected by a (vertical) rotational lag in either the partial or total instability regimes.
Non-axisymmetry also does not alter the $Q_M$=1 threshold (Appendix \ref{sec:nonaxisym}), although it has been shown to modestly increase the $Q_T$ threshold \citep[][Appendix \ref{sec:nonaxisym}]{laubertin, bertinplus, grivgedalin}. The increase is, however, substantially smaller than the increase in the $Q_T$ threshold represented by $Q_M=1$.
Indeed, $Q_T$=1 is expected to remain valid in most scenarios with $m>0$ \citep{BT}.
\subsection{The Critical Density}\label{sec:rhocrit}
The molecular gas disks of nearby galaxies are observed to sit near $Q_T\sim$2 \citep{leroy08}, placing them just at the threshold for stability at the mid-plane, i.e. $Q_M=1$. In general, proximity to $Q_M$=1 depends on the degree of self-gravitation; the more weakly self-gravitating the disk, the quicker the $Q_M$=1 threshold is passed.
Again letting
$f_g=\rho_c/(\rho_c+\rho_b)$ in terms of the background density $\rho_b$,
then $Q_M=\kappa^2/(4\pi G\rho_c)$ can be rewritten as
\begin{equation}
Q_M=\frac{1}{f_g}\left(\frac{\kappa}{\nu}\right)^2
\end{equation}
where $\nu^2=4\pi G(\rho_c+\rho_b)$.
Given that typically $\kappa/\nu\sim0.5$ in nearby star-forming (disk) galaxies, wherever the gas fraction $f_g$ falls below $\sim$0.25, $Q_M$ will exceed unity. A background potential can thus be viewed as a source of stability for any embedded disk, suppressing structure unless the disk's density is increased above a critical value
\begin{equation}
\rho_{crit}=\kappa^2/(4\pi G).
\end{equation}
This has also been discussed by \cite{jog2014}, who emphasized that in weakly self-gravitating disks, rotation and the epicyclic frequency $\kappa$ are decoupled from the embedded disk's mass distribution (tracking instead the dominant background distribution). This necessitates a comparable increase in the local disk density for instability to occur.
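The arithmetic behind the $f_g\sim0.25$ estimate is simple enough to spell out as a sketch (the value $\kappa/\nu\sim0.5$ is the typical one quoted above):

```python
# Q_M = (kappa/nu)^2 / f_g, with nu^2 = 4 pi G (rho_c + rho_b)
def Q_M(kappa_over_nu, f_g):
    return kappa_over_nu**2/f_g

# kappa/nu ~ 0.5, typical of nearby star-forming disk galaxies:
print(Q_M(0.5, 0.25))  # 1.0  -> mid-plane threshold reached at f_g = 0.25
print(Q_M(0.5, 1.0))   # 0.25 -> a fully self-gravitating disk is well inside it
```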
In many disks, $\rho_{crit}$ is a lower threshold to pass than the volume density associated with the Toomre critical surface density $\Sigma_{crit,T}=\sigma_r\kappa/(\pi G)$, since
\begin{equation}
\rho_{crit}=\rho_{crit,T}\frac{f_g\alpha}{4}
\end{equation}
where $\rho_{crit,T}=\Sigma_{crit,T}/(2h)$. This makes disk instability easier near the mid-plane than overall.
\subsection{Growth Rates}\label{sec:growthrates}
The closer molecular disks are to the critical density, the lower $Q_M$ and the more favorable they are to small-scale Jeans-like 3D instabilities. The prominence of the structures that result from this gravitational instability can be assessed by considering the growth rates of different perturbations over different scales, under a given set of conditions.
For 3D perturbations endemic to the mid-plane, growth rates can be approximated as
\begin{equation}
\omega_{3D}^2/\kappa^2\approx
1-\frac{4}{Q_T^2}\frac{1}{f_g}+\frac{4}{Q_T^2}\left(\frac{k}{k_T}\right)^2+\frac{4}{Q_T^2}\left(\frac{k_z}{k_T}\right)^2 \label{eq:growth3d}
\end{equation}
taking $\omega_{min}^2$ as the lower bound on $\omega^2$ and rewriting $Q_M$ in terms of $Q_T$. Here $\alpha$ is set to unity and $m=0$ is adopted for simplicity. The maximum growth rates at a given $k$ are associated with the largest vertical perturbations $k_z<<k_T$, as exclusively considered below.
Under the same conditions ($m=0$, $\alpha=1$ and using that $Q_M=\alpha^2Q_T^2 f_g/4$), the growth rates of 2D perturbations predicted by the Lin-Shu dispersion relation can be written as
\begin{equation}
\omega_{2D}^2/\kappa^2\approx
1-\frac{4}{Q_T^2}\left(\frac{k}{k_T}\right)+\frac{4}{Q_T^2}\left(\frac{k}{k_T}\right)^2\label{eq:growth2d}.
\end{equation}
Note that neither this expression nor eq. (\ref{eq:growth3d}) is expected to be valid when $k$ is small, since both are derived assuming $kR>>1$ \citep{BT}. These expressions also apply only to the fastest growing perturbations with negligible vertical rotational lag (see Appendix \ref{sec:rotationallagappendix}). %
The growth rates of tightly-wound non-axisymmetric $m\neq0$ instabilities (estimated in Appendix \ref{sec:nonaxisym}) are similar.
Figure \ref{fig:rates} compares the growth rates Re($i\omega$) of instabilities on different scales in the partial 3D and total 2D regimes.
Perturbations have mostly comparable growth rates in the two regimes over the entire range in $k$. But in the scenario $Q_T>1$, only 3D perturbations at the mid-plane can grow, as demarcated by the red curves. Like growth in the 2D regime, 3D growth appears everywhere above the Jeans scale. But, under certain conditions, this growth can occur below $k_T$, as illustrated in the right panel of the figure. These are situations in which the disk is only weakly self-gravitating ($f_g<1$) and the Jeans length exceeds the scale height (since $\lambda_J=h/f_g$). In these cases, 3D instabilities are still able to occur on small scales, closer to $h$ and $\lambda_J$ than $\lambda_T$. Embedding the gas disk in a dominant external potential therefore does not suppress small-scale structure. It may even favor vertical perturbations of the kind adopted in this work.
Another characteristic of 3D mid-plane instability in the low-$f_g$ scenario is faster growth at fixed $Q_T$ than when $f_g$=1. This can make weakly self-gravitating disks more prone to 3D mid-plane instabilities than 2D instabilities. Since $Q_T$ increases as $f_g$ decreases (and equilibrium velocity dispersions reflect more and more the background potential), a given $Q_T$ corresponds to a lower $Q_M$ as $f_g$ decreases. The result is faster growth on smaller scales.
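The contrast between the two regimes follows directly from eqs. (\ref{eq:growth3d}) and (\ref{eq:growth2d}); the sketch below uses $Q_T=2$ and $f_g=0.5$ as representative (illustrative) values:

```python
# Approximate squared growth rates in units of kappa^2
# (alpha = 1, m = 0, k_z << k_T), with kkT = k/k_T.
def w2_3d(kkT, Q_T, f_g):
    return 1.0 - (4.0/Q_T**2)/f_g + (4.0/Q_T**2)*kkT**2

def w2_2d(kkT, Q_T):
    return 1.0 - (4.0/Q_T**2)*kkT + (4.0/Q_T**2)*kkT**2

Q_T, f_g = 2.0, 0.5
# 2D (Lin-Shu) perturbations are stable at every k when Q_T = 2 ...
print(all(w2_2d(k/100.0, Q_T) > 0 for k in range(1, 200)))  # True
# ... but 3D mid-plane perturbations still grow for k < k_T:
print(w2_3d(0.5, Q_T, f_g))  # -0.75, i.e. a growth rate ~ 0.87 kappa
```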
\subsection{Discussion}
\subsubsection{Molecular Clouds as Instabilities}
The modified criterion presented here applies to axisymmetric ($m=0$) ring instabilities and non-axisymmetric instabilities (Appendix \ref{sec:largeh1}). It should thus provide a useful diagnostic for the development of the rich small scale structure observed in gas disks, in much the same way that the axisymmetric Toomre criterion serves as a gauge of stability in general, including stability against non-axisymmetric perturbations.
Indeed, following \cite{wangsilk94}, the growth rates of 3D instabilities in Figure \ref{fig:rates} provide an estimate for the cloud formation rate. Given the properties of molecular disks and stellar disks in nearby galaxies, $f_g\approx0.5$ \citep[see e.g.][]{sun20,meidt21} and eq. (\ref{eq:growth3d}) predicts that clouds and cloud complexes can form rapidly, at a rate $\sim2\kappa$ or with a characteristic formation timescale of $\sim t_{orb}/3$.
A prerequisite for the growth of any cloud structures is still the availability of vertical seed perturbations. Gas disks embedded in thicker gas and stellar disks would seem to readily encounter such perturbations, which might take the form of stellar bars and spiral arms, stellar overdensities in general (including stellar clusters), phase transitions, and pockets of gas that participate in the disk-halo flow or respond to triggers originating internal or external to the disk. The multi-scale impact of feedback from star formation can also be envisioned as prompting perturbations at or near the mid-plane and beyond \citep[e.g.][]{kimtigress}.
\subsubsection{Instabilities in Numerical Simulations}
In principle, existing realistic multi-phase 3D numerical disk simulations (as opposed to razor-thin models) should already capture the 3D disk instability described in this work, although it may be easiest to recognize in the absence of, e.g., a fixed spiral pattern and when controlling for magnetic forces (not included in the present calculation). These are important factors for cloud formation via collisions, the wiggle instability and the magneto-Jeans instability, for example \citep{elm87,WadaKoda,ko06,dobbs08}.
Cloud formation through gravitational instabilities assisted by turbulence dissipation and/or cooling \citep[e.g.][]{gammie96,elm11} is also in principle recoverable in modern 3D numerical simulations, but it may be distinguishable from 3D disk fragmentation as it would not necessarily favor regulation to a particular $Q_T$ value. From the perspective adopted in this work, the more profound consequence of the turbulent nature of molecular gas is to allow the deep interiors of the pressure-supported cloud fragments formed via 3D instability to collapse into the dense cores that go on to form stars, as proposed by \cite{KM05} \citep[see also][]{padoan,fk12}.
\subsubsection{Instabilities in Stellar Disks}
To the extent that the dynamics of stellar disks can be represented by fluid mechanics \citep[e.g.][]{jogsolomon}, the modified stability criterion derived here has implications for their stability and structure as well. A source of 3D perturbations with $k_zh\lesssim1$ may be less obvious than in the case of molecular gas disks, though (except in exceptional cases, like interactions), so even if locally $Q_M<1$, fragmentation near the disk scale height is not guaranteed. Still, it may be interesting that the stellar component of nearby galaxies has been measured to have $Q_T\gtrsim2$ \citep{bottema,kregel,westfall14} and some numerical simulations suggest that fragmentation is suppressed only once similar values are reached \citep[see][and references therein]{grivgedalin}.
In this context, it is notable that for self-gravitating systems, the $Q_M$ stability threshold is equivalent to a constraint on geometry. The epicyclic frequency can be written as $\kappa^2\approx2V_c^2/R^2=(2/3)4\pi G\rho_{sphere}$ in terms of the circular velocity $V_c$ and the volume density $\rho_{sphere}$ that would be equivalent to arranging all the mass internal to $R$ in a sphere. The flatter the arrangement, the lower $Q_M=(2/3)\rho_{sphere}/\rho_0$, thus indicating a preference for instability and fragmentation in flatter, disk-like geometries.
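The geometric reading of the threshold can be verified symbolically; the sketch below (using sympy, purely illustrative) confirms that $\kappa^2=2V_c^2/R^2$ equals $(2/3)\,4\pi G\rho_{sphere}$ when the mass internal to $R$ is rearranged into a sphere:

```python
import sympy as sp

G, M, R = sp.symbols('G M R', positive=True)

Vc2 = G * M / R  # circular velocity squared, V_c^2 = G M / R
rho_sphere = M / (sp.Rational(4, 3) * sp.pi * R**3)  # mass inside R as a sphere

kappa2 = 2 * Vc2 / R**2  # flat-rotation-curve epicyclic frequency squared
identity = sp.simplify(kappa2 - sp.Rational(2, 3) * 4 * sp.pi * G * rho_sphere)
print(identity)  # 0
```

With the identity confirmed, $Q_M=(2/3)\rho_{sphere}/\rho_0$ follows directly from the definition $Q_M=\kappa^2/(4\pi G\rho_0)$.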
\section{Summary and Conclusions}
This paper examines the stability of disks to a diversity of 3D perturbations, with the aim of describing situations apart from the Lin-Shu density wave scenario in which the waves are confined to an infinitely thin disk. The chosen perturbations are meant to roughly represent the impact of events and processes taking place within gas disks as a consequence of their thickness and the fact that they are themselves i/ embedded within more extended gas and stellar disks and ii/ subject to on-going events like phase transitions and feedback from star formation.
For the equilibrium disks under consideration (wherein pressure and gravity are the two most important factors, neglecting cooling, dissipation and magnetic forces), the inclusion of a vertical perturbation is found to be consequential. This is fully characterized using the 3D dispersion relation ($\S$~\ref{sec:3ddispersionRelation}), which is shown to encode variations in disk stability with height above the mid-plane ($\S$~\ref{sec:verticalonly}). This applies regardless of the chosen vertical form of the perturbation: with or without periodic (wave) components and either extending to infinity (as treated by GLB) or to a finite height above the mid-plane (and treatable with the WKB approximation).
Near the mid-plane, in particular, where the unperturbed gas density is overall roughly constant, instability is found to proceed in a manner that is more Jeans-like than Toomre-like. The onset of instability in this scenario is restricted to scales larger than the effective Jeans length (in the presence of thermal and non-thermal motions) in both the radial and vertical directions. The instability is moreover subject to a modified threshold $Q_M=\kappa^2/(4\pi G\rho_c)=1$, or roughly $Q_T=2$, in terms of the Toomre $Q_T$, the radial epicyclic frequency $\kappa$ and the gas volume density $\rho_c$ at $z=0$. This applies in the presence of a rotational lag (Appendix \ref{sec:rotationallagappendix}) or non-axisymmetry in the plane (Appendix \ref{sec:nonaxisym}).
At locations well beyond a disk scale height $h$, however, the 3D dispersion relation describes characteristic stability. This leads the total disk to be stabilized at a lower overall threshold than found endemic to the mid-plane. The lowered threshold is, namely, the threshold obtained from the 2D dispersion relation ($\S$~\ref{sec:2dstability}), which is either $\kappa^2/(4\pi G\rho_c)\approx0.3$, as determined by GLB in the case of infinitely extended non-periodic vertical perturbations ($\S$~\ref{sec:infGLB}), or the Toomre threshold $Q_T=1$ obtained in this work using coupled radial and vertical wave perturbations in the limit of negligible disk scale height $h$ ($\S$~\ref{sec:finWKB}).
The difference in the thresholds for partial and total 3D instability indicates that disks may be able to fragment at their mid-planes (above the Jeans length) even where the Toomre threshold is surpassed, as long as $Q_M<1$.
The instabilities that are seeded at the mid-plane grow rapidly, comparable to Toomre instabilities, and
with characteristic scales near the disk scale height in most scenarios of interest. If we equate the formed fragments with molecular clouds stabilized from within by gas pressure, their formation is predicted to be fast, with a rate of approximately $2\kappa$ and thus a characteristic timescale of roughly $t_{orb}/3$ (given the properties of nearby galaxy disks). This would make cloud formation compatible with fast destruction by early stellar feedback \citep{elm11,maclow17}.
Overall, considering the possibility of a broad variety of perturbations, the results of this study imply that pervasive gravitational instability is a characteristic of gas disks \citep[see also][]{elm11}, responsible for their rich multi-scale structure, the efficient conversion of ordered motion into turbulent motion and ultimately star formation.\\
Many thanks to the referee for a constructive, detailed review of the paper. Thanks also to Arjen van der Wel and the members of the PHANGS (http://phangs.org) `Large-scale Dynamics Processes' Science Working Group for their feedback.
\appendix
\section{The impact of a rotational lag on the conditions for instability}\label{sec:rotationallagappendix}
The main text exclusively considers perturbations for which the effect of a rotational lag is negligible. Here precise bounds on the perturbations that meet this criterion are determined at the mid-plane from the 3D dispersion relation and overall from the 2D dispersion relation.
For this calculation, the rotational lag in the perturbed radial velocity, which is proportional to
\begin{equation}
\frac{dV_c}{dz}=-\frac{1}{2\Omega}\frac{d}{dz}\frac{d\Phi}{dR}
\end{equation}
(since $d V_c^2/dz=2V_c\, dV_c/dz=d(-Rd\Phi/dR)/dz$), is retained and evaluated assuming that the potential is a separable function of $z$ and $R$. In this case, assuming that the disk is weakly self-gravitating and embedded in a background distribution with approximately constant density $\rho_b$, then
\begin{equation}
\frac{d}{dz}\frac{d\Phi_0}{dR}=z\frac{d \nu^2}{dR}\equiv z\frac{\nu^2}{R_\nu}
\end{equation}
where $d\Phi_0/dz=\nu^2 z$, $\nu^2=4\pi G\rho_{b}$ and $R_\nu$ is defined as the scale length of the variation in $\nu^2$ with radius.
\subsection{At the Mid-plane}
Consider a perturbation that is WKB-like at the mid-plane.
With the rotational lag term included, the continuity equation becomes
\begin{eqnarray}
0&=&(-\omega+m\Omega)\rho_1+\frac{(-\omega+m\Omega)}{\Delta}C_r\nonumber\\
&+&\frac{\nu^2L_z}{\Delta(-\omega+m\Omega)}\nonumber\\
&+&\frac{C_z}{(-\omega+m\Omega)}
\end{eqnarray}
after substituting in the expression for $v_{z,1}$ and setting
\begin{equation}
C_r=\left(\frac{-4\pi G\rho_1\rho_0}{k^2+k_z^2}+\rho_1\sigma_z^2\right)k^2
\end{equation}
and
\begin{equation}
L_z=-kk_z\frac{z}{R_\nu}\left(\frac{-4\pi G\rho_1\rho_0}{k^2+k_z^2}+\rho_1\sigma_z^2\right).\label{eq:lz}
\end{equation}
The 3D dispersion relation is again quadratic in $\omega^2$ (as in $\S$ \ref{sec:3dmidplane}), now with solution
\begin{equation}
\omega^2=\frac{\omega_{min}^2}{2}\left(1\pm\sqrt{1+4\frac{\nu^2 L_z+C_z\kappa^2}{\omega_{min}^4}}\right)\label{eq:laggrowth}
\end{equation}
in terms of $\omega_{min}^2$ defined in the main text. Instability can thus be identified once again from $\omega_{min}^2<0$, yielding an identical stability threshold to that determined in the absence of a rotational lag. However, now the condition $(C_z\kappa^2+\nu^2 L_z)>0$ must also be met. This yields a condition on $k_z$ for instability (substituting in the expression for $L_z$ defined in eq. [\ref{eq:lz}]), i.e.
\begin{equation}
\rho_1\kappa^2k_z^2\left(\frac{-4\pi G\rho_0}{k^2+k_z^2}+\sigma_z^2\right)>-kk_z\frac{z}{R_\nu}\rho_1\nu^2\left(\frac{-4\pi G\rho_0}{k^2+k_z^2}+\sigma_z^2\right)
\end{equation}
or
\begin{equation}
k_z>k z\frac{\nu^2}{\kappa^2}\frac{1}{R_{0}}\label{eq:kzconditionlag}
\end{equation}
where $R_{\nu}$ is approximated as $-R_0$, assuming that the background density falls off approximately exponentially with scale length $R_0$.
The above condition is most easily met precisely at the mid-plane ($z=0$), where the rotational lag vanishes. Elsewhere, it adds a negligible constraint when the background density distribution is a slowly decreasing function of $R$ such that $kR_0\gg1$, as is indeed assumed when invoking the WKB approximation in the radial direction.
At a small distance $z$ above the mid-plane, eq. (\ref{eq:kzconditionlag}) can be used to place a condition on $k$, given the accessory requirement $k_zh\ll1$, i.e.
\begin{equation}
k<\frac{1}{h}\frac{R_0}{z}\frac{\kappa^2}{\nu^2}.
\end{equation}
A rotational lag thus places a height-dependent minimum on the wavelength of the radial perturbations that can lead to instability, in addition to the condition on $k$ determined from identifying when $\omega_{min}^2<0$. The latter is the stronger constraint assuming $1/R_{\nu}$ is indeed small.
Notice that, according to eq. (\ref{eq:laggrowth}), the growth rates of perturbations can be slowed in the presence of a rotational lag, depending on vertical stability. Wherever $C_z>0$ and the vertical direction is unstable, $L_z<0$. As the lag term $\vert L_z\vert$ increases, $\omega^2$ decreases, until beyond the point $L_z<-C_z\kappa^2/\nu^2$ real solutions are no longer permitted.
For very large $d V_c/dz$, our adopted 3D WKB perturbations would no longer satisfy the equations of motion. Indeed, for large enough $\vert L_z\vert$, the perturbed radial velocity in eq. (\ref{eq:radvelocity}) is dominated less by self-gravity (and pressure) and more by the outward motion associated with moving up in the weakening potential.
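The interplay of the two sign conditions in eq. (\ref{eq:laggrowth}) can be illustrated numerically. The Python sketch below uses arbitrary illustrative values of $\omega_{min}^2$ and $S\equiv\nu^2L_z+C_z\kappa^2$ (not quantities computed in this work):

```python
import math

def omega2_branches(omega_min2, S):
    """Both roots of eq. (laggrowth),
    omega^2 = (omega_min^2 / 2) * (1 +/- sqrt(1 + 4 S / omega_min^4)),
    with S = nu^2 L_z + C_z kappa^2 (arbitrary illustrative units)."""
    arg = 1.0 + 4.0 * S / omega_min2**2
    if arg < 0:
        return None  # no real omega^2: a strong lag removes the WKB solutions
    root = math.sqrt(arg)
    return 0.5 * omega_min2 * (1.0 + root), 0.5 * omega_min2 * (1.0 - root)

# omega_min^2 < 0 together with S > 0: one branch has omega^2 < 0,
# i.e. a growing perturbation survives the lag
b_plus, b_minus = omega2_branches(-1.0, 0.5)
print(b_plus < 0 < b_minus)  # True

# S sufficiently negative (large |L_z| with L_z < 0): real solutions vanish
print(omega2_branches(-1.0, -0.5))  # None
```

The second call illustrates the endpoint described above, where a sufficiently strong lag leaves eq. (\ref{eq:laggrowth}) without real solutions.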
\subsection{In the Overall Disk}
The $z$-dependence of the lag term $L_z$ in the previous section means that even a non-negligible rotational lag has very little influence on the overall stability of the disk, since
\begin{equation}
\int_{-\infty}^{\infty}\nu^2L_z dz=0
\end{equation}
and
\begin{equation}
\int_{-h_1}^{h_1}\nu^2L_z dz\approx 0
\end{equation}
for either $h_1\gg h$ or for $h_1\ll h$.
Thus, the lag term drops from the 2D dispersion relation for infinite wave and non-wave perturbations and for all finite WKB perturbations, leaving the stability conditions exactly as determined in $\S$ \ref{sec:2dstability}.
Indeed, the variation in $\kappa$ with height above the mid-plane implied by the presence of a rotational lag introduces negligible change in the overall stability threshold in these cases.
As an illustration, take $\kappa=\sqrt{2}\Omega$ in the flat part of the rotation curve, which implies
\begin{eqnarray}
\frac{d\kappa^2}{dz}&=&2\sqrt{2}\frac{\Omega}{R}\frac{dV_c}{dz}\\
&\approx&\frac{\sqrt{2}}{R}\frac{d}{dR}\nu^2z
\end{eqnarray}
from which it can be estimated that
\begin{eqnarray}
\kappa^2(z)&=&\kappa^2(z=0)+\int \frac{d\kappa^2}{dz} dz\\
&=&\kappa^2(z=0)-\frac{\sqrt{2}}{R}\frac{z^2}{2}\frac{\nu^2}{R_0}
\end{eqnarray}
with $R_0$ as used in the previous section.
In this case, the perturbation-weighted $\kappa^2$ that would appear in the 2D dispersion relation is
\begin{eqnarray}
\bar{\kappa}^2&=&\frac{\int_{-\infty}^{\infty}\kappa^2(z)\rho_1 dz}{\int_{-\infty}^{\infty}\rho_1 dz}\\
&=&\kappa^2(z=0)-\frac{h^2\nu^2}{\sqrt{2}RR_0}\\
&=&\kappa^2(z=0)-\frac{\sigma^2}{\sqrt{2}RR_0}
\end{eqnarray}
The first term by far dominates in the gas disks of nearby galaxies since $V_c\gg\sigma$. Even in extremely puffy disks with $V_c/\sigma=2$, $\bar{\kappa}^2$ easily remains within a factor of 2 of $\kappa^2(z=0)$ within $4R_0$. (Note, though, that such puffy-disk scenarios are unlikely to be a good match for the weakly self-gravitating assumption adopted for this approximation.)
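The factor-of-2 statement can be checked with the numbers quoted above (a purely arithmetic sketch; $V_c/\sigma=2$ and $R=4R_0$ are the illustrative values from the paragraph):

```python
import math

# Numbers quoted in the text: V_c / sigma = 2 (extremely puffy) and R = 4 R_0
Vc_over_sigma = 2.0
R_over_R0 = 4.0

# fractional size of the correction term sigma^2 / (sqrt(2) R R_0)
# relative to kappa^2(z=0) = 2 V_c^2 / R^2
correction = (1.0 / Vc_over_sigma**2) * R_over_R0 / (2.0 * math.sqrt(2.0))
ratio = 1.0 - correction  # kappa_bar^2 / kappa^2(z=0)
print(round(ratio, 3))    # 0.646, i.e. within a factor of 2 of kappa^2(z=0)
```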
\section{Stability of (Less) Tightly-wound Non-axisymmetric Perturbations}\label{sec:nonaxisym}
In this section we appeal to linear theory to examine the stability of disks to non-axisymmetric ($m\neq0$) perturbations as considered by \cite{GLBb, JT66, laubertin, bertinplus, grivgedalin}. Introducing the azimuthal forces associated with these perturbations involves a weakening of the requirement that $kR\gg1$ normally adopted with the WKB approximation. The result is a picture of the destabilizing influence of azimuthal forces that lead to growth, in the manner ultimately described by swing amplification \citep{GLBb, JT66,toomre81}. A basic diagnostic of this behavior is an increase in the $Q_T$ threshold for stability, as shown in a number of studies. In order to compare this change to the increase from $Q_T=1$ to $Q_T=2/(\alpha f_g^{1/2})$ predicted endemic to the mid-plane ($\S\S$ \ref{sec:3dmidplane} and \ref{sec:smallh1}), the calculation of the $Q_T$ threshold for $m>0$ is reproduced here, adopting the 3D perturbations and framework described in the main text.
The first steps involve substituting the full expressions for $v_{r,1}$ and $v_{\theta,1}$ in eqs. (\ref{eq:radvelocity}) and (\ref{eq:phivelocity}) into the continuity equation (eq.[\ref{eq:fullcontinuity}]). Here, the perturbed pressure term is written in terms of the enthalpy $\eta_1$, i.e. setting $\eta_1=\sigma^2\rho_1/\rho_0$.
\begin{eqnarray}
0&=&-(\omega-m\Omega)\frac{\rho_1}{\rho_0}\nonumber\\
&-&\frac{(\Phi_1+\eta_1)}{\Delta}\Big[(\omega-m\Omega)k^2+\frac{2m\Omega}{R^2}\left(1+R\frac{\partial\ln\Sigma_0}{\partial R}\right)\nonumber\\
&+&2m\frac{d\Omega}{dR}+i\frac{k}{R}(\omega-m\Omega)\left(1+R\frac{\partial\ln\Sigma_0}{\partial R}\right)-i\frac{k}{R}2m\Omega\Big]\nonumber\\
&-&\frac{(\Phi_1+\eta_1)}{\Delta}\Big[\frac{m^2}{R^2}(\omega-m\Omega)-2iBk^2\Big]\nonumber\\
&+&\frac{C_z}{\rho_0(-\omega+m\Omega)}\label{eq:2Ddispwithm}
\end{eqnarray}
where the second and third terms originate with the radial and azimuthal components of the velocity, respectively. The rotational lag terms have been neglected (see Appendix \ref{sec:rotationallagappendix}).
From this point, \cite{laubertin} argue that the out-of-phase terms arising with the in-plane imaginary parts of eq. (\ref{eq:2Ddispwithm}) are not important for stability and growth and can be neglected. The continuity equation thus becomes
\begin{eqnarray}
0=\frac{\rho_1}{\rho_0}\Delta&+&\left(k^2+\frac{m^2}{R^2}\right)(\Phi_1+\eta_1)\nonumber\\
&+&\left[2m\frac{\Omega}{R(\omega-m\Omega)}\left(\frac{d\ln\Omega}{dR}-\frac{d\ln\Sigma_0}{d R}\right)\right](\Phi_1+\eta_1)\nonumber\\
&+&\frac{\Delta C_z}{\rho_0(-\omega+m\Omega)^2}\label{eq:full3dnonaxi}
\end{eqnarray}
Another simplification involves continuing to require that the characteristic scale of variation in the perturbation's amplitude is large compared to $1/k$ and tied to the unperturbed disk. Following \cite{morozov}, then, we assume $kL\gg1$ where $L=\textrm{min}(\vert d\ln\Omega/d R\vert^{-1},\vert d\ln\Sigma_0/d R\vert^{-1})$, such that the term in square brackets can be neglected.
Before examining the 3D dispersion relation at the mid-plane in $\S$ \ref{sec:3dnonaxi}, for reference the 2D dispersion relation derived by adopting a delta function perturbation $\rho_1=\Sigma_1\delta(z)$ is first presented below. In this case it is typical to let $\Phi_1=(2\pi G\Sigma_1/k) e^{-kz}$ (see BT), neglecting disk thickness.
\subsection{2D Stability using Delta Function Perturbations}\label{sec:2ddelta}
In the absence of a perturbation that entails explicit vertical motion, integration of the continuity equation yields the 2D dispersion relation
\begin{equation}
(\omega-m\Omega)^2=\kappa^2+\left(-\frac{2\pi G\Sigma_0}{k}+\sigma_r^2\right)\left(k^2+\frac{m^2}{R^2}\right).\label{eq:nonaxidispersion}
\end{equation}
Now with the requirement $(\omega-m\Omega)^2<0$ sufficient for identifying the condition $\omega^2<0$ for growth, the conditions on $k$ for instability can be identified from
\begin{equation}
\kappa^2-2\pi G\Sigma_0 k\left(1+\frac{m^2}{k^2R^2}\right)+\sigma_r^2 k^2\left(1+\frac{m^2}{k^2R^2}\right)<0.\label{eq:nonaxigrowth}
\end{equation}
At this stage, it is typical to effectively assume that the pitch angle $i_p$ of the perturbation is unvarying, such that the quantity $m^2/(kR)^2=\tan^2{i_p}$ is roughly constant. Thus eq. (\ref{eq:nonaxigrowth}) can be easily solved for $k$, i.e.
\begin{equation}
k<\frac{\pi G\Sigma_0}{\sigma_r^2}\left[1\pm\left(1-\frac{Q_T^2}{\left(1+\tan^2(i_p)\right)}\right)^{1/2}\right]
\end{equation}
yielding the stability criterion
\begin{equation}
Q_T>\left(1+\tan^2(i_p)\right)^{1/2}.\label{eq:Qnonaxi}
\end{equation}
This is identical to the $Q_T$ threshold derived by \cite{grivgedalin} to lowest order in $m^2/(kR)^2$ in the case of a flat rotation curve. (Note that eq. (\ref{eq:nonaxigrowth}) in the limit $kR\ll m$ to lowest order in $k$ implies that the disk is always unstable and there is no $Q_T$ threshold for exceptionally loose perturbations.)
Eq. (\ref{eq:Qnonaxi}) is also equivalent to the change in $Q_T$ threshold found in the presence of non-axisymmetric structure by \cite{laubertin} \citep[and][]{bertinplus} when substituting the value of $k$ associated with the most unstable mode, i.e. $k=2\pi G\Sigma_0/\sigma_r^2=2\kappa/(Q_T\sigma_r)$. In this case, stability requires
\begin{equation}
Q_T>\left(1+\frac{m^2\sigma_r^2 }{4\kappa^2R^2}\right)^{1/2}
\end{equation}
to lowest order in $1/(\kappa R)$ or
\begin{equation}
Q_T>\left(1+\frac{m^2\sigma_r^2 }{8V_c^2}\right)^{1/2}\label{eq:QnonaxiLB}
\end{equation}
when the rotation curve is flat and $\kappa=\sqrt{2}V_c/R$.
The $Q_T$ threshold is thus generally raised for tightly wound non-axisymmetric structures. However, the increase estimated here is negligible for most scenarios, and the $Q_T$=1 threshold mostly remains accurate \citep{BT}.
In the stellar disks of nearby galaxies with $\sigma_r/V_c\sim0.2$ or lower, the change to the $Q_T$ threshold is only appreciable for the very loosest perturbations ($Q_T\lesssim1.2$ for all $m<10$). In gas disks with even lower $\sigma_r/V_c\lesssim0.1$, the stability threshold is raised to $Q_T\sim1.5$ only for $m\gtrsim 30$ (although eq. (\ref{eq:Qnonaxi}) loses its accuracy for such loose perturbations).
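These statements follow directly from eq. (\ref{eq:QnonaxiLB}); a short numerical check (illustrative Python):

```python
import math

def QT_threshold(m, sigma_over_Vc):
    """Stability threshold of eq. (QnonaxiLB) for a flat rotation curve:
    Q_T > sqrt(1 + m^2 sigma_r^2 / (8 Vc^2))."""
    return math.sqrt(1.0 + m**2 * sigma_over_Vc**2 / 8.0)

# stellar disks, sigma_r / Vc ~ 0.2: the threshold stays below ~1.2 for m < 10
print(max(QT_threshold(m, 0.2) for m in range(1, 10)) < 1.2)  # True

# gas disks, sigma_r / Vc ~ 0.1: the threshold only reaches ~1.5 near m ~ 30
print(round(QT_threshold(30, 0.1), 2))  # 1.46
```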
\subsection{3D (In)stability at the Mid-plane}\label{sec:3dnonaxi}
Now consider a 3D perturbation that is WKB-like near the mid-plane with non-axisymmetry in the plane such that
\begin{equation}
\Phi_1=\frac{4\pi G\rho_1}{k^2+\frac{m^2}{R^2}+k_z^2}.
\end{equation}
(from Poisson's equation). Now substituting eq. (\ref{eq:czmidplane}) into eq. (\ref{eq:full3dnonaxi}), the 3D dispersion relation is once again quadratic in $\omega^2$, but now with the addition of the in-plane non-axisymmetric terms. Following the arguments in $\S$~\ref{sec:3dmidplane}, the condition for instability in this case becomes
\begin{eqnarray}
0&>&\kappa^2+\left(-\frac{4\pi G\rho_0}{k^2+\frac{m^2}{R^2}+k_z^2}+\sigma^2\right)\left(k^2+\frac{m^2}{R^2}\right)\nonumber\\
&+&\left(-\frac{4\pi G\rho_0}{k^2+\frac{m^2}{R^2}+k_z^2}+\sigma^2\right)k_z^2
\end{eqnarray}
or
\begin{equation}
0>\kappa^2-4\pi G\rho_0+\sigma_r^2k^2+\sigma_r^2\frac{m^2}{R^2}+\sigma_z^2k_z^2.
\end{equation}
Once again adopting the assumption of a fixed pitch angle, instability is possible as long as
\begin{equation}
k^2<\frac{k_J^2(1-Q_M-k_z^2h^2)}{\left(1+\tan^2{(i_p)}\right)}
\end{equation}
provided that $Q_M<1$ in the limit $k_zh\ll1$.
Dropping the fixed $i_p$ assumption in practice yields a similar $Q_M$ threshold. Instability would proceed where
\begin{equation}
k^2<k_J^2(1-Q_M-k_z^2h^2)-\frac{m^2}{R^2}
\end{equation}
suggesting the stability threshold $Q_M=1-m^2h^2/R^2$ (in the limit $k_zh\ll1$), which is equivalent to $Q_M\approx 1$ for thin gas disks. The introduction of non-axisymmetry is thus of negligible impact on the mid-plane stability threshold.
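For illustrative thin-disk numbers (an assumed $h/R=0.01$, not a value quoted in the text), the non-axisymmetric correction to the mid-plane threshold is indeed tiny:

```python
# Non-axisymmetric correction to the mid-plane threshold,
# Q_M = 1 - m^2 h^2 / R^2, for an assumed (illustrative) h/R
h_over_R = 0.01
thresholds = {m: 1.0 - m**2 * h_over_R**2 for m in (1, 2, 4, 8)}
print(round(thresholds[8], 4))  # 0.9936, i.e. Q_M ~ 1 even for m = 8
```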
|
Title:
Kinematic data rebuild the Nuclear star cluster as the most metal rich region of the Galaxy |
Abstract: The Galactic centre (GC) is located at only 8 kpc from Earth and constitutes
a unique template to understand Galactic nuclei. Nevertheless, the high
crowding and extinction towards the GC hamper the study of its main stellar
components, the nuclear stellar disc (NSD) and the nuclear star cluster (NSC).
Recent work has suggested that the NSD and the NSC can be distinguished along
the line of sight towards the NSC via the different extinction of their stars.
This motivated us to analyse the proper motion, radial velocity, and the
metallicity distributions of the different extinction groups. We use
photometric, kinematic, and metallicity data to distinguish between probable
NSD and NSC stars in a region centred on the NSC. We detected two different
extinction groups of stars and obtained significantly different proper motion
distributions for each of them, in agreement with the expected kinematics for
the NSD and the NSC. We derived radial velocity maps that appear to be
different for the NSD and the NSC. We also found different metallicities for
each of the components, with the largest one measured for the most extinguished
group of stars. We obtained that the metallicity distribution of each
extinction group is best fitted by a bimodal distribution, indicating the
presence of two metallicity components for each of them (a broad one slightly
below solar metallicity, and a more metal rich narrower one, that is largest
for the high extinction group of stars). We conclude that both extinction
groups are distinct GC components with different kinematics and metallicity,
and correspond to the NSD and the NSC. Therefore, it is possible to distinguish
them via their different extinction. The high mean metallicity,
$[M/H]\sim0.3$\,dex, obtained for the NSC metal rich stars, supports that the
NSC is arguably the most metal rich region of the Galaxy.
| https://export.arxiv.org/pdf/2208.13218 |
\title{Kinematic data rebuild the Nuclear star cluster as the most metal rich region of the Galaxy}
\author{F. Nogueras-Lara
\inst{1}
}
\institute{
Max-Planck Institute for Astronomy, K\"onigstuhl 17, 69117 Heidelberg, Germany
\email{nogueras@mpia.de}
}
\date{}
\abstract
{The Galactic centre (GC) is located at only 8 kpc from Earth and constitutes a unique template to understand Galactic nuclei. Nevertheless, the high crowding and extinction towards the GC hamper the study of its main stellar components, the nuclear stellar disc (NSD) and the nuclear star cluster (NSC).}
{Recent work has suggested that the NSD and the NSC can be distinguished along the line of sight towards the NSC via the different extinction of their stars. This motivated us to analyse the proper motion, radial velocity, and the metallicity distributions of the different extinction groups.}
{We use photometric, kinematic, and metallicity data to distinguish between probable NSD and NSC stars in a region centred on the NSC.}
{We detected two different extinction groups of stars and obtained significantly different proper motion distributions for each of them, in agreement with the expected kinematics for the NSD and the NSC. We derived radial velocity maps that appear to be different for the NSD and the NSC. We also found different metallicities for each of the components, with the largest one measured for the most extinguished group of stars. We obtained that the metallicity distribution of each extinction group is best fitted by a bimodal distribution, indicating the presence of two metallicity components for each of them (a broad one slightly below solar metallicity, and a more metal rich narrower one, that is largest for the high extinction group of stars).}
{We conclude that both extinction groups are distinct GC components with different kinematics and metallicity, and correspond to the NSD and the NSC. Therefore, it is possible to distinguish them via their different extinction. The high mean metallicity, $[M/H]\sim0.3$\,dex, obtained for the NSC metal rich stars, supports that the NSC is arguably the most metal rich region of the Galaxy.}
\keywords{Galaxy: nucleus -- Galaxy: centre -- Galaxy: structure -- dust, extinction -- infrared: stars -- proper motions
}
\section{Introduction}
The Milky Way's centre is the closest galaxy nucleus and the only one where we can resolve individual stars down to milliparsec scales. Besides the supermassive black hole, Sagittarius A*, two main stellar structures outline the Galactic centre (GC): (1) the nuclear star cluster (NSC), a massive stellar cluster \citep[$\sim2.5\times10^7$\,M$_\odot$, e.g.][]{Launhardt:2002nx,Schodel:2014bn,Feldmeier:2014kx} placed at the heart of the Galaxy with an effective radius of $\sim 5$\,pc \citep[e.g.][]{Graham:2009lh,Schodel:2011ab,Feldmeier-Krause:2017kq,gallego-cano2019}, and (2) the nuclear stellar disc (NSD), a much larger stellar structure \citep[with a radius $\sim200$\,pc, e.g.][]{Launhardt:2002nx,Nishiyama:2013uq,gallego-cano2019,Sormani:2020aa,Sormani:2022wv} surrounding the NSC, and partially overlapping with the dense gas from the central molecular zone \citep[e.g.][]{Henshaw:2022vl}.
In spite of being placed at only 8\,kpc from Earth \citep[e.g.][]{Gravity-Collaboration:2018aa,Do:2019aa}, the study of the GC is hampered by the high extinction and the extreme source crowding \citep[e.g.][]{Nishiyama:2008qa,Schodel:2010fk,Nogueras-Lara:2018aa,Nogueras-Lara:2020aa,Nogueras-Lara:2021wj}. Therefore, disentangling the GC components and analysing their main properties is a formidable challenge. Recent studies point towards a different stellar population and formation scenario for the NSC and the NSD \citep{Nogueras-Lara:2019ad,Schodel:2020aa,Nogueras-Lara:2021wm}. In this way, in spite of being dominated by old stars (more than 80\,\% of their stellar population is older than $8$\,Gyr), the NSC contains an intermediate age stellar population \citep[up to $15$\,\% of the stellar mass was formed $\sim 3$\,Gyr ago,][]{Schodel:2020aa}, that is not present in the NSD. Analogously, a significant mass fraction of the NSD stars ($\sim5$\,\%) formed about $1$\,Gyr ago, corresponding to a time during which the NSC experienced no significant star formation activity \citep{Pfuhl:2011uq,Nogueras-Lara:2019ad,Schodel:2020aa}. On the other hand, the metallicity of the NSC stars seems to be higher than that of the NSD \citep{Schultheis:2021wf,Feldmeier-Krause:2022vm}.
Recently, \citet{Nogueras-Lara:2021wm} have shown that NSD and NSC stars are subject to significantly different reddening. Here, we follow up on their work by including kinematic and spectroscopic data. We find that the kinematics and metallicity distributions are different for each of the extinction groups, and that they are in agreement with the expected results for the NSD and the NSC.
\section{Data}
\subsection{Photometry}
The photometric observations used in this work were obtained with the HAWK-I instrument \citep[][]{Kissler-Patig:2008fr}, placed at the ESO Very Large Telescope in Chile (UT4). They are $J$ and $K_s$ data of a field of $\sim 2.8' \times 4.9'$ centred on the NSC (Fig.\,\ref{GNS}, coordinates 17$^h$ 45$^m$ 38$^s$, -29$^\circ$ 00$'$ 12$''$). The $J$ data belong to the GALACTICNUCLEUS survey \citep[GNS, ][]{Nogueras-Lara:2018aa,Nogueras-Lara:2019aa}, a high-angular resolution ($\sim 0.2''$) $JHK_s$ catalogue specially designed to observe the GC. The $K_s$-band data were obtained in 2013 and form part of a pilot study for the GNS survey \citep{Nogueras-Lara:2018aa} that was obtained under excellent observing conditions ($K_s$ seeing $\sim 0.4''$), allowing photometry $\sim 1$\,mag deeper than for the corresponding GNS field (see Table in \citealt{Nogueras-Lara:2018aa}).
To correct potential saturation problems affecting bright stars in $K_s$ \citep[e.g.][]{Nogueras-Lara:2019aa}, we used the SIRIUS/IRSF survey \citep{Nagayama:2003fk,Nishiyama:2006tx} to replace the $K_s$ photometry of stars with $K_s< 11.5$\,mag \citep[for further details, see ][]{Nogueras-Lara:2019ad}.
\subsection{Proper motions}
We used a publicly available proper motion catalogue of the GC \citep{Shahzamanian:2019aa,Shahzamanian:2021wu} that overlaps with the region analysed in this paper. This catalogue was specifically designed to study the GC and constitutes an unprecedented kinematic data set for the NSD region. The proper motions were computed using two data sets (the GNS $H$-band data, \citealt{Nogueras-Lara:2018aa,Nogueras-Lara:2019aa}, and the HST Paschen-$\alpha$ survey, \citealt{Wang:2010fk,Dong:2011ff}), with a timeline of $\sim 7-8$\,years between them. The high angular resolution of the used data sets \citep[$\sim0.2''$,][]{Shahzamanian:2021wu} allows the catalogue to supersede previous surveys covering the same area \citep[e.g. the VIRAC survey,][]{Smith:2018aa}, which are limited to a narrow magnitude range in the analysed region, due to saturation and seeing-limited resolution \citep{Smith:2018aa, Shahzamanian:2021wu}.
Given the complex procedure of matching the data from the two different epochs, their different tiling patterns, and the different size of the detectors, the proper motion catalogue does not have a homogeneous coverage across its surveyed field \citep[see Fig.\,2 in ][]{Shahzamanian:2021wu}. Nevertheless, we checked that $\gtrsim 80\,\%$ of the region analysed in this study is properly covered as shown in Fig.\,\ref{GNS}.
The computed proper motions are relative, not absolute: they were derived assuming zero average motion for the stars in the reference catalogue. This implies only a constant offset between the catalogue proper motions and those in an absolute reference frame. \citet{Shahzamanian:2021wu} carried out a detailed comparison of their catalogue with absolutely calibrated work \citep{Libralato:2021td} and concluded that the proper motions agree very well once the reference frame is converted. In any case, this paper does not pursue the calculation of absolute proper motions, but rather the analysis of the proper motion distributions of the NSC and the NSD. Given that we use the same pointings for the NSC and the NSD (instead of comparing different GC regions), the use of relative proper motions does not pose any problem.
\subsection{Metallicity and radial velocity}
\label{metal}
We used metallicities and radial velocities derived from KMOS \citep{Sharples:2013aa} medium-resolution spectra of around 1,000 stars in the NSC region (see Fig.\,\ref{GNS}), obtained by \citet{Feldmeier-Krause:2017kq,Feldmeier-Krause:2020uv}. They derived the radial velocities using the IDL routine {\it pPXF} \citep{Cappellari:2004us}, with the high-resolution spectra of \citet{Wallace:1996vy} as templates. They then applied a full spectral fitting method with PHOENIX models \citep{Husser:2013uu} to derive the metallicities and other stellar parameters. The absolute metallicities were calibrated using existing empirical spectra for $[M/H] < 0.3$\,dex, so metallicities $[M/H] > 0.3$\,dex might be overestimated. To check this, \citet{Schultheis:2021wf} re-derived the metallicities from the data of \citet{Feldmeier-Krause:2020uv} applying an alternative method based on the CO and NaI indices and on empirical spectra calibrated for $[M/H] < 0.6$\,dex \citep[for further details on the method see][]{Fritz:2020aa}. They concluded that there is no systematic bias for stars with $[M/H] < 0.5$\,dex, whereas stars with $[M/H] > 0.5$\,dex show significantly higher metallicities when applying the methodology of \citet{Feldmeier-Krause:2017kq}. Moreover, this metallicity cut at $[M/H] = 0.5$\,dex agrees with the most metal-rich stars detected in the NSC using high-resolution spectroscopy \citep[e.g. ][]{Do:2015ve,Rich:2017rm,Thorsbro:2020uq}.
\section{Proper motion analysis}
\subsection{Disentangling the stellar populations}
Recent work by \citet{Nogueras-Lara:2021wm} used $HK_s$ photometry to show that the stellar populations of the NSC and the NSD can be identified along the line of sight towards the NSC via their significantly different extinction \citep[$A_{Ks\ NSD} \sim 1.7$\,mag and $A_{Ks\ NSC} \sim 2.3$\,mag, ][]{Nogueras-Lara:2021wm}. In this work, we use $J$ data instead of $H$ to improve the identification of the NSD and the NSC: the extinction is higher at shorter wavelengths \citep[e.g.][]{Nogueras-Lara:2021wj}, so $J$-band data increase the difference between the two stellar populations. Figure\,\ref{CMD} shows the colour-magnitude diagram (CMD) $K_s$ versus $J-K_s$, in which we visually detect two red clump \citep[stars on their core helium burning sequence, ][]{Girardi:2016fk} bumps with different extinctions, corresponding to the NSD (low extinction group) and the NSC (high extinction group), according to \citet{Nogueras-Lara:2021wm} (see also Fig.\,16 of \citealt{Nogueras-Lara:2018aa}).
To analyse the proper motion distribution of each extinction group, we searched for common stars between the HAWK-I photometric data and the proper motion catalogue \citep{Shahzamanian:2021wu}. Figure\,\ref{uncer} shows a CMD $K_s$ versus $J-K_s$ of the stars with available proper motions in the field. We found $\sim 7,000$ common stars and checked that more than 90\,\% of them have uncertainties below 0.7\,mas/yr in both proper motion components ($\mu_l$ and $\mu_b$, where $l$ and $b$ refer to Galactic longitude and latitude, respectively), as indicated in the right panels of Fig.\,\ref{uncer}.
We defined two boxes in the CMD (Fig.\,\ref{uncer}, left panel) according to the colour cut at $J-K_s\sim5$\,mag that divides the two red clump bumps with different extinction (Fig.\,\ref{CMD}). The colour cut applied to distinguish between the extinction groups accounts for the predominantly old stellar population ($\gtrsim 80$\,\% of the stellar mass) that dominates the NSD and the NSC \citep{Nogueras-Lara:2019ad,Schodel:2020aa,Nogueras-Lara:2021wm}. To reduce the confusion between the two extinction groups, the shape of the boxes defined in Fig.\,\ref{uncer} follows the slope of an old stellar isochrone \citep[for further details see Fig.\,4 in ][]{Nogueras-Lara:2018ab}. Moreover, the blue cut of the first box ($J-K_s\sim4$\,mag) was chosen to remove the foreground stellar population, which mainly belongs to the Galactic disc (and to some extent to the Galactic bulge/bar) and presents a significantly lower extinction than GC stars \citep{Nogueras-Lara:2021uz}. We also defined a brightness cut for each box at $K_s=14.2$\,mag, according to the detection limit of the proper motion catalogue.
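A minimal sketch of this selection, assuming rectangular boxes for simplicity (the actual boxes follow the slope of an old stellar isochrone) and using the cut values quoted above:

```python
import numpy as np

def classify_extinction_groups(j, ks, blue_cut=4.0, colour_cut=5.0, ks_cut=14.2):
    """Assign stars to the low extinction (NSD) and high extinction (NSC)
    groups from the Ks vs J-Ks CMD. The vertical cuts are the values quoted
    in the text; the rectangular boxes are an illustrative simplification."""
    colour = np.asarray(j) - np.asarray(ks)
    detected = np.asarray(ks) < ks_cut          # proper motion detection limit
    foreground = colour < blue_cut              # Galactic disc (and bulge/bar)
    nsd = detected & (colour >= blue_cut) & (colour < colour_cut)
    nsc = detected & (colour >= colour_cut)
    return nsd, nsc, foreground
```

Stars failing the brightness cut are excluded from both groups, mirroring the detection limit of the proper motion catalogue.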
\subsection{Proper motion distribution}
\label{GMM_method}
Figure\,\ref{proper} compares the proper motion distributions of the two extinction groups (Fig.\,\ref{uncer}). We observed a clearly different distribution of the proper motions parallel to the Galactic plane ($\mu_l$) between the two groups: stars belonging to the low extinction group present larger $\mu_l$ than those from the high extinction group. On the other hand, the distribution of the proper motion component perpendicular to the Galactic plane ($\mu_b$) appears to be similar for both groups.
We further analysed the proper motion distribution using the SCIKIT-LEARN Python function GaussianMixture \citep[GMM, ][]{Pedregosa:2011aa} to model the underlying probability density function as a combination of Gaussians. First, we applied the Akaike information criterion \citep[AIC, ][]{Akaike:1974aa} to determine the number of Gaussian components that best describes the data. Following the results of \citet{Shahzamanian:2021wu} for the NSD, we tried up to three Gaussian components for the proper motion component parallel to the Galactic plane ($\mu_l$), and two for the perpendicular component ($\mu_b$). We found that the $\mu_l$ distribution of the low extinction group is best described by the combination of two Gaussians, whereas a three-Gaussian model is favoured for the high extinction group. The $\mu_b$ distribution of both groups is similar and best fitted by a single Gaussian. Figure\,\ref{GMM} shows the obtained results. To estimate the parameters of each Gaussian component and their uncertainties, we used a jackknife-like approach, repeating the GMM modelling 1,000 times and randomly dropping $20$\,\% of the stars in each iteration. Table \ref{proper_motions} shows the obtained results.
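The AIC-based model selection and the jackknife-like uncertainty estimation can be sketched with scikit-learn as follows; the synthetic distributions and the reduced iteration counts below are illustrative placeholders, not the catalogue values:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def best_gmm(x, max_components=3, n_init=5):
    """Fit 1-D GMMs with 1..max_components Gaussians and select the model
    that minimises the Akaike information criterion (AIC)."""
    X = np.asarray(x).reshape(-1, 1)
    fits = [GaussianMixture(n_components=k, n_init=n_init, random_state=0).fit(X)
            for k in range(1, max_components + 1)]
    return min(fits, key=lambda g: g.aic(X))

def jackknife_gmm(x, n_iter=1000, drop_frac=0.2, n_components=2):
    """Jackknife-like uncertainties: refit the GMM many times, each time
    randomly dropping a fraction of the stars, and average the fitted means."""
    x = np.asarray(x)
    means = []
    for _ in range(n_iter):
        keep = rng.random(x.size) > drop_frac
        g = GaussianMixture(n_components=n_components, random_state=0).fit(
            x[keep].reshape(-1, 1))
        means.append(np.sort(g.means_.ravel()))
    means = np.array(means)
    return means.mean(axis=0), means.std(axis=0)
```

The same pattern applies to any of the per-component parameters (amplitudes and dispersions) by collecting `g.weights_` and `g.covariances_` alongside the means.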
\begin{table*}
\caption{Results from the GMM analysis of the proper motion distribution.}
\label{proper_motions}
\begin{center}
\def\arraystretch{1.3}
\setlength{\tabcolsep}{3.8pt}
\begin{tabular}{c|ccc|ccc}
\multicolumn{1}{c}{} & $A_l$ & $\mu_l$ & $\sigma_{\mu_l}$ & $A_b$ & $\mu_b$ & $\sigma_{\mu_b}$\tabularnewline
\multicolumn{1}{c}{} & (normalised units) & (mas/yr) & (mas/yr) & (normalised units) & (mas/yr) & (mas/yr)\tabularnewline
\hline
\hline
NSD & 0.61 $\pm$ 0.01 & 2.25 $\pm$ 0.10 & 1.94 $\pm$ 0.05 & - & -0.09 $\pm$ 0.05 & 2.39 $\pm$ 0.04\tabularnewline
& 0.39 $\pm$ 0.01 & 0.23 $\pm$ 0.21 & 3.85 $\pm$ 0.16 & & & \tabularnewline
\hline
NSC & 0.30 $\pm$ 0.01 & 1.80 $\pm$ 0.06 & 2.60 $\pm$ 0.05 & - & 0.01 $\pm$ 0.02 & 2.58 $\pm$ 0.02\tabularnewline
& 0.418 $\pm$ 0.003 & -0.36 $\pm$ 0.04 & 2.12 $\pm$ 0.03 & & & \tabularnewline
& 0.29 $\pm$ 0.01 & -2.50 $\pm$ 0.05 & 2.84 $\pm$ 0.08 & & & \tabularnewline
\end{tabular}
\end{center}
\footnotesize
\textbf{Notes.} $A_i$, $\mu_i$, and $\sigma_i$ indicate the amplitude, the mean value, and the standard deviation of each of the components of the GMM modelling, where the subindex $i$ indicates Galactic longitude ($i=l$) or latitude ($i=b$).
\end{table*}
\subsection{Discussion}
\label{dis}
We found that the $\mu_l$ distributions of the two extinction groups are significantly different. Their stellar populations therefore have different kinematics, reinforcing that they correspond to different GC components, which we identify as the NSD and the NSC. We attribute the two Gaussian components of the low extinction group to the NSD stellar population located in front of the NSC. The Gaussian component with $\mu_l = 2.25\pm0.10$\,mas/yr corresponds to the NSD stellar population moving eastwards, and agrees within the uncertainties with the value obtained by \citet{Shahzamanian:2021wu} when analysing the NSD kinematics ($\mu_l = 1.99\pm0.13$\,mas/yr). On the other hand, the secondary Gaussian ($\mu_l = 0.23\pm0.21$\,mas/yr) is probably due to stars belonging to the Galactic bulge/bar, which present a broader distribution, in agreement with previous work \citep{Clarkson:2008aa,Kunder:2012wn,Soto:2014ww,Shahzamanian:2021wu}. A third Gaussian component corresponding to the stellar population moving westwards \citep[e.g.][]{Shahzamanian:2021wu} is not visible because of the presence of the NSC, given its high stellar density and the increase of interstellar reddening towards the far edge of the NSD.
We interpret the three Gaussian components detected in the $\mu_l$ distribution of the high extinction group as the result of the rotation of the NSC, which produces two main peaks due to stars moving eastwards ($\mu_l = 1.80\pm0.06$\,mas/yr) and westwards ($\mu_l = -2.50\pm0.05$\,mas/yr). The third peak, centred on $\mu_l = -0.36\pm0.04$\,mas/yr, is probably due to the differential rotation of the NSC: stars rotating more slowly than the eastward- and westward-moving populations that produce the two main peaks likely give rise to the central one. This is compatible with a radial dependence of the rotation velocity, which was measured to be smaller ($\sim 0.5$\,mas/yr) for the innermost parsec of the NSC \citep{Trippe:2008it,Schodel:2009zr}. Moreover, some contamination from the NSD and the Galactic bulge/bar might also contribute to the observed distribution. However, given the higher density of stars in the NSC compared to the NSD, in combination with the significantly larger extinction implied by the applied colour cut, we expect the contamination from the Galactic bulge/bar to be less important for this stellar population than for the NSD.
The fact that the third peak is not centred on zero is probably because the proper motions in the catalogue were computed as relative ones, assuming that the mean proper motion of all the stars in a given field is zero \citep{Shahzamanian:2021wu}. Due to the presence of the NSC in the studied region, it is not possible to observe stars from the far side of the NSD, and thus the reference frame for $\mu_l$ is dominated by a majority of stars moving eastwards. Hence, the positive component of $\mu_l$ is underestimated, whereas the negative one is overestimated. This scenario is consistent with the larger absolute value of $\mu_l$ obtained for the stars moving westwards compared to the ones moving eastwards ($\mu_l = -2.50\pm0.05$\,mas/yr and $1.80\pm0.06$\,mas/yr, respectively). Indeed, assuming that the $\mu_l$ component of the proper motions is shifted by $+0.36\pm0.04$\,mas/yr, the components due to the rotation of the NSC become equal in absolute value within the uncertainties, and the third Gaussian peak becomes centred on zero.
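This consistency check is simple arithmetic on the fitted peaks; a minimal sketch using the values from Table\,\ref{proper_motions}:

```python
# GMM peaks of the NSC mu_l distribution (mas/yr), from the table above.
mu_east, mu_centre, mu_west = 1.80, -0.36, -2.50

# Assume the relative reference frame is offset by the central peak:
# shifting all values by +0.36 mas/yr should leave symmetric rotation
# peaks and a central peak exactly at zero.
shift = -mu_centre
east_corr = mu_east + shift    # ~2.16 mas/yr
west_corr = mu_west + shift    # ~-2.14 mas/yr

# Symmetric within the quoted ~0.05 mas/yr uncertainties on each peak.
assert abs(east_corr + west_corr) < 0.1
assert mu_centre + shift == 0.0
```

The residual asymmetry (2.16 vs. 2.14 mas/yr) is well within the combined uncertainty of the two fitted peaks.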
Analysing the $\mu_l$ distribution of the NSC for different colour cuts, we checked that the westward-moving stellar population is more extinguished. This is compatible with the lower number of stars in this Gaussian component obtained in the GMM modelling (Table\,\ref{proper_motions}), which is probably due to extinction by dust within the NSC \citep{Chatzopoulos:2015uq}. We therefore find that the direction of rotation is the same for both the NSC and the NSD, in agreement with previous work \citep[e.g.][]{Feldmeier:2014kx,2015ApJ...812L..21S,Shahzamanian:2021wu}. Moreover, the values obtained for the eastward-moving components of the NSD and the NSC indicate a faster rotation of the NSD compared to the NSC, as expected \citep[e.g. Fig.\,18 in ][]{Sormani:2022wv}.
On the other hand, both extinction groups show similar distributions of $\mu_b$ (except for a somewhat larger dispersion for stars from the high extinction group), which are best reproduced by a single Gaussian approximately centred on zero. Although some contamination from the Galactic bulge/bar in the low extinction group (NSD) might have produced a two-Gaussian distribution \citep{Shahzamanian:2021wu}, this was probably not observed due to the low number of Galactic bulge/bar stars in the sample. However, we measured a broadening of the detected single Gaussian component ($\sigma_{\mu b} = 2.39\pm0.05$\,mas/yr) with respect to the one expected for the NSD from previous work \citep[$\sigma_{\mu b} = 1.499\pm0.0002$\,mas/yr, ][]{Shahzamanian:2021wu}, which is probably due to the influence of the expected Galactic bulge/bar stars. This agrees with the results of \citet{Sormani:2022wv}, who estimated the contamination by Galactic bulge/bar stars for different fields distributed across the NSD. They obtained that the contribution from Galactic bulge/bar stars is less important for their fields closest to the NSC ($\lesssim20$\,\% of the stars with $H-K_s>1.3$\,mag belong to the Galactic bulge/bar, see Table\,2 in \citealt{Sormani:2022wv}). Although the regions analysed in \citet{Sormani:2022wv} avoid the NSC, which is precisely the target field of this work, we expect the relative contribution of the Galactic bulge/bar to be lower than in their innermost fields, due to the significantly higher number of stars belonging to the NSC. Furthermore, in this work we use the near-infrared bands $J$ and $K_s$, which allow us to remove the foreground stellar population (from the Galactic disc and also from the bulge/bar) more efficiently than the $H$ and $K_s$ bands used by \citet{Sormani:2022wv}, thanks to the longer wavelength baseline.
\section{Radial velocities}
We analysed the distribution of radial velocities of spectroscopically characterised stars from \citet{Feldmeier-Krause:2017kq,Feldmeier-Krause:2020uv}. We searched for common stars between our photometric catalogue and the stars with known metallicities ($\sim 1,000$ common stars), and classified them following the boxes defined in Fig.\,\ref{CMD}. We found that $\lesssim 10$\,\% of the stars with known metallicities belong to the low extinction group. We used a photometric criterion instead of a proper motion analysis due to the low number of common detections between the proper motion catalogue and the stars with known metallicities (see the gaps in the proper motion catalogue in Fig.\,\ref{GNS}).
We built a radial velocity map for the stars belonging to each of the extinction groups. We defined pixel sizes of $\sim 50''$ and $\sim 25''$ for the radial velocity maps of the NSD and the NSC, respectively, given the significantly lower number of stars detected for the NSD (around 10\,\% of the total number of stars). The radial velocity of a given pixel was computed as the mean of the radial velocities of the stars located within it, applying a 3-sigma clipping algorithm to remove outliers. We only computed a value for a given pixel if it contained at least 4 stars. Figure\,\ref{vr} shows the obtained maps. The radial velocity distribution appears to be different between the NSD and the NSC. The NSD map is dominated by positive values, which might be due to the NSD geometry, which is not yet well known, to the low number of stars available for this map ($\sim 60$ stars), and also to possible contamination from the Galactic bulge/bar that can bias the results. On the other hand, the NSC map clearly shows a rotation pattern that is also compatible with the differential rotation suggested in Sect.\,\ref{dis} to explain the three Gaussian components observed in the $\mu_l$ distribution.
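A minimal sketch of this binning scheme (fixed pixel size, iterative 3-sigma clipping, minimum of 4 stars per pixel), assuming simple planar coordinates; the function name and data layout are our own, not from the paper's pipeline:

```python
import numpy as np

def rv_map(x, y, vr, pixel_size, min_stars=4, nsigma=3.0):
    """Binned mean radial velocity map. Coordinates and pixel size share the
    same (angular) units; returns a dict keyed by (ix, iy) pixel indices."""
    ix = np.floor(np.asarray(x) / pixel_size).astype(int)
    iy = np.floor(np.asarray(y) / pixel_size).astype(int)
    vr = np.asarray(vr)
    vmap = {}
    for key in set(zip(ix, iy)):
        v = vr[(ix == key[0]) & (iy == key[1])]
        # iterative 3-sigma clipping around the pixel mean
        while v.size >= min_stars:
            m, s = v.mean(), v.std()
            keep = np.abs(v - m) <= nsigma * s
            if keep.all():
                break
            v = v[keep]
        if v.size >= min_stars:          # require at least 4 surviving stars
            vmap[key] = v.mean()
    return vmap
```

Pixels with fewer than `min_stars` stars (before or after clipping) are simply left empty, matching the map construction described above.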
\section{Metallicity}
We studied the metallicity distribution of the stars in the target region, using the previously computed list of common stars between our photometric catalogue and the stars with known metallicities \citep{Feldmeier-Krause:2017kq,Feldmeier-Krause:2020uv}. Figure\,\ref{Met} shows the metallicity distribution of the stars in each of the extinction groups. Our results show different metallicity distributions for the two extinction groups, with the high extinction group being more metal rich than the low extinction one. This indicates the presence of two components with different metallicities and agrees with the results obtained for the NSD and the NSC \citep[e.g.][]{Feldmeier-Krause:2017kq,Feldmeier-Krause:2020uv,Fritz:2020aa,Schultheis:2021wf,Feldmeier-Krause:2022vm}. The NSC metallicity obtained after removing the contribution from the NSD is somewhat higher than the one obtained without considering the presence of the NSD \citep[as done in previous work, e.g.][]{Feldmeier-Krause:2017kq}. This is because NSD stars are less metal rich on average and bias the NSC sample when they are not removed.
We further studied the metallicity distribution of each component, restricting the analysis to stars with metallicity $[M/H] < 0.5$\,dex. In this way, we removed stars whose metallicity might have been overestimated due to the calibration used in \citet{Feldmeier-Krause:2017kq,Feldmeier-Krause:2020uv} (see Sect.\,\ref{metal}). We applied the GMM method previously explained (Sect.\,\ref{GMM_method}) to each of the extinction groups, using the AIC to choose between models with one and two Gaussians. We found that the metallicity distribution of both extinction groups is best represented by a two-Gaussian model (Fig.\,\ref{Met_GMM_fig}). To estimate the mean value and uncertainty of each component, we generated 1,000 Monte Carlo samples of the metallicity distribution, randomly varying the metallicity of each star assuming Gaussian uncertainties. Table\,\ref{Met_GMM} shows the obtained results, where the mean and the standard deviation were determined by averaging over the results of the 1,000 Monte Carlo samples, applying a 3-sigma clipping to remove outliers.
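The Monte Carlo propagation of the per-star metallicity uncertainties can be sketched as follows; the sample is synthetic, the helper name is our own, and only the fitted means are collected here for brevity:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

def mc_gmm_params(met, met_err, n_samples=1000, n_components=2, clip=3.0):
    """Perturb each star's [M/H] with Gaussian noise of its quoted
    uncertainty, refit a two-component GMM per realisation, and average the
    component means with a 3-sigma outlier-resistant criterion."""
    means = []
    for _ in range(n_samples):
        sample = rng.normal(met, met_err).reshape(-1, 1)
        g = GaussianMixture(n_components=n_components, random_state=0).fit(sample)
        means.append(np.sort(g.means_.ravel()))
    means = np.array(means)
    # sigma-clipped average over the Monte Carlo realisations
    m, s = means.mean(axis=0), means.std(axis=0)
    ok = np.all(np.abs(means - m) <= clip * s, axis=1)
    return means[ok].mean(axis=0), means[ok].std(axis=0)
```

The amplitudes and dispersions of Table\,\ref{Met_GMM} follow from the same loop by also collecting `g.weights_` and `g.covariances_`.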
For both extinction groups, the metallicity distributions present a broad component centred around $[M/H] \sim -0.20$\,dex, and a narrower, more metal-rich one whose metallicity is larger for the NSC stellar population. Our results agree with the recent work by \citet{Schultheis:2021wf}, where two stellar components with different metallicities were found for both the NSC and the NSD, probably indicating a different formation for each of these two components. Part of the data used in \citet{Schultheis:2021wf} to analyse the NSC corresponds to the same data set that we used in our analysis \citep{Feldmeier-Krause:2020uv}. Nevertheless, \citet{Schultheis:2021wf} assumed all the stars in the sample to be part of the NSC. Here, we checked that $\lesssim 10$\,\% of the stars that we used belong to the NSD and have a lower mean metallicity than the stars from the NSC. Therefore, considering them as NSC stars might slightly bias the results towards lower metallicity values for the NSC. On the other hand, we are able to disentangle the stellar populations of the NSD and the NSC using the same data set and line of sight for both components, reinforcing the results obtained by \citet{Schultheis:2021wf} when comparing different data sets and lines of sight.
Previous high-resolution spectroscopic studies have targeted some stars in the NSC. In particular, \citet{Ryde:2016wr} found a metal-poor ($[Fe/H]\sim-1.0$\,dex) red giant star at a projected distance of 1.5\,pc from Sagittarius\,A*, and determined that it probably belongs to one of the nuclear components (NSD or NSC), confirming the presence of metal-poor stars in the GC. On the other hand, \citet{Do:2018aa} targeted two stars found to be very metal rich in previous spectral-template fitting work \citep{Do:2015ve}. They confirmed that at least one of them has an unusually high metallicity, $[M/H]>0.6$\,dex, proving the presence of this kind of star in the GC. Moreover, \citet{Rich:2017rm} determined the metallicities of 17 stars in the NSC to span $-0.5 < [Fe/H] < +0.5$\,dex, values also compatible with the range obtained in this work. Finally, the recent work by \citet{Thorsbro:2020uq} found that the most metal-rich stars in their NSC sample reach $[Fe/H] = +0.5$\,dex. These values correspond to even higher overall metallicities, $[M/H]$, in agreement with the most metal-rich stars that we use from \citet{Feldmeier-Krause:2017kq,Feldmeier-Krause:2020uv}. We stress that the use of $JK_s$ photometry and proper motions can be very helpful to revisit previous work in which some of the stars considered to belong to the NSC might in fact be part of the NSD.
\begin{table}
\caption{Results from the GMM analysis of the metallicity distribution.}
\label{Met_GMM}
\begin{center}
\def\arraystretch{1.3}
\setlength{\tabcolsep}{3.8pt}
\begin{tabular}{cccc}
\hline
Component & $A_{[M/H]}$ & $[M/H]$ & $\sigma_{[M/H]}$ \tabularnewline
\hline
\hline
NSD & $0.38$ $\pm$ $0.13$ & $-0.22$ $\pm$ $0.16$ & $0.47$ $\pm$ $0.15$\tabularnewline
& $0.62$ $\pm$ $0.13$ & $0.17$ $\pm$ $0.08$ & $0.30$ $\pm$ $0.05$\tabularnewline
\hline
NSC & $0.40$ $\pm$ $0.02$ & $-0.17$ $\pm$ $0.04$ & $0.49$ $\pm$ $0.03$\tabularnewline
& $0.60$ $\pm$ $0.02$ & $0.28$ $\pm$ $0.02$ & $0.31$ $\pm$ $0.01$\tabularnewline
\hline
\end{tabular}
\end{center}
\footnotesize
\textbf{Notes.} $A_{[M/H]}$, $[M/H]$, and $\sigma_{[M/H]}$ indicate the amplitude, the mean value, and the standard deviation of each of the components of the GMM modelling.
\end{table}
\section{Conclusion}
In this paper we analysed the kinematics and metallicity of the stars belonging to two extinction groups identified along the line of sight towards the NSC \citep{Nogueras-Lara:2018aa,Nogueras-Lara:2021wm}. We detected two kinematically distinct components associated with the extinction groups, which we confirmed as the NSD and the NSC. Our results thus show the potential of proper motions to disentangle the stellar populations belonging to the different GC structures. We analysed the proper motion components parallel ($\mu_l$) and perpendicular ($\mu_b$) to the Galactic plane, and found that the $\mu_l$ distributions of the NSD and the NSC are best fitted by two and three Gaussian models, respectively. We explained the NSD proper motion distribution as the combination of the stellar population from the near edge of the NSD (which rotates eastwards) and some contamination from Galactic bulge/bar stars that cannot be easily removed from the sample due to their large extinction. We concluded that the presence of the NSC impedes the detection of stars from the far side of the NSD, which rotate westwards \citep{Shahzamanian:2021wu}. The $\mu_l$ distribution of the NSC shows three Gaussian components that we explain as a consequence of the rotation of the NSC. These components correspond to stars moving eastwards (positive $\mu_l$), stars moving westwards (negative $\mu_l$), and stars moving with relatively slower velocities in the innermost regions of the NSC. We obtained relatively lower values for the rotation of the NSC compared to the NSD, as expected \citep[e.g.][]{Sormani:2022wv}. The $\mu_b$ distributions of both the NSD and the NSC are similar, except for a somewhat higher dispersion for the NSC values.
We created radial velocity maps using spectroscopically characterised stars from each of the extinction groups and found that the velocity distributions seem to differ between the NSD and the NSC. Moreover, we observed a velocity pattern for the NSC compatible with differential rotation.
We also analysed stars with known metallicities in each of the extinction groups. We found that they follow different distributions, with the group corresponding to the NSD being less metal rich, in agreement with previous studies \citep[e.g.][]{Feldmeier-Krause:2017kq,Feldmeier-Krause:2020uv,Fritz:2020aa,Schultheis:2021wf,Feldmeier-Krause:2022vm}. We obtained that both components are best described by a two-Gaussian model with a less metal-rich, wider component and a predominant, narrower metal-rich one, which might have a different origin, in agreement with \citet{Schultheis:2021wf}. Moreover, we measured a mean metallicity of the NSC metal-rich stellar population of $[M/H] \sim 0.3$\,dex, which arguably makes the NSC the most metal-rich region of the Galaxy.
Our results confirm that the NSC and the NSD can be distinguished along the line of sight towards the NSC via their different extinction and agree with previous work on the NSD and the NSC suggesting different formation scenarios and stellar populations \citep[e.g.][]{Nogueras-Lara:2019ad,Schodel:2020aa,Schultheis:2021wf,Feldmeier-Krause:2022vm,Nogueras-Lara:2021wm}.
\begin{acknowledgements}
F. N.-L. gratefully acknowledges the sponsorship provided by the Federal Ministry for Education and Research of Germany through the Alexander von Humboldt Foundation. This work is based on observations made with ESO Telescopes at the La Silla Paranal Observatory under program IDs 60.A-9450(A), 091.B-0418, 093.B-0368, and 195.B-0283. F. N.-L. thanks Rainer Sch\"odel, Nadine Neumayer, and Mattia Sormani for very useful discussion.
\end{acknowledgements}
\bibliography{BibGC.bib}
Title: Constraining the physical properties of the first lensed $z\sim10-16$ galaxy candidates with JWST
Abstract: The first deep-field observations of the JWST have immediately yielded a
surprisingly large number of very high redshift candidates, pushing the
frontier of observability well beyond $z\gtrsim10$. We here present a detailed
SED-fitting analysis of the 15 gravitationally lensed $z\sim10-16$ galaxy
candidates detected behind the galaxy cluster SMACS J0723.3-7327 in Atek et al.
(2022) using the BEAGLE tool. Our analysis makes use of dynamical
considerations to place limits on the ages of these galaxies and of all three
published SL models of the cluster to account for lensing systematics. We find
these galaxies to have relatively low stellar masses
$M_{\star}\sim10^7-10^8\,\mathrm{M}_{\odot}$ and young ages
$t_{\mathrm{age}}\sim10-100$\,Myr. Due to their very blue UV-slopes, down to
$\beta\sim-3$, all of the galaxies in our sample have extremely low dust
attenuations $A_V\lesssim0.02$. Placing the measured parameters into relation,
we find a very shallow $M_{\star}-M_{\mathrm{UV}}$-slope and high sSFRs above
the main sequence of star-formation with no significant redshift-evolution in
either relation. This is in agreement with the bright UV luminosities measured
for these objects and indicates that we are naturally selecting galaxies that
are currently undergoing a star-bursting episode at the time they are observed.
Finally, we discuss the robustness of our high-redshift galaxy sample regarding
low-redshift interlopers and conclude that low-redshift solutions can safely be
ruled out for roughly half of the sample, including the highest-redshift
galaxies at $z\sim12-16$. These objects represent compelling targets for
spectroscopic follow-up observations with JWST and ALMA.
https://export.arxiv.org/pdf/2208.05473
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
galaxies: high-redshift -- dark ages, reionization, first stars -- galaxies: evolution -- galaxies: dwarfs -- gravitational lensing: strong -- ultraviolet: galaxies
\end{keywords}
\section{Introduction} \label{sec:intro}
The advent of the JWST has initiated a new era in high-redshift galaxy observations. Due to its unprecedented near-infrared (NIR) sensitivity, the JWST has spectacularly expanded the frontier of observability beyond the $z\gtrsim10$ limit of the \textit{Hubble Space Telescope} (HST) and enables us to observe out to the first generation of galaxies -- the very first luminous structures that formed in the Universe. The onset of galaxy formation in the Universe is believed to have taken place at $z>15$. Observing the first stars and galaxies thus represents one of the fundamental challenges of modern astronomy and is one of the primary missions of the JWST.
Indeed, in the short time since the release of the first scientific observations of the JWST, the Early Release Observations \citep[ERO;][]{pontoppidan22} and the Early Release Science (ERS) programs GLASS-JWST \citep[PI: T. Treu;][]{treu22} and CEERS \citep[PI: S. Finkelstein;][]{finkelstein23,bagley22} have already yielded the first detections of galaxies beyond $z\gtrsim10$: Thus far, up to 11 bright galaxy candidates at $z\sim10-18$ were detected in the blank fields, GLASS-JWST and CEERS \citep[][]{naidu22b,castellano22,donnan23,finkelstein22,labbe22}, and up to 12 $z\sim9-16$ candidates \citep[][]{adams23,atek23} in the strong lensing (SL) cluster SMACS~J0723.3-7327 (SMACS0723) which had previously been imaged with the HST as part of the \textit{Reionization Lensing Cluster Survey} \citep[RELICS;][]{coe19}. There have even been some tentative detections out to $z\sim20$ \citep[][]{yan22}. These recent detections have also already yielded the first estimates of the rest-frame ultra-violet (UV) luminosity function \citep[][]{donnan23,harikane22}, galaxy physical parameters \citep[e.g.][]{labbe22,whitler23,topping22,rodighiero23,cullen22} and even possible implications for the star-formation history (SFH) of the Universe \citep[e.g.][]{mason22,boylan-kolchin22}. This demonstrates the formidable capability of the JWST to probe galaxies in the early Universe and foreshadows the numerous detections to come with the planned deep imaging programs of both SL clusters and blank fields. Caution is, however, also in order, since new populations of possible low-redshift interlopers in these kinds of studies have also already been identified \citep[][]{nonino22,fudamoto22,nelson22,barrufet22,zavala22,naidu22c,glazebrook22}. 
Moreover, while JWST has already delivered the first rest-frame optical spectroscopy of galaxies at $z\lesssim9$ which allows us to robustly characterize the stellar populations in these galaxies \citep[e.g.][]{carnall23,schaerer22b,laporte22,roberts-borsani22,williams22}, we remain limited to imaging data for galaxies $z\gtrsim10$. This means that we mostly observe these galaxies with rest-frame UV broad-band photometry which has been shown to be prone to parameter degeneracies and to not be well-suited for probing galaxy parameters in high-redshift studies with HST \citep[e.g.][]{grazian15,furtak21}. In order to infer the physical processes occurring within the first galaxies at $z\gtrsim10$ with JWST photometry, we therefore need to implement the lessons learned from HST observations and carefully account for parameter degeneracies and uncertainties.
In our previous work, \citet{atek23}, we presented the detection of 10 lensed $z\sim9-16$ galaxy candidates in SMACS0723 via the dropout selection technique in the JWST ERO observations of the cluster, and measured their photometry, photometric redshifts and first estimates of some galaxy parameters, such as the stellar mass. In order to address the issues explained above and derive the first robust parameter estimates with JWST, we present here an in-depth analysis of the spectral energy distributions (SEDs) of this high-redshift galaxy sample with the Bayesian tool \texttt{BEAGLE} \citep{chevallard16}. Using all the information currently available (e.g., photometric and morphological measurements, lensing models, etc.), we make a first assessment of what can be learned about these objects with JWST and of the uncertainties involved. The goal of this study is also to establish a method to estimate physical parameters at high redshifts, which will no doubt be of use to future JWST surveys set to observe these kinds of galaxies in the early Universe.
This paper is organized as follows: We present our sample and derivation of physical parameters in section~\ref{sec:SED-fit}. We then place our resulting galaxy parameters in relation to each other in section~\ref{sec:relations} and discuss our results in section~\ref{sec:discussion}. Finally, we summarize our analysis and findings in the conclusions, in section~\ref{sec:conclusion}. Throughout this paper, we assume a standard flat $\Lambda$CDM cosmology with $H_0=70\,\frac{\mathrm{km}}{\mathrm{s}\,\mathrm{Mpc}}$, $\Omega_{\Lambda}=0.7$, and $\Omega_\mathrm{m}=0.3$. All magnitudes quoted are in the AB system \citep{oke83} and all quoted uncertainties represent $1\sigma$ ranges.
\section{Galaxy parameters} \label{sec:SED-fit}
In this study, we use the $z\sim9-16$ galaxy sample detected in \citet{atek23} which is composed of 6 candidates in the redshift range $z\sim9-11$ and 4 candidates at $z\gtrsim12$, up to $z\sim16$. For this sample, observations in seven broad-band filters are available: the F090W, F150W, F200W, F277W, F356W and F444W bands from the \textit{Near-Infrared Camera} \citep[NIRCam;][]{rieke05} and the F115W band from the \textit{Near-Infrared Imager and Slitless Spectrograph} \citep[NIRISS;][]{doyon12} in a smaller field-of-view centered on the cluster core. The photometry was measured with \texttt{SExtractor++} \citep[\texttt{SE++};][]{bertin20,kuemmel20}. We refer the reader to \citet{atek23} for the details of data reduction, source detection and photometry with \texttt{SE++}, and the high-redshift dropout selection methods used for our sample.
In order to overcome some of the degeneracies inherent to fitting multiple galaxy parameters to only a handful of broad-band rest-frame UV photometric filters, we use all of the information available to place as many constraints on the galaxy parameters as possible. This is done in section~\ref{sec:priors}, before performing an SED-fit in section~\ref{sec:BEAGLE-setup} and correcting the resulting stellar mass for gravitational magnification in section~\ref{sec:SL}.
\subsection{External priors} \label{sec:priors}
In \citet{atek23}, we measured UV-continuum slopes $\beta$ and half-light-radii $r_e$ for each object (cf. Tab.~2 in \citealt{atek23}), which will both come in handy to constrain the physical parameters of our high-redshift galaxies.
We first use the well known relation between the UV-continuum slope and the dust reddening $E(B-V)$ \citep[e.g.][]{meurer99,reddy18a} to compute the effective \textit{V}-band dust attenuation optical depth needed in our SED-fit (cf. section~\ref{sec:BEAGLE-setup}) for each galaxy in our sample. For that we adopt the relation measured by \citet{reddy18a} for an SMC-like dust attenuation curve:
\begin{equation} \label{eq:ebv}
E(B-V)=\frac{\beta-\beta_0}{11.259}
\end{equation}
where $\beta_0=-2.616$ is the intrinsic slope measured in \citet{reddy18a}. We then use $R_V=2.505$ \citep[][]{reddy15} to convert the reddening to optical depth $\hat{\tau}_V$. We obtain very low values $\hat{\tau}_V\lesssim0.01$ due to the very blue UV-slopes of our sample \citep[cf.][]{atek23} and the relatively steep slope of the SMC law. Using slightly different intrinsic slopes \citep[e.g. the $\beta_0=-2.23$ from][]{meurer99} and values of $R_V$, we therefore do not find significantly different $\hat{\tau}_V$. Note that this is in agreement with the expectation for high-redshift galaxies to have very low dust attenuation and rules out the extreme dust attenuation scenarios discussed in \citet{rodighiero23}.
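The conversion from UV slope to optical depth amounts to a few lines of arithmetic. The sketch below illustrates why the very blue slopes of the sample yield $\hat{\tau}_V\lesssim0.01$; the magnitude-to-optical-depth factor $2.5\log_{10}e$ and the clipping of negative reddening to zero are our assumptions about the conventions used:

```python
import math

def tau_v_from_beta(beta, beta0=-2.616, rv=2.505):
    """Effective V-band optical depth from a UV-continuum slope.

    Uses the Reddy et al. (2018) SMC-like relation
    E(B-V) = (beta - beta0) / 11.259, then A_V = R_V * E(B-V)
    and tau_V = A_V / (2.5 log10 e).
    """
    ebv = max(beta - beta0, 0.0) / 11.259   # negative reddening clipped to 0
    a_v = rv * ebv                          # V-band attenuation in magnitudes
    return a_v / (2.5 / math.log(10))       # magnitudes -> optical depth

# A very blue slope, typical of the sample, gives tau_V well below 0.01:
print(tau_v_from_beta(-2.6))
```

Because the relation is linear in $\beta$, small changes in $\beta_0$ or $R_V$ shift $\hat{\tau}_V$ only marginally, consistent with the insensitivity noted above.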
Next, we use the (lensing-corrected) half-light-radii and first stellar mass estimates (cf. Tab~2 in \citealt{atek23}) to compute the dynamical time scale of each galaxy \citep[see also, e.g., discussion in][]{verma07} as,
\begin{equation} \label{eq:tdyn}
t_{\mathrm{dyn}}\sim\frac{r}{v}\sim\sqrt{\frac{2r^3}{GM_{\star}}}
\end{equation}
\noindent This need not be a precise measurement; a rough order-of-magnitude estimate suffices. Following our approach in \citet{furtak21}, we will use this estimate to place a lower boundary on the range of allowed stellar ages in the SED-fit, thus assuming that a galaxy cannot be younger than the lower range of these dynamical timescales, and to place a prior on the characteristic star-formation timescale. We overall find our sample of high-redshift galaxies to have short dynamical times of the order $\sim2-10$\,Myr, which is not surprising given their very compact morphologies \citep[$r_e\lesssim0.6$\,kpc for the majority, cf.][]{atek23}.
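Evaluating Eq.~\eqref{eq:tdyn} only requires consistent units; a minimal sketch (with illustrative, not measured, input values) shows that compact, low-mass galaxies indeed land in the quoted few-Myr range:

```python
import math

G_KPC = 4.301e-6     # G in kpc (km/s)^2 / Msun
KPC_KM = 3.0857e16   # km per kpc
MYR_S = 3.156e13     # seconds per Myr

def t_dyn_myr(r_kpc, m_star):
    """Dynamical time t_dyn ~ sqrt(2 r^3 / (G M_star)) in Myr.

    r_kpc: half-light radius in kpc; m_star: stellar mass in Msun.
    An order-of-magnitude estimate only, as in the text.
    """
    t_kpc_s_per_km = math.sqrt(2.0 * r_kpc ** 3 / (G_KPC * m_star))
    return t_kpc_s_per_km * KPC_KM / MYR_S

# A compact galaxy with r_e ~ 0.2 kpc and M* ~ 1e8 Msun (illustrative values):
print(round(t_dyn_myr(0.2, 1e8), 1))  # a few Myr, within the ~2-10 Myr range
```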
Note that throughout this analysis, we propagate the uncertainties of $\beta$ and $r_e$ to the parameters computed in~\eqref{eq:ebv} and~\eqref{eq:tdyn} so that they can be properly accounted for in the SED-fit.
\subsection{SED-fit setup} \label{sec:BEAGLE-setup}
The SED-fit is performed with the \texttt{BayEsian Analysis of GaLaxy sEds} \citep[\texttt{BEAGLE};][]{chevallard16} tool, whose fully Bayesian nature is ideally suited to optimize numerous galaxy parameters at once, including priors, and to robustly probe and combine their uncertainties in the joint posterior probability distribution function (PDF). It uses SED templates by \citet{gutkin16} which combine the latest version of the stellar population synthesis models by \citet{bc03} with the photoionization code \texttt{CLOUDY} \citep[][]{ferland13} to account for nebular emission. These templates all assume a \citet{chabrier03} initial mass function (IMF) and the latest models of intergalactic medium (IGM) attenuation by \citet{inoue14}.
For the fit, we assume a delayed exponential star-formation history (SFH) $\psi\propto t\exp(-t/\tau)$ with the possibility of an ongoing star-burst over the last 10\,Myr. This allows for maximum flexibility of the SFH to be either rising or declining with a maximum at $t=\tau$. We furthermore assume an SMC-like dust attenuation law \citep[][]{pei92} which has been found to match high-redshift galaxies best \citep[][]{capak15,reddy15,reddy18a}, in particular in the low-metallicity regime \citep[][]{shivaei20}.
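That the delayed exponential SFH rises before $t=\tau$ and declines after, with its maximum exactly at $t=\tau$, is easy to verify numerically; a minimal sketch with arbitrary normalization:

```python
import math

def delayed_tau_sfh(t, tau):
    """Delayed exponential SFH, psi(t) proportional to t * exp(-t / tau)."""
    return t * math.exp(-t / tau)

# Rising before t = tau, declining after: the maximum sits at t = tau
# (d/dt [t e^(-t/tau)] = (1 - t/tau) e^(-t/tau) = 0 at t = tau).
tau = 50.0  # Myr, illustrative value only
print(delayed_tau_sfh(0.9 * tau, tau) < delayed_tau_sfh(tau, tau) > delayed_tau_sfh(1.1 * tau, tau))
```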
With this setup, we fit four free parameters:
\begin{itemize}
\item Stellar mass $M_{\star}$ with a log-uniform prior $\log(M_{\star}/\mathrm{M}_{\odot})\in[6,10]$.
\item Star-formation rate (SFR) $\psi$ over the last $10^7$\,yr with a log-uniform prior $\log(\psi/\mathrm{M}_{\odot}\,\mathrm{yr}^{-1})\in[-4,4]$.
\item Maximum stellar age $t_{\mathrm{age}}$ with a log-uniform prior $\log(t_{\mathrm{age}}/\mathrm{yr})\in[6.3, \log(t_{\mathrm{universe}}/\mathrm{yr})]$ where the lower boundary was chosen according to our analysis of the dynamical time (cf. section~\ref{sec:priors}) and the upper boundary is the age of the Universe at the redshift of the galaxy.
\item Stellar metallicity $Z$ with a log-uniform prior $\log(Z/\mathrm{Z}_{\odot})\in[-2,-0.3]$.
\end{itemize}
For the other parameters necessary for the fit we use the values measured independently by setting Gaussian priors with the measured values as the mean and the measured uncertainties as the standard deviation:
\begin{itemize}
\item Characteristic star-forming time scale $\tau$ with the dynamical time $t_{\mathrm{dyn}}$ computed in section~\ref{sec:priors} as prior.
\item Effective \textit{V}-band dust attenuation optical depth $\hat{\tau}_V$ with the attenuation inferred from the UV-slope with Eq.~\eqref{eq:ebv} as prior.
\item Photometric redshift $z_{\mathrm{phot}}$ with the value measured in \citet{atek23} as prior.
\end{itemize}
This method allows for the uncertainties on the measured parameters to propagate to the SED-fit and into the posterior PDF. We run the SED-fit on all seven bands of broad-band photometry available for our galaxy sample.
\subsection{Gravitational magnification} \label{sec:SL}
\begin{table}
\centering
\caption{Gravitational magnifications of our high-redshift objects in the three JWST-based SL models of SMACS0723: \citet{mahler22} (M22), \citet{pascale22} (P22) and \citet{caminha22} (C22).}
\begin{tabular}{lcccc}
\hline
ID & $\mu_{\mathrm{M22}}$ & $\mu_{\mathrm{P22}}$ & $\mu_{\mathrm{C22}}^{\mathrm{a}}$ & $\mu^{\mathrm{b}}$\\\hline
SMACS\_z10a & $3.98\pm0.30$ & $6.14\pm1.74$ & $8.78$ & $6.30\pm2.21$\\
SMACS\_z10b & $1.36\pm0.03$ & $1.70\pm0.14$ & $1.70$ & $1.56\pm0.18$\\
SMACS\_z10c & $1.41\pm0.03$ & $1.70\pm0.14$ & $1.69$ & $1.60\pm0.16$\\
SMACS\_z10d & $1.13\pm0.01$ & $1.35\pm0.07$ & $1.34$ & $1.28\pm0.11$\\
SMACS\_z10e & $1.07\pm0.01$ & $1.24\pm0.04$ & $1.22$ & $1.18\pm0.08$\\
SMACS\_z11a & $1.05\pm0.01$ & $1.22\pm0.04$ & $1.20$ & $1.16\pm0.08$\\
SMACS\_z12a & $1.05\pm0.01$ & $1.21\pm0.04$ & $1.18$ & $1.15\pm0.07$\\
SMACS\_z12b & $1.35\pm0.03$ & $1.65\pm0.13$ & $1.64$ & $1.55\pm0.16$\\
SMACS\_z16a & $1.86\pm0.05$ & $2.41\pm0.29$ & $2.27$ & $2.18\pm0.29$\\
SMACS\_z16b & $1.04\pm0.01$ & $1.19\pm0.03$ & $1.17$ & $1.13\pm0.07$\\\hline
\end{tabular}
\par\smallskip
\begin{flushleft} $^{\mathrm{a}}$\, Uncertainties for the \citet{caminha22} model are not publicly available.
\par $^{\mathrm{b}}$\, Average magnification used for parameter estimation in this study (cf. section~\ref{sec:SL} for details).
\end{flushleft}
\label{tab:SL}
\end{table}
Since we are observing sources behind a lensing cluster, SMACS0723, we need to account for the gravitational magnification and correct certain parameter measurements (cf. section~\ref{sec:SED-fit_results}). Other parameters estimated in sections~\ref{sec:priors} and~\ref{sec:BEAGLE-setup} depend on relative fluxes, i.e. colors, and are therefore not affected by the achromatic SL magnification.
The JWST ERO of SMACS0723 yielded numerous new multiple image systems to constrain the SL models of the cluster, increasing the number of known multiple image systems from the 5 identified with HST \citep[][]{golubchik22} to more than 20. To date, there are three SL models of SMACS0723 based on the new JWST observations: A parametric model built with \texttt{lenstool} \citep[][]{kneib96,jullo07,jullo09} by \citet{mahler22}, an analytic model built with a revised version of the \citet{zitrin15a} parametric implementation, by \citet{pascale22}, and another \texttt{lenstool}-based parametric model by \citet{caminha22}.
We show the magnifications of our objects in each of the three JWST-based models in Tab.~\ref{tab:SL}. While the \citet{mahler22} model shows slightly lower magnifications than the \citet{pascale22} and \citet{caminha22} models, the overall scatter between models is relatively low. This is not surprising since none of our sources lies particularly close to a critical line where the impact of the modeling systematics is the most severe. Indeed, we do not have any extremely magnified objects in our sample which means that we are spared the worst of the lensing systematics that have been found to dominate the uncertainties in studies of lensed high-redshift galaxies with HST \citep[e.g.][]{bouwens17,bouwens22b,bouwens22a,atek18,furtak21}. In order to take into account all of the lensing information in this study, we follow a similar approach to the one of \citet{bhatawdekar19}, \citet{furtak21} and \citet{bouwens22b} and use the mean magnification $\mu$ from the three JWST-based SL models in the following. The magnification uncertainties $\Delta\mu$ are then computed by propagating the individual magnification uncertainties of each model, if available, and adding the scatter between the models in quadrature. The resulting values are also reported in Tab.~\ref{tab:SL}. With this approach we make sure that all sources of magnification uncertainties are accounted for and propagated to our final results.
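The model averaging can be sketched in a few lines. The error recipe below (mean propagated model uncertainty added in quadrature to the model-to-model scatter) is our reading of the procedure and may differ in detail from the exact implementation, but it reproduces the tabulated value for SMACS\_z10a:

```python
import math
import statistics as st

def combine_magnifications(mus, dmus):
    """Average magnification over lens models with a combined uncertainty.

    mus: magnification from each model; dmus: published uncertainties
    (models without published errors are simply omitted from dmus).
    The total error adds, in quadrature, the mean propagated model
    uncertainty and the model-to-model scatter.
    """
    mu = st.fmean(mus)
    scatter = st.pstdev(mus)                       # model-to-model scatter
    propagated = st.fmean(dmus) if dmus else 0.0   # mean of published errors
    return mu, math.hypot(propagated, scatter)

# SMACS_z10a: M22 and P22 quote uncertainties, C22 does not
mu, dmu = combine_magnifications([3.98, 6.14, 8.78], [0.30, 1.74])
print(f"mu = {mu:.2f} +/- {dmu:.2f}")  # mu = 6.30 +/- 2.21, as in Tab. 1
```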
\subsection{SED-fit results} \label{sec:SED-fit_results}
\begin{table*}
\caption{\texttt{BEAGLE} SED-fit results for our whole $z\sim9-16$ sample. Best-fit results are taken as the median and the $1\sigma$-range of the joint posterior distribution of each galaxy. The last column contains the UV luminosities computed in section~\ref{sec:mass-light}.}
\begin{tabular}{lcccccccc}
\hline
ID & $\log(M_{\star}/\mathrm{M}_{\odot})$ & $\log(\psi/\mathrm{M}_{\odot}\,\mathrm{yr}^{-1})$ & $\log(t_{\mathrm{age}}/\mathrm{yr})$ & $\log(Z/\mathrm{Z}_{\odot})$ & $\log(\tau/\mathrm{yr})^{\mathrm{a}}$ & $\hat{\tau}_V^{\mathrm{a}}$ & $z_{\mathrm{phot}}^{\mathrm{a}}$ & $M_{\mathrm{UV}}^{\mathrm{b}}$\\\hline
SMACS\_z10a & $8.86_{-0.15}^{+0.16}$ & $-1.97_{-0.54}^{+0.63}$ & $7.96_{-0.04}^{+0.05}$ & $-0.39_{-0.23}^{+0.07}$ & $6.28\pm0.03$ & $0.032\pm0.002$ & $9.77_{-0.02}^{+0.02}$ & $-18.3\pm0.4$\\
SMACS\_z10b & $10.21_{-0.05}^{+0.05}$ & $-1.42_{-0.54}^{+0.61}$ & $8.32_{-0.02}^{+0.02}$ & $-1.19_{-0.02}^{+0.01}$ & $6.44\pm0.02$ & $0.037\pm0.002$ & $9.03_{-0.01}^{+0.01}$ & $-20.6\pm0.2$\\
SMACS\_z10c & $9.72_{-0.05}^{+0.05}$ & $-0.33_{-1.06}^{+0.27}$ & $8.27_{-0.04}^{+0.02}$ & $-1.83_{-0.08}^{+0.13}$ & $6.48\pm0.01$ & $0.019\pm0.004$ & $9.78_{-0.01}^{+0.01}$ & $-20.1\pm0.2$\\
SMACS\_z10d & $6.95_{-1.40}^{+2.45}$ & $0.54_{-0.05}^{+0.47}$ & $6.96_{-0.54}^{+1.27}$ & $-1.21_{-0.12}^{+0.11}$ & $7.56\pm0.07$ & $0.017\pm0.006$ & $9.32_{-0.07}^{+0.07}$ & $-19.6\pm0.2$\\
SMACS\_z10e & $6.87_{-1.35}^{+1.42}$ & $1.16_{-0.04}^{+0.05}$ & $7.70_{-0.69}^{+0.64}$ & $-1.42_{-0.21}^{+0.17}$ & $6.85\pm0.11$ & $0.026\pm0.010$ & $10.88_{-0.10}^{+0.11}$ & $-18.8\pm0.3$\\
SMACS\_z11a & $6.46_{-1.06}^{+1.16}$ & $0.77_{-0.03}^{+0.03}$ & $7.78_{-0.61}^{+0.58}$ & $-0.88_{-0.07}^{+0.09}$ & $6.82\pm0.11$ & $0.021\pm0.011$ & $11.08_{-0.05}^{+0.06}$ & $-18.4\pm0.4$\\
SMACS\_z12a & $8.27_{-0.10}^{+0.12}$ & $-1.33_{-0.52}^{+0.63}$ & $7.32_{-0.17}^{+0.18}$ & $-0.57_{-0.32}^{+0.20}$ & $7.26\pm0.10$ & $0.007\pm0.004$ & $12.16_{-0.08}^{+0.08}$ & $-19.7\pm0.2$\\
SMACS\_z12b & $8.26_{-0.29}^{+0.29}$ & $-0.98_{-0.79}^{+0.91}$ & $7.75_{-0.49}^{+0.41}$ & $-1.63_{-0.29}^{+0.38}$ & $8.32\pm0.11$ & $0.025\pm0.008$ & $12.27_{-0.10}^{+0.09}$ & $-19.9\pm0.2$\\
SMACS\_z16a & $8.02_{-2.28}^{+1.07}$ & $1.22_{-1.96}^{+0.07}$ & $7.45_{-0.49}^{+0.50}$ & $-1.13_{-0.29}^{+0.38}$ & $7.42\pm0.17$ & $0.006\pm0.003$ & $15.93_{-0.11}^{+0.11}$ & $-20.4\pm0.2$\\
SMACS\_z16b & $7.89_{-1.99}^{+2.37}$ & $1.76_{-0.31}^{+0.22}$ & $6.54_{-0.18}^{+0.83}$ & $-1.41_{-0.40}^{+0.36}$ & $8.18\pm0.26$ & $0.016\pm0.009$ & $15.25_{-0.09}^{+0.09}$ & $-20.9\pm0.2$\\\hline
\end{tabular}
\par\smallskip
\begin{flushleft}
$^{\mathrm{a}}$\, Parameter fit with a Gaussian prior corresponding to values (and their uncertainties) previously derived from independent analyses as detailed in sections~\ref{sec:priors} and~\ref{sec:BEAGLE-setup}.
\par $^{\mathrm{b}}$\, Updated luminosities computed with the magnifications $\mu$ reported in Tab.~\ref{tab:SL}.
\end{flushleft}
\label{tab:galay_parameters}
\end{table*}
With the magnifications in hand, we correct the stellar masses and SFRs for the effects of lensing after the SED-fit. As was shown in \citet{furtak21}, this is equivalent to correcting the photometry prior to SED-fitting but has the advantage that we do not need to propagate the uncertainties of the magnification to the photometry but only to the parameters affected by it.
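In log space this correction is a simple subtraction of $\log_{10}\mu$, with the magnification uncertainty added in quadrature; a sketch with purely illustrative numbers (the quadrature propagation mirrors the approach described above):

```python
import math

def demagnify_logmass(log_m, dlog_m, mu, dmu):
    """Correct a log10 stellar mass, fit to lensed photometry, for magnification.

    Dividing the flux-derived mass by mu is a subtraction in log space;
    the magnification uncertainty is propagated in quadrature.
    """
    log_m_corr = log_m - math.log10(mu)
    dlog_mu = dmu / (mu * math.log(10))   # error on log10(mu)
    return log_m_corr, math.hypot(dlog_m, dlog_mu)

# Illustrative (hypothetical) fit output: log M* = 9.0 +/- 0.2, mu = 2.18 +/- 0.29
print(demagnify_logmass(9.0, 0.2, 2.18, 0.29))
```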
The best-fit galaxy parameter values, taken as the median and $1\sigma$-range of the posterior distribution from our \texttt{BEAGLE} fit of each $z\sim9-16$ galaxy detected in SMACS0723, are shown in Tab.~\ref{tab:galay_parameters}. As an example, we show the best-fit SED and the posterior distribution of \texttt{BEAGLE} parameters for one object, SMACS\_z16a, in Fig.~\ref{fig:z16a_fit}. As can be seen in Fig.~\ref{fig:z16a_fit} and in Tab.~\ref{tab:galay_parameters}, while the SED-fit is relatively good, there remain some parameter degeneracies between the stellar mass, the SFR and the age which result in the relatively large uncertainties on these parameters. This is due to the fact that we are fitting rest-frame UV photometry exclusively for the majority of our sample, which is not ideally suited to probe galaxy parameters \citep[cf. e.g.][]{furtak21}, as will be discussed in more detail in section~\ref{sec:BEAGLE_limits}. There is however a clearly defined maximum-a-posteriori solution for each parameter.
We overall find these objects to have relatively low stellar masses $M_{\star}\sim10^7-10^8\,\mathrm{M}_{\odot}$, down to $M_{\star}\simeq10^{6.5}\,\mathrm{M}_{\odot}$, comparable to the lowest stellar masses detected at moderate redshifts ($z\sim6-7$) with HST \citep[e.g.][]{bhatawdekar19,kikuchihara20,furtak21}. There are however also a few high-mass $M_{\star}\sim10^9-10^{10}\,\mathrm{M}_{\odot}$ objects on the low-redshift end of our sample. We also find relatively young ages $t_{\mathrm{age}}\sim10-100$\,Myr and high SFRs for most of our sample, which is not surprising given the very blue UV-slopes of these galaxies \citep{atek23} and in agreement with other studies of the first observations of the JWST high-redshift frontier \citep[e.g.][]{nanayakkara22,whitler23,topping22}. The SFR-$M_{\star}$ relation and its implications will be discussed further in section~\ref{sec:mass-sfr}. While the metallicity is in general not very well constrained, there is a strong tendency towards $Z<0.1\,\mathrm{Z}_{\odot}$ in most galaxies of our sample. We will discuss the limits of our SED-fits with regards to stellar age and metallicity in detail in section~\ref{sec:BEAGLE_limits}.
We however also note the presence of several older objects with ages $t_{\mathrm{age}}\gtrsim100$\,Myr in this sample on the low-redshift end at $z\sim9-10$. These are the same objects which also have higher stellar masses mentioned before. This is due to the fact that at redshifts $z\sim9-10$ the Balmer-break still falls into the F444W-band, as already mentioned in \citet{atek23}. These three galaxies in particular show a pronounced color offset between the F356W- and the F444W-band which is caused by the presence of a strong Balmer-break as can be seen in the example presented in the upper panel of Fig.~\ref{fig:balmer_break}. The Balmer-break indicates a significant population of older and redder stars than the young massive stars probed at lower wavelengths. This is in agreement with the findings of e.g. \citet{laporte21a} on the existence of evolved stellar populations at $z\gtrsim9$. There are nonetheless two candidates at $z\sim10$ in our sample which do not show a significant excess in the rest-frame optical F444W-band and thus do not seem to have a pronounced Balmer-break (see example in the lower panel of Fig.~\ref{fig:balmer_break}). These galaxies appear to be young with high SFRs and low stellar masses and in general are in accordance with the rest of our sample at higher redshifts for which we only probe the rest-frame UV emission. On the other hand, the Balmer-break in combination with the Lyman-break represents a strong constraint on a galaxy's redshift whereas the absence of the former in these two candidates might instead indicate that we are in fact looking at red low-redshift galaxies as will be discussed in detail in section~\ref{sec:lowz_solutions}.
Finally, while the photometric redshifts were fixed to the \citet{atek23} results with a Gaussian prior in our fit with \texttt{BEAGLE} (section~\ref{sec:BEAGLE-setup}) and therefore broadly agree with them, we find marginally different values for $z_{\mathrm{phot}}$ and its uncertainties and will use these updated values from here on.
\section{Parameter relations} \label{sec:relations}
With the parameters derived from our SED-fitting procedure in section~\ref{sec:SED-fit}, we are now able to place some of these parameters into context and derive the first mass-to-light ratios and mass-SFR relations of lensed galaxies at $z\sim9-16$ in sections~\ref{sec:mass-light} and~\ref{sec:mass-sfr}.
\subsection{Mass-to-light ratios} \label{sec:mass-light}
The total rest-frame UV luminosity $M_{\mathrm{UV}}$ of each galaxy in our sample is computed as the absolute AB magnitude in the band that contains 1500\,\AA\ at the galaxy's photometric redshift and using the magnification $\mu$ reported in Tab.~\ref{tab:SL}. The UV luminosities are shown in the last column of Tab.~\ref{tab:galay_parameters}. All of our galaxies are UV-bright with luminosities $M_{\mathrm{UV}}\lesssim-18.5$. This is however not surprising since we do not have any extremely magnified objects in this sample (cf. Tab.~\ref{tab:SL}), which means that we miss intrinsically faint sources that would require large gravitational magnifications to be detected. Note that this might also be due to selection effects in these JWST observations as discussed in \citet{mason22}.
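Converting an apparent magnitude to $M_{\mathrm{UV}}$ involves the luminosity distance in the adopted cosmology plus the magnification correction. A self-contained sketch follows; the flat-spectrum $(1+z)$ bandpass correction is our assumption, not necessarily the exact convention used here:

```python
import math

H0 = 70.0            # km/s/Mpc, as assumed throughout the paper
OM, OL = 0.3, 0.7    # flat Lambda-CDM
C_KMS = 299792.458   # speed of light in km/s

def luminosity_distance(z, n=10000):
    """Luminosity distance in Mpc for flat Lambda-CDM (trapezoidal integral)."""
    e_inv = lambda zp: 1.0 / math.sqrt(OM * (1.0 + zp) ** 3 + OL)
    h = z / n
    s = 0.5 * (e_inv(0.0) + e_inv(z)) + sum(e_inv(i * h) for i in range(1, n))
    return (1.0 + z) * (C_KMS / H0) * h * s

def absolute_uv_mag(m_ab, z, mu):
    """Magnification-corrected absolute AB magnitude near rest-frame 1500 A.

    M = m - 5 log10(d_L / 10 pc) + 2.5 log10(1 + z) + 2.5 log10(mu).
    The (1+z) term is the bandpass correction for a flat f_nu spectrum;
    the +2.5 log10(mu) term makes the intrinsic magnitude fainter, since
    the observed flux is magnified by mu.
    """
    dist_mod = 5.0 * math.log10(luminosity_distance(z) * 1e6 / 10.0)
    return m_ab - dist_mod + 2.5 * math.log10(1.0 + z) + 2.5 * math.log10(mu)

# Illustrative: an m_AB = 27 source at z = 10 with mu = 2
print(round(absolute_uv_mag(27.0, 10.0, 2.0), 2))
```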
With both UV luminosity and stellar mass in hand, we are now able to compute the mass-to-light ratios of our $z\sim9-16$ galaxies which are shown in Fig.~\ref{fig:M-L_ratio}. We do not find any significant evolution with redshift apart from the fact that the three $z\sim9-10$ sources with Balmer-breaks have significantly higher stellar masses as already discussed in section~\ref{sec:SED-fit_results} (however also cf. section~\ref{sec:BEAGLE_limits}). These galaxies, shown as open circles in Fig.~\ref{fig:M-L_ratio}, seem to form a separate regime in $M_{\star}-M_{\mathrm{UV}}$ space from the galaxies without Balmer breaks. While we expect to underestimate the stellar masses when no rest-frame optical photometry is available (cf. section~\ref{sec:BEAGLE_limits}), the $z\sim9-10$ objects in our sample that do not show any significant Balmer-breaks in their photometry align well with the higher redshift galaxies for which we only have UV photometry, suggesting that this lower-mass population with lower mass-to-light ratios is genuine. Note that our two $z\sim12$ candidates have stellar masses $M_{\star}\sim10^{8.2}\,\mathrm{M}_{\odot}$ which places them roughly between the two stellar mass populations but well within the uncertainty range of the lower-mass population. Our very small sample size however does not allow us to draw conclusions about a redshift evolution of stellar mass at this stage.
We fit the $M_{\star}-M_{\mathrm{UV}}$-relation with a linear function using Markov Chain Monte-Carlo (MCMC) sampling with 100 walkers of $10^4$ steps each with the \texttt{emcee} package \citep[][]{foreman-mackey13}. The best-fit relations are
\begin{equation} \label{eq:m-l_low}
\log\left(\frac{M_{\star}}{\mathrm{M}_{\odot}}\right)=(-0.6\pm0.9)(M_{\mathrm{UV}}+20)-(0.6\pm1.0)+8
\end{equation}
for the majority of our sample (blue line in Fig.~\ref{fig:M-L_ratio}) and
\begin{equation} \label{eq:m-l_balmer}
\log\left(\frac{M_{\star}}{\mathrm{M}_{\odot}}\right)=(-0.6\pm0.2)(M_{\mathrm{UV}}+20)+(1.8\pm0.1)+8
\end{equation}
for the three Balmer-break sources (purple line in Fig.~\ref{fig:M-L_ratio}). Note that we exclude the two $z\sim12$ candidates from the fits since their low stellar mass errors would give them too much relative weight and bias the $M_{\star}-M_{\mathrm{UV}}$-fit. Both slopes are similar and relatively shallow, at least compared to what has been measured at lower redshifts $z\sim6-9$ \citep[e.g.][]{song16,bhatawdekar19,kikuchihara20,furtak21}. Note however that these relations were also measured for fainter galaxies than our sample, at lower redshifts and with rest-frame optical photometry to constrain possible evolved stellar populations.
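The posterior linear fit described above uses the \texttt{emcee} ensemble sampler; the stdlib Metropolis walker below is a minimal stand-in (flat priors, Gaussian likelihood, posterior median) that illustrates the same idea without the dependency:

```python
import math
import random

def mcmc_linear_fit(x, y, yerr, steps=20000, seed=0):
    """Posterior-median fit of y = a*x + b via a single Metropolis walker.

    A minimal sketch of the MCMC linear fit; the paper uses the emcee
    ensemble sampler, which is more robust for multimodal posteriors.
    """
    rng = random.Random(seed)

    def log_like(a, b):
        return -0.5 * sum(((yi - (a * xi + b)) / si) ** 2
                          for xi, yi, si in zip(x, y, yerr))

    a, b = 0.0, 0.0
    ll = log_like(a, b)
    chain = []
    for _ in range(steps):
        a_new = a + rng.gauss(0.0, 0.05)
        b_new = b + rng.gauss(0.0, 0.05)
        ll_new = log_like(a_new, b_new)
        # Metropolis acceptance: always take uphill moves, downhill with prob e^(dll)
        if ll_new >= ll or rng.random() < math.exp(ll_new - ll):
            a, b, ll = a_new, b_new, ll_new
        chain.append((a, b))
    burn = chain[steps // 2:]   # discard the first half as burn-in
    a_med = sorted(s[0] for s in burn)[len(burn) // 2]
    b_med = sorted(s[1] for s in burn)[len(burn) // 2]
    return a_med, b_med
```

For the relations quoted above, `x` would be $M_{\mathrm{UV}}+20$ and `y` the logarithmic stellar mass minus 8, so that the intercept is quoted at $M_{\mathrm{UV}}=-20$.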
\subsection{SFR-stellar mass relations} \label{sec:mass-sfr}
The SFR of galaxies has been established to have a tight correlation with the stellar mass, forming the main sequence (MS) of star-formation \citep{daddi07,noeske07,elbaz07}. The MS has been shown to be in place since high redshifts, and its evolution with redshift has been parametrized out to $z\sim6$ \citep[e.g.,][]{speagle14}. However, the MS is exceptionally difficult to establish at high redshifts due to various systematics and selection effects in compiling representative galaxy samples at such high redshifts \citep[e.g.][]{grazian15,foerster-schreiber20,furtak21}. Although the $z\gtrsim10$ Universe is unknown territory only now being revealed by JWST, we analyse the SFR-$M_{\star}$ relation of our sample and compare it with extrapolations of the relations calibrated at $z<6$.
Fig.~\ref{fig:SFR-Ms-z} shows the SFR-$M_{\star}$ relation (left-hand panel) and the specific SFR (sSFR; $\psi_{s}=\psi M_{\star}^{-1}$; right-hand panel) for our $z\sim9-16$ sample. The filled hexagons use the SFR derived from the \texttt{BEAGLE} SED-fits reported in Tab.~\ref{tab:galay_parameters}, while the empty hexagons use an SFR estimate derived from the UV luminosity following the \citet{kennicutt12} calibration. The points are color-coded according to their redshift or stellar mass in the left- and right-hand panels respectively. We compare these relations with a parametrized form of the redshift-evolution of the SFR-$M_{\star}$ and sSFR-$z$ relations. We use the SFR-$M_{\star}$ MS vs. $z$ parametrization of \citet{speagle14} that has been established in a consistent analysis of a compilation of 25 studies from the literature. It has been calibrated to $z=6$ but we extrapolate it to $z=13$ and to lower masses for comparison purposes. This is shown in the black solid line in the left-hand panel of Fig.~\ref{fig:SFR-Ms-z}. For the sSFR vs. $z$ relation, we in addition use the parametrization measured by \citet{whitaker14} for $\log(M_{\star}/\mathrm{M}_{\odot})\sim9.3$, using SFRs derived from $\beta$-corrected UV luminosities on complete low-mass samples from $0.5<z<2.5$. This is shown as the black solid line in the right-hand panel of Fig.~\ref{fig:SFR-Ms-z}.
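The UV SFR estimate just mentioned follows the \citet{kennicutt12} FUV calibration, $\log\psi=\log(\nu L_{\nu})-43.35$ with $L$ in erg\,s$^{-1}$; a sketch of this conversion from an absolute AB magnitude (treating the 1500\,\AA\ luminosity as the FUV band is an approximation on our part):

```python
import math

PC_CM = 3.0857e18            # cm per parsec
NU_1500 = 2.998e18 / 1500.0  # Hz at rest-frame 1500 A
LOG_CX_FUV = 43.35           # Kennicutt & Evans (2012) FUV calibration

def sfr_uv(m_uv):
    """UV SFR in Msun/yr from an absolute AB magnitude near 1500 A.

    L_nu from the AB zero point at 10 pc, nu * L_nu as the FUV
    luminosity, then log SFR = log(nu L_nu) - 43.35.
    """
    l_nu = 4 * math.pi * (10 * PC_CM) ** 2 * 10 ** (-(m_uv + 48.6) / 2.5)
    return 10 ** (math.log10(NU_1500 * l_nu) - LOG_CX_FUV)

# Illustrative: an M_UV = -20 galaxy with M* ~ 10^7.5 Msun has an sSFR
# of order 100 / Gyr, consistent with the values quoted in the text:
print(round(sfr_uv(-20.0) / 10 ** 7.5 * 1e9, 1))
```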
Our galaxy sample at $z\sim9-16$ shows elevated UV SFR and sSFR compared to the extrapolations from \cite{speagle14} and \cite{whitaker14}. The UV sSFR is of order $\psi_s\gtrsim100\,\mathrm{Gyr}^{-1}$ for the majority of sources, which indicates a very high level of star-formation activity. The SFRs estimated from the \texttt{BEAGLE} SED-fits show a larger scatter, and are lower than the UV-inferred ones, especially for the most massive galaxies, which fall below the MS. These are the $z\sim9-10$ galaxies with significant Balmer-break detections and the $z\sim12$ galaxies already mentioned in sections~\ref{sec:SED-fit_results} and~\ref{sec:mass-light}. However, they still fall onto or above the MS extrapolations when considering their UV-inferred SFRs. Overall, the high sSFRs coupled with the relatively young ages measured in our sample indicate that these UV-bright galaxies are observed during a star-bursting episode. This is in agreement with \citet{whitler23} who suggest that the observed number density of bright galaxies at $z>12$ can be explained if they are being observed during a burst of star formation. Their conclusion is based on the estimation of young stellar populations ($\sim30$\,Myr) of bright $z\sim8-11$ galaxies, whose $z\sim15$ progenitors would be relatively faint. This would imply a rapid decline of the number densities of bright galaxies between $z\sim8-11$ and $z\sim15$, which is in tension with the first results from JWST of little-to-no evolution of the large number densities of bright galaxies \citep[e.g.][]{castellano22,naidu22b,atek23}. Observing the bright $z>12$ galaxies during a star-bursting episode would alleviate this tension. The bursty sSFRs and relatively young ages of our candidates are consistent with this scenario.
Our results are also consistent with the theoretical model of \citet{mason22} which suggests that observed $z\gtrsim10$ galaxies are predominantly extremely star-forming, with young ages, fast formation timescales and high $M_{\rm UV}-M_{\rm halo}$ ratios. This requires high star-formation efficiencies, where a high fraction of the available gas within the dark matter (DM) halo is converted to stars. The elevated star-formation efficiency is also required to explain the apparent observed number-densities of bright galaxies. These, although high compared to current theoretical predictions, are still below the upper limit of the UV luminosity function imposed by a star-formation efficiency of unity \citep{mason22}.
\section{Discussion} \label{sec:discussion}
We have presented our results on the physical parameters of the first lensed $z\sim9-16$ galaxies observed with JWST in SMACS0723 and now discuss these results regarding several issues in this section. First, we use the Balmer-break detections and non-detections to determine if we are missing a significant fraction of galaxies in section~\ref{sec:duty-cycles}. Our analysis is subject to certain limitations which we will discuss in section~\ref{sec:BEAGLE_limits}, in particular regarding the higher redshift objects in our sample. We then furthermore discuss the possibilities of low-redshift interlopers falsely identified as high-redshift sources in section~\ref{sec:lowz_solutions}. Finally, we briefly discuss the implications of our findings for the bigger picture of structure formation in the early Universe in section~\ref{sec:DM}.
\subsection{Star-formation duty cycles in $z\sim9-10$ galaxies} \label{sec:duty-cycles}
As can be seen in Tab.~\ref{tab:galay_parameters} and as pointed out in section~\ref{sec:mass-light}, our galaxy sample comprises very bright galaxies with $M_{\mathrm{UV}}\lesssim-19$, which indicates that they are undergoing phases of intense star-formation at the time of observation as discussed in section~\ref{sec:mass-sfr}. High-redshift galaxies are indeed expected to have not continuous but episodic star-bursting SFHs recurring over short duty cycles \citep[e.g.][]{stark09}, in particular in the low-mass and compact dwarf galaxy regime \citep{atek22a}. In this scenario, evolved stellar populations are gradually built up by each consecutive star-burst. At low redshifts, star-bursts have been found to last for one to a few dynamical timescales \citep[e.g.][]{lehnert96,kennicutt98}.
The fact that we appear to observe galaxies both with and without Balmer-breaks at $z\sim9-10$ therefore enables us to perform the following thought-experiment: The selection window for the $z\sim9-11$ galaxies in our sample \citep{atek23} corresponds to a time $t_{\mathrm{obs}}\sim200$\,Myr during which these galaxies can be observed. Our Balmer-break galaxies have ages up to $10^{8.3}$\,yr which results in $t_{\mathrm{age}}/t_{\mathrm{obs}}\sim1$. Interestingly, $t_{\mathrm{age}}/t_{\mathrm{obs}}\lesssim0.5$ for the two $z\sim9-10$ objects without Balmer-break detections which is relatively close to the ratio of Balmer-break to total $z\sim9-10$ objects of $0.4$. This indicates that we might not necessarily be missing a significant population of dark galaxies but are instead simply looking at episodic star-bursting galaxies at various stages within their star-bursting duty cycles: older galaxies that already have built-up a population of red stars and are currently in between two star-bursts and younger galaxies that are currently within their first or first few duty cycles of star-formation.
Interestingly, the Balmer-break galaxies in our sample show a significant difference between their SED-fitting and UV-luminosity-inferred SFRs (cf. Fig.~\ref{fig:SFR-Ms-z}). These two SFR measurements typically probe different time scales, $\sim10$\,Myr for the \texttt{BEAGLE}-inferred SFRs (cf. section~\ref{sec:BEAGLE-setup}) and $\sim100-200$\,Myr for UV-luminosity-inferred SFRs \citep[e.g.][]{leitherer99,hao11,kennicutt12,calzetti13}. For our Balmer-break objects, where the SED-fitting SFRs are significantly lower than the UV SFRs, this discrepancy could therefore indicate that we might be observing these galaxies within 100-200\,Myr after a strong star-bursting episode \citep[e.g.][]{weisz12,dominguez15,emami19}, which would be in accordance with our hypothesis from above that we might be looking at star-bursting galaxies at various stages within or between star-bursting episodes. Note that since only 5 galaxies here have rest-frame optical photometry available, we of course do not have enough statistics to support this scenario; confirming it will require both much larger samples over different observation fields and spectroscopic observations to robustly measure the current SFRs from the nebular emission lines.
\subsection{The limits of our SED-analysis} \label{sec:BEAGLE_limits}
This study heavily relies on the Bayesian SED-fitting tool \texttt{BEAGLE}, the effectiveness and flexibility of which has been proven in numerous galaxy studies \citep[e.g.][]{chevallard16,plat19,endsley21,furtak21,topping22}. However, we nonetheless estimate our SED-fitting analysis of this $z\sim9-16$ galaxy sample to be prone to three major limitations detailed in the following.
First, as already mentioned in sections~\ref{sec:SED-fit_results} and~\ref{sec:mass-light}, we are limited to rest-frame UV photometry for all of our galaxies beyond $z>10$. Since the rest-frame UV wavelengths only probe the short-lived young and massive stars of a galaxy, which do not necessarily make up the bulk of its mass (though cf. section \ref{sec:duty-cycles}), stellar masses can only accurately be derived from the UV continuum if a galaxy is $\lesssim10$\,Myr old (i.e. the average life-time of massive stars). Beyond that, accurate derivation of the stellar mass requires a continuum detection red-ward of the Balmer-break. This is further corroborated by the fact that the stellar masses of our three galaxies with Balmer-break detections have very small uncertainties whereas the uncertainties on the stellar masses of many other galaxies in our sample are very large (cf. Tab.~\ref{tab:galay_parameters}). More quantitatively, it has been shown that SED-fitting UV photometry alone can underestimate stellar masses by up to 0.6\,dex \citep[][]{furtak21}. Interestingly, that is about the order of magnitude by which our results are offset from the $M_{\star}$-SFR main sequence as shown in Fig.~\ref{fig:SFR-Ms-z} and discussed in section~\ref{sec:mass-sfr}. In order to determine if galaxies at $z\gtrsim10$ do have an evolved stellar population, which is predicted by recent simulations \citep[][]{mason22}, the JWST/NIRCam imaging at wavelengths $\lambda\leq5\,\mu$m will need to be complemented by deep imaging with the \textit{Mid-Infrared Instrument} \citep[MIRI;][]{bouchet12,rieke15}, also aboard the JWST, which has already proven its unprecedented sensitivity at wavelengths $\lambda>5\,\mu$m \citep[][]{ling22}. This will allow us to probe the rest-frame optical emission of these very high redshift galaxies.
The next limitation of our analysis is that the fiducial templates used by \texttt{BEAGLE} are ionization bounded, i.e. assume a Lyman-continuum (LyC) escape fraction of $f_{\mathrm{esc}}=0$. The escape fraction of LyC however has a significant impact on the shape of the SED through the nebular emission: It weakens both nebular line and continuum emission, which makes the SED bluer and the UV-slope steeper \citep[e.g.][]{zackrisson13,zackrisson17,plat19}. Indeed, observations of known LyC leakers at low redshifts clearly show a correlation between UV-slope and LyC escape fraction \citep[e.g.][]{chisholm22}. This is a particularly important effect to take into account in highly star-forming primeval galaxies in the early Universe as it could lead to difficulties reproducing the extremely blue UV-slopes of some of the JWST-detected $z\gtrsim10$ galaxies. In our sample in particular, the bluest UV-slopes are measured for the four highest-redshift candidates at $z\sim12-16$ \citep[down to $\beta\simeq-2.8$;][]{atek23}. To verify if the SED-fits performed in section~\ref{sec:SED-fit} hit this ionization boundary limit, we also fit the photometry of our candidates in a separate \texttt{BEAGLE} run using only stellar continuum templates and show the two bluest galaxies, SMACS\_z16a and SMACS\_z12a, in Fig.~\ref{fig:continuum-fit}. As can be seen from the blue lines in Fig.~\ref{fig:continuum-fit}, this run without nebular continuum provides a similar fit to the initial run that includes nebular emission, which proves the validity of our SED-analysis. Nonetheless, this possible issue needs to be kept in mind in future high-redshift studies with JWST. In order to derive accurate physical parameters of high-redshift galaxies through SED-fitting, future studies will therefore need to include templates that allow for $f_{\mathrm{esc}}>0$, such as those by \citet{plat19}, as was already done for some JWST sources in \citet{topping22}.
Extremely blue UV-slopes such as presented by our highest-redshift candidates at $z\gtrsim12$ ($\beta\lesssim-2.5$) are rarely observed at low redshifts and often in galaxies with peculiar properties such as exotic nebular emission spectra or LyC leakage \citep[e.g.][]{furtak22,chisholm22}. These might however be more common features at very high redshifts $z\gtrsim12$. Indeed, while \citet{cullen22} find the average UV-slopes at $z\sim8-15$ to not be significantly bluer than the slopes at $z\lesssim8$, individual galaxies at very high redshifts up to $z\sim16$ seem to strongly tend towards extreme UV-slopes down to $\beta\sim-3$ \citep{atek23,topping22}.
Finally, another possible issue in our analysis is the metallicity since the templates used in \texttt{BEAGLE} are limited to metallicities $Z\geq0.01\,\mathrm{Z}_{\odot}$ \citep{chevallard16}. As can be seen in Tab.~\ref{tab:galay_parameters}, we find metallicities in the range $Z\sim0.01-0.1\,\mathrm{Z}_{\odot}$ for the majority of our galaxies, which concurs with the findings of \citet{topping22}. Since galaxies at $z\gtrsim15$ represent the first luminous structures formed in the Universe, there is a distinct possibility that in particular our highest-redshift candidates at $z\sim16$ are in fact systems that contain significant amounts of pristine Population~III (Pop.~III) stars, which would have metallicities approaching $Z=0$. If this were the case, we would however not be able to measure it because: (i) the \texttt{BEAGLE} templates do not extend that far in metallicity space and (ii) the rest-frame UV photometry is not sensitive to stellar metallicity \citep[e.g.][]{furtak21}. Note also that simulations have found Pop.~III dominated galaxies to be too faint to be detected with JWST \citep[][]{riaz22}. On the other hand, simulations also suggest that the very first stars would rapidly enrich their surrounding medium in metals and thus make galaxies with significant metallicities possible even at very high redshifts \citep[][]{sanati22}. Accurate measurements of the metallicities of these kinds of galaxies will require ultra-deep spectroscopy with JWST's \textit{Near-Infrared Spectrograph} \citep[NIRSpec;][]{jakobsen22} to search for spectroscopic signatures of Pop.~III stars in the highest redshift galaxies \citep[e.g.][]{cassata13,berzin21,katz22a}.
\subsection{Possible contamination by low-redshift interlopers} \label{sec:lowz_solutions}
Photometric selection of high-redshift galaxies based on broad-band photometry has always been prone to contamination by low-redshift galaxies and cold stars which, if they are red and faint enough, can mimic the colors and brightness of high-redshift galaxies. For example, studies with HST have shown that dropout selection techniques can reach low-redshift contamination levels of up to 40\,\% \citep[e.g.][]{bouwens11}. We therefore need to take this possibility into account. This is in particular true in the light of recent analyses of the JWST ERO and ERS data which have revealed a class of very dusty star-forming spiral galaxies at low redshifts whose color-signature is indistinguishable from that of $z\gtrsim10$ galaxies in the JWST imaging \citep[][]{fudamoto22,nelson22,zavala22,naidu22c,glazebrook22}. Indeed, millimeter observations of the $z\sim17$ candidate detected in the CEERS field by \citet{donnan23} have already revealed it to possibly be a $z\sim5$ dusty star-forming galaxy (DSFG) instead \citep{zavala22,naidu22c}. Note that this could in part explain the disk-like morphologies measured for JWST high-redshift candidates \citep[e.g.][]{naidu22b}, including our sample \citep[][]{atek23}.
In an attempt to assess the level of low-redshift contamination of our sample we conduct additional SED fits with both \texttt{BEAGLE} and \texttt{EAZY} \citep[][]{brammer08} in which we force the photometric redshifts to low values $z<9$ and allow older and dustier galaxies as would be expected at low redshifts. We find that the best-fitting forced low-redshift SEDs in general provide fits of lower significance than the open-redshift ones performed in \citet{atek23}, which clearly favor the high-redshift solutions. They are in particular incapable of reproducing the UV-slopes of the bluest of our candidates, even with extremely strong nebular emission lines (equivalent widths up to $\mathrm{EW}_0\sim2500$\,\AA), as illustrated by the example shown in Fig.~\ref{fig:lowz_nebular-fit}. The forced low-redshift fits in these cases favor stellar ages that butt against the lower boundary ($\log(t_{\mathrm{age}}/\mathrm{yr})\lesssim6.3$ at $3\sigma$), i.e. the dynamical time as explained in section~\ref{sec:BEAGLE-setup}, and prefer solutions with significant extinctions, $A_V\sim2$. Since galaxies younger than their dynamical time are deemed unphysical (cf. sections~\ref{sec:priors} and~\ref{sec:duty-cycles}), we can discard the forced low-redshift fits for these sources. We therefore estimate that the sources in our sample with the steepest UV-slopes $\beta\lesssim-2.5$, which include our highest-redshift candidate SMACS\_z16a at $z\sim16$ and the two $z\sim12$ candidates, are most probably robust high-redshift galaxies. In addition, both SED-fitting codes are incapable of finding a low-redshift solution for the two $z\sim9-10$ objects that have Balmer-break detections (cf. section~\ref{sec:SED-fit_results}), which suggests that the combination of the Lyman- and Balmer-breaks is a robust probe of photometric redshift.
For the remaining three candidates in our sample, this is more unclear and a low-redshift SED that provides a somewhat reasonable alternative fit can be found as can be seen in the example given in Fig.~\ref{fig:lowz_fit}. For these objects we conclude that the high- and low-redshift solutions are hard to distinguish with the current type of data, i.e. broad-band rest-frame UV photometry.
While all galaxies in our sample passed the rigorous selection process described in \citet{atek23} and have clearly preferred high-redshift photometric redshift estimates, the only way to definitively nail down their redshifts is of course deep NIRSpec observations to obtain spectroscopic redshifts, as already done for the first time in \citet{roberts-borsani22} and \citet{williams22}. Another useful indicator would be millimeter and sub-millimeter observations with the \textit{Atacama Large Millimeter/sub-millimeter Array} (ALMA), which would probe the rest-frame far infrared (FIR) emission of these galaxies, as already attempted in \citet{fujimoto22}. Direct detections of rest-frame FIR emission lines or dust continuum can provide strong constraints on the dust and would allow distinguishing between a high-redshift galaxy and a low-redshift DSFG as demonstrated in \citet{zavala22}.
\subsection{Implications for scenarios of galaxy formation in DM halos} \label{sec:DM}
One of the most surprising aspects of the numerous reported discoveries of $z\gtrsim10$ galaxies with JWST has been their exceptionally high stellar masses, UV luminosities and number densities. Considerations of the baryon-to-stellar mass conversion in DM halos have already pointed out tensions with what is allowed by theoretical models based on the $\Lambda$CDM paradigm \citep{boylan-kolchin22,naidu22c}. Within this paradigm, galaxies form in DM halos by conversion of the available baryonic gas given by the cosmic baryon fraction $f_{\rm b}\sim0.16$, resulting in a relation between the stellar and the halo mass $M_{\star}=\epsilon_{\rm SF}\,f_{\rm b}\,M_{\rm h}$, as well as between their number densities,
\begin{equation} \label{eq:Phis}
\Phi_{\star}(M_{\star}, z) = \Phi_{\rm h}(M_{\star}\, f_b^{-1}\, \epsilon_{\rm SF}^{-1},z)
\end{equation}
\noindent
where $\epsilon_{\rm SF}$ is the efficiency of converting baryons to stars, i.e., the star-formation efficiency (SFE). This relation imposes an upper limit on the maximum stellar mass and stellar mass function that can be formed in a DM halo if the efficiency is $100\%$, i.e. all the available baryons are turned into stars \citep{behroozi18}.
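As a minimal numerical sketch of this relation (the halo mass below is an illustrative assumption, not a value measured in this work), the limit $M_{\star}=\epsilon_{\rm SF}\,f_{\rm b}\,M_{\rm h}$ can be evaluated directly:

```python
# Sketch of the LCDM stellar-mass limit M_star = eps_SF * f_b * M_h.
# The halo mass used here is a made-up example value, not a measurement.

F_B = 0.16  # cosmic baryon fraction


def max_stellar_mass(m_halo, eps_sf=1.0, f_b=F_B):
    """Stellar mass formed in a halo of mass m_halo (solar masses)
    for a star-formation efficiency eps_sf; eps_sf = 1 gives the
    absolute upper limit where all available baryons form stars."""
    return eps_sf * f_b * m_halo


m_halo = 1e11  # example halo mass in solar masses
print(max_stellar_mass(m_halo))               # 100% efficiency: 1.6e10 M_sun
print(max_stellar_mass(m_halo, eps_sf=0.02))  # typical z<10 SFE: 3.2e8 M_sun
```

For a given observed stellar mass, inverting this relation gives the minimum halo mass required, which is the basis of the $\Lambda$CDM consistency checks discussed below.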
In Fig.~\ref{fig:N_dens}, we put the stellar mass and the number density of the $z\sim16$ galaxies in this work into perspective by comparing with theoretical predictions, adopting a similar approach as e.g. \citet{behroozi18,mason22,boylan-kolchin22,naidu22c}. For the theoretical predictions, we use the halo mass function (HMF) from \citet{sheth01} and Eq.~\ref{eq:Phis} to predict the stellar mass number density. The upper limits imposed by $\epsilon_{\rm SF} = 100\%$ for $z=10-16$ are marked by the thin solid lines and shaded regions. For comparison, we show the stellar mass number densities at $z\sim10$ from \citet{stefanon21} estimated from Lyman-break galaxies in HST and {\it Spitzer Space Telescope} observations. We also over-plot the stellar mass functions predicted by assuming an SFE of $\sim2\%$, which is indicated by numerous studies at $z<10$ \citep[e.g.,][]{behroozi13,tacchella18,stefanon21,shuntov22}. It is worth noting that quantitatively, this comparison is very sensitive to the choice of halo mass function. Nonetheless, qualitatively, the comparisons are sufficiently robust for the purpose of the discussion here.
The number density of our $z\sim16$ candidates indicates $\epsilon_{\rm SF} \sim 0.15$ -- an SFE elevated by a factor of about 7 compared to $z<10$ measurements. This number density is comparable to that at $z\sim10$ assuming $\epsilon_{\rm SF} \sim 0.02$, suggesting no significant evolution of the stellar mass number density between these epochs. This is in tension with previous studies that find no evolution of the SFE \citep[e.g.,][]{mason15,tacchella18}.
One way to reconcile this is with an evolution of the SFE, as indicated by the thick solid line that marks an SFE of about $15\%$ at $z\sim 16$. However, we should be cautious with constraining the SFE in the early Universe from observations such as the ones presented in this study. This is because, in addition to the issues discussed in sections~\ref{sec:BEAGLE_limits} and~\ref{sec:lowz_solutions}, there is a scatter in the relation between stellar and halo mass and these galaxies might not necessarily be representative of the whole population. Indeed, they are likely extreme cases of highly star-forming and bright galaxies with negligible dust attenuations \citep[e.g.][]{mason22,ferrara22}, as pointed out on multiple occasions in this work. Finally, despite the disagreement with theoretical predictions of the evolution of the number density, our estimates are within the limits imposed by $\Lambda$CDM with $\epsilon_{\rm SF}=1$.
Note that the number density discussed in this section is of course only a very crude estimate since: (i) we lack statistics with only two $z\sim16$ candidates and (ii) the completeness and accurate survey volumes, which probe all the underlying uncertainties in lensing cluster observations, are highly non-trivial to compute and have been shown to have large and complex effects on the derivation of number densities \citep{bouwens17,bouwens22b,atek18,furtak21}, which is beyond the scope of this work.
\section{Conclusion} \label{sec:conclusion}
We presented a detailed SED-fitting analysis of the 10 lensed $z\sim9-16$ galaxy candidates detected in the JWST ERO observations of the SL cluster SMACS0723 in \citet{atek23}. This is the first sample of gravitationally lensed galaxies at these extremely high redshifts observed with JWST. Using the \texttt{BEAGLE} tool with priors derived from photometric and morphological measurements and combining all of the SL information currently available, we carefully derived physical parameters of these galaxies and robustly probed their uncertainty space. We then put these results in relation to each other and computed mass-to-light ratios and mass-SFR relations. Finally, we discussed our results regarding SED-fitting limitations and probed the robustness of our sample regarding low-redshift galaxies that can imitate the photometric signatures of high-redshift galaxies. Our main results are the following:
\begin{itemize}
\item We find the $z\sim9-16$ candidates in our sample to in general have relatively low stellar masses, $M_{\star}\sim10^{6.5}-10^{8.3}\,\mathrm{M}_{\odot}$, young ages $t_{\mathrm{age}}\sim10-100$\,Myr and very low dust attenuations $A_V\sim0.01$ based on their UV-continuum slope.
\item The detection of strong Balmer-breaks in some of our $z\sim10$ candidates results in significantly higher stellar masses ($M_{\star}\sim10^9-10^{10}\,\mathrm{M}_{\odot}$) and ages ($t_{\mathrm{age}}\gtrsim100$\,Myr) which confirms that evolved stellar populations exist even at redshifts $z\gtrsim10$.
\item The non-detection of Balmer-breaks in other galaxies at $z\sim9-10$, for which the F444W-band probes rest-frame optical wavelengths, also indicates the existence of a younger and lower-mass population of galaxies at $z\sim9-10$ the properties of which are similar to the rest of our sample at higher redshifts. There is however also a possibility that these objects are low-redshift interlopers.
\item We find a relatively shallow mass-to-light relation (slopes $\simeq-0.6$) that does not seem to significantly evolve with redshift as the majority of our sources fall onto that relation. The $z\sim9-10$ galaxies with Balmer-break detections form a separate population in $M_{\star}-M_{\mathrm{UV}}$-space but present a very similar mass-to-light relation slope.
\item The $z\sim9-16$ galaxies in our sample have relatively high sSFRs that lie above the main sequence of star-formation extrapolated out to these high redshifts. Combined with the low stellar masses and young ages in our sample, this indicates that these galaxies are going through an episode of intense star-formation while they are observed. This also explains their low mass-to-light ratios.
\item Since we are observing rest-frame UV emission only for the galaxies at $z>10$, we are only probing the current episode of star-formation in these galaxies and are perhaps missing older stellar populations. However, the existence of $z\sim9-10$ galaxies without Balmer-breaks in our sample, whose properties align with the UV-inferred ones of the higher-redshift objects, indicates that young galaxies in their first episode of star-formation are also present at these redshifts.
\item There is no significant evolution of parameters with redshift other than a tendency for the highest-redshift objects ($z\geq12$) to also have the bluest UV-continuum slopes.
\item Two photometric signatures indicate robust high-redshift solutions: Extremely blue UV-slopes ($\beta\lesssim-2.5$) and Balmer-break detections in rest-frame optical bands in addition to the Lyman-break used for selection. Half of our galaxy sample fulfills at least one of these conditions, including our highest-redshift candidate SMACS\_z16a at $z_{\mathrm{phot}}\simeq15.93_{-0.11}^{+0.11}$ and the two $z\sim12$ candidates. For the remaining objects in our sample, possible low-redshift solutions cannot robustly be ruled out with the data currently available.
\item A crude estimate of number density of galaxies at $z\sim16$ inferred from our sample is consistent with expected DM halo mass functions if the SFE is higher ($\epsilon_{\mathrm{SF}}\sim 0.15$) than expected at lower redshifts $z\sim10$.
\end{itemize}
Our results in general demonstrate JWST's unprecedented ability to not only detect but also characterize galaxies in the early Universe at $z\gtrsim10$, which foreshadows many more new discoveries to come. Since our sample does not contain any high-magnification sources, likely due to the small critical area of SMACS0723, we only probe the UV-bright galaxy population in this study. Upcoming JWST observations of other SL clusters, for example those with much larger critical areas than SMACS0723, may therefore yield some highly magnified sources and probe the UV-fainter, lower-mass or less intensely star-forming population of galaxies at $z\gtrsim10$. Since at these redshifts we are observing the rest-frame UV emission with NIRCam, studies such as this one can only robustly probe the currently ongoing episode of star-formation in these galaxies. It is therefore crucial that observations of this nature be complemented by deep MIRI imaging in the future in order to also probe the rest-frame optical emission of these galaxies at $z\gtrsim10$ and better constrain their physical properties. Rest-frame FIR follow-up observations with ALMA will be of great use to distinguish between very high-redshift galaxies and very dusty objects at lower redshifts. Ideally, given the high rate of possible low-redshift contamination, we will need spectroscopic observations with NIRSpec to confirm the high-redshift nature of these galaxies and further constrain their properties. Our highest-redshift candidate, SMACS\_z16a, and the two $z\sim12$ candidates are of particular interest for spectroscopic follow-up observations because of their intriguingly steep UV-slopes, a feature rarely observed at low redshifts, which indicates that their high photometric redshift is genuine. Given these objects' properties, we might well be looking at representatives of the very first generations of galaxies formed in the Universe.
\section*{Acknowledgements}
We warmly thank the anonymous referee for their comments and feedback which greatly helped to improve the paper. LF and AZ acknowledge support by Grant No. 2020750 from the United States-Israel Binational Science Foundation (BSF) and Grant No. 2109066 from the United States National Science Foundation (NSF) and support by the Ministry of Science \& Technology, Israel. HA acknowledges support from CNES. JC acknowledges funding from the FirstGalaxies Advanced Grant from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant agreement No.~789056).
\noindent This study is based on observations obtained with the NASA/ESA/CSA JWST, retrieved from the \texttt{Barbara A. Mikulski Archive for Space Telescopes} (\texttt{MAST}) at the \textit{Space Telescope Science Institute} (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. This study has made use of the \texttt{CANDIDE} Cluster at the \textit{Institut d'Astrophysique de Paris} (IAP), made possible by grants from the PNCG and the region of Île-de-France through the program DIM-ACAV+. This research made use of \texttt{Astropy},\footnote{\url{http://www.astropy.org}} a community-developed core Python package for Astronomy \citep{astropy13,astropy18} as well as the packages \texttt{NumPy} \citep{vanderwalt11}, \texttt{SciPy} \citep{virtanen20} and \texttt{Matplotlib} \citep{hunter07}. This research also made use of {\tt SourceXtractor++}\footnote{\url{https://github.com/astrorama/SourceXtractorPlusPlus}}, an open source software package developed for the \textit{Euclid} satellite project.
\section*{Data Availability}
The data underlying this article are publicly available on the \texttt{Barbara A. Mikulski Archive for Space Telescopes}\footnote{\url{https://archive.stsci.edu/}} (\texttt{MAST}), under program ID 2736.
\bibliographystyle{mnras}
\bibliography{references}
\appendix
|
Title:
Gravitational waves from bubble collisions and fluid motion in strongly supercooled phase transitions |
Abstract: We estimate the gravitational wave spectra generated in strongly supercooled
phase transitions by bubble collisions and fluid motion. We derive analytically
in the thin-wall approximation the efficiency factor that determines the share
of the energy released in the transition between the scalar field and the
fluid. We perform numerical simulations including the efficiency factor as a
function of bubble radius separately for all points on the bubble surfaces to
take into account their different collision times. We find that the efficiency
factor does not significantly change the gravitational wave spectra and show
that the result can be approximated by multiplying the spectrum obtained
without the efficiency factor by its value at the radius $R_{\rm eff} \simeq
5/\beta$, where $\beta$ is the approximate inverse duration of the transition.
We also provide updated fits for the gravitational wave spectra produced in
strongly supercooled transitions from both bubble collisions and fluid motion
depending on the behaviour of the sources after the collision.
| https://export.arxiv.org/pdf/2208.11697 |
\title{Gravitational waves from bubble collisions and fluid motion \\in strongly supercooled phase transitions}
\author{Marek Lewicki}
\email{marek.lewicki@fuw.edu.pl}
\affiliation{Faculty of Physics, University of Warsaw ul.\ Pasteura 5, 02-093 Warsaw, Poland}
\author{Ville Vaskonen}
\email{vvaskonen@ifae.es}
\affiliation{Institut de Fisica d'Altes Energies, Campus UAB, 08193 Bellaterra (Barcelona), Spain}
\section{Introduction}
The first observations of gravitational waves (GWs) by LIGO/Virgo signified the beginning of a new era in astrophysics and cosmology. While up to now all observed events were produced by compact object binaries~\cite{LIGOScientific:2018mvr,LIGOScientific:2020ibl,LIGOScientific:2021usb,LIGOScientific:2021djp}, this new messenger brings hope also for detection of primordial signals in the form of stochastic GW backgrounds. Given the tremendous advancements in sensitivity that are expected throughout a broad frequency spectrum with the upcoming experiments~\cite{Punturo:2010zz,Hild:2010id,Janssen:2014dka,Graham:2016plp,LISA:2017pwj,Graham:2017pmn,Badurina:2019hst,AEDGE:2019nxb,Bertoldi:2021rqk,Alonso:2022oot,Badurina:2021rgt}, the prospects for probing early Universe processes are great, even though the compact object binaries that will contribute to the stochastic GW background make the detection of its primordial components more difficult~\cite{Lewicki:2021kmu}. Interestingly, the recent pulsar timing observations~\cite{NANOGrav:2020bcs,Goncharov:2021oub,Chen:2021rqp,Antoniadis:2022pcn} feature a common-spectrum process which could be an early indication of the upcoming first detection of a stochastic GW background, potentially of primordial origin~\cite{Ellis:2020ena,Blasi:2020mfx,Vaskonen:2020lbd,DeLuca:2020agl,Nakai:2020oit,Ratzinger:2020koh,Kohri:2020qqd,Vagnozzi:2020gtf,Neronov:2020qrl,Blanco-Pillado:2021ygr,Wang:2022wwj,RoperPol:2022iel,Ferreira:2022zzo}.
Many high-energy processes, including phase transitions~\cite{Caprini:2015zlo,Caprini:2019egz}, cosmic strings~\cite{Auclair:2019wcv} and inflation~\cite{Bartolo:2016ami}, occurring in the early Universe may generate a detectable stochastic GW background. In this paper we focus on cosmological first-order phase transitions featured in various particle physics models. They are violent processes in which bubbles of the new phase nucleate, expand and eventually convert the whole Universe into the true vacuum phase~\cite{Coleman:1977py}. Interactions between the expanding bubble walls and the surrounding fluid cause motion and inhomogeneities in the fluid, and both the collisions of the bubble walls and the motion of fluid inhomogeneities source GWs~\cite{Kosowsky:1992vn,Kamionkowski:1993fg}. The resulting GW spectra from these components have been extensively studied with numerical and semi-analytical methods (see e.g.~\cite{Hindmarsh:2019phv,Cutting:2019zws,Lewicki:2020jiv,Lewicki:2020azd,Jinno:2020eqg} for recent progress). These studies indicate that different sources active during the transition can produce different GW spectra.
In order to determine the GW spectrum generated in a phase transition in a given particle physics model, we need to estimate how much each of the GW sources contributes to the final GW spectrum. The vacuum energy released in the transition is split between the gradient energy of the scalar field bubble wall and motion in the fluid. How the total released energy is split depends on the strength of the interactions between the wall and the particles in the fluid, and on the strength of the transition.
In strongly supercooled phase transitions it is possible that the interactions of the bubble wall with the fluid do not stop the wall from accelerating before it collides with other bubbles. In this case most of the released energy is in the bubble walls and the bubble collisions give the dominant contribution to the GW spectrum. This can happen in particular in quasi-conformal models~\cite{Jinno:2016knw,Iso:2017uuu,Marzola:2017jzl,Prokopec:2018tnq,Marzo:2018nov,Baratella:2018pxi,VonHarling:2019rgb,Aoki:2019mlt,DelleRose:2019pgi,Wang:2020jrd,Ellis:2020nnr,Lewicki:2021xku}. If the interactions instead are sufficiently strong, the bubble wall reaches a terminal velocity before the collisions and the majority of the released energy goes into fluid motion. This is the typical case in extensions of the Standard Model featuring polynomial scalar potentials~\cite{Grojean:2006bp,Dorsch:2014qpa,Huang:2016cjm,Artymowski:2016tme,Vaskonen:2016yiu,Dorsch:2016nrg,Ellis:2018mja,Beniwal:2018hyi,Fairbairn:2019xog,Ellis:2019oqb,Lewicki:2021pgr}.
In this paper we derive analytically an efficiency factor that determines how large is the contribution from each of the GW sources. We perform numerical simulations of the phase transition, describing both of the GW sources, bubble walls and fluid motion, in the thin-wall limit, to show how the efficiency factor affects the final GW spectrum. Moreover, we derive analytically the probability density function for the radius at which a given point on the surface of a bubble collides with another bubble and verify the results against our numerical simulations. Finally we also provide updated fits to the spectral shapes of the GW signals that can be produced by all sources active in very strong phase transitions.
\section{Energy budget}
We estimate how the energy released in the bubble expansion is shared between the scalar field gradients and the fluid motion in strongly supercooled phase transitions by studying the bubble expansion under the influence of pressure terms caused by the interactions of the wall with the ambient fluid. We perform the computation consistently in the thin-wall limit, which gives a good description of the system if the bubble reaches ultra-relativistic velocities. The following analysis improves earlier approximations used in the literature~\cite{Ellis:2019oqb,Ellis:2020nnr,Cai:2020djd}.
In the thin-wall limit the energy carried by the bubble walls can be modeled using a simple analytical prescription. This assumes that the bubble walls are spherical shells with a certain surface energy density and the interactions of the walls with the ambient fluid are local. In this limit, neglecting the expansion of the Universe, the bubble can be described by the Lagrangian~\cite{Darme:2017wvu,Ellis:2019oqb,Lewicki:2022nba}
\be \label{eq:lagr}
L = \frac{4\pi}{3} R^3 \Delta P(R) - 4\pi \sigma R^2 \sqrt{1-\dot R^2} \,,
\ee
where $R$ denotes the bubble radius, a dot denotes a derivative with respect to time, $\Delta P(R)$ the pressure difference across the bubble wall, and $\sigma$ the surface tension of the wall. The latter is defined through the scalar potential $V$ as~\cite{Coleman:1977py}
\be
\sigma \equiv \int_0^{\varphi_c} \td \varphi \sqrt{2V(\varphi)} \,,
\ee
where $\varphi_c>0$ denotes the field value at which the potential energy is the same as in the false vacuum that lies at the origin, $V(\varphi_c) = V(0)$.
In terms of the Lorentz factor of the bubble wall, $\gamma = 1/\sqrt{1-\dot R^2}$, the equation of motion arising from the Lagrangian~\eqref{eq:lagr} is given by
\be \label{eq:eom}
\frac{\td\gamma}{\td R} + \frac{2\gamma}{R} = \frac{1}{\sigma} \left[\Delta P + \frac13 \frac{\td \Delta P}{\td R}\right] \,.
\ee
The bubble nucleates at rest, $\gamma=1$, with an initial radius $R_0$. By Eq.~\eqref{eq:eom} we can relate the wall tension to the initial radius as $R_0 = 2\sigma/\Delta P_0$, where $\Delta P_0 \equiv \Delta P(R_0)$. The solution of the equation of motion can then be written as
\be \label{eq:gammaequ}
\gamma = \frac{2 R \Delta P(R)}{3 R_0 \Delta P_0} + \frac{R_0^2}{3R^2} \,.
\ee
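As a consistency check (ours, not part of the original analysis), the closed-form solution can be compared against a direct numerical integration of Eq.~\eqref{eq:eom} for a constant pressure difference, in units where $R_0 = 1$:

```python
# Cross-check (ours): integrate Eq. (eq:eom) numerically for constant
# DeltaP(R) = DeltaP0 and compare with the closed form Eq. (eq:gammaequ).
# Units: R0 = 1 and sigma = R0*DeltaP0/2, so dgamma/dR + 2*gamma/R = 2/R0.

R0 = 1.0

def gamma_exact(R):
    # Eq. (eq:gammaequ) with DeltaP(R) = DeltaP0
    return 2.0 * R / (3.0 * R0) + R0**2 / (3.0 * R**2)

def rhs(R, g):
    return 2.0 / R0 - 2.0 * g / R

def gamma_numeric(R_end, n=20000):
    # fourth-order Runge-Kutta from the nucleation condition gamma(R0) = 1
    R, g = R0, 1.0
    h = (R_end - R0) / n
    for _ in range(n):
        k1 = rhs(R, g)
        k2 = rhs(R + h / 2, g + h * k1 / 2)
        k3 = rhs(R + h / 2, g + h * k2 / 2)
        k4 = rhs(R + h, g + h * k3)
        g += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        R += h
    return g

assert abs(gamma_numeric(10.0) - gamma_exact(10.0)) < 1e-8
```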
The total pressure difference across the bubble wall, accounting for $1\to 1$ scatterings and $1\to N$ splittings at the bubble wall, is given by
\be
\Delta P(R) = \Delta V - \Delta P_{1\to 1} - \Delta P_{1\to N}(R) \,,
\ee
where $\Delta V$ denotes the potential energy difference between the minima. The pressure arising from $1\to 1$ scatterings quickly reaches a constant value in the relativistic limit~\cite{Bodeker:2009qy,Lewicki:2022nba}. Subsequently, the $R$ dependence of the total pressure difference arises only from the velocity dependence of $\Delta P_{1\to N}$, for which we consider two forms. The first one, suggested in~\cite{Bodeker:2017cim,Azatov:2020ufh}, is linear in the Lorentz factor, $\Delta P_{1\to N} = \tilde \Delta P_{1\to N} \gamma$, and the second one, suggested in~\cite{Hoche:2020ysm,BarrosoMancha:2020fay}, is quadratic in the Lorentz factor, $\Delta P_{1\to N} = \tilde \Delta P_{1\to N} \gamma^2$. In both cases $\tilde \Delta P_{1\to N}$ is a constant.
By plugging $\Delta P(R)$ into~\eqref{eq:gammaequ}, we get $\gamma$ as a function of $R$. When $\Delta P_{1\to N} \ll \Delta V - \Delta P_{1\to 1}$ the solution can be approximated by taking $\Delta P(R) \approx \Delta P_0$ in Eq.~\eqref{eq:gammaequ}. Assuming in addition that $R\gg R_0$, the Lorentz factor grows linearly with the radius, $\gamma \approx 2R/(3R_0)$. Eventually, as the bubble wall accelerates, $\gamma$ becomes large enough for the $1\to N$ splittings to be important, $\Delta P_{1\to N} \sim \Delta V - \Delta P_{1\to 1}$, after which it asymptotically reaches the value
\be \label{eq:gammaeq}
\gamma_{\rm eq} \equiv \left[\frac{\Delta V - \Delta P_{1\to 1}}{\tilde \Delta P_{1\to N}} \right]^{\frac1c} \,,
\ee
where $c=1,2$ depending on the scaling of the $1\to N$ pressure, $\Delta P_{1\to N} \propto \gamma^c$. The change from the linear growth to asymptotically constant behaviour occurs when the radius reaches
\be \label{eq:Req}
R_{\rm eq} \equiv \frac32 R_0 \gamma_{\rm eq} \,.
\ee
The solution $\gamma(R)$, expanded to leading order in $\gamma_{\rm eq}\gg1$, can be written as\footnote{The full solution is also analytic, but not illuminating. For $c=1$, the next term in the expansion of $\gamma(R)$ is $\propto \gamma_{\rm eq}^0$, but suppressed by another power of $1+R_{\rm eq}/R$, and for $c=2$, the next term is $\propto \gamma_{\rm eq}^{-1}$.}
\be \label{eq:gammaapr}
\frac{\gamma(R)}{\gamma_{\rm eq}} =
\begin{cases}
\left( 1+\frac{R_{\rm eq}}{R} \right)^{-1} \,, & c = 1 \,, \\
\sqrt{\left(\frac{R_{\rm eq}}{2R}\right)^2+1}-\frac{R_{\rm eq}}{2R} \,, & c = 2 \,.
\end{cases}
\ee
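These leading-order forms can be checked against the full solution of Eq.~\eqref{eq:gammaequ}. The sketch below (ours) solves the implicit equation by bisection, with illustrative inputs $\Delta V - \Delta P_{1\to 1} = 1$, $R_0 = 1$, and $\gamma_{\rm eq} = 100$:

```python
import math

# Solve the full Eq. (eq:gammaequ) with DeltaP(R) = DP_d - Pt*gamma^c by
# bisection and compare with the leading-order forms of Eq. (eq:gammaapr).
# Illustrative parameters (ours): DP_d = DeltaV - DeltaP_{1->1} = 1, R0 = 1.

def gamma_full(R, gamma_eq, c):
    Pt = 1.0 / gamma_eq**c          # Eq. (eq:gammaeq) inverted
    DP0 = 1.0 - Pt                  # pressure at nucleation (gamma = 1)
    lo, hi = 0.0, 2.0 * gamma_eq
    for _ in range(200):
        g = 0.5 * (lo + hi)
        rhs = 2.0 * R * (1.0 - Pt * g**c) / (3.0 * DP0) + 1.0 / (3.0 * R**2)
        lo, hi = (g, hi) if rhs > g else (lo, g)
    return 0.5 * (lo + hi)

def gamma_approx(R, gamma_eq, c):
    R_eq = 1.5 * gamma_eq           # Eq. (eq:Req) with R0 = 1
    if c == 1:
        return gamma_eq / (1.0 + R_eq / R)
    x = R_eq / (2.0 * R)
    return gamma_eq * (math.sqrt(x * x + 1.0) - x)

for c in (1, 2):
    for R in (150.0, 750.0):        # R_eq and 5 R_eq for gamma_eq = 100
        full, appr = gamma_full(R, 100.0, c), gamma_approx(R, 100.0, c)
        assert abs(full - appr) / full < 0.02
```

The agreement at the percent level for both scalings illustrates why the two cases are nearly indistinguishable in practice.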
In Fig.~\ref{fig:gamma} we show $\gamma(R)$ for different values of $\gamma_{\rm eq}$, both for $\Delta P_{1\to N}\propto \gamma$ (solid) and $\Delta P_{1\to N}\propto \gamma^2$ (dashed). In both cases the transition from linear growth, $\gamma\propto R$, to the constant value $\gamma \approx \gamma_{\rm eq}$ is quite fast, and the difference between the two cases is small. The main effect of the scaling of $\Delta P_{1\to N}$ is that it changes $\gamma_{\rm eq}$ and $R_{\rm eq}$.
We define the efficiency factor $\kappa$ as the fraction of the total released energy within a unit solid angle that goes into the bubble wall energy,
\be
\kappa(R) = \frac{3(\gamma R^2 -R_0^2)\sigma}{(R^3 - R_0^3) \Delta V} \,.
\ee
The rest of the released energy, $1-\kappa(R)$, goes into fluid motion. This is a good approximation for strongly supercooled transitions. In weaker transitions one also needs to keep in mind that some of the energy going into the fluid is lost to heating, which reduces the overall GW signal from the fluid motion~\cite{Espinosa:2010hh,Ellis:2019oqb}.
Using the approximation~\eqref{eq:gammaapr}, we can express the efficiency factor as
\be \label{eq:kappaapr}
\kappa(R) \approx \mathcal{K} \, \frac{R_{\rm eq}}{R} \frac{\gamma(R)}{\gamma_{\rm eq}} ,
\ee
where
\be \label{eq:K}
\mathcal{K} \equiv \left[1-\frac{\alpha_\infty}{\alpha}\right] \left[ 1-\frac{1}{\gamma_{\rm eq}^c} \right]
\ee
is a constant, $\mathcal{K} < 1$. The parameters $\alpha$ and $\alpha_\infty$ are defined by scaling with the radiation energy density $\rho_R$ as $\alpha = \Delta V/\rho_R$ and $\alpha_\infty = \Delta P_{1\to 1}/\rho_R$ (see~\cite{Ellis:2019oqb} for more details). Typically for strongly supercooled transitions $\mathcal{K} \approx 1$. As shown in the right panel of Fig.~\ref{fig:gamma}, the efficiency factor remains constant at $R\ll R_{\rm eq}$ and decreases as $\kappa\propto 1/R$ at $R\gg R_{\rm eq}$. In the same way as for $\gamma(R)$, the difference between the cases $\Delta P_{1\to N}\propto \gamma$ and $\Delta P_{1\to N}\propto \gamma^2$ is small.
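The approximation~\eqref{eq:kappaapr} can likewise be checked against the exact definition of $\kappa(R)$; the following sketch (ours, with illustrative parameters $\Delta V = 1$, $\Delta P_{1\to 1} = 0.1$, $\gamma_{\rm eq} = 100$, $R_0 = 1$) does so for $c=1$:

```python
# Check Eq. (eq:kappaapr), kappa(R) ~ K * (R_eq/R) * gamma/gamma_eq, against
# the exact kappa = 3(gamma R^2 - R0^2) sigma / ((R^3 - R0^3) dV).
# Illustrative inputs (ours, R0 = 1, c = 1): dV = 1, dP11 = 0.1, gamma_eq = 100.

dV, dP11, gamma_eq = 1.0, 0.1, 100.0
Pt = (dV - dP11) / gamma_eq          # Eq. (eq:gammaeq), c = 1
DP0 = dV - dP11 - Pt                 # pressure at nucleation (gamma = 1)
sigma = DP0 / 2.0                    # R0 = 2 sigma / DP0 = 1
R_eq = 1.5 * gamma_eq
K = (1.0 - dP11 / dV) * (1.0 - 1.0 / gamma_eq)   # Eq. (eq:K)

def gamma_full(R):
    # bisection for the implicit solution of Eq. (eq:gammaequ)
    lo, hi = 0.0, 2.0 * gamma_eq
    for _ in range(200):
        g = 0.5 * (lo + hi)
        rhs = 2.0 * R * (dV - dP11 - Pt * g) / (3.0 * DP0) + 1.0 / (3.0 * R**2)
        lo, hi = (g, hi) if rhs > g else (lo, g)
    return 0.5 * (lo + hi)

def kappa_exact(R):
    g = gamma_full(R)
    return 3.0 * (g * R**2 - 1.0) * sigma / ((R**3 - 1.0) * dV)

def kappa_approx(R):
    # Eq. (eq:kappaapr) combined with Eq. (eq:gammaapr) for c = 1
    return K / (1.0 + R / R_eq)

for R in (75.0, 150.0, 600.0):
    assert abs(kappa_exact(R) - kappa_approx(R)) / kappa_exact(R) < 0.02
```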
\section{Bubble nucleation and collisions}
Soon after the bubble has nucleated, we can neglect its initial radius, and, if the friction terms are sufficiently small ($\gamma_{\rm eq}\gg 1$), we can approximate that the bubble radius grows as $R = t-t_n$, where $t_n$ denotes the nucleation time of the bubble. Moreover, assuming that the bubbles are much smaller than the Hubble horizon, we can neglect the expansion of the Universe. The expected number of bubbles reaching a given point is then given by
\be
N(t) = \frac{4\pi}{3} \int_{-\infty}^t \!\! \td t' (t-t')^3 \Gamma(t') \,,
\ee
where $\Gamma(t)$ denotes the bubble nucleation rate per unit time and volume, and the probability that the given point still is in the false vacuum at time $t$ is
\be
P(t) = e^{-N(t)} \,.
\ee
Let us consider a bubble nucleation rate $\Gamma(t) = C e^{A(t)}$. Around the time $t_*$ when the transition proceeds, we can expand $A(t)$ to get $\Gamma(t) = C e^{A(t_*) + \beta (t-t_*)} = \Gamma_0 e^{\beta t}$, where $\beta \equiv \td \ln\Gamma/\td t |_{t=t_*}$ and $\Gamma_0 \equiv C e^{A(t_*) - \beta t_*}$. As the transition is not an instantaneous process, the choice of $t_*$ includes some ambiguity. It is convenient to choose $t_*$ by requiring that $P(t_*) = 1/e$, which gives $\Gamma_0 = \beta^4/(8\pi)$, and $N(t) = e^{\beta t}$.
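The normalization $\Gamma_0 = \beta^4/(8\pi)$ can be verified by direct quadrature of the integral defining $N(t)$; a short sketch (ours):

```python
import math

# Numerical check (ours) that Gamma_0 = beta^4/(8 pi) gives N(t) = e^{beta t}:
# N(t) = (4 pi / 3) * Integral_{-inf}^{t} dt' (t - t')^3 * Gamma_0 e^{beta t'}.
beta = 1.0
Gamma0 = beta**4 / (8.0 * math.pi)

def N(t, t_min=-60.0, n=60000):
    # trapezoidal rule; the integrand is negligible below t_min
    h = (t - t_min) / n
    total = 0.0
    for i in range(n + 1):
        tp = t_min + i * h
        w = 1.0 if 0 < i < n else 0.5
        total += w * (t - tp)**3 * Gamma0 * math.exp(beta * tp)
    return 4.0 * math.pi / 3.0 * h * total

for t in (0.0, 1.0):
    assert abs(N(t) - math.exp(beta * t)) < 1e-3
```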
Next, let us consider a point on the surface of a bubble that nucleated at time $t_n$. If the point is still in the false vacuum when the radius of the bubble is $R$, then it has stayed in the false vacuum for the whole time $0 \leq t-t_n < R$. The probability for this is $P(t_n + R)$. So, the probability that a bubble nucleated within time $t_n < t < t_n + \td t_n$ in a volume $V$, and that a point on its surface is still in the false vacuum at radius $R$, is given by $\td t_n \,V \,\Gamma(t_n) P(t_n + R)$. By integrating this over the nucleation time $t_n$ we get the probability density function for the radius at which a bubble surface element collides with the surface of another bubble,
\be \label{eq:pRc}
p(R_c) \propto \int \td t_n \, \Gamma(t_n) P(t_n + R_c) \,,
\ee
which we normalize to unity, $\int \td R_c \,p(R_c) = 1$. For the exponential bubble nucleation rate, $\Gamma(t) \propto e^{\beta t}$, this gives (independently of the prefactor $\Gamma_0$)
\be \label{eq:pRcexp}
p(R_c) = \beta e^{-\beta R_c} \,.
\ee
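This reduction can be confirmed by evaluating the integral in Eq.~\eqref{eq:pRc} numerically (our cross-check):

```python
import math

# Numerical check (ours) of Eq. (eq:pRc) for Gamma(t) = e^{beta t} and
# P(t) = exp(-e^{beta t}):
#   p(R_c) propto Integral dt_n e^{beta t_n} exp(-e^{beta (t_n + R_c)}),
# which should reduce to the exponential distribution beta e^{-beta R_c}.
beta = 1.0

def p_unnorm(Rc, t_min=-30.0, t_max=10.0, n=40000):
    h = (t_max - t_min) / n
    total = 0.0
    for i in range(n + 1):
        tn = t_min + i * h
        w = 1.0 if 0 < i < n else 0.5
        total += w * math.exp(beta * tn - math.exp(beta * (tn + Rc)))
    return h * total

for Rc in (0.5, 1.0, 2.0):
    ratio = p_unnorm(Rc) / p_unnorm(0.0)
    assert abs(ratio - math.exp(-beta * Rc)) < 1e-4
```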
The above result provides a good cross-check for the numerical simulations that we will use for the GW computation. In Fig.~\ref{fig:pRc} the solid curve and the gray band indicate the mean and variance of the $R_c$ distribution obtained from 90 simulations with simulation volume $(16/\beta)^3$, each including at least 70 bubbles. In these simulations we nucleate thin-wall bubbles according to the exponential rate inside a cubic box with periodic boundary conditions, evolve them according to $R = t-t_n$, discretise the bubble surfaces, and find the radius at which each point on a bubble surface collides with a wall of another bubble using the cosine rule. We label the bubbles with the index $j$ and denote the position vectors of the bubble centers by $\vec{x}_j$. Consider a point defined by the angles $\theta$ and $\phi$ on the surface of the bubble $j=j'$. The radius at which that point collides with a surface of another bubble is given by
\be \label{eq:Rc}
R_{c} = \min_{j\neq j'}\left[\frac{d_{j}^2 - \Delta t_{j}^2}{2(d_{j} \cos\theta_{j} - \Delta t_{j})} \right] \,,
\ee
where the minimum is taken over all bubbles, $d_{j}^2 \equiv |\vec{x}_j - \vec{x}_{j'}|^2$ is the distance between the bubble nucleation centers, $\Delta t_{j} \equiv t_{n,j} - t_{n,j'}$ is the time between their nucleation, and $\theta_{j}$ is the angle between the vector $\vec{x}_j - \vec{x}_{j'}$ and the vector corresponding to the angles $\theta$ and $\phi$. As shown in Fig.~\ref{fig:pRc}, the simulation result agrees well with the analytical result~\eqref{eq:pRcexp}.
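Equation~\eqref{eq:Rc} can be verified geometrically: at radius $R_c$ the surface point must lie exactly on the surface of bubble $j$, whose radius at that moment is $R_c - \Delta t_j$. A sketch of such a check (ours):

```python
import math, random

# Consistency check (ours) of Eq. (eq:Rc) for random two-bubble geometries:
# at radius R_c, the surface point of bubble j' must sit at distance
# R_c - Delta t from the center of bubble j.
random.seed(1)
for _ in range(1000):
    d = random.uniform(1.0, 5.0)            # center separation
    dt = random.uniform(-0.5, 0.5)          # nucleation time difference
    theta = random.uniform(0.0, math.pi / 4)
    if d * math.cos(theta) <= dt:           # point never reached by bubble j
        continue
    Rc = (d * d - dt * dt) / (2.0 * (d * math.cos(theta) - dt))
    if Rc < 0 or Rc - dt < 0:               # bubble j not yet nucleated
        continue
    # law of cosines: distance from the surface point to the center of j
    dist = math.sqrt(d * d - 2.0 * d * Rc * math.cos(theta) + Rc * Rc)
    assert abs(dist - (Rc - dt)) < 1e-9
```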
A widely used approximation for the bubble radius upon collision comes from the bubble number density $n_{\rm bubbles} = \int \td t_n \Gamma(t_n) P(t_n)$ which leads to $ R_* = n_{\rm bubbles}^{-1/3} = (8\pi)^{1/3}/\beta$. From $p(R_c)$ we can calculate moments of the bubble radius when a bubble surface element collides with the surface of another bubble, $\langle R_c^n\rangle = \int \td R_c\, R_c^n p(R_c)$. For the exponential bubble nucleation rate this gives
\be \label{eq:Rcn}
\langle R_c^n \rangle = n! \beta^{-n} \,.
\ee
Given that the released energy scales with the radius to the third power, this leads to a different estimate of the average bubble radius $R_* = \langle R_c^3 \rangle^{1/3} = 6^{1/3}/\beta$ more appropriate for computation of the GW spectrum.
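A quick Monte Carlo estimate (ours) of these two characteristic radii, sampling $R_c$ from the exponential distribution of Eq.~\eqref{eq:pRcexp}:

```python
import math, random

# Monte Carlo cross-check (ours): sample R_c from p(R_c) = beta e^{-beta R_c}
# and compare <R_c^3>^{1/3} = 6^{1/3}/beta with the density-based estimate
# R_* = n_bubbles^{-1/3} = (8 pi)^{1/3}/beta.
random.seed(0)
beta = 1.0
samples = [random.expovariate(beta) for _ in range(200000)]
m3 = sum(r**3 for r in samples) / len(samples)          # should be ~ 3! = 6
R_star_moment = m3 ** (1.0 / 3.0)                       # ~ 1.82 / beta
R_star_density = (8.0 * math.pi) ** (1.0 / 3.0) / beta  # ~ 2.93 / beta
print(R_star_moment, R_star_density)
```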
\section{Gravitational waves}
The energy released in the bubble expansion is divided between the bubble wall and the fluid shell that follows right behind the wall. Both the bubble walls and the fluid shells source GWs. We model these sources in the thin-wall limit and calculate the GW spectrum accounting for the efficiency factor $\kappa(R)$ for the bubble collisions and $1-\kappa(R)$ for the fluid motion. The modeling of the fluid motion in the thin-wall limit is based on the assumption that the released energy going to fluid motion is strongly localized in a thin shell. Before the collision this fluid shell is right behind the bubble wall, and after the collision it propagates in the same direction as before the collision. However, depending on how strong the interactions between the fluid and the scalar field are, its velocity can slow down to the speed of sound.
We calculate the GW spectrum as in e.g. Refs.~\cite{Lewicki:2020jiv,Lewicki:2020azd} assuming that, as in the previous section, the bubble nucleation follows an exponential rate per unit volume, $\Gamma \propto e^{\beta t}$. Each of the contributions ($l = $bubbles, fluid) to the resulting energy spectrum of GWs can be expressed as
\be \label{eq:Omega}
\Omega_{{\rm GW},l}(f) = \left[\frac{H}{\beta}\right]^2\left[\frac{\alpha}{1+\alpha}\right]^2 \,S_l(f) \,,
\ee
where
\be \label{eq:S}
S_l(f) \!=\! \left(\frac{2\pi f}{\beta}\right)^3 \frac{3\beta^5}{2 V_s} \int \!\frac{\td\Omega_k}{4\pi} \left[ |C_{l,+}(f)|^2 + |C_{l,\times}(f)|^2 \right]
\ee
encodes the spectral shape of the signal. The integral is over the wavevector $\vec{k}$ directions, and the integrand is $\propto V_s/\beta^5$ if the volume $V_s$ over which we average the GW energy spectrum is sufficiently big.
Using the thin-wall limit, the functions $C_{l,+}$ and $C_{l,\times}$ in the direction $\hat{k} = (0,0,1)$, can be expressed as
\bea \label{eq:Cpc}
C_{l,+,\times}(f) \approx \frac{1}{6\pi} \sum_j &\int_{t_{n,j}} \!\td t\, \td \Omega\, \sin^2\theta\, g_{+,\times}(\phi) \\ &\times R_j^3 f_l(R_j) \,e^{i 2\pi f (t - z_j - R_j\cos\theta)} \,.
\eea
The sum runs over all the bubbles nucleated in the volume $V_s$, $t_{n,j}$ is the time of nucleation of the bubble $j$, $z_j$ is the $z$ coordinate of its center, and $R_j = v_l (t-t_{n,j})$, where $v_l$ is the bubble wall/fluid shell velocity, denotes the radius of the bubble/fluid shell $j$ at time $t$. For the bubble walls we use $v_{\rm bubbles}=1$ both before and after the collision, whereas for the fluid shells we use $v_{\rm fluid}=1$ before the collision and after the collision we consider two cases: $v_{\rm fluid} = 1$ and $v_{\rm fluid} = c_s = 1/\sqrt{3}$. The former is appropriate for very strong transitions~\cite{Jinno:2019jhi}, whereas the latter is realized for weaker transitions~\cite{Jinno:2020eqg}. The functions $g_{+,\times}$ read $g_+(\phi) = \cos(2\phi)$ and $g_\times(\phi) = \sin(2\phi)$.
The function $f_l(R)$ encodes the scaling of the GW source~\cite{Lewicki:2020azd}. For the bubble collision contribution, we follow the results of lattice simulations~\cite{Lewicki:2020jiv,Lewicki:2020azd}, which showed that the maximum of the stress-energy tensor scales as $T_{rr} \propto R^{-\xi}$ after the collision. The power $\xi$ in general depends on the underlying particle physics model. In particular, it was shown in~\cite{Lewicki:2020jiv} that breaking of a global symmetry corresponds to $\xi=2$, while~\cite{Lewicki:2020azd} showed that models in which the phase transition breaks a gauge symmetry correspond to $\xi=3$. Accounting also for the efficiency factor $\kappa$, the $f_l$ function for bubble collisions is given by
\be \label{eq:Edecay}
f_{\rm bubbles}(R) =
\begin{cases}
\kappa(R) \,, & R \leq R_c \,, \\
\kappa(R_c) \left[\frac{R_c}{R}\right]^{\xi+1} \,, & R > R_c \,,
\end{cases}
\ee
where $R_c$ denotes the bubble radius at the moment of collision, $t=t_c$. In contrast with Refs.~\cite{Lewicki:2020jiv,Lewicki:2020azd}, where $R_c$ was determined numerically by the bisection method, we find $R_c$ using Eq.~\eqref{eq:Rc}.
Also for the fluid motion we assume that the maximum of the stress-energy tensor scales as $R^{-\xi}$ after the collision. The function $f_l$ for fluid motion then reads
\be \label{eq:Edecay2}
f_{\rm fluid}(R) =
\begin{cases}
1-\kappa(R) \,, & R \leq R_c \,, \\
\left[1-\kappa(R_c)\right] \left[\frac{R_c}{R}\right]^{\xi+1} \,, & R > R_c \,.
\end{cases}
\ee
In the perfect-fluid description, which assumes that the fluid remains in local equilibrium at all times, the transverse-traceless part of the stress-energy tensor of the fluid reads $T_{ij} = \gamma^2 v_i v_j w$, where $\vec{v}$ is the fluid velocity and $w$ is its enthalpy density. Through the interactions of the fluid with the wall, an overdense fluid shell with radial velocity $v_r > 0$ builds up around the bubble wall. If the wall reaches a terminal velocity, the fluid shell settles into a self-similar profile~\cite{Espinosa:2010hh}. The shell continues to propagate after the bubble wall collides with the wall of another bubble. We track the subsequent evolution of the fluid shell with a simplified lattice simulation assuming spherical symmetry~\cite{Jinno:2020eqg,KURGANOV2000241}. As shown in Fig.~\ref{fig:Efluid}, we find that the maximum of the $rr$ component of the stress-energy tensor reaches the $T_{rr} \propto R^{-3}$ scaling soon after the collision. This motivates us to consider $\xi=3$ for the scaling of the fluid-related GW source. For comparison, we also consider $\xi = 2$, which corresponds to the bulk flow model~\cite{Konstandin:2017sat}.
For certain simple forms of the $f_l$ function the time integral in Eq.~\eqref{eq:Cpc} can be done analytically, which makes the simulation significantly faster. In particular, it can be done analytically if $f_l$ is a broken power-law with integer powers. We consider the form~\eqref{eq:kappaapr} for the efficiency factor $\kappa$ with $c=1$. Strictly speaking, our results then hold for the case $\Delta P_{1\to N} \propto \gamma$. However, since the difference in $\kappa(R)$ between $c=1$ and $c=2$ is very small, our results also give a good approximation of the case $\Delta P_{1\to N} \propto \gamma^2$. The pressure $\Delta P_{1\to N}$ mainly determines the asymptotic radius $R_{\rm eq}$ through Eqs.~\eqref{eq:Req} and \eqref{eq:gammaeq}. In our simulations $R_{\rm eq}$ is an input parameter, and we perform the numerical simulations for several values of $R_{\rm eq}$. We also assume that $\mathcal{K}\approx 1$, which typically holds for strongly supercooled transitions, so that
\be \label{eq:efficiency}
\kappa(R) \approx \frac{1}{1+R/R_{\rm eq}} \,.
\ee
\section{Results}
We perform 90 simulations with simulation volume $(16/\beta)^3$, each including at least 70 bubbles, for a range of $R_{\rm eq}$ values, including both the signal from the bubble walls and that from the surrounding fluid in each of the cases described in the previous section. From the simulations we compute the spectral shape function~\eqref{eq:S}. In each case we fit the data combined from the 90 simulations with a broken power-law spectrum of the form
\be \label{eq:fit}
S_{\rm fit}(f) = \frac{A\,(a+b)^c}{\left[b \!\left(\frac{f}{f_p}\right)^{\!\text{-}\frac{a}{c}} \!+ a \!\left(\frac{f}{f_p}\right)^{\!\frac{b}{c}}\right]^c} \,,
\ee
where $a,b>0$ determine the low- and high-frequency power-law tails of the spectrum, $c>0$ the width of the transition between these power laws, and $f_p$ and $A$ are the peak frequency and amplitude of the spectrum, respectively. The resulting GW spectra are shown in Fig.~\ref{fig:gws} with the solid curves. The color coding indicates different values of $R_{\rm eq}$.
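For reference, the template of Eq.~\eqref{eq:fit} can be implemented directly; the sketch below (ours) verifies that $S_{\rm fit}(f_p) = A$ and that the asymptotic slopes are $f^{a}$ and $f^{-b}$, using the envelope column of Table~\ref{table:fit} as an example:

```python
import math

# The broken power-law template of Eq. (eq:fit).  By construction
# S(f_p) = A, S propto f^a well below the peak and f^{-b} well above it.

def S_fit(f, A, a, b, c, fp):
    x = f / fp
    return A * (a + b)**c / (b * x**(-a / c) + a * x**(b / c))**c

# Example values: the 'envelope' column of Table (table:fit)
A, a, b, c, fp = 0.0378, 3.08, 0.98, 1.91, 1.0

assert abs(S_fit(fp, A, a, b, c, fp) - A) < 1e-12

# low-frequency tail: logarithmic slope -> a
slope = (math.log(S_fit(1e-4, A, a, b, c, fp))
         - math.log(S_fit(1e-5, A, a, b, c, fp))) / math.log(10.0)
assert abs(slope - a) < 0.01

# high-frequency tail: logarithmic slope -> -b
slope = (math.log(S_fit(1e5, A, a, b, c, fp))
         - math.log(S_fit(1e4, A, a, b, c, fp))) / math.log(10.0)
assert abs(slope + b) < 0.01
```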
For the solid curves in Fig.~\ref{fig:gws} the efficiency factor is directly included in the simulation as in Eqs.~\eqref{eq:Edecay} and \eqref{eq:Edecay2}. A commonly used approximation for the effect of the efficiency factor on the GW spectrum is to multiply the spectra obtained for bubble collisions and fluid motion without any efficiency factor by $\kappa(R_{\rm eff})^2$ and by $(1-\kappa(R_{\rm eff}))^2$, respectively. To check this, we have computed the amplitude of the GW spectrum in each case relative to the $R_{\rm eq}$ case that gives the largest amplitude and fitted $R_{\rm eff}$. The data points and resulting fits for all cases are shown in Fig.~\ref{fig:kappapfitplot} and the corresponding fitted values of $R_{\rm eff}$ in the last line of Table~\ref{table:fit}. We find that the effect of the efficiency factor is almost independent of the behaviour of the GW source after the collisions. In all cases our results give $R_{\rm eff} \simeq 5/\beta$, showing that the often used approximation with $R_{\rm eff} = (8\pi)^{1/3}/\beta \approx 2.9/\beta$ slightly underestimates $R_{\rm eff}$. Moreover, the results of applying the fitted efficiency factor are shown in Fig.~\ref{fig:gws} by the dashed curves. For these curves we have used the mean values given in Table~\ref{table:fit} that are obtained by averaging over the fits with different equilibrium radius, except for the amplitude for which we use the strongest signal for each source. We see that the dashed curves agree very well with the fully numerical results shown by the solid curves. This shows that the efficiency factor does not change the shape of the GW spectrum but gives only an overall suppression factor.
To summarize, we have shown that for very strong transitions, $\alpha\gg\alpha_\infty$, the GW spectrum from bubble collisions and from fluid motion, accounting for the distribution of energy between these sources, is given by
\be \label{eq:OmegaFit}
\!\!\Omega_{{\rm GW}}(f) \!=\! \left[\frac{H}{\beta}\right]^2 \!\left[\frac{\kappa(R_{\rm eff}) \, \alpha}{1+\alpha}\right]^2 \!\! \frac{A\,(a+b)^c}{\left[b \!\left(\frac{f}{f_p}\right)^{\!\text{-}\frac{a}{c}} \!+ a \!\left(\frac{f}{f_p}\right)^{\!\frac{b}{c}}\right]^c} \,,
\ee
where the efficiency factor is given by Eq.~\eqref{eq:efficiency}. The fitted values of the parameters $A, a, b, c, f_p$ and $R_{\rm eff}$ are given in Table~\ref{table:fit}. For weaker transitions, $\alpha\lesssim \alpha_\infty$, also the prefactor $\mathcal{K}$, given in Eq.~\eqref{eq:K}, needs to be accounted for, as well as the suppression arising from heating of the fluid around the bubble wall~\cite{Espinosa:2010hh}. In the limit of large wall velocity appropriate for strong transitions this reduction takes a simple form~\cite{Ellis:2019oqb}
\be
\kappa_{\rm fluid}=\frac{\alpha_{\rm eff}}{\alpha}\frac{\alpha_{\rm eff}}{0.73+0.083\sqrt{\alpha_{\rm eff}}+\alpha_{\rm eff}} \,,
\ee
where $\alpha_{\rm eff} = [1-\kappa(R_{\rm eff})]\alpha$.
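A short numerical example (ours; the parameter values $\alpha = 0.5$, $\beta R_{\rm eq} = 1$, $\beta R_{\rm eff} = 5$ are illustrative assumptions, not fits from this work) of evaluating this suppression:

```python
# Our illustrative evaluation: combine kappa(R_eff) from Eq. (eq:efficiency)
# with the kappa_fluid suppression formula for a weaker transition.

def kappa_wall(R_eff, R_eq):
    return 1.0 / (1.0 + R_eff / R_eq)   # Eq. (eq:efficiency)

def kappa_fluid(alpha, R_eff, R_eq):
    a_eff = (1.0 - kappa_wall(R_eff, R_eq)) * alpha
    return (a_eff / alpha) * a_eff / (0.73 + 0.083 * a_eff**0.5 + a_eff)

# hypothetical inputs: alpha = 0.5, beta*R_eff = 5, beta*R_eq = 1
print(kappa_fluid(0.5, 5.0, 1.0))   # ~0.29
```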
\begin{table*}[t]
\centering
\begin{tabular}{@{\extracolsep{8pt}} lccccccc @{}}
& \multicolumn{3}{c}{Bubbles} & \multicolumn{4}{c}{Fluid} \\
\cline{2-4} \cline{5-8} \\[-9pt]
& \multirow{2}{*}{envelope} & \multirow{2}{*}{$T_{rr}\propto R^{-2}$} & \multirow{2}{*}{$T_{rr}\propto R^{-3}$} & \multicolumn{2}{c}{$T_{rr}\propto R^{-2}$} & \multicolumn{2}{c}{$T_{rr}\propto R^{-3}$} \\
\cline{5-6} \cline{7-8} %
& & & & $v_{\rm fluid}=1$ & $v_{\rm fluid}=c_s$ & $v_{\rm fluid}=1$ & $v_{\rm fluid}=c_s$ \\
\hline
$100\,A$ & $3.78 \pm 0.04$ & $5.93\pm 0.05$ & $5.13\pm 0.05$ & $5.94\pm 0.02$ & $3.36\pm 0.01$ & $5.14\pm 0.04$ & $3.64\pm 0.02$ \\
$a$ & $3.08 \pm 0.04$ & $1.03\pm 0.04$ & $2.41\pm 0.10$ & $1.03\pm 0.05$ & $1.00\pm 0.05$ & $2.36\pm 0.09$ & $2.02\pm 0.08$ \\
$b$ & $0.98 \pm 0.05$ & $1.84\pm 0.17$ & $2.42\pm 0.11$ & $1.87\pm 0.18$ & $1.39\pm 0.15$ & $2.36\pm 0.09$ & $1.38\pm 0.06$ \\
$c$ & $1.91 \pm 0.29$ & $1.45\pm 0.34$ & $4.08\pm 0.77$ & $1.39\pm 0.38$ & $0.71\pm 0.26$ & $3.69\pm 0.48$ & $1.48\pm 0.32$ \\
$2\pi f_p/\beta$ & $1.33 \pm 0.19$ & $0.64\pm 0.09$ & $0.77\pm 0.12$ & $0.57\pm 0.04$ & $0.44\pm 0.04$ & $0.66\pm 0.04$ & $0.44\pm 0.04$ \\
$\beta R_{\rm eff}$ & $4.10 \pm 0.31$ & $5.07\pm 0.51$ & $4.81\pm 0.45$ & $5.66\pm 0.51$ & $5.71\pm 0.52$ & $5.34\pm 0.49$ & $5.47\pm 0.50$ \\
\hline
\end{tabular}
\caption{Fitted values for the parametrization of the spectral shape~\eqref{eq:fit} and fitted value of $\beta R_{\rm eff}$ in Eq.~\eqref{eq:efficiency}. The corresponding spectra are shown in Fig~\ref{fig:kappapfitplot}.}
\label{table:fit}
\end{table*}
The GW spectrum today can be obtained from~\eqref{eq:OmegaFit} by accounting for the scaling of the amplitude and frequency with the scale factor~\cite{Lewicki:2020jiv}:\footnote{Here for simplicity while red-shifting we assumed radiation dominated expansion from the transition time up to the matter-radiation equality. For a review of alternative scenarios and their impact on the spectra see Ref.~\cite{Allahverdi:2020bys}.}
\be
\begin{aligned}
&\Omega_{{\rm GW},0} = \frac{1.67\!\times\!10^{-5}}{h^2} \!\left[\frac{100}{g_*}\right]^{\!\frac13}\! \Omega_{{\rm GW}}(f) \,, \\
& f_{p,0} = h_* \left[\frac{f_p}{\beta} \right] \left[\frac{\beta}{H}\right] \,,
\end{aligned}
\ee
where $h$ denotes the dimensionless Hubble constant, $h = 0.674$~\cite{Planck:2018vyg}, and $h_*$ the inverse Hubble time at the transition redshifted to its value today
\be
h_* = 1.65\times 10^{-5}\,{\rm Hz}\, \left[\frac{T_*}{100\,{\rm GeV}}\right] \left[\frac{g_*}{100}\right]^{\frac16} \,.
\ee
Here $T_*$ denotes the temperature after the transition (including reheating) and $g_*$ the effective number of relativistic degrees of freedom at that temperature. At scales larger than the horizon scale at the time of the transition the source is not coherent and, consequently, in standard radiation domination the slope of the spectrum changes to $\Omega_{\rm GW}\propto f^3$ for $f < h_*$~\cite{Caprini:2009fx,Cai:2019cdl}.\footnote{The low frequency slope is also changed by possible modifications of the expansion rate~\cite{Barenboim:2016mjm,Hook:2020phx,Gouttenoire:2021jhk} although the only scenario in which the signal is not diminished is when the modification in question is itself caused by the transition for instance through slow decay of the scalar field leading to a period of matter domination~\cite{Ellis:2020nnr}.}
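For orientation, the redshifted peak frequency can be evaluated for example parameters (our illustration; the inputs $T_* = 100\,$GeV, $g_* = 100$, $\beta/H = 10$ are assumptions, with $2\pi f_p/\beta \approx 0.66$ taken from the fluid columns of Table~\ref{table:fit}):

```python
import math

# Our illustration of the redshifting formulas: f_{p,0} = h_* (f_p/beta)(beta/H).
# Assumed example inputs: T_* = 100 GeV, g_* = 100, beta/H = 10.

T_star_GeV, g_star, beta_over_H = 100.0, 100.0, 10.0
h_star = 1.65e-5 * (T_star_GeV / 100.0) * (g_star / 100.0) ** (1.0 / 6.0)  # Hz
fp_over_beta = 0.66 / (2.0 * math.pi)   # from Table (table:fit), fluid source
f_p0 = h_star * fp_over_beta * beta_over_H
print(f_p0)  # ~1.7e-5 Hz for these assumed inputs
```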
\section{Conclusions}
In this paper we have revisited the energy budget of strong first-order phase transitions to verify its impact on the produced gravitational wave spectra. We have gone beyond the current state of the art by including the efficiency factor as a function of the bubble radius, accounting for the collision time of each point on the bubble surface. We have utilised numerical simulations randomly nucleating bubbles in a three-dimensional box with periodic boundaries and used these to compute the GW spectra. This has allowed us to confirm that the simplified treatment of scaling entire spectra with an efficiency factor computed at some characteristic radius is accurate, as the efficiency factor does not significantly change the spectral shapes. We did, however, find that in order to accurately describe the results, the characteristic radius used in the simplified calculation should be around $R_{\rm eff}\approx 5/\beta$ rather than the usually employed average bubble separation $R_*= (8\pi)^\frac13 /\beta\approx 2.9 /\beta$.
In each simulation we have also taken into account the scaling of the GW sources after the collision in order to provide new fits for the resulting spectra from strongly supercooled transitions. The results are shown in Table~\ref{table:fit} and, starting from the strongest transitions, include bubble collision spectra for both $T_{rr}\propto R^{-3}$, appropriate for gauge symmetry breaking, and $T_{rr}\propto R^{-2}$, appropriate for global symmetry breaking. Going towards slightly weaker transitions, we have provided the spectrum generated by fluid motion with the scaling $T_{rr}\propto R^{-3}$, assuming the fluid remains in the form of relativistic shocks with $v_{\rm fluid}=1$ after the transition. For transitions that are not extremely strong, we have shown results closer to the sound-wave picture, in which the velocity of the fluid quickly relaxes to the speed of sound, $v_{\rm fluid}=c_s$, again assuming the scaling $T_{rr}\propto R^{-3}$. Finally, for illustration, we also provide fluid spectra assuming the scaling $T_{rr}\propto R^{-2}$.
Taking into account that for very relativistic walls the fluid profiles are extremely peaked, we have thus shown that the final GW spectrum is indistinguishable from that of an even stronger transition in which bubble collisions would be the main source. Only for weaker transitions, where hydrodynamical effects change the propagation speed of the fluid shells, does the spectrum diverge from the spectrum arising from bubble collisions.
\begin{acknowledgments}
\vspace{4pt}\noindent\emph{Acknowledgments} -- This work was supported by the Spanish MINECO grants IJC2019-041533-I, FPA2017-88915-P and SEV-2016-0588, the Spanish MICINN (PID2020-115845GB-I00/AEI/10.13039/501100011033), the grant 2017-SGR-1069 from the Generalitat de Catalunya, the Polish National Science Center grant 2018/31/D/ST2/02048, and the Polish National Agency for Academic Exchange within Polish Returns Programme under agreement PPN/PPO/2020/1/00013/U/00001. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya.
\end{acknowledgments}
\bibliography{gw}
Title:
Simons Observatory Focal-Plane Module: Detector Re-biasing With Bias-step Measurements
Abstract: The Simons Observatory is a ground-based cosmic microwave background survey
experiment that consists of three 0.5 m small-aperture telescopes and one 6 m
large-aperture telescope, sited at an elevation of 5200 m in the Atacama Desert
in Chile. SO will deploy 60,000 transition-edge sensor (TES) bolometers in 49
separate focal-plane modules across a suite of four telescopes covering 30/40
GHz low frequency (LF), 90/150 GHz mid frequency (MF), and 220/280 GHz
ultra-high frequency (UHF). Each MF and UHF focal-plane module packages 1720
optical detectors spread across 12 detector bias lines that provide voltage
biasing to the detectors. During observation, detectors are subject to varying
atmospheric emission and hence need to be re-biased accordingly. The re-biasing
process includes measuring detector properties such as the TES resistance
and responsivity in a fast manner. Based on the result, detectors within one
bias line are then biased with a suitable voltage. Here we describe a technique
for re-biasing detectors in the modules using the result from bias-step
measurement.
https://export.arxiv.org/pdf/2208.05997
\keywords{cosmic microwave background, TES bolometers, microwave SQUID multiplexing}
\section{Introduction}
The Simons Observatory (SO) is a suite of ground-based telescopes to be sited in the Atacama Desert in northern Chile at an altitude of 5200 m. SO will focus on measuring the temperature and polarization anisotropy of the cosmic microwave background (CMB) with over 60,000 transition-edge sensor (TES) bolometers spread among one 6 m large aperture telescope (LAT) and three 0.5 m small aperture telescopes (SATs)\cite{forcast}. Forty-nine separate universal focal-plane modules (UFMs) spanning six frequency bands from 30 GHz to 280 GHz host the TES bolometers and readout circuitry based on microwave SQUID multiplexers\cite{Dober_2021}. The 30$/$40 GHz low frequency (LF) UFMs use lenslet-coupled sinuous antennas, while the 90$/$150 GHz mid frequency (MF) and the 220$/$280 GHz ultra-high frequency (UHF) UFMs implement horn-coupled orthomode transducers\cite{{so_20},{Galitzki_2018}}. The SLAC Superconducting Microresonator RF (SMuRF) electronics serve as the room temperature readout electronics\cite{Henderson_2018}.
There are 1720 optical detectors and 36 dark bolometers spread among 12 bias lines for each MF and UHF UFM. A single voltage bias value is shared by $\sim 150$ TES bolometers that connect to each bias line. The setting of the voltage bias has an impact on bolometer performance qualities such as stability, sensitivity and time constant. During observation, detectors are subject to varying atmospheric emission. If the detectors are not re-biased accordingly, the change of optical loading will move the TESes into a different region of their superconducting transition and hence influence their performance. In the worst case, the TESes will become normal or superconducting and lose their sensitivity. The detector re-biasing process should be fast so that normal observation can be resumed soon after. The detector bias level is normally adjusted every 1--2 hours for other CMB telescopes located in the same area, such as ACT\cite{{Choi_2020},{Aiola_2020}} and POLARBEAR\cite{P_A_R_Ade_2014,polar_bear_2020}. We anticipate the frequency of detector re-biasing in SO will be of the same order of magnitude.
In Section $\ref{UFM}$ we discuss the thermal circuit of the UFM and the Joule heating generated in the TES bias circuitry, both of which influence the re-biasing process. Section $\ref{considerations}$ discusses considerations for where on the transition to bias the detectors. We describe two common ways to characterize the TES superconducting transition, the I-V curve and the bias-step measurement, and we introduce a technique of using the result from bias-step measurements to re-bias detectors in Section $\ref{rebias}$.
\section{UFM thermal circuit and Joule heating}
\label{UFM}
The UFM is a complicated assembly\cite{{McCarrick_2021},{Healy_2020}} with a correspondingly complicated thermal circuit. In the UFM, the detector stack is placed atop the optical coupling. The detector stack consists of four individual wafers; starting from the sky side, they are the choke, the waveguide interface plate, the detector wafer and the backshort assembly. TES bolometers are located on the detector wafer. For MF and UHF UFMs, gold bonds are laid from the PdAu pads on the detector wafer to the Au-plated Al horn to provide an extra heat path. Above the detector stack there is a copper ground plane hosting the multiplexer chips. The routing wafer is located above the copper ground plane and provides the 12 bias lines and the shunt resistors. A copper lid and an Au-plated Al heat clamp stack above the routing wafer, pushing the wafers down to ensure good thermal contact between the layers. The left panel of Figure $\ref{fig:ufm.png}$ shows a picture and a CAD model of the UFM.
During observations, UFMs will be mounted to the 100 mK focal plane, with 7 UFMs in one SAT and three per LAT optics tube\cite{Galitzki_2018}. Copper heat straps will connect the UFM heat clamp to the receiver 100 mK stage. The receiver focal plane serves as the thermal bath for the UFM. The right panel of Figure \ref{fig:ufm.png} shows the thermal circuit of the UFM when it is mounted on the receiver focal plane. The middle panel of Figure \ref{fig:ufm.png} illustrates the stacking structure of the UFM by showing the bulk parts that are important for understanding the UFM thermal geometry.
Each TES resistor (with normal resistance $R_n \sim 8~\mbox{m}\Omega$) is in parallel with a shunt resistor ($R_s \sim 400~\mu\Omega$) so that it can be voltage-biased in its transition by the bias current $I_{bias}$. A diagram of the TES bias circuitry can be found in Figure \ref{fig:tes_circuit.png}.
The total Joule heating $P_J^{total}$ from the TES bias circuitry in the UFM can be expressed as:
\begin{equation}
P_J^{total} = P_J^{shunt} + P_J^{TES},
\end{equation}
where $P_J^{shunt}$ comes from all the shunt resistors and $P_J^{TES}$ from the TES bolometers. When we bias the TESes, we first over-bias them into the normal state for a short period of time before dropping them onto the transition. When over-biasing the TESes to their normal state, the $P_J^{shunt}$ generated by the applied bias voltage is estimated to be over 360 nW and $P_J^{TES}$ is over 18 nW for each UFM. This heating increases with the drive-to-normal bias voltage. For comparison, during observation $P_J^{total}$ is $\sim$ 77 nW when operating the TESes around 50${\%}{R_n}$, and the saturation powers of the 90 GHz and 150 GHz detectors are less than 10 pW\cite{McCarrick_2021,Wang_2022}. One thermal path from the routing wafer to the bath goes through the detector wafer, so part of $P_J^{shunt}$ dissipates through the detector wafer. The thermal conductance between the routing wafer and the detector wafer is hard to tune. To limit the influence of $P_J^{shunt}$ on the detector wafer, we can increase the other thermal conductances, for example by increasing the number of copper heat straps connecting the heat clamp and the receiver focal plane. We can also reduce the amount of $P_J^{shunt}$ generated during UFM operation, for example by reducing the number of times the detectors are driven normal. More details about the UFM thermal performance and the dependence of the array heating on the UFM geometry will be discussed in future work.
\section{Considerations for where to bias detectors}
\label{considerations}
During observation, detectors are subject to varying atmospheric emission and hence need to be re-biased accordingly. At a given optical loading and bath temperature, detector properties such as the time constant and responsivity vary with how deep the TES is tuned into the superconducting transition. To read out the changing resistance of each voltage-biased TES, we measure the time-dependent current via an rf-SQUID ammeter. The normal resistance $R_n$ of each TES is measured in the laboratory prior to deployment\cite{Wang_2022}. A conceptually simple measure of the tuned depth into the superconducting transition is $R_{TES}$ as a percentage of $R_n$: ${\%}{R_n}$. This is commonly used as a proxy for other metrics such as signal-to-noise and dynamic range.
There are three main considerations for where to bias detectors: the array sensitivity, the detector time constant and the dynamic range. Section \ref{noise} discusses the expected detector noise and its behavior when a detector is in transition, which relates to the detector sensitivity. When choosing the bias voltage for each bias line, we start at the detector level by choosing a suitable bias voltage for each detector and then move to the array level, deciding the bias voltage for each bias line using a metric; this process is described in Section \ref{metric}.
\subsection{Detector noise in transition and sensitivity}
\label{noise}
When a detector is in transition, the expected total noise can be expressed as:
\begin{equation}
NEP = (NEP_{\gamma}^2+NEP_{G}^2+NEP_{ro}^2+NEP_j^2)^{1/2},
\end{equation}
where $NEP_{\gamma}$ is the bolometer noise from photon loading, $NEP_{G}$ is the thermal carrier noise, $NEP_{ro}$ is the readout noise and $NEP_{j}$ is the Johnson noise.
We consider $NEP_{\gamma}$ as coming from two parts: the fluctuation in the number of photons received by the detector, which follows Poisson statistics \cite{{Richards_94},{Zmuidzinas_03}}, and a second term recovering the Dicke equation for the sensitivity of a radiometer \cite{{Richards_94}}:
\begin{equation}
NEP_{\gamma}^2 = 2\int h\nu P_{\nu} d\nu + 2\int \frac{P_{\nu}^2}{N_{\nu}} d\nu,
\end{equation}
where $P_{\nu}$ is the photon loading absorbed by the detector at optical frequency $\nu$ and $N_{\nu}$ is the effective number of modes received by the detector.
Detector $NEP_{G}$ can be expressed as\cite{Mather_82}:
\begin{equation}
NEP_G^2 = 4k_BFT_c^2G,%
\end{equation}
where $k_B$ is the Boltzmann constant, $F$ is a numerical factor of order unity, $T_c$ is the TES critical temperature and $G$ is the bolometer thermal conductance.
Both $NEP_{G}$ and $NEP_{\gamma}$ are independent of the ${\%}{R_n}$ at which the TES sits within the transition region.
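To make the magnitudes concrete, the two bias-independent noise terms can be evaluated numerically. The sketch below uses illustrative placeholder values ($T_c$, $G$, $F$, the band center and width, and the absorbed loading are assumptions, not measured SO numbers) and a single-mode top-hat band, for which the photon-noise integrals reduce to closed form.

```python
import math

# Illustrative placeholder values (not measured SO numbers):
k_B = 1.380649e-23      # Boltzmann constant [J/K]
h = 6.62607015e-34      # Planck constant [J s]
T_c = 0.160             # TES critical temperature [K]
G = 100e-12             # bolometer thermal conductance [W/K]
F = 0.5                 # numerical factor of order unity
nu0, dnu = 150e9, 40e9  # band center and width [Hz], top-hat band
P_opt = 5e-12           # absorbed optical power [W]

# Thermal carrier noise: NEP_G^2 = 4 k_B F T_c^2 G
nep_G = math.sqrt(4 * k_B * F * T_c**2 * G)

# Photon noise for one mode (N_nu = 1) and a top-hat band:
# NEP_gamma^2 = 2 h nu0 P_opt + 2 P_opt^2 / dnu
nep_gamma = math.sqrt(2 * h * nu0 * P_opt + 2 * P_opt**2 / dnu)

# Quadrature sum of the two bias-independent terms
nep = math.sqrt(nep_G**2 + nep_gamma**2)
print(f"NEP_G = {nep_G*1e18:.1f} aW/rtHz, NEP_gamma = {nep_gamma*1e18:.1f} aW/rtHz")
```

With these placeholders the photon term dominates the thermal carrier term, the typical regime for ground-based CMB observations.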
$NEP_{ro}$ describes the noise from components in the readout chain such as the SQUIDs and amplifiers. $NEP_{j}$ is generated by the thermal fluctuations of the charge carriers in the detector bias circuitry. Both readout noise and Johnson noise naturally come in the form of a noise equivalent current (NEI) but can be translated into NEP using:
\begin{equation}
\label{NEP_NEI}
NEP = \frac{NEI}{|s_i|},
\end{equation}
where $s_i$ is the TES responsivity\cite{Irwin2005}. When a detector is in transition, both $NEP_{ro}$ and $NEP_{j}$ are small compared to $NEP_{\gamma}$ and $NEP_{G}$. Even though $NEP_{ro}$ and $NEP_{j}$ change with $R_{TES}$, their contribution to the total NEP is not significant.
From in-lab measurements, we have observed small variation of an individual detector's NEP at different ${\%}{R_n}$ around the middle of the transition, but larger variation of the NEP across detectors in the array, as shown in the left panel of Figure \ref{fig:NEP.png}.
For each individual detector, the sensitivity $NET$ in units of $\mu K\sqrt{s}$ is a function of the total efficiency $\eta$, the NEP and a conversion factor $r$:
\begin{equation}
\label{NET_NEP}
NET = r^{-1}NEP / \eta,
\end{equation}
where $r$ converts CMB temperature fluctuations in $\mu K \sqrt{s}$ at the input to $aW / \sqrt{Hz}$ at the detector. When deployed in receivers, $\eta$ for each detector is determined by the receiver optics and the detector optical efficiency.
We measure the optical efficiencies of one third of the detectors in each UFM in the laboratory with an internal cold load\cite{Wang_2022}. Another CMB experiment, ACT, saw a large variation of detector optical efficiencies in its LF array in both in-lab testing\cite{Li_2018} and on-site characterization\cite{Li_2021}. Although we have not observed such a large variation of detector optical efficiencies across the array, we do not assume that the detector optical efficiency is constant within one UFM when calculating the array sensitivity in the field. Measurements of UFM optical efficiency will be reported in future work.
\subsection{Bias voltage selection metric}
\label{metric}
Since each MF and UHF UFM has 12 bias lines and the detectors on each bias line share the same bias voltage, we consider the sensitivity at the level of a bias line. For each bias line, the sensitivity can be expressed as:
\begin{equation}
\label{NET_bl}
NET_{bl} = \sqrt{\frac{1}{\sum_{n=1}^{N} \frac{1}{NET_n^2}}} ,
\end{equation}
assuming there are $N$ detectors on that bias line. When choosing the operating bias voltage for each bias line, we could minimize $NET_{bl}$ by constructing a two-dimensional matrix of each detector's NEP at different bias voltages. However, constructing such a matrix and minimizing Equation \ref{NET_bl} requires taking measurements at many bias voltages and is time consuming. Here we introduce a method to approximate a suitable operating bias voltage for each bias line.
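For reference, Equation \ref{NET_bl} is an inverse-quadrature (noise-weighted) combination; a minimal sketch with made-up NET values:

```python
import math

def net_bias_line(nets):
    """Combine per-detector NETs on one bias line in inverse quadrature."""
    return math.sqrt(1.0 / sum(1.0 / n**2 for n in nets))

# Four identical detectors improve the bias-line NET by sqrt(4) = 2:
print(net_bias_line([300.0] * 4))            # ~150 for 300 uK sqrt(s) each
# Adding even a very noisy detector still improves the combination slightly:
print(net_bias_line([300.0, 300.0, 3000.0]))
```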
We anticipate re-biasing detectors every 1--2 hours, similar to other CMB telescopes in the area. When choosing the bias voltage for each re-biasing process, we first consider detector stability as a constraint on the lower bound of the suitable ${\%}{R_n}$. The stable detector operating region will be measured in the laboratory\cite{Wang_2022} before deployment for each UFM. We then consider the fluctuations in the emission of water vapor and the elevation angle of the telescope, which influence the optical loading during the next scan. Our goal is to bias each individual detector such that it stays in transition as the optical loading increases or decreases within one scan. Fluctuations in the emission of water vapor can be predicted from the precipitable water vapor (PWV) forecast\cite{{Cort_s_2020},{apex}}. Studies from past CMB experiments located in the same area have demonstrated the impact of PWV on data quality\cite{D_nner_2012,Hasselfield_2013} and have shown that measurements of PWV around the SO site are consistent between different measuring methods\cite{Morris_2022}. We explore the possibility of utilizing the PWV forecast to guide the choice of operating ${\%}{R_n}$. For example, if the forecast shows rapidly increasing PWV during the next scan, we would purposely bias the detectors at low ${\%}{R_n}$ so that they do not saturate later at higher PWV. In this example, the upper bound of the operating ${\%}{R_n}$ range is determined by the highest PWV in the forecast and the lower bound by detector stability. The transfer function between the PWV and the detector operating range, which also depends on the total efficiency $\eta$, can be measured in the field. Ongoing work in ACT is studying the fluctuation power response in different frequency bands as a function of PWV.
After a range of suitable operating TES ${\%}{R_n}$ is determined for each scan, we consider the sensitivity and time constant of each detector. As demonstrated in Section \ref{noise}, the detector sensitivity is not sensitive to the bias point around the middle of the transition. The detector time constant acts as a low-pass filter on the TES current response and increases at higher ${\%}{R_n}$. The right panel of Figure \ref{fig:NEP.png} shows time constant measurements at different ${\%}{R_n}$. Therefore, within the suitable operating range, we choose to operate each detector at the lowest suitable ${\%}{R_n}$ to achieve the smallest time constant. The suitable bias voltage for each detector is then the voltage that biases it to the lowest suitable ${\%}{R_n}$.
Next we define a metric for a bias line with $N$ detectors in total as:
\begin{equation}
\label{rebias_metric}
\sum_{n=1}^{N}\frac{|V_{bias}-V_n|}{NET_n^2},
\end{equation}
where $V_n$ is the voltage that brings the $n$th detector to its target ${\%}{R_n}$ and $V_{bias}$ is the bias voltage applied to the bias line. Different detectors have different $V_n$ because detector properties, such as $\eta$ and $R_n$, vary across the array. This metric assigns a weight to each detector according to its NET. To favor detectors with low NET and thereby minimize $NET_{bl}$, $V_{bias}$ is chosen such that the metric reaches a local minimum near the (NET-weighted) median of the $V_n$.
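Because the metric is a sum of absolute deviations weighted by $1/NET_n^2$, its minimizer is the NET-weighted median of the $V_n$; a simple grid search makes this concrete (all voltages and NETs below are hypothetical):

```python
import numpy as np

def rebias_metric(v_bias, v_targets, nets):
    """Sum_n |V_bias - V_n| / NET_n^2 for one bias line."""
    return np.sum(np.abs(v_bias - v_targets) / nets**2)

def choose_bias_voltage(v_targets, nets, n_grid=2001):
    """Grid-search the metric over the span of the per-detector targets."""
    grid = np.linspace(v_targets.min(), v_targets.max(), n_grid)
    costs = [rebias_metric(v, v_targets, nets) for v in grid]
    return grid[int(np.argmin(costs))]

# Hypothetical per-detector target voltages and NETs; the last detector
# is noisy, so it is down-weighted and barely pulls the solution.
v_n = np.array([5.0, 5.2, 5.4, 6.5])
net = np.array([250.0, 260.0, 255.0, 900.0])
v_best = choose_bias_voltage(v_n, net)   # lands near 5.2, the weighted median
```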
\section{Re-bias method}
\label{rebias}
There are two common ways to characterize the TES superconducting transition: the I-V curve and the bias-step. In this section we introduce these two methods and discuss their advantages and limitations, and we describe how to use bias-step measurements to re-bias detectors with the metric described in Section \ref{considerations}. We note that a similar re-biasing procedure was used in the SPIDER experiment\cite{Rahlin_2014}.
\label{sec:rebias}
\subsection{I-V curve}
\label{IV}
The I-V curve measurement is commonly used to determine the bias voltage for detectors. Its advantage is that it provides a full measurement of the TES transition region. During an I-V curve, the detectors are first over-biased into the normal state; the bias voltage is then reduced to step the TESes down through their superconducting transition. The change of the TES current is tracked by the rf-SQUID in the multiplexer chip. The high bias voltage needed to drive the TESes into the normal state results in a large amount of Joule heating from the TES bias circuitry (approximately 378 nW when driving a whole array normal) and hence a thermal shock to the system. If the TES normal-branch measurement is used to calibrate the DC offset of the TES current, the TESes need to be over-biased over a range of high bias voltages to ensure there are enough data in the normal state, which further increases the Joule heating.
To reduce the amount of heating, one can extend the cool-down wait time during an I-V curve or take I-V curves on a single bias line at a time. Both approaches result in a longer run time.
\subsection{Bias-step}
\subsubsection{Bias-step measurement}
A bias-step measurement is another way to select the detector bias voltage. In a bias-step acquisition, a small-amplitude square wave is added to the DC bias level. The step can be thought of as a two-point I-V curve. An example bias-step measurement is shown in Figure \ref{fig:bias_step}. The bias-step measurement does not require a measurement of the TES normal state to calibrate the TES current DC offset. Instead, we assume that the TES bias power remains constant around the middle of the TES transition and derive the TES current $I_{TES}$ using the change in bias current $\delta I_{bias}$ and the change in TES current response $\delta I_{TES}$\cite{{Niemack08},{grace2016}}. This assumption can be verified in the laboratory using I-V curve measurements and has been found to be valid when $R_{TES}$ is below 80${\%}{R_n}$.
Figure \ref{fig:tes_circuit.png} shows a diagram of the TES bias circuitry. The bias current $I_{bias}$ can be calculated from the commanded $V_{bias}$ and the known $R_{bias}$ as $I_{bias} = V_{bias} / R_{bias}$. We can then express $I_{TES}$ as a function of the bias power $P_{J}^{TES}$, $I_{bias}$ and $R_s$ as:
\begin{equation}
\label{I_tes}
I_{TES} = I_{bias} - I_{s} = I_{bias} - \frac{V_s}{R_s} = I_{bias} - \frac{P_{J}^{TES}}{R_s I_{TES}} = \frac{1}{2}\left(I_{bias} \pm \sqrt{I_{bias}^2 - 4\frac{P_{J}^{TES}}{R_s} } \right).
\end{equation}
Taking the derivative with respect to $I_{bias}$, assuming $P_{J}^{TES}$ is constant and independent of $I_{bias}$:
\begin{equation}
\frac{\delta I_{TES}}{\delta I_{bias}} = \frac{1}{2} (1 \pm \frac{I_{bias}}{\sqrt{I_{bias}^2 - 4\frac{P_{J}^{TES}}{R_s}}}).
\end{equation}
From the bias-step measurement we obtain $\delta I_{TES}$ and $\delta I_{bias}$, so we can calculate $P_{J}^{TES}$ from measured or known quantities as:
\begin{equation}
P_{J}^{TES} = I_{bias}^2 R_s \frac{(\delta I_{TES}/\delta I_{bias})^2 - (\delta I_{TES}/\delta I_{bias})}{(1-2(\delta I_{TES}/\delta I_{bias}))^2}.
\end{equation}
We can then use $P_{J}^{TES}$ and Equation \ref{I_tes} to calculate $I_{TES}$. $R_{TES}$ can then also be calculated and converted into ${\%}{R_n}$ using the $R_n$ measured in in-lab testing.
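The chain above, from the measured response ratio $\delta I_{TES}/\delta I_{bias}$ to $P_{J}^{TES}$, $I_{TES}$ and $R_{TES}$, can be verified with a round-trip numerical sketch. The circuit values below are illustrative rather than measured; the minus root is taken for $I_{TES}$ because $R_{TES} \gg R_s$, so most of the bias current flows through the shunt.

```python
import math

def pj_from_biasstep(I_bias, R_s, ratio):
    """TES Joule power from the measured ratio dI_TES/dI_bias."""
    return I_bias**2 * R_s * (ratio**2 - ratio) / (1.0 - 2.0 * ratio)**2

def i_tes(I_bias, R_s, P_J):
    """TES branch current (minus root: the TES carries the smaller current)."""
    return 0.5 * (I_bias - math.sqrt(I_bias**2 - 4.0 * P_J / R_s))

# Illustrative operating point: R_TES = 4 mOhm, P_J = 5 pW, R_s = 400 uOhm
R_s, R_tes, P_true = 400e-6, 4e-3, 5e-12
I_t = math.sqrt(P_true / R_tes)        # TES current from P = I^2 R
I_b = I_t * (1.0 + R_tes / R_s)        # bias current = I_TES + I_shunt
ratio = 0.5 * (1.0 - I_b / math.sqrt(I_b**2 - 4.0 * P_true / R_s))

P_rec = pj_from_biasstep(I_b, R_s, ratio)    # recovers P_true
R_rec = P_rec / i_tes(I_b, R_s, P_rec)**2    # recovers R_TES, hence %Rn
```

Note that the response ratio comes out negative, as expected for a TES in transition.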
Because the step size of the bias step is small, the TES responsivity $s_i$ can be approximated in the small-signal limit as \cite{Irwin2005}:
\begin{equation}
s_i = - \frac{1}{I_{TES}(R_{TES}-R_{s})} .
\end{equation}
The flat region of the bias-step response can be used to estimate the detector noise performance. Only the middle of the flat region is used, to avoid the effects of the low-pass filter in the warm SMuRF bias circuitry and of the detector time constant. The NEI can be approximated as the median of the amplitude spectral density of the time-stream data between 5 and 50 Hz. This frequency range avoids both the low-frequency region, which can be contaminated by $1/f$ noise, and the high-frequency roll-off set by the anti-aliasing filter in the SMuRF electronics. The NEP for each detector can then be calculated using Equation \ref{NEP_NEI}.
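Putting the last two pieces together, the NEI measured from the flat region converts to NEP through the small-signal responsivity; the operating-point numbers below are illustrative only.

```python
# Illustrative values (not measured SO numbers):
I_tes = 35.4e-6    # TES current [A]
R_tes = 4e-3       # TES operating resistance [Ohm]
R_s   = 400e-6     # shunt resistance [Ohm]
NEI   = 50e-12     # current noise from the flat region [A/sqrt(Hz)]

s_i = -1.0 / (I_tes * (R_tes - R_s))   # small-signal responsivity [1/V]
NEP = NEI / abs(s_i)                   # NEP = NEI / |s_i|
print(f"NEP = {NEP*1e18:.1f} aW/rtHz")
```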
\subsubsection{Bias-step re-biasing}
Suppose an initial I-V curve has been taken and the detectors have been operating for some time at certain bias voltages; we now want to re-bias them for the next scan. Following the procedure described in Section \ref{metric}, a target operating ${\%}{R_n}$ for each TES is determined.
The bias-step re-biasing process measures the resistances of the TES bolometers in three different states. It starts by taking bias-step measurements on all 12 bias lines at the existing bias voltages. For each TES we call this the initial state, with bias voltage $V_{bias,1}$ and measured TES resistance $R_{TES,1}$. Depending on whether the median $R_{TES,1}$ of the detectors on each bias line is above or below the median 50${\%}{R_n}$ value, a DC offset is applied to decrease or increase the bias voltage on that bias line, driving each TES into an intermediate state with $V_{bias,2}$. A second round of bias-step measurements is then taken and $R_{TES,2}$ is extracted for each TES. We then fit a line through ($V_{bias,1}$, $R_{TES,1}$) and ($V_{bias,2}$, $R_{TES,2}$) to estimate the V-R relation around the middle of the transition for each TES, from which the bias voltage needed to bring each detector to its suitable ${\%}{R_n}$ can be predicted.
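The two-state fit and voltage prediction amount to a two-point linear interpolation; a minimal sketch (all numbers hypothetical):

```python
def predict_rebias_voltage(v1, r1, v2, r2, r_target):
    """Fit a line through the two bias-step states (V, R) and invert it
    to predict the voltage that lands the TES at r_target."""
    slope = (r2 - r1) / (v2 - v1)          # local dR/dV near mid-transition
    return v1 + (r_target - r1) / slope

# Hypothetical states: 55% Rn at V = 5.0 (arb. units), 45% Rn at V = 4.8;
# targeting 50% Rn lands halfway between the two measured states.
v_pred = predict_rebias_voltage(5.0, 0.55, 4.8, 0.45, 0.50)
```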
From the bias-step measurements, the NEP of each detector can be calculated. We use the total efficiency measured from planet scans and the in-lab optical efficiency results to convert the NEP into NET using Equation \ref{NET_NEP}, and we use the metric in Equation \ref{rebias_metric} to find the most suitable bias voltage for each bias line. Each detector is then driven to the final state, and a confirmation round of bias-step measurements is taken at the chosen bias voltage for each bias line.
Figure \ref{fig:demo.png} shows an in-lab demonstration of this method and a comparison with I-V curve measurements. Compared to the I-V method, the bias-step method uses a smaller total bias voltage range and does not require measurements of the normal branch. It therefore provides a fast way to determine the re-bias voltages for all 12 bias lines simultaneously. In the demonstration, the total run time of the bias-step re-biasing was $\sim$ 100 seconds.
\section{Conclusion and discussion}
In this paper, we discussed the considerations for where to bias detectors during observation. We introduced a method to determine the bias voltage for each bias line such that the total array NET is minimized, and we discussed the UFM thermal circuit and the Joule heating from the TES bias circuitry.
We described two methods for characterizing the TES superconducting transition: the I-V curve and the bias-step. The I-V curve method provides a full measurement of the TES transition region; however, it introduces a thermal shock to the system when driving the detectors normal. The bias-step provides an alternative way to measure and re-bias detectors. In the small-signal limit and below 80${\%}{R_n}$, the bias step can be used to measure $R_{TES}$, $s_i$, and the detector NEP. We demonstrated re-biasing detectors using bias-step measurements and the metric we introduced; the in-lab demonstration of the bias-step re-biasing method took $\sim$ 100 seconds to re-bias all 12 bias lines in the UFM.
Further improvements of the detector re-biasing method for the SO UFMs will be explored. Detailed results on the UFM properties mentioned and used in this paper, such as optical efficiency and array sensitivity, together with further discussion of the UFM thermal performance, will be presented in future work.
\begin{acknowledgements}
This work was supported in part by a grant from the Simons Foundation (Award 457687, B.K.) and private funding from universities.
SKC acknowledges support from NSF award AST-2001866.
YL is supported by KIC Postdoctoral Fellowship.
\end{acknowledgements}
\newpage
\bibliography{report} %
\bibliographystyle{spiebib} %
|
Title:
Revisiting Chaplygin gas cosmologies with the recent observations of high-redshift quasars |
Abstract: In this paper, we use the latest observations of quasars covering the
redshift range of $0.04<z<5.1$ to investigate a series of Chaplygin gas models
as candidates for unified dark matter and dark energy. Based on different
combinations of available standard candle and standard ruler data, we put
constraints on the generalized Chaplygin gas (GCG), modified Chaplygin gas
(MCG), new generalized Chaplygin gas (NGCG) and viscous generalized Chaplygin
gas (VGCG) models. Moreover, we apply Jensen-Shannon divergence (JSD),
statefinder diagnostics, and the deviance information criterion (DIC) to
distinguish these CG models, based on the statistical results derived from
Markov chain Monte Carlo method. The results show that (1) The standard ruler
data could provide more stringent constraints on the cosmological parameters of
different CG models considered in this analysis. Interestingly, the matter
density parameter $\Omega_{m}$ and Hubble constant $H_{0}$ derived from the
available data are well consistent with those from the Planck 2018 results; (2)
Based on the statistical criterion JSD, our findings demonstrate good
consistency between Chaplygin gas models and the concordance $\Lambda$CDM model.
However, in the framework of statefinder diagnostics, the GCG and NGCG models
cannot be distinguished from $\Lambda$CDM, while MCG and VGCG models show
significant deviation from $\Lambda$CDM in the present epoch; (3) According to
the statistical criterion DIC, we show that the MCG and VGCG models have
substantial observational support from high-redshift quasars, whereas the GCG
and NGCG models fall into the category of less observational support but cannot
be ruled out.
| https://export.arxiv.org/pdf/2208.06167 |
\title{Revisiting Chaplygin gas cosmologies with the recent observations of high-redshift quasars
}
\author{Jie Zheng\thanksref{addr1}
\and
Shuo Cao\thanksref{e1,addr1} %
\and
Yujie Lian\thanksref{addr1}
\and
Tonghua Liu\thanksref{addr2}
\and
Yuting Liu\thanksref{addr1}
\and
Zong-Hong Zhu\thanksref{e2,addr1,addr3}
}
\thankstext{e1}{e-mail: caoshuo@bnu.edu.cn}
\thankstext{e2}{e-mail: zhuzh@bnu.edu.cn}
\institute{Department of Astronomy, Beijing Normal University, Beijing 100875, China \label{addr1}
\and
School of Physics and Optoelectronic, Yangtze University, Jingzhou 434023, China \label{addr2}
\and
School of Physics and Technology, Wuhan University, Wuhan 430072, China \label{addr3}
}
\date{Received: date / Accepted: date}
\section{Introduction}
\label{intro}
The analysis of various observational data, including Type Ia supernovae (SNe Ia) \cite{SNe1,SNe2}, baryon acoustic oscillations (BAO) \cite{eisenstein20005}, and the cosmic microwave background (CMB) \cite{CMB}, suggests that the present universe is undergoing an accelerated phase of expansion \cite{acc_universe}. Different suggestions have been put forward to understand this phenomenon, including an exotic dark energy (DE) with negative pressure on the right-hand side of the Einstein equation. The earliest and simplest model for DE is the standard $\Lambda$CDM cosmology, which is in good agreement with recent observations but suffers from the well-known coincidence and fine-tuning problems \cite{weinberg1,weinberg2}. Meanwhile, the existence of dark matter (DM), which constitutes the major component of the matter density in our Universe, is the other primary indicator of the limitation of our knowledge of the laws of physics \cite{Cao2021,2022A&A...659L...5C}. More recently, it has been proposed that a fluid called the Chaplygin gas could unify these two uncharted territories, mimicking the effects of DM at early times and of DE at late times \cite{kamenshchik2001}. Specifically, the Chaplygin gas obeys the exotic equation of state:
\begin{equation}
p_{}=-\frac{A}{\rho},
\label{eq:CG}
\end{equation}
where $p$ and $\rho$ denote the pressure and energy density, respectively, and $A$ is a positive constant. Unlike quintessence, which describes the transition from the quasi-exponential expansion of the early universe to a power-law expansion to explain the present acceleration but fails to avoid fine-tuning in addressing the cosmic coincidence problem, the Chaplygin gas (CG) model accounts for the accelerating universe by describing a transition from an epoch filled with dust-like matter to an accelerating universe, effectively behaving as a variable cosmological constant. In particular, the Chaplygin gas behaves as a pressureless fluid at higher redshifts and as a cosmological constant, which promotes expansion, at lower redshifts. In addition, the equation of state of the CG has a well-defined connection with string and brane theories \cite{kamenshchik2001,Bento2003}.
However, the CG model suffers from several serious drawbacks. There is an unexpected blowup in the DM power spectrum \cite{AH32,AH33} in the framework of the CG model, and the model is in disagreement with observations such as Type Ia supernovae \cite{GCG_SNe1,GCG_SNe2,GCG_SNe3}, the X-ray gas mass fraction of clusters \cite{zhu2004}, Hubble parameter-redshift data \cite{GCG_hz} and gamma-ray bursts \cite{GCG_Gamma}.
Therefore, the generalized Chaplygin gas (GCG) model was proposed \cite{Amendola2003,Bento2003}, which is capable of explaining the background dynamics of the early and late universe and is in good agreement with recent observations. Its equation of state, $p=-A/\rho^{\alpha}$, allows a universe that evolves from a phase dominated by non-relativistic matter to a phase dominated by a cosmological constant through an intermediate period.
However, the GCG power spectrum has some undesirable features caused by the adiabatic pressure perturbation produced by a nonzero $\alpha$ \cite{Amendola2003,Thakur2019}. As a result, the authors of \cite{MCG2002,MCG2004} proposed the ``modified'' Chaplygin gas (MCG) model, which interpolates between standard fluids at high energy densities and Chaplygin gas fluids at lower energy densities.
Another generalization, the new generalized Chaplygin gas (NGCG) model, was proposed by \cite{NGCG2006}. Since the equation of state of dark energy still cannot be determined exactly, the authors argued that the GCG model could accommodate any possible X-type dark energy with constant $\omega$, dual to an interacting XCDM parametrization scenario. The NGCG model is not only described by a Chaplygin gas fluid but also exhibits dust-like matter in the early universe and X-type dark energy in the late universe. To this day, the nature of dark energy and dark matter remains unknown, so it is reasonable to consider other forms of dark energy models or to generalize the GCG model further. For instance, \cite{VGCG2006} considered a phenomenological model that combines viscous effects with the features of the GCG, dubbed the viscous generalized Chaplygin gas (VGCG), which is able to eliminate the problems raised by purely dissipative fluids and explain the dynamics of the universe.
With so many CG cosmologies proposed in the literature, it is rewarding to determine which model is strongly supported by the currently available astrophysical probes. There are two general types of distance indicators at present: standard candles (SNe Ia and quasars), which are related to the luminosity distance $D_{L}(z)$, and standard rulers (BAO and CMB), which usually provide information on the large scales of the Universe. In this work, we adopt two different catalogs of data, standard candles and standard rulers, to determine how different samples affect the estimation of cosmological parameters. Here, we turn to a new standard candle compilation of 1598 quasars from X-ray and UV flux measurements with a redshift range $0.036 \leq z \leq 5.1003$ \cite{Risaliti2015}, which has become an effective probe of different cosmological parameters \cite{UV5,Lian2021,xubing2021,khadka2021}, especially the cosmic curvature $\Omega_{k}$ \cite{UV2,UV3} and the cosmic distance duality relation \cite{UV1,liutonghua2020} in the early universe ($z\sim 5$). Besides, the newest SNe Ia sample, ``Pantheon'', consisting of 1048 points
spanning the redshift range $0.01 \leq z \leq 2.3$ \cite{pantheon}, is also adopted in our work as a standard candle. For standard rulers, we consider the angular sizes of 120 compact radio quasars obtained by very-long-baseline interferometry (VLBI) \cite{caoshuo2017AA,Cao20017qsoas,AS3}, covering the redshift range $0.46 \leq z \leq 2.76$; these have been widely used in many cosmological analyses, such as observational constraints on the interaction between the cosmic dark sectors \cite{lixiaolei,AS8,AS5}, on General Relativity and modified gravity theories \cite{Cao20017qsoas,AS1,xutengpeng2018,AS6}, and on the Hubble constant and cosmic curvature \cite{AS2,qijingzhao2021}. Additionally, we adopt 11 BAO data points from BOSS DR12 at $z_{\textrm{eff}}=0.38,0.51,0.61$ \cite{Alam2017}, 6dFGS and SDSS MGS at $z_{\textrm{eff}}=0.122$ \cite{carter2018}, the DES Y1 results at $z_{\textrm{eff}}=0.81$ \cite{DES2018}, and eBOSS DR14 at $z_{\textrm{eff}}=1.52$ \cite{ata2018} and $z_{\textrm{eff}}=2.34$ \cite{dsa2019}. In particular, introducing quasar measurements to constrain cosmological parameters is beneficial for studying the evolution of cosmological models at higher redshifts \cite{Lian:2021tca,AS7,liutonghua2021}.
In this paper, we focus on standard candles and rulers to constrain four Chaplygin gas cosmological models with the goal of investigating the difference between standard candles and standard rulers and distinguishing these Chaplygin gas models by statistical analysis. This paper is organized as follows. In Section 2, we briefly introduce the basic equations of cosmological models, including GCG, MCG, NGCG, and VGCG. In Section 3, we describe the observational data adopted in this work and perform a Markov chain Monte Carlo (MCMC) analysis using different data sets. The results from observational constraints and the corresponding analysis are displayed in Section 4, as well as some statistical techniques of model comparison presented in Section 5. Finally, our conclusions are summarized in Section 6.
\section{Chaplygin gas cosmologies}
\label{sec:me} %
In this section, we describe four types of Chaplygin gas models in a spatially flat universe: the GCG, MCG, NGCG, and VGCG models. Moreover, to obtain stringent constraints on the key cosmological parameters, we use priors on the baryon density parameter $\Omega_{b}$ and the radiation density parameter $\Omega_{r}$ from \cite{planck2018result}.
\subsection{GCG model}
The GCG model, extended from the CG model, has been widely studied to explain the accelerating universe \cite{Zhangjingfei32,Bento2003,zhu2004,Lixiaolei22,lixiaolei50,lixiaolei,Lian2021}. In this model, dark energy and dark matter are unified with the exotic equation of state
\begin{equation}
p_{\mathrm{gcg}}=-\frac{A}{\rho_{\mathrm{gcg}}^{\alpha}},
\label{eq:equation1}
\end{equation}
where $p_{\mathrm{gcg}}$ and $\rho_{\mathrm{gcg}}=\rho_{de}+\rho_{dm}$ denote the pressure and energy density of the Chaplygin gas, respectively, $A$ is a positive constant, and $0 \leq \alpha \leq 1$. When $\alpha=1$ the GCG model reduces to the CG model, and when $\alpha=0$ it reduces to the $\Lambda$CDM model. The energy density of the GCG evolves as
\begin{equation}
\rho_{\mathrm{gcg}}(a)=\rho_{\mathrm{gcg} 0}\left(A_{\mathrm{s}}+\frac{1-A_{\mathrm{s}}}{a^{3(1+\alpha)}}\right)^{\frac{1}{1+\alpha}},
\label{eq:equation2}
\end{equation}
where $a$ is the scale factor, related to the observed redshift by $a=\frac{1}{1+z}$, $A_{\mathrm{s}} \equiv A / \rho_{\mathrm{gcg}0}^{1+\alpha}$ is a dimensionless parameter, and $\rho_{\mathrm{gcg}0}$ is the present value of the GCG energy density. $A_{s}$ can be written in terms of the effective total matter density $\Omega_{m}$ and $\alpha$ as
\begin{equation}
A_{s}=1-\left(\frac{\Omega_{m}-\Omega_{b}}{1-\Omega_{b}}\right)^{1+\alpha}.
\end{equation}
Therefore, we can derive the normalized Hubble parameter $E(z)$ for this model as
\begin{eqnarray}
E^{2}(z)&=&\Omega_{\mathrm{b}}(1+z)^{3}+\Omega_{\mathrm{r}}(1+z)^{4}+ \nonumber \\
&&\left(1-\Omega_{\mathrm{b}}-\Omega_{\mathrm{r}}\right)\left(A_{\mathrm{s}}+\left(1-A_{\mathrm{s}}\right)(1+z)^{3(1+\alpha)}\right)^{\frac{1}{1+\alpha}},
\label{eq:equationGCG}
\end{eqnarray}
where $E(z)=H(z)/H_{0}$ and the parameter set is $\mathbf{p} \equiv\left(\Omega_{m}, A_{\mathrm{s}}, \alpha, H_{0} \right)$.
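As a concrete check of the expressions above, the following Python sketch evaluates $E^{2}(z)$ for the GCG model. The default values $\Omega_{b}=0.0493$ and $\Omega_{r}=9.2\times10^{-5}$ are illustrative Planck-like numbers, not the priors adopted in the fits.

```python
import numpy as np

def A_s_from_matter(Omega_m, alpha, Omega_b):
    """Dimensionless A_s expressed through the effective total matter density."""
    return 1.0 - ((Omega_m - Omega_b) / (1.0 - Omega_b)) ** (1.0 + alpha)

def E2_gcg(z, A_s, alpha, Omega_b=0.0493, Omega_r=9.2e-5):
    """Squared normalized Hubble rate E^2(z) = H^2/H_0^2 for the GCG model:
    baryons + radiation + the unified GCG fluid."""
    zp1 = 1.0 + z
    gcg = (1.0 - Omega_b - Omega_r) * (
        A_s + (1.0 - A_s) * zp1 ** (3.0 * (1.0 + alpha))
    ) ** (1.0 / (1.0 + alpha))
    return Omega_b * zp1 ** 3 + Omega_r * zp1 ** 4 + gcg
```

By construction $E^{2}(0)=1$, and with $\alpha=0$ the fluid term splits into a cosmological constant plus cold dark matter, recovering $\Lambda$CDM.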
\subsection{MCG model}
The MCG model is also a unified dark matter and dark energy model, which is a modification of the GCG model. It has been widely discussed in many perspectives \cite{lixinxu16,lixinxu14,lixinxu15,xulixin2012modified,li2019MCG,debnath2021MCG}. This class of equation of state is expressed as,
\begin{equation}
\label{eq:equtionMCGeos}
p_{\mathrm{mcg}}=B \rho_{\mathrm{mcg}}-\frac{A}{\rho_{\mathrm{mcg}}^{\alpha}},
\end{equation}
where $\rho_{\mathrm{mcg}}=\rho_{\mathrm{de}}+\rho_{\mathrm{dm}}$, $A$ is a positive constant, $B$ is a free parameter, and $0 \leq \alpha \leq 1$. When $B=0$, this model reduces to the GCG model, whereas when $A=0$, it reduces to the standard equation of state of a perfect fluid. In particular, it becomes the $\Lambda$CDM model for $B=0$ and $\alpha=0$, and the CG model for $B=0$ and $\alpha=1$. Imposing energy conservation, we can obtain the energy density as
\begin{equation}
\label{eq:equationmcgrho}
\rho_{\mathrm{mcg}}=\rho_{\mathrm{mcg} 0}\left[A_{s}+\left(1-A_{s}\right) a^{-3(1+B)(1+\alpha)}\right]^{\frac{1}{1+\alpha}},
\end{equation}
where $A_{s}=A/\left[(1+B) \rho_{\mathrm{mcg}0}^{1+\alpha}\right]$ with $B\neq-1$, and $\rho_{\mathrm{mcg}0}$ is the present value of the MCG energy density. Therefore, the normalized Hubble parameter $E(z)=H(z)/H_{0}$ for the MCG model reads
\begin{eqnarray}
\label{eq:equationmcgEz}
E^{2}(z)&=&\Omega_{b} (1+z)^{3}+\Omega_{r} (1+z)^{4}+(1-\Omega_{b}-\Omega_{r})\times \nonumber \\
&&[A_{s}+(1-A_{s}) (1+z)^{3(1+B)(1+\alpha)}]^{\frac{1}{1+\alpha}}.
\end{eqnarray}
For MCG, the parameter set is $\mathbf{p} \equiv\left(\Omega_{m},A_{\mathrm{s}}, B,\alpha,H_{0} \right)$.
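A quick numerical check that the MCG expression above reduces to the GCG one when $B=0$ can be sketched as follows (all parameter values are illustrative):

```python
import numpy as np

def E2_mcg(z, A_s, B, alpha, Omega_b=0.0493, Omega_r=9.2e-5):
    """E^2(z) for the MCG model: baryons + radiation + the unified MCG fluid,
    whose dilution exponent carries the extra factor (1+B)."""
    zp1 = 1.0 + z
    fluid = (1.0 - Omega_b - Omega_r) * (
        A_s + (1.0 - A_s) * zp1 ** (3.0 * (1.0 + B) * (1.0 + alpha))
    ) ** (1.0 / (1.0 + alpha))
    return Omega_b * zp1 ** 3 + Omega_r * zp1 ** 4 + fluid
```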
\subsection{NGCG model}
The NGCG model has been studied in previous work, such as \cite{zhangjingfei34,liaokai2013,Zhangjingfei2019,salahedin2020NGCG,Almamon2021}. In the NGCG model, the exotic background fluid is assumed to interpolate between a dust-dominated epoch, $\rho \sim a^{-3}$, and a dark-energy-dominated epoch, $\rho \sim a^{-3\left(1+\omega\right)}$, so that it can be portrayed as a unification of X-type dark energy and dark matter. Specifically, when $\omega=-1$, the NGCG model reduces to the GCG model, while when $\alpha=0$, it reduces to the XCDM model. The equation of state of the NGCG is given by
\begin{equation}
\label{eq:equationngcg_p}
p_{\mathrm{ngcg}}=-\frac{\tilde{A}(a)}{\rho_{\mathrm{ngcg}}^{\alpha}},
\end{equation}
where $\tilde{A}(a)= -w A a^{-3(1+w)(1+\alpha)}$ is a function of the scale factor, $w$ is the equation-of-state parameter of the X-type dark energy, and $\alpha$ is a free parameter ranging from 0 to 1. The energy density of the NGCG fluid is
\begin{equation}
\label{eq:equationngcg_rho}
\rho_{\mathrm{ngcg}}=\rho_{\mathrm{ngcg}0}a^{-3}\left[1-A_{s}+A_{s} a^{-3 w(1+\alpha)}\right]^{\frac{1}{1+\alpha}},
\end{equation}
where $A_{s}=\frac{1-\Omega_{m}}{1-\Omega_{\mathrm{b}}}$. Finally, $E(z)=H(z)/H_{0}$ for the NGCG model takes the form
\begin{eqnarray}
\label{eq:equationNGCG_Ez}
E^{2}(z) &=&\Omega_{\mathrm{b}}(1+z)^{3}+\Omega_{\mathrm{r}}(1+z)^{4}+(1-\Omega_{\mathrm{b}}-\Omega_{\mathrm{r}})(1+z)^{3} \nonumber \\
&& \times [1-\frac{1-\Omega_{m}}{1-\Omega_{\mathrm{b}}-\Omega_{\mathrm{r}}}(1-(1+z)^{3 w(1+\alpha)})]^{\frac{1}{1+\alpha}}.
\end{eqnarray}
Hence, for the NGCG model, the parameter set that we adopt is $\mathbf{p} \equiv\left(\Omega_{m}, \omega, \alpha, H_{0} \right)$.
\subsection{VGCG model}
To account for the late-time accelerated expansion of the universe, a hybrid model combining viscous effects with the features of the Chaplygin gas, the VGCG model, was studied in \cite{VGCG2006,LiweiVGCG,liwei2015,almada2021VGCG}. This model avoids the causality problems that arise when only a dissipative fluid is considered and alleviates the blowup in the DM power spectrum found in GCG models \cite{almada2021VGCG}. The equation of state of the VGCG model is given by
\begin{equation}
p_{\mathrm{vgcg}}=-A / \rho_{\mathrm{vgcg}}^{\alpha}-\sqrt{3} \zeta \rho_{\mathrm{vgcg}}.
\label{eq:equationvgcg_eos}
\end{equation}
One recovers the standard $\Lambda$CDM model when $\alpha = 0$ and $\zeta=0$, while the model reduces to the GCG model when $\zeta=0$. The energy density then follows as
\begin{eqnarray}
\rho_{\mathrm{vgcg}} &=&\rho_{\mathrm{vgcg} 0}[\frac{B_{s}}{1-\sqrt{3} \zeta}+(1-\frac{B_{s}}{1-\sqrt{3} \zeta}) \times \nonumber \\
&& a^{-3(1+\alpha)(1-\sqrt{3} \zeta)}]^{\frac{1}{1+\alpha}},
\label{eq:equationvgcg_rho}
\end{eqnarray}
where $B_{s}=A / \rho_{\mathrm{vgcg}0}^{1+\alpha}$, $0 \leq B_{s} \leq 1$ and $\zeta<\frac{1}{\sqrt{3}}$. The dimensionless Hubble parameter $E(z)=H(z)/H_{0}$ is expressed as
\begin{eqnarray}
\label{eq:equationvgcg_ez}
E^{2}(z) &=& \Omega_{b} (1+z)^{3}+\Omega_{r} (1+z)^{4} + \nonumber \\
&& ( 1 - \Omega_{b} - \Omega_{r}) \times [\frac{B_{s}}{1-\sqrt{3} \zeta} + \nonumber \\ &&(1-\frac{B_{s}}{1-\sqrt{3} \zeta}) (1+z)^{3(1+\alpha)(1-\sqrt{3} \zeta)}]^{\frac{1}{1+\alpha}}.
\end{eqnarray}
Accordingly, the parameter set of the VGCG model is $\mathbf{p} \equiv ( \Omega_{m},B_{s}, \alpha, \zeta, H_0 )$.
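The VGCG expression for $E^{2}(z)$ can likewise be checked against its limits ($\zeta=0$ recovers GCG; $\alpha=\zeta=0$ recovers $\Lambda$CDM). A minimal sketch with illustrative parameter values:

```python
import numpy as np

def E2_vgcg(z, B_s, alpha, zeta, Omega_b=0.0493, Omega_r=9.2e-5):
    """E^2(z) for the VGCG model; the viscosity zeta rescales both the
    effective cosmological-constant fraction and the dilution exponent."""
    zp1 = 1.0 + z
    x = B_s / (1.0 - np.sqrt(3.0) * zeta)  # effective constant-density fraction
    fluid = (1.0 - Omega_b - Omega_r) * (
        x + (1.0 - x) * zp1 ** (3.0 * (1.0 + alpha) * (1.0 - np.sqrt(3.0) * zeta))
    ) ** (1.0 / (1.0 + alpha))
    return Omega_b * zp1 ** 3 + Omega_r * zp1 ** 4 + fluid
```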
\section{Cosmological observations}
\begin{table}
\centering
\begin{tabular}{cccc}
\hline
z & measurement & value & ref\\
\hline
0.38 & $D_{M}\left(r_{s, \mathrm{fid}} / r_{s}\right)$ & 1512.39 & \cite{Alam2017}\\
0.38 & $H(z)(r_{s} / r_{s,fid})$ & 81.2087 & \cite{Alam2017}\\
0.51 & $D_{M}(r_{s,fid} / r_{s})$ & 1975.22 & \cite{Alam2017}\\
0.51 & $H(z)(r_{s} / r_{s,fid})$ & 90.9029 & \cite{Alam2017}\\
0.61 & $D_{M}(r_{s,fid} / r_{s})$ & 2306.08 & \cite{Alam2017}\\
0.61 & $H(z)(r_{s} / r_{s,fid})$ & 98.9647 & \cite{Alam2017}\\
0.122 & $D_{V}(r_{s,fid} / r_{s})$ & $539 \pm 17$ & \cite{carter2018}\\
0.81 & $D_{A}/r_{s}$ & $10.75 \pm 0.43$ & \cite{DES2018}\\
1.52 & $D_{V}(r_{s,fid} / r_{s})$ & $3843 \pm 147$ & \cite{ata2018}\\
2.34 & $D_{H}/r_{s}$ & 8.86 & \cite{dsa2019}\\
2.34 & $D_{M}/r_{s}$ & 37.41 & \cite{dsa2019}\\
\hline
\end{tabular}
\caption{The latest BAO observations used in this analysis.}
\label{tab:baodata}
\end{table}
In this section, we use three catalogs to constrain cosmological models: (1) a standard candle combination of quasars from X-ray and UV flux measurements and SNe Ia samples; (2) a standard ruler set of intermediate-luminosity radio quasars and BAO data listed in Table~\ref{tab:baodata}; and (3) a combination of standard candles and rulers. Additionally, in Fig.~\ref{fig:distribution}, we display the redshift distributions of standard candles and rulers.
\subsection{QSO[X-ray and UV flux]}
The latest compilation of quasars (QSO[XUV]) with X-ray and UV flux measurements is constructed through the X-ray--UV luminosity ($L_{X}-L_{UV}$) relation \cite{2019NatAs...3..272R} and has been used to constrain cosmological model parameters \cite{Risaliti2015,Lian2021}. The $L_{X}-L_{UV}$ relation is given by
\begin{equation}
\label{eq:LXLUV}
\log (L_{X})=\gamma \log (L_{UV})+\beta,
\end{equation}
where the slope $\gamma$ and intercept $\beta$ are free parameters fitted from the dataset. Expressing the luminosities in terms of fluxes, $F=L / 4 \pi D_{L}(z)^{2}$, Eq. (\ref{eq:LXLUV}) becomes
\begin{equation}
\label{eq:FXUV}
\log \left(F_{X}\right)=\gamma \log \left(F_{U V}\right)+2(\gamma-1) \log \left(D_{L}\right)+(\gamma-1) \log (4 \pi)+\beta,
\end{equation}
where $F_{X}$ and $F_{UV}$ are the quasar X-ray and UV fluxes, respectively, and $D_{L}$ is the luminosity distance, which is determined via
\begin{equation}
\label{eq:DL}
D_{L}(z, \hat{p})=\frac{c(1+z)}{H_{0}} \int_{0}^{z} \frac{d z^{\prime}}{E\left(z^{\prime}\right)},
\end{equation}
where $E(z)$ depends on different cosmological models.
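The luminosity distance integral of Eq. (\ref{eq:DL}) can be sketched numerically as follows; the model enters through a callable $E(z)$, the quadrature is a simple trapezoidal rule, and the default $H_0=70$ km/s/Mpc is an illustrative assumption:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def luminosity_distance(z, E_of_z, H0=70.0, n=2048):
    """Luminosity distance D_L(z) in Mpc for a flat universe:
    D_L = c (1+z)/H0 times the integral of dz'/E(z') from 0 to z,
    evaluated here with a trapezoidal rule on n grid points."""
    zp = np.linspace(0.0, z, n)
    f = np.array([1.0 / E_of_z(x) for x in zp])
    dz = zp[1] - zp[0]
    integral = dz * (f.sum() - 0.5 * (f[0] + f[-1]))
    return C_KMS * (1.0 + z) / H0 * integral
```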
To obtain the likelihood function, we combine Eq. (\ref{eq:FXUV}) and Eq. (\ref{eq:DL}) for a given model, yielding the log-likelihood
\begin{equation}
\ln \mathcal{L}_{F_{X}}=-\frac{1}{2} \sum_{i=1}^{N}\left[\frac{\left[\log \left(F_{X, i}^{\mathrm{obs}}\right)-\log \left(F_{X, i}^{\mathrm{th}}\right)\right]^{2}}{s_{i}^{2}}+\ln \left(2 \pi s_{i}^{2}\right)\right],
\end{equation}
where $\ln =\log _{e}$ and $s_{i}^{2}=\sigma_{i}^{2}+\delta^{2}$, with $\sigma_{i}$ the measurement error on the observed flux and $\delta$ the global intrinsic dispersion. In addition, following \cite{2019NatAs...3..272R,khadka2021}, we employ the QSO X-ray and UV fluxes in the analysis through the chi-square statistic
\begin{equation}
\chi_{F_{X}, \mathrm{min}}^{2}=-2 \ln \mathcal{L}_{\max }-\sum_{i=1}^{1598} \ln \left(2 \pi\left(\sigma_{i}^{2}+\delta_{\text{best-fit}}^{2}\right)\right).
\end{equation}
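The flux-based log-likelihood above can be sketched as follows. The Mpc-to-cm conversion and the parameter values used in testing are illustrative assumptions, not the fitted values:

```python
import numpy as np

MPC_CM = 3.0857e24  # 1 Mpc in cm (approximate)

def lnlike_qso_xuv(logFX_obs, logFUV_obs, DL_mpc, sigma, gamma, beta, delta):
    """Gaussian log-likelihood of the flux-based L_X-L_UV relation,
    with the global intrinsic dispersion delta added in quadrature."""
    logDL = np.log10(DL_mpc * MPC_CM)
    # model prediction for log F_X from log F_UV and the luminosity distance
    logFX_th = (gamma * logFUV_obs + 2.0 * (gamma - 1.0) * logDL
                + (gamma - 1.0) * np.log10(4.0 * np.pi) + beta)
    s2 = sigma ** 2 + delta ** 2
    return -0.5 * np.sum((logFX_obs - logFX_th) ** 2 / s2 + np.log(2.0 * np.pi * s2))
```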
\subsection{SNe Ia}
To use the Pantheon sample, we first determine the observable quantity and its theoretical counterpart. The observable given in the Pantheon sample is a corrected magnitude (see Table~A17 of \cite{pantheon} for details), expressed as
\begin{eqnarray}
Y^{obs} &=& m_B+K \nonumber\\
&=& \mu+M,
\label{eq:Y_obs}
\end{eqnarray}
where $\mu$ is the distance modulus, $m_B$ is the apparent B-band magnitude, and $M$ is the absolute B-band magnitude of fiducial SNe Ia. There is a correction term $K = \alpha x_1-\beta c+\Delta_M+\Delta_B$ that includes the corrections related to four different sources (for more details, see \cite{pantheon}). The theoretical value is given by,
\begin{eqnarray}
Y^{th}&=& 5\log(D_L)+25 +M \nonumber\\
&=&5\log[(1+z)D(z)]+ Y_0,
\label{eq:Y_th}
\end{eqnarray}
where the constant term $Y_0 = M+5\log\left(\frac{cH_0^{-1}}{\mathrm{Mpc}}\right)+25$ is marginalized over following the methodology presented in \cite{Giostri2019}. The chi-square for the Pantheon sample is given by
\begin{equation}
\label{eq:chi2SNe}
\chi^{2}_{\textrm{SNe}}={\Delta \overrightarrow{Y}}^T\cdot\textbf{C}^{-1}\cdot{\Delta \overrightarrow{Y}},
\end{equation}
where $\Delta \overrightarrow{Y}_i = [Y^{obs}_i-Y^{th}(z_i; Y_0,\textbf{p})]$ and the covariance matrix $\textbf{C}$ of the sample includes the contributions from both the statistical and systematic errors \cite{pantheon}.
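A minimal sketch of the covariance-weighted chi-square of Eq. (\ref{eq:chi2SNe}); it uses `numpy.linalg.solve` rather than explicitly inverting $\textbf{C}$, which is the numerically safer design choice for large covariance matrices:

```python
import numpy as np

def chi2_sne(Y_obs, Y_th, cov):
    """Pantheon chi^2: dY^T C^{-1} dY with the full
    statistical-plus-systematic covariance matrix."""
    dY = np.asarray(Y_obs, dtype=float) - np.asarray(Y_th, dtype=float)
    return float(dY @ np.linalg.solve(cov, dY))
```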
\subsection{QSO[AS]}
\cite{Cao20017qsoas} extracted 120 compact radio quasars (QSO[AS]) from a 2.29 GHz VLBI all-sky survey of 613 milliarcsecond ultracompact radio sources, covering the redshift range $0.46 \leq z \leq 2.76$. The observed angular size $\theta_{obs}(z)$ is related to the intrinsic length $\ell_m$ and the angular diameter distance $D_A(z)$ \cite{Cao:2015APJ,AS3}. The corresponding theoretical angular size is defined by
\begin{equation}
\theta_{th}(z)=\frac{\ell_{m}}{D_{A}(z)},
\end{equation}
where $\ell_{m}$ is the intrinsic metric linear size, which is calibrated to $11.03 \pm 0.25$ pc by an independent method introduced in \cite{caoshuo2017AA}, and $D_{A}(z)$ is the angular diameter distance
\begin{equation}
D_{A}(z)=\frac{D_{L}(z)}{(1+z)^{2}},
\end{equation}
where $D_{L}(z)$ is defined by Eq.(\ref{eq:DL}). Therefore, we calculate the chi-square function by
\begin{equation}
\chi_{\textrm{QSO}}^{2}=\sum_{i}^{120} \frac{\left(\theta\left(z_{i} ; \mathbf{p}\right)-\theta_{i}^{\mathrm{obs}}\right)^{2}}{\sigma_{i}^{2}},
\end{equation}
where $\theta(z_{i};\mathbf{p})$ is the theoretical value of the angular size and the total uncertainty is $\sigma^{2}_{i}=\sigma^{2}_{stat,i}+\sigma^{2}_{sys,i}$.
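The angular-size prediction and chi-square can be sketched as follows. Units are an assumption of this sketch: $\ell_m$ in pc (with the calibrated 11.03 pc from the text as default), $D_L$ in Mpc, and $\theta$ in milliarcseconds:

```python
import numpy as np

def theta_model_mas(z, DL_mpc, ell_m_pc=11.03):
    """Predicted angular size theta = ell_m / D_A in milliarcseconds,
    with D_A = D_L / (1+z)^2."""
    DA_pc = DL_mpc * 1.0e6 / (1.0 + z) ** 2
    theta_rad = ell_m_pc / DA_pc
    return theta_rad * (180.0 / np.pi) * 3.6e6  # rad -> mas

def chi2_qso_as(theta_obs, theta_th, sigma):
    """Chi-square over the QSO[AS] angular-size measurements."""
    return float(np.sum((theta_th - theta_obs) ** 2 / np.asarray(sigma) ** 2))
```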
\subsection{BAO}
The BAO data, extracted from galaxy redshift surveys, are also a powerful cosmological probe \cite{eisenstein1998,eisenstein20005}. Here, we use the 11 BAO measurements summarized in Table~\ref{tab:baodata}.
The observable quantities used in the measurements are expressed in terms of the transverse co-moving distance $D_M(z)$, the volume-average angular diameter distance $D_V(z)$, the Hubble rate $H(z)\equiv H_{0}E(z)$, the Hubble distance $D_{H}\equiv c/H(z)$, the sound horizon at the drag epoch $r_s$, and its fiducial value $r_{\rm{s,fid}}$.
In a flat universe, the transverse co-moving distance $D_M(z)$ equals the line-of-sight co-moving distance $D_{C}(z)$, which is expressed as
\begin{equation}
\label{eq:DC}
D_{C}=\frac{c}{H_{0}} \int_{0}^{z} \frac{d z^{\prime}}{E\left(z^{\prime}\right)},
\end{equation}
where $c$ is the velocity of light. The volume-average angular diameter distance is
\begin{equation}
D_{V}(z)=\left[\frac{c z}{H_{0}} \frac{D_{M}^{2}(z)}{E(z)}\right]^{1 / 3}.
\label{eq:equationDVz}
\end{equation}
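The distance combinations entering the BAO observables can be sketched numerically as follows (trapezoidal quadrature; the default $H_0=70$ km/s/Mpc is an illustrative assumption, and the model again enters through a callable $E(z)$):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def comoving_distance(z, E_of_z, H0=70.0, n=2048):
    """Line-of-sight comoving distance D_C (= D_M in a flat universe), in Mpc."""
    zp = np.linspace(0.0, z, n)
    f = np.array([1.0 / E_of_z(x) for x in zp])
    dz = zp[1] - zp[0]
    return C_KMS / H0 * dz * (f.sum() - 0.5 * (f[0] + f[-1]))

def D_V(z, E_of_z, H0=70.0):
    """Volume-averaged distance D_V = [cz/H0 * D_M^2 / E(z)]^(1/3), in Mpc."""
    DM = comoving_distance(z, E_of_z, H0)
    return (C_KMS * z / H0 * DM ** 2 / E_of_z(z)) ** (1.0 / 3.0)
```

For a constant $E(z)=1$, both $D_M$ and $D_V$ reduce to $cz/H_0$, a convenient sanity check.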
Following \cite{ryan2019}, we use the fitting formula of \cite{eisenstein1998} to compute $r_s$ and calculate $r_{s,fid}$ by using the fiducial cosmological model.
Most of the data we use are correlated; however, those from \cite{carter2018,DES2018,ata2018} are uncorrelated. For the uncorrelated data points, the chi-square statistic is expressed as
\begin{equation}
\chi_{\mathrm{BAO}}^{2}(p)=\sum_{i=1}^{N} \frac{\left[A_{\mathrm{th}}\left(p , z_{i}\right)-A_{\mathrm{obs}}\left(z_{i}\right)\right]^{2}}{\sigma_{i}^{2}},
\end{equation}
where $A_{th}(p,z_i)$ denotes the model prediction at the effective redshift, $A_{obs}(z_i)$ is the observed value, and $\sigma_i$ is the error bar of the measurement. For the correlated data points from \cite{Alam2017,dsa2019}, we use
\begin{equation}
\chi_{\mathrm{BAO}}^{2}(p)=\left[\vec{A}_{\mathrm{th}}(p)-\vec{A}_{\mathrm{obs}}\right]^{T} C^{-1}\left[\vec{A}_{\mathrm{th}}(p)-\vec{A}_{\mathrm{obs}}\right],
\label{eq:chi2_BAO}
\end{equation}
where $C^{-1}$ is the inverse of the covariance matrix. The corresponding covariance matrix of \cite{Alam2017} is available from the SDSS website, and that of \cite{dsa2019} is presented in \cite{Caoshulei2020}.
In the cosmological analysis, the probability distributions of the model parameters are obtained with the affine-invariant Markov chain Monte Carlo (MCMC) ensemble sampler emcee \cite{emcee}, using the likelihood
\begin{equation}
\mathcal{L}(\mathbf{p}) \propto e^{-\frac{\chi^{2}(\mathbf{p})}{2}},
\end{equation}
where $\mathbf{p}$ is the set of model parameters of the cosmological model under consideration.
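The analysis itself uses the affine-invariant emcee sampler; purely as a self-contained illustration of how the likelihood $\mathcal{L}(\mathbf{p}) \propto e^{-\chi^{2}(\mathbf{p})/2}$ drives the parameter exploration, here is a minimal random-walk Metropolis sketch with an invented one-parameter Gaussian chi-square (true value 0.31, error 0.01):

```python
import numpy as np

def metropolis(ln_like, p0, step, n_steps, seed=0):
    """Minimal random-walk Metropolis sampler: accept a proposal with
    probability min(1, L_prop / L_current)."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, dtype=float)
    lnl = ln_like(p)
    chain = []
    for _ in range(n_steps):
        prop = p + step * rng.standard_normal(p.shape)
        lnl_prop = ln_like(prop)
        if np.log(rng.random()) < lnl_prop - lnl:
            p, lnl = prop, lnl_prop
        chain.append(p.copy())
    return np.array(chain)

# illustrative one-parameter target: chi^2(p) = ((p - 0.31)/0.01)^2
ln_like = lambda p: -0.5 * ((p[0] - 0.31) / 0.01) ** 2
chain = metropolis(ln_like, [0.3], 0.02, 5000)
```

After discarding burn-in, the chain mean recovers the input central value to well within the posterior width.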
\section{Results and discussion}
\begin{table*}
\caption{The best-fit values and 68\% confidence limits for the CG cosmological parameters in each model (GCG, MCG, NGCG, and VGCG) and data set (QSO[XUV]+SNe Ia, QSO[AS]+BAO, and QSO[XUV]+SNe Ia+QSO[AS]+BAO).}
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccccc}
\hline\noalign{\smallskip}
Model & Data & $\Omega_{m}$ & $A_{s}$ & $\alpha$ & $H_{0}$ (km/s/Mpc) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
GCG & QSO[XUV]+SNe Ia & $0.53^{+0.53}_{-0.29}$ & $0.78 \pm 0.06$ & $0.46^{+0.57}_{-0.42}$ & $68.27^{+6.98}_{-4.76}$\\
& QSO[AS]+BAO & $0.33\pm0.02$ & $0.60 \pm 0.10$ & $-0.33^{+0.27}_{-0.24}$ & $65.81^{+2.26}_{-2.28}$ \\
& Combination & $0.31\pm 0.01$ & $0.73 \pm 0.04$ & $0.03^{+0.17}_{-0.14}$ & $68.26^{+1.18}_{-1.08}$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Model & Data & $\Omega_{m}$ & $A_{s}$ & $B$ & $\alpha$ & $H_{0}$ (km/s/Mpc)\\
\noalign{\smallskip}\hline\noalign{\smallskip}
MCG & QSO[XUV]+SNe Ia & $0.47^{+0.36}_{-0.27}$ & $0.81^{+0.06}_{-0.09}$ & $0.12^{+0.26}_{-0.21}$ & $0.20^{+0.58}_{-0.39}$ & $68.28^{+7.23}_{-3.48}$ \\
& QSO[AS]+BAO &$0.33 \pm 0.02$ & $0.61^{+0.10}_{-0.14}$ & $-0.12^{+0.18}_{-0.09}$ & $0.05^{+0.87}_{-0.52}$ & $66.27^{+2.01}_{-2.30}$ \\
& Combination & $0.31\pm 0.01$ & $0.73^{+0.04}_{-0.06}$ & $-0.14^{+0.13}_{-0.06}$ & $0.71^{+0.78}_{-0.71}$ & $68.09^{+1.11}_{-1.06}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Model & Data & $\Omega_{m}$ & $\omega$ & $\alpha$ & $H_{0}$ (km/s/Mpc)\\
\noalign{\smallskip}\hline\noalign{\smallskip}
NGCG & QSO[XUV]+SNe Ia & $0.30^{+0.16}_{-0.14}$ & $-1.10^{+0.24}_{-0.39}$ & $0.23^{+0.89}_{-0.53}$ & $68.50^{+7.19}_{-5.21}$\\
& QSO[AS]+BAO & $0.34 \pm 0.02$ & $-0.79^{+0.13}_{-0.14}$ & $-0.12^{+0.12}_{-0.17}$ & $65.16^{+2.38}_{-2.18}$\\
& Combination & $0.31 \pm 0.01$ & $-1.01^{+0.05}_{-0.06}$ & $0.01^{+0.09}_{-0.08}$ & $68.34^{+1.19}_{-1.09}$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Model & Data & $\Omega_{m}$ & $B_{s}$ & $\alpha$ & $\zeta$ & $H_{0}$ (km/s/Mpc) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
VGCG & QSO[XUV]+SNe Ia & $0.47^{+0.36}_{-0.31}$ & $0.82^{+0.13}_{-0.19}$ & $0.41^{+0.74}_{-0.49}$ & $-0.02^{+0.10}_{-0.08}$ & $68.58^{+6.82}_{-5.16}$\\
& QSO[AS]+BAO & $0.33 \pm 0.02 $ & $0.55^{+0.17}_{-0.16}$ & $0.08^{+0.99}_{-0.57}$ & $0.07^{+0.06}_{-0.12}$ & $66.32^{+2.16}_{-2.42}$\\
& Combination & $0.31 \pm 0.01$ & $0.64^{+0.10}_{-0.07}$ & $0.61^{+0.82}_{-0.66}$ & $0.07^{+0.04}_{-0.08}$ & $68.21^{+1.19}_{-1.04}$\\
\noalign{\smallskip}\hline
\end{tabular}}
\label{tab:results}
\end{table*}
\begin{table}
\caption{The values of DIC and their differences for CG and $\Lambda$CDM cosmologies. The Jensen-Shannon divergence between $\Lambda$CDM and other cosmological models is also calculated with respect to $\Omega_{m}$ and $H_{0}$.}
\label{tab:IC}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccccc}
\hline\noalign{\smallskip}
Data & Model & DIC & $\Delta$DIC & $D_{JS}(\Omega_{m})$ & $D_{JS}(H_{0})$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
QSO[XUV]+SNe Ia & $\Lambda$CDM & 2632.34 & 0 & 0 & 0\\
&GCG & 2635.10 & 2.76 & 0.721 & 0.124\\
& MCG & 2625.76 & -6.58 & 0.722 & 0.510\\
& NGCG & 2636.31 & 3.97 & 0.681 & 0.083 \\
& VGCG & 2638.69 & 6.36 & 0.727 & 0.092\\
\noalign{\smallskip}\hline\noalign{\smallskip}
QSO[AS]+BAO & $\Lambda$CDM & 616.42 & 0 & 0 & 0 \\
& GCG & 618.29 & 1.87 & 0.296 & 0.710 \\
& MCG & 608.04 & -8.38 & 0.300 & 0.662 \\
& NGCG & 617.97 & 1.55 & 0.468 & 0.927 \\
& VGCG & 605.54 & -10.88 & 0.319 & 0.655 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Combination & $\Lambda$CDM & 3245.86 & 0 & 0 & 0\\
& GCG & 3252.10 & 6.24 &0.074& 0.286\\
& MCG & 3246.42 & 0.56 & 0.132 & 0.275\\
& NGCG & 3251.15 & 5.29 &0.077 & 0.288\\
& VGCG & 3239.22 & -6.64 & 0.131 & 0.277\\
\noalign{\smallskip}\hline
\end{tabular}}
\end{table}
In this section, we present and discuss the constraints on the cosmological parameters obtained from the standard candle and standard ruler data, showing how the different types of observational data affect the parameter estimation.
\subsection{GCG model}
We present the 1D probability distributions and 2D contours with 1$\sigma$ and 2$\sigma$ confidence levels (CLs) for the GCG model in Fig.~\ref{fig:GCG} and list the best-fit parameters at the 1$\sigma$ confidence level in Table~\ref{tab:results}.
The standard candle data give $\Omega_{m}=0.53^{+0.53}_{-0.29}$, $A_{s}=0.78\pm 0.06$, $\alpha=0.46^{+0.57}_{-0.42}$ and $H_{0}=68.27^{+6.98}_{-4.76}$ km/s/Mpc, while the standard ruler data yield $\Omega_{m}=0.33\pm0.02$, $A_{s}=0.60 \pm 0.10$, $\alpha= -0.33^{+0.27}_{-0.24}$ and $H_{0}=65.81^{+2.26}_{-2.28}$ km/s/Mpc. First, it is clear that the value of $\Omega_{m}$ obtained from standard candles deviates from the Planck result ($\Omega_{m}=0.3103 \pm0.0057$) \cite{planck2018result}. This is because the recent QSO[XUV] compilation favors a larger value of the matter density parameter in most cosmological models at higher redshifts ($2.5 < z < 5$), as discussed in previous work \cite{Risaliti2015,khadka2020_1}. In addition, $\alpha$ is an important parameter with $0 \leq \alpha \leq 1$, where $\alpha=0$ corresponds to the $\Lambda$CDM model and $\alpha=1$ to the CG model. Although previous studies showed that the CG model is ruled out by observations, we find that it is accepted by QSO[XUV]+SNe Ia at a 68\% CL. Meanwhile, the standard ruler data favor the $\Lambda$CDM model at a 95\% CL, as does the combined sample. For the Hubble constant, our constraints are in good agreement with the Planck result ($H_{0}=67.66 \pm 0.42$ km/s/Mpc) \cite{planck2018result}, although the values obtained from standard rulers are lower than those from the other probes. Moreover, the standard ruler data reduce the error bars on $\Omega_{m}$, $\alpha$ and $H_{0}$ compared with the standard candles, indicating that QSO[AS]+BAO places more restrictive constraints on the cosmological parameters.
Moreover, it is instructive to compare with previous results, such as $A_{s}=0.70^{+0.16}_{-0.17}$ and $\alpha=-0.09^{+0.54}_{-0.33}$ constrained from the X-ray gas mass fraction, Type Ia supernovae and Type IIb radio galaxies in \cite{zhu2004}, and $\alpha=-0.14^{+0.30}_{-0.19}$ obtained from SNe Ia+H(z)+CMB in \cite{wupuxun2007}, both of which are consistent with our results from the combined data and favor the standard $\Lambda$CDM model. It is worth mentioning that \cite{Lian2021} obtained $\Omega_{m}=0.416^{+0.088}_{-0.068}$, $\alpha=2.360^{+1.803}_{-1.793}$ and $H_{0}=69.254^{+4.427}_{-4.970}$ km/s/Mpc from QSO[XUV]+QSO[AS], which agrees well with our results from QSO[XUV]+SNe Ia and includes the CG model at a 68\% CL. This suggests that the latest QSO compilation from X-ray and UV flux measurements slightly favors the CG model and prefers a larger value of $\Omega_{m}$.
\subsection{MCG model}
In the case of the MCG model, the results are presented in Fig.~\ref{fig:MCG} and Table~\ref{tab:results}. The standard candle data yield $\Omega_{m}=0.47^{+0.36}_{-0.27}$, $A_{s}=0.81^{+0.06}_{-0.09}$, $B=0.12^{+0.26}_{-0.21}$, $\alpha=0.20^{+0.58}_{-0.39}$ and $H_{0}=68.28^{+7.23}_{-3.48}$ km/s/Mpc, while the standard ruler data give $\Omega_{m}=0.33 \pm 0.02$, $A_{s}=0.61^{+0.10}_{-0.14}$, $B=-0.12^{+0.18}_{-0.09}$, $\alpha=0.05^{+0.87}_{-0.52}$ and $H_{0}=66.27^{+2.01}_{-2.30}$ km/s/Mpc. The value of $\Omega_{m}$ obtained from QSO[XUV]+SNe Ia is again higher than that from the other probes, as in the GCG case, but still consistent with \cite{planck2018result} at a 68.3\% CL. In the MCG framework, since the parameter $B$ quantifies the deviation from the GCG model (the MCG model reduces to the GCG model when $B=0$), the GCG model is accepted by the current observations at a 95\% CL in all cases; the combined sample, however, shows a slight deviation from the GCG model at a 68\% CL. For the key parameter $\alpha$, which quantifies the deviation from the CG and $\Lambda$CDM models, the $\Lambda$CDM limit ($B=0$ and $\alpha=0$) is accepted by both the standard candles and the standard rulers at a 68\% CL, while the CG limit ($B=0$ and $\alpha=1$) is allowed by the combined sample at a 95\% CL; indeed, for the combined sample, both the $\Lambda$CDM and CG models are allowed within a 68\% CL.
In other words, the $\Lambda$CDM model is more favored by the standard candles and standard rulers separately, whereas the CG model is slightly preferred by the combination. Regarding the Hubble constant, the constraints agree well with the Planck collaboration \cite{planck2018result}. Moreover, our results from the combined sample are consistent with those obtained from SNe Ia+BAO+CMB \cite{xulixin2012modified}, $\alpha=0.000727^{+0.00142}_{-0.00140}$, $B_{s}=0.782^{+0.0163}_{-0.0162}$ and $B=0.000777^{+0.000201}_{-0.000302}$, and from H(z)+BAO+CMB+SNe Ia \cite{Thakur2019}, $\Omega_{m}=0.284^{+0.013}_{-0.014}$, $\alpha=0.046^{+0.107}_{-0.102}$ and $B=0.0026\pm0.005$, at the 1$\sigma$ confidence level. This indicates that most cosmological probes favor the $\Lambda$CDM model; however, the inclusion of the higher-redshift QSO sample from X-ray and UV flux measurements \cite{Risaliti2015} shifts the preference slightly toward the CG model.
\subsection{NGCG model}
In Fig.~\ref{fig:NGCG} and Table~\ref{tab:results}, we show the constraint results for the NGCG model. The standard candle dataset gives $\Omega_{m}=0.30^{+0.16}_{-0.14}$, $\omega=-1.10^{+0.24}_{-0.39}$, $\alpha=0.23^{+0.89}_{-0.53}$ and $H_{0}=68.50^{+7.19}_{-5.21}$ km/s/Mpc, while the standard ruler data yield $\Omega_{m}=0.34\pm0.02$, $\omega=-0.79^{+0.13}_{-0.14}$, $\alpha=-0.12^{+0.12}_{-0.17}$ and $H_{0}=65.16^{+2.38}_{-2.18}$ km/s/Mpc. Most notably, $\Omega_{m}=0.30^{+0.16}_{-0.14}$ from the standard candles is consistent with the Planck result ($\Omega_{m}=0.3103\pm0.0057$) \cite{planck2018result}, in contrast to the GCG, MCG and VGCG cases. \cite{khadka2020_1} constrained $\Omega_{m} \sim 0.3$ in the XCDM model from the compiled X-ray and UV flux measurements of 1598 quasars alone, while finding $\Omega_{m} \sim 0.5-0.6$ in the $\Lambda$CDM and $\phi$CDM models. There are similarities between the NGCG and XCDM models, because the parameter $\omega$ in the NGCG model is introduced in the same spirit as in the XCDM model. Hence, we obtain a typical value of $\Omega_{m}$ in the NGCG framework, which indicates that the X-ray and UV flux measurements of the 1598-quasar compilation could help to determine the dark energy and dark matter content.
It should be noted that $\omega$ is a free constant, and \cite{NGCG2006} allowed for dark energy behaving in a quintessence-like form with $\omega > -1$ or a phantom-like form with $\omega < -1$. The 1$\sigma$ range $\omega \in (-1.05,-0.95)$ from the combined sample implies that there is an equal chance for dark energy to behave in a quintessence-like or phantom-like form. In all cases, the GCG model (i.e., $\omega=-1$) and the XCDM model (i.e., $\alpha=0$) are still supported by the observational data at a 95\% CL. In addition, it is remarkable that the CG model ($\omega=-1$ and $\alpha=1$) is accepted by the standard candle data at a 68\% CL. The Hubble constant obtained in our analysis is consistent with the results of \cite{planck2018result} at a 68\% CL. Furthermore, we compare with previous findings in the literature. For instance, \cite{liaokai2013} derived $\Omega_{de}=0.7297^{+0.0229}_{-0.0276}$, $\omega=-1.0510^{+0.1563}_{-0.1685}$ and $\eta=1+\alpha=1.0117^{+0.0469}_{-0.0502}$ with SNe Ia+BAO+WMAP+H(z) data; \cite{Zhangjingfei2019} obtained $\Omega_{de}=0.6879 \pm 0.0078$, $\omega=-1.02 \pm 0.045$, $\alpha=-0.0029 \pm 0.0097$ and $H_{0}=67.78 \pm 0.87$ km/s/Mpc with a joint sample of SNe Ia+BAO+CMB; and \cite{salahedin2020NGCG} reported $\Omega_{m}=0.2508^{+0.0081}_{-0.0097}$, $\omega=-1.041\pm0.045$, $A_{s}=0.7371^{+0.0097}_{-0.0086}$, $\eta=1+\alpha=0.9443\pm0.0097$ and $H_{0}=70.15\pm0.84$ km/s/Mpc with SNe Ia+BAO+CMB+BBN+H(z) data. This indicates that the value of $\Omega_{m}$ from the standard probes, i.e., SNe Ia, BAO, CMB and H(z), is generally smaller than $\Omega_{m}=0.3103 \pm 0.0057$ from \cite{planck2018result}; the inclusion of QSO[XUV] and QSO[AS], however, shifts $\Omega_{m}$ to $0.3-0.34$ in our work. This suggests that including quasar data could help us to study dark matter and dark energy.
\subsection{VGCG model}
The best-fit values for the VGCG model from the different observations are shown in Fig.~\ref{fig:VGCG} and Table~\ref{tab:results}. The standard candle data yield $\Omega_{m}=0.47^{+0.36}_{-0.31}$, $B_{s}=0.82^{+0.13}_{-0.19}$, $\alpha=0.41^{+0.74}_{-0.49}$, $\zeta=-0.02^{+0.10}_{-0.08}$ and $H_{0}=68.58^{+6.82}_{-5.16}$ km/s/Mpc, while the standard ruler data give $\Omega_{m}=0.33 \pm 0.02$, $B_{s}=0.55^{+0.17}_{-0.16}$, $\alpha=0.08^{+0.99}_{-0.57}$, $\zeta=0.07^{+0.06}_{-0.12}$ and $H_{0}=66.32^{+2.16}_{-2.42}$ km/s/Mpc. The value of $\Omega_{m}$ is again larger than that of \cite{planck2018result}, since the QSO[XUV] data favor a higher $\Omega_{m}$ in most dark energy models \cite{Risaliti2015,khadka2020_1}.
The viscosity term $\zeta$ affects the height of the acoustic peaks of the CMB power spectrum through the matter density. The results shown in Table~\ref{tab:results} imply that $\zeta$ is very small, which could alleviate the oscillations that cause the blowup in the DM power spectrum of the GCG models. Moreover, the GCG model (i.e., $\zeta=0$) is still favored by the available observations.
On the other hand, $\alpha$ is an important parameter that quantifies the deviation from both the CG and $\Lambda$CDM models. In all cases, the CG model cannot be ruled out by current observations at a 68\% CL, while $\Lambda$CDM is likewise accepted at a 68\% CL. In other words, the QSO[XUV], SNe Ia, QSO[AS] and BAO data cannot place accurate constraints on $\alpha$.
In addition, our results on the Hubble constant agree with the value of the Planck collaboration ($H_{0}=67.66\pm 0.42$ km/s/Mpc) \cite{planck2018result} at a 68\% CL.
It is reasonable to compare with previous studies: \cite{liwei2013VGCG} reported $\zeta=0.000708^{+0.00151}_{-0.00155}$ from SNe Ia+BAO+WMAP, and \cite{LiweiVGCG} reported $\zeta=0.0000138^{+0.00000614}_{-0.0000105}$ from SNLS3+BAO+HST. In a recent work, \cite{almada2021VGCG} used a joint sample of SLS+SNe Ia+BAO+OHD+HIIG and obtained $B_{s}=0.50^{+0.05}_{-0.06}$, $\alpha=0.99^{+0.61}_{-0.58}$, $\zeta=0.13^{+0.02}_{-0.03}$ and $h=0.69\pm0.01$. They concluded that the GCG model ($\zeta=0$) is disfavored by SLS+SNe Ia+BAO+OHD+HIIG at a 68\% CL, which differs from our result of $\zeta=0.07^{+0.04}_{-0.08}$. Moreover, we find that the inclusion of cosmic microwave background data could give a more precise constraint on $\zeta$.
From our constraints on the matter density parameter $\Omega_{m}$ in the different CG models, it is clear that the standard candle data combining QSO[XUV] with SNe Ia prefer larger values of $\Omega_{m}$, ranging from 0.47 to 0.53, except in the NGCG model. \cite{khadka2020_1} similarly found that the QSO[XUV] data at $z\sim2-5$ prefer larger values of $\Omega_{m} \sim 0.5-0.6$.
Other studies have concentrated on exploring the tension between high-redshift quasar measurements and other observations, such as the BAO measurements in \cite{Risaliti2015,razaei2020,yangtao2020cosmography,lixiaolei2021hubble}. This implies either an unknown systematic error in the high-redshift observations or a hint of new physics, so more accurate cosmological probes are required to resolve the $\Omega_{m}$ inconsistency between the high- and low-redshift observations. On the other hand, it is also worthwhile to comment on the possible alleviation of the $H_0$ tension by the VGCG and MCG models. Based on our results in Table~\ref{tab:results}, the constraint on the Hubble constant lies in the range $H_{0}=68.28^{+7.23}_{-3.48}$ km/s/Mpc to $H_{0}=68.58^{+6.82}_{-5.16}$ km/s/Mpc for the standard candles, and $H_{0}=68.09^{+1.11}_{-1.06}$ km/s/Mpc to $H_{0}=68.21^{+1.19}_{-1.04}$ km/s/Mpc for the combined sample. It is noteworthy that these two Chaplygin gas models suggest a central value of the Hubble constant between that of the Planck experiment \cite{planck2018result}, $H_{0}=67.4 \pm 0.5$ km/s/Mpc, and that of the SH0ES experiment \cite{riessH0}, $H_{0}=74.03 \pm 1.42$ km/s/Mpc.
\section{Statistical analysis}
Statistical analysis is essential to diagnose the different models. Hence, we apply the Jensen-Shannon divergence, the statefinder diagnostic and the deviance information criterion. In this section, we compare these models and discuss how strongly they are favored by the observational data sets.
\label{sec:stat}
\subsection{Jensen-Shannon Divergence}
\label{sec:jsd}
The ``Jensen-Shannon Divergence" (JSD) is an information-theoretic divergence measure, based on Jensen's inequality and the Shannon entropy, that quantifies the similarity between two probability distributions \cite{Lin1991JSD,JSD2012,Lian2021}. The JSD assesses two different cosmological models through their common parameters; here, we choose the matter density $\Omega_{m}$ and the Hubble constant $H_{0}$ to distinguish the four CG models as well as the $\Lambda$CDM model.
In general, the JSD is symmetric and ranges from 0 to 1, which can be written as
\begin{equation}
D_{J S}(p \mid q)=\frac{1}{2}\left[D_{K L}(p(x) \mid s)+D_{K L}(q(x)\mid s)\right],
\end{equation}
where $s=(p+q)/2$, $p(x)$ and $q(x)$ are the probability distributions of the two models, and $D_{KL}$ denotes the Kullback-Leibler divergence (KLD), which can be expressed as
\begin{equation}
D_{K L}(p \mid q)=\int p(x) \log _{2}\left(\frac{p(x)}{q(x)}\right) d x.
\end{equation}
It is clear that a smaller value of JSD indicates that the two models are similar.
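As a concrete illustration, the JSD between two posteriors can be evaluated numerically on a common parameter grid. The sketch below implements the two equations above for hypothetical Gaussian posteriors of $\Omega_m$ (illustrative values, not the actual MCMC output):

```python
import numpy as np

def kld(p, q, dx):
    """Kullback-Leibler divergence D_KL(p | q) in bits (log base 2)."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask])) * dx

def jsd(p, q, dx):
    """Jensen-Shannon divergence: symmetric and bounded in [0, 1]."""
    s = 0.5 * (p + q)
    return 0.5 * (kld(p, s, dx) + kld(q, s, dx))

# Toy Gaussian posteriors for Omega_m from two hypothetical models
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
gauss = lambda mu, sig: np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
p, q = gauss(0.30, 0.02), gauss(0.32, 0.03)

d = jsd(p, q, dx)
print(f"JSD(p, q) = {d:.3f}")   # small value: similar posteriors
```

Identical distributions give a JSD of exactly zero, while maximally dissimilar ones approach unity.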
Fig.~\ref{fig:JSD_Om} and Fig.~\ref{fig:JSD_H0} display the posterior distributions of $\Omega_{m}$ and $H_{0}$. Table~\ref{tab:IC} presents the JSD values between the $\Lambda$CDM model and four nonstandard models by using different observations with respect to $\Omega_{m}$ and $H_{0}$.
For standard candle data, the posterior distributions of $\Omega_{m}$ and $H_{0}$ in the NGCG model agree more with the $\Lambda$CDM model in terms of the JSD values, while the MCG model shows a larger distance from the $\Lambda$CDM model. In the scenario of standard ruler data, the value of JSD concerning $\Omega_{m}$ shows that the GCG model agrees more with the $\Lambda$CDM model; however, concerning $H_{0}$, all four nonstandard models are distant from the $\Lambda$CDM model, among which the VGCG model is closest to the $\Lambda$CDM model.
In the case of the combination sample, for $\Omega_{m}$, the GCG model and NGCG model are closer to the $\Lambda$CDM model due to the smaller values of JSD, while for $H_{0}$, the MCG model and VGCG model are closest to the $\Lambda$CDM model.
\subsection{Statefinder Diagnostic}
\label{sec:statefinder}
In the framework of a specific cosmological model, the Hubble parameter $H(z)$ and the deceleration parameter $q(z)$ can be expressed as
\begin{equation}
H=\frac{\dot{a}}{a}, q=-\frac{\ddot{a}}{a H^{2}}=-\frac{a \ddot{a}}{\dot{a}^{2}},
\end{equation}
where $a=1/(1+z)$ is the scale factor. As $H(z)$ and $q(z)$ cannot effectively distinguish different cosmological models, higher-order time derivatives of $a$ are required. To investigate more dark energy models beyond the cosmological constant model, the author of \cite{statefinder} introduced a new geometrical diagnostic pair $(r,s)$ constructed from $a(t)$ and its time derivatives up to third order, where $r(z)$ is a natural next step beyond $H(z)$ and $q(z)$, and $s(z)$ is a linear combination of $r(z)$ and $q(z)$. This approach has been widely adopted in comparing different cosmological models \cite{lixiaolei,xutengpeng2018,dubey2021statefinder,panyu2021statefinder}.
The statefinder pair $(r,s)$ is also related to the equation of state of dark energy and its first time derivative, which can be expressed as
\begin{equation}
r=\frac{\dddot a}{a H^{3}}, \quad s=\frac{r-1}{3(q-1 / 2)}.
\end{equation}
For a given model, the statefinder diagnostic can be obtained by
\begin{equation}
r(z)=1-2 \frac{E^{\prime}(z)}{E(z)}(1+z)+\left[\frac{E^{\prime \prime}(z)}{E(z)}+\left(\frac{E^{\prime}(z)}{E(z)}\right)^{2}\right](1+z)^{2},
\end{equation}
and
\begin{equation}
s(z)=\frac{r(z)-1}{3(q(z)-1 / 2)},
\end{equation}
and
\begin{equation}
q(z)=\frac{E^{\prime}(z)}{E(z)}(1+z)-1.
\end{equation}
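For a model specified by its dimensionless expansion rate $E(z)=H(z)/H_0$, the three expressions above can be evaluated directly. The sketch below does so for flat $\Lambda$CDM with an illustrative $\Omega_m=0.3$, using central finite differences for $E'$ and $E''$, and recovers the fixed point $(r,s)=(1,0)$:

```python
import numpy as np

def E(z, Om=0.3):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for flat LambdaCDM."""
    return np.sqrt(Om * (1.0 + z) ** 3 + 1.0 - Om)

def statefinder(z, Om=0.3, h=1e-5):
    """Return (q, r, s) at redshift z via central finite differences."""
    E0 = E(z, Om)
    E1 = (E(z + h, Om) - E(z - h, Om)) / (2 * h)          # E'(z)
    E2 = (E(z + h, Om) - 2 * E0 + E(z - h, Om)) / h ** 2  # E''(z)
    q = E1 / E0 * (1 + z) - 1
    r = 1 - 2 * E1 / E0 * (1 + z) + (E2 / E0 + (E1 / E0) ** 2) * (1 + z) ** 2
    s = (r - 1) / (3 * (q - 0.5))
    return q, r, s

# For LambdaCDM, r(z) = 1 and s(z) = 0 at every redshift (up to
# finite-difference error), while q runs from +1/2 toward -1.
for z in (0.0, 1.0, 10.0):
    q, r, s = statefinder(z)
    print(f"z = {z:5.1f}:  q = {q:+.3f},  r = {r:.4f},  s = {s:+.2e}")
```

Any departure of a CG model's trajectory from $(r,s)=(1,0)$ in such a computation signals its deviation from the cosmological constant case.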
Based on the best-fit model parameters derived from the combined QSO[XUV]+SNe Ia+QSO[AS]+BAO data, we calculate the statefinder pairs $(r,s)$ for the $\Lambda$CDM model and four CG models and present the results in Fig.~\ref{fig:statefinder}. Specifically, the parameter $r$ is more effective in distinguishing different cosmological models.
It is noteworthy that although the corresponding values for the
MCG model and VGCG model significantly deviate from the $\Lambda$CDM model at the present epoch, both of them eventually converge to the standard cosmological model. On the other hand, it is obvious that in the framework of the GCG model and NGCG model, the statefinder pairs $(r,s)$ exhibit similar behaviors at present and evolve along different trajectories; however, only the GCG model ultimately converges on the point of $(r,s)=(1,0)$.
The evolutionary trajectories in the $r-q$ plane are displayed in Fig.~\ref{fig:rqplane}. Although the curves of each cosmological model originate from different points, they finally converge to the same point $(r,q)=(1,-1)$ except for the NGCG model. We clearly see that the GCG and NGCG models evolve along similar trajectories with the $\Lambda$CDM model.
In addition, we find that the GCG model and NGCG model take values in the range $r > 1$ and $q > 0$ at early times and therefore behave as Chaplygin gas-type dark energy models. Moreover, the MCG model and VGCG model start from the regions $r < 1$ and $q > 0$ belonging to quintessence dark energy models, while the MCG model quickly reverts to a Chaplygin gas-type dark energy model at later times. There are notable flips from positive to negative in the value of $q$, which reflect the recent phase transition of these models and confirm the accelerated expansion of the universe.
\subsection{Model selection statistic}
\label{sec:IC}
From Sect.~\ref{sec:jsd} and Sect.~\ref{sec:statefinder}, we cannot clearly discriminate between these four CG models and the $\Lambda$CDM model. When comparing and distinguishing competing models, information criteria such as the Akaike information criterion (AIC) \cite{AIC}, the Bayesian information criterion (BIC) \cite{BIC}, and the deviance information criterion (DIC) \cite{DIC} become crucial.
The AIC is based on information theory, the BIC is based on Bayesian inference, and the DIC combines heritage from both Bayesian methods and information theory \cite{DIC}. Compared with the DIC, the AIC and BIC are too simple to select which model performs better, because they require only the maximum likelihood and the number of parameters within a given model rather than the likelihood throughout the parameter space \cite{2011PhRvD..84b3005C,2012ApJ...755...31C}; therefore, we apply the DIC for model selection in this paper. Moreover, $\Delta$DIC, the difference in DIC values between cosmological models, is an important quantity. In our analysis, we calculate the values of DIC and $\Delta$DIC for the four Chaplygin gas models and the $\Lambda$CDM model with the same observations. In particular, a negative value of $\Delta$DIC indicates that the model fits the observations better than the $\Lambda$CDM model.
The DIC was introduced by \cite{DIC} and defined as
\begin{equation}
\mathrm{DIC} \equiv D(\bar{\theta})+2 p_{D},
\label{eq:DIC_1}
\end{equation}
where $D(\theta)=-2 \ln \mathcal{L}(\theta)+C$, $p_{D}=\overline{D(\theta)}-D(\bar{\theta})$, $C$ is a `standardizing' constant depending only on the data that will vanish from any derived quantity and $D$ is the deviance of the likelihood.
The definition of the DIC (i.e., Eq.~(\ref{eq:DIC_1})) is motivated by the form of the AIC, replacing the maximum likelihood $\mathcal{L}_{max}$ with the mean parameter likelihood $\mathcal{L}(\bar{\theta})$ and replacing the number of parameters $k$ with the effective number of parameters $p_{D}$, which represents the number of parameters that can be usefully constrained by a particular dataset. By using the effective number of parameters, the DIC also overcomes the BIC's failure to discount parameters that are unconstrained by the data \cite{DIC}. In a DIC analysis, the favored model is the one with the minimum DIC value.
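A minimal implementation of Eq.~(\ref{eq:DIC_1}) from a chain of posterior samples might look as follows. The demonstration uses a hypothetical one-parameter Gaussian likelihood rather than the actual cosmological chains, so the effective number of parameters $p_D$ should come out close to 1:

```python
import numpy as np

def dic(samples, loglike):
    """DIC = D(theta_bar) + 2 p_D, with D(theta) = -2 ln L(theta)
    (the 'standardizing' constant C drops out of any model comparison)."""
    D = np.array([-2.0 * loglike(th) for th in samples])
    D_at_mean = -2.0 * loglike(samples.mean(axis=0))
    p_D = D.mean() - D_at_mean          # effective number of parameters
    return D_at_mean + 2.0 * p_D, p_D

# Toy example: one mean parameter, unit-variance Gaussian data
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)
loglike = lambda th: -0.5 * np.sum((y - th[0]) ** 2)

# Draws from the analytic posterior mu ~ N(ybar, 1/n) under a flat prior
post = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=(4000, 1))
value, p_D = dic(post, loglike)
print(f"DIC = {value:.1f}, p_D = {p_D:.2f}")
```

Only DIC differences between models fitted to the same data are meaningful, which is why the analysis below reports $\Delta$DIC values.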
We introduce the DIC to evaluate which model is more consistent with the observational data. For standard candle data, the DIC criterion favors the MCG model. For standard rulers and the combination sample, the VGCG model is preferred by the smallest values of DIC. In addition, the GCG and NGCG models are strongly disfavored by the DIC. In particular, we use the DIC criterion to specify which model is preferred by the currently available observations, rather than to select a single best-fit cosmological model. As shown in the recent observational constraints on $f(T)$ gravity \cite{PhysRevD.100.083517}, the exponential $f(T)$ model presents a small deviation from the $\Lambda$CDM paradigm, based on the SNe Ia Pantheon sample, Hubble constant measurements from cosmic chronometers, the CMB shift parameter and redshift space distortion measurements. Our findings demonstrate that the MCG model and VGCG model perform better than the concordance $\Lambda$CDM model. We remark here that the $\Lambda$CDM cosmological model, built on the assumptions of a cosmological constant and cold dark matter, shows a $\sim 4\sigma$ tension with the high-redshift Hubble diagram of SNe Ia, QSO and gamma-ray bursts (GRB) \cite{2019NatAs...3..272R}. Such irreconcilable tension between high-redshift QSOs and flat $\Lambda$CDM, which has recently been traced and extensively discussed \cite{2020PhRvD.102l3532Y,Lian2021} in the framework of log polynomial expansions and modified gravity theories, highlights the seriousness of the conflict with dark energy within the flat $\Lambda$CDM model. However, it is still interesting to see whether future high-redshift datasets show similar tension with flat $\Lambda$CDM cosmology, given the limited sample size and current quality of the available observational data.
\section{Conclusions}
In this paper, we investigated the constraint ability of standard candles (QSO[XUV]+SNe Ia) and standard rulers (QSO[AS]+BAO) on a series of Chaplygin gas models, including the GCG model, MCG model, NGCG model and VGCG model. These Chaplygin gas models are considered as important candidate models that regard dark energy and dark matter as a unification. The first part is devoted to performing MCMC statistical analysis to confront the models with the most recent observations. The second part is dedicated to comparing the agreement between the $\Lambda$CDM model and the other four models using JSD, exploring the evolution of cosmological and cosmographical parameters with the assistance of statefinder diagnostic analysis and examining the viability of four nonstandard models by information criteria such as DIC.
Here, we summarize our main conclusions in more detail:
(i) It is intriguing that the value of $\Omega_{m}$ is noticeably larger from the standard candle data than from other measurements. Such a discrepancy is caused by the QSO X-ray and UV flux data, which favor the higher $\Omega_{m}\sim 0.5-0.6$ discussed in \cite{Risaliti2015,khadka2020_1} at high redshifts $z\sim2-5$. Therefore, quasar data at high redshifts can cast new light on investigations of the accelerating universe. Considering the Hubble constant, it is noteworthy that the constraint results from standard candles and the combination sample suggest central values of $H_{0}$ between the value measured by the Planck CMB experiment and local $H_{0}$ measurements, possibly alleviating the tension between these measurements. In addition, it is remarkable that although we use data based on local measurements, such as SNe Ia, which favor the local value (the SH0ES result), they do not play a role in constraining the Hubble constant because of the marginalization over the constant term $Y_{0}$ we adopted. Hence, the QSO data from X-ray and UV flux measurements prefer the value of $H_{0}$ from the Planck 2018 results.
(ii) Most CG models include the concordance $\Lambda$CDM model as a special case corresponding to certain values of their parameters, such as the parameter $\alpha$ in the GCG model and the parameters $B$ and $\alpha$ in the MCG model. For standard ruler data, the GCG model and NGCG model are generally inconsistent with the cosmological constant case within a 68\% CL, while the MCG model and NGCG model disagree with the $\Lambda$CDM model for the combination sample at a 68\% CL. Previous studies concluded that the CG model is ruled out by recent observations. In our work, considering standard candle data, the CG model is accepted in all cases, and it is also favored in the framework of the MCG and VGCG models for standard ruler data as well as the combined sample.
This is because the inclusion of QSOs from X-ray and UV measurements and of QSOs from VLBI provides more information from the early universe. Hence, these selected quasars are expected to serve as additional probes in the future.
(iii) To evaluate the similarity between $\Lambda$CDM and the other CG models, we adopt the JSD in this paper. For standard candle data, the posterior distributions of $\Omega_{m}$ from the four nonstandard models are distant from the $\Lambda$CDM model, while the NGCG model is in good agreement with the $\Lambda$CDM model in terms of the JSD value of $H_{0}$. For standard ruler data, the NGCG model shows a larger distance from the $\Lambda$CDM model according to the JSD values from the posterior distributions of $\Omega_{m}$ and $H_{0}$. The posterior distributions of $\Omega_{m}$ and $H_{0}$ from the MCG model and VGCG model are in good agreement with the $\Lambda$CDM model for the combined standard candle and ruler data. Based on the best fits obtained with the combination sample, we apply the statefinder diagnostic to discriminate the dynamic behaviors of the four CG models. The GCG model and NGCG model evolve similarly to the $\Lambda$CDM model, but the NGCG model could stray from the $\Lambda$CDM model in the near future. Clearly, the MCG model and VGCG model exhibit significantly different evolutionary trajectories from the $\Lambda$CDM model; however, they approach $\Lambda$CDM in the future.
According to the DIC criterion, the VGCG model is the most favored by observations; on the other hand, the GCG and NGCG models are disfavored by all catalogs of data. In addition, the MCG model is slightly supported by the standard candle data.
In conclusion, we find that the VGCG model and MCG model could be strong candidates for investigating the accelerating universe. Moreover, the $H_{0}$ tension could be alleviated within the VGCG and MCG models, and these models fit the combination of standard candle and standard ruler measurements with $\Delta$DIC$\,=-6.64$ and $\Delta$DIC$\,=-0.56$ relative to the $\Lambda$CDM model.
In addition, it is distinctive that the CG model cannot be ruled out by high redshift observations, such as the compilation of 1598 QSO X-ray and UV measurements. Therefore, extending the cosmological analysis with high-redshift data should be critical in distinguishing between different CG models that are degenerate at low redshifts. As a result, it is promising that future precise high redshift data (i.e., gravitational wave data) will provide stronger evidence to judge whether dark energy and dark matter are unified and to understand the nature of the accelerating universe.
There are several issues that we do not consider in this paper and that remain to be addressed in future analyses. One general concern is that we have considered only the background (zeroth-order) cosmology, while Chaplygin gas models might have instabilities at the perturbation level. Some work has studied the behavior of generalized Chaplygin gas models in the matter power spectrum. As worked out in detail by \cite{PhysRevD.69.123524}, the oscillations or exponential blowup of the power spectrum, which are inconsistent with the observations of the 2dF galaxy redshift survey, contribute to ruling out GCG models at first order in cosmology (the growth of linear perturbations). Precision data on redshift-space distortions (RSD) \cite{growthdata11,Arman2018,201109516}, the rms mass fluctuation $\sigma_{8}(z)$ inferred from galaxy and Ly-$\alpha$ surveys \cite{growthdata12,growthdata13,Cuceu2021}, weak lensing statistics \cite{growthdata14}, baryon acoustic oscillations \cite{growthdata15,growthdata15_2}, X-ray luminous galaxy clusters \cite{growthdata16}, and the integrated Sachs-Wolfe (ISW) effect \cite{growthdata17} are gradually allowing us to determine the linear growth function related to perturbations. In a future analysis we will take a further step in this direction, focusing on more stringent constraints on the perturbative behavior of a series of Chaplygin gas models.
\section*{Acknowledgements}
This work was supported by National Key R\&D Program of China No.
2017YFA0402600; the National Natural Science Foundation of China under Grants Nos. 12021003, 11690023, and 11920101003; the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB23000000;
and the Interdiscipline Research Funds of Beijing Normal University.
\section*{DATA AVAILABILITY STATEMENTS}
The data underlying this article will be shared on reasonable request
to the corresponding author.
\bibliographystyle{spphys} %
\bibliography{refer} %
|
Title:
Predictions for Quantum Gravitational Signatures from Inflation |
Abstract: We compute the corrections to the primordial power spectrum that should arise
in realistic inflationary scenarios if there exists a generic covariant
ultraviolet (UV) cutoff, as commonly motivated by considerations of quantum
gravity. The corrections to the spectrum consist of small superimposed
oscillations whose frequency, phase, and amplitude are functions of the
comoving wave number. For any given cosmological parameters that characterize
the slow roll during inflation, the frequency predicted for these oscillations
depends only on the value of the UV cutoff. The specificity of this prediction
can be used to increase experimental sensitivity through the filtering for
template signatures. This will allow experiments to put new bounds on where a
natural UV cutoff can be located between the Planck scale and the Hubble scale
during inflation. It may even bring imprints of Planck-scale physics in the
cosmic microwave background and in structure formation within the range of
observations.
| https://export.arxiv.org/pdf/2208.10514 |
\title{Predictions for Quantum Gravitational Signatures from Inflation}
\author{Aidan Chatwin-Davies${}^{a,b}$}
\author{Achim Kempf${}^{c}$}
\author{Petar Simidzija${}^{a}$}%
\altaffiliation{\altemail{achatwin@phas.ubc.ca}\\ \altemail{akempf@pitp.ca}\\\altemail{psimidzija@phas.ubc.ca}}
\affiliation{~\\${}^a$Department of Physics and Astronomy, University of British Columbia\\
Vancouver, BC, V6T 1Z1, Canada\\~\\
${}^b$Institute for Theoretical Physics, KU Leuven\\
Celestijnenlaan 200D B-3001 Leuven, Belgium \\~\\
${}^c$Department of Applied Mathematics, University of Waterloo\\
Waterloo, ON, N2L 3G1, Canada\\~
}
\date{\today}
It is widely expected that a theory of quantum gravity will be needed to describe physics at the Planck scale.
There are several competing candidates for such a theory, see, e.g., \cite{Rovelli:1997qj,WittenStrings,Carlip:2015asa,Loll:2022ibq}, but experiments that could probe Planck-scale physics are still lacking. For example, particle accelerators are presently able to operate at centre-of-mass energies that are still about 15 orders of magnitude below the Planck scale.
Fortunately, observational early universe cosmology involves scales that are substantially closer to the Planck scale. This is because, according to standard inflationary theory, the fluctuations that are now visible in the cosmic microwave background (CMB) originated in primordial quantum fluctuations that occurred when the Hubble length during inflation was only 5 to 6 orders of magnitude away from the Planck length \footnote{See Eq.~(12).}. This raises the exciting question of what type of observational signature Planck-scale physics may have left in the CMB, and at what magnitude.
Even without a confirmed UV-complete theory of quantum gravity at hand, it is clear that, as the Planck scale is approached from lower energies, conventional quantum field theory should start to exhibit modifications that are due to Planck-scale physics \cite{Garay:1994en,Hossenfelder:2012jw}. To some extent, these modifications should have impacted the inflationary mechanism for the generation of primordial fluctuations.
In the literature, the possibility of Planck-scale signatures in the CMB has been addressed, therefore, by modelling natural ultraviolet (UV) cutoffs, associated modifications to dispersion relations, and other approaches \cite{Padmanabhan:1988jp,Padmanabhan:1988se,Jacobson:1999zk,Kempf:2000ac,Martin:2000xs,Brandenberger:2000wr,Niemeyer:2000eh,Brandenberger:2001zqv,Easther:2001fi,Kempf:2001fa,Easther:2001fz,Brandenberger:2002hs,Easther:2002xe,Danielsson:2002kx,Brandenberger:2004kx,Sriramkumar:2004pj,Greene:2004np,Shiu_2005,Easther:2005yr,Tawfik:2015rva,Ali:2015ola,Skara:2019uzz,Frob:2012ui,Frob:2014cza,Calcagni:2016ofu,Calcagni:2017via,Modesto:2022asj,Calcagni:2022tuz} within the quantum field theoretic framework of inflation.
However, most of these approaches break local Lorentz invariance, which makes it difficult to distinguish to what extent the predicted effects would be due to Planck-scale physics and to what extent they would be due to the symmetry breaking.
In previous work, we therefore introduced into inflation a class of natural UV cutoffs that are fully covariant. The requirement of covariance is technically hard to implement but has the benefit of being highly restrictive. We then showed how the conventional inflationary calculations, when augmented with a natural covariant cutoff, could be used, in principle, to compute a prediction for Planckian signatures in cosmological observables \cite{Chatwin-Davies:2016byj}.
In the present letter, and in a longer accompanying article \cite{Chatwin-Davies:2022irs}, we build on this prior work and
compute the effect of covariant natural UV cutoffs on the primordial power spectra (PPS) of scalar and tensor fluctuations for realistic single-field inflation models.
We find that the presence of a covariant UV cutoff (either soft or sharp) results in small oscillations superposed on the conventional PPS, and we arrive at an explicit prediction for the frequency of these oscillations.
Their frequency is a function, Eq.~\eqref{eq:frequency}, of a single dimensionless parameter: the ratio $\sigma$ of the Hubble parameter at horizon crossing, $H$, and the cutoff scale, $\Omega$.
In the simplest case, where the natural UV cutoff is a sharp cutoff, we can predict not only the frequency, but also the amplitude and phase of the oscillatory perturbations. This yields a family of possible template signatures in the CMB that is parametrized only by the value of the natural UV cutoff scale. The high specificity of the predicted signatures suggests that experimental sensitivity can be significantly enhanced by
using template matching techniques, similarly to how current gravitational wave detectors are able to detect the presence of extremely low amplitude gravitational waves. Using the new results and comparing them with PPS data, it should at least be possible to put new stringent bounds on where, in between the Hubble length during inflation and the Planck length, a natural UV cutoff can be located.
\emph{Primordial power spectra}---In single-field inflationary models, the rapid expansion is driven by a single scalar field, the \textit{inflaton}, rolling down its potential. After taking into account that the quantum fluctuations of the inflaton couple to the quantum fluctuations of the effectively scalar part of the metric, one obtains three decoupled degrees of freedom: a free scalar $\calR$ and two free tensor helicities $\cal T_\pm$. The quantum fluctuations of $\calR$ and $\cal T_\pm$ are thought to be the origin of all inhomogeneities in the universe. In this letter, we will mainly focus on the scalar $\calR$. The tensor perturbations, which have not been observed as yet, can be studied similarly; for details, see \cite{Chatwin-Davies:2022irs}.
It is convenient to decompose the free field $\calR$ into its comoving Fourier modes. For a mode with a wavevector of magnitude $k$, the variance of its quantum fluctuations is quantified by the PPS $\Delta_{\calR}(k)$ \cite{dodelson2003modern}. Measurements of the CMB find that the PPS is of the form
\begin{align} \label{eq:scalar-spectrum-pheno}
\Delta_{\mathcal R}^2(k) = A_s\left(\frac{k}{k_\star}\right)^{n_s-1},
\end{align}
where, at the pivot scale $k_\star = 0.05~\mrm{Mpc}^{-1}$, the spectral amplitude and spectral tilt are $A_s = (2.10\pm0.03)\times 10^{-9}$ and $n_s = 0.966 \pm 0.004$, respectively \cite{Planck:2018vyg}. Because the spectral tilt is close to $1$, the PPS is nearly scale-invariant.
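The near scale-invariance implied by Eq.~(\ref{eq:scalar-spectrum-pheno}) is easy to quantify; the short sketch below, using the Planck 2018 central values quoted above, shows that the spectrum drifts by less than 10\% per decade in $k$:

```python
def Delta_R2(k, A_s=2.10e-9, n_s=0.966, k_star=0.05):
    """Primordial scalar power spectrum Delta_R^2(k); k in Mpc^-1."""
    return A_s * (k / k_star) ** (n_s - 1.0)

# Each decade in k rescales the spectrum by a factor 10**(n_s - 1) ~ 0.92
for k in (0.005, 0.05, 0.5):
    print(f"k = {k:6.3f} Mpc^-1 :  Delta_R^2 = {Delta_R2(k):.3e}")
```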
Such a scale-invariant PPS can be obtained if the inflaton field is \textit{slowly rolling} down its potential, i.e., if the slow-roll parameter $\epsilon \equiv -\dot H/H^2$ is much less than 1, where $H$ is the Hubble parameter and a dot denotes a derivative with respect to cosmic time $t$. In this case, the spacetime is quasi-de Sitter and the PPS evaluates to (see, e.g., \cite{dodelson2003modern})
\begin{equation} \label{eq:scalar-spectrum-slow-roll}
\Delta_{\cal R}^2(k) = \frac{H^2(\eta_k)}{\pi\epsilon M_P^2},
\end{equation}
where $\eta_k$ is the comoving time at which a mode $k$ crosses the Hubble length.
\emph{Covariant ultraviolet cutoffs}---The primordial power spectrum $\DR(k)$, being a measure of two-point correlations of the field $\calR$, can also be written in terms of the spatial Fourier transform $G_F(\eta,k)$ of the Feynman propagator for $\calR$:
\begin{equation} \label{eq:PSP-GF}
\Delta_{\calR}^2(k) = 4\pi k^3 |G_F(\eta_k,k)|.
\end{equation}
Writing $\DR(k)$ in this form is useful because the Feynman propagator can be written as a path integral, and the path integral can be naturally modified to include a covariant UV cutoff.
To see this, we start with a path integral as a sum over field configurations $\phi(x)$.
Each field configuration is weighted by the complex phase $\exp(iS[\phi])$, where $S[\phi]$ is the action functional. For example, the Feynman propagator of a real scalar field $\phi$ is then
\begin{equation} \label{eq:GF}
i G_F(x,x') = \frac{\int \mathcal{D}\phi ~ \phi(x) \phi(x') e^{iS[\phi]}}{\int \mathcal{D}\phi ~ e^{iS[\phi]}}.
\end{equation}
By the stationary phase approximation, the classical field configurations, i.e., the stationary points of the action functional, tend to dominate the path integral.
Meanwhile, the non-stationary contributions represent quantum fluctuations and they tend to be suppressed by averaging out.
This can be made more precise in the case of free field theories, such as those describing inflationary scalar and tensor perturbations. For a massless free field $\phi$ on a manifold $\mathcal M$, the stationarity of the action is equivalent to the linear field equation $\Box \phi = 0$, where $\Box$ is the self-adjoint d'Alembertian on $\mathcal M$. In this case, classical field configurations are those for which $\Box \phi = 0$, while field configurations that describe quantum fluctuations are linear combinations of eigenfunctions of $\Box$ which possess non-zero eigenvalues.
The magnitude of these eigenvalues determines the extent to which these quantum fluctuations are off shell.
Since the conventional path integral does not bound these eigenvalues, it includes quantum fluctuations that are arbitrarily far off shell, i.e., for which $\Box \phi = \lambda \phi$ where $|\lambda|$ arbitrarily far exceeds the Planck scale.
We therefore obtain a covariant UV cutoff by path integrating only over field configurations obeying $|\lambda|<\Omega^2$, where $\Omega$ sets the UV cutoff scale, while omitting the more extreme, trans-Planckian configurations.
One may expect $\Omega$ to be at or near the Planck scale, but its value must of course be determined experimentally.
For a sharp UV cutoff, the set of allowed field configurations in the path integral is then
\begin{equation}
B_{\mathcal{M}}(\Omega) \equiv \mrm{span}\left\{ \psi_\lambda \, | \, \Box \psi_\lambda = \lambda \psi_\lambda, |\lambda| \leq \Omega^2 \right\}.
\end{equation}
The allowed eigenvalues are bounded from above and below since the spectrum of $\Box$ is unbounded in both the positive and negative directions. Hence, for instance, the cut-off Feynman propagator is given by
\begin{equation} \label{eq:GF-cutoff}
i G_F^\Omega(x,x') \equiv \frac{\int_{B_\mathcal{M}(\Omega)} \mathcal{D}\phi ~ \phi(x) \phi(x') e^{iS[\phi]}}{\int_{B_\mathcal{M}(\Omega)} \mathcal{D}\phi ~ e^{iS[\phi]}}.
\end{equation}
Note that this spectral cutoff $\Omega$ is manifestly covariant since $\Box$ is a scalar operator and, therefore, the spectrum of $\Box$ is independent of the choice of coordinates for $\mathcal{M}$.
In the simple example where the manifold $\mathcal{M} = \mathbb{R}$, the scalar wave operator is just $-\partial_x^2$, its eigenfunctions are the plane waves $e^{ikx}$, its eigenvalues are the squared frequencies $k^2$, and hence $\Omega$ is a maximum frequency cutoff. Thus, for general manifolds, we can think of $\Omega$ as a covariant generalization of a maximum frequency.
We remark that the cutoff $\Omega$ also possesses an information theoretic interpretation as a cutoff on the density of field degrees of freedom in spacetime, in the sense of Nyquist-Shannon sampling theory \cite{shannon1998mathematical,NyquistReprint}.
For example, consider again $\mathcal{M} = \mathbb{R}$ and suppose that a function $\phi(x)$ has a Fourier transform that vanishes outside of a window $[-\Omega,\Omega]$.
Then, theorems of Shannon and Beurling state that $\phi(x)$ can be fully reconstructed knowing only the values it takes at a discrete lattice of sample points, whose average density is greater than $\Omega/\pi$ \cite{ShannonTheorem,LandauSampling}.
Similarly, one can also think of a covariant cutoff $\Omega$ as a covariant bandlimit, which controls the density of information contained in $\phi(x)$.
This information theoretic interpretation generalizes to Lorentzian manifolds, where the density of information in space and time transforms covariantly when going from one reference frame to another \cite{Kempf:1999xt,Kempf:2003qu,Kempf:2009us,Kempf:2010rx,Kempf:2012sg}.
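The sampling-theoretic statement can be made concrete in the $\mathcal{M}=\mathbb{R}$ case: a function whose Fourier transform is supported in $[-\Omega,\Omega]$ is recovered from its values on a lattice of spacing $\pi/\Omega$ by sinc interpolation. The sketch below uses a truncated cardinal series, so the recovery is exact only up to the small truncation tail:

```python
import numpy as np

Omega = 8.0                 # bandlimit (maximum angular frequency)
dx = np.pi / Omega          # Nyquist spacing: sample density Omega/pi
n = np.arange(-200, 201)
xn = n * dx                 # discrete lattice of sample points

# A bandlimited test function: Fourier support [-5, 5], inside [-Omega, Omega]
phi = lambda x: np.sinc(5.0 * x / np.pi)

def reconstruct(x, samples):
    """Shannon cardinal series: sum_n phi(x_n) sinc((x - x_n)/dx).
    Note np.sinc(t) = sin(pi t)/(pi t)."""
    return np.sum(samples * np.sinc((x - xn) / dx))

x_test = 0.321              # an off-lattice point
err = abs(reconstruct(x_test, phi(xn)) - phi(x_test))
print(f"reconstruction error at x = {x_test}: {err:.1e}")
```

Sampling below the density $\Omega/\pi$ would alias the high-frequency content, which is the sense in which the bandlimit caps the density of information in $\phi(x)$.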
Consider again the computation of the covariantly cut-off Feynman propagator $G_F^\Omega$ in Eq. \eqref{eq:GF-cutoff}.
Instead of directly evaluating the path integral over the set of bandlimited field configurations $B_{\mathcal M}(\Omega)$, it is equivalent and simpler in practice to project out the high-eigenvalue contributions to $G_F$ \cite{Chatwin-Davies:2016byj,Chatwin-Davies:2022irs}.
Concretely, we act on $G_F$ to the left and right with projectors, $P_\Omega$:
\begin{align}\label{eq:G_cutoff}
G_F^\Omega = P_\Omega G_F P_\Omega.
\end{align}
The projector $P_\Omega$ is a linear operator that acts on fields on $\mathcal{M}$ and that is defined as
\begin{equation} \label{eq:cutoff}
P_\Omega \equiv \sum_{\lambda \in \mrm{spec}\,\Box} \theta(\Omega^2 - |\lambda|) \langle \psi_\lambda, \, \cdot \, \rangle \psi_\lambda,
\end{equation}
where $\langle \, \cdot \, , \cdot \, \rangle$ denotes the inner product on $\mathcal{M}$ and $\psi_\lambda$ is the eigenfunction of $\Box$ with eigenvalue $\lambda$.
As a result, eigenvalues $|\lambda| > \Omega^2$ are projected out.
This way of expressing the cutoff further illustrates that the sharp cutoff described so far is part of a larger class of covariant cutoffs obtained by replacing the Heaviside step function $\theta$ in \Eq{eq:cutoff} by a general non-negative function $f$ \footnote{A systematic way to study possible high-energy corrections to low-energy observables of a field theory is via the formalism of effective field theory, in which one allows for the addition of arbitrary terms to the Lagrangian, provided that they are consistent with the desired symmetries, such as general covariance.
How the covariant cutoffs considered here and effective field-theoretic treatments of inflation \cite{Cheung:2007st} compare is discussed in the companion article \cite{Chatwin-Davies:2022irs}.}.
In particular, we could soften the cutoff by choosing a function $f$ that smooths out the Heaviside step function.
Even so, the cutoff remains covariant, since it is obtained by restricting the spectrum of the scalar operator $\Box$.
For now we will continue to focus on a sharp cutoff at $\Omega$ and we will return to soft cutoffs later.
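As a concrete illustration, the projector of \Eq{eq:cutoff} can be mimicked in a finite-dimensional toy model: diagonalize a discretized symmetric operator standing in for $\Box$, and keep only eigenvectors with $|\lambda| \le \Omega^2$. The operator, dimension, and cutoff placement below are illustrative choices, not part of the calculation in the text.

```python
import numpy as np

# Toy spectral cutoff: a discretized 1D Laplacian stands in for Box,
# and we project out eigenvalues with |lambda| > Omega^2 as in Eq. (cutoff).
n = 64
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # Dirichlet Laplacian

lam, psi = np.linalg.eigh(L)        # eigenpairs of the symmetric operator
Omega2 = np.median(np.abs(lam))     # place the cutoff mid-spectrum (arbitrary)

# P_Omega = sum over |lambda| <= Omega^2 of |psi_lambda><psi_lambda|
keep = np.abs(lam) <= Omega2
P = psi[:, keep] @ psi[:, keep].T

# Projector properties: P^2 = P and P = P^T
assert np.allclose(P @ P, P)
assert np.allclose(P, P.T)

# Acting on a field configuration removes the high-|lambda| content
v = np.random.default_rng(0).normal(size=n)
coeffs = psi.T @ (P @ v)            # eigenbasis coefficients of P v
assert np.allclose(coeffs[~keep], 0.0)
```

Replacing the boolean mask `keep` by a smooth non-negative weight $f(|\lambda|)$ implements the softened cutoffs mentioned above.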
\emph{Corrections to primordial power spectra}---Now let us assume that $\mathcal{M}$ is a spatially flat Friedmann-Lema{\^i}tre-Robertson-Walker (FLRW) spacetime with the line element
\begin{equation}
\dee s^2 = a^2(\eta)\left(-\dee \eta^2 + \dee {\bm x}^2 \right),
\end{equation}
where $a(\eta)$ is a scale factor describing slow-roll inflation, i.e. the spacetime is quasi-de Sitter. Our goal is to compute the correction, $\delta \Delta_\mathcal{R}^2(k)$, to the PPS due to a sharp covariant cutoff $\Omega$ imposed on the spectrum of the FLRW d'Alembertian $\Box$.
Notice that a spatial Fourier transform with respect to $\bm x$ preserves the spectrum of $\Box$. Explicitly, if $\Box u(\eta, {\bm x}) = \lambda u(\eta, {\bm x})$, then $\Box_{\bm k} u(\eta, {\bm k}) = \lambda u(\eta, {\bm k})$, where $u(\eta,\bm k)$ is the spatial Fourier transform of $u(\eta,\bm x)$ and $\Box_{\bm k} = -a^{-4}\partial_\eta(a^2 \partial_\eta)-k^2 a^{-2}$ is the spatial Fourier transform of $\Box$.
We may therefore impose the cutoff on each mode $k = |{\bm k}|$ individually by cutting off the spectrum of each $\Box_{k}$.
Let us now compute the effect of such a covariant cutoff on the primordial power spectrum $\DR(k)$. From \Eq{eq:PSP-GF}, one finds that the relative correction is
\begin{equation}
\label{eq:reldiffscalar}
\frac{\delta \Delta_\phi^2(k)}{\Delta_\phi^2(k)} = \mrm{Re} \left( \frac{\delta G_F(\eta_k,k)}{G_F(\eta_k,k)} \right) + O(\delta G_F^2),
\end{equation}
where $\delta G_F = G_F^\Omega - G_F$. Thus, to compute the relative correction to the PPS at leading order in $\delta G_F$, we need to compute $G_F$ and $\delta G_F$ for a quasi-de Sitter spacetime.
Let us first consider $G_F$.
For a general scale factor, a closed-form expression for $G_F$ is not known, although it can be expressed as a mode sum.
Computing the Feynman propagator $G_F^\mrm{dS}$ for a free scalar field on a pure de Sitter background is a standard computation, however, and the answer can be obtained in closed form \cite{Birrell:1982ix}.
In a quasi-de Sitter spacetime, one can thus employ a \textit{slow-roll approximation} for $G_F$, which approximates the exact slow-roll calculation with a corresponding de Sitter calculation.
More precisely, we approximate
\begin{align}
G_F(\eta_k,k)\approx G_F^\mrm{dS}(\eta_k^\mrm{dS},k),
\end{align}
where $\eta^\mrm{dS}_k = -1/k$ is the de Sitter horizon crossing time for a mode $k$ and where, in computing $G_F^\mrm{dS}$, we take the de Sitter Hubble constant $H^\mrm{dS}$ to be equal to $H(\eta_k)$, the instantaneous Hubble parameter in the quasi-de Sitter spacetime at the moment when the mode $k$ crosses the horizon. By equating Eqs.~\eqref{eq:scalar-spectrum-pheno} and \eqref{eq:scalar-spectrum-slow-roll}, we obtain an expression for $H(\eta_k) \equiv H(k)$ in terms of observable parameters,
\begin{equation} \label{eq:Heff}
H(k) = \Mpl\sqrt{\pi \epsilon A_s} \left( \frac{k}{k_\star} \right)^{(n_s-1)/2}.
\end{equation}
The relative error incurred through the use of the slow-roll approximation is of the same order as the slow-roll parameters \cite{Chatwin-Davies:2022irs}, so the approximation is accurate in the regime where these parameters are much smaller than 1.
Next let us discuss $\delta G_F$. Defining the projector $P_\Omega^\perp \equiv I - P_\Omega$, which projects onto the eigenspace of $\Box_k$ with eigenvalues $|\lambda|>\Omega^2$, we can write
\begin{align}
\delta G_F = P_\Omega^\perp G_F P_\Omega^\perp - P_\Omega^\perp G_F - G_F P_\Omega^\perp.
\end{align}
Computing $\delta G_F$ is conceptually straightforward:
For each mode $k$, one solves the Sturm-Liouville eigenvalue problem $\Box_k \psi_\lambda = \lambda \psi_\lambda$ to obtain an orthonormal eigenfunction basis.
One then constructs the projectors $P_\Omega^\perp$ and uses them to compute $\delta G_F$.
Once again, however, this calculation is intractable for general slow-roll spacetimes.
Fortunately, computing the de Sitter correction $\delta G_F^\mrm{dS}$ is more tractable, and hence we can accurately approximate $\delta G_F$ for a quasi-de Sitter spacetime by applying the slow-roll approximation here as well \cite{Chatwin-Davies:2022irs}.
A technical complication which arises in computing $\delta G_F^\mrm{dS}$ is that the minimal realization of $\Box_k$ as a differential operator is only symmetric and not self-adjoint.
In functional analytic language, its deficiency indices are $(1,1)$ and a generalized boundary condition must be specified \cite{naimark1968linear}.
This freedom in defining a self-adjoint $\Box_k$ is in correspondence with the choice of vacuum state for de Sitter.
We fix this freedom by requiring that $\Box_k$ act as the left inverse of $G_F$, where the latter is determined from canonical quantization and from having chosen the Bunch-Davies state as the vacuum; see \cite{Chatwin-Davies:2016byj,Chatwin-Davies:2022irs} for further details. In this way we are able to compute $\delta G_F^\mrm{dS}$, and hence, via the slow-roll approximation, $\delta G_F$ for a quasi-de Sitter spacetime.
Upon computing $G_F$ and $\delta G_F$, we can use Eq. \eqref{eq:reldiffscalar} to construct the relative correction $\dDR/\DR$ to the primordial power spectrum due to a sharp covariant cutoff at the scale $\Omega$. We find that $\delta \Delta_\calR^2/\Delta_\calR^2$ is a function of only $\sigma(k) \equiv H(k)/\Omega$, the ratio of the Hubble scale \eqref{eq:Heff} and cutoff scale \footnote{Strictly speaking, $\calR$ experiences a modified Hubble parameter due to a modified scale factor $z = (\Mpl^2\epsilon/4\pi)^{1/2}a$; however, $H = \dot{a}/a$ and $\dot{z}/z$ coincide to leading order with deviations that are suppressed by slow-roll parameters.}.
Concretely, the correction to the PPS is
\begin{equation} \label{eq:prediction}
\frac{\delta \Delta^2_\calR}{\Delta^2_\calR}=
\mathcal{C}
\frac{\sigma(k)^{3/2}}{\ln(\sigma(k)/2)} \sin\left(\omega(k)\, \sigma(k)\right),
\end{equation}
where $\mathcal{C} = 0.8796...$ \footnote{More precisely, $\mathcal C = 2(\cos1+\sin1)\pi^{-1}$, where $\cos1$ and $\sin1$ appear from evaluating $J_{3/2}(k\eta)$ and $Y_{3/2}(k\eta)$ at horizon crossing, $k\eta=-1$.} and where we have defined
\begin{equation} \label{eq:frequency}
\omega(k) \equiv \frac{1}{\sigma(k)^2}\left(1-\ln\frac{2}{\sigma(k)}\right).
\end{equation}
The correction $\dDR(k)/\DR(k)$ to the PPS thus consists of oscillations whose amplitude and frequency are functions of $k$; see Fig. \ref{fig:prediction}. Notice that the form of the correction depends only on the single parameter $\Omega$, the energy scale at which we impose a sharp, covariant UV cutoff. As $\Omega$ is increased, the amplitude of the oscillations decreases as $\Omega^{-3/2}$ (up to a logarithmic factor) and the frequency of the oscillations in the $k$ domain increases.
In the other direction, as illustrated in the lower plot of \Fig{fig:prediction}, if the cutoff is brought very close to the Hubble scale, the correction becomes so prominent that existing observational data should be able to place a lower bound on where the cutoff scale could lie.
The analysis so far has been focused on a sharp UV cutoff, which is fully specified by the single parameter $\Omega$.
As discussed earlier, by using a continuous function $f(\lambda)$ instead of $\theta$ in Eq. \eqref{eq:cutoff}, it is also possible to soften the cutoff. Specifying such cutoffs necessarily requires more than just a single parameter $\Omega$, and so the corresponding class of corrections $\dDR/\DR$ to the PPS will depend on multiple parameters. Our analysis in the longer companion paper \cite{Chatwin-Davies:2022irs} shows that the effect of smoothing out the cutoff is to alter the amplitude and phase of the corrections to the PPS in a way that depends on the specific form of the smoothing. In particular, the amplitude of the correction tends to decrease as one increases the smoothness of the cutoff.
However, we also found that if there exists a natural covariant UV cutoff at a well-defined finite scale, then the frequency of the predicted oscillations in the PPS, given by Eq. \eqref{eq:frequency}, is essentially independent of whether the cutoff is sharp or softened.
\emph{Discussion}---If, as is widely assumed, there exists a natural UV cutoff at or close to the Planck scale, then the framework of quantum field theory should increasingly exhibit the presence of this UV cutoff when approaching the cutoff scale from low energies. Since the quantum field theoretic calculations of cosmic inflation involve scales that are only about 5-6 orders of magnitude from the Planck scale, the inflationary predictions might exhibit a noticeable impact of such a natural UV cutoff.
Here, we presented the first explicit inflationary predictions for a covariant natural UV cutoff in the inflationary path integral. We focused on the case of the scalar primordial power spectrum, $\DR(k)$, and we considered both the simplest case where the UV cutoff is sharp and the case where it is softened.
The full details of the calculations, as well as the corresponding predictions for the tensor power spectrum, are discussed in the companion article \cite{Chatwin-Davies:2022irs}.
Our predictions assume single-field inflation but are otherwise model independent. We do not make assumptions on the inflationary potential; instead, we write our predictions explicitly in terms of measured cosmological parameters.
We found that $\DR(k)$ is corrected by small oscillations, $\dDR(k)$, on top of the power spectrum.
Logarithmic oscillations in and of themselves are not unique to our prediction---see, e.g., Ref.~\cite{Calcagni:2016ofu} and Refs.~[27-43] therein---however, the highly specific signature \eqref{eq:prediction} and the chirping frequency \eqref{eq:frequency} are, to our knowledge, unique.
Propagated forward in time, they would imply corresponding corrections to the CMB and to large-scale cosmic structure.
The most robust prediction here is for the predicted frequency of these oscillations, \Eq{eq:frequency}. This is because this frequency is virtually independent of the exact form of the covariant cutoff, i.e., whether it is a sharp cutoff or whether the cutoff turns on smoothly. Instead, the predicted frequency depends only on the characteristic cutoff scale $\Omega$.
If the UV cutoff is sharp, then the amplitude and the phase of the oscillations also only depend on $\Omega$.
In this case, we obtained a simple analytic expression for $\dDR(k)$, given in Eq. \eqref{eq:prediction}.
Alternatively, if the UV cutoff is soft, then the amplitude and the phase of $\dDR(k)$ depend moderately on the precise form of the cutoff.
The specificity of our predictions might help significantly in testing them, in particular by using template search methods.
This would be similar to recent measurements of gravitational radiation emitted by distant massive objects \cite{LIGOScientific:2016aoc}, which succeeded by searching for members of a three-parameter family of template waveforms \cite{Privitera:2013xza}.
Further, while a natural UV cutoff could presumably be located anywhere between the Hubble scale during inflation and the Planck scale, the corrections that the UV cutoff induces in inflationary spectra are significantly easier to detect the closer $\Omega$ is to the Hubble scale rather than to the Planck scale. This is because, towards the Hubble scale, the amplitude of the oscillations increases while their frequency decreases.
It will be very interesting to determine which values for a natural covariant UV cutoff $\Omega$ can already be ruled out, namely by comparing the predictions presented here with the experimentally inferred primordial power spectrum.
As future experiments increase the accuracy with which the primordial power spectrum is known, a positive detection of the signature of a natural UV cutoff may even come within the reach of observations.
\begin{acknowledgments}
We thank Panos Betzios, Fran\c cois Bouchet, Richard Easther, Simon Foreman, Lukas Hergt, Arjun Kar, Jorma Louko, Rob Martin, and Mark Van Raamsdonk for helpful discussions during the preparation of this manuscript, as well as Gianluca Calcagni and Albert Roura for comments on the first version.
ACD acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number PDF-545750-2020].
ACD was supported for a portion of this work as a postdoctoral fellow (Fundamental Research) of the National Research Foundation -- Flanders (FWO), Belgium. AK acknowledges support through a Discovery Grant of the Natural Sciences and Engineering Research Council of Canada (NSERC) and a Discovery Project grant of the Australian Research Council (ARC). PS acknowledges support from the NSERC CGS-D award.
\end{acknowledgments}
\bibliographystyle{utphys-modified}
\bibliography{refs.bib}
|
Title:
Debiasing the Minimum-Mass Extrasolar Nebula: On the Diversity of Solid Disk Profiles |
Abstract: A foundational idea in the theory of in situ planet formation is the "minimum
mass extrasolar nebula" (MMEN), a surface density profile ($\Sigma$) of disk
solids that is necessary to form the planets in their present locations. While
most previous studies have fit a single power-law to all exoplanets in an
observed ensemble, it is unclear whether most exoplanetary systems form from a
universal disk template. We use an advanced statistical model for the
underlying architectures of multi-planet systems to reconstruct the MMEN. The
simulated physical and Kepler-observed catalogs allow us to directly assess
the role of detection biases, and in particular the effect of non-transiting or
otherwise undetected planets, in altering the inferred MMEN. We find that
fitting a power-law of the form $\Sigma = \Sigma_0^* (a/a_0)^\beta$ to each
multi-planet system results in a broad distribution of disk profiles;
$\Sigma_0^* = 336_{-291}^{+727}$ g/cm$^2$ and $\beta = -1.98_{-1.52}^{+1.55}$
encompass the 16th-84th percentiles of the marginal distributions in an
underlying population, where $\Sigma_0^*$ is the normalization at $a_0 = 0.3$
AU. Around half of inner planet-forming disks have minimum solid masses of
$\gtrsim 40 M_\oplus$ within 1 AU. While transit observations do not tend to
bias the median $\beta$, they can lead to both significantly over- and
under-estimated $\Sigma_0^*$ and thus broaden the inferred distribution of disk
masses. Nevertheless, detection biases cannot account for the full variance in
the observed disk profiles; there is no universal MMEN if all planets formed in
situ. The great diversity of solid disk profiles suggests that a substantial
fraction ($\gtrsim 23\%$) of planetary systems experienced a history of
migration.
| https://export.arxiv.org/pdf/2208.09031 |
\newcommand{\vdag}{(v)^\dagger}
\newcommand\aastex{AAS\TeX}
\newcommand\latex{La\TeX}
\renewcommand{\cellalign}{bc} %
\renewcommand{\theadalign}{bc} %
\def\qm#1{{\color{blue}{\bf Matthias: #1}}} %
\def\qe#1{{\color{red}{\bf Eric: #1}}} %
\def\rev#1{{\color{magenta}{#1}}} %
\def\newt#1{{\color{violet}{#1}}} %
\def\Kepler{\textit{Kepler}} %
\def\Gaia{\textit{Gaia}} %
\def\TESS{\textit{TESS}} %
\def\JWST{\textit{JWST}} %
\def\SysSim{\textit{SysSim}} %
\defcitealias{2020AJ....160..276H}{H20}
\received{}
\revised{}
\accepted{}
\shorttitle{Debiasing the Minimum-Mass Extrasolar Nebula}
\shortauthors{He \& Ford}
\graphicspath{{./}{Figures/}}
\begin{document}
\title{Debiasing the Minimum-Mass Extrasolar Nebula: On the Diversity of Solid Disk Profiles}
\correspondingauthor{Matthias Yang He}
\email{mhe@nd.edu}
\author[0000-0002-5223-7945]{Matthias Y. He}
\affiliation{Department of Physics \& Astronomy, 225 Nieuwland Science Hall, The University of Notre Dame, Notre Dame, IN 46556, USA}
\affiliation{Department of Astronomy \& Astrophysics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA}
\affiliation{Center for Exoplanets \& Habitable Worlds, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA}
\affiliation{Center for Astrostatistics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA}
\affiliation{Institute for Computational \& Data Sciences, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA}
\author[0000-0001-6545-639X]{Eric B. Ford}
\affiliation{Department of Astronomy \& Astrophysics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA}
\affiliation{Center for Exoplanets \& Habitable Worlds, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA}
\affiliation{Center for Astrostatistics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA}
\affiliation{Institute for Computational \& Data Sciences, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA}
\affiliation{Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540, USA}
\affiliation{Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA}
\keywords{Exoplanet systems (484); Exoplanet formation (492); Exoplanets (498); Extrasolar rocky planets (511); Planetary system formation (1257); Protoplanetary disks (1300)}
\section{Introduction} \label{sec:Intro}
Well before the discovery of extrasolar planets, the idea of a ``minimum mass'' solar nebula (MMSN) was introduced to posit the \textit{in situ} formation of the planets in our solar system \citep{1977Ap&SS..51..153W, 1985prpl.conf.1100H}. The MMSN framework is intuitively simple: in this view, the initial proto-planetary disk must have enough material to form the final planets in their present locations, and in particular must contain enough solids for the planets to locally accrete up to their core masses within their feeding zones. This implies that one can work backwards to infer the requisite (solid) surface density, $\Sigma$, local to each planet by spreading its mass in an annulus centered around its semi-major axis ($a$). By assuming a smooth disk profile, a power-law for $\Sigma$ as a function of $a$ is typically fit to then infer the radial distribution of disk solids.
Since the discovery of thousands of exoplanets, largely due to the transformational success of NASA's \Kepler{} mission, numerous studies have applied the MMSN template to these extrasolar worlds in order to form an analogous minimum mass \textit{extrasolar} nebula (MMEN; \citealt{2013MNRAS.431.3444C, 2014MNRAS.440L..11R, 2014ApJ...795L..15S, 2020AJ....159..247D}).
Yet, it continues to be debated whether most exoplanetary systems conform to a universal disk profile.
Most previous studies have fit a single disk profile, in the form of a single power-law for $\Sigma(a)$, to all the exoplanet candidates observed by \Kepler{} simultaneously \citep{2013MNRAS.431.3444C, 2014ApJ...795L..15S, 2020AJ....159..247D}.
This construction has multiple shortcomings: (1) it washes out any potential system-level correlations, (2) the resulting ``MMEN'' does not represent the properties of any single true/physical disk, and (3) it does not account for planet multiplicity, and specifically the effect of missing (undetected) planets in each system.
One notable exception\footnote{A pioneering study by \citet{2004ApJ...612.1147K} also fit power-laws to individual planetary systems, but used exoplanets detected by the radial velocity method and was limited to a much smaller sample of systems.} is \citet{2014MNRAS.440L..11R} (hereafter RC14), who fit a power-law to each individual system with three or more observed planets. They showed that this produces a broad diversity of minimum-mass disks with profiles ranging from $\Sigma \propto a^{-3.2}$ to $a^{0.5}$, thus retaining the variance across individual systems and illustrating the inconsistency of a universal disk profile.
Previous studies have also relied on simplistic treatments for detection biases and the exoplanet mass-radius relationships to construct the MMEN from the \Kepler{} planet catalog, typically by applying a correction factor for the transit geometric and detection probability (i.e., a form of inverse detection efficiency) of each planet in an attempt to ``debias'' the observed sample \citep{2013MNRAS.431.3444C, 2020AJ....159..247D}.
While this approach effectively weights the planets such that longer-period and smaller planets are compensated for their reduced detectability by transits, it is a clear oversimplification of the \Kepler{} detection pipeline \citep{2020AJ....160..159C} and does not correct for missing planets in systems with known planet(s).
The recent development of detailed forward models for the \Kepler{} mission has enabled unprecedented inferences on the intrinsic population of inner planetary systems from the observed population (e.g., \citealt{2019MNRAS.490.4575H}), leading to advanced statistical models such as the ``maximum AMD model'' that captures the underlying architectures and correlations in multi-planet systems \citep{2020AJ....160..276H}.
The \SysSim{} simulated catalogs, comprised of \textit{physical} and \textit{observed} catalog pairs, provide a way to directly address the impact of non-transiting or otherwise unseen planets in \Kepler{}-like systems.
In this article, we use simulated catalogs (physical and observed) to assess how detection biases can affect our interpretation of the MMEN and make comparisons to the \Kepler{} observed catalog.
In \S\ref{sec:Methods}, we describe how we build the MMEN, beginning with a summary of the \SysSim{} population model developed previously to reproduce the \Kepler{} planet catalog (\S\ref{sec:AMD_model}), and a review of how planets are converted to solid surface densities using various prescriptions for their feeding zones found in the literature (\S\ref{sec:surface_densities}).
We discuss the standard method in which a power-law is fit to all the exoplanets in a given catalog, and show differences in fitting to the simulated observed and physical catalogs (\S\ref{sec:fit_all}).
We then adopt a procedure modified from that of RC14 in which a power-law disk profile is fit to each multi-planet system to directly construct the distribution of MMEN from the physical catalogs (\S\ref{sec:fit_systems}).
In \S\ref{sec:missing_planets}, this procedure is repeated for observed multi-transiting systems (simulated and \Kepler{}) to assess the effect of missing planets in altering the inferred MMEN distribution.
We discuss the implications for the distribution of minimum disk masses in \S\ref{sec:disk_masses}.
Finally, we summarize and discuss our key results in \S\ref{sec:discussion}.
\section{Constructing the Minimum Mass Extrasolar Nebula} \label{sec:Methods}
\subsection{Population model} \label{sec:AMD_model}
The ``maximum AMD model'' (\citealt{2020AJ....160..276H}; hereafter, the \citetalias{2020AJ....160..276H} model) was developed to describe as many features of the \Kepler{} planet catalog as possible, with a focus on the correlated properties of planets in multi-transiting systems, using a combination of statistical distributions and conditions for dynamical stability \citep{2017A&A...605A..72L, 2017A&A...607A..35P}.
It provides a detailed parameterization of the underlying distribution of planetary systems between orbital periods of $P = 3 - 300$ days and planet sizes of $R_p = 0.5 - 10 R_\oplus$ around a purified sample of FGK dwarf stars (see \citealt{2019AJ....158..109H} and \citetalias{2020AJ....160..276H} for a description of the stellar selection criteria).
The \Kepler{} planetary systems in this sample were derived from the Kepler Objects of Interest DR25 data set obtained from the NASA Exoplanet Archive \citep{koidr25}\footnote{Accessed on 2020-10-19 at 22:34.} and include 2169 planet candidates of which 964 are in 388 multi-transiting systems.
For this planet sample, the model reproduces an astounding number of properties at the population level (see \citetalias{2020AJ....160..276H}; \citealt{2021AJ....161...16H}; \citealt{2021AJ....162..166M}), including but not limited to the following which are most relevant to this study: (1) the overall number of planets per star and the observed multiplicity distribution, (2) the period and period ratio distributions, and (3) the intra-system size similarity patterns, of which the latter two are commonly referred to as the ``peas-in-a-pod'' patterns (see \citealt{2022arXiv220310076W} for a review).
\emph{Mass-radius relation:} a planetary mass-radius (M-R) relation is necessary to estimate planet masses from their radii. The \citetalias{2020AJ....160..276H} model uses a probabilistic M-R relation consisting of a lognormal distribution centered around the ``Earth-like rocky'' model from \citet{2019PNAS..116.9723Z} for small planets, and a non-parametric model defined by a series of Bernstein polynomials fit to a sample of 127 \Kepler{} planets with RV or TTV masses from \citet{2018ApJ...869....5N} for large planets. The transition radius from small to large is chosen to be $1.472 R_\oplus$ such that both the mean prediction and the scatter are continuous across the entire range of radii.
This M-R relation is thus motivated by a combination of both physical (below the transition radius) and empirical (above the transition radius) models and is more detailed than previously adopted relations for computing the MMEN from the \Kepler{} planets.
\SysSim{} enables the generation of physical and observed catalog pairs from the \citetalias{2020AJ....160..276H} model (the code is available at \citealt{eric_ford_2022_5915004, matthias_yang_he_2022_5963884}).
A physical catalog represents (one realization of) the true, underlying distribution of planetary systems.
Each system in this catalog consists of a known \Kepler{} target star (with stellar parameters from the \Gaia{}-\Kepler{} Stellar Properties Catalog; \citealt{2020AJ....159..280B}) and a set of planets with physical radii, masses, orbital periods, and orbital elements drawn directly from the \citetalias{2020AJ....160..276H} model.
An observed catalog then represents one realization of the detected transiting planets from the physical catalog under a \Kepler{}-like (primary) mission, simulated using a detailed model for the combined detection and vetting efficiency of the \Kepler{} pipeline that accounts for the window and 1-$\sigma$ depth functions of each individual target star (\citealt{2017ksci.rept...14B, 2020AJ....160..159C}; see \citealt{2019AJ....158..109H} for details).
\begin{deluxetable*}{lcccccccc}
\centering
\tablecaption{Power-law fits (equation \ref{eq_mmen}, with $a_0 = 0.3$ AU) for various prescriptions of $\Delta{a}$ and ensembles of planets.}
\tablehead{
\colhead{} & \multicolumn2c{CL13} & \multicolumn2c{RC14} & \multicolumn2c{10Hill} & \multicolumn2c{S14} \\
\colhead{Fit all planets} & \colhead{$\Sigma_0$ (g/cm$^2$)} & \colhead{$\beta$} & \colhead{$\Sigma_0$ (g/cm$^2$)} & \colhead{$\beta$} & \colhead{$\Sigma_0$ (g/cm$^2$)} & \colhead{$\beta$} & \colhead{$\Sigma_0$ (g/cm$^2$)} & \colhead{$\beta$}
}
\startdata
Physical catalogs & $65_{-8}^{+9}$ & $-2.08_{-0.03}^{+0.02}$ & $146_{-15}^{+16}$ & $-1.97_{-0.03}^{+0.05}$ & $568_{-45}^{+47}$ & $-2.05_{-0.02}^{+0.01}$ & $147_{-12}^{+13}$ & $-2.55_{-0.02}^{+0.01}$ \\[3pt]
Observed catalogs & $246_{-6}^{+7}$ & $-1.84_{-0.02}^{+0.04}$ & $369_{-27}^{+42}$ & $-2.03_{-0.05}^{+0.05}$ & $1396_{-33}^{+22}$ & $-1.89_{-0.02}^{+0.02}$ & $414_{-10}^{+12}$ & $-2.35_{-0.02}^{+0.02}$ \\[3pt]
Kepler catalog & $264_{-5}^{+5}$ & $-1.67_{-0.02}^{+0.02}$ & $413_{-13}^{+13}$ & $-1.90_{-0.03}^{+0.03}$ & $1466_{-17}^{+20}$ & $-1.77_{-0.01}^{+0.01}$ & $445_{-4}^{+4}$ & $-2.18_{-0.01}^{+0.01}$ \\[3pt]
\hline
\rule{0pt}{4ex}\makecell{Fit each system} & \makecell{$\Sigma_0^*$ (g/cm$^2$)} & \makecell{$\beta$} & \makecell{$\Sigma_0^*$ (g/cm$^2$)} & \makecell{$\beta$} & \makecell{$\Sigma_0^*$ (g/cm$^2$)} & \makecell{$\beta$} & \makecell{$\Sigma_0^*$ (g/cm$^2$)} & \makecell{$\beta$} \\[3pt]
\hline
Physical catalog & $160_{-140}^{+313}$ & $-2.02_{-1.51}^{+1.50}$ & $336_{-291}^{+727}$ & $-1.98_{-1.52}^{+1.55}$ & $1038_{-771}^{+1125}$ & $-2.02_{-1.00}^{+1.00}$ & $263_{-196}^{+330}$ & $-2.53_{-0.99}^{+1.00}$ \\[3pt]
Observed catalog & $349_{-284}^{+597}$ & $-1.99_{-1.28}^{+1.36}$ & $519_{-420}^{+2100}$ & $-1.97_{-1.27}^{+1.54}$ & $1758_{-1214}^{+1766}$ & $-2.00_{-0.82}^{+0.96}$ & $496_{-328}^{+539}$ & $-2.39_{-0.79}^{+0.85}$ \\[3pt]
Kepler catalog & $388_{-308}^{+914}$ & $-1.84_{-1.25}^{+1.52}$ & $515_{-428}^{+2255}$ & $-1.80_{-1.28}^{+1.59}$ & $1776_{-1048}^{+2790}$ & $-1.82_{-0.72}^{+1.01}$ & $519_{-332}^{+723}$ & $-2.25_{-0.80}^{+0.91}$ \\[3pt]
\enddata
\tablecomments{The uncertainties for ``Fit all planets'' denote the 16th-84th percentiles computed over many iterations of simulated catalogs (each with the same number of target stars as the \Kepler{} catalog), while the uncertainties for ``Fit each system'' represent the 16th-84th percentiles of the distributions over all the systems in a single catalog. Since there is only one \Kepler{} catalog, the uncertainties for ``Fit all planets'' in the \Kepler{} catalog were computed via re-samplings of the mass-radius relation for all of the planets.}
\tablenotetext{*}{The fitted values of $\Sigma_0$ have been scaled up by a factor $\alpha$ for each system, as described in \S\ref{sec:fit_systems}.}
\label{tab:params}
\end{deluxetable*}
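The per-system fits reported in Table~\ref{tab:params} reduce, in each system, to a linear regression in log-log space. A minimal sketch follows; the three-planet system below is hypothetical, invented for illustration, and not drawn from the \SysSim{} or \Kepler{} catalogs.

```python
import numpy as np

a0 = 0.3  # AU, the normalization point used in Table (params)

def fit_disk_profile(a, sigma):
    """Fit Sigma = Sigma_0 (a/a0)^beta to one system's (a, Sigma) points.

    Ordinary least squares on log Sigma vs. log(a/a0); returns (Sigma_0, beta).
    """
    beta, log_sigma0 = np.polyfit(np.log(a / a0), np.log(sigma), 1)
    return np.exp(log_sigma0), beta

# Hypothetical 3-planet system: semi-major axes in AU, Sigma in g/cm^2
a = np.array([0.08, 0.15, 0.35])
sigma = np.array([2500.0, 800.0, 150.0])
sigma0, beta = fit_disk_profile(a, sigma)   # roughly Sigma_0 ~ 2e2, beta ~ -1.9
```

Repeating this fit over every multi-planet system in a catalog yields the distributions of $\Sigma_0^*$ and $\beta$ whose 16th-84th percentiles are quoted in the table.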
\subsection{Computing solid surface densities from planets} \label{sec:surface_densities}
The minimum solid surface density ($\Sigma$) required to form a planet is given by spreading the mass of the planet in solids ($M_p$) over an annulus of width $\Delta{a}$ (which conceptually represents the feeding zone of the planet) centered at a separation of $a$:
\begin{equation}
\Sigma = \frac{M_p}{2\pi{a}\Delta{a}}. \label{eq_ssd}
\end{equation}
Since the solid mass of a planet is limited to its core mass and most planets above $\sim 1.6 R_\oplus$ have gaseous envelopes \citep{2015ApJ...801...41R}, we also limit $M_p$ to $10 M_\oplus$. The choice of setting $10 M_\oplus$ as the maximum core mass is motivated by numerous studies on the critical core mass for runaway gas accretion \citep{1982P&SS...30..755S, 1996Icar..124...62P, 2006ApJ...648..666R, 2014ApJ...797...95L, 2015ApJ...800...82P}, which find $M_{\rm core} \simeq 5-20 M_\oplus$. Yet, it is possible for some planets to have solid masses greater than 10 Earth masses (potentially including Jupiter and Saturn; \citealt{1999P&SS...47.1183G, 2017GeoRL..44.4649W, 2019Natur.572..355L}); thus, we also repeat our analyses without setting any maximum core mass and find little change in our results\footnote{For example, while the upper tail of our distribution in solid surface density normalizations (later defined in \S\ref{sec:fit_systems}) extends to modestly higher values, the median value increases by only $\sim 2\%$. Thus, our results are insensitive to our assumption of the exact core mass limit.}, due to the infrequency of large/giant planets in the \citetalias{2020AJ....160..276H} model.
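Equation~\eqref{eq_ssd} together with the $10\,M_\oplus$ core-mass cap can be sketched as follows; the planet parameters in the example are hypothetical, and the width choice $\Delta{a} = a$ is simply the most elementary prescription.

```python
import numpy as np

M_EARTH_G = 5.972e27   # Earth mass in grams
AU_CM = 1.496e13       # astronomical unit in centimeters

def solid_surface_density(m_p_earth, a_au, delta_a_au, core_cap=10.0):
    """Minimum solid surface density, Eq. (eq_ssd), in g/cm^2.

    The planet's solid mass is capped at core_cap (Earth masses), per the
    assumption that mass above the critical core mass is gaseous envelope.
    """
    m_solid = min(m_p_earth, core_cap) * M_EARTH_G
    return m_solid / (2 * np.pi * (a_au * AU_CM) * (delta_a_au * AU_CM))

# Example: a hypothetical 5 M_Earth planet at 0.1 AU with Delta_a = a
sigma = solid_surface_density(5.0, 0.1, 0.1)   # ~2.1e3 g/cm^2
```

Note that because of the cap, a $50\,M_\oplus$ planet contributes the same $\Sigma$ as a $10\,M_\oplus$ planet at the same location.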
For each planet, we then compute $\Sigma$ using a number of previously adopted prescriptions for the feeding zone width $\Delta{a}$:
\begin{enumerate}
\item The simplest prescription is to adopt a width equal to the semi-major axis (\citealt{2013MNRAS.431.3444C}; hereafter CL13),
\begin{equation}
\Delta{a} = a. \label{eq_deltaa_CL2013}
\end{equation}
While convenient, this prescription tends to result in overlapping regions between planets in the same system and likely overestimates $\Delta{a}$, thus underestimating $\Sigma$.
\item RC14 recommend using the geometric means of the semi-major axes for neighboring planets as the dividing boundaries for their feeding zones,
\begin{align}
a_{{\rm sep},i} &= \sqrt{a_i a_{i+1}}, \quad i = 1,\dotsc,N-1 \\
\Delta{a_i} &= a_{{\rm sep},i} - a_{{\rm sep},i-1}, \label{eq_deltaa_RC2014}
\end{align}
where $a_{{\rm sep},i}$ is the boundary between the $i^{\rm th}$ and $(i+1)^{\rm th}$ planets, $\Delta{a_i}$ is the width for the $i^{\rm th}$ planet, and $N$ is the number of (physical or observed) planets in the system. We define the inner (outer) edge of the innermost (outermost) planet's feeding zone by enforcing the same ratio in $a$ interior and exterior to the planet. We note that this prescription is only applicable to multi-planet systems and may over- or underestimate $\Delta{a}$ when not all planets are detected.
\item Alternatively, one can justify using a multiple of the planet's Hill radius:
\begin{equation}
\Delta{a} = k R_{\rm Hill} = k{a}\Big(\frac{M_p}{3M_\star}\Big)^{1/3}, \label{eq_deltaa_kHill}
\end{equation}
where $k$ is a constant factor and $M_\star$ is the mass of the host star. \citet{2020AJ....159..247D} chose $k = 10$ as motivated by the spacings of \Kepler{} observed multi-planet systems \citep{2018AJ....155...48W}, which we denote hereafter as 10Hill. While this is appropriate for low-mass planets on low-eccentricity orbits, it is expected to underestimate the width of the feeding zone for systems with significant eccentricities.
\item Finally, \citet{2014ApJ...795L..15S} (hereafter S14) suggests
\begin{equation}
\Delta{a} = 2^{3/2} a \sqrt{\frac{a M_p}{R_p M_\star}}, \label{eq_deltaa_S2014}
\end{equation}
motivated by considerations for the role of giant impacts in dictating a planet's effective feeding zone width.
\end{enumerate}
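The four prescriptions above can be summarized in a short sketch (the function names, unit conventions, and constants below are our own illustrative choices, not code from the SysSimPyMMEN package):

```python
import numpy as np

AU = 1.496e13        # cm
M_EARTH = 5.972e27   # g
M_SUN = 1.989e33     # g
R_EARTH = 6.371e8    # cm

def sigma_solid(m_p, a, delta_a):
    """Minimum solid surface density: Sigma = M_p / (2 pi a delta_a)."""
    return m_p / (2.0 * np.pi * a * delta_a)

def delta_a_cl13(a):
    """CL13: feeding zone width equal to the semi-major axis."""
    return a

def delta_a_rc14(a):
    """RC14: zones bounded by geometric means of neighboring semi-major axes.

    Edge boundaries preserve the same ratio in a on both sides of the
    innermost/outermost planet.  Requires at least two planets.
    """
    a = np.sort(np.asarray(a, dtype=float))
    sep = np.sqrt(a[:-1] * a[1:])                 # a_sep,i = sqrt(a_i a_{i+1})
    bounds = np.concatenate([[a[0] ** 2 / sep[0]], sep, [a[-1] ** 2 / sep[-1]]])
    return np.diff(bounds)

def delta_a_hill(a, m_p, m_star, k=10.0):
    """k Hill radii (k = 10 gives the 10Hill prescription)."""
    return k * a * (m_p / (3.0 * m_star)) ** (1.0 / 3.0)

def delta_a_s14(a, m_p, r_p, m_star):
    """S14: width motivated by giant impacts (a and r_p in the same length unit)."""
    return 2.0 ** 1.5 * a * np.sqrt(a * m_p / (r_p * m_star))
```

For instance, an Earth analog at 1 AU yields $\Sigma \approx 4$ g/cm$^2$ under CL13, while the RC14 widths for a given system depend only on the spacings of its planets.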
In Figure \ref{fig:mmen_deltaa} (left panel), we plot the solid surface density versus semi-major axis for each planet in the \Kepler{} catalog (black points) as well as in a simulated observed catalog (blue points). For consistency with the rest of the paper, the RC14 prescription is used in this panel.
Similarly, we plot the solid surface densities for a sample of $10^3$ simulated planets drawn from a simulated physical catalog (right panel). Each planet is repeated as four points (once for each of the above prescriptions). While there is significant scatter in $\Sigma$ due to both (1) the range of planet masses (up to two orders of magnitude even after capping the core masses; $M_p \sim 0.1-10 M_\oplus$) and (2) the varying prescriptions for $\Delta{a}$, there is a clear linear trend in $\log{\Sigma}$ vs. $\log{a}$ indicative of a power-law relation between $\Sigma$ and $a$ that is qualitatively consistent with previous studies. In the next subsection, we describe our procedure for fitting power-laws to both the total ensemble of planets as well as individual multi-planet systems.
\subsection{The canonical power-law model} \label{sec:fit_all}
A power-law model for the MMEN is typically fitted to the solid surface densities computed from the planets as a function of the semi-major axis, of the form:
\begin{equation}
\Sigma(a) = \Sigma_0 \bigg(\frac{a}{a_0}\bigg)^\beta, \label{eq_mmen}
\end{equation}
where $\Sigma_0 \equiv \Sigma(a_0)$ is the normalization at separation $a_0$ and $\beta$ is the slope. While $a_0$ is typically assumed to be 1 AU, we choose $a_0 = 0.3$ AU so that it is closer to the median $a$ of the simulated and \Kepler{} planets, and thus reduces the covariance of $\Sigma_0$ and $\beta$.
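Since equation \ref{eq_mmen} is a straight line in $\log\Sigma$--$\log{a}$ space, the fit reduces to a linear least-squares regression. A minimal sketch (the helper name is our own, not from the paper's code):

```python
import numpy as np

def fit_mmen_powerlaw(a_au, sigma, a0=0.3):
    """Least-squares fit of log10(Sigma) = log10(Sigma0) + beta * log10(a / a0).

    Returns (Sigma0, beta).  Normalizing at a0 = 0.3 AU, near the median a of
    the sample, reduces the covariance between Sigma0 and beta.
    """
    x = np.log10(np.asarray(a_au, dtype=float) / a0)
    y = np.log10(np.asarray(sigma, dtype=float))
    beta, log_sigma0 = np.polyfit(x, y, 1)  # slope, intercept
    return 10.0 ** log_sigma0, beta
```

Applied to surface densities drawn from an exact power-law, the fit recovers the input normalization and slope to machine precision.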
Most previous studies have fitted equation \ref{eq_mmen} to an ensemble of planets (from transit or RV surveys) to construct a single, ``universal'' MMEN. To facilitate direct comparisons with these prior studies, we also fit a power-law to all the planets in each of the catalogs (\Kepler{} observed, simulated observed, and simulated physical), as denoted by the various dashed lines in Figure \ref{fig:mmen_deltaa}. The complete results for all prescriptions of $\Delta{a}$ are presented in Table \ref{tab:params} under ``Fit all planets''.
First, we compare the results of the simulated observed catalogs to the \Kepler{}-observed catalog. While there is a qualitatively good agreement between the two, the fits to the simulated planets give slightly lower normalizations at 0.3 AU ($\sim 5-10\%$ smaller values of $\Sigma_0$) for all prescriptions. Similarly, the values of $\beta$ are slightly steeper in the simulated observed catalogs than in the actual \Kepler{} observations.
These differences are likely due to there being fewer simulated systems with positive size ordering (``monotonicity''; see Figure 12 of \citetalias{2020AJ....160..276H}) compared to the \Kepler{} data, i.e. the simulated catalogs contain relatively more large planets at short periods than the \Kepler{} systems.
However, while these differences are statistically significant over many simulated catalogs, they are smaller than the differences arising from the various prescriptions.
Next, we compare how the fits to the simulated planets change between the physical and observed catalogs. This comparison quantifies how the detection biases of the transit survey alter the inferred (mean) MMEN. We find that the values of $\Sigma_0$ are a factor of $\sim 2.5-4$ lower for the physical catalogs compared to the observed catalogs; this is readily explained by the fact that transit detections are biased towards larger planets (which would tend to be more massive) at all separations. Interestingly, $\beta$ is only modestly affected, appearing to be slightly dampened by the detection biases; surprisingly, the opposite may even be true for the RC14 prescription, although both physical and observed fits are consistent with $\beta = -2$.
Finally, we focus on the physical catalogs and the differences between the prescriptions for $\Delta{a}$. While the fits to the observed planets illustrate how detection biases affect the results and serve as a more direct comparison to previous studies, the fits to the physical planets should be interpreted as the \textit{true} MMEN if one knew of all the planets (within the range probed by the simulations, 0.04 to 0.88 AU).
We find that $\beta$ is steepest for the S14 prescription, comparable between CL13 and 10Hill, and slightly shallower for RC14. All of these MMEN slopes are steeper than the value of $\beta = -1.5$ for the MMSN.
The normalization ($\Sigma_0$) is highest for 10Hill, reflecting that it also tends to be the narrowest definition of $\Delta{a}$ of the four prescriptions; intuitively, a smaller feeding zone implies that a greater solid surface density is necessary to collect the same amount of solid material for forming a planet. In contrast, the approximation used by CL13 ($\Delta{a} = a$) results in the lowest value of $\Sigma_0$. Broadly, these results indicate that the mean MMEN is more massive than the MMSN for the innermost regions, although all of these (averaged) extrasolar disk profiles must cross under the MMSN model at some point due to the steeper $\beta$ (e.g., $\sim 0.3$ AU for CL13 and $\gtrsim 1$ AU for RC14).
\subsection{A diversity of disk profiles: fitting power-laws to individual systems} \label{sec:fit_systems}
The single power-law model defined above averages over the global population of exoplanets, but it does not describe the properties of any real or single planet-forming disk. Furthermore, it fails to capture the diversity of individual disks. A better approach is given by \citet{2014MNRAS.440L..11R}, who fit the solid surface density profiles of each individual multi-planet system to show that assuming a universal disk profile is inconsistent with the data. By considering \Kepler{} and RV systems with at least three planets smaller than $5 R_\oplus$ or $30 M_\oplus$, RC14 showed that there is an extremely wide range of slopes, from $\beta = -6.3$ to $5.8$.
Here, we adopt a very similar approach to RC14 by also fitting a power-law (equation \ref{eq_mmen}) to each system, starting with the multi-planet (2+) systems in our physical catalog. One issue with this approach is that the resulting power-law for a given system does not guarantee that there is enough solid disk mass to form \textit{every} planet in the system, since by design the power-law fit will be above some points and below the other points in the $\Sigma$ vs. $a$ space (the latter of which have under-predicted local solid surface densities).\footnote{This is not a problem for the two-planet systems, since a power-law can always be fit exactly through both points.} To address this issue, we then ``scale up'' each power-law such that all planets in a given system are at or below the curve, by multiplying $\Sigma_0$ by a scale factor, $\alpha = {\rm max}\{\Sigma_i/\Sigma(a_i)\}$, where $\Sigma_i$ is the solid surface density for the $i^{\rm th}$ planet in the system and $\Sigma(a_i)$ is the solid surface density of the power-law fit evaluated at that planet's semi-major axis.
Hereafter, we will use $\Sigma_0^* \equiv \alpha\Sigma_0$ to denote the scaled-up solid surface density normalization.
This ensures that the resulting power-law profile contains enough mass to form each planet in the system while being self-consistent with the assumption that each planet accreted material from within its feeding zone -- a true ``minimum mass'' extrasolar nebula.
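The scale-up step described above can be sketched as follows (a hypothetical helper, assuming the power-law fit of equation \ref{eq_mmen} has already been obtained):

```python
import numpy as np

def scale_up(sigma_planets, a_planets, sigma0, beta, a0=0.3):
    """Return (alpha, Sigma0*) where alpha = max_i { Sigma_i / Sigma(a_i) }.

    Multiplying Sigma0 by alpha guarantees the power law lies at or above
    every planet's minimum solid surface density in the system.
    """
    model = sigma0 * (np.asarray(a_planets, dtype=float) / a0) ** beta
    alpha = np.max(np.asarray(sigma_planets, dtype=float) / model)
    return alpha, alpha * sigma0
```

By construction $\alpha \geq 1$ whenever at least one planet lies on or above the unscaled fit, and the scaled curve passes exactly through the most demanding planet.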
The resulting distribution of power-law fits to each multi-planet system in a physical catalog is shown in Figure \ref{fig:fit_per_sys_RC14} (using the RC14 prescription). In the top panel, we plot a sample of power-laws in $\Sigma$ vs. $a$. The bottom panel shows the corresponding distribution of $\Sigma_0^*$ and $\beta$. While the median MMEN slope ($\beta = -1.98$) is the same as the fit to the full ensemble (Figure \ref{fig:mmen_deltaa}), there is a broad and symmetric distribution with the 16th-84th percentile ranging from $\beta = -3.5$ to $-0.43$. Similarly, there is a wide distribution of $\Sigma_0^*$ (16th-84th percentile ranging from 45 to 1060 g/cm$^2$); the median $\Sigma_0^*$ is also comparable to the fit over all planets considering a scale factor was applied to each individual system (median $\alpha = 2.16$ for systems with 3+ planets) but not to the full ensemble. %
We remind the reader that our values of $\Sigma_0^*$ are normalized at 0.3 AU; projecting the power-laws to 1 AU gives $\Sigma(1\rm AU) = 31.5_{-29.2}^{+142.3}$ g/cm$^2$.
While the extreme ends of the distributions are dominated by fits to systems with just two planets, restricting to 3+ systems still produces a substantially broad distribution: $\beta = -1.96_{-1.17}^{+1.18}$ and $\Sigma_0^* = 427_{-343}^{+642}$ g/cm$^2$.
We find comparable results (a broad diversity of $\beta$ and $\Sigma_0^*$) for the other prescriptions of $\Delta{a}$, as listed in Table 1 under ``Fit each system''.
Remarkably, for all except the S14 prescription, the median slope is highly reminiscent of the predictions from the peas-in-a-pod and pair-wise energy-optimized configurations of planetary systems \citep{2019MNRAS.488.1446A, 2022arXiv220310076W}.
\section{The effect of missing planets} \label{sec:missing_planets}
In \S\ref{sec:fit_all}, we showed how fitting a power-law to \textit{all} planets in a physical catalog leads to some notable differences in the inferred MMEN compared to fitting to \textit{all} planets in an observed catalog.
While such a comparison illustrates how the overall distribution of planet radii (and thus masses) and semi-major axes is affected by transit detection biases, it does not demonstrate how the biases of a \Kepler{}-like survey affect individual systems, in the form of missing planets due to their non-transiting geometries or undetectably small sizes.
On the other hand, the approach of constructing MMEN from individual systems as described in \S\ref{sec:fit_systems} enables us to directly study this effect, by comparing the power-law fits of the observed systems to those of the physical systems. We note that for the RC14 prescription, the effect of missing planets in a given system is two-fold: (1) the power-law fit (its slope and/or normalization) may be significantly altered or biased, and (2) the assumed feeding zone ($\Delta{a}$) of each planet may also be affected.
We illustrate these biases in Figure \ref{fig:fit_examples} (top panel) using a simulated system. In this particular example, the $4.6 M_\oplus$ observed planet has an underestimated surface density due to an overestimated feeding zone width caused by the unseen $1.5 M_\oplus$ planet exterior to it, leading to a disk profile that is biased toward a steeper slope. The power-law fit to the underlying system must also be scaled up to ensure that it can form all the planets, including the $6.1 M_\oplus$ planet ($\alpha \simeq 3.8$ in this case). While this example demonstrates a case in which $\beta$ can appear steeper than reality, a wide range of other outcomes are also possible, from values of $\beta$ that are unchanged, to shallower, or even positive slopes (bottom panels of Figure \ref{fig:fit_examples}).
In this section, we aim to quantify how these biases affect the overall observed distribution of MMEN disk profiles.
We showed the wide diversity of underlying disk profiles arising from a physical catalog in the previous section. Here, we repeat the procedure of fitting a power-law to each planetary system in the corresponding observed catalog (as well as the \Kepler{}-observed catalog, to test how well our procedure mimics the real Kepler planetary systems). In Figure \ref{fig:fit_per_sys_obs_RC14}, we plot the distribution of power-law fit parameters for each observed multi-transiting system in a simulated catalog and the \Kepler{} catalog, analogous to the bottom panel of Figure \ref{fig:fit_per_sys_RC14}.
The RC14 prescription is used for the figure; the results of the other prescriptions are provided in the bottom half of Table \ref{tab:params} (under ``Fit each system'').
\subsection{Comparing fits to the observed versus physical planets in simulated systems}
Several conclusions can be drawn from comparing the distributions of $\Sigma_0^*$ and $\beta$ fitted between the observed systems (left panel of Figure \ref{fig:fit_per_sys_obs_RC14}) and the physical systems (bottom panel of Figure \ref{fig:fit_per_sys_RC14}).
First, it is remarkable that the distributions of power-law slopes are virtually unchanged between the physical and observed systems; the median $\beta$ and width of the distribution remain similar for each prescription.
This is in contrast to the fits to all planets in a catalog, which tend to lead to slightly shallower slopes for the observed catalogs than for the physical catalogs for all except the RC14 prescription.
However, we find that the distributions of normalizations are significantly skewed to larger values for all prescriptions.
While $\Sigma_0^*$ can be either under- or over-estimated for any given system, the median $\Sigma_0^*$ is a factor of $\sim 1.5$ (RC14) to $\sim 2.2$ (CL13) higher for the observed systems than for the physical systems. In addition, the spread of the $\Sigma_0^*$ distribution is also substantially increased. This is likely because smaller planets tend to be missed more often than larger ones, and larger planets tend to also be more massive, thus biasing the disks inferred from only the observed planets towards higher masses. We note that this is partly counteracted by the increased feeding zone widths due to missing planets in the RC14 prescription, which explains why the median $\Sigma_0^*$ is least biased using this prescription.
Although the overall distribution of $\beta$ is unaffected by detection biases, both $\beta$ and $\Sigma_0^*$ can be significantly over- or under-estimated for any given system due to missing planet(s), as already illustrated in Figure \ref{fig:fit_examples}. In Figure \ref{fig:fit_per_sys_obs_vs_phys_RC14}, we plot histograms of $\beta$ ratios (top panel) and $\Sigma_0^*$ ratios (bottom panel) for the simulated observed systems, where the ratio is the value of the fit to only the observed planets compared to the fit to all planets in their true underlying systems. The median $\beta$ ratio is close to unity, consistent with the previous comparison showing how the median $\beta$ is insensitive to detection biases. More than half of the observed systems have underestimated values of $\Sigma_0^*$.
RC14 argued against a universal MMEN by showing that synthetic populations of planetary systems generated from a single power-law model (i.e., a fixed value of $\beta$) fail to produce the wide diversity of disk profiles of the \Kepler{}-observed systems even after simplistic simulations of transit detections. While their reported distribution of surface density slopes resulted from fits to the observed systems only, they also showed that a flat distribution of $\beta \in [-2.5, 0]$ or a Gaussian distribution centered around $\beta = -1.25$ (with standard deviation $\sigma = 0.8$) for the underlying distribution appears to roughly match the observed distribution.
While we find a somewhat steeper value for the median surface density slope ($\beta \simeq -2$) compared to RC14, we have demonstrated that this result is largely insensitive to detection biases when accounting for the role of missing planets using the \citetalias{2020AJ....160..276H} model, and the diversity of slopes across systems remains. %
Additionally, our results show that while detection biases do broaden (and shift) the observed distribution of $\Sigma_0^*$, they cannot account for the full extent of the variance across multi-planet systems. Thus, we further strengthen the conclusion of RC14 that there is no universal MMEN, but rather a diversity of MMEN profiles in the underlying population (Figure \ref{fig:fit_per_sys_RC14}).
\subsection{Comparing fits to the simulated versus \Kepler{} systems}
As seen in Figure \ref{fig:fit_per_sys_obs_RC14}, the distributions of MMEN power-laws are very similar between the \SysSim{} and \Kepler{} observed systems.
There is a systematic shift to slightly shallower median values of $\beta$ for the \Kepler{} systems regardless of the prescription used (e.g. $\sim -2$ vs. $-1.8$ using RC14), similar to the results of fitting a power-law to all planets as discussed in \S\ref{sec:fit_all}.
The distributions of $\Sigma_0^*$ are very similar, although the \Kepler{} catalog appears to have relatively more systems in the high-$\Sigma_0^*$ tail, as evidenced by the higher 84th-percentile values for some prescriptions.
Nevertheless, the likeness of the power-law profiles between the simulated systems and the \Kepler{} systems illustrates the robustness of the \citetalias{2020AJ....160..276H} model, and enables us to estimate how missing planets in \Kepler{} systems likely affect our inferences on the MMEN. As we have shown here, the overall distribution of disk surface density slopes is not strongly affected by detection biases and likely represents the \textit{true} underlying distribution. The true disk masses are often higher than what we would conclude from the observed planets alone. In addition, the diversity of disk masses is also reduced for the true systems, although there is still a wide range (as summarized in \S\ref{sec:fit_systems}). In the next section, we use the results from the physical systems to infer the primordial distribution of total disk masses within 1 AU.
\section{Implications for minimum disk masses} \label{sec:disk_masses}
A power-law model for the surface density profile can be integrated to give the total mass in solids enclosed within a radius $r$ from a star:
\begin{align}
M_r &= \int_{0}^{2\pi} d\theta \int_{r_0}^{r} \Sigma(a)a\, {da} \\
&= \begin{cases}
\frac{2\pi\Sigma_0^*}{(2+\beta){a_0}^\beta} \big(r^{2+\beta} - {r_0}^{2+\beta}\big), &\beta \neq -2 \label{eq_int_mmen} \\
2\pi\Sigma_0^* {a_0}^2 \ln(r/r_0), &\beta = -2,
\end{cases}
\end{align}
where $r_0$ denotes the inner edge of the disk, interior to which the surface density truncates. For $\beta \leq -2$, a non-zero value of $r_0$ is also necessary to avoid a divergent enclosed mass. We choose $r_0 \simeq 0.04$ AU, corresponding to a 3-day orbital period around a solar-mass star (which is also the minimum period of planets in the \citetalias{2020AJ....160..276H} model).
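The enclosed-mass integral, including both branches of equation \ref{eq_int_mmen}, can be sketched as follows (our own illustrative function; the sanity check uses the RC14 median fit values quoted in the text):

```python
import numpy as np

AU = 1.496e13        # cm
M_EARTH = 5.972e27   # g

def enclosed_solid_mass(sigma0, beta, r, r0=0.04, a0=0.3):
    """Solid mass (g) within radius r (AU) for Sigma(a) = Sigma0 (a/a0)^beta,

    with Sigma0 in g/cm^2 and the disk truncated interior to r0 (AU).
    The beta = -2 case is handled by its logarithmic limit.
    """
    r_cm, r0_cm, a0_cm = r * AU, r0 * AU, a0 * AU
    if np.isclose(beta, -2.0):
        return 2.0 * np.pi * sigma0 * a0_cm ** 2 * np.log(r_cm / r0_cm)
    return (2.0 * np.pi * sigma0 / ((2.0 + beta) * a0_cm ** beta)
            * (r_cm ** (2.0 + beta) - r0_cm ** (2.0 + beta)))

# Sanity check: the RC14 median fit, Sigma(1 AU) = 116 g/cm^2 and beta = -1.45,
# should enclose roughly 40 M_Earth of solids within 1 AU.
m_rc14 = enclosed_solid_mass(116.0, -1.45, r=1.0, a0=1.0) / M_EARTH
```

The two branches agree smoothly: evaluating the general expression at $\beta$ slightly away from $-2$ reproduces the logarithmic limit to well under a percent.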
We compute $M_r$ for each fitted system in a physical catalog, for several values of $r$, and plot (one minus) the cumulative distributions in Figure \ref{fig:total_mass_CDFs}. The y-axis should be interpreted as the fraction of planet-forming MMEN disks with at least $M_r$ of solid mass enclosed within a given radius $r$.
For example, we find that most such disks have at least an Earth mass of solids within even 0.1 AU (the dotted line). Over 40\% (10\%) of disks have over $10 M_\oplus$ ($100 M_\oplus$) of solids within the same distance. Around a third of disks have over $40 M_\oplus$ within 0.5 AU (the dashed line), and this fraction rises to a half within 1 AU (the solid line). The latter is very similar to the median disk reported by RC14, which also contains $\sim 40 M_\oplus$ within 1 AU (evaluating equation \ref{eq_int_mmen} using their median fit values of $\Sigma(1\, {\rm AU}) = 116$ g/cm$^2$ and $\beta = -1.45$).
The observation that the four curves ($r = 0.1$ to 1 AU) closely approach each other past $M_r \gtrsim 200 M_\oplus$ suggests that for the most massive disks, most of the material would be concentrated in the very inner regions if all the planets formed \textit{in situ} (i.e., these disks have high values of $\Sigma_0^*$ and steep negative values of $\beta$).
\section{Summary and Discussion} \label{sec:discussion}
The MMEN remains an insightful framework for understanding the primordial conditions of protoplanetary disks required to form the planets we see today in their present locations.
Yet despite numerous studies on inferring the MMEN, two key limitations persist: (1) nearly all previous works rely on rudimentary treatments of the transit detection biases,
and (2) most studies (with the exception of RC14) also attempt to fit all known exoplanets with a single power-law model, which does not capture the diversity of planetary systems and the disks from which they formed.
The statistical models for multi-planet system architectures developed via forward modeling of the \Kepler{} mission (e.g., \citealt{2019MNRAS.490.4575H, 2020AJ....160..276H}) provide a detailed and unprecedented way of inferring the MMEN while overcoming both of the above issues.
In this article, we produce various constructions of MMEN using both simulated physical and observed catalogs of exoplanetary systems in addition to the true \Kepler{} catalog, and using a variety of prescriptions for the feeding zone width of each planet (\S\ref{sec:surface_densities}).
First, we follow the prevalent approach of fitting a single power-law to each catalog to demonstrate the effect of observational biases on the inferred ``mean'' MMEN and the differences between the various prescriptions (\S\ref{sec:fit_all}).
We then fit a power-law to the minimum-mass surface densities of the planets in each planetary system, producing a broad distribution of MMEN (\S\ref{sec:fit_systems}).
By repeating this procedure for individual physical and observed planetary systems, we show that undetected planets in observed systems can significantly alter the inferred disk profile (\S\ref{sec:missing_planets}). Interestingly, while $\beta$ can be strongly affected for any given system, the overall distribution is largely unaffected by missing planets. However, the distribution of $\Sigma_0^*$ is biased and broader for the observed systems compared to the physical systems. Altogether, these results demonstrate that although detection biases do affect the inferred distribution of solid disk profiles, they do not explain all of the variance in the observed profiles. There is no universal MMEN if all planets formed in their present locations.
While our approach is similar to that of RC14, one key difference is that we scale up each power-law fit (i.e., $\Sigma_0^*$ $= \alpha\Sigma_0$ where $\alpha \geq 1$) such that the surface density is no less than that local to any individual planet in the system. This is necessary to ensure that each resulting disk has enough solid mass to form each and every planet while being self-consistent with the assumption that each planet accreted solid material from only within its local feeding zone. Our finding that the median $\alpha \simeq 2$ (for systems with 3+ planets) suggests that this consideration alone causes previous studies to typically underestimate the minimum surface densities (and thus minimum disk masses) of planet-forming disks by a factor of two.
RC14 used this wide diversity of MMEN to argue against the \textit{in situ} formation scenario.
Viscous disk models generally predict surface density slopes of $\beta \simeq 0$ to $-2$, depending on the temperature profile (\citealt{1973A&A....24..337S, 1997ApJ...490..368C, 1998A&A...337..625H}; see RC14 for a concise review). Observations of cold dust in disk structures also consistently yield $\beta \simeq -0.4$ to $-1.1$ \citep{2009ApJ...700.1502A, 2010ApJ...723.1241A}. Thus, the wide swath of slopes inferred from applying the MMEN framework to exoplanetary systems, as we have shown, cannot be fully explained by either observations or theory.
Systems with slopes moderately steeper than $\beta \simeq -2$ may be indicative of having witnessed a significant radial drift of solid materials prior to accretion or the migration of planetary embryos or fully-formed planets toward the inner regions of the disks. More sharply falling profiles in the inner regions could be caused by a truncation of the inner planet-forming disk due to the presence of an unknown, distant giant planet. Other extreme slope profiles, including strongly positive values, that exist in the physical systems can potentially be explained by more violent dynamical histories, perhaps involving planetary collisions and/or ejections. Based on the number of simulated physical systems with $\beta < -4$ or $\beta > 0$, we estimate that at least $\sim 23\%$ of planetary systems experienced a history of migration and/or planet-planet interactions that prevent the final planet masses and semi-major axes from conveying information about the initial disk profile. Even among systems with a more typical disk profile, the great diversity of disk masses necessary to enable \textit{in situ} planet formation provides strong evidence that a substantial fraction of these systems experienced a radial drift of solids or substantial orbital migration. In any case, multi-stage formation processes involving the migration, mergers, and scatterings of planetary cores have also been proposed to explain the typical architectures of compact planetary systems (e.g., \citealt{2007ApJ...654.1110T, 2014IAUS..299..360C, 2022arXiv220205342Z}). It appears that a combination of mechanisms beyond the simple \textit{in situ} accretion scenario is necessary to explain all system outcomes.
Additionally, the broad range of inferred solid disk profiles implies that there must have been some very massive or extreme disks, which would likely be unstable. Previous studies have assessed the stability of the gaseous disk by assuming a gas-to-dust ratio (typically $\Sigma_{\rm gas}/\Sigma \sim 200$; e.g. \citealt{2014ApJ...795L..15S}). While outside the scope of this paper, future studies may perform a detailed calculation of the stability of the disks inferred from our \citetalias{2020AJ....160..276H} model using a range of gas-to-dust ratios.
As an alternative to invoking the large scale migration of planets post-formation, it is also possible to consider the radial redistribution of solids (in the form of dust or small pebbles) from which \textit{in situ} planet formation then occurs. Rather than starting from a smooth disk profile, the inward drift of sub-meter sized ``pebbles'' may collect at pressure maxima creating a series of gravitationally unstable rings, from which planets form ``inside-out'' \citep{2014ApJ...780...53C, 2016IAUFM..29A...6T}. The pile-up of solids into narrow annuli from fractions to a few AU which serve as the sites of planet formation can produce steeper radial profiles than the initial disk profile \citep{2016A&A...594A.105D}, and has recently been applied to the inner solar system via the silicate sublimation line at $\sim 1$ AU \citep{2022NatAs...6...72M}.
In theory, one may also evaluate the feasibility of the MMEN model by comparing the distribution of minimum disk masses from the exoplanet population to the distribution of observed protoplanetary disks. Yet, this is challenging for a number of reasons, including the biases inherent to either population and the difficulty of measuring disk masses (see \citealt{2022arXiv220309759D} and \citealt{2022arXiv220309818M} for a review). Observations using the Atacama Large Millimeter/sub-millimeter Array (ALMA) have revealed that Class 0 (young, $<0.5$ Myr) disks typically have $\sim 100 M_\oplus$ of solids while the older, Class I/II disks have less than $\sim 50 M_\oplus$ \citep{2020A&A...640A..19T, 2022arXiv220408731A}. However, these values are sensitive to assumptions for the dust opacity, and extend out to $\sim 100$ AU. On the other hand, ignoring the effects of photons scattering off dust grains can lead to underestimated disk masses \citep{2019ApJ...877L..18Z}. There may also be differences in the stellar samples; ALMA mostly observed disks around K and M dwarfs (e.g., \citealt{2016ApJ...831..125P, 2016ApJ...828...46A, 2018ApJ...869L..41A}), while most exoplanets found by \Kepler{} are around FGK dwarfs. Despite these differences, it has also been suggested that the solid masses of planets and of dust may be similar per star \citep{2021ApJ...920...66M}. It remains that we have a limited understanding of the total solid masses available in the innermost regions of protoplanetary disks, and thus the efficiency of planet formation.
Recent studies have also suggested that the MMEN is dependent on the stellar mass, and to a smaller extent, stellar metallicity \citep{2020AJ....159..247D}. We did not find any correlation between the disk profiles (minimum mass or slope) and stellar mass.
However, it is possible that any underlying correlation is lost due to the scrambling of fitted disk profiles caused by missing planets as we have shown in \S\ref{sec:missing_planets}.
In any case, the inter-system variation in $\Sigma_0^*$ (which can change by a few orders of magnitude) is significantly greater than the range of stellar masses (which at most change by a factor of $\sim 2$ across the FGK range).
A more detailed analysis of how the MMEN varies with host star properties is outside the scope of this paper and should be explored in future work.
The distribution of primordial disk profiles serves as a fundamental input for the initial conditions of planet formation simulations (e.g., \citealt{2013ApJ...775...53H, 2016ApJ...832...34M, 2016ApJ...822...54D, 2020ApJ...891...20M}). It has been shown through simulations of the \textit{in situ} assembly of planetesimals by giant impacts that a diversity of solid disk normalizations can lead to an ensemble of planetary systems resembling the \Kepler{} exoplanetary systems \citep{2020ApJ...891...20M}. However, these studies have typically fixed the surface density slope (e.g., $\beta = -1.5$ or $-2.5$; \citealt{2013ApJ...775...53H, 2016ApJ...832...34M, 2020ApJ...891...20M}) due to the large number of tunable parameters. While it is unlikely that all exoplanetary systems formed exclusively in their present locations, the \textit{in situ} model may still explain a wide variety of planetary system outcomes given an adequately flexible array of initial conditions. The MMEN framework can continue to provide constraints on these formation conditions.
The code for fitting MMEN to individual systems, as well as for reproducing the figures and results of this manuscript, is available via the ``SysSimPyMMEN'' package \citep{matthias_yang_he_2022_7117309}. The simulated catalogs used in this study and code for reproducing the \citetalias{2020AJ....160..276H} model are downloadable from the ``SysSimExClusters'' package \citep{matthias_yang_he_2022_5963884}.
\section*{Acknowledgements}
We thank Darin Ragozzine, Danley Hsu, Robert Morehead, and Keir Ashby for contributions to the broader \SysSim{} project.
We are grateful to Sarah Millholland, Lauren Weiss, and Chao-Chin Yang for helpful discussions.
We also thank the anonymous referee for their constructive review and comments.
M.Y.H. acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number PGSD3 - 516712 - 2018.
M.Y.H. and E.B.F. acknowledge support from the Penn State Eberly College of Science and Department of Astronomy \& Astrophysics, the Center for Exoplanets and Habitable Worlds, and the Center for Astrostatistics.
This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
\software{NumPy \citep{2020Natur.585..357H},
Matplotlib \citep{2007CSE.....9...90H},
ExoplanetsSysSim \citep{eric_ford_2022_5915004},
SysSimData \citep{eric_ford_2019_3255313},
SysSimExClusters \citep{matthias_yang_he_2022_5963884},
SysSimPyPlots \citep{matthias_yang_he_2022_7098044},
SysSimPyMMEN \citep{matthias_yang_he_2022_7117309}
}
\bibliographystyle{aasjournal}
\bibliography{main}
|
Title:
XPOL-III: a New-Generation VLSI CMOS ASIC for High-Throughput X-ray Polarimetry |
Abstract: While the successful launch and operation in space of the Gas Pixel Detectors
onboard the PolarLight cubesat and the Imaging X-ray Polarimetry Explorer
demonstrate the viability and the technical soundness of this class of
detectors for astronomical X-ray polarimetry, it is clear that the current
state of the art is not ready to meet the challenges of the next generation of
experiments, such as the enhanced X-ray Timing and Polarimetry mission,
designed to allow for a significantly larger data throughput.
In this paper we describe the design and test of a new custom,
self-triggering readout ASIC, dubbed XPOL-III, specifically conceived to
address and overcome these limitations. While building upon the overall
architecture of the previous generations, the new chip improves over its
predecessors in several, different key areas: the sensitivity of the trigger
electronics, the flexibility in the definition of the readout window, as well
as the maximum speed for the serial event readout. These design improvements,
when combined, allow for almost an order of magnitude smaller dead time per
event with no measurable degradation of the polarimetric, spectral, imaging or
timing capability of the detector, providing a good match for the next
generation of X-ray missions.
| https://export.arxiv.org/pdf/2208.14103 |
\begin{frontmatter}
\title{\xpoliii: a New-Generation VLSI CMOS ASIC for High-Throughput X-ray Polarimetry}
\author[1]{Minuti M.}
\author[2,1]{Baldini L.}
\author[1]{Bellazzini R.}
\author[1]{Brez A.}
\author[1]{Ceccanti M.}
\author[3]{Krummenacher F.}
\author[4]{Latronico L.}
\author[1]{Lucchesi L.}
\author[1]{Manfreda A.}
\author[1]{Orsini L.}
\author[1]{Pinchera M.}
\author[1]{Profeti A.}
\author[1]{Sgrò C.}
\author[1]{Spandre G.}
\address[1]{Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, I-56127 Pisa, Italy}
\address[2]{Università di Pisa, Dipartimento di Fisica Enrico Fermi, Largo B. Pontecorvo 3, I-56127 Pisa, Italy}
\address[3]{Advanced Silicon SA, Lausanne, Switzerland}
\address[4]{Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Via P. Giuria, 1, I-10125 Torino, Italy}
\journal{NIM A}
\date{Compiled on \today}
\begin{keyword}
X-ray polarimetry
\PACS 95.55.Ka \sep 95.55.Qf
\end{keyword}
\end{frontmatter}
\section{Introduction}
\label{sec:introduction}
The recent launches of the PolarLight cubesat~\cite{PolarLight} and the Imaging X-ray
Polarimetry Explorer (IXPE)~\cite{IXPEJATIS} signal the re-opening of an observational
window---that of astronomical X-ray polarimetry---after almost 40 years. At the heart of
both missions are innovative polarization-sensitive Gas Pixel Detectors (GPD)~\cite{BALDINI2021102628},
exploiting the photoelectric effect to derive the linear polarization of the
incoming radiation based on the reconstruction of the azimuthal direction of emission of
the photo-electrons. The design, qualification and successful operation in space of the
PolarLight and IXPE Gas Pixel Detectors mark the culmination of a long R\&D
program~\cite{Costa2001662, BELLAZZINI2004477, BELLAZZINI2006552}, following the first
pioneering attempts at a practical implementation of photoelectric X-ray polarimetry
(see, e.g., \cite{Ramsey}).
Future X-ray missions will pose even tighter requirements on focal-plane detectors.
With a much larger effective area (about a factor of 5 compared to IXPE), the mirror
modules of the Polarimetry Focusing Array (PFA) onboard the enhanced X-ray Timing and
Polarimetry (eXTP) mission~\cite{eXTP} will produce a data throughput too large for
the current generation of Gas Pixel Detectors, even for moderately bright sources.
It is then clear that
a drastic reduction of the dead-time per event
is one of the necessary preconditions for a significant leap in sensitivity for the next
generation of polarimetric missions.
In this paper we describe \xpoliii, a new-generation custom CMOS ASIC specifically
designed for high-throughput X-ray polarimetry applications, taking full advantage of the
lessons learned through the development of the IXPE mission. The new design is specifically
aimed at a substantial reduction of the average dead time per event, while preserving all
the other relevant high-level performance metrics of the readout chip. As detailed in
section~\ref{sec:performance}, the first comprehensive test campaign that we performed
on the new ASIC confirms that all design goals were met, and \xpoliii{} is a prime candidate
to provide polarimetric capabilities to future X-ray missions.
\section{\xpoliii\ Architecture}
\label{sec:architecture}
The main features of the previous \xpol\ generations, most of which are also relevant
in the context of this work, are thoroughly described
in~\cite{BELLAZZINI2004477, BELLAZZINI2006552} and~\cite{BALDINI2021102628}.
In this section we only provide a succinct summary of the internal functioning of the chip,
with emphasis on the design changes specific to \xpoliii.
\begin{table}[htb!]
\centering
\begin{tabular}{p{0.5\linewidth}p{0.45\linewidth}}
\hline
Parameter & Value\\
\hline
\hline
Number of pixels & $107\,008~(304 \times 352)$\\
Physical pitch & 50~$\mu$m\\
Shaping time & 1~$\mu$s\\
Pixel gain & $200$~mV~fC$^{-1}$\\
Pixel Noise & $30$~$e^{-}$~ENC\\
Full scale linear range (FSLR) & $30\rm{k}~e^{-}$\\
Minimum trigger threshold & $\sim 150$~$e^{-}$ (0.5\% of FSLR)\\
\hline
\end{tabular}
\caption{Summary table of the basic geometrical and electrical characteristics
of the \xpoliii{} readout ASIC.}
\label{tab:asic_characteristics}
\end{table}
Manufactured with a standard 180~nm CMOS process, the \xpoliii\ readout ASIC (shown in
Figure~\ref{fig:xpol3}) is organized as a rectangular matrix of $107\,008$~hexagonal pixels
arranged in a triangular pattern of~$304$ columns and $352$ rows at $50$~$\mu$m pitch, for a total
active area of $15.2 \times 15.2$~mm$^2$. Each pixel is composed of a metal electrode (from the
top layer of the process), acting as a charge-collecting anode, connected to a charge-sensitive
amplifier, followed by a shaping circuit and a sample-and-hold system, as illustrated in the
simplified schematic in Figure~\ref{fig:pixel_schematic}.
Similarly to the previous ASIC generations, \xpoliii{} provides an advanced self-triggering
capability, with automatic localization of the \emph{region of interest} (ROI) containing the
photo-electron track. Upon trigger, the outputs of the pixels within the ROI are sequentially
connected to an on-chip global differential amplifier driving the output pads, and the digitization of the corresponding signal is performed with a dedicated, external ADC.
At a very fundamental level, this is the basic mechanism that keeps the overall
readout time manageable: for a typical event, we only read out a few hundred pixels out
of the $> 100$~k in the full matrix.
Building on this concept, in the initial stages of the \xpoliii\ design we identified three distinct possible
lines of action to speed up the readout:
\begin{itemize}[leftmargin=10pt]
\item reduce the average size of the region of interest;
\item increase the maximum frequency of the serial readout clock;
\item streamline the readout sequence to avoid unnecessary delays.
\end{itemize}
As we shall discuss in the remainder of this section, the combination of these three
seemingly simple updates provides a drastic dead-time reduction.
\subsection{Event Triggering}
\label{sec:trigger}
Each group of four pixels in the ASIC is logically OR-ed together to feed a local trigger
with a dedicated shaping amplifier. This basic building block of $2 \times 2$ pixels is
called a trigger \emph{mini-cluster}, and is central to the entire trigger logic.
Upon trigger, the event is automatically localized by the ASIC core logic in the smallest
rectangle containing all triggered mini-clusters, called the \emph{region of trigger} (ROT).
More specifically, the chip calculates and makes available in a specific register the coordinates
$\left<X_\text{min},~Y_\text{min}\right>$ and $\left<X_\text{max},~Y_\text{max}\right>$ of the
upper-left and lower-right corners of the ROT. On an event by event basis, this information
can be manipulated in an arbitrary fashion---typically by adding a pre-defined padding on the
four borders---to define the \emph{region of interest} (ROI) for the event capture and readout.
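The ROT-to-ROI computation described above can be sketched as follows. This is an illustrative Python model only: the function names, the clipping behavior at the matrix edges, and the default padding of 3 pixels are our assumptions, not the actual on-chip logic.

```python
# Illustrative sketch (not the actual ASIC logic): deriving the ROI from the
# ROT corners by adding per-side padding, clipped to the 304x352 pixel matrix.
NUM_COLS, NUM_ROWS = 304, 352

def roi_from_rot(x_min, y_min, x_max, y_max,
                 pad_left=3, pad_right=3, pad_top=3, pad_bottom=3):
    """Return ROI corner coordinates given the ROT and a per-side padding."""
    return (max(x_min - pad_left, 0),
            max(y_min - pad_top, 0),
            min(x_max + pad_right, NUM_COLS - 1),
            min(y_max + pad_bottom, NUM_ROWS - 1))

def roi_size(x0, y0, x1, y1):
    """Number of pixels to be serially read out for a given ROI."""
    return (x1 - x0 + 1) * (y1 - y0 + 1)

# Example: a compact low-energy track whose ROT spans a 10x12 pixel box.
roi = roi_from_rot(100, 200, 109, 211)
print(roi, roi_size(*roi))  # (97, 197, 112, 214) -> 288 pixels
```

The hybrid mode described in the text corresponds to the per-side padding arguments being pre-loaded once, rather than recomputed externally for each event.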
While the top-level trigger logic in the previous generations of the ASIC was similar to that
of \xpoliii, the algorithm implemented in \xpoli\ to define the ROI was comparatively crude:
a constant padding of 8~columns on the left and right and 10~rows on the top and bottom of the
ROT was automatically added by the ASIC, as shown in Figure~\ref{fig:xpoli_tracks}, with no
room for additional tuning. This approach served well the primary purpose of making the
ROI big enough to fully contain the tracks at all the energies of interest (and was ultimately adequate for the typical IXPE throughput), but was clearly sub-optimal at low energy, where
tracks tend to be more compact and with a comparatively higher ionization density.
In contrast, \xpoliii\ offers maximal flexibility---to the point that, once the ROT has been
defined by the ASIC, the ROI can be calculated by means of any arbitrary external logic and
loaded into the proper register on an event-by-event basis. Additionally, a \emph{hybrid}
operational mode is available where a pre-loaded padding, independently adjustable on the
four sides of the ROT, is automatically applied by the chip without the need for additional
data transactions. (In practice, since the information available at the time that the ROI
decision has to be made is limited, this hybrid mode in fact outperforms overly
sophisticated padding schemes, and it will be the baseline configuration used for all the tests
described in the remainder of the paper.) Even more importantly, this additional flexibility in
the definition of the ROI is accompanied by another fundamental design change: the AC nature
of the \xpoliii{} trigger coupling. Compared to the \xpoli{} architecture, where the dispersion
of the DC offsets across different amplifiers played a prominent role, this change
dramatically lowers the minimum trigger threshold practically achievable, increasing,
in turn, the fraction of mini-clusters in the track participating in the trigger.
It cannot be over-emphasized that these two seemingly simple changes in the trigger circuitry
constitute a change of paradigm and a radical departure from how previous iterations of the
ASIC operated. To put it simply, in \xpoli\ it was mostly the Bragg
peak that participated in the trigger, and we had to resort to a comparatively large padding
of the ROT in order to capture the initial part of the high-energy tracks.
In \xpoliii\ the entire track participates in the trigger at all energies, and a minimal
additional padding is sufficient to ensure full containment of the track.
As shown in Figure~\ref{fig:roi_vs_energy}, this allows for ROIs that are smaller on average,
and scale more favorably with the track length, offering additional leverage for the operation
at the focus of X-ray optics, where the vast majority of the photons are detected at the
lower end of the energy spectrum. Assuming a constant padding of 2 (4) pixels on all four sides
of the ROT, the \xpoliii\ design results in an overall reduction of the ROI size by a factor
of 3.5 (2), across the band.
\subsection{Event Readout and Pedestal Subtraction}
\label{sec:readout}
Upon trigger, in nominal data-taking configuration, the chip autonomously initiates the
peak-detection process of the internal sample-and-hold system, at the end of which the analog
output of each pixel within the ROI is sequentially routed to the differential output buffer
connected to the external ADC, and the serial readout proceeds driven by an adjustable readout
clock provided by the back-end electronics. The region of interest is actually read out twice,
and the FGPA on the DAQ board then performs the pixel-by-pixel pedestal subtraction and proceeds
to the zero suppression and the event compression and transmission~\cite{BarbaneraTNS}. As explained in~\cite{BALDINI2021102628}, this online pedestal subtraction is useful to minimize
subtle systematic effects that were found to be important for our application.
While a detailed technical description of the readout sequence is beyond the scope of this
paper, the readout time per event can be parametrized, neglecting sub-dominant contributions,
as a linear function of the number of pixels $n_\mathrm{pix}$ in the region of interest
\begin{align}\label{eq:readout_time}
T_\mathrm{read} = q + m n_\mathrm{pix},
\end{align}
where the constant term $q$ incorporates, e.g., the peak-detection interval, the necessary
data transactions to execute the readout and reset the system, as well as all the fixed
delays that are needed to synchronize the sequence, while the slope $m$ captures all the
contributions whose duration is proportional to the size of the ROI, such as the event
serial readout sequence and the pedestal subtraction in FPGA.
(We note that both $q$ and $m$ depend on the particular settings of the readout ASIC and the
back-end electronics with, e.g., $m$ being dominated by the serial readout clock.)
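The linear parametrization above lends itself to a simple numeric sketch. The values of $q$ and $m$ below are purely illustrative placeholders (not measured chip settings), and the fit mimics what one would do with the data of a flat-field run.

```python
import numpy as np

# Minimal sketch of the readout-time model: T_read = q + m * n_pix.
# q and m below are illustrative placeholders, not measured chip parameters.
def readout_time(n_pix, q=40e-6, m=0.5e-6):
    """Dead time per event (seconds) for a ROI of n_pix pixels."""
    return q + m * n_pix

# Recovering q and m from (n_pix, T_read) pairs with a linear least-squares fit:
n_pix = np.array([100, 200, 300, 400, 500])
t_meas = readout_time(n_pix)
m_fit, q_fit = np.polyfit(n_pix, t_meas, 1)  # slope, intercept
print(q_fit, m_fit)
```

With real data the scatter of the measured points around the fitted line also gauges the size of the neglected sub-dominant contributions.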
Compared to \xpoli{}, much of the differential output has been redesigned in \xpoliii{} in
order to streamline the readout. The design of the output buffer has been tuned to minimize
the settling time of the analog signal and increase the maximum serial readout clock practically
usable, which for \xpoli{} was limited to $\sim 5$~MHz, and the analog reset circuitry has been
reworked to avoid the need for extra delays, necessary in \xpoli{} to avoid self-induced false
triggers.
One additional notable change is that the ROI coordinates, once determined, remain stored
in their register until the latter is explicitly reset, which avoids another class of
fixed delays necessary in \xpoli{} to perform the online pedestal subtraction.
The net result of all these optimizations is a significant reduction of both terms
in Equation~\eqref{eq:readout_time}, as we shall see in section~\ref{sec:tests}.
\section{Measurement setup}
\label{sec:assembly}
The GPD assembly is the natural \emph{packaging} for the readout ASIC, and an essential
ingredient for testing its functionality beyond the basic electrical aliveness.
We have assembled, baked out, filled, and sealed entirely in house a few detectors equipped with
an \xpoliii{} readout ASIC.
Figure~\ref{fig:gpd_exploded} shows an exploded view of the detector, whose design implements some important simplifications with respect to the focal-plane detectors onboard IXPE~\cite{BALDINI2021102628}.
We kept the top-level geometry and gas mixture, but we opted for a chip-on-board scheme
(i.e. with the ASIC directly glued and wire-bonded over its printed circuit board,
avoiding the use of a ceramic package) to simplify the assembly.
We used $50~\mu$m pitch, $50~\mu$m thick GEM foils manufactured with a conventional chemical etching process.
Finally, the entrance window is a simple $50~\mu$m Be foil with no aluminization.
The main characteristics of the detectors are summarized in Table~\ref{tab:gpd_characteristics}.
\begin{table}[htb!]
\centering
\begin{tabular}{p{0.5\linewidth}p{0.4\linewidth}}
\hline
Parameter & Value\\
\hline
\hline
Entrance window & Pure Be, 50~$\mu$m\\
Drift gap thickness & 10~mm\\
Transfer gap thickness & 0.8~mm\\
Readout configuration & Chip on board\\
GEM pitch & 50~$\mu$m\\
GEM hole diameter & 30~$\mu$m\\
Gas filling & Pure DME @ 730~mbar\\
\hline
\end{tabular}
\caption{Summary table of the basic properties of the GPD under test.}
\label{tab:gpd_characteristics}
\end{table}
Each one of these modifications is a step toward our ultimate goal of a new-generation
GPD, featuring a simpler and more robust mechanical assembly, and a higher
polarization sensitivity---both in terms of modulation factor and uniformity
of response to unpolarized radiation. A detailed description of the GPD development,
including the description of a dedicated filling facility
and the long-term stability of the performance, is beyond the scope of this work and
will be presented in a forthcoming paper.
\section{Electrical tests and working point definition}
\label{sec:tests}
This section describes the initial electrical and functional tests that we performed
to verify the basic functionality of the ASIC and gauge the optimal working point, in terms
of event padding and serial readout clock, to be used for measuring the relevant high-level
performance metrics that we shall present in section~\ref{sec:performance}.
For completeness, Figure~\ref{fig:xpoliii_track} shows a single event display of a real track
from a 5.9~keV radioactive $^{55}$Fe source, illustrating much of the improvements from
the new trigger circuitry discussed in section~\ref{sec:architecture}.
\subsection{Electronics Noise}
As already explained, and unlike the previous \xpol{} generations,
\xpoliii{} allows the coordinates of the readout ROI to be set externally, and we
extensively used this novel feature to measure the system noise in a number of readout chips.
More specifically, we use a series of partially overlapping, $21 \times 21$ regions of
interest covering the entire active area of the chip and, for each ROI, we perform a
number of readouts, with the same basic readout sequence used in nominal data-taking.
For each pixel, the root mean square of the pedestal subtracted PHA values is a measure
of the pre-amplifier noise.%
\footnote{To be more precise, since we are operating with an online, event-by-event
pedestal subtraction strategy as described in section~\ref{sec:architecture}, the raw rms
values are a factor of $\sqrt{2}$ larger than the intrinsic noise of the amplifier.
This scale factor is correctly accounted for in Figure~\ref{fig:noise_distribution} as
well as in the numbers quoted in the text.}
Figure~\ref{fig:noise_distribution} shows the noise distribution measured across the channels
of a typical \xpoliii{} chip. The average noise is about $30~e^{-}$~ENC. It should
be emphasized that this level of noise, translating into a track S/N ratio of $\sim 200$ for a
typical event, is largely irrelevant for our applications, and not a limiting factor in any
practical sense.
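The $\sqrt{2}$ factor from the event-by-event pedestal subtraction can be verified with a toy Monte Carlo. This is a sketch under the assumption of Gaussian, mutually independent noise in the two readouts; the seed and sample size are arbitrary.

```python
import numpy as np

# Toy check of the sqrt(2) factor: with event-by-event pedestal subtraction,
# each PHA value is the difference of two readouts, each carrying independent
# amplifier noise of rms sigma, so the raw rms is sqrt(2) * sigma.
rng = np.random.default_rng(0)
sigma = 30.0  # intrinsic amplifier noise in electrons ENC (illustrative)
signal_readout = rng.normal(0.0, sigma, size=1_000_000)
pedestal_readout = rng.normal(0.0, sigma, size=1_000_000)
raw_rms = np.std(signal_readout - pedestal_readout)
print(raw_rms / sigma)  # ~ 1.414
```

Dividing the measured raw rms by this factor recovers the intrinsic noise quoted in the text.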
We note that the initial chips manufactured at the foundry and tested for this work have a small fraction of a \textperthousand{} dead channels; while this does not represent an issue for operating a GPD, a fine tuning of the manufacturing will be investigated with the foundry.
\subsection{ROT Padding}
We systematically investigated the effect of the ROT padding settings on the basic track
properties by exploiting the hybrid \xpoliii{} operational mode described in
section~\ref{sec:architecture} and acquiring X-ray data with different, pre-loaded padding
values---from 2 to 5 pixels on all four sides of the ROT.
Since the main purpose of the padding is to guarantee the full containment of the electron
track, we resorted to using \emph{the fraction of events for which at least one of the pixels
in the track lies on a border row or column}, which might indicate a possible track leakage,
as the relevant metric for our study.
As shown in Figure~\ref{fig:margin_padding} the fraction of tracks with border pixels
decreases monotonically as the padding increases, going from several \% at
2~pixels to a few $10^{-5}$ at 5~pixels. We emphasize that a single pixel above threshold on
one of the borders of the ROI does not necessarily imply that the track has been actually
truncated, nor that the polarimetric information encoded by the reconstructed track direction
has been compromised appreciably.
We therefore conservatively take a padding
value of 2~pixels as the smallest practically usable and we deem padding values larger than
4 pixels as overly large, as the plot in Figure~\ref{fig:margin_padding} clearly indicates
that the curve starts flattening (cf. Figure~\ref{fig:roi_vs_energy}).
We note that more elaborate padding strategies (e.g., using different
values on different sides of the ROT) might conceivably provide an additional measurable
performance gain. The magnitude of the potential improvement can be gauged by noting that
reducing the padding by one unit on one side provides a relative reduction of the ROI size by
\begin{align}
\frac{\delta n}{n} \approx \frac{1}{\sqrt{n}},
\end{align}
which is $\sim 7\%$ at a typical average ROI size of 200 pixels.
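A quick numeric check of the estimate above for a roughly square ROI; the helper below is hypothetical illustration, not from the paper's analysis code.

```python
import math

# For a roughly square ROI of n pixels, shrinking the padding by one unit on
# one side removes one row (or column) of about sqrt(n) pixels, i.e. a
# relative reduction of ~ 1/sqrt(n).
def relative_reduction(n):
    side = round(math.sqrt(n))  # side length of the square ROI, in pixels
    return side / n             # fraction of pixels in one removed row

n = 200
print(relative_reduction(n), 1 / math.sqrt(n))  # both ~ 0.07
```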
Since investigating the effects of
such fine tuning is beyond the scope of this paper, we shall take a constant, comfortable padding of
$3$ pixels as our nominal working point from now on, unless stated otherwise.
\subsection{Readout Time}
As explained in section~\ref{sec:readout}, the average readout time per event can be
parametrized as a linear function of the number of pixels in the ROI.
Since, for any given X-ray energy, events come in a large variety of different
topologies, depending on the depth of the absorption point and the initial emission direction,
as well as the stochastic nature of the interaction processes-even photons from a monochromatic
source will generate in the detector ROIs of many different sizes and shapes. As a consequence,
illuminating the detector with a X-ray source provides a simple mean to study the dead time per
event as a function of the ROI size.
Figure~\ref{fig:dead_time} shows the measured readout time as a function of the ROI size
for two different values of the serial readout clock (6 and 10~MHz), and a constant padding
of 3~pixels on all four sides of the ROT, with the detector uniformly illuminated with 5.9~keV
X-rays from a $^{55}$Fe radioactive source. The data are fitted with the straight line
in Equation~\eqref{eq:readout_time}. At our reference energy of $\sim 2.5$~keV, the center of our fiducial
region in the padding vs readout clock phase-space yields a tentative working point of $\sim 150~\mu$s, at an average ROI size of $\sim 200$~pixels.
This is to be compared with $\sim 500$~pixels and $\sim 1$~ms for the \xpoli{} chip currently
operating on PolarLight and IXPE.
We note that, due to the current design of our back-end electronics, the measured readout
time includes a contribution of $\sim 35~\mu$s for the pedestal subtraction in
the FPGA, for a ROI of 200~pixels.
For a high-throughput
application, this component could be conceivably parallelized at the cost of a slight increase
in the complexity of the system, yielding an additional $> 20$\% increase in readout speed without
side effects on any of the high-level performance metrics.
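To put these numbers in perspective, a back-of-the-envelope comparison can be made assuming a simple non-paralyzable dead-time model; this is our assumption, as the text does not specify the dead-time model of the full acquisition chain.

```python
# Fraction of events lost at input rate r with a fixed dead time tau,
# assuming a non-paralyzable dead-time model: f = r*tau / (1 + r*tau).
def dead_time_fraction(rate_hz, tau_s):
    return rate_hz * tau_s / (1.0 + rate_hz * tau_s)

# Illustrative comparison at a bright-source rate of 1 kHz, using the
# average readout times quoted in the text:
for tau, label in [(1e-3, "XPOL-I (~1 ms)"), (150e-6, "XPOL-III (~150 us)")]:
    f = dead_time_fraction(1000.0, tau)
    print(f"{label}: {100 * f:.0f}% dead time at 1 kHz")
```

At 1~kHz the older chip would lose roughly half of the events, while the new one loses only of order a tenth, which is the practical meaning of the factor-of-7 dead-time reduction.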
\section{High-Level Performance Figures}
\label{sec:performance}
In this section we briefly discuss the basic spectral and polarimetric response of the
\xpoliii{} chip, embedded in the new GPD design. The reader should keep in mind that
the readout ASIC is only a component of the mix, and the results are dependent on a number
of other factors, including the detector assembly, the characteristic of the filling gas,
the amplification stage, as well as the track-reconstruction software.
For the sake of internal consistency, all the tests described in the following are done at the
working point identified in section~\ref{sec:tests}, that is, with a constant padding of
3~pixels on the four sides of the ROT, and a serial readout clock of 7.5~MHz.
\subsection{Energy Resolution}
Figure~\ref{fig:fe55peak} shows the pulse-height distribution in pure DME at the nominal
data taking settings for a flat field with 5.9~keV photons from a $^{55}$Fe radioactive
source. After correcting for spatial non-uniformities of the GEM, we achieve a FWHM
around 17\%, in line with the corresponding figure for the IXPE flight
detectors~\cite{BALDINI2021102628}.
We emphasize how even a superficial comparison with Figure~16 in~\cite{BALDINI2021102628}
shows that the low-energy tail in the spectrum due to photon conversions in the passive materials
is reduced by a factor $\sim 2$, due to the elimination of the aluminum deposition on the inner
face of the entrance window. While the suitability of a pure Be window for a long-term use in
space will be investigated and presented in a separate paper, we note that this factor is
beneficial for both the spectral deconvolution and the polarization measurement.
\subsection{Azimuthal Response}
We tested the azimuthal response of the GPD assembly with both unpolarized photons from
a 5.9~keV $^{55}$Fe radiactive source and polarized X-rays generated via Bragg diffraction at
45$^{\circ}$ on a graphite crystal. Our polarized setup produces three different monochromatic
lines at 2.6, 5.2 and 7.8 keV, but the low- and high-energy ones are strongly suppressed due to
the absorption in air and the energy dependence of the GPD quantum efficiency, respectively.
Figure~\ref{fig:modulation_curves} shows two modulation curves, measured in our unpolarized
and polarized setups. For the latter, we selected events in the central 5.2~keV line over a beam
spot with a $\sim 1$~mm radius close to the center of the detector, and performed two separate
acquisitions rotated by 90$^{\circ}$ in order to compensate for any possible residual systematic
effect. (Although, as shown in the bottom panel, such effects are largely subdominant.)
The measured modulation factor at 5.2~keV is $46.7 \pm 0.5$ \%, without any additional
selection on the events other than that of the central line in the source spectrum.
The measured spurious modulation at 5.9~keV is smaller than 1\%, and, as explained in~\cite{BALDINI2021102628}, is
to be ascribed to the multiplication stage.
This is in good agreement with both the figures for the IXPE focal-plane detectors (given the differences
in the GPD assembly and the X-ray source) and with the prediction from our Monte Carlo
simulations, indicating that the new readout chip is fully preserving the intrinsic polarimetric
capabilities of the GPD.
\section{Conclusions}
We have developed and tested a new custom readout chip for high-throughput X-ray
astronomical polarimetry. Compared to the ASIC currently operating on PolarLight and IXPE,
\xpoliii{} is able to generate significantly smaller, and yet fully contained track images
and to operate at a faster readout clock, with a streamlined readout sequence.
When combined, these factors reduce the average deadtime per event to $\sim 150~\mu$s,
or a factor of 7 smaller than the current state of the art. Considering that at least $\sim
35~\mu$s of the measured deadtime are an overhead introduced by the existing back-end
electronics, a speed-up of a full order of magnitude is clearly within reach,
making \xpoliii{} an ideal match for the upcoming generation of X-ray observatories.
This leap forward in readout speed is achieved with no measurable degradation in the polarimetric, spectral, imaging or timing capability of the detector, as demonstrated by the tests presented in this work.
In addition, the reduction of the ROI size by a factor of at least 2.5 clearly helps to reduce the required bandwidth, both for on-board satellite operation and for data transmission to the ground.
Finally, the possibility of operating at a trigger threshold as low as $\sim 150~e^{-}$ paves the way to new types of applications that were previously
impossible, e.g., low-pressure configurations tailored to very low energies or, on the
other side of the energy band, operating in pure ionization mode with suitable gas mixtures.
We argue that this work constitutes a crucial step toward the deployment of a new generation
of GPD matching the needs of future X-ray observatories.
The development of suitable multiplication stages, free of systematic effects in the
azimuthal response, is the other fundamental ingredient of the mix, and we shall report
our progress in a separate paper.
\section{Acknowledgements}
This work was supported by the Italian Space Agency (ASI) through the agreements
2018-11-HH.O, ``ADAM -- Advanced Detectors for X-ray Astronomy Mission'', and 2017.13-H0, ``Italian participation to the NASA IXPE mission''.
\bibliographystyle{unsrt}
\bibliography{bibliography}
|
Title:
Warped Disk Galaxies. I. Linking U type Warps in Groups/Clusters to Jellyfish Galaxies |
Abstract: Warped disk galaxies are classified into two morphologies: S- and U-types.
Conventional theories routinely attribute both types to galactic tidal
interaction and/or gas accretion, but reproducing of U-types in simulations is
extremely challenging. Here we investigate whether both types are governed by
the same mechanisms using the most extensive sample of $\sim$8000 nearby
(0.02\,$<$\,z\,$<$\,0.06) massive ($M_{*}/M_{\odot}$\,$>$\,$10^9$) edge-on
disks from SDSS. We find that U-types show on average bluer optical colors and
higher specific star formation rate (sSFR) than S-types, with more strongly
warped U-types having higher sSFR. We also find that while the S-type warp
properties correlate with the tidal force by the nearest neighbor regardless of
the environment, there is no such correlation for U-types in groups/clusters,
suggesting that a non-tidal environmental mechanism could be at play for U-types, such as ram
pressure stripping (RPS). Indeed, U-types are more common in groups/clusters
than in fields and they have stellar mass, gas fraction, sSFR enhancement and
phase-space distribution closely analogous to RPS-induced jellyfish galaxies in
clusters. We furthermore show that the stellar disks of most RPS galaxies in
the IllustrisTNG simulation are warped in a U shape and bent in the opposite
direction to the stripped gas tails, satisfying theoretical expectations for
stellar warps embedded in jellyfish galaxies. We therefore suggest that, although the
majority of U-types, which live in the field, remain less well explained, RPS can be
an alternative origin for those in groups/clusters.
| https://export.arxiv.org/pdf/2208.05534 |
\title{Warped Disk Galaxies. I. Linking U-type Warps in Groups/Clusters to Jellyfish Galaxies}
\correspondingauthor{Suk-Jin Yoon}
\email{sjyoon0691@yonsei.ac.kr}
\author[0000-0003-0960-687X]{Woong-Bae G. Zee$^*$}
\affiliation{Department of Astronomy, Yonsei University, Seoul, 03722, Republic of Korea}
\affiliation{Center for Galaxy Evolution Research, Yonsei University, Seoul, 03722, Republic of Korea}
\author[0000-0002-1842-4325]{Suk-Jin Yoon$^*$}
\affiliation{Department of Astronomy, Yonsei University, Seoul, 03722, Republic of Korea}
\affiliation{Center for Galaxy Evolution Research, Yonsei University, Seoul, 03722, Republic of Korea}
\author[0000-0001-7075-4156]{Jun-Sung Moon}
\affiliation{Department of Astronomy, Yonsei University, Seoul, 03722, Republic of Korea}
\affiliation{Center for Galaxy Evolution Research, Yonsei University, Seoul, 03722, Republic of Korea}
\author[0000-0003-3791-0860]{Sung-Ho An}
\affiliation{Department of Astronomy, Yonsei University, Seoul, 03722, Republic of Korea}
\affiliation{Center for Galaxy Evolution Research, Yonsei University, Seoul, 03722, Republic of Korea}
\author[0000-0003-2922-6866]{Sanjaya Paudel}
\affiliation{Department of Astronomy, Yonsei University, Seoul, 03722, Republic of Korea}
\affiliation{Center for Galaxy Evolution Research, Yonsei University, Seoul, 03722, Republic of Korea}
\author{Kiyun Yun}
\affiliation{Max-Planck-Institut fГјr Astronomie, KГ¶nigstuhl 17, D-69117 Heidelberg, Germany}
\def\thefootnote{*}\footnotetext{These authors contributed equally to this work.}\def\thefootnote{\arabic{footnote}}
\keywords{Galaxy evolution (594), Galaxy interactions (600), Galaxy structure (622), Star formation (1569)}
\section{Introduction} \label{sec:intro}
Observations over the past few decades have shown that warped disk structures are common in the local universe.
More than half of nearby edge-on disk galaxies observed in optical and radio passbands exhibit warps at the outskirts of disks (e.g., \citealt{1990MNRAS.246..458S}; \citealt{1991wdir.conf..181B}; \citealt{1998A&A...337....9R}; \citealt{2002A&A...382..513R}; \citealt{2002A&A...391..519C}; \citealt{2002A&A...394..769G}; \citealt{2003A&A...399..457S}; \citealt{2006NewA...11..293A}; \citealt{2016MNRAS.461.4233R}; see also \citealt{2019NatAs...3..320C}; \citealt{2020ApJ...905...49C}; \citealt{2021ApJ...912..130C} for the Milky Way's warp).
Optical stellar warps are, in general, weaker than HI gaseous warps; however, the incidence of optical warps is as prevalent as HI warps (e.g., \citealt{1990ApJ...352...15B}; \citealt{2002A&A...391..519C}; \citealt{2002A&A...394..769G}; \citealt{2010A&A...519A..53G}; \citealt{2008MNRAS.389...63C}).
The morphology of warped disks is classified into two types, S- (integral-shaped) and U-type (bow-shaped) (\citealt{1998A&A...337....9R}; \citealt{2006NewA...11..293A}).
Galactic warps are often taken as results of galaxy--galaxy interactions.
For instance, simulations by \citet{2018MNRAS.481..286L} and \citet{2018Natur.561..360A} reproduced the grand-design S-shaped warp of the Milky Way using the orbiting Sagittarius dwarf and Magellanic Clouds.
\citet{2014ApJ...789...90K} and \citet{2017MNRAS.465.3446G} suggested that the fly-by encounter is another warp formation mechanism.
\citet{2020MNRAS.498.3535S} used IllustrisTNG simulation to investigate the origin of warped galaxies.
The authors focused only on S-type warps and showed that $\sim$30\% of S-types are constructed by tidal interactions with $\sim$15\% being minor mergers and $\sim$15\% fly-by encounters.
Observationally, some studies suggested that interacting galaxies are more frequently warped than non-interacting galaxies (\citealt{1998A&A...337....9R}; \citealt{2001A&A...373..402S}; \citealt{2006NewA...11..293A}).
There is, however, only a weak correlation (\citealt{1990A&A...233..333K}; \citealt{1998A&A...337....9R}; \citealt{2001A&A...373..402S}) or even anti-correlation (\citealt{2002A&A...394..769G}) between the frequency of warps and the local environment, at variance with the conventional tidal scenario.
Moreover, several isolated galaxies in the field exhibit warped disks (\citealt{1990ApJ...352...15B}; \citealt{1997A&A...321..754V}; \citealt{2007JKAS...40....9A}; \citealt{2008A&A...488..511L}).
Thus, other alternative warp formation mechanisms have been suggested, including cold gas flow (\citealt{1989MNRAS.237..785O}; \citealt{1999MNRAS.303L...7J}; \citealt{2010MNRAS.408..783R}; \citealt{2018MNRAS.474..254R}), interaction with the intergalactic accretion onto disks (\citealt{2002A&A...386..169L}; \citealt{2006MNRAS.365..555S}), ram-pressure by galaxies' movement with respect to the inter-galactic medium (\citealt{2014MNRAS.440L..21H}), misalignment between stellar disks and prolate/oblate dark matter halos (\citealt{1988MNRAS.234..873S}; \citealt{1991ApJ...376..467K}; \citealt{1995MNRAS.275..897N}; \citealt{2000MNRAS.311..733I}; \citealt{2009ApJ...696.1899J}; \citealt{2009ApJ...703.2068D}; \citealt{2021arXiv211013964S}), and interaction with the intergalactic magnetic field (\citealt{1990A&A...236....1B}; \citealt{1998A&A...332..809B}).
However, which mechanism dominates the warp formation remains debated.
When it comes to U-type warps, many observations have reported conspicuously warped U-types (\citealt{2002A&A...382..513R}; \citealt{2003A&A...399..457S}; \citealt{2006NewA...11..293A}), but their physical origin is still a puzzle.
Most conventional studies simply assumed that S- and U-types result from the same warp formation mechanism, such as tidal interaction.
However, simulations of tidal interactions preferentially create S-types with no or few U-types (e.g., \citealt{1995ApJ...455L..31W}; \citealt{2000ApJ...534..598V}; \citealt{2006ApJ...641L..33W}; \citealt{2008MNRAS.388..697M}; \citealt{2013MNRAS.429..159G}; \citealt{2014ApJ...789...90K}; \citealt{2017MNRAS.465.3446G}; \citealt{2018MNRAS.481..286L}; \citealt{2018Natur.561..360A}).
On the other hand, \citet{2008ASPC..390..359L} elaborated that intergalactic gas accretion can produce both types, in that S- and U-types are respectively made via angular and linear momentum transmission during gas accretion. When intergalactic gas is accreted from the nearly vertical direction to the galactic disk, galaxies are bent into the U-shaped morphology.
Alternatively, the simulation by \citet{2020MNRAS.498.1080K} showed that the warped disk’s morphology depends on infalling galaxies’ relative direction to the dark matter particles.
When a galaxy moves through the dark matter halo, tidal friction induced by the over-density region behind the drifting galaxy produces U-shaped disks.
Our main questions in this study are: ($a$) are S- and U-type warps crafted by the same mechanism? and ($b$) how can we explain the existence of U-type warps?
To address them, we construct the most extensive catalog of nearby edge-on warped disk galaxies from Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7).
We distinguish, for the first time, the intrinsic characteristics of S- and U-type warped galaxies and report the discrepancies between them.
We propose a new possibility that U-type warps in groups/clusters could be related to ram-pressure stripping (RPS).
The paper is organised as follows.
In Section~\ref{sec:maths}, we introduce our data and explain how we measure the warp structure using our newly developed, automated scheme.
In Section~\ref{sec:result}, we show the discrepancy in several intrinsic properties between S- and U-types, including the optical color, specific star formation rate (sSFR) and environmental effects.
In Section~\ref{sec:kine}, we compare the morphologies, kinematics within groups/clusters, sSFR, stellar masses and gas fraction of S- and U-types with those of RPS galaxies and discuss the possible jellyfish origin of U-type warped disks.
In Section~\ref{sec:conc} we summarise our results.
\section{Data and Methodology}
\label{sec:maths}
\subsection{Observational Data}
We select sample galaxies from the SDSS DR7 Legacy Survey (\citealt{2009ApJS..182..543A}). The SDSS Legacy Survey is a large optical imaging survey that provides a map covering more than a quarter of the celestial sphere.
The DR7 is the final data release of the SDSS Legacy Survey. We select $\sim$ 20,000 galaxy targets ($0.02 < z < 0.06$) from the Main Galaxy Sample and retrieve their images and spectra.
We then select highly inclined edge-on disk galaxies by utilizing the morphological classification data from the Galaxy Zoo 2 (GZ2) project.
The morphological classification was done by \citet{2013MNRAS.435.2835W} and we make use of the class ``spiral galaxy other," which consists of highly inclined edge-on spiral galaxies whose spiral arms cannot be distinguished.
When selecting edge-on galaxies we apply the following criteria: ($a$) they are flagged as spirals, which requires 80\% of the volunteer votes for the spiral category after the de-biasing procedure, and ($b$) they are classified as edge-on spirals, for which more than half of the volunteers voted for the edge-on category.
Many of the sample galaxies are too small and/or too faint to identify the presence of the warped structure.
We only use galaxies large and massive enough to analyze their warped disk structures.
We select $\sim$11,000 edge-on disk galaxies with $g$-band isophotal major axis $A_{\rm iso} > 22''$ and stellar mass $M_{*}/M_{\odot}$\,$>$\,$10^9$.
Through visual inspection, we further remove $\sim$3000 galaxies that are not suitable to measure their warped structures due to their intricate dust lanes, spiral arms, and overlapped stars/galaxies at the edge of the galaxies.
The stellar masses are taken from the \citet{2014ApJS..210....3M} catalog that provides photometric data of one million galaxies in the SDSS.
In the catalog, the stellar masses for bulges, disks, and total are based on the updated bulge + disk decomposition data from \citet{2011ApJS..196...11S}.
\citet{2011ApJS..196...11S} used the well-established bulge + disk fitting code, {\fontfamily{qcr}\selectfont GIM2D}.
The size measurements from {\fontfamily{qcr}\selectfont GIM2D} are converted into the stellar masses for the decomposed bulge and disk components separately using all $ugriz$ wavebands.
However, \citet{2014ApJS..210....3M} cautioned that in the case of highly inclined edge-on galaxies, the total stellar mass and the sum of decomposed bulge + disk stellar mass can differ.
We retain only galaxies that lie within five standard deviations of the correlation between the total stellar masses and the sum of the bulge + disk masses.
The optical colors and sSFR in this study are taken from the MPA/JHU catalog (\citealt{2004MNRAS.351.1151B}) that provides spectroscopic data of galaxies in the SDSS. The MPA/JHU catalog lists optical colors as well as sSFR estimated from emission lines such as H$\alpha$, H$\beta$, [OII], [NII], and [SII]. To examine the sSFR enhancement across a galaxy, we use the aperture-corrected sSFR data. Galaxies that show AGN emission lines are also removed. The AGN candidates are selected using the distribution on the BPT diagram (\citealt{1981PASP...93....5B}; \citealt{2003MNRAS.341...33K}). We exclude `AGN' and `composite' galaxies. We note that this procedure cannot classify $\sim$700 galaxies with feeble emission lines because they do not appear on the BPT diagram. They are essentially normal quiescent galaxies with no AGN activity and are included in our sample. Our sample contains $\sim$8000 non-AGN, edge-on galaxies.
\subsection{Measurements of the Warped Stellar Disks}
We retrieve the corrected frames of our sample galaxies from the SDSS Data Archive Server. The corrected frame is an imaging frame from the SDSS imaging pipeline that has been bias-subtracted, flat-fielded, and pixel-defect corrected. We crop the field images based on the galaxy positions on the frame. The resultant images used in our analysis are $500 \times 500$ pixel ($3.3' \times 3.3'$) FITS images centered on the sample galaxies. In this study, we use the $g$-band images. The signal-to-noise, sensitivity to foreground stars, and dust extinction depend on waveband, but we do not find any dependence of the measured warp properties on waveband among the $u$, $r$, $i$, and $z$ bands.
In order to extract the overall shape of disks, we blur the images of the target galaxies using the {\fontfamily{qcr}\selectfont SMOOTH} function. This procedure returns a copy of the array smoothed with a 5-pixel width. After the smoothing procedure, the scale of our galaxy images becomes five times smaller ($100 \times 100$ pixel). The center of our target galaxy is defined as the location of the brightest point in the smoothed image. The sky is not subtracted in the SDSS imaging pipeline, and we thus subtract the background sky values from the original images. The background sky is measured as the median pixel value outside of the region of interest.
To quantify the warp structure, we develop a new automated warp measurement scheme.
Figure~\ref{fig:1} illustrates the procedure of the warp angle measurement through our automated scheme for the case of S-type (upper row) and U-type warps (lower row).
Specifically, we first align the major axes of galaxies horizontally based on the position angle (PA) from SDSS DR7 database, which is defined as the angle with respect to the north celestial pole following the direction of the right ascension.
We then fit the vertical brightness distribution at each x-coordinate, from the left to the right side, with a Gaussian function of the vertical distance from the central major axis.
In this procedure, we only use pixels whose value is brighter than five times the standard deviation of the background sky value.
We join all the peak points of the vertical brightness distributions at each x-coordinate into a single curved line, which is defined as the `spine' of the target galaxy.
To derive the central major axis, we apply orthogonal linear regression to the central spine points within half of the disk size.
The estimated major axis is usually misaligned with the horizontal line due to an inaccurate PA from the SDSS pipeline data; even after the first rotation of our target images, the PA is not zero.
This misalignment impedes the measurement of the exact warping amplitudes. We thus determine the major axis and warped disk structure again, repeating the rearrangement procedure until the calculated PA meets the tolerance (PA $< 0.01^{\circ}$).
Finally, the warping amplitude, $\alpha$, is calculated as the degree of misalignment between the central major axis and tips at the outermost bent structures.
The warping amplitudes are measured on both sides of the galactic disk.
To avoid spurious warp detection, we only consider a disk whose vertical deviation at the outermost point is more significant ($>3\sigma$) than the fluctuation of wobbling along the central major axis.
Between the warping amplitudes of the two sides, we take the larger one as the major warping amplitude, $\alpha$.
We then divide the morphology of disks into S-type, U-type, and unwarped based on the degree and direction in which each endpoint of the disk is bent (\citealt{1998A&A...337....9R}; \citealt{2002A&A...382..513R}; \citealt{2003A&A...399..457S}; \citealt{2006NewA...11..293A}).
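Schematically, the spine extraction, axis fit, and warp classification described above can be sketched in Python as a toy version; it approximates the per-column Gaussian fit by a flux-weighted centroid (which tracks the Gaussian peak for a symmetric profile) and omits the iterative PA realignment. All names here are illustrative, not from the paper's pipeline.

```python
import numpy as np

def measure_warp(image, sky_sigma=1.0):
    """Toy warp measurement: spine from per-column flux-weighted centroids,
    central major axis from a straight-line fit to the inner half of the disk,
    warp angle from the misalignment of the axis and the outermost spine tips."""
    ny, nx = image.shape
    y = np.arange(ny)
    # keep only pixels brighter than 5 sigma of the background sky
    img = np.where(image > 5 * sky_sigma, image, 0.0)
    cols = img.sum(axis=0) > 0
    x = np.arange(nx)[cols]
    # flux-weighted vertical centroid per column: the disk 'spine'
    spine = (img[:, cols] * y[:, None]).sum(axis=0) / img[:, cols].sum(axis=0)

    # central major axis: linear fit to the spine within half of the disk size
    mid = (x.min() + x.max()) / 2.0
    central = np.abs(x - mid) < (x.max() - x.min()) / 4.0
    slope, intercept = np.polyfit(x[central], spine[central], 1)

    def tip_angle(i):
        # misalignment angle between the axis and the outermost spine point
        dy = spine[i] - (slope * x[i] + intercept)
        return np.degrees(np.arctan2(dy, abs(x[i] - mid)))

    left, right = tip_angle(0), tip_angle(-1)
    alpha = max(abs(left), abs(right))        # major warping amplitude (deg)
    kind = "S" if left * right < 0 else "U"   # opposite vs same bending direction
    return alpha, kind
```

For an S-type the two tips bend in opposite vertical directions (angles of opposite sign), while a U-type bends both tips the same way; the larger of the two tip angles is taken as the major warping amplitude $\alpha$.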
\subsection{Warped Disk Galaxy Sample}
In this work, we present a new statistical analysis of optical warps of disk galaxies with a much larger body of data than has been used in previous studies (\citealt{1998A&A...337....9R}; \citealt{2002A&A...382..513R}; \citealt{2003A&A...399..457S}; \citealt{2006NewA...11..293A}; \citealt{2016MNRAS.461.4233R}).
Using our automated warp measurement scheme, after all preprocessing, we identify 3662 warped disk galaxies out of $\sim$8000 highly inclined edge-on galaxies in the local universe ($0.02 < z < 0.06$).
Among them, we have 2206 S-types and 1456 U-types.
Table~\ref{tab:example_1} gives the basic properties of our S- and U-type warped galaxies, including the number of galaxies, number fraction, median warp angle, and median stellar mass.
The properties are in agreement with the aforementioned studies.
In particular, warps are very common, found in $\sim$50\% of edge-on disk galaxies, and S-types are about 1.5 times more frequent than U-types and exhibit slightly stronger warping amplitudes.
Figure~\ref{fig:2} shows some randomly selected example images of S- and U-type warps among our final catalog.
\subsection{Control Sample}
The upper panels of Figure~\ref{fig:3} show the distributions of redshift and stellar mass of unwarped and warped galaxies. Compared to the unwarped galaxies, the warped galaxies exhibit slightly higher redshift and lower stellar mass.
The higher redshift of warped galaxies is likely due to a selection bias in warp detection.
At closer distances, and thus larger apparent sizes, there is a higher chance that target galaxies overlap with other stars and/or galaxies.
The different stellar mass distributions of warped and unwarped galaxies can be explained by the mass dependence of external mechanisms.
Simulations of galaxy--galaxy interactions showed that the extent of the morphological changes depends on the mass ratio of interacting galaxies, in the sense that less massive galaxies are more susceptible to external forces (e.g., \citealt{2008MNRAS.384..386C}; \citealt{2017MNRAS.465.3446G}).
To avoid the effect of the different redshift and stellar mass distributions of warped and unwarped galaxies, we carefully construct a control sample of unwarped edge-on galaxies.
We randomly select an unwarped galaxy for each warped galaxy within the bin range of $\pm$ 0.005 in redshift and $\pm$ 0.1 dex in stellar mass.
In the lower panels of Figure~\ref{fig:3}, the control sample exhibits redshift and stellar mass distributions nearly identical to those of warped galaxies.
\section{Physical Properties of Warped Disk Galaxies}
\label{sec:result}
To address which mechanism governs the formation of the different warp morphologies, we begin by comparing some key properties of S- and U-type warped galaxies with those of the unwarped control sample, including optical colors, sSFR, and environment.
\subsection{The Optical Color and Star Formation Rate}
In Figure~\ref{fig:4}, the upper panels show the distribution of SDSS optical $g-r$ colors of warped and unwarped galaxies as functions of stellar mass.
Interestingly, we discover an unexpected discrepancy between S- and U-type warps in optical $g-r$ colors.
While S-type warped galaxies show $g-r$ colors similar to the unwarped control sample, U-types are bluer by $\sim$0.05 dex.
In the lower panels, we also find that U-type warps exhibit $\sim$0.25 dex higher sSFR than unwarped control galaxies.
The blue-ward color offset and the increase of sSFR are greater for less massive galaxies ($M_{*}/M_{\odot}<10^{10}$).
In Figure~\ref{fig:5}, we compare the residuals of sSFR as a function of the warping amplitude.
We define the residual of sSFR, $\Delta$Log(sSFR), as the difference between a warped galaxy's sSFR and the mean sSFR of its stellar mass- and redshift-matched control sample in bins of stellar mass.
The Pearson correlation coefficient (cc) is shown in each panel of Figure~\ref{fig:5}.
There is a distinct correlation between $\Delta$Log(sSFR) and warping amplitude only for U-type warps, with cc = 0.185.
Strongly warped ($\alpha > 10^{\circ}$) U-type galaxies have $\sim0.3$ dex higher sSFR than weakly warped ($\alpha < 5^{\circ}$) U-type ones.
The enhancement of sSFR for strongly warped U-type galaxies supports the view that these galaxies are associated with more efficient star formation activity than S-types and unwarped galaxies.
This discrepancy between S- and U-type warps implies that the two morphologies are probably governed by distinct mechanisms.
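As a toy version of the residual statistic used above (binning in stellar mass only; function and array names are illustrative):

```python
import numpy as np

def delta_log_ssfr(ssfr_warp, mass_warp, ssfr_ctrl, mass_ctrl, nbins=10):
    """Residual Delta Log(sSFR): each warped galaxy's Log(sSFR) minus the mean
    control Log(sSFR) in its stellar-mass bin."""
    edges = np.linspace(mass_ctrl.min(), mass_ctrl.max(), nbins + 1)
    wbin = np.clip(np.digitize(mass_warp, edges) - 1, 0, nbins - 1)
    cbin = np.clip(np.digitize(mass_ctrl, edges) - 1, 0, nbins - 1)
    ctrl_mean = np.array([ssfr_ctrl[cbin == b].mean() for b in range(nbins)])
    return ssfr_warp - ctrl_mean[wbin]
```

A correlation coefficient between warping amplitude and the residual, as quoted in the text, would then follow from `np.corrcoef(alpha, residual)[0, 1]`.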
\subsection{The Environment}
\label{sec:Env}
Conventional simulations of galaxy-galaxy interactions suggested that S-type warps are common products of tidal interactions (\citealt{2014ApJ...789...90K}; \citealt{2017MNRAS.465.3446G}; \citealt{2018MNRAS.481..286L}; \citealt{2018Natur.561..360A}).
However, whether formation of U-type warps is governed by the same mechanism is unclear.
To explore the environmental effects on galactic warps, we examine the incidence and amplitudes of S- and U-type warps as functions of environmental parameters from the local to galaxy groups/clusters scale.
\subsubsection{The Local Environmental Effect}
We investigate the effect of the local environment on warp formation by examining whether the frequency and amplitude of warps depend on environmental parameters.
We define two different local environmental parameters: ($a$) the local density of the surrounding area, $\Sigma_{\textrm{N}}$, and ($b$) the tidal influence of the nearest neighboring galaxy, $F_{\textrm{tidal}}$.
To estimate the local density, we adopt the projected surface density used by \citet{2006MNRAS.373..469B}, such that \begin{equation} \Sigma_{\textrm{N}} = \cfrac{N}{\pi {d_\textrm{N}}^2} \hspace{1mm}, \end{equation}where $d_{\textrm{N}}$ is the comoving distance to the N$^{th}$ nearest galaxy. The projected surface density Log($\Sigma_{\textrm{45}}$) is used for our study, which is defined as the average value of Log($\Sigma_{\textrm{N}}$) for N = 4 and 5 as proposed by \citet{2006MNRAS.373..469B}.
We also use $F_{\rm tidal}$ defined as \begin{equation} F_{\textrm{tidal}} = \text{Log}(M_{*}/M_{\odot}) - 3\,\text{Log}(r_{\textrm{nearest}}) \hspace{1mm}, \end{equation}where $M_{*}$ is the stellar mass of the closest neighboring galaxy and $r_{\textrm{nearest}}$ is the distance to the neighboring galaxy in kpc.
We define the nearest neighbor as the galaxy at the closest projected distance within a radial velocity difference of 1000 km\, s$^{-1}$ and with a mass between 0.1 and 10 times that of the target galaxy.
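In code, the two environmental parameters defined above amount to the following sketch (distances assumed in comoving Mpc for $d_{\rm N}$ and kpc for $r_{\rm nearest}$; function names are illustrative):

```python
import numpy as np

def log_sigma_45(d4, d5):
    """Log(Sigma_45): average of Log(Sigma_N) for N = 4 and 5, where
    Sigma_N = N / (pi d_N^2) and d_N is the projected comoving distance (Mpc)
    to the Nth nearest galaxy (Baldry et al. 2006)."""
    sigma = lambda n, d: n / (np.pi * d**2)
    return 0.5 * (np.log10(sigma(4, d4)) + np.log10(sigma(5, d5)))

def f_tidal(logm_neighbour, r_kpc):
    """Tidal parameter of the nearest neighbour:
    F_tidal = Log(M*/Msun) - 3 Log(r_nearest / kpc)."""
    return logm_neighbour - 3.0 * np.log10(r_kpc)
```

For instance, a $10^{10}\,M_\odot$ neighbour at 100 kpc gives $F_{\textrm{tidal}} = 10 - 3 \times 2 = 4$.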
Figures~\ref{fig:6} and \ref{fig:7} show respectively the frequency and amplitude of warped structures as functions of both local environmental parameters, Log($\Sigma_{\textrm{45}}$) and $F_{\textrm{tidal}}$.
In their upper panels, there is no detectable environmental effect on the warp fraction and amplitude.
Intriguingly, the lower panels of Figures~\ref{fig:6} and \ref{fig:7} show that S-type warps depend on the tidal force by the nearest neighbor galaxies, $F_{\textrm{tidal}}$.
As the tidal influence increases, the incidence and the warping amplitude of S-type warps increase, with cc = 0.189. When we consider the strongly warped galaxies ($\alpha > 3^{\circ}$), the correlation becomes slightly more significant, with cc = 0.203.
Contrary to S-type warps, the frequency and amplitudes of U-types depend on neither environmental parameter.
This implies that tidal interaction plays different roles in the formation of S- and U-type warps.
This result is consistent with previous theoretical studies, which usually reproduced S-types with few or no U-types via tidal interactions (\citealt{1995ApJ...455L..31W}; \citealt{2000ApJ...534..598V}; \citealt{2006ApJ...641L..33W}; \citealt{2008MNRAS.388..697M}; \citealt{2013MNRAS.429..159G}; \citealt{2014ApJ...789...90K}; \citealt{2017MNRAS.465.3446G}; \citealt{2018MNRAS.481..286L}; \citealt{2018Natur.561..360A}).
\subsubsection{The Group/Cluster Effect}
We examine the effect of the group/cluster environment on galactic warps.
We select the member galaxies of groups and clusters using the SDSS DR8 group/cluster member galaxy catalog by \citet{2012A&A...540A.106T}.
The group/cluster effects such as RPS and harassment are usually considered to occur in massive clusters ($M_{\textrm{halo}}/M_{\odot}$\,$>$\,$10^{14}$).
However, some recent observations reported that RPS galaxies exist even in low-mass galaxy groups (e.g., \citealt{2011atnf.prop.3970W}; \citealt{2012ApJ...757..122R}; \citealt{2018MNRAS.480.3152V}; \citealt{2019MNRAS.487.2797E}).
\citet{2011MNRAS.416.3170T} determined the RPS effect in groups and clusters with the halo mass range of $10^{12.5}$\,$<$\,$M_{\textrm{halo}}/M_{\odot}$\,$<$\,$10^{15.35}$. They concluded that the strength of RPS becomes more prominent as the halo mass increases, but galaxies in low-mass groups also experience mass loss by RPS.
\citet{2016AJ....151...78P} reported 344 jellyfish candidates in 71 galaxy clusters and 75 candidates in lower mass groups ($10^{11}$\,$<$\,$M_{\textrm{halo}}/M_{\odot}$\,$<$\,$10^{14}$).
Although the exact physical origin of jellyfishes in group environments remains to be securely established, the authors presented convincing cases of jellyfish galaxies in groups and lower mass halos.
They stated that \textit{``jellyfish galaxies could be present even in groups and lower mass halos."}
\citet{2021A&A...652A.153R} also identified 60 jellyfish galaxies in $\sim$500 galaxy groups with halo mass of $10^{12.5}$\,$<$\,$M_{\textrm{halo}}/M_{\odot}$\,$<$\,$10^{14}$ and compared them with LOFAR Two-metre Sky Survey (LoTSS) jellyfish galaxies.
Adopting these recent results, we use two different definitions of the group/cluster environment: ($a$) a loose criterion including galaxy groups of low halo mass ($M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{12}$), and ($b$) a conventional tight criterion including only massive galaxy clusters ($M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{14}$).
When we adopt the looser criterion, 13.2\% of S-types and 17.5\% of U-types belong to groups.
When using the conventional criterion, 2.2\% of S-types and 4.1\% of U-types are classified as cluster member galaxies.
Figure~\ref{fig:8} shows the best example images of U-type warped galaxies in massive clusters. While only a small fraction of warped galaxies belong to groups/clusters, U-type warps are slightly more common in these environments.
However, since the tidal and group/cluster environmental effects occur simultaneously, we find no explicit dependence of warp fractions and warping amplitudes on host halo mass.
To distinguish the effect of tidal interactions and group/cluster environments, we examine the tidal effects for non-group/cluster and group/cluster galaxies separately.
In Figure~\ref{fig:9}, the upper panels show the warping amplitudes of S- and U-type warps in non-group/cluster environments as a function of $F_{\textrm{tidal}}$.
The tidal force is exerted by the nearest neighbor and thus isolated warped galaxies are not shown in this figure.
The middle and lower panels show warped galaxies in groups and clusters.
We exclude the central galaxy in each group/cluster to examine the group/cluster effect on infalling galaxies only.
S-type warps in non-group/cluster and group/cluster environments show similar positive correlations with cc = 0.220.
This implies that tidal interaction is important in the formation of S-types even in groups/clusters.
By contrast, U-types in groups/clusters show no correlation between warping amplitudes and $F_{\textrm{tidal}}$, with relatively low cc = 0.037 and 0.027.
This can be explained if the imprints of tidal interactions on the formation of U-types in groups/clusters are erased by non-tidal effects such as RPS.
Thus, it is necessary to investigate how non-tidal mechanisms act in the construction of U-types in groups/clusters.
\section{Are U-type warps in groups/clusters jellyfishes?}
\label{sec:kine}
RPS in groups/clusters often produces galaxies with disturbed HI gas disks and tentacle-like structures, which are so-called ``jellyfish galaxies" (e.g., \citealt{2010MNRAS.408.1417S}; \citealt{2014ApJ...781L..40E}; \citealt{2014MNRAS.445.4335F}; \citealt{2016AJ....151...78P}; \citealt{2017ApJ...844...48P}; \citealt{2018MNRAS.476.3781R}; \citealt{2018MNRAS.476.4753J}; \citealt{2019MNRAS.483.1042Y}).
Some simulations suggested that jellyfish galaxies can exhibit U-shaped \textit{stellar} disks during RPS, specifically at the very early stage. For example, \citet{2012MNRAS.420.1990S} showed that the drag force on the gas disk during RPS can be transmitted to the dark matter halo, and the stellar disk can briefly ($<$ 200 Myr) take on a U-shaped appearance.
It is noteworthy that, according to this simulation, U-shaped stellar disks in jellyfishes are bent into the opposite direction of their stripped gas trails.
This theoretical expectation is consistent with the recent simulation of \citet{2022arXiv220101316L}, in which a U-shaped stellar warp opposite to stripped gas tails is present at the beginning of RPS ($t_\textrm{form} \sim$ 185 Myr).
Motivated by the previous work and our results on U-type warps in groups/clusters not showing the tidal effect in local environments, we investigate whether U-type warps in groups/clusters are related to jellyfish galaxies.
In this section, we discuss the possible link between U-type warps in groups/clusters and jellyfish galaxies, considering their similarities in optical morphology, kinematics in groups/clusters, sSFR, and HI fraction.
\subsection{Warped Jellyfish Galaxies}
We first look into the appearance of the warped stellar component of RPS jellyfish galaxies in the literature.
Although previous observational studies on jellyfish galaxies did not delve into the galaxies' warped disk structures, we find some interesting examples that show detectable U-shaped stellar disks through our visual inspection of their optical images.
Our examples include MACSJ0451-JFG1, MACSJ0712-JFG1, MACSJ1752-JFG1 in \textit{HST} F606W and F814W from \citet{2014ApJ...781L..40E}, A1758N-JFG1 in \textit{HST} F606W and F814W from \citet{2019ApJ...887..158K}, and JO113 in $g$-, $r$-, and $i$-bands from \citet{2020ApJ...899...13G}.
We find other examples in \citet{2020MNRAS.495..554R}: at least five U-shaped stellar warps among eight edge-on galaxies, with no S-shaped ones.
\citet{2021NatAs...5.1308G} investigated the effect of RPS on low-mass galaxies in the Coma and Abell 2147 clusters. We find through our visual inspection that at least four out of five edge-on galaxies (GMP 3176, GMP 2639, GMP 4348, and J160231.45+155749.9) are bent into U-shape morphologies.
However, since these observations did not include radio wavelengths, it is difficult to directly compare the directions of stellar warps and stripped gas tails.
Thanks to other recent multi-wavelength observations, we find further promising examples consistent with the theoretical expectation by \citet{2012MNRAS.420.1990S}.
\citet{2021arXiv211104501M} identified 13 jellyfish galaxies from Abell 2744 and Abell 370 clusters.
They provided RGB images from HST of jellyfish galaxies with overlapped [OII] emission from MUSE observation.
We find two edge-on galaxies, A370-06 and A370-08.
Galaxy A370-06 shows a U-shaped stellar disk bent in the opposite direction of gas tails.
\citet{2021A&A...652A.153R} provided $\sim$60 jellyfish galaxies' $g$-band optical images with overlapped LOFAR 144MHz maps.
Through our visual inspection, we find one S-shaped stellar disk (KUG0930+342) and two U-shaped disks (LEDA2158637 and LEDA2157975) among 20 edge-on galaxies from their sample.
Specifically, LEDA2157975 exhibits a U-shaped stellar disk clearly bent in the opposite direction of stripped gas components.
We further investigate morphologies of stellar disks of jellyfish galaxies in the Illustris TNG100 simulation (\citealt{2018MNRAS.475..648P}; \citealt{2019ComAC...6....2N}). \citet{2019MNRAS.483.1042Y} identified 800 candidates of jellyfish galaxies with host halo mass of $10^{13}$\,$<$\,$M_{\textrm{halo}}/M_{\odot}$\,$<$\,$10^{14.6}$ through visual inspection.
Figure~\ref{fig:10} illustrates best example images of stellar and gas components of the jellyfishes.
We find 50 jellyfish galaxies that show detectable U-shaped stellar disks and only one example of the S-shape disk.
The direction of warped stellar disks and gas tails are marked as orange and cyan arrows, respectively.
Most U-shaped jellyfish galaxies show detectable U-type stellar warps bent in the opposite direction of their stripped gas tails.
This is consistent with the numerical expectation of \citet{2012MNRAS.420.1990S} and \citet{2022arXiv220101316L}.
We note that there is one case of the S-shaped stellar warp (ID76093) among Illustris TNG100 jellyfish candidates.
This galaxy also has an S-shaped gas component, suggesting that gravitational mechanisms in addition to RPS influence both its stellar and gaseous disks.
Still, it is necessary to trace how the morphologies of warped disks evolve and to establish whether RPS governs their appearance.
We will investigate the time-dependent evolution of S- and U-type warps using simulations in forthcoming papers.
\subsection{The Phase-Space Distribution of Group/Cluster Galaxies}
The spatial distribution and kinematics of galaxies in clusters allow us to trace the evolutionary phases of the infalling process.
This well-established method is based on the `phase-space diagram' (e.g., \citealt{2011MNRAS.416.2882M}; \citealt{2013MNRAS.431.2307O}; \citealt{2015MNRAS.448.1715J}; \citealt{2017ApJ...843..128R}; \citealt{2019ApJ...876..145S}; \citealt{2019MNRAS.484.1702P}).
The position of each galaxy on the diagram indicates its evolutionary phase, from the beginning of infall into the cluster to full virialisation.
Figure~\ref{fig:11} shows galaxies with host halo mass of $M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{12}$ on the phase-space diagram
along with the caustic regions (\citealt{2004A&A...414..445M}; \citealt{2011MNRAS.416.2882M}; \citealt{2019MNRAS.484.1702P}).
Figure~\ref{fig:12} shows the same but for galaxies with more massive halo mass of $M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{14}$.
We follow the definition of \citet{2019MNRAS.484.1702P} [see their equations (3) and (4) from p = 1 to 5].
The definition is valid for galaxies close to the cluster centre ($R/R_{\textrm{vir}} < 1.0$), so we use only the subsample located in that range.
Region 1 contains virialised galaxies, whereas galaxies in Regions 5 and 6 are just beginning to fall in.
As illustrated in Figure~\ref{fig:11} and Figure~\ref{fig:12}, while S-type warps are more concentrated in Region 1, the distribution of U-types is more extended to outer regions on the phase-space diagram.
For the looser halo mass criterion, 22.4\% of S-types and 15.2\% of U-types are located in Region 1.
The difference becomes greater when we only use warped galaxies in more massive clusters: 35.1\% of S-type and 15.5\% of U-type warps populate Region 1.
Many studies showed that jellyfish galaxies tend to spread to the higher value of relative velocity on the phase-space diagram (e.g., \citealt{2018MNRAS.476.4753J}; \citealt{2019MNRAS.483.1042Y}).
Intriguingly, S- and U-type warps in our sample show clearly different distributions of relative velocity.
Histograms in Figure~\ref{fig:11} and Figure~\ref{fig:12} show that, while S-type warps exhibit a distribution similar to that of unwarped control galaxies, U-type warps are more widely distributed in relative velocity.
Among others, \citet{2019MNRAS.483.1042Y} classified $\sim$800 jellyfish galaxies from the Illustris TNG100 simulation and showed that their relative velocity distribution is more extended than that of the undisturbed control sample.
In the bottom panels of Figure~\ref{fig:11}, our U-type warps show an extended distribution in relative velocity similar to $\sim$\,150 jellyfish candidates observed by \citet{2016AJ....151...78P}.
In the bottom panels of Figure~\ref{fig:12}, when we compare U-type warps and jellyfish candidates in more massive clusters, they still show similar distribution of relative velocity.
Also, we showed in Section~\ref{sec:Env} that, while the S-type warps show systematic correlations with tidal interactions irrespective of cluster environment, U-types in clusters are unrelated to the local environment.
These results imply that most U-type warps in galaxy clusters are still not virialised within clusters' potential and seem to be affected by RPS like jellyfish galaxies.
\subsection{Stellar Mass, Star Formation Rate, and Gas Mass Fraction}
Jellyfish galaxies often exhibit increased star formation activity (\citealt{1985ApJ...294L..89G}; \citealt{2008MNRAS.388.1152P}; \citealt{2012MNRAS.427.1252M}; \citealt{2014MNRAS.438..444B}; \citealt{2016AJ....151...78P}; \citealt{2021arXiv210405383R}).
Star formation is enhanced at the infalling front of jellyfish galaxies in clusters, where gas compression occurs (\citealt{2009A&A...499...87K}; \citealt{2018ApJ...866L..25V}; \citealt{2019MNRAS.487.4580R}; \citealt{2021A&A...650A.111R}).
For instance, \citet{2018ApJ...866L..25V} identified 42 RPS galaxies and found that both disks and tails have systematically higher SFR than the control sample.
\citet{2020MNRAS.495..554R} investigated RPS galaxies in the Coma cluster to find that RPS galaxies exhibit a higher SFR relative to `normal' star-forming galaxies and isolated galaxies in fields. They suggested that RPS can trigger star formation prior to quenching.
\citet{2021arXiv211208728R} identified four jellyfish galaxies from the Perseus cluster and showed that all four jellyfishes exhibit star formation enhancement along the opposite direction of their stripped tails.
\citet{2021gcf2.confE..22D} found 79 jellyfish candidates from the MACS0717 cluster and confirmed that jellyfishes tend to have higher sSFR and bluer colors.
They stated that \textit{``jellyfish galaxy candidates appear to have somewhat larger SFRs than non-jellyfish star-forming galaxies."}
\citet{2022MNRAS.509.1342R} identified 48 RPS galaxies in low-mass groups ($M_{\textrm{halo}}/M_{\odot}$\,$<$\,$10^{14}$) and massive clusters ($M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{14}$) by visual inspection using the Ultraviolet Near Infrared Optical Northern Survey (UNIONS) imaging and showed that RPS galaxies commonly have enhanced SFRs.
However, some studies have reported a lack of observational evidence for SFR enhancement during RPS.
For example, \citet{2021JKAS...54...17M} and \citet{2021IJAA...11...95H} respectively investigated 48 RPS galaxies in the Virgo cluster and 180 galaxies of the merging cluster Abell 3266, and found no strong evidence of RPS-induced global SF enhancement.
\citet{2021IJAA...11...95H} suggested that RPS-induced SF enhancement is only locally modest, and the overall effect of SF quenching increases as the strength of RPS increases.
The net effect of SF enhancement, however, strongly depends on the gas fraction and the galaxy's evolutionary phase during RPS.
Without morphological classification, these studies do not capture the characteristics of U-shaped disk galaxies in clusters.
\citet{2019MNRAS.487.3102G} presented the case of the jellyfish galaxy JO201, in which star formation is reduced during RPS.
This galaxy shows an H$_2$ cavity in which star formation was suppressed by AGN feedback within the last few $10^8$ yr.
However, we exclude AGN-hosting galaxies from our sample in this study.
Now, we examine sSFR and star formation efficiency (SFE) of warped and unwarped galaxies at given stellar and gas mass.
Recent observations suggested that on average RPS galaxies exhibit lower stellar mass and higher gas fractions.
For example, \citet{2018A&A...618A.130G} suggested that galaxies with lower stellar mass and higher gas fractions are more affected by RPS.
\citet{2021NatAs...5.1308G} found that $\sim$60\% of RPS galaxies show higher gas fractions.
Even though infalling galaxies should lose their gas components during RPS, most stripped galaxies still exhibit higher gas fractions.
This result is consistent with theoretical expectations of RPS galaxies.
\citet{2012MNRAS.420.1990S} demonstrated using their simulation that galaxies with higher gas mass fractions exhibit clearer signs of U-shaped stellar disk structures.
\citet{2019MNRAS.483.1042Y} found that jellyfishes are about three times more common at lower stellar mass ($M_{*}/M_{\odot} < 10^{10}$) than at higher mass ($M_{*}/M_{\odot} > 10^{10}$).
Following these recent findings, in the upper panels of Figure~\ref{fig:13} and Figure~\ref{fig:14}, we compare sSFRs and HI gas mass fractions ($f_{\textrm{HI}}=M_{\textrm{HI}}/M_{*}$) of galaxies in clusters, using different halo mass criteria ($M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{12}$ and $M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{14}$).
Our sample is matched with the galaxies in the ALFALFA survey (\citealt{2018ApJ...861...49H}) and S- and U-type warps have HI detection rates of $\sim$2.4\% and $\sim$3.6\%, respectively.
Despite the small sample, U-type warps in clusters have, on average, slightly smaller stellar masses and higher $f_{\textrm{HI}}$, making them more susceptible to RPS.
The lower left panels of Figure~\ref{fig:13} and Figure~\ref{fig:14} show that, at the same $f_{\textrm{HI}}$, U-type warps exhibit higher sSFR than S-types and unwarped galaxies.
This is more significant for galaxies in more massive host halos ($M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{14}$).
The result indicates that U-type warps in clusters have high SFE similar to jellyfish galaxies (\citealt{2018ApJ...867L..29W}; \citealt{2019MNRAS.487.4580R}; \citealt{2019MNRAS.486L..26S}; \citealt{2020A&A...640A..22R}).
Using the EAGLE simulation, \citet{2016Galax...4...77T} found clear evidence of asymmetric SFE enhancement of RPS galaxies at the windward side.
\citet{2019MNRAS.486L..26S} showed with simulations that pressure from the intracluster medium can increase the SFE, under the assumption that the galactic magnetic field halts the evaporation of gas clouds during RPS.
Observationally, \citet{2019MNRAS.487.4580R} and \citet{2020A&A...640A..22R} presented the case of the galaxy JO206 from the GAs Stripping Phenomena in galaxies with MUSE (GASP) sample, which shows a higher SFE than field galaxies of similar stellar and gas mass.
Specifically, RPS jellyfish galaxies exhibit 5--10 times higher SFE in their disks than in their stripped tails, owing to compression of molecular gas by RPS.
SFE enhancement in RPS galaxies has been reported even in the high-redshift universe.
\citet{2018ApJ...867L..29W} identified 14 member galaxies from a distant X-ray cluster CLJ1001 (z $\sim$ 2) and found a clear trend of their SFE as a function of cluster centric distance.
Galaxies at the cluster center exhibit less molecular gas than field galaxies, but, intriguingly, higher SFR.
They suggested that cluster environment effects such as RPS can delay quenching with increasing SFE.
However, one should bear in mind that observations probe the present, not the initial, $f_{\textrm{HI}}$.
Many simulations suggested that RPS can remove the gas component efficiently, and RPS galaxies can change systematically from gas-rich to gas-poor over time.
For example, \citet{2017ApJ...838...81Y} showed that RPS galaxies have a wide range of gas fractions. Using a phase-space diagram and HI morphologies, they defined the evolutionary phase of infalling galaxies and showed that the HI deficiency depends on the time since the first infall.
Specifically, only recently infallen galaxies exhibit strongly disturbed HI morphologies and higher gas fractions.
Similarly, \citet{2021JKAS...54...17M} showed that the HI deficiency of RPS galaxies in the Virgo cluster strongly depends on the evolutionary phase of infall.
\citet{2021arXiv211212244M} investigated RPS galaxies in the Coma cluster and showed using a simple model that the distribution of $\Delta f_{\textrm{HI}}$ and $\Delta$sSFR can trace the evolutionary phase of RPS.
The residual of $f_{\textrm{HI}}$, $\Delta f_{\textrm{HI}}$, is defined by the difference in $f_{\textrm{HI}}$ between the warp sample and the mean $f_{\textrm{HI}}$ of its corresponding control sample.
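As an illustrative sketch (not the authors' code), these residuals can be computed per galaxy relative to the mean of its matched control sample; working in log space gives the dex offsets plotted in the figures:

```python
import numpy as np

# Residual relative to the control sample, as defined in the text: the
# galaxy's value minus the mean value of its matched (unwarped) control
# sample. Computed in log space, so the result is in dex; Delta sSFR is
# computed analogously.
def delta_log(value, control_values):
    """Residual in dex relative to the control-sample mean (in log space)."""
    return np.log10(value) - np.mean(np.log10(control_values))

# A galaxy identical to its controls has zero residual:
print(delta_log(0.5, [0.5, 0.5, 0.5]))  # → 0.0
```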
According to their model, gas-rich galaxies temporarily experience rapid ($\leq$\,300\,Myr) gas removal and a starburst, so that at the beginning of RPS they still show a normal HI gas fraction together with an enhanced SFR.
As galaxies evolve, the deficiency of HI increases and SFR is suppressed.
Along this evolution during RPS, galaxies move from the first through second to third quadrant on the $\Delta f_{\textrm{HI}}$--$\Delta$sSFR parameter space.
The lower right panels of Figure~\ref{fig:13} and Figure~\ref{fig:14} show the distribution of S-, U-type warps and unwarped galaxies in clusters.
In this $\Delta f_{\textrm{HI}}$--$\Delta$sSFR plane, the symbol size represents the warping amplitude.
Most U-types are distributed in the first and second quadrants, with only a few in the fourth quadrant.
The absence of galaxies in the fourth quadrant is similar to the RPS galaxies explored by \citet{2021arXiv211212244M}.
We also find that more strongly warped U-types prefer the first quadrant.
The mean warping amplitude of U-type warps in the first quadrant is $\sim$5.5$^{\circ}$ greater than that of U-types in the third quadrant, regardless of the halo mass criterion for groups/clusters.
This is consistent with our aforementioned result on the correlation between sSFR enhancement and warping amplitudes for U-type warps.
To be more quantitative, we estimate the orthogonal offsets of S- and U-type warps from the unwarped galaxies.
We fit the relation for unwarped galaxies in clusters with host halo mass of $M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{12}$ [$\Delta \textrm{Log(sSFR)} = 0.97 \,\Delta\textrm{Log}(f_{\textrm{HI}}) - 0.01$] and in clusters with host halo mass of $M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{14}$ [$\Delta \textrm{Log(sSFR)} = 0.94\,\Delta\textrm{Log}(f_{\textrm{HI}})$].
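A minimal sketch of this offset computation (illustrative, not the authors' code): the signed perpendicular distance from a point to a line of slope $m$ and intercept $c$, with the coefficients quoted above for the $M_{\textrm{halo}}/M_{\odot} \geq 10^{12}$ sample as defaults:

```python
import numpy as np

# Orthogonal offset of each galaxy from the relation fitted to unwarped
# galaxies, Delta log(sSFR) = m * Delta log(f_HI) + c. The defaults m = 0.97,
# c = -0.01 are the coefficients quoted in the text for the
# M_halo >= 1e12 Msun sample.
def orthogonal_offset(dlog_fhi, dlog_ssfr, m=0.97, c=-0.01):
    """Signed perpendicular distance (dex) from the fitted relation."""
    dlog_fhi = np.asarray(dlog_fhi, dtype=float)
    dlog_ssfr = np.asarray(dlog_ssfr, dtype=float)
    return (dlog_ssfr - m * dlog_fhi - c) / np.hypot(1.0, m)

# A galaxy lying exactly on the relation has zero offset:
print(orthogonal_offset(0.5, 0.97 * 0.5 - 0.01))  # → 0.0
```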
The resulting distributions of offsets are illustrated in Figure~\ref{fig:15}.
On average, U-type warped galaxies show $\sim$0.2\,dex higher offsets than unwarped and S-type galaxies.
This result is consistent with theoretical expectation of \citet{2021arXiv211212244M}.
\section{Summary and Discussion}
\label{sec:conc}
Our main questions in this study are: ($a$) are S- and U-type warps created by the same mechanism? and ($b$) how can we explain U-type warps?
To address the questions,
we construct the most extensive catalog to date of $\sim$3000 nearby (0.02\,$<$\,z\,$<$\,0.06) massive ($M_{*}/M_{\odot}$\,$>$\,$10^9$) warped disk galaxies through our new automatic warp measurement scheme.
Then, we compare key properties, including optical colors, sSFR, several environmental parameters, warping amplitudes, and kinematics within groups/clusters, stellar mass and gas fraction, between S- and U-type warped galaxies for the first time.
Our findings are summarised as follows.
\begin{enumerate}
\item
U-type warps exhibit bluer optical color and higher sSFR than S-types and unwarped galaxies at a given stellar mass.
The $\Delta$Log(sSFR) correlates positively with warping amplitudes.
These results indicate that the U-type warp formation mechanism entails enhanced sSFR.
\item
While warp properties of S-type warps are correlated with the tidal force by the nearest neighbors irrespective of galaxy cluster membership, U-types in clusters show no local environmental dependence.
This implies that conventional gravitational interactions create only S-type warps, and that alternative non-tidal mechanisms are required to explain the existence of U-type warps, at least in clusters.
\item
A thorough visual inspection of jellyfish galaxies in the literature reveals several intriguing examples of RPS-driven U-shaped warped galaxies bent in the direction opposite to their stripped gas tails, consistent with published theoretical expectations for RPS-driven stellar warps.
\item
There are considerable similarities between U-type warps in galaxy clusters and RPS-induced jellyfish galaxies in terms of the morphology, location on the phase-space diagram, sSFR and $f_\textrm{{HI}}$.
These results suggest that U-types in groups/clusters could be connected to jellyfish galaxies at the very early stage of RPS, explaining the existence of U-types, which are hard to produce in conventional galaxy--galaxy interaction simulations.
\end{enumerate}
The discovery of similarities between U-type warps in groups/clusters and jellyfish galaxies is encouraging in the context of unveiling the origin of warped disk galaxies.
Given that only 17.5$\%$ (4.1$\%$) of U-type warps belong to groups/clusters with $M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{12}$ (more massive clusters with $M_{\textrm{halo}}/M_{\odot}$\,$\geq$\,$10^{14}$), the RPS origin of the group/cluster U-type warps remains to be confirmed by follow-up observations, such as integral-field spectroscopy and HI surveys of the gas component in these galaxies.
Moreover, the majority of U-type warps, which reside in the field, remain to be explained.
Different pathways to form U-shaped disk galaxies including galaxy-galaxy interactions, large-scale gas infall, and interactions with misaligned dark matter halos remain to be fully explored.
We propose that RPS galaxies can be observed as U-type warps with higher sSFR and stronger warping amplitudes at the beginning of infall.
Following our scenario, it is natural to expect that U-type warps should be more common in more massive clusters due to their stronger effect of RPS.
However, the incidence and strength of U-type warps in groups/clusters do not depend on their host halo mass.
The absence of a strong correlation between U-type warps and groups/clusters' halo mass can be explained by the short time-scale of RPS-driven U-shaped stellar disks.
Theoretical studies predicted that U-shaped stellar disks form and survive only briefly ($\leq$\,200\,Myr) at the early stage of RPS (\citealt{2012MNRAS.420.1990S}; \citealt{2022arXiv220101316L}).
This time-scale is too short for stable U-shaped stellar disks to be commonly observed.
In contrast, RPS proceeds slowly, over a longer time-scale ($\sim$3\,Gyr), in galaxy groups with less massive halos.
Thus, despite the lower efficiency of RPS, there is a greater chance of detecting stable stripped stellar disks in less massive groups than in more massive clusters.
It is necessary to investigate the time-dependent evolution of RPS-driven stellar warps in detail with further cosmological simulations.
Our results are essential for galactic warp studies because U-shaped RPS galaxies can lead to overestimation of the incidence of tidal-origin U-type warps.
Thus, warped galaxies in fields and clusters should be investigated separately to assess the effect of the tidal interactions without the contamination of cluster environment effects.
Our results are also important for RPS studies that propose using warps as an additional sign of stripping.
The measurement of warp morphologies and warping amplitudes is affected by complex spiral arms, dust lanes, and the orientation angle from the observer's viewpoint.
We thus need a larger data set to study the effect of these substructures on warp measurements.
We initiated the \textit{Poppin' Galaxy}\footnote{\href {https://www.zooniverse.org/projects/wim0705/poppin-galaxy}{https://www.zooniverse.org/projects/wim0705/poppin-galaxy}} project through the Zooniverse in 2018 to gather a large sample of warped galaxies classified by over 10,000 volunteers.
In forthcoming papers, we will investigate the origin and properties of the warp phenomenon, exploiting our extensive observational data in comparison with high-resolution cosmological simulations of galaxies.
\acknowledgments
S.-J.Y. acknowledges support by the Mid-career Researcher Program (No. 2019R1A2C3006242) through the National Research Foundation of Korea.
Funding for the Sloan Digital Sky Survey (SDSS) and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web site is \url{http://www.sdss.org/}.
The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, The University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
\bibliography{sample63}
\bibliographystyle{aasjournal}
Title: Errors When Constraining Hot Blackbody Parameters with Optical Photometry
Abstract: Measuring blackbody parameters for objects hotter than a few 10^4K with
optical data alone is common in many astrophysical studies. However this
process is prone to large errors because at those temperatures the optical
bands are mostly sampling the Rayleigh-Jeans tail of the spectrum. Here we
quantify these errors by simulating different blackbodies, sampling them in
various bands with realistic measurement errors, and re-fitting them to
blackbodies using two different methods and two different priors. We find that
when using only optical data, log-uniform priors perform better than uniform
priors. Still, measured temperatures of blackbodies above ~35,000K can be wrong
by ~10,000K, and only lower limits can be obtained for temperatures of
blackbodies hotter than ~50,000K. Bolometric luminosities estimated from
optical-only blackbody fits can be wrong by factors of 3-5. When adding
space-based ultraviolet data, these errors shrink significantly. For when such
data are not available, we provide plots and tables of the distributions of
true temperatures that can result in various measured temperatures. It is
important to take these distributions into account as systematic uncertainties
when fitting hot blackbodies with optical data alone.
https://export.arxiv.org/pdf/2208.13674
\title{Errors When Constraining Hot Blackbody Parameters with Optical Photometry}
\author[0000-0001-7090-4898]{Iair Arcavi}
\affiliation{The School of Physics and Astronomy, Tel Aviv University, Tel Aviv, 69978, Israel}
\affiliation{CIFAR Azrieli Global Scholars program, CIFAR, Toronto, Canada}
\correspondingauthor{Iair Arcavi}
\email{arcavi@tauex.tau.ac.il}
\keywords{High energy astrophysics (739), Astronomical methods (1043), Optical astronomy (1776), Ultraviolet astronomy (1736)}
\section{Introduction}
In many astrophysical studies it is useful to fit a blackbody spectrum to broadband photometry in order to determine the effective temperature, emitting radius, and bolometric luminosity of an object or transient event. Hot blackbodies (at temperatures of a few $10^4$\,K) are of particular interest, for example in certain tidal disruption events (see \citealt{vanVelzen2020} and \citealt{Gezari2021} for recent reviews), in the early phases of supernovae \citep[e.g.][]{Valenti2016} and in the very early phases of kilonovae \citep[at least based on the one case observed so far;][]{Abbott2017}.
Constraining the temperature, and with it the bolometric luminosity, of tidal disruption events is important for distinguishing between emission models, and for deriving properties of the supermassive black hole population \citep[e.g.][]{Piran2015,Dai2018,Mockler2019,Ryu2020}. Measuring the cooling rate of supernovae during their first days can be used to constrain both the progenitor parameters and explosion physics \citep[e.g.][]{Nakar2010,Rabinak2011,Shussman2016,Arcavi2017,Rubin2017,Sapir2017}. For kilonovae to be used to constrain the neutron-star equation of state, heavy-element nucleosynthesis, and even cosmology, it turns out that measuring their cooling during the first few hours is especially important \citep[e.g.][]{Arcavi2018}.
However, in many of these cases (when not highly redshifted), the optical wavelength regime falls on the Rayleigh–Jeans tail of the spectrum, where it is difficult to constrain the temperature with optical data alone. Figure \ref{fig:filters} illustrates how the spectra of blackbodies hotter than approximately 30,000\,K are very similar in the optical bands. It is therefore expected that optical data alone will not be able to clearly distinguish between blackbodies at those temperatures, especially when measurement uncertainties are taken into account.
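This degeneracy follows directly from the Planck function in the Rayleigh--Jeans limit:
\begin{equation}
B_\nu(T) = \frac{2h\nu^3/c^2}{e^{h\nu/k_B T}-1} \approx \frac{2\nu^2 k_B T}{c^2} \quad (h\nu \ll k_B T),
\end{equation}
so the spectral shape becomes independent of temperature, which enters only through the overall normalization and is therefore degenerate with the emitting radius. For the $g$ band ($\nu \approx 6\times10^{14}$\,Hz), $h\nu/k_B \approx 3\times10^4$\,K, which is why blackbodies hotter than $\sim$30,000\,K look alike in the optical.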
Indeed, \cite{Faran2018} show that when approaching temperatures of 20,000\,K, temperature estimates based on optical data alone can be off by $\gtrsim$1,000\,K. Here we expand the quantification of such measurement errors and characterize them also at higher temperatures. We simulate observations of blackbodies at various temperatures, taking into account realistic measurement uncertainties, in both optical and ultraviolet bands, and fit the data to blackbodies as would be done for real observations (Section \ref{sec:method}). We then measure how different the best-fit temperature is from the true simulated temperature for different temperatures, band combinations, methods, priors, and measurement uncertainties (Section \ref{sec:results}). These results are then inverted in order to determine, given an optical-only temperature measurement, what true temperature values could produce it. We also check what effect the temperature measurement errors have on the errors of the deduced bolometric luminosities (Section \ref{sec:analysis}).
We do not consider real-life complications such as K-corrections, distance uncertainties, extinction uncertainties, line emission and absorption contamination, and line blanketing. Extinction and line blanketing are especially important to consider with ultraviolet data; however here we wish to isolate the degeneracies and systematic errors of fitting just a hot blackbody component with optical data, and compare it to using also ultraviolet data.
\section{Method}\label{sec:method}
We use two separate methods for generating synthetic magnitudes from blackbody spectra and for fitting them back to blackbody spectra (each method performs both operations). The first method makes use of the Astrolib PySynphot package (hereafter referred to only as \texttt{pysynphot}; \citealt{pysynphot}), and the second uses the Light Curve Fitting package (hereafter, \texttt{lightcurve\_fitting}; \citealt{lightcurve_fitting}), version 0.4.0.
The main difference between the methods is in the way the blackbody spectrum is fit to the data (see below). However, there are also differences in the magnitudes generated by each method for given blackbodies (Appendix \ref{sec:app-diffs} and Figure \ref{fig:methodsdiff}). Given these differences, we use each method to fit only data simulated with that same method.
\subsection{Generating Synthetic Data}
We create spectra of blackbodies with a radius of 1,000\,R$_{\sun}$ and temperatures of 5,000\,K--80,000\,K in 5,000\,K increments. For each blackbody, we generate synthetic spectral energy distribution (SED) magnitudes in the Sloan Digital Sky Survey \citep[SDSS;][]{Doi2010} $ugri$ bands, the Johnson-Cousins \citep{Johnson1953,Cousins1976} $BVRI$ bands, and The Neil Gehrels {\it Swift} Observatory (hereafter {\it Swift}; \citealt{Gehrels2004}) Ultraviolet/Optical Telescope \citep{Roming2005} $uvw1$, $uvm2$ and $uvw2$ bands (hereafter $w1$, $m2$ and $w2$ respectively)\footnote{Magnitudes in the $ugri$-bands are generated in the AB system, while magnitudes in the rest of the bands are generated in the Vega system.}.
We generate data in six band combinations: $gri$ (optical observations), $ugri$ (optical with ground-based ultraviolet observations), $ugri$ + $w1$, $m2$ and $w2$ (optical with ground- and space-based ultraviolet observations), and similarly for the Johnson-Cousins bands, $BVRI$, $UBVRI$, and $UBVRI$ + $w1$, $m2$ and $w2$. We do not consider infrared bands. Although hot blackbodies differ in their infrared emission (Fig. \ref{fig:filters}), in practice their infrared emission is faint compared to their optical emission, and therefore more difficult to measure.
In total we produce 32 sets of blackbody simulation parameters (16 temperatures using two methods for generating the magnitudes). For each of these 32 sets of parameters, we generate two ensembles of 100 SED realizations each by adding randomly generated Gaussian noise to each synthetic magnitude, once with a standard deviation of 0.05 magnitudes and once with a standard deviation of 0.1 magnitudes\footnote{We use fixed standard deviations for all bands at all temperatures, though in reality, measurement uncertainties could depend on band and temperature.}.
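The realization step can be sketched in a few lines (an illustrative stand-in for the actual simulation code; the band count and magnitudes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducibility

def make_realizations(true_mags, sigma=0.05, n=100):
    """Add Gaussian noise (sigma mag, as in the text) to a synthetic SED,
    returning n noisy realizations with shape (n, n_bands)."""
    true_mags = np.asarray(true_mags, dtype=float)
    return true_mags + rng.normal(0.0, sigma, size=(n, true_mags.size))

# e.g. a 3-band (gri-like) SED, 100 realizations at 0.05 mag scatter
sed = make_realizations([18.2, 18.5, 18.8], sigma=0.05, n=100)
```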
\subsection{Fitting Blackbodies to the Synthetic Data}
We use the ``forward modeling'' technique preferred by \cite{Brown2016}, whereby the model spectrum (in our case, a blackbody) is convolved with the various filter response curves and then compared to the simulated data with chi-squared used as the likelihood function and all bands weighted equally. In the \texttt{pysynphot} method, the fit is done in magnitude vs. wavelength space, while in the \texttt{lightcurve\_fitting} package it is done in flux density ($F_\nu$) vs. frequency space.
In both cases the fit is performed using the Markov Chain Monte Carlo (MCMC) technique as implemented by the \texttt{emcee} package \citep{Foreman-Mackey2013}. We use 16 walkers and 600 steps (of which 200 are burn-in steps) to fit both the temperature and radius of the blackbody to the data. We repeat the fits with two types of priors: uniform and log-uniform for both the temperature (in the range $10^3$--$10^5$\,K), and the radius (in the range $10$--$10^6$\,R$_{\sun}$). For the log-uniform case, we also test the effect of underestimating the photometric uncertainties by reporting half of the true uncertainty to the fitters. The code used to simulate and fit the blackbodies is publicly available on Github\footnote{\url{https://github.com/arcavi/bb_sims}}.
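As a simplified illustration of the forward-modeling idea (a chi-squared grid search standing in for the MCMC fit, and monochromatic effective wavelengths standing in for the full filter response curves; all numerical values here are approximate and purely illustrative):

```python
import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16  # Planck, light speed, Boltzmann (cgs)
# Approximate effective wavelengths in Angstrom (illustrative only)
bands = {"u": 3560.0, "g": 4830.0, "r": 6260.0, "i": 7670.0, "w2": 2030.0}

def bb_ab_mags(T, wavelengths_aa):
    """AB-like magnitudes of a blackbody at temperature T (K), up to an
    overall additive constant absorbing radius and distance."""
    nu = c / (np.asarray(wavelengths_aa) * 1e-8)        # Hz
    bnu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
    return -2.5 * np.log10(bnu)

def grid_fit(mags, sigma, wavelengths_aa, temps):
    """Best-fit temperature by chi-squared over a grid (a stand-in for the
    MCMC of the text); the normalization is profiled out in magnitude space."""
    chi2 = []
    for T in temps:
        model = bb_ab_mags(T, wavelengths_aa)
        shift = np.mean(mags - model)                   # best additive offset
        chi2.append(np.sum(((mags - model - shift) / sigma) ** 2))
    return temps[int(np.argmin(chi2))]

temps = np.arange(5000.0, 80001.0, 5000.0)
wl_gri = [bands[b] for b in ("g", "r", "i")]
mags = bb_ab_mags(40000.0, wl_gri)
print(grid_fit(mags, 0.05, wl_gri, temps))  # → 40000.0 (noiseless input;
# with 0.05 mag noise the optical-only fit scatters over tens of thousands of K)
```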
In total we fit each of the $32\times200$ realizations of simulated data in 18 different ways: six band combinations, each with three prior configurations (uniform priors, log-uniform priors, and log-uniform priors with underestimated uncertainties). Thus we have 115,200 different fits, which we proceed to analyse.
\subsection{Nomenclature}
Hereafter, we will use the term ``error'' to denote the difference between a measured value and a true value, while the term ``uncertainty'' will denote the estimated spread of a measured value due to measurement errors or fit posteriors. For example, if for a 35,000\,K blackbody we measure a temperature of 29,000$\pm$7,000\,K, then the error is 6,000\,K while the uncertainty is 7,000\,K.
\section{Results}\label{sec:results}
\subsection{Example Fits}
Example fits using the log-uniform priors for a 30,000\,K blackbody are shown in Figure \ref{fig:eg_fits}. As expected, fitting such a blackbody with optical data alone results in both inaccurate (i.e. having a large error) and imprecise (i.e. having a large uncertainty) measurements of the blackbody radius and temperature, compared to fitting it with optical and ultraviolet data together. In the top realization in Figure \ref{fig:eg_fits}, fit with the \texttt{pysynphot} method, the most likely values are close to the true ones, but the uncertainty in the optical-only fits are much larger compared to the fits which use the optical and ultraviolet data together. In the bottom realization, fit with the \texttt{lightcurve\_fitting} method, the optical-only fits show not only larger uncertainties compared to the combined optical-ultraviolet fits, but are also farther from the true values (by $\sim$10,000\,K).
The difference in accuracy in this example is not due to the different methods (see below), but rather the different realizations. This is why we generate ensembles of 100 realizations for each temperature, band combination, magnitude uncertainty, and method, and look at the statistical properties of the resulting fits.
\subsection{Full Set of Fits}
For each individual MCMC fit we take the 50th percentile (i.e. median) of the posterior as the best-fit estimated value. Repeating this for each of the 100 realizations of a given ensemble produces an ensemble distribution of best-fit values for that set of blackbody temperature, band combination, magnitude uncertainty, and method. Similarly, we take the 16th and 84th percentiles of each MCMC fit posterior as the lower and upper uncertainty estimates of the fit respectively\footnote{We use the median and 68\% credible interval range since it is equivalent to using the mean $\pm1\sigma$ range for normally distributed results, but does not require assuming that the distribution is normal (or even symmetric).}, and also produce an ensemble-distribution of these values per parameter set.
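This posterior summary reduces to percentiles of the flattened chain; a generic sketch (`chain` here is any 1-D array of posterior temperature samples, not the authors' code):

```python
import numpy as np

def summarize_posterior(chain):
    """Median as the best-fit value; 16th/84th percentiles as the lower and
    upper uncertainty bounds (the 68% credible interval used in the text)."""
    lo, med, hi = np.percentile(chain, [16, 50, 84])
    return med, med - lo, hi - med   # best fit, minus-error, plus-error
```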
\subsubsection{The Effect of Priors}
We plot the median of the best-fit temperature ensemble distributions for each set of blackbody simulation parameters and each type of prior for the 0.05 magnitude uncertainty simulations and the SDSS SEDs in Figure \ref{fig:res_t_sdss_p05_priorcomp}. When using optical and ultraviolet data (right panel in Figure \ref{fig:res_t_sdss_p05_priorcomp}), both priors produce similar results. However, when using optical data alone (left panel in Figure \ref{fig:res_t_sdss_p05_priorcomp}), the log-uniform priors produce more accurate ensemble median best-fit temperatures, compared to the uniform priors. The results for the SDSS 0.1 magnitude uncertainties and for both uncertainty values with the Johnson-Cousins SEDs are similar and are presented in Figures \ref{fig:res_t_sdss_p1_priorcomp}, \ref{fig:res_t_john_p05_priorcomp} and \ref{fig:res_t_john_p1_priorcomp}. We conclude that the log-uniform priors produce more accurate results when fitting optical data, and we proceed to analyse only those fits in the remainder of this work.
\subsubsection{Full Results for the Log-Uniform Priors}
We plot the median and 16th--84th percentile of the ensemble distributions for each set of blackbody simulation parameters in Figures \ref{fig:res_t_sdss_p05} and \ref{fig:res_t_sdss_p1} for the SDSS SEDs (the results for the Johnson-Cousins SEDs are similar and are presented in Figures \ref{fig:res_t_john_p05} and \ref{fig:res_t_john_p1}).
So, for example, if the true temperature of the object being measured is 40,000\,K, then $gri$-band measurements with an uncertainty of 0.05 magnitudes yield best-fit temperatures in the range $\sim$30,000--60,000\,K (68\% bounds; top left panel of Figure \ref{fig:res_t_sdss_p05}). The fit uncertainties in this case are $\sim$15,000--20,000\,K (bottom left panel of Figure \ref{fig:res_t_sdss_p05}). Incorporating ground- and space-based ultraviolet measurements reduces the range of the best-fit temperatures to just a few hundred K around the true temperature (top right panel of Figure \ref{fig:res_t_sdss_p05}). The fit uncertainties are also (correctly) reduced to a few hundred K in this case (bottom right panel of Figure \ref{fig:res_t_sdss_p05}).
\subsubsection{The Effect of Underestimating the Photometric Uncertainties}
We plot the simulation results in the same way as above for the fits where the reported uncertainty is half of the true one in Figures \ref{fig:res_t_sdss_p05_err_und} -- \ref{fig:res_t_john_p1_err_und}. We find that there is no significant difference compared to the case where the uncertainties are estimated correctly. The only apparent difference is that underestimating the photometric uncertainties causes the \texttt{pysynphot} method to underestimate the derived temperature uncertainties.
\section{Analysis} \label{sec:analysis}
Having established that the log-uniform priors perform better compared to the uniform priors and that underestimating the photometric uncertainties does not have a strong effect on the results, our analysis hereafter focuses only on the fits using the log-uniform priors and having the correct uncertainty estimations.
\subsection{Errors in Temperature Estimation}
We find that for 0.05 magnitude uncertainties (Fig. \ref{fig:res_t_sdss_p05}), relying on $gri$-band data alone can result in several thousand K errors in temperature (in both directions) already at 20,000\,K. At around 40,000\,K these errors increase to -10,000\,K and +20,000\,K. At temperatures of 80,000\,K, systematic underestimates of up to 20,000\,K can occur. For 0.1 magnitude uncertainties (Fig. \ref{fig:res_t_sdss_p1}), at temperatures of 30,000\,K, overestimates of 20,000\,K can occur, while at 80,000\,K, underestimates reach 40,000\,K. Adding $u$-band observations slightly decreases these errors, but not significantly.
In all of these cases, the reported fit uncertainties (dotted lines in the bottom panels of Figures \ref{fig:res_t_sdss_p05} and \ref{fig:res_t_sdss_p1}) roughly encompass the error correctly. However, for blackbodies hotter than about 50,000\,K, the fit uncertainties remain symmetric while the error is systematically toward underestimating the temperature (as expected when sampling a blackbody deep in its Rayleigh–Jeans tail). Similar results are seen for the Johnson-Cousins SEDs (Figures \ref{fig:res_t_john_p05} and \ref{fig:res_t_john_p1}).
It is only when adding data from the {\it Swift} bands that the errors decrease significantly and the fit uncertainties encompass the errors correctly at all temperatures.
In all cases studied there are almost no differences between the two fitting methods.
\subsection{The True Temperature Posterior For Optical Fits}
Unfortunately, ultraviolet observations are expensive and not always possible. It is therefore desirable to know the correct uncertainty when measuring temperatures with optical data alone. This essentially amounts to inverting the top left plots of Figures \ref{fig:res_t_sdss_p05}, \ref{fig:res_t_sdss_p1}, \ref{fig:res_t_john_p05} and \ref{fig:res_t_john_p1}.
In other words, assume we are only able to obtain $gri$-band data, we fit a blackbody to them, and we obtain a best-fit temperature of 30,000\,K: what true temperatures could have led to that measurement, given a known magnitude uncertainty?
The answer to that question, for blackbodies fit to $gri$-band data with 0.05 magnitude uncertainties using the \texttt{lightcurve\_fitting} method, is presented in Figure \ref{fig:inverse_t_eg}. So, for example, if we obtained a best-fit temperature in the range 30,000--35,000\,K, the true temperature is indeed likely in that range, but could be as low as 20,000\,K and there is a probability tail going up to and above 60,000\,K (6th histogram from the bottom in Figure \ref{fig:inverse_t_eg}). The results are similar for $BVRI$-band data and when using the \texttt{pysynphot} method. These are shown, together with 0.1 magnitude uncertainty data, in Figure \ref{fig:inverse_t_all}.
The 16th, 50th, and 84th percentiles of the true temperature distributions that produce a given measured temperature range are presented (as the median and lower and upper bounds) in Tables \ref{tab:inverse_t_flux} (for the \texttt{lightcurve\_fitting} method) and \ref{tab:inverse_t_mag} (for the \texttt{pysynphot} method). The data behind these distributions are publicly available on GitHub\footnote{\url{https://github.com/arcavi/bb_sims}}. Continuing our example where we obtained a best-fit temperature in the range 30,000--35,000\,K by fitting $gri$-band data with 0.05 magnitude uncertainties, the true temperature could be between 25,000\,K and 55,000\,K (68\% confidence bounds) with a median of 35,000\,K.
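The construction of these inverse distributions can be sketched as follows, using synthetic stand-in data (the real distributions come from the simulation ensembles in the repository above): group the (true, best-fit) temperature pairs by best-fit temperature bin, then summarize the true temperatures in each bin.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for the simulation grid: true temperatures and
# best-fit ("measured") temperatures scattered around them
true_T = rng.uniform(5_000, 80_000, size=5_000)
measured_T = true_T * rng.lognormal(0.0, 0.15, size=true_T.size)

# Invert: collect the true temperatures that yield a measurement in a
# given bin, e.g. 30,000--35,000 K, and take the 16/50/84 percentiles
in_bin = (measured_T >= 30_000) & (measured_T < 35_000)
p16, p50, p84 = np.percentile(true_T[in_bin], [16, 50, 84])
print(f"true T: {p50:.0f} -{p50 - p16:.0f} +{p84 - p50:.0f} K")
```

The scatter model here is purely illustrative; in the paper the scatter comes from the full fitting procedure, which is why the real inverse distributions are asymmetric and heavy-tailed toward high temperatures.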
This is a large uncertainty, which needs to be taken into account when fitting hot blackbodies using optical data alone. It becomes especially important for optical-only measured temperatures $\gtrsim$ 25,000\,K.
\begin{deluxetable*}{@{\extracolsep{4pt}}ccccc@{}}
\tablecaption{Median and 68\% confidence bounds of the true temperatures that produce various measured temperatures in the \texttt{lightcurve\_fitting} method for different band and magnitude uncertainty combinations.} \label{tab:inverse_t_flux}
\tablehead{
\colhead{Measured temperature} & \multicolumn{4}{c}{True temperature} \\
\cline{1-1} \cline{2-5}
\colhead{} & \multicolumn{2}{c}{For a $gri$-band measurement} & \multicolumn{2}{c}{For a $BVRI$-band measurement}\\
\cline{2-3} \cline{4-5}
\colhead{} & \colhead{0.05 mag uncertainty} & \colhead{0.1 mag uncertainty} & \colhead{0.05 mag uncertainty} & \colhead{0.1 mag uncertainty} \\
\colhead{($10^3$\,K)} & \colhead{($10^3$\,K)} & \colhead{($10^3$\,K)} & \colhead{($10^3$\,K)} & \colhead{($10^3$\,K)}}
\startdata
5--10 & $5.0_{ -0.0 }^{ +5.0 }$ & $5.0_{ -0.0 }^{ +5.0 }$ & $10.0_{ -5.0 }^{ +0.0 }$ & $5.0_{ -0.0 }^{ +5.0 }$ \\
10--15 & $10.0_{ -0.0 }^{ +5.0 }$ & $15.0_{ -5.0 }^{ +0.0 }$ & $15.0_{ -5.0 }^{ +0.0 }$ & $12.5_{ -2.5 }^{ +2.5 }$ \\
15--20 & $15.0_{ -0.0 }^{ +5.0 }$ & $20.0_{ -5.0 }^{ +15.0 }$ & $20.0_{ -5.0 }^{ +0.0 }$ & $20.0_{ -5.0 }^{ +5.0 }$ \\
20--25 & $25.0_{ -5.0 }^{ +5.0 }$ & $35.0_{ -15.0 }^{ +16.0 }$ & $25.0_{ -5.0 }^{ +0.0 }$ & $25.0_{ -5.0 }^{ +15.0 }$ \\
25--30 & $30.0_{ -5.0 }^{ +10.0 }$ & $35.0_{ -10.0 }^{ +29.2 }$ & $30.0_{ -5.0 }^{ +10.0 }$ & $35.0_{ -10.0 }^{ +20.0 }$ \\
30--35 & $35.0_{ -10.0 }^{ +20.0 }$ & $40.0_{ -15.0 }^{ +25.6 }$ & $35.0_{ -10.0 }^{ +10.0 }$ & $45.0_{ -18.2 }^{ +23.2 }$ \\
35--40 & $45.0_{ -10.0 }^{ +20.0 }$ & $50.0_{ -20.0 }^{ +20.0 }$ & $40.0_{ -10.0 }^{ +15.0 }$ & $45.0_{ -10.0 }^{ +20.0 }$ \\
40--45 & $50.0_{ -15.0 }^{ +20.0 }$ & $47.5_{ -20.3 }^{ +22.5 }$ & $45.0_{ -10.0 }^{ +25.0 }$ & $50.0_{ -15.4 }^{ +20.0 }$ \\
45--50 & $50.0_{ -10.0 }^{ +25.0 }$ & $55.0_{ -20.0 }^{ +20.0 }$ & $52.5_{ -12.5 }^{ +17.5 }$ & $50.0_{ -15.0 }^{ +20.0 }$ \\
50--55 & $55.0_{ -15.0 }^{ +15.2 }$ & $60.0_{ -25.0 }^{ +15.0 }$ & $50.0_{ -10.0 }^{ +20.0 }$ & $55.0_{ -20.0 }^{ +15.0 }$ \\
55--60 & $60.0_{ -15.0 }^{ +15.0 }$ & $55.0_{ -15.0 }^{ +20.0 }$ & $57.5_{ -12.9 }^{ +17.5 }$ & $60.0_{ -20.0 }^{ +15.0 }$ \\
60--65 & $60.0_{ -13.2 }^{ +15.0 }$ & $60.0_{ -20.0 }^{ +15.0 }$ & $60.0_{ -11.0 }^{ +10.0 }$ & $60.0_{ -18.0 }^{ +10.0 }$ \\
65--70 & $60.0_{ -15.0 }^{ +15.0 }$ & $60.0_{ -15.0 }^{ +15.0 }$ & $65.0_{ -15.0 }^{ +10.0 }$ & $65.0_{ -15.8 }^{ +15.0 }$ \\
70--75 & $65.0_{ -11.4 }^{ +10.0 }$ & $65.0_{ -20.0 }^{ +10.0 }$ & $65.0_{ -10.8 }^{ +10.8 }$ & $65.0_{ -10.0 }^{ +10.0 }$ \\
75--80 & $70.0_{ -20.0 }^{ +10.0 }$ & $60.0_{ -10.2 }^{ +3.4 }$ & $70.0_{ -15.0 }^{ +10.0 }$ & $70.0_{ -14.0 }^{ +10.0 }$ \\
\enddata
\tablecomments{Simulations only go up to 80,000\,K. The true upper bound is likely much higher for measured temperatures above 40,000\,K (the true temperature could actually be unbounded from above in certain regimes; see the text for details).}
\end{deluxetable*}
Our results are truncated at 80,000\,K because those are the highest temperatures we simulated, but it is evident from the flattening of the curve in the top left panels of Figures \ref{fig:res_t_sdss_p05}, \ref{fig:res_t_sdss_p1}, \ref{fig:res_t_john_p05} and \ref{fig:res_t_john_p1} that any optical measurement yielding a best-fit temperature of 60,000\,K or above for 0.05 magnitude uncertainties, and 40,000\,K or above for 0.1 magnitude uncertainties, actually sets only a lower limit on the true temperature.
\subsection{Errors in Bolometric Luminosity Estimation}
In many cases, the blackbody temperature $T$ and radius $R$ are used to estimate the bolometric luminosity through $L_{bol}=4{\pi}R^2\sigma_{SB}T^4$ (with $\sigma_{SB}$ the Stefan--Boltzmann constant). One would expect that large errors in the temperature would translate to order-of-magnitude errors in the bolometric luminosity. However, as seen in the example fits in Figure \ref{fig:eg_fits}, there is typically an anticorrelated degeneracy between the temperature and radius, which cancels some of the error in each parameter when estimating the bolometric luminosity.
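The partial cancellation can be illustrated in the Rayleigh--Jeans limit, where the optical flux scales as $R^2T$: a fit that overestimates $T$ by a factor $f$ while holding $R^2T$ fixed underestimates $R$ by $\sqrt{f}$, so the luminosity error grows as $f^3$ rather than the naive $f^4$. A hedged numerical sketch (all values illustrative):

```python
import math

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def l_bol(radius_m, temp_k):
    """Bolometric luminosity of a blackbody, L = 4*pi*R^2*sigma*T^4."""
    return 4.0 * math.pi * radius_m**2 * SIGMA_SB * temp_k**4

R_true, T_true = 1.0e13, 40_000.0  # illustrative true values
f = 1.5                            # fractional temperature overestimate
T_fit = f * T_true
R_fit = R_true / math.sqrt(f)      # R^2*T held fixed (Rayleigh-Jeans flux)

naive_factor = (T_fit / T_true) ** 4                         # f^4 ~ 5.1
actual_factor = l_bol(R_fit, T_fit) / l_bol(R_true, T_true)  # f^3 ~ 3.4
```

A 50\% temperature overestimate thus inflates $L_{bol}$ by a factor $\sim$3.4 rather than $\sim$5, consistent in magnitude with the factor 3--5 errors found below.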
Figure \ref{fig:res_l_sdss} shows the ratio between the measured bolometric luminosity and the true bolometric luminosity for various SDSS and ultraviolet band combinations. When relying on optical data alone, the luminosities of blackbodies up to $\sim$60,000\,K are overestimated on average. For 0.05 magnitude uncertainties, the luminosity overestimation reaches a factor of $\sim$3 (84th ensemble percentile). At higher temperatures, beyond $\sim$60,000\,K, the preference shifts to underestimating the luminosity by factors of $\sim$3 (16th percentile). For 0.1 magnitude uncertainties, the luminosity over- and underestimates reach factors of $\sim$4--5 (at the 84th and 16th percentiles, respectively).
Adding space-based ultraviolet bands reduces these errors significantly with the ensemble average luminosities tracking the true luminosity at all temperatures tested. Similar results are seen for the Johnson-Cousins SEDs (Fig. \ref{fig:res_l_john}). Like in the temperature estimation, there are almost no differences between the two methods (except at higher temperatures in the $gri$-band fits where the \texttt{lightcurve\_fitting} method has a longer tail toward underestimation of the true luminosity).
\section{Summary and Conclusions}
When limited to optical data, large errors in blackbody temperatures can occur, reaching $\sim$10,000\,K at 30,000--40,000\,K (depending on the measurement uncertainties). Beyond 40,000--60,000\,K it is only possible to obtain lower limits on the true temperature. It is important to consider these errors whenever an optical measurement yields a temperature $\gtrsim$ 25,000\,K. We calculate realistic uncertainties for such measurements and present them in Tables \ref{tab:inverse_t_flux} and \ref{tab:inverse_t_mag}. These results refer to fits using log-uniform priors. Uniform priors produce even less accurate results, and are not discussed further. Underestimating the photometric uncertainties by a factor of two has almost no effect on the results (especially when using the \texttt{lightcurve\_fitting} package).
Despite the large errors in temperature measurements of hot blackbodies with optical data alone, the error on the bolometric luminosity calculated from such fits is not orders of magnitude as one might naively expect from the $L_{bol}{\propto}T^4$ dependence. Temperature overestimations partially cancel with radius underestimations (and vice versa), producing bolometric luminosity errors up to factors of 3--5 in 68\% of cases.
Adding ground-based ultraviolet observations reduces these errors somewhat, but space-based ultraviolet observations are required to reduce them significantly, to less than a few hundred K in temperature and 10--20\% in bolometric luminosity.
There are no major differences between the results obtained when using the \texttt{pysynphot} method compared to the \texttt{lightcurve\_fitting} method. However, the \texttt{lightcurve\_fitting} method is significantly faster.
Our results relate to pure and single blackbodies. Departures from such spectra, due to absorption and emission features, extinction, or altogether different underlying spectra, will induce even larger errors. In addition, errors in distance estimation will cause errors in determining the blackbody radius, which in turn will cause errors in determining the bolometric luminosity. Given the degeneracy between temperature and radius fitting, distance errors could also cause additional errors in temperature determination.
Observations in the ultraviolet, currently possible in large numbers only with the {\it Swift} observatory, and soon also with the Ultraviolet Transient Astronomy Satellite \citep[ULTRASAT;][]{Sagiv2014}, are crucial for constraining the emission properties, and hence physics, of high-temperature astrophysical objects and events. When ultraviolet observations are not available, it is important to be aware of the uncertainties quantified here.
~\\
We thank G. Hosseinzadeh for assistance in using the \texttt{lightcurve\_fitting} package and for helpful comments, and D. A. Howell for helpful comments.
I.A. is a CIFAR Azrieli Global Scholar in the Gravity and the Extreme Universe Program and acknowledges support from that program, from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement number 852097), from the Israel Science Foundation (grant number 2752/19), from the United States -- Israel Binational Science Foundation (BSF), and from the Israeli Council for Higher Education Alon Fellowship.
\software{PySynphot \citep{pysynphot},
Lightcurve\_Fitting \citep{lightcurve_fitting},
emcee \citep{Foreman-Mackey2013}.}
\bibliography{refs}{}
\bibliographystyle{aasjournal}
\restartappendixnumbering
\appendix
\section{Differences in Synthetic Magnitudes Generated by the Two Packages} \label{sec:app-diffs}
We generate synthetic magnitudes for blackbodies with a radius of 1,000\,R$_{\sun}$ and temperatures of 5,000, 10,000, 20,000, 30,000 and 60,000\,K using both the \texttt{pysynphot} and \texttt{lightcurve\_fitting} packages. Figure \ref{fig:methodsdiff} shows the differences in magnitudes generated by the two methods.
Some of these differences may be due to the different filter response curves used by each package. However, the largest differences (up to 0.2 magnitudes for some temperatures) are seen in the $w2$ band even though both packages use the same \cite{Breeveld2011} response curves for the {\it Swift} $w1$, $m2$ and $w2$ bands. Therefore there are likely other factors responsible for these inconsistencies.
\section{Results and Analysis for Additional Scenarios}
We present in Figures \ref{fig:res_t_sdss_p1_priorcomp}, \ref{fig:res_t_john_p05_priorcomp} and \ref{fig:res_t_john_p1_priorcomp} our prior comparison results for the SDSS SEDs with 0.1 magnitude errors and the Johnson-Cousins SEDs with both the 0.05 and 0.1 magnitude errors. The results are similar to those presented and analysed in the main text for the SDSS SEDs with 0.05 magnitude errors.
We present in Figures \ref{fig:res_t_john_p05} and \ref{fig:res_t_john_p1} our full simulation results for the Johnson-Cousins SEDs using the log-uniform priors. The results are similar to those presented and analysed in the main text for the SDSS SEDs with the same priors.
In Figures \ref{fig:res_t_sdss_p05_err_und} -- \ref{fig:res_t_john_p1_err_und} we present the full simulation results for the fits with uncertainties underestimated by a factor of two. We find no major differences compared to the fits with correctly estimated uncertainties, except that the uncertainties in the fitted temperature are underestimated when using the \texttt{pysynphot} method.
The posterior distributions of true temperatures that produce various measured temperatures for the different optical SEDs, assumed magnitude uncertainties, and methods (using log-uniform priors) are shown in Figure \ref{fig:inverse_t_all}. The median and $1\sigma$ bounds of these distributions for the \texttt{pysynphot} method are presented in Table \ref{tab:inverse_t_mag}.
\begin{deluxetable*}{@{\extracolsep{4pt}}ccccc@{}}
\tablecaption{Same as Table \ref{tab:inverse_t_flux} but for the \texttt{pysynphot} method.} \label{tab:inverse_t_mag}
\tablehead{
\colhead{Measured temperature} & \multicolumn{4}{c}{True temperature} \\
\cline{1-1} \cline{2-5}
\colhead{} & \multicolumn{2}{c}{For a $gri$-band measurement} & \multicolumn{2}{c}{For a $BVRI$-band measurement}\\
\cline{2-3} \cline{4-5}
\colhead{} & \colhead{0.05 mag uncertainty} & \colhead{0.1 mag uncertainty} & \colhead{0.05 mag uncertainty} & \colhead{0.1 mag uncertainty} \\
\colhead{($10^3$\,K)} & \colhead{($10^3$\,K)} & \colhead{($10^3$\,K)} & \colhead{($10^3$\,K)} & \colhead{($10^3$\,K)}}
\startdata
5--10 & $5.0_{ -0.0 }^{ +5.0 }$ & $5.0_{ -0.0 }^{ +5.0 }$ & $5.0_{ -0.0 }^{ +5.0 }$ & $5.0_{ -0.0 }^{ +5.0 }$ \\
10--15 & $10.0_{ -0.0 }^{ +5.0 }$ & $10.0_{ -0.0 }^{ +5.0 }$ & $10.0_{ -0.0 }^{ +5.0 }$ & $10.0_{ -0.0 }^{ +5.0 }$ \\
15--20 & $15.0_{ -0.0 }^{ +5.0 }$ & $15.0_{ -0.0 }^{ +10.0 }$ & $15.0_{ -0.0 }^{ +5.0 }$ & $20.0_{ -5.0 }^{ +5.0 }$ \\
20--25 & $25.0_{ -5.0 }^{ +5.0 }$ & $25.0_{ -8.6 }^{ +10.0 }$ & $22.5_{ -2.5 }^{ +2.5 }$ & $25.0_{ -5.0 }^{ +10.0 }$ \\
25--30 & $30.0_{ -5.0 }^{ +5.0 }$ & $30.0_{ -10.0 }^{ +15.0 }$ & $30.0_{ -5.0 }^{ +5.0 }$ & $25.0_{ -5.0 }^{ +15.0 }$ \\
30--35 & $35.0_{ -10.0 }^{ +10.0 }$ & $35.0_{ -10.0 }^{ +20.0 }$ & $35.0_{ -6.0 }^{ +6.0 }$ & $35.0_{ -10.0 }^{ +15.0 }$ \\
35--40 & $40.0_{ -10.0 }^{ +15.0 }$ & $40.0_{ -15.0 }^{ +25.0 }$ & $35.0_{ -5.0 }^{ +15.0 }$ & $40.0_{ -10.0 }^{ +20.0 }$ \\
40--45 & $45.0_{ -10.0 }^{ +15.0 }$ & $45.0_{ -15.0 }^{ +20.0 }$ & $45.0_{ -10.0 }^{ +20.0 }$ & $50.0_{ -20.0 }^{ +17.0 }$ \\
45--50 & $50.0_{ -15.0 }^{ +18.4 }$ & $50.0_{ -20.0 }^{ +20.4 }$ & $45.0_{ -10.0 }^{ +20.0 }$ & $50.0_{ -15.0 }^{ +20.0 }$ \\
50--55 & $55.0_{ -15.0 }^{ +15.0 }$ & $60.0_{ -20.0 }^{ +15.0 }$ & $55.0_{ -10.0 }^{ +14.8 }$ & $55.0_{ -10.0 }^{ +16.6 }$ \\
55--60 & $60.0_{ -15.0 }^{ +15.0 }$ & $60.0_{ -20.0 }^{ +15.0 }$ & $55.0_{ -10.0 }^{ +20.0 }$ & $55.0_{ -15.0 }^{ +20.0 }$ \\
60--65 & $60.0_{ -15.0 }^{ +15.0 }$ & $60.0_{ -15.0 }^{ +15.0 }$ & $60.0_{ -10.0 }^{ +15.0 }$ & $60.0_{ -19.8 }^{ +15.0 }$ \\
65--70 & $65.0_{ -15.0 }^{ +10.0 }$ & $60.0_{ -15.0 }^{ +15.0 }$ & $62.5_{ -7.5 }^{ +12.5 }$ & $65.0_{ -15.0 }^{ +10.0 }$ \\
70--75 & $67.5_{ -17.5 }^{ +7.5 }$ & $70.0_{ -10.0 }^{ +10.0 }$ & $65.0_{ -15.0 }^{ +10.0 }$ & $65.0_{ -10.0 }^{ +15.0 }$ \\
75--80 & $70.0_{ -12.0 }^{ +10.0 }$ & $62.5_{ -6.9 }^{ +11.9 }$ & $75.0_{ -10.0 }^{ +5.0 }$ & $75.0_{ -17.8 }^{ +5.0 }$ \\
\enddata
\end{deluxetable*}
The bolometric luminosity errors for the Johnson-Cousins SEDs (assuming log-uniform priors) are presented in Figure \ref{fig:res_l_john}. These results are similar to the results presented in the main text for the SDSS SEDs using the same priors.
Title:
New Accretion Constraint on the Evaporation of Primordial Black Holes

Abstract: We investigate the processes of evaporation and accretion of primordial black holes during the radiation-dominated and matter-dominated eras. These two processes are usually treated as independent of each other, with their effects on the primordial black hole mass computed separately. Our calculations indicate that this assumption leads to incorrect results outside certain limits: it is a mistake to treat the event horizon of a primordial black hole as static when computing its evaporation, since the radius is constantly changing due to accretion. We also show that accounting for the dynamic event horizon can, for some masses and at some times, shut down the Hawking evaporation process. This study is more accurate and detailed than our previous one, tracking the mass evolution of primordial black holes from formation to the end of the matter-dominated era under both main processes governing black holes, evaporation and accretion.

https://export.arxiv.org/pdf/2208.04197
\definecolor{orange}{rgb}{0.9,0.45,0}
\def\CovDev{D}
\def\Res{{\mathcal R}}
\def\Gammaflat{\hat \Gamma}
\def\metricflat{\hat \gamma}
\def\Dflat{\hat {\mathcal D}}
\def\part_n{\partial_\perp}
\def\Lie{\mathcal{L}}
\def\A{\mathcal{X}}
\def\Aphi{\A_{\phi}}
\def\hAphi{\hat{\A}_{\phi}}
\def\E{\mathcal{E}}
\def\Ham{\mathcal{H}}
\def\M{\mathcal{M}}
\def\R{\mathcal{R}}
\def\p{\partial}
\def\hg{\hat{\gamma}}
\def\hA{\hat{A}}
\def\hD{\hat{D}}
\def\hE{\hat{E}}
\def\hR{\hat{R}}
\def\hcA{\hat{\mathcal{A}}}
\def\hDelt{\hat{\triangle}}
\def\na{\nabla}
\def\dif{{\rm{d}}}
\def\non{\nonumber}
\newcommand{\erf}{\textrm{erf}}
\renewcommand{\t}{\times}
\long\def\symbolfootnote[#1]#2{\begingroup%
\def\thefootnote{\fnsymbol{footnote}}\footnote[#1]{#2}\endgroup}
\title{New Accretion Constraint on the Evaporation of Primordial
Black Holes}
\author{Seyed Sajad Tabasi}
\email{sstabasi98@gmail.com}
\affiliation{Department of Physics, Sharif University of Technology, P. O. Box 11155-9161, Tehran, Iran}
\affiliation{PDAT Laboratory, Department of Physics, K. N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran}
\author{Mahsa Berahman}
\email{mahsa.berahman@email.kntu.ac.ir}
\affiliation{Department of Physics, K.N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran}
\affiliation{PDAT Laboratory, Department of Physics, K. N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran}
\author{Javad T. Firouzjaee}
\email{firouzjaee@kntu.ac.ir}
\affiliation{Department of Physics, K.N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran}
\affiliation{PDAT Laboratory, Department of Physics, K. N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran}
\affiliation{School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran, Iran}
\section{Introduction}
The detection of gravitational waves generated by the mergers of two black holes \cite{LIGOScientific:2016aoc, LIGOScientific:2021djp} has led to renewed interest in Primordial Black Holes (PBHs) \cite{Sasaki:2018dmp,Carr:2020gox,Green:2020jor}, as they could be part of a fraction of the events observed by the LIGO/Virgo/KAGRA collaboration \cite{Hutsi:2020sol,DeLuca:2021wjr, Franciolini:2021tla}.
PBHs may be formed through the gravitational collapse of rare overdense regions upon horizon entry in the early stages of the universe's evolution. The collapse could take place during the radiation-dominated era, where PBHs are generated only if the initial amplitude of the density perturbation exceeds a large threshold (see, e.g., \cite{Niemeyer:1999ak,Shibata:1999zs,Allahyari:2016osl, Musco:2020jjb}).
There are two main features of PBH dynamics: first, evaporation by Hawking radiation, and second, accretion, driven by the black hole's strong gravity. The Hawking radiation flux of a PBH is not independent of its accretion flux \cite{Firouzjaee:2014zfa,Firouzjaee:2015bqa,Firouzjaee:2015wps}.
Since all stationary BHs evaporate due to Hawking radiation \cite{Hawking:1975vcx}, losing their mass on a timescale related to their initial mass by $\tau \sim M^3$, PBHs with initial mass less than $10^{15}\,g$ have entirely evaporated by now.
Since accretion can overcome Hawking radiation during the radiation-dominated era and cause the PBH radius to grow, the evaporation constraint on PBHs is reduced from $10^{15}\,g$ to $10^{14}\,g$ \cite{Tabasi:2021cxo}. The constraints therefore extend safely down to $M\geq 10^{14}\,g$, which widens the remaining possible PBH mass windows for explaining dark matter.
These two dynamical features help determine the abundance of PBHs that contribute to the detected gravitational-wave events and to the dark matter mass fraction.
The abundance of PBHs is constrained by observations in different mass ranges (for a comprehensive review, see \cite{Carr:2020gox}).
For example, Ricotti, Ostriker, and Mack \cite{Ricotti:2007au} derived strong constraints from the cosmic microwave background (CMB) frequency spectrum and temperature and polarization anisotropies for PBHs more massive than one solar mass. The basic idea behind these constraints is that PBHs accrete primordial gas in the early universe and then convert a fraction of the accreted mass to radiation, which affects the CMB. To proceed, one first has to model the PBH accretion to quantify the mass as a function of time. Second, the type of accreting flux (gas) and the era of the universe in which the PBHs evolve determine the PBH mass spectrum.
The rest of this paper is organized as follows. In section \rom{2} we give an overview of some general cosmological equations. In section \rom{3}, we explain the evaporation process and the equations governing it, and then do the same for the accretion process. In section \rom{4}, the equations for the accretion of matter and radiation are analyzed. In section \rom{5}, given the significance of the two eras, radiation-dominated and matter-dominated, we examine the mass evolution due to the accretion of matter and radiation in each era separately; the mass is plotted as a function of time, and the effects of the accretion of radiation and matter are discussed. In section \rom{6}, we discuss the presence or absence of evaporation by examining the rate of increase of the PBH radius due to accretion.
\section{General Equations}
All PBHs were formed in the radiation-dominated era. The study of the PBH mass gives much information about their evolutionary processes and effects on the surrounding environment. The mass of a PBH that formed at time $t$ after the Big Bang is comparable to or less than the Hubble mass \cite{Carr:2009jm}
\begin{equation}
\label{e1}
M_{PBH} \sim \frac{c^3t}{G}\sim 10^{15}(\frac{t}{10^{-23}s})g,
\end{equation}
where $c\simeq3\times10^{8}\,m\,s^{-1}$ is the speed of light and $G\simeq6.67\times10^{-11}\,m^{3}\,kg^{-1}\,s^{-2}$ is the gravitational constant.
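As a quick numerical check of Eq.\eqref{e1}, using the constants quoted above:

```python
# Order-of-magnitude check of M_PBH ~ c^3 t / G at t = 1e-23 s
c = 3.0e8        # speed of light, m/s
G = 6.67e-11     # gravitational constant, m^3 kg^-1 s^-2
t = 1.0e-23      # formation time, s

M_kg = c**3 * t / G
M_g = M_kg * 1.0e3   # convert kg -> g
print(f"M_PBH ~ {M_g:.1e} g")   # ~4e15 g, i.e. of order 1e15 g
```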
The cosmological evolution of PBHs, such as accretion, evaporation, and merging, can significantly impact PBHs mass and release radiation, injecting energy into the surrounding medium, strongly affecting its thermal state, and leaving influential observable signatures \cite{Villanueva-Domingo:2021spv}.
To study PBHs, we need to follow the universe's evolution. The Friedmann equations describe the homogeneous and isotropic universe as \cite{Rice:2017avg}
\begin{equation}
\label{e2}
(\frac{\dot{a}}{a})^2+\frac{kc^2}{a^2}=\frac{8\pi G}{3}\rho,
\end{equation}
\begin{equation}
\label{e3}
\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}(\rho+\frac{3P}{c^2}),
\end{equation}
and the total energy conservation equation is
\begin{equation}
\label{e4}
\dot{\rho}+3H(\rho+p)=0.
\end{equation}
The general equation of state is $p = \omega\rho$, where $\omega=0$ for matter, $\omega=1/3$ for radiation, and $\omega=-1$ for the cosmological constant; thus we can rewrite Eq.\eqref{e4} as \cite{Nayak:2011sk}
\begin{equation}
\label{e5}
\begin{split}
\rho&=\rho_{cr}\left(\frac{a}{a_{0}}\right)^{-3(1+\omega)},\\
\rho(a)&\propto
\begin{cases}
a^{-4} & \text{Radiation}\\
a^{-3} & \text{Matter} \\
\text{constant} & \text{Vacuum}
\end{cases}
\end{split}
\end{equation}
and by substituting Eq.\eqref{e5} into Eq.\eqref{e2} we have
\begin{equation}
\label{e6}
a(t)\propto \begin{cases}
t^{\frac{1}{2}} & \text{Radiation} \\
t^{\frac{2}{3}} & \text{Matter} \\
e^{H_{0}t} & \text{Vacuum}
\end{cases}.
\end{equation}
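For instance, the radiation-dominated scaling follows directly from the flat ($k=0$) case of Eq.\eqref{e2} with $\rho\propto a^{-4}$:
\begin{equation*}
\left(\frac{\dot a}{a}\right)^2 \propto a^{-4}
\;\Longrightarrow\;
\dot a \propto a^{-1}
\;\Longrightarrow\;
a\,\dif a \propto \dif t
\;\Longrightarrow\;
a(t) \propto t^{1/2},
\end{equation*}
and the matter- and vacuum-dominated cases follow in the same way.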
Now we want to calculate the rate of mass change of PBHs through the evaporation and accretion processes.
\subsection{Evaporation}
By inspecting the quantum properties of black holes, Hawking showed that they emit particles with a thermal spectrum \cite{hawking-75}. The properties of the emitted particles depend on the mass, angular momentum, and charge of the BH \cite{Cheek:2021odj}. %
We consider PBHs as Schwarzschild black holes, whose mass-loss rate is \cite{Carr:2009jm}
\begin{equation}
\label{e7}
\frac{dM_{PBH}}{dt}=-f_{eva}4\pi R_{PBH}^{2}c\rho_{r},
\end{equation}
where $f_{eva}$ is the evaporation efficiency factor, $R_{PBH}$ is the PBH radius, and $\rho_{r}$ is the radiation density. The evaporation efficiency factor plays a vital role in the evaporation rate, and its value depends on the PBH's physical parameters and environment. The function $\rho_{r}$ is given by \cite{Rice:2017avg}
\begin{equation}
\label{e8}
\rho_{r}=\frac{\pi^2}{30}g_{*}(T_{PBH})\frac{(T_{PBH}k_{B})^4}{{\hbar}^3c^5},
\end{equation}
where $g_{*}$ is the number of relativistic particle degrees of freedom, obtained from
\begin{equation}
\label{e9}
g_{*}(T_{PBH})=\sum_{i}(\omega_{i}g_{i}).
\end{equation}
In order to get a numerical value for $g_{*}(T_{PBH})$, we need the values of $\omega_{i}$ and $g_{i}$:
\begin{equation}
\label{e10}
\begin{split}
\omega_{i}&=
\begin{cases}
2s_{i}+1 & \text{massive particles}\\
2 & \text{massless species} \\
1 & s_{i}=0
\end{cases}\\
g_{i}(T_{PBH})&=
\begin{cases}
1.82 & s=0 \\
1.0 & s=\frac{1}{2}\\
0.41 & s=1\\
0.05 & s=2
\end{cases}
\end{split}
\end{equation}
where $s_{i}$ is the particle spin. %
Hence, if $M_{PBH}\ll10^{11}\,g$, then for Standard Model particles $g_{*}(T_{PBH})\simeq 108$. In the Minimal Supersymmetric Standard Model (MSSM), by contrast, $g_{*}(T_{PBH})\simeq 316$ \cite{Hooper:2020otu,Keith:2020jww}.
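As an illustration of the sum in Eq.\eqref{e9} with the weights of Eq.\eqref{e10}, the contributions of a few familiar species can be tabulated. This is a partial, illustrative sum only; reproducing $g_{*}\simeq 108$ requires the full Standard Model particle content:

```python
# Per-species contribution to g_* = sum_i omega_i * g_i, using the
# weights of Eq. (e10): omega = 2 for massless species, 2s+1 for
# massive ones with s > 0, and 1 for s = 0; g depends on spin s.
G_SPIN = {0: 1.82, 0.5: 1.0, 1: 0.41, 2: 0.05}

def contribution(spin, massless, multiplicity=1):
    """omega_i * g_i for one species, times an internal multiplicity
    (e.g. color/charge states). Multiplicities here are illustrative."""
    omega = 2 if massless else (2 * spin + 1 if spin > 0 else 1)
    return multiplicity * omega * G_SPIN[spin]

photon = contribution(1, massless=True)       # 2 * 0.41 = 0.82
graviton = contribution(2, massless=True)     # 2 * 0.05 = 0.10
electron = contribution(0.5, massless=False)  # 2 * 1.0  = 2.0
```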
Although we continue from Eq.\eqref{e7}, it can be rewritten using Eq.\eqref{e8} as
\begin{equation}
\label{e11}
\frac{dM_{PBH}}{dt}=-\frac{8\pi^3}{15}\frac{f_{eva}g_{*}}{4}\frac{M_{PBH}^2}{\hbar c^{6} M_{Pl}^4}(k_{B}T_{PBH})^4.
\end{equation}
Here $T_{PBH}$ is the temperature of the radiation emitted by the PBH, which equals the PBH temperature itself. As we will demonstrate, the PBH temperature is critical in both the accretion and evaporation processes; it is given by \cite{Carr:2009jm}
\begin{equation}
\label{e12}
T_{PBH}=\frac{\hbar c^3}{8\pi G k_{B}M_{PBH}} \simeq10^{-7}\,\text{K}\left(\frac{M_{PBH}}{M_{\odot}}\right)^{-1}.
\end{equation}
This process slowly reduces the PBH mass; if evaporation is the dominant process, the lifetime of a PBH with initial mass $M$ follows from \cite{Chisholm:2011kn}
\begin{equation}
\label{e13}
\tau(M)\simeq (10^{-26}\,\text{s})\left(\frac{M}{1\,\text{g}}\right)^3.
\end{equation}
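As a quick numerical sanity check of Eqs.\eqref{e12} and \eqref{e13}, the following Python sketch (ours, not part of the original analysis; SI constants) evaluates the Hawking temperature and the order-of-magnitude evaporation lifetime:

```python
import math

# Physical constants in SI units
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 / (kg s^2)
k_B  = 1.380649e-23      # J / K
M_sun = 1.989e30         # kg

def hawking_temperature(M_kg):
    """Hawking temperature of a Schwarzschild black hole, Eq. (e12), in K."""
    return hbar * c**3 / (8 * math.pi * G * k_B * M_kg)

def evaporation_lifetime(M_g):
    """Order-of-magnitude evaporation lifetime, Eq. (e13), in seconds."""
    return 1e-26 * M_g**3

# A solar-mass black hole: T ~ 6e-8 K, consistent with the 1e-7 K estimate above
T_sun = hawking_temperature(M_sun)
# A 1e15 g PBH evaporates on a timescale ~1e19 s, comparable to cosmological times
tau_15 = evaporation_lifetime(1e15)
```

A solar-mass black hole is thus far colder than the CMB, so only much lighter PBHs evaporate appreciably.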
\subsection{Accretion}
As mentioned, accretion significantly affects the evolution of PBHs. Matter and photons falling onto a PBH increase its mass and other observable parameters.
The physical parameters of the cosmological fluid determine the accretion rate in each cosmic epoch \cite{Rice:2017avg}. In this study we focus on the accretion equations, and all calculations assume spherical symmetry. We use the Bondi-Hoyle accretion model for this purpose \cite{Bondi:1952ni}
\begin{equation}
\label{e14}
\frac{dM_{PBH}}{dt}=4\pi R_{PBH}^2\rho v.
\end{equation}
For accretion of radiation, $v={c}/{\sqrt{3}}$ and $R_{PBH}=R_{s}={2GM_{PBH}}/{c^2}$. Therefore Eq.\eqref{e14} becomes
\begin{equation}
\label{e15}
\frac{dM_{PBH}}{dt}=16\pi G^2 M_{PBH}^2\rho_{r} (\frac{c}{\sqrt{3}})^{-3} f_{acc},
\end{equation}
where $f_{acc}$ is the accretion efficiency. We now consider the conditions under which a PBH accretes matter. This case is more complex and requires more information about the environment. To obtain the rate of mass increase from baryonic matter, we use
\begin{equation}
\label{e16}
\frac{dM_{b}}{dt}=\lambda4\pi m_{H} n_{gas} v_{eff} r_{B}^2.
\end{equation}
Here, $n_{gas}$ is the number density, $r_{B}=G M_{PBH} v_{eff}^{-2}$ is the Bondi-Hoyle radius, and $v_{eff}=(v_{rel}^2+c_{s}^2)^{\frac{1}{2}}$ is the effective velocity of the PBH, expressed in terms of the PBH's relative velocity $v_{rel}$ with respect to the gas, which has sound speed $c_{s}$ \cite{Ricotti:2007au}. The gas viscosity, Compton drag, Compton cooling by CMB photons, and the free-electron fraction determine the value of the dimensionless accretion rate $\lambda$, which is decisive for the final mass. Provided both Compton drag and Compton cooling are negligible, the classic Bondi problem can be solved for an adiabatic gas \cite{Ali-Haimoud:2016mbv}.
\section{Accretion of the Universe's Components}
The universe is made up of baryonic matter (gas), dark matter, radiation, and dark energy. In this section, we examine the accretion of these components. Because we study the equations only up to the end of the matter-dominated era, during which mass gain from matter and radiation dominates, we focus on the accretion of matter and radiation and neglect the accretion of dark energy. In the following subsections, we examine these two accretion regimes individually.
\subsection{Accretion of radiation}
The presence of CMB anisotropies and of fluctuations on scales larger than the Hubble radius at recombination points strongly to an early inflationary epoch \cite{Brandenberger:2012zb}. The thermal bath resulting from reheating is an essential aspect of inflation, so we can treat the universe as a near-perfect black body \cite{Allahverdi:2005fq}. Since the equations for accretion of radiation differ between the radiation-dominated and matter-dominated eras, we discuss them separately. In the radiation-dominated era, photons from the thermal bath fall into PBHs and increase their mass. As mentioned, we assume spherically symmetric accretion and use Eq.\eqref{e15}. To follow the evolution of PBHs, we need this equation in terms of time or redshift. Using Eq.\eqref{e5}, $\rho_{r}=\rho_{cr}(a/ a_{0})^{-4}$; Eq.\eqref{e6} then introduces the time parameter, and Eq.\eqref{e15} becomes \cite{Nayak:2011sk}
\begin{equation}
\label{e17}
\begin{split}
\frac{dM_{PBH}}{dt}=16\pi G^2 \rho_{cr} \Omega_{r}^0 (\frac{c}{\sqrt{3}})^{-3} f_{acc}\\
\times(t_{1}^{-\frac{2}{3}} t_{2}^\frac{8}{3} e^{-4H_{0}(t_{2}-t_{0})}) (\frac{M_{PBH}}{t})^2
\end{split},
\end{equation}
where $\rho_{cr}=9.2\times10^{-30}\,{g}/{cm^3}$ is the critical energy density, $\Omega_{r}^0=4.2\times10^{-5}$ is the present-day relative contribution of relativistic particles, $t_{1}=2.1\times10^{12}\,s$ marks the end of the radiation-dominated era, $t_{2}=2.4\times10^{17}\,s$ marks the end of the matter-dominated era, and $t_{0}=4.4\times10^{17}\,s$ is the present time \cite{Rubakov:2017xzr}. Solving the differential equation \eqref{e17} yields the mass of PBHs due to accretion of radiation in the radiation-dominated era as a function of time:
\begin{equation}
\label{e18}
\begin{split}
M_{ R-RD}(t)=\left(\frac{1}{M_{i}}+1.3\times10^{-35}f_{acc}\left(\frac{1}{t}-\frac{1}{t_{i}}\right)\right)^{-1}\\
\text{for} \hspace{0.5cm}t_{i}<t<t_{1}\hspace{2cm}
\end{split},
\end{equation}
where $M_{i}$ is the initial mass and $t_{i}$ is the formation time of the PBH. Eq.\eqref{e18} gives the mass resulting from accretion of radiation at any time during the radiation-dominated era, and in particular the mass at the end of that era. Correspondingly, in the matter-dominated era $\rho_{r}=\rho_{cr}({a}/{a_{0}})^{-4}$ and $a(t)\propto t^{{2}/{3}}$, so we have
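Since the exponential factor in Eq.\eqref{e17} is constant, that equation has the form $dM/dt = A\,M^2/t^2$ with $A$ constant, whose exact solution is the closed form of Eq.\eqref{e18}. The following Python sketch (ours; arbitrary, non-physical illustrative values for $A$, $M_i$, $t_i$) verifies this by comparing the closed form against a stdlib RK4 integration:

```python
def closed_form(t, M_i, t_i, A):
    """Exact solution of dM/dt = A*M**2/t**2:  M(t) = (1/M_i + A*(1/t - 1/t_i))**-1."""
    return 1.0 / (1.0 / M_i + A * (1.0 / t - 1.0 / t_i))

def rk4(t_i, t_f, M_i, A, steps=20000):
    """Integrate dM/dt = A*M**2/t**2 with the classical 4th-order Runge-Kutta scheme."""
    h = (t_f - t_i) / steps
    t, M = t_i, M_i
    f = lambda t, M: A * M**2 / t**2
    for _ in range(steps):
        k1 = f(t, M)
        k2 = f(t + h / 2, M + h / 2 * k1)
        k3 = f(t + h / 2, M + h / 2 * k2)
        k4 = f(t + h, M + h * k3)
        M += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return M

# Arbitrary illustrative values (not physical):
M_i, t_i, t_f, A = 1.0, 1.0, 10.0, 0.5
# closed_form(t_f, M_i, t_i, A) and rk4(t_i, t_f, M_i, A) agree to high precision
```

The same structure, with a different power of $t$, underlies the matter-era solution of Eq.\eqref{e20}.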
\begin{equation}
\label{e19}
\begin{split}
\frac{dM_{PBH}}{dt}=16\pi G^2 \rho_{cr} \Omega_{r}^0 (\frac{c}{\sqrt{3}})^{-3} f_{acc}\\ \times(t_{2}^{-\frac{8}{3}} e^{-4H_{0}(t_{2}-t_{0})}) \frac{M_{PBH}^2}{t^{\frac{8}{3}}}
\end{split}.
\end{equation}
Solving the differential equation \eqref{e19} gives the mass evolution through accretion of radiation in the matter-dominated era. The resulting mass is
\begin{equation}
\label{e20}
\begin{split}
M_{R-MD}(t)=\left(\frac{1}{M_{i}}+3.5\times10^{-27}f_{acc}\left(\frac{1}{t^{\frac{5}{3}}}-\frac{1}{t_{1}^{\frac{5}{3}}}\right)\right)^{-1}\\\text{for} \hspace{0.5cm} t_{1}<t<t_{2}\hspace{3cm}
\end{split}.
\end{equation}
The importance and consequences of correctly determining the accretion efficiency factor are studied in detail in Ref.\cite{Tabasi:2021cxo}. In the literature, values between 0.05 and 0.2 are usually adopted; in all calculations of this paper we take $f_{acc}=0.1$.
\subsection{Accretion of Matter}
Throughout this paper, we assume that a PBH of point mass $M$ is immersed in hydrogen gas. To proceed, we return to Eq.\eqref{e16} and examine each term. The mean cosmic gas density is
\begin{equation}
\label{e21}
n_{gas}\simeq 200\, cm^{-3} \left(\frac{1+z}{1000}\right)^3.
\end{equation}
As mentioned above, $v_{eff}$ combines $c_{s}$ and the relative velocity $v_{rel}$ between the PBH and the medium, averaged over a Gaussian distribution. From \cite{Yang:2021agk}
we have
\begin{equation}
\label{e22}
\sqrt{\langle{v_{L}^2}\rangle}\simeq \min\left[1,\frac{1+z}{1000}\right]\times 30\, km/s.
\end{equation}
Given $c_{s}=(5.7\, km\, s^{-1})(T_{gas}/2730)^{1/2}$, computing the sound speed requires the gas temperature. Before decoupling, the gas temperature was roughly equal to the CMB temperature; afterwards, $T_{gas}$ decreased adiabatically with the cosmic expansion. Therefore, $c_{s}$ can be written approximately as \cite{Ali-Haimoud:2016mbv}
\begin{equation}
\label{e23}
c_{s}\simeq
\begin{cases}
(5.7\, km\, s^{-1})(\frac{1+z}{1000})^{\frac{1}{2}} & \hspace{0.5cm} z\gg132 \\
1800\, km\, s^{-1} & \hspace{0.5cm} z\ll132 \\
\end{cases}.\hspace{1.2cm}
\end{equation}
Finally, we introduce
\begin{equation}
\label{e24}
v_{eff}\simeq
\begin{cases}
c_{s}\mathcal{M}^{\frac{1}{2}}[3\sqrt{\frac{2}{2\pi}B(\frac{3}{2},\frac{3}{2})}]^{-\frac{1}{6}} & \hspace{0.5cm} \mathcal{M}\gg1 \\
c_{s} & \hspace{0.5cm} \mathcal{M}\ll1 \\
\end{cases},\hspace{0.3cm}
\end{equation}
where $B(x,y)$ is the beta function and the Mach number $\mathcal{M}$ is defined as $\mathcal{M}\equiv{{\sqrt{\langle{v_{L}^2}\rangle}}/{c_{s}}}$ \cite{Mena:2019nhm}.
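Eqs.\eqref{e22}--\eqref{e24} can be combined into a single routine. The Python sketch below (ours; it applies the asymptotic $\mathcal{M}\gg1$ and $\mathcal{M}\ll1$ formulas at the crude threshold $\mathcal{M}=1$, an assumption for illustration) evaluates $v_{eff}(z)$, computing $B(3/2,3/2)=\Gamma(3/2)^2/\Gamma(3)=\pi/8$ via the gamma function:

```python
import math

def v_L(z):
    """RMS PBH relative velocity, Eq. (e22), in km/s."""
    return min(1.0, (1 + z) / 1000.0) * 30.0

def c_s(z):
    """Approximate gas sound speed, Eq. (e23), in km/s (piecewise in redshift)."""
    return 5.7 * math.sqrt((1 + z) / 1000.0) if z > 132 else 1800.0

def v_eff(z):
    """Effective velocity, Eq. (e24), in km/s; asymptotic limits switched at M = 1."""
    cs = c_s(z)
    mach = v_L(z) / cs
    if mach > 1:  # supersonic branch
        B = math.gamma(1.5) ** 2 / math.gamma(3.0)  # beta(3/2, 3/2) = pi/8
        return cs * math.sqrt(mach) * (3 * math.sqrt(2 / (2 * math.pi) * B)) ** (-1 / 6)
    return cs  # subsonic branch
```

Near recombination ($z \sim 1000$) the motion is supersonic, $\mathcal{M} \approx 5$, and $v_{eff}$ is a few times $c_s$.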
The value of $\lambda$ must be determined as a function of redshift. We assume the free electron fraction $x_{e}$ is constant and equal to the background value $\overline{x_{e}}=1$. We also need the characteristic dimensionless Compton drag rate $\beta$ and Compton cooling rate $\gamma$ as functions of redshift \cite{DeLuca:2020fpg}:
\begin{equation}
\label{e25}
\begin{split}
\beta=(\frac{M}{10^4M_{\odot}})(\frac{z+1}{1000})^{\frac{3}{2}}(\frac{v_{eff}}{5700})^{-3}\\ \times[0.275+1.45(\frac{x_{e}}{0.01})(\frac{1+z}{1000})^{\frac{5}{2}}]
\end{split},
\end{equation}
\begin{equation}
\label{e26}
\gamma=\frac{2 m_{p}}{m_{e}(1+x_{e})}\beta.
\end{equation}
Although $\lambda$ can vary according to how $\gamma$ and $\beta$ relate to each other, the following relationship applies at all redshifts \cite{Ali-Haimoud:2016mbv}
\begin{equation}
\label{e27}
\lambda(\beta,\gamma)\approx\frac{\lambda(\gamma; \beta \ll 1)\,\lambda(\beta;\gamma\gg 1)}{\lambda_{iso}}.
\end{equation}
In this equation, $\lambda_{iso}=1.12$ in the isothermal case and $\lambda_{ad}=0.12$ in the adiabatic case. $\lambda(\gamma;\beta\ll1)$ is the numerical solution of the accretion rate for $\beta\ll1$ and arbitrary $\gamma$; similarly, $\lambda(\beta;\gamma\gg 1)$ is the numerical solution for $\gamma\gg1$ and arbitrary $\beta$. Eqs.\eqref{e28} and \eqref{e29} give these limiting forms of $\lambda$:
\begin{equation}
\label{e28}
\lambda(\gamma;\beta\ll1)\approx\lambda_{ad}+(\lambda_{iso}-\lambda_{ad})(\frac{\gamma^2}{88+\gamma^2})^{0.22},
\end{equation}
\begin{equation}
\label{e29}
\lambda(\beta;\gamma \gg 1)\approx\exp\left[\frac{4.5}{3+\beta^{\frac{3}{4}}}\right]\times\frac{1}{(\sqrt{1+\beta}+1)^2}.
\end{equation}
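The interpolation of Eqs.\eqref{e27}--\eqref{e29} is easy to implement and check. The following Python sketch (ours, for illustration) encodes the two limiting fits and their combination; in the limit $\beta \to 0$, $\gamma \to \infty$ it recovers the isothermal value $\lambda_{iso} \approx 1.12$, as it should:

```python
import math

LAM_ISO, LAM_AD = 1.12, 0.12  # isothermal / adiabatic limiting eigenvalues

def lam_small_beta(gamma):
    """Eq. (e28): accretion eigenvalue for beta << 1, arbitrary gamma."""
    return LAM_AD + (LAM_ISO - LAM_AD) * (gamma**2 / (88 + gamma**2)) ** 0.22

def lam_large_gamma(beta):
    """Eq. (e29): accretion eigenvalue for gamma >> 1, arbitrary beta."""
    return math.exp(4.5 / (3 + beta**0.75)) / (math.sqrt(1 + beta) + 1) ** 2

def lam(beta, gamma):
    """Eq. (e27): interpolation valid across all redshifts."""
    return lam_small_beta(gamma) * lam_large_gamma(beta) / LAM_ISO

# Sanity check: beta -> 0, gamma -> infinity gives exp(1.5)/4 ~ 1.12 (isothermal)
```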
We now have all the parameters of Eq.\eqref{e16} as functions of redshift and can substitute them to obtain the mass rate equation. As in the previous section, substituting $1+z=(a_{0}/a)=e^{H_{0}(t_{0}-t_{2})}(t_{2}/t_{1})^{2/3}(t_{1}/t)^{1/2}$ according to Eq.\eqref{e6}, we can solve Eq.\eqref{e16} in terms of time and obtain the mass evolution of PBHs in the radiation-dominated era:
\begin{equation}
\label{e30}
\begin{split}
M_{M-RD}(t)=M_{i}\left(1+\frac{123}{25\times10^{39}}M_{i}(\sqrt[4]{t_{i}} - \sqrt[4]{t})\right)^{-1}\\ \text{for}\hspace{0.5cm} t_{i}<t<t_{1} \hspace{2.5cm}
\end{split}.
\end{equation}
Similarly, using $1+z=(a_{0}/a)=e^{H_{0}(t_{0}-t_{2})}(t_{2}/t)^{2/3}$, the mass evolution in the matter-dominated era is
\begin{equation}
\label{e31}
\begin{split}
M_{M-MD}(t)=M_{t-RD}\left(1+1.5\times10^{-36}M_{t-RD}\ln{\frac{t_{1}}{t}}\right)^{-1}\\\text{for} \hspace{0.5cm} t_{1}<t<t_{2} \hspace{3cm}
\end{split},
\end{equation}
where $M_{t-RD}$ is the mass of the PBH at the end of the radiation-dominated era, after accretion of both radiation and matter.
\section{Accretion during the radiation-dominated era}
Radiation and matter continually fall into PBHs and increase their mass. In the previous section, we derived the mass evolution for accretion of radiation and accretion of matter. In this section, we test the assumptions that radiation dominates the mass growth of a PBH in the radiation-dominated era and that matter dominates it in the matter-dominated era. Given the importance of observing PBHs, many studies have constrained the possible masses at which PBHs could exist and explain dark matter. After applying all constraints, including evaporation \cite{Page:1976wx}, lensing \cite{Niikura:2019kqi}, gravitational waves \cite{Raidal:2017mfl}, and cosmic microwave background distortions \cite{Kohri:2014lza}, the mass windows $10^{16}$--$10^{17}\, g$, $10^{20}$--$10^{24}\, g$, and $1$--$10^4\,M_{\odot}$ remain \cite{Carr:1997cn,Carr:2021bzv}. In this paper, we study representative masses from these windows ($10^{17}$, $10^{27}$, $10^{33}$, and $10^{37}\, g$) and examine the mass growth over time in the two eras separately. Fig.~(1) shows the growth of PBH mass during the radiation-dominated era and compares the effects of matter and radiation accretion.
As we expected, radiation during the radiation-dominated era significantly increases the mass of PBHs, and we can neglect the accretion of matter in this epoch.
\section{Accretion during the matter-dominated era}
We now investigate the accretion onto PBHs during the matter-dominated era. Eq.\eqref{e20} and Eq.\eqref{e31} describe the mass of PBHs accreting radiation and matter in this era. Fig.~(2) confirms our expectation: in the matter-dominated era, the growth of PBH mass is driven mainly by matter.
To examine the accuracy of our work, we compared our results with previous works, in particular with the papers of Kamionkowski {\it et al.} and Ricotti {\it et al.} For this purpose, we define the dimensionless Bondi-Hoyle accretion rate, the accretion rate normalized to the Eddington rate, $\dot{m}\equiv{{\dot{M_{b}}}/{\dot{M}_{Ed}}}$, where $\dot{M}_{Ed}=1.44\times10^{17}(M_{PBH}/M_{\odot})\, g\, s^{-1}$ is the Eddington accretion rate. Fig.~(3) gives practical information about the mass evolution from gas accretion alone. Throughout, we use analytical solutions wherever possible.
Regarding our semi-analytical approach, Fig.~(3) shows only a slight difference between it and fully numerical methods. At low redshifts, Kamionkowski {\it et al.} assumed adiabatic accretion because the Compton cooling effect is negligible in this era, whereas Ricotti {\it et al.} implicitly assumed $\gamma \gg 1$ at all times when accounting for Compton drag \cite{Ali-Haimoud:2016mbv}.
\section{Evaporation vs. Accretion}
In the previous sections, we investigated the mass growth of PBHs during the radiation-dominated and matter-dominated eras. One of the most meaningful quantities derived from a PBH mass is its radius. From the Schwarzschild relation $R_{s}=2GM_{PBH}/c^2$, substituting $\dot{M_{b}}$ obtained in the previous sections lets us study Hawking evaporation by comparing the growth rate of the event horizon with the Planck length \cite{Tabasi:2021cxo}.
Using the Bekenstein-Hawking entropy $S=A/4l_{p}^2$ and setting the characteristic thermal fluctuation about equilibrium to $\delta S \sim 1$, we can estimate the scale of quantum fluctuations of the horizon. Treating the horizon as composed of $N\equiv{A/l_{p}^2}$ patches, the relation $\delta A \sim \sqrt{N}\,\delta a \sim l_{p}\, \delta r$ holds for any era and radius $r$. Therefore, since $\delta a \sim l_{p}^{2}$, we have $\delta r \sim l_{p}$ \cite{Jacobson:1993hn}. Thus, particles escaping the black hole start their journey about a Planck length beyond the event horizon.
The apparent horizon of any dynamical space-time must lie inside the event horizon; thus, any virtual particle pair created from the vacuum cannot escape and should fall back into the PBH. Because of accretion, a PBH is in a dynamical phase, so the adiabatic conditions around the apparent horizon required for Hawking radiation do not hold \cite{Firouzjaee:2015bqa}.
In this context, we follow the rate of change of the radius for the masses mentioned above. We would like to know whether, for these masses, there are periods in which evaporation is switched off. In Fig.~(4) and Fig.~(6), radius growth rates are plotted against time for the selected masses. To facilitate comparison, the radiation-dominated and matter-dominated eras are shown separately, and the regions where the radius change exceeds the Planck length are crosshatched.
As Fig.~(4) shows, PBHs with initial mass $10^{17}\, g$ evaporate continuously during the radiation-dominated era. The situation is more complicated for PBHs with initial mass $10^{27}\, g$: according to Fig.~(4).b, these PBHs first evaporate during the radiation-dominated era, but the radius growth rate soon exceeds the Planck length and the evaporation process stops. The other two selected masses do not evaporate at all during the radiation-dominated era.
Since dimensionless parameters are usually more appropriate for comparison, we define a new parameter $\chi={\dot{R}}/{v_{eff}}$. Besides being dimensionless, which makes it suitable for comparing different models, this parameter depends on $v_{eff}$ and hence on the sound speed and the relative velocity of the PBHs. This dependence encodes the effects of the cosmic environment, which varies between models, as well as the relative velocity of the initial PBHs, for which different estimates exist. Moreover, the chosen accretion geometry, whether spherically symmetric or disc accretion, also strongly affects this parameter. Therefore, defining this parameter is necessary. $\chi$ as a function of $z$ for the four masses is plotted in Fig.~(5) and Fig.~(7).
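For a Schwarzschild PBH, $\dot{R}=2G\dot{M}/c^2$ follows directly from $R_s=2GM/c^2$, so $\chi$ is straightforward to evaluate. The Python sketch below (ours; the example accretion rate and $v_{eff}$ are arbitrary illustrative numbers, not values from the paper) shows the computation:

```python
G = 6.67430e-11   # m^3 / (kg s^2)
c = 2.99792458e8  # m / s

def chi(M_dot_kg_s, v_eff_m_s):
    """Dimensionless horizon growth rate chi = R_dot / v_eff.

    For a Schwarzschild PBH, R_s = 2GM/c^2, so R_dot = 2*G*M_dot/c^2.
    """
    R_dot = 2.0 * G * M_dot_kg_s / c**2  # m / s
    return R_dot / v_eff_m_s

# Illustrative (non-physical) numbers: accretion rate 1e20 kg/s, v_eff = 10 km/s
# chi(1e20, 1e4) ~ 1.5e-11
```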
We apply the same calculations to PBHs during the matter-dominated era. In Fig.~(6), we see that masses whose evaporation was switched off during the radiation-dominated era do not evaporate in the matter-dominated era either, owing to the increased rate of radius change. At the end of the matter-dominated era, the radius change rate drops sharply, so quenched evaporation may be reactivated for some masses; for example, Fig.~(6) shows this for the PBH with initial mass $10^{27}\,g$. For the initial mass of $10^{17}\, g$, the radius changes remain below the Planck length and evaporation continues throughout this era.
In our opinion, all models that invoke the evaporation of PBHs to explain a phenomenon in the history of the universe should be re-examined. Since the start and stop times of Hawking evaporation differ for PBHs of different masses, these calculations must be performed first to check whether PBHs of the proposed masses evaporate at all at the relevant time. This issue is especially important for low-mass primordial black holes. We also suggest that the $\chi$ parameter be used systematically in future work, because it encodes many features of a PBH model: the formation scenario, the accretion model, the cosmic environment, and so on.
\section{Conclusions}
Since PBHs are among the most important candidates for dark matter, their evolution in time is also very important. The two main processes that can change the mass of black holes are Hawking radiation and accretion, so their behavior must be well understood in order to calculate the evolution of PBHs. The accretion equations are well represented by the Bondi-Hoyle model, which assumes spherical symmetry. A disc model could also be considered and would give more accurate answers, but for simplicity we use the Bondi-Hoyle model in this paper.
On the other hand, Hawking's approach of treating black holes as black bodies and investigating their thermodynamic properties is very attractive and practical. Although no such radiation has been observed so far, the argument for its existence is compelling.
Nevertheless, the main question is whether a black hole can always swallow particles through accretion while emitting particles through Hawking evaporation. This question becomes even more important once we realize that each of these processes, when applied to PBHs, can have important cosmological and astrophysical consequences. Thus, this question must be answered.
In this paper, we first showed that in the radiation-dominated era the rate of mass increase of PBHs due to accreting radiation is much higher than that due to accreting matter, whereas the opposite holds in the matter-dominated era: matter accretion is much more effective than radiation accretion. This was to be expected and is consistent with intuition. Furthermore, we compared our accretion model with the works of Ricotti {\it et al.} and Kamionkowski {\it et al.} in Fig.~(3) to verify its accuracy.
One flaw in previous treatments of PBH evaporation was that the apparent horizon of the black hole was considered static, and Hawking radiation calculations were performed under this assumption. However, PBHs cannot be isolated; matter and radiation surround them, especially if they constitute dark matter. Since the radius of a Schwarzschild black hole is proportional to its mass, treating PBHs as Schwarzschild black holes implies that as the mass increases, the radius departs from the static state and becomes dynamical; if the mass increase is continuous, the radius changes continuously. As a result, there is a competition between mass loss through evaporation and mass gain through accretion.
Nonetheless, we must be very careful about evaporation calculations. Hawking radiation results from tunneling through the horizon potential barrier; if the horizon is growing, this barrier is no longer that of a static horizon. Knowing that quantum fluctuations of the horizon are of order the Planck length, it suffices to examine the rate of change of the black hole radius over time to determine in which cases accretion drives rapid growth of the radius, and hence for which masses and at which times accretion prevents particles from escaping the black hole's gravity. In this paper, we considered four masses: $10^{17}$, $10^{27}$, $10^{33}$, and $10^{37}\, g$. The red hatching in the figures indicates that the growth of the apparent horizon is so rapid that it forces escaping particles to fall back into the PBH, stopping its evaporation.
This paper is a starting point for investigating models that claim PBH evaporation produces cosmological effects, or that seek to explain a phenomenon through PBH evaporation. Before any calculation invoking Hawking radiation, one should first check whether a PBH of the given mass could have Hawking radiation at all at that time.
\section{Acknowledgments}
We are grateful to Pouriya Khaliliyan for several helpful discussions and comprehensive advice during this work.
|
Title:
Concerning Colour: The Effect of Environment on Type Ia Supernova Colour in the Dark Energy Survey |
Abstract: Recent analyses have found intriguing correlations between the colour ($c$)
of type Ia supernovae (SNe Ia) and the size of their mass-step, the
relationship between host galaxy stellar mass and Hubble residual. These
analyses suggest that the underlying cause of this relationship is dust. Using
a sample of 675 photometrically-classified SNe Ia from the Dark Energy Survey
5-year sample, we study the differences in Hubble residual for a variety of
host and local properties for subsamples split by their colour ($c$). We find a
$3\sigma$ difference for the size of the mass-step when comparing blue ($c <
0$) and red ($c > 0$) SNe. We observe the lowest r.m.s. scatter ($\sim 0.14$)
in Hubble residual for blue SNe in low mass or blue environments, suggesting
that these objects provide the most homogeneous sample for cosmological
analyses. By fitting for $c$-dependent relationships between Hubble residuals
and $M_\mathrm{stellar}$, approximating existing dust models, we remove the
mass-step from the data but find significant remaining steps in rest-frame
$U-R$, indicating that current dust modelling based on $M_\mathrm{stellar}$ may
not fully explain the remaining dispersion in SN luminosity. The most
dispersion is removed by instead accounting for a $c$-dependent relationship
between Hubble residuals and global $U-R$, resulting in $\leq 1\sigma$
remaining steps in other environmental properties, suggesting that $U-R$
provides different information about the environment of SNe Ia to
$M_\mathrm{stellar}$. This $c$-dependent $U-R$ relation implies that $U-R$ may
be more closely linked to dust, motivating the future inclusion of galaxy $U-R$
colour in the correction for SN distance biases.
| https://export.arxiv.org/pdf/2208.01357 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
cosmology: observations -- distance scale -- supernovae: general -- surveys
\end{keywords}
\section{Introduction} \label{intro}
The improved standardisation of type Ia supernovae (SNe Ia) is important to constrain their luminosity dispersion and gain further understanding of the dark energy equation-of-state parameter, $w$. By applying corrections based on empirical relationships between their brightness and light-curve width \citep[the \lq{brighter-slower}\rq\ relation;][]{Rust1974, Pskovskii1977, Phillips1993} and their brightness and optical colour \citep[the \lq{brighter-bluer}\rq\ relation;][]{Riess1996, Tripp1998}, their luminosity dispersion can be reduced to $\sim0.14$\,mag \citep{Scolnic2018}. After accounting for observational uncertainties, $\sim0.08$--$0.10$\,mag of \lq intrinsic dispersion\rq\ remains \citep[e.g.][]{Brout2019}.
In addition to these traditional light-curve corrections, there are additional correlations between the corrected SN Ia luminosity and various host galaxy \lq environmental\rq\ properties. The most well-studied of these is the \lq{mass step}\rq\ \citep[e.g.][]{Sullivan2010,Kelly2010,Lampeitl2010,Gupta2011,Johansson2013,Childress2013,Uddin2017,Uddin2020,Smith2020,Ponder2020, Popovic2021}, in which SNe Ia in more massive galaxies are more luminous after corrections than their counterparts occurring in galaxies with lower stellar masses. This step is typically measured through differing average Hubble residuals\footnote{Hubble residual: the difference between the measured distance modulus ($\mu_\mathrm{obs}$) to each SN and the distance modulus calculated from the best-fit cosmology ($\mu_{\mathrm{cosmo}}$).} on either side of some division in environmental property, e.g. high and low stellar mass. The astrophysical reasons for this disparity are unclear, however it is known that the stellar mass ($\mstellar$) of a galaxy correlates with the stellar ages, gas-phase and stellar metallicities, and dust content \citep{Tremonti2004, Gallazzi2005,Garn2010,Bravo2011,Zahid2013}, suggesting that the trends between corrected SN Ia brightness and host stellar mass could be due to differences in intrinsic SN progenitor properties \citep[e.g., age or metallicity;][]{Timmes2003,Ropke2004,Kasen2009,Bravo2010} or dust \citep[e.g.,][]{BroutScolnic2021}, or both. The physical nature of the dominant underlying effect remains an open question.
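Schematically, the mass step described above can be estimated from per-SN Hubble residuals, $\mu_\mathrm{obs}-\mu_\mathrm{cosmo}$. The following Python sketch (ours, for illustration only; the function name, the simple mean difference, and the split at $\log_{10}(M_\mathrm{stellar}/M_\odot)=10$ are assumptions, whereas real analyses fit the step with uncertainties in a likelihood framework) shows the basic measurement:

```python
import statistics

def mass_step(hubble_residuals, log_masses, split=10.0):
    """Difference in mean Hubble residual across a host stellar-mass division.

    hubble_residuals: mu_obs - mu_cosmo for each SN (mag)
    log_masses:       log10(M_stellar / M_sun) of each SN host
    split:            division point in log10 stellar mass (illustrative)
    """
    low = [r for r, m in zip(hubble_residuals, log_masses) if m < split]
    high = [r for r, m in zip(hubble_residuals, log_masses) if m >= split]
    return statistics.mean(low) - statistics.mean(high)

# Toy data: SNe in low-mass hosts sit ~0.03 mag fainter after corrections
step = mass_step([0.02, 0.04, -0.02, -0.04], [9.0, 9.5, 10.5, 11.0])  # 0.06 mag
```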
In addition to looking at the $\mstellar$ of the galaxy, some studies \citep[e.g. ][]{Lampeitl2010,Sullivan2010,DAndrea2011,Childress2013,Pan2014, Wolf2016,Uddin2017,Kim2019,Kelsey2021} also consider other environmental properties such as the star formation rate (SFR), specific star formation rate (sSFR; SFR per unit $\mstellar$) or rest-frame colour (e.g. $U-R$). These properties are correlated with $\mstellar$; the most massive galaxies tend to be redder, more passive, with the lowest sSFR, whilst the lower mass galaxies tend to have more recent or ongoing star formation. These parameters provide other complementary ways to probe the stellar populations of the SN host galaxies, and may also provide insight into potential ages of the host stellar populations. Similarly sized SN luminosity steps have been found for global host galaxy sSFR, with $>3\sigma$ evidence that SNe Ia in low sSFR galaxies are brighter on average than those in higher sSFR galaxies after corrections. The most accurate tracer to determine the relationship between magnitude and environmental property for use in cosmology remains unclear \citep{Briday2021}.
Alongside the host galaxy correlations, a wealth of studies \citep{Rigault2013,Rigault2015,Rigault2020,Jones2015,Jones2018,MorenoRaya20162,MorenoRaya2016,Roman2018,Galbany2018,Rose2019,Kim2018,Kim2019,Kelsey2021} have shown that looking at the local region around the SN, rather than the global properties of the entire host galaxy, can provide a better understanding of the SN progenitor environment. Global galaxy properties are weighted by surface brightness, meaning that global measurements are most representative of the properties of the brightest galactic regions, and thus may not accurately describe the true environment of the progenitor and resulting SN \citep{Rigault2013}. For example, a SN Ia may be located within a locally star forming region within a globally passive galaxy, or vice versa. \citet{Rose2021} suggest that combining corrections based on host galaxy stellar mass and local stellar age provides the best improvement to SNe Ia standardisation at $>3\sigma$, reducing the unexplained scatter by $\sim 10\%$.
Recent analyses \citep{Brout2019,Smith2020,BroutScolnic2021,Kelsey2021} have shown that the magnitude of this step in average luminosity or Hubble residual with environmental property changes when considering SNe of different colours. \citet[hereafter \citetalias{Kelsey2021}]{Kelsey2021} found a significant ($\sim 3\sigma$) difference between the step sizes for subsamples comprised of \lq red\rq\ and \lq blue\rq\ SNe Ia, with bluer SNe (defined as having a SALT2 \citep{Guy2007,Guy2010} colour $c$ of $c < 0$) being more homogeneous and displaying no significant step, whilst the redder ($c>0$) SNe have a higher dispersion and larger step sizes.
Analyses of the underlying relationships between SN Ia colour $c$ and the properties of their host galaxy environments have grown over the past year, with suggestion that the differing average Hubble residuals in low and high mass galaxies are caused by differences in dust properties for SNe with different $c$ \citep{BroutScolnic2021,Popovic2021,Popovic2022}. Bluer SNe ($c < 0$) will suffer less dust extinction \citep{Jha2007} and therefore less scatter from event-to-event than red ($c > 0$) SNe. The presence of dust along the line of sight reddens the SN by differing amounts dependent on the properties of the dust, and therefore may not be the same for all SNe Ia \citep{Gonzalez-Gaitan2020, Thorp2021}. There is known variation in the total-to-selective extinction ratio ($R_V$) along different lines of sight in the Milky Way \citep[e.g. ][]{Schlafly2016}, so logically $R_V$ should vary between, and even within, different SNe host galaxies. This is considered in \citet{Chen2022} for a sample of DES SNe Ia in redMaGiC galaxies, and \citet{Rose2022} for a sample of Pantheon+ SNe Ia \citep{Scolnic2021,Brout2022}. \citet{Meldorf2022} suggest that a correlation between host-$R_V$ and SNe Ia properties indicates that intrinsic scatter is driven by $R_V$.
An alternate explanation is that red and blue SNe Ia represent differing progenitor paths \citep[e.g. ][]{Milne2013, Stritzinger2018, Gonzalez-Gaitan2020, Kelsey2021}. Blue objects are considered to be comprised of one distinct set of progenitors (hence displaying no significant step in Hubble residual across hosts of differing masses), whilst red objects are likely a combination of different progenitors or explosion mechanisms (including the blue SNe that have been reddened by dust), causing a step in Hubble residual between different mass hosts to be observed. Environmental studies may find evidence for this by analysis of the stellar population age of the region surrounding the SNe.
Regardless of the cause of the Hubble residual step, such studies indicate that blue SNe Ia, particularly those in bluer/low-mass environments, are more homogeneous and thus are better for use in cosmology \citep{Graur2015,Gonzalez-Gaitan2020, Kelsey2021}.
In this study, we aim to add more weight to this argument that blue SNe Ia are more homogeneous by studying the differences in Hubble residual for subsets of SNe Ia divided by SN colour, using photometrically-confirmed SNe Ia from the Dark Energy Survey (DES) SN programme (DES-SN) five-year cosmological sample.
Our paper is structured in the following way. In \sref{data-and-methods}, we describe the DES-SN SN Ia sample that was used in this analysis and present the method to obtain environmental properties from photometric data. We discuss the results of our study in \sref{results} and \sref{colour}, and additional analysis in \sref{discussion}. Finally, in \sref{summary} we summarise and conclude.
\section{Data and methods} \label{data-and-methods}
We begin by describing the SN Ia sample used in our analysis, and the methods used to obtain information about their galactic environments.
\subsection{The DES-SN photometric SN Ia sample}
DES is an optical imaging survey that uses four independent astrophysical probes to measure the properties of dark energy \citep{DES2016}. Here we use a sample of SNe Ia discovered by the dedicated SN programme in DES, DES-SN \citep{Abbott2019}, comprised of SNe Ia discovered in imaging data acquired by the Dark Energy Camera \citep[DECam;][]{Flaugher2015}, mounted on the Blanco 4-m telescope at the Cerro Tololo Inter-American Observatory. The DES-SN programme was optimised for the detection of SNe Ia over the redshift range $0.2 < z < 1.2$ \citep{Bernstein2012,SmithDAndrea2020} for use in cosmology, observing ten 3-deg$^2$ fields with an average cadence of 7 days in four filters ($griz$). Our sample is taken from the full five years of the survey.
This sample differs from the DES-SN three-year (DES-SN3YR) sample used in \citetalias{Kelsey2021}: it includes data from the full five years of the survey instead of only the first three years \citep{Brout2019a}, and it includes both spectroscopically-confirmed and photometrically-classified SNe Ia where the redshift for each SN is determined by a spectroscopic redshift measurement of its host galaxy. The photometry is obtained using \texttt{diffimg} \citep{Kessler2015}. DES photometric classification is outlined in \citet{Vincenzi2020} and \citet{Moller2022}, with host association details in \citet{Wiseman2020}.
\subsubsection{SN Ia light-curve parameters} \label{params}
We use the SALT2 SN Ia light-curve model \citep{Guy2007,Guy2010} to fit the SN Ia light curves and obtain estimates of their \lq stretch\rq\ ($x_1$), \lq colour\rq\ ($c$) and apparent magnitude $m_B = -2.5\log(x_0)$, where $x_0$ is the fitted amplitude. SALT2 is trained on the JLA compilation SN sample, and implemented in the \textsc{snana} software package \citep{Kessler2009}. In this analysis, we use 1D BBC bias corrections \citep{KesslerScolnic2017} to correct for selection bias with a \citet{Guy2010} intrinsic scatter model (consistent with \citetalias{Smith2020} and \citetalias{Kelsey2021}). We do not employ BEAMS, instead setting $P(Ia) = 1$ for each SN. The light curve parameters are used to calculate Hubble residuals:
\begin{equation}
\Delta\mu = \mu_\mathrm{obs} - \mu_\mathrm{cosmo},
\label{equation:resi}
\end{equation}
where $\mu_\mathrm{cosmo}$ is the fixed distance modulus calculated from a reference cosmology (flat $\Lambda$CDM with $w = -1$), and $\mu_\mathrm{obs}$ is the measured distance modulus \citep[e.g.,][]{Tripp1998,Astier2006}:
\begin{equation}
\mu_\mathrm{obs} = m_B - M_0 + \alpha x_1 - \beta c + \mu_\mathrm{bias},
\label{equation:hubble}
\end{equation}
with $\alpha$, $\beta$ and $M_0$ as nuisance parameters describing the SN population in the BBC fit.
The term $\mu_\mathrm{bias}$ is a correction applied to each SN to account for survey selection effects. This correction is typically either a \lq 1D correction\rq\ as a function of redshift, or a \lq 5D correction\rq\ as a function of \{$z, x_1, c, \alpha, \beta$\} \citep{KesslerScolnic2017}. The 1D correction does not account for the $c$-dependent selection bias (bluer SNe are brighter and easier to observe), which results in a trend of $\Delta\mu\ \textrm{vs}\ c$ for blue SNe. A discussion of the differences between 1D and 5D corrections with regards to host galaxy correlations in the DES-SN3YR sample can be found in \citet[hereafter S20]{Smith2020}. We consider a 5D bias correction in \aref{BBC5D}, finding no significant difference in our results.
In cosmological analyses, there is an additional host galaxy \mstellar\ correction added to Eq. \ref{equation:hubble}, $\gamma G_\textrm{host}$, where the nuisance parameter $\gamma$ is analogous to $\alpha$ and $\beta$, and $G_\textrm{host}$ is a step function typically located at $\log (\mstellar / M_\odot) = 10$. We do not use this additional correction in our analysis, as we want to study the overall cause of the additional dispersion and determine if it can be explained with this simple correction.
We assume a spatially-flat $\Lambda$CDM model, with a matter density $\Omega_M = 0.3$ and Hubble constant $H_0 = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$ as a reference cosmology for the calculation of $\Delta\mu$.
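For concreteness, the residual calculation of Eqs.~(\ref{equation:resi}) and~(\ref{equation:hubble}) under this reference cosmology can be sketched as follows. This is an illustrative implementation, not the \textsc{snana}/BBC code; the fiducial $M_0$ and the example light-curve parameters are placeholders, while $\alpha$ and $\beta$ take the BBC-fit values quoted later in the text.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def mu_cosmo(z, om=0.3, h0=70.0, n=2048):
    """Distance modulus in the flat LambdaCDM (w = -1) reference cosmology,
    via trapezoidal integration of the dimensionless comoving distance."""
    zp = np.linspace(0.0, z, n)
    e_inv = 1.0 / np.sqrt(om * (1.0 + zp) ** 3 + (1.0 - om))
    d_c = np.sum(0.5 * (e_inv[1:] + e_inv[:-1]) * np.diff(zp))
    d_l = (1.0 + z) * (C_KM_S / h0) * d_c  # luminosity distance [Mpc]
    return 5.0 * np.log10(d_l) + 25.0

def mu_obs(m_b, x1, c, alpha=0.158, beta=2.88, m0=-19.36, mu_bias=0.0):
    """Tripp distance modulus. alpha and beta are the BBC-fit values quoted
    in the text; M0 = -19.36 is an illustrative fiducial, not a fitted value."""
    return m_b - m0 + alpha * x1 - beta * c + mu_bias

# Hubble residual for a hypothetical SN at z = 0.3
dmu = mu_obs(m_b=21.0, x1=0.5, c=-0.05) - mu_cosmo(0.3)
```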
\subsection{SN host galaxy photometry}
Here we briefly describe the DES-SN image stacking procedure and methods used to obtain photometry of each SN Ia host galaxy and region local to the SN event. The method is identical to that used in \citetalias{Kelsey2021}, and details can be found therein.
Host galaxies are assigned using the directional light radius method \citep[DLR;][]{Sullivan2006,Gupta2016} and are catalogued in \citet{Wiseman2020}. The DLR is a measure of the separation between the SN and each candidate galaxy, normalised by the apparent size of that galaxy's light profile \citep[obtained from high-quality depth-optimised coadded images;][]{Wiseman2020}, in terms of the elliptical radius along the line connecting the SN to the galaxy centre.
The majority of host galaxy spectroscopic redshifts for the DES photometric sample were provided by the OzDES programme \citep{Yuan2015, Childress2017, Lidman2020} using the Anglo-Australian Telescope (AAT). A subset of host galaxy redshifts were obtained from external catalogues of prior surveys that overlapped with the DES-SN fields. Details of host galaxy association and redshifts can be found in \citet{Vincenzi2020, Moller2022}.
We use the \lq seeing-optimised\rq\ DES image stacks described in \citetalias{Kelsey2021} \citep[created following][]{Wiseman2020}. Single-epoch exposures are added to the stack if they pass given quality cuts; for this analysis we use exposures with a $\tau$ (ratio between effective exposure time and true exposure time) of $>$ 0.02 and a point spread function (PSF) full-width half-maximum (FWHM) $< 1.3\arcsec$\ in all filters. This provides a balance between seeing and redshift coverage for our analysis.
Following \citetalias{Kelsey2021}, photometry for the host galaxy (\lq global\rq\ photometry) is measured using \textsc{Source Extractor} \citep{Bertin1996} on the stacked $griz$ images.
We also measure local photometry at the SN position using a 4\,kpc aperture radius following \citetalias{Kelsey2021}, based on the quality of our stacked images. Local aperture photometry is measured using \textsc{aperture\_photometry} from the \textsc{photutils} Python module \citep{Bradley2019}, and photometric uncertainties are calculated using the root-mean-square of the background-subtracted stacked images.
All our measured $griz$ fluxes are corrected for Milky Way dust extinction using colour excess $E(B-V)$ values from \citet{Schlegel1998} and multiplicative coefficients for the DES filters of $R_{g} = 3.186$, $R_{r} = 2.140$, $R_{i} = 1.569$ and $R_{z} = 1.196$ \citep{Abbott2018}, calculated using a Fitzpatrick reddening law \citep{Fitzpatrick1999}.
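As a minimal sketch, this band-by-band correction amounts to subtracting $R_X\,E(B-V)$ from each observed magnitude, with the DES coefficients quoted above; the example magnitudes and $E(B-V)$ value below are invented for illustration.

```python
# Milky Way extinction correction: m_corr = m_obs - R_X * E(B-V),
# using the DES griz band coefficients quoted in the text.
R_COEFF = {"g": 3.186, "r": 2.140, "i": 1.569, "z": 1.196}

def deredden(mags, ebv):
    """mags: dict of observed magnitudes per band; ebv: Milky Way E(B-V) [mag]."""
    return {band: m - R_COEFF[band] * ebv for band, m in mags.items()}

# hypothetical observed magnitudes along a low-extinction line of sight
corrected = deredden({"g": 21.50, "r": 21.00, "i": 20.80, "z": 20.70}, ebv=0.02)
```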
\subsection{SN host galaxy SED fitting}
As per \citet{Smith2020,Wiseman2020,Kelsey2021}, we use spectral energy distribution (SED) fitting and templates based on the \textsc{p\'egase} spectral evolution code \citep{Fioc1997,Fioc2019} assuming a \citet{Kroupa2001} initial mass function (IMF) and a series of 9 smooth exponentially-declining star-formation histories, each with 102 time steps, in order to estimate the physical parameters from the photometry. Synthetic DES photometry is generated for each SED template and, using $\chi^2$ minimisation, is matched with the measured photometry for each region \citep[e.g.,][]{Wiseman2020}. We apply a foreground dust screen with $E(B-V) = 0$ to $0.3$\,mag in steps of $0.05$ to account for dust extinction, and only consider solutions younger than the age of the universe for each SN redshift.
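The template-matching step can be sketched as a weighted least-squares comparison. This is a toy version of the \textsc{p\'egase}-based fitting: it assumes a free flux normalisation per template (solved analytically) and omits the dust-screen and age grids, and the template names and fluxes are invented for illustration.

```python
import numpy as np

def best_template(flux_obs, flux_err, templates):
    """Pick the SED template best matching the observed griz fluxes by
    chi^2 minimisation, with an analytic best-fit scale per template."""
    best_chi2, best_name = np.inf, None
    w = 1.0 / flux_err**2  # inverse-variance weights
    for name, f_mod in templates.items():
        f_mod = np.asarray(f_mod, dtype=float)
        # scale a minimising chi^2 = sum(w * (f_obs - a * f_mod)^2)
        a = np.sum(w * flux_obs * f_mod) / np.sum(w * f_mod**2)
        chi2 = np.sum(w * (flux_obs - a * f_mod) ** 2)
        if chi2 < best_chi2:
            best_chi2, best_name = chi2, name
    return best_name, best_chi2

flux_obs = np.array([1.0, 2.0, 3.0, 4.0])  # toy griz fluxes
flux_err = np.full(4, 0.1)
name, chi2 = best_template(flux_obs, flux_err,
                           {"young": [0.5, 1.0, 1.5, 2.0],  # same shape as obs
                            "old":   [4.0, 3.0, 2.0, 1.0]})
```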
From this SED fitting we obtain the stellar mass ($\mstellar$, in $\mathrm{M}_{\sun}$) and the rest-frame $UBVR$ magnitudes for all global and local regions. For each set of photometry we additionally use a Monte Carlo process adjusting the observed photometry according to its uncertainties, with 1000 iterations in order to estimate the uncertainties in the above parameters. Full details of this process can be found in \citetalias{Smith2020}.
As described in \citetalias{Kelsey2021}, we apply a \lq mangling\rq\ \citep{Hsiao2007,Conley2008} correction, adjusting the best-fitting SED for each host galaxy or region using a wavelength-dependent spline multiplicative function to ensure that the SED exactly reproduces the observed photometry. This procedure allows rest-frame $UBVR$ magnitudes to be accurately calculated.
As in \citetalias{Kelsey2021}, we focus our analysis on rest-frame $U-R$. We choose this colour because it spans the greatest wavelength range in our observer-frame ($griz$) photometry (above our redshift cut, discussed in \sref{selection}, we lose rest-frame $R$ band), it is an approximate tracer of the SFR, it carries some age information of the galaxy \citep{Trayford2016}, and it correlates with galaxy morphology \citep[correlation with $u-r$;][]{Strateva2001,Lintott2008}. Under the assumption that differences in SN luminosity are driven by local stellar population age, rest-frame $U-R$ has been shown to be the best photometric tracer of that age \citep{Briday2021}, making it highly suitable for high redshift cosmology, where spectroscopy of each SN environment may not be available. Furthermore, recent analyses suggest that combining environmental corrections such as global host galaxy stellar mass and local age (potentially with colour as a proxy) may provide the best standardisation for SNe cosmology \citep{Rose2019,Rose2021,Rigault2020}.
\subsection{SN selection requirements} \label{selection}
From the SuperNNova classifier, we require each candidate SN Ia to have a probability of being a SN Ia of $P(Ia) > 0.5$.\footnote{See \aref{diff_class} for a discussion of the use of different classifiers, templates and $P(Ia)$ selection cuts.} We apply a redshift cut of $z < 0.6$, which ensures that at all redshifts the aperture size is larger than the smallest useful aperture ($\sigma$) of 0.55\arcsec\ for a maximum full-width half-maximum of 1.3\arcsec\ when approximating to a Gaussian. Additionally, this redshift cut minimises selection biases, particularly in the shallow fields \citep{Kessler2019}. We also apply a cut on $\sigma_{(U-R)} < 1$\,mag for both the global and local measurements to have well-constrained rest-frame $U-R$ colours. This cut also removes objects with large uncertainties in \mstellar\ and SFR. We apply a typical \lq{JLA-like}\rq\ \citep{Betoule2014} light-curve selection in $x_1$ and $c$, and their associated uncertainties. A summary of the selection applied is:
\begin{itemize}
\item $P(Ia) > 0.5$,
\item redshift $z < 0.6$,
\item $(\mstellar)_\textrm{global} \ge (\mstellar)_\textrm{local}$,
\item $\sigma_{(U-R)} < 1$\,mag,
\item colour $|c| < 0.3$,
\item colour uncertainty $\sigma_c < 0.1$,
\item stretch $|x_1| < 3$,
\item stretch uncertainty $\sigma_{x_1} < 1$,
\item $|\Delta\mu|/\sigma_{\mu} < 4$
\end{itemize}
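The cuts listed above can be expressed as a single boolean mask, for example as below. The column names are illustrative and do not correspond to the actual DES pipeline outputs, and a single $\sigma_{(U-R)}$ column stands in for the separate global and local cuts.

```python
import pandas as pd

def apply_selection(df):
    """Apply the JLA-like and quality cuts listed in the text.
    Column names are hypothetical, not actual DES pipeline names."""
    mask = (
        (df["p_ia"] > 0.5)
        & (df["z"] < 0.6)
        & (df["mass_global"] >= df["mass_local"])
        & (df["sigma_UR"] < 1.0)       # stands in for global and local cuts
        & (df["c"].abs() < 0.3)
        & (df["sigma_c"] < 0.1)
        & (df["x1"].abs() < 3.0)
        & (df["sigma_x1"] < 1.0)
        & ((df["dmu"].abs() / df["sigma_mu"]) < 4.0)
    )
    return df[mask]
```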
After selection cuts, a total of 675 objects remain in our sample. Of the 1484 cosmology-grade SNe Ia from \citet{Moller2022}, 1152 remain after the redshift cut; the rest of the removals are due entirely to our quality cuts on the environmental photometry, which ensure that the derived properties are well constrained. Using a BBC fit (\sref{params}), we obtain values of $\alpha = 0.158\pm0.007$ and $\beta = 2.88\pm0.07$ for these data.
We present the global and local photometry and derived environmental properties for the 675 SNe Ia used in this analysis in the online supplementary material. Light curves and associated environmental data for the full DES5YR sample will be released online with the cosmology data release.
\section{Global and Local Environments} \label{results}
\subsection{Environmental properties and colour}
We study the relationships between SN Ia colour and the SN Ia environmental properties \mstellar\ and rest-frame $U-R$ colour, both globally for the host galaxy, and for the \lq local\rq\ 4\,kpc radius regions. Fig.~\ref{fig:c_envprop_relations} shows trends in environmental properties with $c$: more massive, redder galaxies and environmental regions host redder SNe Ia, consistent with prior (but weak) observed trends \citep{Sullivan2010,Childress2013,Kelsey2021, BroutScolnic2021, Popovic2021}. As in \citetalias{Kelsey2021}, we observe an absence of red SNe Ia in low-mass galaxies and, to a lesser extent, in bluer $U-R$ regions.
\subsection{Environmental Property steps in Hubble residual} \label{envHR}
We now turn to investigating the relationships between SN Ia Hubble residuals with $\mstellar$ and rest-frame $U-R$ colour, both globally and locally, for our DES5YR sample. We plot the Hubble residual vs. the chosen environmental property split into two bins at a chosen division point in \fref{fig:5yr-hubble_mass_colour_step-litsplit}, and measure the mean and dispersion in Hubble residual either side of this division. The magnitude of the \lq{step}\rq\ is simply the difference between the two means, provided with the statistical significance ($N\sigma$) of the difference. The resulting steps are presented in \tref{table:5yr_4values_BBC1D}.
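A minimal sketch of this step measurement, assuming uncorrelated standard errors on the two bin means:

```python
import numpy as np

def hubble_step(resid_low, resid_high):
    """Magnitude and significance (N sigma) of the Hubble residual step
    between two environmental-property bins."""
    lo, hi = np.asarray(resid_low), np.asarray(resid_high)
    step = lo.mean() - hi.mean()
    # standard errors on the two means, combined in quadrature
    err = np.hypot(lo.std(ddof=1) / np.sqrt(lo.size),
                   hi.std(ddof=1) / np.sqrt(hi.size))
    return step, abs(step) / err
```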
We present the step values calculated with the following step locations (division points):
\begin{itemize}
\item $\log (\mstellar/M_\odot)_{\textrm{global}} = 10.0$ \citep[e.g.,][]{Sullivan2010}
\item $\log(\mstellar/M_\odot)_{\textrm{local}} = 9.4$ (this value represents the median local $\mstellar$ for this DES5YR sample)
\item $(U-R)_{\textrm{global}} = 1.0$ \citepalias{Kelsey2021}
\item $(U-R)_{\textrm{local}} = 1.1$ (this value represents the median local $U-R$ of this DES5YR sample, which is redder than that for \citetalias{Kelsey2021})
\end{itemize}
\input{1D_Tables/5yr_full_mur_BBC1D.tex}
\citetalias{Kelsey2021} found that the majority (56\%) of SNe in the DES3YR spectroscopically-confirmed sample were located in local regions that were bluer than their host galaxy average, with a median local $U-R = 0.95$. Using this larger sample of DES5YR photometrically-classified SNe we find a different result: $62\%$ of SNe are located in regions that are locally redder than their host galaxy. This relationship is not redshift dependent, so is likely a feature of the types of galaxies found to host SNe Ia in a photometric survey. DES5YR has more SNe in high mass hosts than DES3YR, and contains many more SNe with particularly low DLR measurements. For DES3YR, which was spectroscopically-confirmed, spectra of the SNe themselves were required, and these are difficult to obtain for SNe near the centre of their host galaxies. For DES5YR, which was photometrically-identified, spectra of the SNe themselves were not needed, so more SNe in the sample are located closer to the centre of their hosts. Comparing the local $U-R$ to the DLR for DES5YR shows that the majority of SNe Ia in locally redder regions than their host galaxy average are located closer to the centre of their host galaxy. This is likely due to the colour gradients in galaxies, in which elliptical and spiral galaxies are redder in the centre, getting progressively bluer outwards \citep[e.g.][]{Tortora2010}. This is an age effect, known as the \lq{inside-out scenario}\rq\ \citep[e.g.][]{Perez2013}: star formation happens close to the galaxy centre, and over time is triggered towards the outskirts, generating an age gradient. This physical age gradient is observed in our data as a colour gradient, which means that the average colour of a galaxy may be bluer than the colour of the central region. Without SN spectra in DES5YR, more SNe Ia in the centres of galaxies are present in the sample, as shown in \fref{fig:histDLR}, meaning that the effect of this colour gradient is more noticeable than for DES3YR.
This in turn means that the median local $U-R$ is redder than for \citetalias{Kelsey2021}, and the median local $\mstellar$ is higher, motivating our choice of division point locations for local properties.
\subsubsection{$\mstellar$}
Focusing first on global $\mstellar$, as presented in \tref{table:5yr_4values_BBC1D}, the Hubble residual step of $0.065\pm0.013$\,mag ($4.9\sigma$) agrees with prior analyses \citep[e.g.][]{Sullivan2010, Childress2013, Smith2020}. For local $\mstellar$, the Hubble residual step is smaller in magnitude ($0.046\pm0.013$\,mag), but still significant at $3.7\sigma$. For both global and local $\mstellar$, the r.m.s. values are lower for lower mass regions than for higher masses, consistent with \citetalias{Kelsey2021}.
\subsubsection{$U-R$}
Moving to $U-R$, \tref{table:5yr_4values_BBC1D} shows that all the steps, for both local and global, are slightly smaller than, but consistent with the findings of \citet{Roman2018}, \citet{Rigault2020} and \citetalias{Kelsey2021}. The global colour step ($0.071\pm0.012 \textrm{ mag}; 5.7\sigma$) is larger than the stellar mass steps, but the local $U-R$ ($0.063\pm0.012 \textrm{ mag}; 5.1\sigma$) is consistent with the global mass. Overall, the $U-R$ colour steps are fairly similar whether measured globally or locally, in agreement with \citetalias{Kelsey2021} and \citet{Roman2018} (for $U-V$), likely due to the strong correlations between global and local colour. As in \citetalias{Kelsey2021} and as for the stellar masses, the r.m.s. values are lower in bluer environments.
\subsection{Refitting $\alpha$ and $\beta$}\label{refit}
In \citetalias{Kelsey2021}, a tentative $\sim2\sigma$ difference in optimal $\alpha$ and $\beta$ values on each side of the environmental property division point was found. To compare with DES5YR, we refit $\alpha$ and $\beta$ for subsamples split by $\mstellar$ and rest-frame $U-R$. This comparison could uncover whether the steps in luminosity are driven by underlying relationships between $x_1/c$ and environmental properties.
As can be seen in \tref{table:env_split-alpha-beta}, the differences in $\beta$ between subsamples with different environmental properties are the most pronounced, on the order of $3\sigma$ for all properties, with lower $\beta$ values for high mass or redder regions. This difference agrees with \citet{Sullivan2011, BroutScolnic2021, Kelsey2021}, clearly indicating different colour-luminosity relationships for different environments. On the other hand, unlike \citetalias{Kelsey2021}, differences in $\alpha$ are only potentially significant for local properties, being strongest for local $\mstellar$ ($\sim3\sigma$). Due to the known correlation between $x_1$ and age \citep{Howell2009, Neill2009, Johansson2013,Childress2014, Wiseman2021}, this suggests that local properties are better age discriminators than global properties. This is likely a result of the age gradients within galaxies as discussed in \sref{envHR}.
\input{1D_Tables/envsplit-alpha-beta.tex}
\section{The effect of host galaxies on SN colour}\label{colour}
\subsection{Splitting the sample based on colour} \label{csplit}
Motivated by our findings from \sref{refit}, and by \citetalias{Kelsey2021}, \citetalias{BroutScolnic2021} and \citet{Popovic2021, Popovic2022}, which suggest that the environmental \lq steps\rq\ in SN luminosity may be driven by underlying relationships between SN $c$ and galaxy properties, we split the SN Ia sample into two based on the SN colour ($c \leq 0$ and $c > 0$), and analyse the relations between $\Delta \mu$ and environmental property for each subsample. We examine these relationships for both global and local host galaxy properties.
The resulting steps for local and global $\mstellar$ and rest-frame $U-R$ for the different $c$ subsamples are displayed in \fref{fig:csplit_steps}, with numerical values given in \tref{table:c_split_steps_BBC1D}.
\input{1D_Tables/csplit_steps_table_BBC1D.tex}
In all cases, the step size is larger in red SNe Ia than in blue SNe Ia, but to varying levels of significance. There is a $3\sigma$ difference between Hubble residual step sizes for global $\mstellar$, as also seen in \citetalias{Kelsey2021}. This difference indicates that $\mstellar$ has a strong relationship with $c$, pointing to the link between host galaxy mass and dust. This is also consistent with the $3\sigma$ difference in step size for local $U-R$. The differences are not significant ($\sim2\sigma$) for the other environmental properties, indicating a weaker link between those properties and SN $c$. This result differs from \citetalias{Kelsey2021}, in which all environmental property steps show differences of $>2\sigma$ when split into subsamples by $c$.
As in \citetalias{Kelsey2021} and \citetalias{BroutScolnic2021}, the r.m.s. values, presented in \tref{table:c_split_rms_BBC1D}, for SNe Ia with $c < 0$ are considerably smaller than those for SNe Ia with $c > 0$, with the smallest values found for $c < 0$ in low stellar mass or blue environments ($\sim0.14$). This lends more weight to the argument posed in \citet{Gonzalez-Gaitan2020} and \citetalias{Kelsey2021} that SNe Ia in the lower mass, higher star-forming, bluer regions are a more homogeneous sample that may be better standard candles. Our sample of blue SNe Ia in blue or low $\mstellar$ environments also has lower r.m.s. scatter than that obtained for a NIR sample \citep{Jones2022}, potentially raising questions about the necessity of space-based observations for SNe Ia cosmology.
\input{1D_Tables/csplit_rms_table_BBC1D.tex}
The relationships with $c$ and Hubble residuals are presented in a different form in \fref{fig:quads}, using hexbinned heatmaps in the parameter space of environmental property and SN $c$, with bins shaded according to the mean Hubble residual of events in that bin. These plots show that the most homogeneous SN Ia sample with close to zero Hubble residual is in the lower left quadrants, indicating bluer SNe and low $\mstellar$ and/or blue $U-R$ regions.
\subsection{Comparison to \citet{BroutScolnic2021}} \label{BS20}
\citetalias{BroutScolnic2021} suggest that the dominant component of SN Ia intrinsic scatter is caused by variation in the total-to-selective extinction ratio ($R_{V}$) distribution as a function of host galaxy properties. They found that the Hubble residual trends with host $\mstellar$ were modelled well by considering the SNe $c$ distribution to be a two-component combination consisting of an intrinsic Gaussian distribution, and an extrinsic exponential $E(B-V)$ dust distribution. This extrinsic dust distribution is host galaxy $\mstellar$ dependent, with a Gaussian $R_V$ distribution, where mean $R_V = 2.75$ in low mass host galaxies and mean $R_V = 1.5$ in high mass hosts. The different $R_V$ values result in different effective colour-luminosity relationships either side of the mass step division point. \citetalias{BroutScolnic2021} suggest that the mass step is therefore primarily caused by a difference in dust properties for SNe Ia with different $c$. This interpretation is consistent with the finding in \citetalias{Kelsey2021}: it is physics that affects the SN colour that is driving the Hubble residual host galaxy correlations.
To compare our analysis with \citetalias{BroutScolnic2021}, we extend the study of SN $c$ for different host properties, by comparing the Hubble residuals with a finer binning of SN colour, rather than simply red ($c>0$) or blue ($c<0$). This follows \citetalias{BroutScolnic2021} Fig.~6.
\subsubsection{Global $\mstellar$}\label{GlobM}
First, we present the results with host galaxy $\mstellar$ (\fref{fig:likeBS20}). Overplotted is the SN Ia sample used in \citetalias{BroutScolnic2021}: a mostly independent, publicly available sample of spectroscopically classified SNe Ia with photometric light curves, consisting of a combination of data from the Foundation, PS1, SNLS, SDSS, CSP and CfA surveys\footnote{Foundation: \citet{Foley2018}, Pan-STARRS1 (PS1): \citet{Rest2014, Scolnic2018}, SuperNova Legacy Survey (SNLS): \citet{Betoule2014}, Sloan Digital Sky Survey (SDSS): \citet{Sako2011}, Carnegie Supernova Project (CSP): \citet{Stritzinger2010}, Harvard-Smithsonian Center for Astrophysics (CfA3+4): \citet{Hicken2009, Hicken2009b, Hicken2012}.} and DES3YR, with a redshift cut of $z < 0.6$ applied for consistency with our analysis. The two data sets -- DES5YR and \citetalias{BroutScolnic2021} -- generally follow similar trends, and thus we expect that the \citetalias{BroutScolnic2021} model will adequately describe the relationships between environmental properties and $c$ for our DES5YR sample. \fref{fig:likeBS20}(a), as in \citetalias{BroutScolnic2021} Fig.~6, shows little difference between the r.m.s. values for samples in high and low $\mstellar$ for the bluer SNe, but this difference increases for the red SNe, and is mirrored in the larger step sizes in the red bins. This increase in r.m.s. scatter and host $\mstellar$ step size towards the redder (right hand) end of the plot suggests that the overall $\mstellar$ step is driven by the red SNe.
We fit a variety of polynomial curves to the low and high $\mstellar$ data points by minimising the data--curve $\chi^2$, generating simple functions for the observed $c$-dependent $\mstellar$ Hubble residual relationships, so that the effect of these trends can be removed from the Hubble residuals and the remaining environmental property relationships uncovered. Similarly low $\chi^2$ values were found when fitting quadratic curves and when fitting two separate linear relations for positive and negative $c$, for both low and high $\mstellar$, and these linear fits resulted in similar remaining relationships once their trends were removed from the data. However, we proceed with the quadratic fits because they are smooth, continuous functions: there is no clear reason why the colour-luminosity relation would change dramatically at any particular $c$ value, so intuitively it is more likely to be a continuous relationship, and combining linear functions for different $c$ bins may be less realistic. As illustrated in \fref{fig:likeBS20}(b), these quadratic fits generate simple functions for the $c$-dependent $\mstellar$ Hubble residual relationships. By subtracting these curves from the Hubble residual of each SN in our sample, we correct for these observed $c$-dependent $\mstellar$ trends. As shown in \tref{table:fitremain_BBC1D}, this simple approximation of the \citetalias{BroutScolnic2021} dust model removes the global host galaxy mass step from our data ($0.001\pm0.013 \,\textrm{mag};\ 0.1\sigma$); however, we find remaining rest-frame $U-R$ steps of $0.025\pm0.012 \,\textrm{mag}\ (2.1\sigma)$ globally and $0.023\pm0.012 \,\textrm{mag}\ (1.9\sigma)$ locally when the $c$-dependent $\mstellar$ relation is removed, perhaps suggesting that the $\mstellar$ dust model is not the full picture, and should include or be fully based on the $U-R$ tracer instead (see \sref{GlobUR}).
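This fit-and-subtract procedure can be sketched as follows; for simplicity, the sketch fits unweighted quadratics directly to the individual SNe rather than minimising the $\chi^2$ against binned data points as done above.

```python
import numpy as np

def correct_colour_trend(c, resid, mask_low):
    """Fit separate quadratics in SN colour c to the Hubble residuals of the
    low- and high-property subsamples (e.g. split on Mstellar or U-R) and
    subtract them, mimicking the c-dependent correction described in the text.
    This is an unweighted least-squares toy, ignoring per-SN uncertainties."""
    corrected = np.empty_like(resid, dtype=float)
    for sel in (mask_low, ~mask_low):
        coeffs = np.polyfit(c[sel], resid[sel], deg=2)
        corrected[sel] = resid[sel] - np.polyval(coeffs, c[sel])
    return corrected
```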
\input{1D_Tables/fit+remainall_BBC1D.tex}
Whilst the remaining $U-R$ steps are small in our analysis, \citet{Roman2018} found a significant ($5\sigma$) remaining $U-V$ step when correcting first for the overall global mass step in their analysis (not dependent on $c$), suggesting that additional information can be provided by local properties when combined with global properties. Our results also agree with \citet{Galbany2022}, where $>2\sigma$ steps in sSFR and H$\alpha$ equivalent width remain once the mass step has been corrected for, whilst a $<2\sigma$ step in $\mstellar$ remains once the reverse is done. This suggests that sSFR and H$\alpha$ equivalent width (both related to the age of the stellar population) are better than $\mstellar$ at improving SN Ia standardisation.
\subsubsection{Global $U-R$} \label{GlobUR}
We repeated the above analysis, but starting with and fitting for the relationship between global rest-frame $U-R$ and $c$, instead of fitting for the global $\mstellar$ - $c$ dependent Hubble residual relationship. We split into \lq{low}\rq\ and \lq{high}\rq\ by splitting at $U-R = 1$, as motivated by \citetalias{Kelsey2021}. Again, quadratic functions fit the data best through a $\chi^2$ minimisation, which we subtracted from the Hubble residual for each SN to correct for $U-R$ - $c$ dependent Hubble residual relationships, as shown in panels (c) and (d) of \fref{fig:likeBS20}.
As shown in \tref{table:fitremain_BBC1D}, this approximate correction removes the global $U-R$ step from the data, and we find remaining mass steps of only $0.012\pm0.013$\,mag ($1\sigma$) for global $\mstellar$ and $0.010\pm0.012$\,mag ($0.8\sigma$) for local. These post-correction steps are smaller than the remaining $U-R$ steps after the global $c$-$\mstellar$ relation was removed, suggesting that a $U-R$ correction encompasses more of the overall Hubble residual vs host environment relationship than the $\mstellar$ correction, as seen in \citetalias{Kelsey2021}.
\subsubsection{Local corrections}
We repeat the corrections of \sref{GlobM} and \sref{GlobUR}, but for local properties instead of global, again presenting our results in \tref{table:fitremain_BBC1D}.
The $\geq3\sigma$ steps that remain in both local and global $U-R$ and in global $\mstellar$ when fitting for local $\mstellar$ are particularly interesting. Considerable steps remain once the trend with $c$ has been removed, suggesting that local mass may not be removing any trends, or is perhaps less correlated with the other parameters than expected; however, this disagrees with the trends shown in \citetalias{Kelsey2021}, which indicate that local and global mass are correlated (albeit with scatter). This finding suggests that local mass may not be linked to dust in the same way as suggested by \citetalias{BroutScolnic2021} for global mass. We note in particular that local mass is the only parameter with a $<1\sigma$ step for $c<0$ (\tref{table:c_split_steps_BBC1D}), so may not follow the same trends with $c$ as the other parameters. Local mass can be understood as a stellar density, tracing the population of old stars in the region, so may be linked to age. Further investigation of this finding is needed in future work, and may require better resolved local properties than those available using DES. Higher resolution may help to determine the location of dust in the host galaxy, either contained in the local circumstellar region around a SN, or more dispersed throughout the global host galaxy.
For local $U-R$, after fitting for the $c$-dependent relationship, steps of $<2\sigma$ remained in all other properties. This likely reflects the key result for host $U-R$: that a $U-R$ correction encompasses more of the dispersion than an $\mstellar$ correction. Within a 4\,kpc radius, local $U-R$ may not be truly \lq{local}\rq\ enough to show a clear difference compared to global $U-R$.
\section{Discussion} \label{discussion}
Similarly to \citetalias{Smith2020}, \citetalias{BroutScolnic2021} and \citetalias{Kelsey2021}, we find a $\sim3\sigma$ significant difference between global $\mstellar$ step sizes when splitting into subsamples based on SN $c$. The data agrees well with the dust explanation of \citetalias{BroutScolnic2021} and thus it is likely that the $\mstellar$ Hubble residual step differences for $c$ subsamples are due to the role of dust.
However, with the larger sample afforded by the DES5YR photometric sample, we see a different result to \citetalias{Kelsey2021} with regards to global $U-R$ steps when splitting based on $c$. \citetalias{Kelsey2021} found $\sim3\sigma$ differences in step sizes in global $U-R$ between red and blue SNe, as opposed to the smaller $\sim2\sigma$ difference seen with this DES5YR sample.
By fitting for $c$-dependent global host $\mstellar$ Hubble residual relationships in an approximation of the \citetalias{BroutScolnic2021} dust model, we are able to effectively remove the mass step from the data. Such a method has been introduced as a \lq{4D}\rq\ bias correction in \citet{Popovic2021}, and has been shown to result in a $\sigma w_{\textrm{sys}} \sim 0.005$ \citep{Popovic2022}. However, in this analysis we found intriguing remaining $2\sigma$ global and local $U-R$ steps once the mass step has been removed, indicating that, whilst $\mstellar$-based dust modelling may explain the mass step \citepalias{BroutScolnic2021}, it may not fully explain the SN luminosity dispersion. Further investigation of this tentative result is needed.
Despite $\mstellar$ and $U-R$ being highly correlated, our analysis shows that correcting for a $c$-dependent global $U-R$ relation removed the most Hubble residual dispersion across environmental properties. As $U-R$ is connected to stellar age, this is expected, given that an older stellar population is one in which a larger fraction of the hotter stars have had time to explode and create dust. This result motivates further work into integrating mass and age simultaneously into scatter models and bias corrections, with initial investigations presented in \citet{Wiseman2022}.
\subsection{Impact on Cosmology}
Based on this analysis, our suggestion for future cosmology analyses is to correct for a global $c$-dependent $U-R$ effect as this removes the most dispersion in all other environmental properties. Alternatively, corrections could be combined to remove more dispersion than one correction alone. To reduce potential bias in the standardisation, these should be simultaneously fit with the other light-curve standardisation parameters \citep{Rose2021, Dixon2021}.
However, given the homogeneity of blue ($c < 0$) supernovae in low mass or locally blue environments (as shown in \tref{table:c_split_rms_BBC1D} and \fref{fig:quads}), it may be simplest and of most immediate value to use these SNe in cosmology \citep{Kelsey2021, Gonzalez-Gaitan2020}, mitigating the need for environment correction. This is not a new suggestion, and there is a wealth of information pointing to the benefits of such a cut. For example, \citet{Rigault2013} postulate that SNe Ia from locally passive environments are the cause of the biases they observed due to their higher scatter, and they suggest adding a selection cut to only include those in locally star forming (i.e. blue) environments for cosmology. This is emphasised by \citet{Childress2014}, \citet{Kelly2015}, \citet{Henne2017} and \citet{Kim2018} who all find consistent results, and make the same conclusions about selecting star forming galaxies. \citet{Graur2015} and \citet{Kim2018} both suggest that the scatter is further constrained by limiting to low-mass ($\leq 10^{10} M_\odot$) globally star-forming host galaxies. In another test, through the analysis of ejecta velocities, \citet{Wang2009}, \citet{FoleyKasen2011} and \citet{Siebert2020} all find that the SN scatter can be reduced by using lower-velocity, bluer supernovae. By combining all of this knowledge from previous analyses, and the confirmations from \citetalias{Kelsey2021}, \citet{Gonzalez-Gaitan2020} and this study, we should use a subset of blue ($c < 0$) SNe in low mass/blue/star-forming environments to provide the most homogeneous sample for future cosmology.
\section{Summary} \label{summary}
By expanding the findings of our previous study into the relationship between SN Ia host environment and $c$ \citepalias{Kelsey2021} to a larger sample consisting of SNe Ia from DES5YR, we have provided more weight to suggestions for future cosmological analyses, and have added our perspective to the historic mass vs age debate.
From our analysis our key findings are as follows:
\begin{enumerate}
\item Hubble residual steps in environmental properties are consistent with prior analyses, with values of $\sim5\sigma$ for global $\mstellar$, and for global and local rest-frame $U-R$. The local mass step is slightly smaller at $\sim4\sigma$.
\item When splitting our data into subsamples based on $c$, the largest, and most statistically significant, differences in Hubble residual \lq{step}\rq\ are associated with host \mstellar\ and local $U-R$, agreeing with \citetalias{Kelsey2021}.
\item As in \citetalias{Kelsey2021} and \citet{Gonzalez-Gaitan2020}, we observe the lowest rms scatter, and thus highest homogeneity for blue ($c < 0$) supernovae in low mass or blue environments. This suggests that such a subsample of supernovae may provide the best sample for use in future cosmological analyses.
\item Despite removing the mass step, intriguing $2\sigma$ steps in global and local $U-R$ remain after fitting for a simple approximation of the \citetalias{BroutScolnic2021} dust model. This suggests that current dust modelling may not fully explain the dispersion in SN luminosity.
\item The dispersion is minimised when correcting for a $c$-dependent global $U-R$ relation, implying that $U-R$ provides different information about the environment of SNe Ia than $\mstellar$, and thus may be more closely linked to dust.
\end{enumerate}
This analysis has important cosmological implications, which should be taken into account in the next generation of cosmological analyses. On one hand, the homogeneity of blue SNe in low mass or blue environments strengthens the argument that they are the best subsample to use for precision cosmology, so it may be simplest to use those alone. On the other hand, to gain insight into the true astrophysical cause of the SNe Ia dispersion, combining environmental corrections or studying the impact of dust on galaxy $U-R$ may reveal the true relationships between SNe Ia and their environments.
\input{acknowledgements.tex}
\input{data-availability.tex}
\bibliographystyle{mnras}
\bibliography{biblio} %
\input{affiliations.tex}
\input{appendix.tex}
\bsp %
\label{lastpage} |
Title:
Low redshift calibration of the Amati relation using galaxy clusters |
Abstract: In this work, we use angular diameter distances of 38 galaxy clusters with
joint X-ray/SZE observation to circumvent the circularity problem in the Amati
relation for Gamma-ray Bursts (GRBs). Assuming the validity of cosmic-distance
duality relation, we obtain the luminosity distance from the cluster angular
diameter distance and use that to calculate the isotropic equivalent energy of
two different GRB datasets, after restricting the GRB redshift range to
$z<0.9$. We then check the validity of the Amati relation for both these
datasets. The best-fit Amati relation parameters using galaxy cluster distances
as low-redshift anchors are consistent with a previous estimate for the same
dataset. The intrinsic scatter which we obtain for the two datasets is about
45% and 15%, and is comparable with that found by other distance anchors used
to vet the Amati relation.
| https://export.arxiv.org/pdf/2208.00895 |
\newcommand{\bthis}[1]{\textcolor{black}{#1}}
\newcommand{\apjl}{Astrophys. J. Lett.}
\newcommand{\apjs}{Astrophys. J. Suppl. Ser.}
\newcommand{\aap}{Astron. \& Astrophys.}
\newcommand{\nar}{New Astronomy Reviews}
\newcommand{\aj}{Astron. J.}
\newcommand{\araa}{Ann. Rev. Astron. Astrophys. } %
\newcommand{\mnras}{Mon. Not. R. Astron. Soc.}
\newcommand{\ssr}{Space Science Revs.}
\newcommand{\apss}{Astrophysics \& Space Sciences}
\newcommand{\jcap}{JCAP}
\newcommand{\pasj}{PASJ}
\newcommand{\pasp}{PASP}
\newcommand{\pasa}{Pub. Astro. Soc. Aust.}
\newcommand{\physrep}{Phys. Rep.}
\renewcommand{\arraystretch}{2.5}
\title{Low redshift calibration of the Amati relation using galaxy clusters}
\author{Gowri \surname{Govindaraj}}\altaffiliation{E-mail:ep20btech11007@iith.ac.in}
\author{Shantanu \surname{Desai}}
\altaffiliation{E-mail: shntn05@gmail.com}
\affiliation{Department of Physics, Indian Institute of Technology, Hyderabad, Telangana-502284, India}
\section{Introduction}
Gamma-ray bursts (GRBs) are short-duration single-shot transients having energies in the keV-GeV energy regime~\cite{Kumar,Luongo21}, which were first discovered in the 1970s~\cite{firstgrb}. They are broadly classified into two categories: short and long, depending on whether $T_{90}$ is less than or greater than two seconds~\cite{Kouv93}. Long-duration GRBs have been associated with core-collapse supernova~\cite{Bloom} and short GRBs with binary neutron star mergers~\cite{Nakar}. However, there are still complex issues associated with the classification and exceptions to the above dichotomy have also been found (see Refs.~\cite{Kulkarni,Bhave} and references therein).
GRBs are located at cosmological distances, with the maximum observed redshift greater than nine~\cite{Levan}. However, a distinct time dilation signature in the GRB light curves (a signature of cosmological expansion) is yet to be unequivocally demonstrated~\cite{Singh}.
For more than two decades, GRBs have also been proposed as standard candles using bivariate as well as fundamental-plane-based correlations between different GRB observables in both the prompt and the afterglow phase~\citep{Ito,Delvecchio,DainottiAmati,Moresco,Luongo21,Dainotti22,PradyumnaGRB}.
One of the most widely studied relations among these is the Amati relation, which posits a tight relation between the spectral
peak energy in the GRB rest-frame ($E_p$) and the isotropic equivalent radiated energy ($E_{iso}$)~\cite{Amati02,Amati06}. An improved variant of the Amati relation has also been recently proposed using Gaussian Copula~\cite{Liu22}.
However, given the paucity of GRBs at low redshifts, all the GRB correlation-based studies can only be done after assuming a cosmological model~\cite{Collazzi,Amati19}. Other systematics related to the Amati relation have been recently reviewed in Refs.~\cite{Moresco,Ito}.
To get around the above circularity problem in the Amati relation, two approaches have been used in literature. One way is to simultaneously constrain both the GRB correlation and cosmological model parameters~\cite{Amati08,Khadka20,Khadka21}. Alternately,
a number of ancillary probes have also been used to get model-independent estimates of distances corresponding to the GRB redshift, such as Type 1a SN~\cite{LiangAmati,Kodama08,Demianski17,Liu22}, Cosmic chronometers~\cite{Montiel,Amati19,LuongoOHD,Luongo}, BAO $H(z)$ measurements~\cite{Luongo}, X-ray and UV luminosities of quasars~\cite{Dai21} and \rthis{also cosmography using BAO and Type 1a SN~\cite{Luongocosmo}}. In a similar vein, we use the angular diameter distances to galaxy clusters to calibrate the low redshift end of the Amati relation, without relying on an underlying cosmological model.
Galaxy clusters are the most massive virialized objects
in the universe~\cite{Allen2011,Borgani12,Vikhlininrev} and have proved to be wonderful laboratories for
studying a wide range of topics from galaxy evolution to Cosmology to Fundamental Physics~\cite{Allen2011,Desai18,Boraalpha,BoraDesaiCDDR,Mendonca,PradyumnaRAR,Gopika,BoraDM,Chiu22}. However, the redshift range of galaxy clusters is not as large as that of GRBs: the redshift of the most distant galaxy cluster is currently less than two. Therefore, we can only self-consistently test the validity of the Amati relation at low redshifts ($z \leq 1$). Such a study could also be used to probe the redshift evolution of the Amati relation~\cite{Dai21}.
This manuscript is structured as follows. We describe the GRB and galaxy cluster data used for our analysis in Sect.~\ref{sec:data}. Our results are discussed in Sect.~\ref{sec:results}. We conclude in Sect.~\ref{sec:conclusions}.
\section{Datasets}
\label{sec:data}
\subsection{GRB datasets}
We carry out our analyses using two homogeneous GRB datasets compiled in the literature. The first is the A220 dataset~\cite{Khadka21} and the other is the dataset compiled in Refs.~\cite{Demianski17,Demianski2}, which we refer to as the D17 dataset. The A220 dataset consists of 220 long GRBs (collated in Tables 7 and 8 in Ref.~\cite{Khadka21}). These span the redshift range $0.0331 \leq z \leq 8.20$. The A220 dataset comes with the GRB redshift, peak energy in the rest frame ($E_p$), and bolometric fluence ($S_{bol}$).
The D17 dataset consists of 162 long GRBs in the redshift range $0.125 \leq z \leq 9.3$. For each GRB, the dataset provides its redshift, distance modulus (using the closest Type 1a SN), $E_p$, and $E_{iso}$. The dataset is chosen based on joint detections by SWIFT/BAT and Fermi/GBM or Konus-WIND. Details of the selection criteria for this dataset are outlined in Ref.~\cite{Demianski17}.
Both the A220 and D17 datasets also contain $1\sigma$ error bars for each of the aforementioned quantities. We also note that 24 GRBs are common to the two datasets.
Most recently, the low redshift end of the Amati relation ($0.125 \leq z \leq 1$) was also studied in another work~\cite{Dai21} (D21 hereafter).
\subsection{Galaxy cluster dataset}
The galaxy cluster dataset we use is the catalog of 38 clusters with joint Sunyaev-Zeldovich (SZ)~\cite{SZ} and X-ray observations compiled in the redshift range $0.14 \leq z \leq 0.89$~\cite{Bona06}. The SZ observations were carried out with BIMA and OVRO. The X-ray observations were carried out using the Chandra X-ray observatory through the guaranteed time program allocated to Leon van Speybroeck. More details of the X-ray and SZ observations and data reduction can be found in Refs.~\cite{Bona06,Bona04}. The angular diameter distances (and associated errors) were estimated by assuming a double-$\beta$ density profile~\cite{Mohr99} for the gas density distribution and spherical geometry for the cluster. This galaxy cluster dataset has been widely used for a variety of model-independent cosmological tests, such as the cosmic distance-duality relation (CDDR)~\cite{Holanda10, Holanda11,Liang13,Santos}, tests of $\Lambda$CDM vs $R_h=ct$~\cite{Melia}, constraints on dark energy~\cite{ChenRatra}, etc. In this work, we use the angular diameter distances of these clusters along with the CDDR to get the luminosity distance corresponding to the redshift of a particular GRB.
\section{Analysis and Results}
\label{sec:results}
For the analysis done in this work, we rewrite the Amati relation as a linear regression relation using the logarithms of $E_{iso}$ and $E_p$, similar to D21\footnote{Note that in some other works~\cite{Khadka21}, the regression variables $x$ and $y$ are flipped compared to the convention used here.}:
\begin{equation}
y = a x + b
\label{eq:amati}
\end{equation}
where $y \equiv \log (E_p/\mathrm{keV})$ and $x \equiv \log (E_{iso}/\mathrm{erg})$.
Note that $E_p$ is related to the observer-frame peak energy by $E_p = E_p^{obs}(1+z)$. Also, $E_{iso}$ is related to the bolometric fluence $S_{bol}$ according to:
\begin{equation}
E_{iso}=4 \pi d_L^2 S_{bol} (1+z)^{-1},
\label{eq:eiso}
\end{equation}
where $d_L$ is the luminosity distance corresponding to the redshift ($z$).
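As a concrete illustration of these conversions, a minimal sketch follows; the numerical values (redshift, fluence, peak energy, distance) are hypothetical and chosen only for illustration, not taken from the A220 or D17 tables:

```python
import math

# Hypothetical illustrative values (not from the A220 or D17 tables).
z = 0.5                 # GRB redshift
E_p_obs = 200.0         # observer-frame peak energy [keV]
S_bol = 1.2e-5          # bolometric fluence [erg / cm^2]
d_L_cm = 8.6e27         # luminosity distance [cm] (~2.8 Gpc)

# Rest-frame peak energy: E_p = E_p^obs * (1 + z)
E_p = E_p_obs * (1.0 + z)

# Isotropic-equivalent energy: E_iso = 4 pi d_L^2 S_bol / (1 + z)
E_iso = 4.0 * math.pi * d_L_cm**2 * S_bol / (1.0 + z)

# Regression variables of the linear Amati relation:
# y = log10(E_p / keV), x = log10(E_iso / erg)
y = math.log10(E_p)
x = math.log10(E_iso)
```

The pair $(x, y)$ from each burst, together with its propagated errors, then enters the regression described below.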
The first step in the analysis involves obtaining a model-independent estimate of $d_L$ from galaxy cluster observations. \citet{Bona06} provide the angular diameter distance ($D_A$) and redshift ($z$) data for 38 clusters using joint X-ray/SZ observations. With these data, we carry out a non-parametric reconstruction of $D_A$ as a function of $z$ using Gaussian Process Regression (GPR). A Gaussian process is a generalization of the Gaussian distribution to function space, characterized by a mean and a covariance function (usually called the kernel function)~\cite{seikel12}. More details about GPR can be found in our previous works~\cite{HS,BoraDesaiCDDR,Borafg,BoraDM,Agrawal21,Mendonca,Mendonca2,Bora22}. For this analysis, we use the GPR code titled {\tt HCGP}~\cite{Camera} and employ the Radial Basis Function kernel. The GPR reconstruction of $D_A$, along with the associated $1\sigma$ error bars, can be found in Fig.~\ref{fig1}. Although GPR can also be used for extrapolation, in this work we restrict our analyses to subsets of the GRB datasets that span the same redshift range as our cluster dataset, so as to avoid any inaccuracy from extrapolation. This also allows us to compare directly with D21, who studied the evolution of the Amati relation by splitting the D17 dataset into four redshift subsamples.
Once we have reconstructed $D_A$ at any $z$, we can estimate $D_L$ using the CDDR, $D_L = D_A (1+z)^2$. The assumptions behind the CDDR can be found in Ref.~\cite{BoraDesaiCDDR} and references therein. The CDDR has been validated in a model-dependent fashion in a number of works~\cite{holanda19,BoraDesaiCDDR}.
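A minimal sketch of this reconstruction step, using scikit-learn's Gaussian process tools in place of the {\tt HCGP} code, with mock cluster data standing in for the \citet{Bona06} catalog (the redshift-distance relation and error bars below are invented for illustration):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Mock (z, D_A) data standing in for the 38 X-ray/SZ clusters;
# the functional form and error bars are invented.
rng = np.random.default_rng(0)
z_cl = np.sort(rng.uniform(0.14, 0.89, 38))
sigma_DA = 50.0                                       # assumed 1-sigma error [Mpc]
D_A_obs = 3000.0 * z_cl / (1.0 + z_cl) + rng.normal(0.0, sigma_DA, 38)

# Non-parametric GPR reconstruction of D_A(z) with an RBF kernel; the
# measurement variance enters through alpha (expressed in normalized-y units).
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                               alpha=(sigma_DA / D_A_obs.std()) ** 2,
                               normalize_y=True)
gpr.fit(z_cl[:, None], D_A_obs)

# Evaluate D_A at a GRB redshift and apply the CDDR: D_L = D_A (1 + z)^2.
z_grb = 0.5
D_A_rec, D_A_err = gpr.predict(np.array([[z_grb]]), return_std=True)
D_L = D_A_rec[0] * (1.0 + z_grb) ** 2
```

The reconstructed $D_L$ at each GRB redshift then feeds directly into the $E_{iso}$ computation.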
For the D17 dataset, we recalculated $E_{iso}$ from Eq.~\ref{eq:eiso} using the $D_L$ estimated from GPR and the distance modulus ($\mu$) provided in~\cite{Demianski17}. For the A220 dataset, $E_{iso}$ was evaluated directly from Eq.~\ref{eq:eiso} using the $S_{bol}$ provided for that dataset. Using these values for $E_{iso}$ and the original $E_{p}$, we then find the best-fit parameters of the Amati relation using Bayesian regression. For this purpose,
we use the following likelihood function based on Orthogonal Distance Regression, which was previously used to analyze the Baryonic Faber-Jackson relation~\cite{Tian21} (see also~\cite{Lelli19})\footnote{Some previous works on testing the Amati relation~\cite{Dai21,Moresco,Demianski17} have used a different likelihood (see, for example, Eq. 11 in~\cite{Dai21}). However, if one maximizes such a likelihood, its parameters will diverge to infinity. This equation is usually attributed to~\cite{Reichart}. However, we could not find this equation in~\cite{Reichart}. Therefore, there must be some typographical error in the expression for the likelihood in the above works.}
\begin{eqnarray}
-2\ln L &=& \sum_i \ln \left(2\pi\sigma_i^2\right) + \sum_i \frac{[y_i-(ax_i+b)]^2}{\sigma_i^2 (a^2+1)}
\label{eq:eq8} \\
\sigma_i^2 &=& \frac{\sigma_{y_i}^2+a^2\sigma_{x_i}^2}{a^2+1}+\sigma_{s}^2
\end{eqnarray}
where $x$ and $y$ have the same meaning as in Eq.~\ref{eq:amati}; $\sigma_x$ and $\sigma_y$ denote the errors in $\log (E_{iso})$ and $\log (E_p)$, obtained using error propagation; and $\sigma_{s}$ denotes the intrinsic scatter, which characterizes the tightness of the relation. We used uniform priors on $a$ and $b$, and a log-uniform prior on $\sigma_{s}$: $a \in [0,1]$, $b \in [-30,-10]$, $\sigma_s \in [10^{-5}, 1]$. We sample the likelihood using the {\tt emcee} sampler~\cite{emcee}.
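Written out directly, this likelihood might look as follows. The toy data below are generated around a fiducial slope purely for illustration, and in practice the function would be handed to the {\tt emcee} sampler rather than evaluated at fixed parameter values:

```python
import numpy as np

def neg2_log_like(theta, x, y, sx, sy):
    """-2 ln L of the orthogonal-distance-regression likelihood above,
    with theta = (a, b, sigma_s)."""
    a, b, sigma_s = theta
    sigma2 = (sy**2 + a**2 * sx**2) / (a**2 + 1.0) + sigma_s**2
    resid2 = (y - (a * x + b)) ** 2 / (a**2 + 1.0)
    return np.sum(np.log(2.0 * np.pi * sigma2) + resid2 / sigma2)

# Toy data drawn around y = 0.4 x - 18 with intrinsic scatter 0.15 dex.
rng = np.random.default_rng(1)
x = rng.uniform(50.0, 54.0, 40)                     # log10(E_iso / erg)
y = 0.4 * x - 18.0 + rng.normal(0.0, 0.15, 40)      # log10(E_p / keV)
sx = np.full(40, 0.05)
sy = np.full(40, 0.05)

# A badly wrong slope should be strongly disfavoured relative to the true one.
val_true = neg2_log_like((0.4, -18.0, 0.15), x, y, sx, sy)
val_off = neg2_log_like((0.9, -18.0, 0.15), x, y, sx, sy)
```

Note that both the perpendicular residual and the effective variance carry the $(a^2+1)$ projection factor, which is what distinguishes this orthogonal-distance form from an ordinary vertical-residual likelihood.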
The marginalized 68\%, 90\%, and 95\% credible intervals can be found in Fig.~\ref{fig2} and Fig.~\ref{fig3} for the D17 and A220 GRB datasets with redshifts in the same range as the cluster catalog in~\cite{Bona06}. The scatter plots for $E_p$ versus $E_{iso}$ using the updated values for $d_L$ for both datasets are shown in Fig.~\ref{fig4}. A tabular summary of our results can be found in Table~\ref{table1}.
For the D17 dataset, the best-fit value of $a$ is equal to $0.35^{+0.15}_{-0.13}$ with an intrinsic scatter of 15\%. Our value of $a$ is consistent within $1\sigma$ with the best-fit value found in D21 for the lowest redshift sample (cf. Table 1 of D21). The corresponding scatter obtained by D21 (albeit with a different likelihood) for the D17 dataset using the lowest redshift sample is equal to 26\%. Therefore, our scatter is comparable to that obtained in D21.
For the A220 dataset, we find $a= 0.44^{+0.07}_{-0.09}$ with an intrinsic scatter of 45\%. No other work has analyzed only the low redshift A220 subsample and hence a direct comparison is not possible. However, \citet{Khadka21} find that the value of $a$ (in our notation) for the full A220 dataset (after doing a simultaneous fit to the $\Lambda$CDM model) is given by $a=0.76\pm 0.044$, which is about $3.5\sigma$ discrepant with our analysis using the low-redshift subsample. The corresponding scatter obtained by \citet{Khadka21} for the A220 dataset (using the full redshift sample), obtained from a simultaneous fit to Cosmology and the Amati relation, is about 46\%. Most recently, a study of the Amati relation using the A220 dataset was also carried out in \citet{Liu22} using the luminosity distance from Pantheon Type 1a SN, and the intrinsic scatter for the full A220 sample was found to be about 50\%. Therefore, the scatter we obtained for this sample is comparable with the above estimates. Note however that the aforementioned works do not use the same likelihood as the one in this work.
Therefore, the intrinsic scatter for the Amati relation, which we find using the low redshift end of the A220 and D17 samples by using galaxy clusters as low redshift distance anchors, is equal to 45\% and 15\%, respectively. This scatter is comparable to that obtained using other model-independent probes as distance anchors to address the circularity problem in GRBs. However, given this somewhat large scatter, we conclude that the Amati relation is not as tight as some of the other fundamental plane GRB relations, when determined in a model-independent fashion. (See Table 2 of Ref.~\cite{PradyumnaGRB} and references therein for a comparison to other GRB correlations.) Hence, it would be premature to use the Amati relation as a stand-alone probe of precision Cosmology.
\begin{table}[h]
\begin{tabular}{ |c| c|c| c| }
\hline
\textbf{Dataset} & \textbf{a} &\textbf{b} & \textbf{Scatter} \\
\hline
A220 &$0.44^{+0.07}_{-0.09}$ &$-20.22^{+4.52}_{-3.76}$ & $0.45^{+0.091}_{-0.066}$ \\
D17 & $0.35^{+0.15}_{-0.13}$ &$-16.19^{+6.91}_{-7.81}$ &$0.15^{+0.018}_{-0.030}$\\
\hline
\end{tabular}
\caption{\label{table1}Summary of our results for the Amati relation using both the GRB datasets.}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
In this work, we have used galaxy cluster distances to address the circularity problem in the Amati relation due to the paucity of GRBs at low redshifts. For this purpose, we use the $D_A$ of 38 galaxy clusters ($z<0.89$) obtained in a model-independent fashion using joint X-ray/SZ observations. From this, we obtain $D_L$ as a function of redshift, after positing the CDDR and using the GPR-based non-parametric regression technique. This $D_L$ was used to reconstruct $E_{iso}$ for two different GRB datasets (A220 and D17), using the low redshift GRB subset ($z<0.9$) in order to ensure that the redshift range overlaps with that of the galaxy cluster sample. Therefore, using galaxy clusters as anchors in this way, we can test the efficacy of the Amati relation at low redshift and probe its redshift evolution.
The best-fit credible intervals for the Amati relation parameters for the two GRB datasets used in this work can be found in Fig.~\ref{fig2} and Fig.~\ref{fig3}. We find that the best-fit parameters for one of the GRB datasets are consistent within $1\sigma$ with the results in D21, which analyzed the same dataset using quasar UV and X-ray fluxes as distance anchors instead of galaxy clusters.
However, the intrinsic scatter which we obtain for both the datasets is quite large, viz. 45\% and 15\%. Although this scatter is comparable to that obtained using other cosmological probes as distance proxies or via a joint cosmology fit to the same datasets, its large value implies that the Amati relation cannot be used as a stand-alone precision probe of Cosmology.
\rthis{However, the Amati relation can still be used to test and constrain models of prompt emission of GRBs. For example, the Amati relation can be reproduced from the internal shock model of prompt emission~\cite{Nava}; conversely, this relation was also used to constrain the dynamics of the flow and the Lorentz factor~\cite{Nava}. A variety of studies have shown that the Amati relation can be explained based on the viewing angle in various jet models~\cite{Levinson}. This relation can also be explained within the alternative Cannonball model of GRBs~\cite{Dado}. A comprehensive review of constraints on various prompt emission models from the Amati relation can be found in~\cite{Dai18,Ito}. The Amati relation can also be used to discriminate between different GRB classes and understand their nature and differences~\cite{Amati06,QinChen}.
However, for all these studies one must ensure that these are not affected by selection effects, since bursts with a high $E_{peak}$ and a relatively low $E_{iso}$ are more likely to remain undetected and be under-represented in observed samples~\cite{DainottiAmati}.}
On the GRB side, more breakthroughs should come from next generation missions such as SVOM~\cite{SVOM} and THESEUS~\cite{Theseus}. On the galaxy cluster side, one should expect a large number of additional X-ray/SZ observations from the next generation missions such as Athena~\cite{Athena} and Simons Observatory~\cite{Simons}. More precise tests of the Amati relation using GRBs in conjunction with galaxy clusters should therefore be possible within this decade.
\section*{Acknowledgments}
\rthis{We are grateful to the anonymous referee for useful comments and feedback on the manuscript.}
\bibliography{main} |
Title:
The velocity distribution of outflows driven by choked jets in stellar envelopes |
Abstract: Many stripped envelope supernovae (SNe) present a signature of high-velocity
material responsible for broad absorption lines in the observed spectrum. These
include SNe that are associated with long gamma-ray bursts (LGRBs) and
low-luminosity GRBs (llGRBs), and SNe that are not associated with GRBs.
Recently it was suggested that this high velocity material originates from a
cocoon that is driven by a relativistic jet. In LGRBs this jet breaks out
successfully from the stellar envelope, while in llGRBs and SNe that are not
associated with GRBs the jet is choked. Here we use numerical simulations to
explore the velocity distribution of an outflow that is driven by a choked jet
and its dependence on the jet and progenitor properties. We find that in all
cases where the jet is not choked too deep within the star, the outflow carries
a roughly constant amount of energy per logarithmic scale of proper velocity
over a wide range of velocities, which depends mostly on the cocoon volume at
the time of its breakout. This is a universal property of jets driven outflows,
which does not exist in outflows of spherically symmetric explosions or when
the jets are choked very deep within the star. We therefore conclude that jets
that are choked (not too deep) provide a natural explanation to the fast
material seen in the early spectra of stripped envelope SNe that are not
associated with LGRBs and that properties of this material could reveal
information on the otherwise hidden jets.
| https://export.arxiv.org/pdf/2208.14459 |
\begin{keywords}
stars: jets -- gamma-ray burst: general -- supernovae: general -- hydrodynamics
\end{keywords}
\section{Introduction}
Both long and short GRB jets have to cross a significant amount of matter (the stellar atmosphere for long GRBs and the merger's ejecta in short ones) before producing the observed $\gamma$-rays.
This understanding has led to great interest in jet propagation within surrounding matter and the question was explored both analytically \citep[e.g.,][]{Blandford_Rees1974, Begelman_Cioffi1989, Meszaros_Waxman2001, Matzner2003, Lazzati_Begelman2005, Bromberg2011} and numerically \citep[e.g.,][]{Marti+1995, Marti+1997, Aloy+2000, macfadyen_supernovae_2001, Reynolds+2001, Zhang+2004, Mizuta+2006, Morsony+2007, Wang+2008, Lazzati+2009, Mizuta+2009, Morsony+2010, Nagakura+2011, Lopez+2013, Ito+2015, Lopez+2016, Harrison2018}.
This, naturally, raises the possibility that some jets are ``choked" during their propagation and are unable to break out of the surrounding dense medium.
The observed temporal distributions of both long \citep{Bromberg+2012} and short \citep{Moharan_Piran2017} GRBs suggest that this happens in both types of events and there are indications that this also happens in some Supernovae \citep{Piran2019}.
In cases where the jet does not emerge, we may still observe the signature of the cocoon that forms.
First, the breakout of the shock driven by the cocoon produces a bright flash.
For example, a cocoon breakout is most likely the origin of low-luminosity GRBs ({\it ll}GRBs) \citep{Kulkarni1998, macfadyen_supernovae_2001, Tan+2001, campana_association_2006, Wang+2007, waxman_grb_2007, katz_fast_2010, Nakar_Sari2012,Nakar2015}.
These types of GRBs are rarely observed; however, when their low luminosity is taken into account, they turn out to be more numerous than regular LGRBs \citep{Soderberg+2006}.
Another signature arises from the fast cocoon material that engulfs the star once the hot cocoon material breaks out and spreads.
Specifically, this material leads to very broad absorption lines that are visible as long as it is optically thick \citep{Piran2019}.
Such lines have been observed in several SNe, some accompanied by {\it ll}GRBs \citep{Galama+1998, Iwamoto+1998, Modjaz+2006, Mazzali+2008, Bufano+2012, Xu+2013, Ashall+2019,Izzo_et_al_2019} and others without \citep{Mazzali+2000, Mazzali+2002, Mazzali+2009}.
Finally, the cooling emission of the cocoon will also generate a potentially detectable UV-optical transient on time scales of hours to days \citep{nakar_piran2017}.
The important signature that helps determine the origin of the broad absorption lines is the energy-velocity distribution of the fast-moving material.
Regular spherical explosions result in a very steep distribution with roughly $\de E(v)/\de \ln v \propto v^{-5}$ \citep[e.g.,][]{nakar_sari2010}.
However, when a jet is involved in the explosion this distribution is expected to be much shallower with much more energy at high velocities.
Recently, \cite{eisenberg2022} have shown that when the jet is successful the cocoon generates a unique flat energy-velocity distribution with $\de E/\de \ln \Gamma\beta \propto {\rm const.}$ over a wide range of velocities, from sub- to mildly relativistic, where $\beta=v/c$ and $\Gamma$ is the corresponding Lorentz factor.
They also found that, when the jet is choked, it leaves a unique signature of a flat energy-velocity distribution.
However, in the case of choked jets the flat distribution covers a range of velocities that is narrower than that of outflows driven by successful jets.
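The contrast between these two behaviours can be illustrated with a schematic binning of mock outflow elements in logarithmic bins of proper velocity (the element energies below are illustrative power laws, not simulation output):

```python
import numpy as np

def energy_per_log_velocity(gb, E, nbins=10):
    """Histogram the element energies E in logarithmic bins of the proper
    velocity gb = Gamma*beta, returning dE/d ln(Gamma beta) per bin."""
    hist, edges = np.histogram(np.log(gb), bins=nbins, weights=E)
    return hist / np.diff(edges), edges

# Mock homologous outflow: 1000 fluid elements, uniform in log(Gamma beta).
gb = np.logspace(-1.5, 0.0, 1000)

# Spherical-like case: element energies falling as (Gamma beta)^-5 give a
# steep distribution; a jet-driven outflow gives a roughly flat one.
spec_steep, _ = energy_per_log_velocity(gb, gb**-5)
spec_flat, _ = energy_per_log_velocity(gb, np.ones_like(gb))
```

In an actual analysis the weights would be the energies of the simulation fluid elements once the outflow is homologous, and the breadth of the flat segment would encode where the jet was choked.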
Motivated by these results we examine here in detail the energy-velocity distributions of outflows driven by different choked jets, focusing on the relation between the properties of the choked jet and the final energy-velocity distribution of the outflow after it becomes homologous.
For our study we use a large set of 2D relativistic hydrodynamical simulations.
We consider explosions that are driven by choked jets in which we vary the opening angle and the engine working time of the jet as well as the structure of the progenitor.
We follow the simulations until the entire outflow becomes homologous and examine what is the relation between these properties and the outflow energy-velocity distribution.
The paper is structured as follows.
In Section \ref{sec: methodology} we describe the numerical procedure adopted for the simulations.
In Section \ref{subsec: sim_setup} we describe the code choice and the composite mesh structure adopted, while in Section \ref{subsec: ics} we report in detail the setup for the stellar and interstellar environment and the initial conditions for the relativistic jet.
Numerical aspects are discussed in two appendices: a resolution study is described in Appendix \ref{sec: appendix A}, and in Appendix~\ref{sec: appendix B} we explore the effect of the choice of the numerical smoothing function for the stellar density profile.
In Section \ref{sec: results} we explore the results of our set of simulations.
We summarize our findings and consider the implications for observations in Section \ref{sec: conclusions}.
\section{Methodology}
\label{sec: methodology}
\subsection{Simulation Setup}
\label{subsec: sim_setup}
Our simulations are performed using the open source massively parallel multidimensional relativistic magneto-hydrodynamic code {\textsc{pluto}} (v4.3) \citep{Mignone2007}.
The code uses a finite-volume, shock-capturing scheme designed to integrate a system of conservation laws where the flow quantities are discretized on a logically rectangular computational grid enclosed by a boundary.
We use the special relativistic hydrodynamics module in 2D cylindrical coordinates.
We perform our calculations using a parabolic reconstruction scheme combined with a third-order Runge-Kutta time stepping.
We also force the code to reconstruct the 4-velocity vectors at each time step.
The 2D simulations enable us to reach high resolution with reasonable computational resources.
3D simulations carried out by \citet{Harrison2018} suggest a similar generic evolution of the jet for the same parameters.
The propagation of the outer cocoon and the jet head are similar both in 2D and 3D simulations and the differences for the velocity distribution of the system are negligible - especially for choked jets - as shown in \citet{eisenberg2022}.
The main difference arises in the morphology of the jet head, which in 2D simulations is affected by a plug at the head front.
This plug diverts some of the jet elements sideways, where they dissipate their energy in oblique shocks, but it is irrelevant for the cocoon structure in which we are interested here.
Another difference is the stability of the boundary between the jet and the inner cocoon \citep{Gottlieb_Nakar_Bromberg_2021}.
However, all the properties that are important for the cocoon (such as mixing at the head, mixing between the inner and outer cocoon, and the propagation of the outer cocoon) are similar in 2D and 3D.
These differences are therefore not significant for our purposes.
We chose an ideal-gas equation of state with a constant relativistic polytropic index of $4/3$.
This equation of state is applicable to a relativistic gas (as in the jet) as well as to a radiation-dominated Newtonian gas, such as the shocked stellar envelope.
To study the long term evolution of the jet and the cocoon after they emerge from the star, we use a large grid spanning several orders of magnitude in distance.
This allows us to track the evolution of the system for at least two minutes after the breakout.
At that time the entire stellar envelope is shocked by the cocoon and it expands enough to become homologous.
We use a grid of size $4736 \times 4636$ cells, with the radial cylindrical coordinate\footnote{Throughout the paper $r$ is used for the 2D cylindrical radius while $R$ stands for the 3D radius.}
extending within the range $r = [0,350] \times 10^{10} \cm$ and the vertical coordinate extending within the range $ z= [0.1 , 360] \times 10^{10} \cm$.
We use a combination of a uniform and two non-uniform mesh grids in $r-z$ coordinates with a decreasing resolution from the inner region of the simulation box to the outer boundaries.
The grid mesh is uniform in the inner part to maintain a high resolution of the jet injection and the formation of the resulting high pressure cocoon.
The uniform mesh has $1000 \times 900$ grid points extending in the ranges $r = [0, 1] \times 10^{10} \cm$ and $ z=[0.1, 1] \times 10^{10} \cm$ with a resolution along both coordinates of $\Delta (r,z)_\mathrm{unif.} = 10^{7} \cm $.
Next to the uniform mesh we placed a stretched mesh with $1278^2$ grid points extending along both coordinates within the range $(r,z) = [1,6] \times 10^{10} \cm$, with a stretching ratio of $\sim1.0018$.
The number of grid points for this mesh is chosen such that its initial grid spacing matches that of the adjacent uniform mesh, $\Delta (r,z)_\mathrm{s, init} = \Delta(r,z)_\mathrm{unif.} = 10^7 \cm$, and its final grid spacing is $\Delta(r,z)_\mathrm{s, final} = 10^8 \cm $.
We cover the remaining grid at larger distances with a logarithmic spaced mesh with $2458^2$ grid points extending within the range $(r,z) = [6, 360] \times 10^{10}\cm$.
The number of grid points is chosen such that the grid spacing of this mesh at $(r,z) = 6 \times 10^{10} \cm$ coincides with the resolution of the stretched mesh, such that $\Delta(r,z)_\mathrm{log, init} = \Delta(r,z)_\mathrm{s, final} = 10^8 \cm $.
In this way we ensure a smooth variation of the grid spacing, without jumps, across the entire simulation grid.
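As a sanity check on the mesh bookkeeping above, the quoted cell counts, spacings, and stretching ratio are mutually consistent; the following sketch (a standalone check of ours, not part of the simulation pipeline) reproduces the $\sim 1.0018$ ratio and verifies that the stretched and logarithmic blocks join smoothly:

```python
# Stretched mesh: N cells whose spacing grows geometrically from d0 to d1.
N_s, d0, d1 = 1278, 1e7, 1e8
q = (d1 / d0) ** (1.0 / (N_s - 1))           # stretching ratio
L_s = d0 * (q**N_s - 1.0) / (q - 1.0)        # total length covered

print(f"stretching ratio ~ {q:.4f}")         # ~1.0018, as quoted
print(f"stretched length ~ {L_s:.2e} cm")    # ~5e10 cm, i.e. the [1,6]x10^10 range

# Logarithmic mesh: 2458 cells from 6e10 to 360e10 cm; its first spacing
# should match the final spacing of the stretched mesh (1e8 cm).
N_log, r_in, r_out = 2458, 6e10, 360e10
q_log = (r_out / r_in) ** (1.0 / N_log)
first_dr = r_in * (q_log - 1.0)
print(f"first log-mesh spacing ~ {first_dr:.2e} cm")   # ~1e8 cm
```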
A detailed resolution study for these simulations is reported in Appendix \ref{sec: appendix A}.
We inject the jet along the inner lower $z$ boundary, denoted $z_0$ (see Section~\ref{sec:jet}).
Elsewhere along this boundary we impose a reflective condition, as it approximates the equatorial plane of the system.
We impose axisymmetric conditions on the inner vertical boundary.
Both outer boundaries are set to outflow.
\subsection{Initial conditions}
\label{subsec: ics}
\subsubsection{The star}
We approximate the stellar density profile as a continuous profile that mimics the sharp decline of the density near the stellar edge\footnote{This profile diverges at the origin, but this region does not influence the jet propagation and it is not included in our computational domain.}:
\begin{equation}
\label{eq: rho_profile}
\rho(R) = \begin{cases}
\rho_* \left( \dfrac{R_*}{R} - 1\right)^2 + \rho_0 ,& \mathrm{for} ~ R \leq R_* \ , \\
\rho_0, & \mathrm{for} ~ ~ R > R_* \ .
\end{cases}
\end{equation}
Here we choose $\rho_* = 100 ~ \g ~ \cm^{-3}$ and $R_* = 3\times 10^{10} \cm$.
The total integrated mass of the star is $M_* = (9 \pi / 5) ~ M_\odot$ (see Section~\ref{sec:scale} for the scaling of these parameters to other values).
For this density profile the local slope is $\alpha \equiv -\de \log \rho(R)/\de \log R = {2}/{(1-R/R_*)} $.
The slope, $\alpha$, reaches the critical value of 3 at $R = R_*/3$. Beyond this radius a spherical blast wave accelerates and eventually loses causality.
We present our results for this specific density profile.
However, in Sec.~\ref{sec: diff_profiles} we show that the results for different stellar density profiles (both inner and outer) are qualitatively similar.
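The quoted total mass can be recovered directly from Eq.~\ref{eq: rho_profile}; the sketch below integrates the profile numerically, assuming the rounded value $M_\odot = 2\times10^{33}$~g (the value for which $M_* = (9\pi/5)\,M_\odot$ is exact):

```python
import numpy as np

rho_c, R_star = 100.0, 3e10    # rho_* [g/cm^3] and R_* [cm] from the text
M_sun = 2e33                   # g; rounded solar mass (assumption for this check)

# rho(R) = rho_* (R_*/R - 1)^2 inside the star; the CSM floor rho_0 is negligible.
R = np.linspace(1e6, R_star, 500_001)
integrand = 4 * np.pi * R**2 * rho_c * (R_star / R - 1.0) ** 2

# Trapezoidal rule; analytically the integral is (4 pi / 3) rho_* R_*^3.
M = np.sum((integrand[:-1] + integrand[1:]) * np.diff(R)) / 2.0

print(M / M_sun)   # ~5.65 ~ 9 pi / 5
```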
Surrounding the star we have an external CSM density of $\rho_0 = 1.67 \times 10^{-21} ~ \g ~\cm^{-3}$.
This exact value is unimportant as it is added just to avoid a numerical vacuum.
The interaction of the jet or the cocoon outflow with this CSM is insignificant.
To avoid numerical artifacts arising from the sudden drop in density at the edge of the star we smooth the density of the outer edge of the star with a power law:
\begin{equation}
\label{eq: smooth}
\rho_\mathrm{smooth} (R) = \rho_\mathrm{s} \left( \dfrac{R}{R_\mathrm{s}} + 1\right)^{-8} \ ,
\end{equation}
with $ \rho_\mathrm{s} = 0.05 ~ \g ~ \cm^{-3}$ and a gradient scale of $R_\mathrm{s} = 5 \times 10^8 \cm$.
We verified that this arbitrary choice of the smoothing function does not affect our results (see Appendix ~\ref{sec: appendix B}).
In order to avoid any initial random motion we set a uniform and low ambient pressure of $P = 3.5 ~ \keV ~ \cm^{-3}$ within the simulation grid.
\subsubsection{The jet}
\label{sec:jet}
We inject a collimated jet with a constant luminosity $L_\jet$, operating for $t_\mathrm{e}$ so that the total injected energy is $E_{0} = L_\jet \times t_\mathrm{e} = 10^{51} \erg$.
A uniform jet is injected through a nozzle with a velocity in the $z$ direction, an initial bulk Lorentz factor $\Gamma_{0,\jet}$, a density $\rho_\jet$, and a specific enthalpy $h_\jet \gg 1$.
Being relativistically hot, the jet spreads quickly to form an initial opening angle $\theta_\jet \simeq 1/(1.4 \Gamma_{0,\jet})$ (see details on this injection method in \citealt{Mizuta2013} and \citealt{Harrison2018}).
The jet is numerically initialized by the injection of density, pressure, and momentum along the $z-$direction through a nozzle parallel to the $z-$axis with a radius $r_\jet$ at an initial height $z = z_0$.
The head cross section is then $\Sigma_\jet = \pi r_\jet^2$.
For an initial opening angle $\theta_\jet > 0.1~\rad$, we set $r_\jet = 10^8~ \cm$, which allows sufficient mesh coverage over the nozzle, and we set the initial injection height at $z_0 = 10^9~\cm$.
We consider a constant jet luminosity $L_\jet$.
This determines the product $\rho_\jet h_\jet$ as:
\begin{equation}
\label{eq: rho_j}
\rho_\jet h_\jet = \dfrac{L_\jet}{\Sigma_\jet \Gamma_{0,\jet}^2 c^3} \ .
\end{equation}
We choose $h_\jet = 100$.
This choice of the enthalpy is arbitrary, as long as $h_\jet \gg 1$.
The jet's pressure is given by $ P_\jet = (h_\jet - 1) {\rho_\jet c^2}/{4}$.
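For concreteness, the injection quantities implied by these relations can be evaluated at the canonical parameters; the numbers below are our own illustrative evaluation, not values quoted in the text:

```python
import numpy as np

c = 2.998e10                 # cm/s
L_jet, theta_j = 1e51, 0.2   # canonical one-sided luminosity [erg/s], opening angle [rad]
r_jet, h_jet = 1e8, 100.0    # nozzle radius [cm]; enthalpy (arbitrary, h >> 1)

Gamma0 = 1.0 / (1.4 * theta_j)                # theta_jet ~ 1/(1.4 Gamma_0)
Sigma = np.pi * r_jet**2                      # nozzle cross-section
rho_h = L_jet / (Sigma * Gamma0**2 * c**3)    # rho_jet * h_jet, Eq. (rho_j)
rho_jet = rho_h / h_jet
P_jet = (h_jet - 1.0) * rho_jet * c**2 / 4.0  # jet pressure

print(f"Gamma_0 ~ {Gamma0:.2f}, rho_jet h_jet ~ {rho_h:.1f} g/cm^3")
```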
We explored the parameter space running simulations for different initial values of $L_\jet $ at steps of $2.5 \times 10^{50}~\erg~\s^{-1}$ from $2.5 \times 10^{50}~\erg~\s^{-1}$ to $2 \times 10^{51}~\erg~\s^{-1}$ for a total of 9 different luminosities.
For each value of the luminosity, we run simulations for a set of different opening angles $\theta_\jet = [0.05, 0.1, 0.2, 0.4, 0.6]~ \rad$.
As we keep the total jet energy fixed, these choices translate to different engine working times $t_\mathrm{e} = 10^{51}\erg / L_\jet$.
For each of the 9 values of the luminosity we run 5 different values of the opening angle for a total of 45 simulations.
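Since $E_0$ is held fixed, each luminosity maps directly to an engine working time; a trivial sketch for the extremes and the canonical value of the scan:

```python
E0 = 1e51   # erg; fixed total injected energy

# t_engine = E0 / L_jet for the lowest, canonical, and highest luminosities.
for L_jet in (2.5e50, 1e51, 2e51):   # erg/s
    print(f"L_jet = {L_jet:.2e} erg/s -> t_engine = {E0 / L_jet:.2f} s")
```

The canonical $L_\jet = 10^{51}~\erg~\s^{-1}$ thus corresponds to $t_\engine = 1$~s, as listed for the "Canonical" run in Table~\ref{tab: different_profiles}.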
\subsubsection{Scaling relations}
\label{sec:scale}
While we consider specific numerical values for the stellar and jet parameters, our solutions can be scaled to other values.
The equation of motion of the forward shock speed $\beta_\head$, is regulated by $\Tilde{L}$ \citep{Matzner2003,Bromberg2011}:
\begin{equation}
\label{eq: tilde_L2}
\Tilde{L} \simeq \dfrac{L_\jet}{ \Sigma_\jet \rho(R) c^3} \ ,
\end{equation}
with
\begin{equation}
\label{eq: beta_h}
\beta_\head = \dfrac{1}{1+{\Tilde{L}}^{-1/2}} \ .
\end{equation}
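The behavior of Eq.~\ref{eq: beta_h} in its two limits can be checked directly; the value $\Tilde{L} \simeq 1/16$ used below is simply the one implied by $\beta_\head \simeq 0.2$, an illustrative inversion rather than a measured simulation value:

```python
import numpy as np

def beta_head(L_tilde):
    """Eq. (beta_h): beta_head = 1 / (1 + L_tilde^{-1/2})."""
    return 1.0 / (1.0 + L_tilde ** -0.5)

# Newtonian limit, L-tilde << 1: beta_head ~ sqrt(L_tilde).
print(beta_head(1e-4), np.sqrt(1e-4))    # both ~1e-2
# Relativistic limit, L-tilde >> 1: beta_head -> 1.
print(beta_head(1e4))                    # ~0.99
# beta_head ~ 0.2 (canonical simulation) corresponds to L_tilde ~ 1/16.
print(beta_head(1.0 / 16.0))             # 0.2
```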
The stellar size, $R_*$, is the length scale of the system.
Using the scalings $\Sigma_\jet \propto R_*^2$ and $\rho(R/R_*) \propto \rho_*$, we can express $\tilde{L}$ as
\begin{equation}
\Tilde{L} \propto \dfrac{E_0}{t_\engine \rho_* R_*^2} \ .
\end{equation}
If we scale the stellar radius as $R_* = \lambda R_*'$ we have to scale the density and the jet luminosity accordingly in order to maintain $\Tilde{L}$ and $\beta_\head$ unchanged.
As we show later the location where the jet is choked (i.e. where the last element launched by the jet reaches the head) with respect to the stellar radius has also to be constant.
The choking location $z_\choke$ is roughly proportional to the engine time $t_\engine$ \citep{Nakar2015}:
\begin{equation}
\label{eq: zchoke}
z_\choke = \int_0^{t_\choke} \beta_\head c \de t \simeq \beta_\head c t_\choke = \dfrac{\beta_\head c }{1-\beta_\head} t_\engine \ ,
\end{equation}
where $ t_\choke = {t_\engine}/{(1 - \beta_\head)}$ is the choking time.
If $\beta_\head$ is kept constant, then any transformation of $R_*$ will leave $z_\choke / R_*$ unchanged provided that $t_\engine = \lambda t_\engine' \propto R_*$.
Thus, scaling $E_0 = \eta E_0'$ and $R_* = \lambda R_*'$, we require $t_\engine = \lambda t_\engine'$ and $\rho_* = \eta \lambda^{-3} \rho_*'$.
Because $M_* \propto \rho_* R_*^3$, with this scaling of $t_\engine$ and $R_*$ we can rewrite Eq.~\ref{eq: tilde_L2} as $ \Tilde{L} \propto E_0/M_*$; since $\Tilde{L}$ is kept constant, the typical velocity of the system, $v_0 = (2E_0/M_*)^{1/2}$, is also conserved under these transformations.
Turning to the jet parameters, we recall that the only relevant quantities are $L_\jet$, $t_\engine$ and $\theta_\jet$.
The first two determine $E_0$, and the latter determines $\Gamma_{0,\jet}$.
The luminosity, together with the stellar parameters, determines the product $\rho_\jet h_\jet$, with the condition that $h_\jet$, while arbitrary, must be much larger than unity.
In summary, given the physical scales $R_*$, $\rho_*$, $E_0$, $v_0$, $t_\engine$ and the scalings $R_* = \lambda R_*'$, $E_0 = \eta E_0'$, the parameters defining the physics of our system, i.e. $\tilde{L}$, $z_\ch/R_*$, will not change if $t_\engine = \lambda t'_\engine$ and $\rho_* = \eta \lambda^{-3} \rho'_*$.
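These scaling rules can be verified numerically; the sketch below applies an arbitrary rescaling $(\lambda, \eta)$ and confirms that $\Tilde{L}$ (up to constant factors), $v_0$, and $z_\choke/R_*$ are unchanged (the fiducial $\beta_\head = 0.2$ is an assumption for illustration):

```python
import numpy as np

def invariants(E0, t_e, rho_s, R_s, beta_h=0.2):
    """Characteristic combinations defining the explosion: L-tilde (up to a
    constant), the typical velocity v0, and the relative choking depth."""
    c = 3e10
    L_tilde = E0 / (t_e * rho_s * R_s**2)        # proportional to Eq. (tilde_L2)
    v0 = np.sqrt(2 * E0 / (rho_s * R_s**3))      # up to the same constant factor
    z_choke = beta_h * c * t_e / (1 - beta_h)    # Eq. (zchoke)
    return L_tilde, v0, z_choke / R_s

base = invariants(E0=1e51, t_e=1.0, rho_s=100.0, R_s=3e10)

# Rescale R_* by lambda and E_0 by eta, with t_e -> lambda t_e and
# rho_* -> eta lambda^{-3} rho_*; all three quantities must be unchanged.
lam, eta = 2.5, 0.3
scaled = invariants(E0=eta * 1e51, t_e=lam * 1.0,
                    rho_s=eta * lam**-3 * 100.0, R_s=lam * 3e10)
print(base)
print(scaled)   # identical up to floating-point rounding
```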
\section{Results}
\label{sec: results}
\subsection{The jet-cocoon system}
\label{subsec: analysis}
We start analyzing our simulation set considering a jet with our \emph{canonical} parameters of 1-sided luminosity of $L_\jet = 10^{51} \erg/\s$ and $\theta_\jet = 0.2~\rad \simeq 10^\circ$.
While advancing through the stellar atmosphere the interaction of the relativistic jet with the stellar material results in a forward-reverse shock structure that is called the head of the jet \citep{Blandford_Rees1974, Begelman_Cioffi1989, Meszaros_Waxman2001, Matzner2003, Lazzati_Begelman2005, Bromberg2011}.
The jet head propagation velocity, $\beta_\head$, is much lower than the jet velocity before it reaches the head and for typical GRB jets it is Newtonian.
The shock-heated jet and stellar material that enter the head flow sideways because of the high head pressure, and form a pressurized cocoon that surrounds the jet.
The contact discontinuity between the material shocked by the forward shock and that shocked by the reverse shock divides the cocoon into inner and outer parts.
The inner cocoon is composed of tenuous jet material that has crossed the reverse shock, while the outer cocoon is composed of denser shocked stellar material.
The cocoon exerts pressure on the jet which, if sufficiently high, collimates it, thus reducing the jet opening angle and consequently its cross section compared to an uncollimated jet.
Within our chosen stellar structure model the jet head moves at a constant velocity in the inner region of the core, where the local density slope is $\alpha = 2$.
If the jet reaches outer regions, where $\alpha > 2$, it starts accelerating.
The jet is \emph{choked} if the engine stops while the jet is propagating within the stellar envelope and the last jet element launched by the engine (\emph{tail}, hereafter) catches up with the jet head before the latter breaks out of the star.
In this case all the engine's energy goes into the cocoon.
Clearly, the choking height (Eq.~\ref{eq: zchoke}) satisfies $z_\choke < R_*$.
Otherwise we define the jet as unchoked or successful.
Throughout the following analysis we focus mostly on jets choked at various depths inside the star.
For comparison we show also the case of an unchoked jet breaking out of the star.
For a detailed study on the energy-velocity distribution of stellar explosions which are driven by successful jets see \cite{eisenberg2022}.
We divide the evolution of the jet into three phases: 1) the injection and choking phase, $t < t_\choke$; 2) the cocoon expansion phase, $t_\choke < t < t_\bo$; and 3) the cocoon breakout phase, $t > t_\bo$.
The different phases of a choked jet are shown in Fig.~\ref{fig: jet_t01}.
\subsubsection{Injection and Choking: $t \leq t_\choke$}
The engine operates for $t_\engine$ producing a jet.
This is clearly seen in the first row of Fig.~\ref{fig: jet_t01}, both in the rightmost panel, showing $\beta \Gamma$, and in the leftmost one, showing that the tracer of the jet material is concentrated mostly within a narrow cylinder along the symmetry axis $z$ with radius $r \simeq r_\jet$ (dark red).
This behaviour is typical of the collimated regime \citep{Bromberg2011}.
After the jet engine stops the last jet material launched at the injection nozzle propagates upwards.
At $t_\engine$ the jet head is still unaware that the engine has stopped, and it continues to propagate with $\beta_\head$ (in this specific simulation $\beta_\head \simeq 0.2$).
However, since $\beta_\head < 1$ while the jet material moves at $\beta_\mathrm{t} \simeq 1$, the jet tail catches up with the head.
Only then does the information that the engine has stopped reach the head, and the reverse shock within the jet disappears.
This is the time at which the jet is choked.
As the head propagates at $\beta_\head c$ and the tail at $\beta_\mathrm{t} c \simeq c$, we can estimate the time at which the jet tail catches up with the head, $t_\choke$, as defined in Eq.~\ref{eq: zchoke}.
As long as $t < t_\choke$, the jet continues to drive the head forward through the stellar atmosphere.
The second row of Fig.~\ref{fig: jet_t01} shows the system at $t = t_\choke$, roughly 0.3 seconds after the end of the engine activity, in this specific simulation.
One can see that the very fast jet material around the core has disappeared. At this stage all the jet's energy has been dissipated and transferred to the surrounding cocoon.
\subsubsection{Cocoon expansion: $ t_\choke < t < t_\bo$}
After the jet is choked, the cocoon becomes less and less collimated and proceeds to spread sideways, while the forward shock decelerates when it is deep within the envelope and accelerates as it reaches the steep density gradient near the stellar edge \citep{Irwin_Nakar_Piran2019}.
During the propagation the inner cocoon transfers energy to the freshly shocked material (via $P\de V$ work).
\subsubsection{Breakout: $t> t_\bo$}
After the breakout the cocoon material spreads both radially and tangentially to engulf the stellar surface, quickly shrouding the breakout point from most observers (see the last row of Fig.~\ref{fig: jet_t01}).
The star is blanketed by the ejecta over a time $t_\mathrm{wrap} \simeq \pi R_* /(2 v_\mathrm{bo})$, where $v_\mathrm{bo}$ is the breakout velocity of the cocoon near the pole.
The shock driven by the cocoon also moves tangentially towards the equator at a slower pace until the entire stellar envelope is shocked at $t_\mathrm{shock} \simeq \pi R_* /(2 v_\mathrm{p})$ where $v_\mathrm{p}$ is the pattern velocity at which the spilled material travels along the stellar surface \citep{Irwin_et_al2021}.
Shortly after reaching the equator the shocked material propagates almost radially and outwards and it becomes homologous once the outflow reaches $\sim 2R_*$.
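For scale, with the canonical $R_* = 3\times 10^{10}~\cm$ and a hypothetical breakout velocity of $v_\mathrm{bo} = 0.3c$ (an assumed value for illustration, not a simulation result), the wrapping time is a few seconds:

```python
import numpy as np

R_star = 3e10            # cm, canonical stellar radius
c = 3e10                 # cm/s
v_bo = 0.3 * c           # illustrative breakout velocity (assumption)

t_wrap = np.pi * R_star / (2 * v_bo)
print(f"t_wrap ~ {t_wrap:.1f} s")   # ~5.2 s for these numbers
```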
\subsubsection{Successful jets}
Jets whose engine operates long enough break out from the stellar envelope before the end of the activity of the central engine.
These jets are not choked and can preserve an ultra-relativistic velocity once they get out of the star.
We show an example of a successful jet in Fig.~\ref{fig: jet_t4}.
Because the jet broke out without being choked the cocoon structure inside the star is mostly collimated along the vertical axis.
From the first and fourth panels, which show $\Gamma\beta$ and the jet tracer respectively, it is evident that the innermost region is still dominated by tenuous, highly relativistic jet material.
Comparing the last row of Fig.~\ref{fig: jet_t01} and Fig.~\ref{fig: jet_t4}, we notice that for the same normalized density and pressure scales, the longer duration of the unchoked jet results in the expulsion of denser and faster stellar material than in the choked-jet case.
\subsection{The spreading angle and the cocoon volume}
To describe quantitatively the geometry of the jet-cocoon during its propagation within the stellar envelope we use the aspect ratio, defined as
\begin{equation}
\label{eq: theta_of_t}
\theta(t) = \dfrac{\max(r_\mathrm{c}(t))}{z_\head(t)}\ ,
\end{equation}
where $r_\mathrm{c}$ is the cocoon cylindrical radius and $z_\head$ is the head position.
For $\theta \ll 1$ the aspect ratio is a good approximation of the cocoon spreading angle.
The expanded cocoon at the moment of the breakout is shown in the third row of Fig.~\ref{fig: jet_t01}.
The steep density transition results in an elongation and acceleration of the cocoon and the ejection of low-density material from the star, which rapidly engulfs the star's external layers.
We define the breakout angle $\theta_\bo$ as the geometric opening angle measured at the breakout time $t=t_\bo$, namely
\begin{equation}
\label{eq: theta_bo}
\theta_\bo = \theta (t_\bo) = \dfrac{\max(r_\mathrm{c}(t_\bo))}{R_*}\ .
\end{equation}
The evolution of $\theta(t)$ for $\theta_\jet = 0.2~\rad$ and several values of $t_\engine$ is reported in Fig.~\ref{fig: spreading_angle}.
At first, immediately after injection, the aspect ratio starts growing.
The growth continues until $z_\head$ is roughly twice the injection radius, $z_0$, at which point the aspect ratio starts decreasing, approaching the point where the cocoon opening angle is comparable to $\theta_\jet$.
This evolution reflects the time it takes the pressure in the cocoon to build up to the point that it starts collimating the jet effectively (see \citealt{Harrison2018} for details).
The evolution of the aspect ratio changes dramatically as soon as the jet is choked.
Since there is no more fresh jet material to drive the head, its velocity drops sharply.
At the same time the cocoon pressure, and thus its sideways expansion, is not affected.
The result is that the aspect ratio grows continuously after $t_{\ch}$.
There is a short episode, just before and after the breakout, when the aspect ratio decreases as the head accelerates near the edge of the star.
Soon after that the aspect ratio increases rapidly, as some of the material that broke out of the star spreads sideways at a speed close to the speed of light.
One clear property seen in the figure is that jets that are choked more deeply have a longer time to expand before they break out, and therefore deeper choking results in a wider cocoon with a larger volume at the time of breakout.
As we show next this fact has important implications for the energy-velocity distribution of the outflow.
The volume of the cocoon at breakout, $V_\bo$, is another parameter that describes the properties of the jet-cocoon system.
As the energy of a choked jet is entirely given to the cocoon, for a given energy the cocoon mass (and hence volume) at breakout corresponds to a typical expansion velocity of the cocoon material.
Since the volume-averaged density of the shocked cocoon material and that of the star are roughly the same, we can define a characteristic velocity at the breakout associated with the breakout volume, namely:
\begin{equation}
\label{eq: volume_bo}
\beta_\bo \simeq \beta_0 \sqrt{\dfrac{V_*}{V_\bo}} \ .
\end{equation}
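Eq.~\ref{eq: volume_bo} follows from energy conservation once the mean densities of the shocked cocoon and the star are taken to be comparable; a minimal sketch of this bookkeeping:

```python
import numpy as np

# A choked jet deposits all of E0 in cocoon mass M_c ~ rho_bar * V_bo, while
# M_* ~ rho_bar * V_*. With beta ~ sqrt(2 E / M) / c, the ratio gives
#   beta_bo / beta_0 = sqrt(M_* / M_c) = sqrt(V_* / V_bo),  i.e. Eq. (volume_bo).
def beta_bo(beta_0, V_ratio):
    """Characteristic breakout velocity for V_ratio = V_* / V_bo."""
    return beta_0 * np.sqrt(V_ratio)

print(beta_bo(0.05, 9.0))   # e.g. beta_0 = 0.05 and V_*/V_bo = 9 give 0.15
```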
\subsection{The Energy-velocity distribution}
Fig.~\ref{fig: choking_height} depicts the energy-velocity distribution of the entire set of simulations for different values of the engine working time $t_\engine$ and different initial opening angles $\theta_\jet$ at $t=120 \s$ (when the outflow is homologous and kinetic energy dominates).
The $x$-axis is normalized by $\beta_0$ and the $y$-axis by $E_0$.
Each curve is differentiated by color and labeled by a triplet of numbers describing, respectively, $\theta_\jet$, $t_\engine$, and $\sqrt{V_*/V_\bo}$.
We grouped the different curves according to $\sqrt{V_*/V_\bo}$.
For comparison we superpose the energy-velocity distribution of an isotropic, spherically symmetric explosion (black dashed line) on all panels.
Fig.~\ref{fig: choking_height} shows, first, that in all cases the energy-velocity distribution exhibits a roughly constant energy per logarithmic scale of $\Gamma\beta$ over a range of velocities.
The distribution rises quickly before this rough plateau starts and decays sharply after it ends.
The rough plateau always starts at $\beta_0$ and its highest velocity is determined almost entirely by $V_{\bo}$, with a weak dependence on the jet opening angle.
To estimate the highest velocity of the flat part of the distribution we define $\beta_{\rm cut}$ as the velocity obtained when the energy-velocity distribution drops to 1/4 of its maximum value.
This arbitrary definition provides a velocity that is slightly larger than the end of the plateau (e.g., in the spherical case $\beta_{\rm cut}=2\beta_0$).
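This extraction can be implemented directly on a tabulated distribution; below is a sketch applied to a toy distribution (the plateau range and decay slopes are illustrative choices of ours, not fits to the simulations):

```python
import numpy as np

def beta_cut(beta, dE_dlnb):
    """Largest velocity at which dE/dln(beta) has dropped to 1/4 of its maximum,
    using log-linear interpolation on the decaying side of the distribution."""
    thresh = dE_dlnb.max() / 4.0
    i = np.nonzero(dE_dlnb >= thresh)[0][-1]     # last bin above threshold
    if i == len(beta) - 1:
        return beta[-1]
    f = (np.log(dE_dlnb[i]) - np.log(thresh)) / \
        (np.log(dE_dlnb[i]) - np.log(dE_dlnb[i + 1]))
    return np.exp(np.log(beta[i]) + f * (np.log(beta[i + 1]) - np.log(beta[i])))

# Toy distribution: rough plateau for beta in [0.1, 0.3], power-law decay above.
beta = np.geomspace(0.01, 1.0, 400)
dE = np.where(beta < 0.1, (beta / 0.1) ** 5, 1.0) * np.minimum(1.0, (beta / 0.3) ** -8)
print(beta_cut(beta, dE))   # slightly above the plateau edge at beta = 0.3
```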
Fig~\ref{fig: volume_cutoff} shows that there is a strong positive correlation between $\beta_{\rm cut}$ and $\sqrt{V_*/V_{\bo}}$.
For small values of $\sqrt{V_*/V_{\bo}}<3$, where typically $z_{\choke} \ll R_*$, we see that $\beta_{\rm cut}\approx \beta_{\bo}$.
However, for larger values of $\sqrt{V_*/V_{\bo}}$ where the choking takes place not very deep within the stellar envelope, $\beta_{\rm cut} > \beta_{\bo}$.
The origin of the material faster than $\beta_{\bo}$ in these cases is the inner cocoon, which retains a significant fraction of its energy at the time of the breakout, and outer cocoon material close to the edge of the star, where the forward shock is faster than $\beta_{\bo}$.
The value of $V_*/V_{\bo}$ is expected to depend on the jet opening angle and the choking depth. The jet opening angle determines the aspect ratio of the cocoon as long as the head that is pushed ahead by the jet is feeding the cocoon ($\theta\approx \theta_\jet$), while the choking depth determines by how much this aspect ratio increases until the breakout.
Fig.~\ref{fig: v_bo_z_ch} depicts the correlation between $V_\bo$ and $z_\choke$ for different values of the initial $\theta_\jet$.
As expected, $V_\bo$ is a function of $z_\choke$ and $\theta_\jet$.
A deeper choking height and a wider jet correspond to a larger cocoon volume upon breakout.
\subsubsection{The origin of ejecta with different final velocities}
To understand the origin of the various components of the outflow, we tracked the distribution of the ejected material using four different scalar tracers associated with four distinct regions of the star.
The division into the different regions is determined at the time of the breakout and is shown in Fig.~\ref{fig: scheme}.
The tracers follow the mass in each of these regions (at the time of breakout): I) internal-axis, II) external-axis, III) internal-equatorial, IV) external-equatorial.
Fig.~\ref{fig: 4_tracers} shows the distribution of the stellar material from each of the regions at $t=60~\s$, roughly $46~\s$ after the breakout.
Fig.~\ref{fig: e_vs_v_tracers} shows the energy-velocity distributions of the four sectors.
We see that the quasi-spherical outflow that leads the ejecta is made only of tenuous material coming from the on-axis, external layers of the progenitor directly above the expanding jet cocoon (region II).
This component contains only around $2\%$ of the total stellar mass, but it carries 11\% of the total ejecta energy.
Evidently, the fastest ejecta is dominated by this sector.
The material associated with the stellar core part that is along the axis (region I; first panel in Fig.~\ref{fig: 4_tracers}) is much more concentrated than that of the external-axis region (II) but much more extended than the two equatorial sectors.
It contains $30\%$ of the total stellar mass and 46\% of the outflow energy.
This section dominates the energy distribution over a wide range of velocities.
Almost all the rest of the mass and the energy are contained in the internal-equatorial sector (III) which carries $60\%$ of the mass and 32\% of the energy.
It dominates the energy at low velocities $\lesssim \beta_0$.
Finally, the outer-equatorial section carries $5\%$ of the ejecta mass and 11\% of its energy.
All its material is moving at intermediate velocities and it is subdominant at all velocities.
\subsection{The effect of the stellar density profile}
\label{sec: diff_profiles}
\begin{table*}
\centering
\begin{tabular}{|c||ccccc|}
\hline
Jets & $t_\engine$ [s] & $\theta_\jet$ [rad] & $\rho(R)$ & $z_\choke/R_*$ & $t_\bo$ [s] \\
\hline
Canonical & 1 & 0.2 & $\propto R^{-2} (R_*-R)^2 $ & 0.21 & 13.9 \\
$\alpha2$\_$n2.5$ & 1 & 0.2 & $\propto R^{-2} (R_*-R)^{2.5} $ & 0.25 & 11.2 \\
$\alpha2$\_$n3$ & 1 & 0.2 & $\propto R^{-2} (R_*-R)^3 $ & 0.25 & 9.5 \\
$\alpha2.5$\_$n2$ & 1 & 0.2 & $\propto R^{-2.5} (R_*-R)^2 $ & 0.28 & 6.8 \\
$\alpha2.5$\_$n3$ & 1 & 0.2 & $\propto R^{-2.5} (R_*-R)^3 $ & 0.37 & 4.0 \\
\hline
Canonical\_t1.33 & 1.33 & 0.2 & $\propto R^{-2} (R_*-R)^2 $ & 0.25 & 11.5 \\
Canonical\_t2 & 2 & 0.2 & $\propto R^{-2} (R_*-R)^2 $ & 0.37 & 8.3 \\
\hline
\end{tabular}
\caption{Properties of the jets injected into different density profiles. The table lists the engine working time $t_\engine$, the initial opening angle $\theta_\jet$, the density profile $\rho(R)$ used in the run, the choking height relative to the stellar radius $z_\choke/R_*$, and the breakout time $t_\bo$.}
\label{tab: different_profiles}
\end{table*}
To study the effect of different stellar density profiles we consider stellar density profiles that can be written as:
\begin{equation}
\label{eq: rhogen}
\rho (R) = \rho_*\left(\dfrac{R_*}{R}\right)^\alpha \left(1-\dfrac{R}{R_*}\right)^n \ ,
\end{equation}
where $n$ is the outer slope at the edge, and $\alpha$ is the inner slope, with $\alpha <3$.
The density profile described by Eq.~\ref{eq: rho_profile}, which is used throughout the rest of the paper, is roughly equivalent to the case of $\alpha=2,~ n=2$ and will be referred to as the \emph{canonical profile} hereafter.
The profiles that we consider are listed in Table~\ref{tab: different_profiles}. For each profile we run a simulation with our canonical jet parameters, $\theta_\jet = 0.2~\rad$, $L_\jet = 10^{51}~\erg~\s^{-1}$, and we inject the jets from the same initial height ($z_0 = 10^9~\cm$).
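For the profile family of Eq.~\ref{eq: rhogen} the stellar mass has a closed form, $M_* = 4\pi\rho_* R_*^3\, B(3-\alpha,\,n+1)$ with $B$ the Euler beta function (a short derivation of ours, valid for $\alpha < 3$, not stated in the text); the canonical case $\alpha = n = 2$ recovers $M_* = (4\pi/3)\rho_* R_*^3$:

```python
from math import gamma, pi

def stellar_mass(alpha, n, rho_s=100.0, R_s=3e10):
    """M = 4 pi rho_* R_*^3 B(3 - alpha, n + 1) for Eq. (rhogen), alpha < 3."""
    B = gamma(3 - alpha) * gamma(n + 1) / gamma(4 - alpha + n)
    return 4 * pi * rho_s * R_s**3 * B

# Canonical profile (alpha = 2, n = 2): B(1, 3) = 1/3, i.e. M = (4 pi/3) rho_* R_*^3.
for alpha, n in [(2, 2), (2, 2.5), (2, 3), (2.5, 2), (2.5, 3)]:
    print(f"alpha = {alpha}, n = {n}: M = {stellar_mass(alpha, n):.3e} g")
```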
Fig.~\ref{fig: density_profiles_s} shows a comparison of the energy-velocity distributions from different stellar profiles.
First, it shows that the distributions are all flat over a range of velocities, implying that the main feature of the outflow from an explosion driven by a choked jet is independent of the exact stellar profile (a similar result was found by \citealt{eisenberg2022} for explosions driven by successful jets).
Looking in more detail, the right-hand side shows two pairs of simulations.
Each pair shows the results of different stellar profiles with similar $\theta_\jet$ and $z_\choke$ (which dominates $V_{\bo}$).
The distributions found in the two simulations of each pair are very similar, implying that when the cocoon properties are similar the stellar profile has a minor effect on the outflow energy-velocity distribution.
On the left-hand side of Fig.~\ref{fig: density_profiles_s} we compare the energy-velocity distributions of jets with the exact same parameters (including $t_\engine$) but different envelope density profiles.
It shows that the stellar profile affects the velocity of the head (as was found previously by \citealt{Bromberg2011,Harrison2018}) and therefore jets with the same properties are choked at different heights when propagating in different density profiles.
Since the energy-velocity profile depends strongly on $z_\choke$, two jets with the same properties that propagate in different stellar profiles will produce outflows with different energy-velocity distributions, as shown in the left-hand panel of this figure.
\subsection{The energy velocity distribution at different viewing angles}
\label{sec: profiles_different_angles}
Since jet driven explosions are aspherical, one expects that the outflow will not be isotropic.
Fig.~\ref{fig: energy_profiles_volumes} depicts the energy-velocity distribution of four simulations.
For each simulation we show the distributions in six different sections, where each section is the sum of the ejecta within a range of polar angles.
To see the dependence on the initial conditions we show simulations with two different jet opening angles ($0.2$ and $0.6$ rad) and two different values of breakout volume $\volbo$.
As expected, the outflow is aspherical.
A common, also expected, property of all four simulations is that the maximal velocity of the outflow is found around the jet axis, at low polar angles.
This result was also found for jet-driven explosions with successful jets \citep{eisenberg2022}.
In the two simulations with the small value of $\volbo$ (i.e., low $z_\choke$ and/or wide $\theta_\jet$; top panels) the energy-velocity distribution of the equatorial outflow ($\theta \gtrsim 60^\circ$) is similar to that of a spherical explosion, with a typical velocity $\beta_0$. The faster outflow is confined to lower angles.
In the two simulations with the large value of $\volbo$ (i.e., high $z_\choke$ and narrow $\theta_\jet$; bottom panels) a large range of velocities is seen in all directions, but faster velocities are still observed closer to the jet axis.
For clarity we present in Fig.~\ref{fig: density_profiles_1} the radial density distribution profiles, $\rho(R)$, for different viewing angles of four different simulations.
These are the same simulations and the same divisions into angular sections as in Fig.~\ref{fig: energy_profiles_volumes}.
This presentation is often used in studies of SN ejecta; at sub-relativistic velocities $\rho(R) \propto \beta^{-5} \frac{\de E}{\de \log\beta}$.
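As a consistency check of this scaling, the following sketch (with purely illustrative values for the snapshot time and the energy normalization, not the simulation inputs) builds the Newtonian density profile of a homologous outflow from a flat energy-velocity distribution and recovers the $\beta^{-5}$ slope:

```python
import numpy as np

# Homologous expansion at time t: R = beta * c * t.
# For a flat energy-velocity distribution dE/dln(beta) = const,
# the (sub-relativistic) density profile follows rho ∝ beta^-5 dE/dln(beta).
c, t = 3e10, 1e6            # cm/s, s (illustrative snapshot time)
E_per_dex = 1e50            # erg per unit ln(beta) (illustrative)

ln_beta = np.linspace(np.log(0.01), np.log(0.3), 200)
beta = np.exp(ln_beta)
d_ln = ln_beta[1] - ln_beta[0]

# Mass in each log-velocity bin from its kinetic energy E = m v^2 / 2.
m = 2 * E_per_dex * d_ln / (beta * c) ** 2
# Spread each bin over its spherical shell: dR = c * t * beta * d_ln.
rho = m / (4 * np.pi * (beta * c * t) ** 2 * c * t * beta * d_ln)

# Local power-law slope d ln(rho) / d ln(beta) should be -5.
slope = np.gradient(np.log(rho), ln_beta)
print(slope[50])   # ~ -5
```

The $\beta^{-5}$ slope is exact here because the flat distribution enters the density only through the $m/R^2\,\de R$ combination.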
\section{Conclusions and implications for observations}
\label{sec: conclusions}
We carried out relativistic hydrodynamical simulations in 2D cylindrical coordinates of stellar explosions driven by jets, focusing on configurations with choked relativistic jets and exploring how these can lead to different realizations of the velocity distribution of the outflow in its homologous expansion phase.
We followed the evolution of a relativistic jet from the injection deep inside the star to the point where it is choked and then continued to follow the cocoon as it emerges from the envelope and ultimately unbinds it up to the point that the outflow becomes homologous.
We scrutinized the various stages of the jet inside the star and analyzed what happens during the choking process and the adiabatic cocoon expansion.
While the results are given for a specific set of parameters, we provided scaling relations for the physical parameters of the jet and the star in order to facilitate a dimensionless treatment of the problem.
We stress that the scaling laws presented in Sec.~\ref{sec:scale} do not involve gravity; they are valid as long as the gravitational binding energy of the star is subdominant with respect to the total energy of the jet, a condition that holds for the powerful jets that have been observed in some SNe \citep{Piran2019}.
We summarize our findings as follows:
\begin{itemize}
\item All jet-driven explosions in which the jet is not choked too deep within the star generate an outflow with a unique feature: a significant range of velocities over which the outflow carries a roughly constant amount of energy per logarithmic interval of the proper velocity ($\Gamma\beta$).
This is a universal property of jet driven explosions.
The main difference between different setups is the range of velocities over which the energy is constant.
\item The plateau of the energy-velocity distribution starts in all cases at $v_0=\sqrt{E_0/M_*}$.
The maximal velocity of the plateau depends mostly on the cocoon volume upon breakout and the corresponding velocity is $\beta_\bo=\beta_0\sqrt{V_*/V_\bo}$.
For $\sqrt{V_*/V_\bo} < 3$ the maximal velocity is comparable to $\beta_\bo$, while for larger values of $\sqrt{V_*/V_\bo}$ the maximal velocity is larger than $\beta_\bo$ and it can become mildly relativistic.
\item The volume of the cocoon upon breakout, $V_\bo$, depends on the choking height, $z_\choke$, and on the opening angle of the jet upon launching.
A higher $z_\choke$ and a narrower opening angle lead to a smaller $V_\bo$ and thus to an outflow that extends to higher velocities.
\item The outflow from an explosion driven by a choked jet is not isotropic.
In general, the material along the poles (that is along the jet direction) is faster while the material along the equator is slower.
\end{itemize}
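For concreteness, the plateau boundaries summarized above can be evaluated for representative parameter values; a minimal sketch (all numerical values below are illustrative, not the simulation inputs of this paper):

```python
import math

# Illustrative parameters only (not the simulation inputs of this paper).
E0 = 1e51            # energy deposited by the jet [erg]
M_star = 10 * 2e33   # stellar mass: 10 solar masses [g]
c = 3e10             # speed of light [cm/s]

# Start of the energy-velocity plateau: v0 = sqrt(E0 / M_*).
v0 = math.sqrt(E0 / M_star)
beta0 = v0 / c

# End of the plateau, set by the cocoon volume at breakout:
# beta_bo = beta0 * sqrt(V_* / V_bo).
for V_ratio in (2.0, 9.0, 25.0):          # assumed values of V_* / V_bo
    beta_bo = beta0 * math.sqrt(V_ratio)
    print(f"V*/V_bo = {V_ratio:5.1f} -> beta_bo = {beta_bo:.4f}")

print(f"v0 = {v0:.2e} cm/s (beta0 = {beta0:.4f})")
```

Larger $V_*/V_\bo$ (deeper choking suppressed, narrower jets) stretches the plateau to higher velocities, in line with the trends listed above.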
A spherical explosion accelerates only a negligible fraction of the stellar mass to very high velocities.
Indeed, hydrodynamic simulations of core-collapse supernova explosions have shown them to be rather aspherical without necessarily harboring a jet \citep[e.g.,][for a review]{Burrows_core_collapse_2006, Janka_2007, Janka_review_2012, Janka_review_2016}. However, the differential mass-velocity distribution in those kinds of simulations \citep{Wongwathanarat_2015} does not show a significant fraction of the ejected material with sufficient velocity to mimic the high-velocity tail seen in some SNe and produced by the jets considered here.
We have shown here that the situation is drastically different when there is a jet that breaks the symmetry.
Such a jet can deposit a significant amount of energy in high-velocity matter, even in the case that the jet is choked within the envelope.
This excess in high velocity outflow (compared to a spherical explosion) is certainly expected when the entire stellar explosion is driven by a jet, but it is also expected if the jet is accompanied by a simultaneous more spherical explosion (see e.g., \citealt{eisenberg2022}).
If sufficiently optically thick, such high-velocity material surrounding a SN would produce very broad absorption lines (with typical widths corresponding to $0.1$--$0.2c$) in the observed spectrum.
These lines will be observed in the early spectra but will disappear later, when the rapidly expanding outer envelope becomes optically thin.
Lines that show an excess of high velocity material have been observed in several SNe \citep{Galama+1998, Iwamoto+1998, Mazzali+2000, Mazzali+2002, Modjaz+2006, Mazzali+2008, Bufano+2012, Xu+2013, Ashall+2019,Izzo_et_al_2019}.
Our results show that, as suggested by \cite{Piran2019}, a choked jet can lead to such high-velocity material.
However, we have found that some conditions are needed to observe the corresponding broad absorption lines.
First, the jet must be choked at a sufficiently large distance in the stellar atmosphere.
The signature of jets that are choked too deep will not be as significant.
Second, as there is less fast-moving material in directions far from the jet axis, the fast-moving matter will become optically thin earlier in these directions.
As the broad absorption lines will fade faster, observers at such viewing angles are less likely to observe the broad absorption line signature.
These last two facts imply that we may not observe broad absorption lines in all SNe that harbour relativistic jets.
An excess of fast material has been observed in various types of stripped-envelope SNe.
These include SNe that are associated with long GRBs, SNe that are associated with {\it ll}GRBs, and SNe that are not associated with GRBs at all.
Long GRBs must contain successful relativistic jets.
{\it ll}GRBs contain jets which may very well be choked \citep{Kulkarni1998, macfadyen_supernovae_2001, Tan+2001, campana_association_2006, Wang+2007, waxman_grb_2007, katz_fast_2010, Nakar_Sari2012,Nakar2015}.
We do not know whether SNe that are not associated with GRBs harbour jets, but if they do, then these jets must be choked ones.
A previous study by \cite{eisenberg2022} has shown that successful jets can generate the energy-velocity distribution observed in SNe that are associated with long GRBs.
Our findings here show that choked jets can explain the energy-velocity distribution seen in SNe that are associated with {\it ll}GRBs and in SNe that are not associated with any type of GRB.
This provides further support for the interpretation of the ``disappearing'' early very broad absorption lines in some SNe as arising from choked jets.
These findings also show that such lines may not be detected in all SNe that harbor choked jets.
Further exploration of this model, including estimates of the observed spectra and the fraction of events in which these lines will be observed will be carried out in future work.
\section*{Acknowledgments}
We kindly thank Christopher Irwin for the stimulating discussions and suggestions.
We also thank our anonymous referee for a constructive and helpful report.
This work is supported by the ERC grants TReX (TP and MP) and JetNS and an ISF grant 1995/21 (EN).
\section*{Data Availability}
The data underlying this article will be shared on reasonable
request to the corresponding author.
\bibliographystyle{mnras}
\bibliography{main}
\appendix
\section{Resolution Check}
\label{sec: appendix A}
We tested the convergence of our jet evolution in 2D by varying the grid resolution of our simulation box for the case $\theta_\jet = 0.1~\rad$ and $L_\jet = 10^{51}~\erg / \s$.
The engine powering the jet in each simulation stops after 1 second and thus we expect that the jet is choked inside the stellar atmosphere at around the same height and the cocoon expands slowly before accelerating again close to the stellar edge.
We tested our setup for different resolutions: 1) $620 \times 595 $, 2) $1240 \times 1190$, 3) $1860 \times 1785$, 4) $2480 \times 2380$, and 5) $3720 \times 3570$ (purple).
Resolution 4) is what we use for all the simulations in this paper.
For these different resolutions we investigated the head velocity convergence.
Figure \ref{fig: resolution_beta} presents the head velocity as a function of time for the five resolutions previously discussed.
We see that the lower-resolution runs tend to propagate slightly more slowly than the high-resolution runs (red and purple dots) before the jet engine terminates its activity.
Low-resolution runs also tend to converge more slowly to a constant velocity in the inner part of the stellar atmosphere after the jet engine switches off and the choking happens.
This results in a different choking height position $z_\mathrm{ch}$ for the low resolution runs with respect to the high resolution cases (red and purple).
At around $R \simeq R_*/2$ the head velocity stabilizes at a value of $0.06~\mathrm{c}$ for all the resolutions taken into account. The convergence to this velocity is excellent; however, the time it takes the jet to reach this velocity and ``forget'' its initial conditions depends on the resolution.
After the forward shock reaches $R_\ast/2$ the head accelerates as it nears the edge of the star, eventually resulting in a shock breakout at $z/R_* = 1$.
After the shock breakout we notice that the forward shock propagation for the low resolution runs is much slower with respect to the high resolution simulations, a clear sign that the numerical resolution is insufficient to treat the problem properly.
In Fig.~\ref{fig: resolution_energy_velocity} we plot the energy-velocity distribution for the five different resolutions we analyzed for Fig.~\ref{fig: resolution_beta}. We can clearly see how the distribution converges to a similar shape (red and purple curves at the highest resolutions) as the resolution increases. The two highest resolutions differ very little qualitatively and the jets are choked at almost the same $z_\choke$. This demonstrates that our results are sufficiently converged for the runs used in the paper.
\section{Different smoothing function}
\label{sec: appendix B}
The smoothing function for the stellar density profile (Eq.~\ref{eq: smooth}) prevents the computation from breaking down at the moment of the shock breakout due to the sudden drop in density of $\sim$20 orders of magnitude at the edge of the star over a relatively small distance.
The functional form of the smoothing function is, however, rather arbitrary, so we tested whether the result is affected by the particular choice of Eq.~\ref{eq: smooth}.
We ran a simulation with canonical jet parameters and the sharper smoothing function
\begin{equation}
\label{eq: smooth2}
\rho_\mathrm{smooth, 2} (R) = \rho_\mathrm{s} \left( \dfrac{R}{R_\mathrm{s}} + 1\right)^{-10} \ ,
\end{equation}
with $\rho_\mathrm{s} = 0.05~\g~\cm^{-3}$ and $R_\mathrm{s} = 5 \times 10^{8}~\cm$.
In Fig.~\ref{fig: different_smoothing} we report the comparison of the energy-velocity distribution of this new run with the old one which uses the smoothing of Eq.~\ref{eq: smooth}.
We can immediately see that the two curves almost overlap, showing insignificant differences which do not affect the final result.
This is also due to the fact that the external additional mass given by these smoothing functions is of the order of $10^{-6}~M_*$, which is completely negligible and does not alter the physical scales of the system.
\bsp %
\label{lastpage} |
Title:
Shadows around at Sgr A* and M87* as a tool to test gravity theories |
Abstract: In the framework of Randall -- Sundrum theory with extra dimension Reissner
-- Nordstrom black hole solutions with a tidal charge have been found. The
shadow around the supermassive black hole in M87 was reconstructed in 2019
based on observations with the Event Horizon Telescope (EHT) in April 2017. In
May 2022 the EHT Collaboration presented results of a shadow reconstruction for
our Galactic Center. Earlier, for Reissner -- Nordstr\"om metric we derived
analytical expressions for shadow size as a function of charge and later
generalized these results for a tidal charge case. We discuss opportunities to
evaluate parameters of alternative theories of gravity with shadow size
estimates done by the EHT Collaboration, in particular, a tidal charge could be
estimated from these observations.
| https://export.arxiv.org/pdf/2208.06805 |
\title{\bf \large Shadows around at Sgr A* and M87* as a tool to test gravity theories}
\author{Alexander F.~Zakharov\thanks{E-mail: alex.fed.zakharov@gmail.com}
}
\date{\it \small Bogoliubov Laboratory for Theoretical Physics, JINR,
141980 Dubna, Russia, \\
}
{\it Key words}: Supermassive black holes, Galactic Center, M87, Synchrotron radiation, VLBI observations.
\section{\large Introduction}
Several years ago the Event Horizon Telescope (EHT) Collaboration was formed. The astronomers use telescopes located across the globe operating at the 1.3~mm wavelength in VLBI regime; in other words, the corresponding network acts as a giant telescope of Earth size.
The angular resolution of the network is around 25~$\mu as$, which is comparable with the angular sizes of the event horizons of the supermassive black holes in
Sgr A* and M87*. In spite of the huge differences in black hole masses and distances toward these objects, the shadows have similar angular sizes (52~$\mu as$ for Sgr A* and 42~$\mu as$ for M87*). In April 2017 the EHT Collaboration observed the Galactic Center and the center of the M87 galaxy. In 2019 the EHT Collaboration presented results of a shadow reconstruction for M87*, and in May 2022 a shadow reconstruction for Sgr A* was presented. Now these images with shadows around M87* and Sgr A* are used as logos for these supermassive black holes, or sometimes more generally for any astrophysical black holes. Since it is impossible
to observe dark regions (shadows) directly, astronomers found bright regions of synchrotron emission at 1.3~mm and reconstructed the shapes and sizes of the shadows.
These remarkable achievements in precise observations and data analysis are based on three pillars: synchrotron emission, which is generated in many astronomical objects including the environments of supermassive black holes; VLBI ideas, which were efficiently implemented in the EHT network; and relativistic analysis of geodesics in black hole metrics.
\section{\large Synchrotron radiation}
Radiation of electrons moving in magnetic fields (now called synchrotron radiation) was discussed in detail in a fundamental book by \cite{Schott_12}, but at that time there were no facilities to detect it in experiments or in astronomical observations\footnote{In a recent book \cite{Connerade_21} the author described the life and times of George Adolphus Schott, who criticized the planetary atom model (\cite{Schott_39}) proposed by \cite{Bohr_13} based on the results of remarkable experiments done by E.~Rutherford.}. People came back to this theory in the forties and fifties of the last century due to the opportunity to detect synchrotron radiation in accelerators and in astronomy, initially in the radio band and later in a wide spectrum of electromagnetic radiation from radio up to $\gamma$-rays.
The existence of synchrotron radiation was re-discovered by I. Pomeranchuk\footnote{Academician I. Ya. Pomeranchuk was one of the most favorite students of L. D. Landau. Isaac Pomeranchuk was the founder of the Theoretical Department at the Institute of Theoretical and Experimental Physics (ITEP), where L. D. Landau was a half-time researcher. In 1998 ITEP established the Pomeranchuk Prize (see \url{https://en.wikipedia.org/wiki/Pomeranchuk_Prize}) and three Pomeranchuk Prize laureates later received a Nobel Prize in physics: Yoichiro Nambu, Roger Penrose and
Giorgio Parisi (the number of people receiving both the Pomeranchuk and Nobel Prizes could have been larger, but according to the Pomeranchuk Prize rules a Nobel Prize winner cannot be chosen as a Pomeranchuk Prize laureate). L. \cite{Okun_03} wrote a short scientific biography of I. Pomeranchuk.} and his co-authors; see the papers by \cite{Pomeranchuk_40,Iwanenko_44,Artsimovich_45}.
Later, a theoretical analysis of synchrotron radiation was given by \cite{Schiff_46,Blewett_46,Schwinger_49}; the first detection
of X-ray radiation from accelerated electrons in the General Electric 70-MeV synchrotron was reported by \cite{Elder_47}. The paper by \cite{Schwinger_49} is very often treated as the key theoretical paper in the field.
In 1950 D. Ivanenko, A. A. Sokolov and I. Pomeranchuk were awarded the State Prize of the second grade for
works on synchrotron radiation, presented in the book ``Classical Field Theory'' written in Russian by D. I. Ivanenko and A. A. Sokolov.
The famous Soviet astrophysicist I. S. \cite{Shklovsky_46} recalled that in the forties of the last century he attended a talk about the discovery of radio emission from the Sun
and concluded that this radio emission was generated through the synchrotron radiation phenomenon. He also concluded that the synchrotron effect is a cause of electromagnetic radiation in a wide spectral band; as noted in the book by \cite{Shklovsky_91}, this was the most brilliant of all the ideas of his entire scientific career.
For the Crab Nebula, Shklovsky interpreted the electromagnetic radiation in a wide spectral band (from radio to X-ray) as synchrotron emission (\cite{Shklovsky_53,Shklovsky_76});
see also the English translation of the first edition of the book by \cite{Shklovsky_68}.
Martin \cite{Rees_71} supposed that the radio emission from extended radio sources may be explained by synchro-Compton radiation (in this case electrons are accelerated by electromagnetic waves).
\cite{Shklovsky_75}
assumed that there is a black hole at the Galactic Center with a mass around $3 \times 10^4 M_\odot$,
that the radiation has a non-thermal origin, and that synchrotron radiation is probably responsible for a significant part of the radiation from the Galactic Center (earlier, \cite{Linden_Bell_71} emphasized arguments supporting the presence of a supermassive black hole at the Galactic Center, with mass estimates in the range $[4\times 10^3, 10^7]M_\odot$). In spite of the underestimation of the black hole mass, the ideas about a black hole at Sgr A* and the synchrotron emission from the Galactic Center region received confirmation in subsequent studies.
Synchrotron radiation is a cause of energy losses in accelerators, but it plays an extremely important role in astrophysics, as was noted by \cite{Ginzburg_58,Ginzburg_59,Ginzburg_65}. More recent reviews of different aspects
of synchrotron radiation in astrophysical sources are given by \cite{Ginzburg_84,Ginzburg_85,Bisnovatiy_99}.
\section{\large Early VLBI in USSR}
The Soviet radio engineer Leonid Matveenko was one of the first to recognize the opportunity of inter-continental radio observations; the early history of these studies is described in the papers \cite{Matveenko_07a,Matveenko_07}.
In the fall of 1962 Matveenko reported the ideas of VLBI in Pushchino at a seminar of
the Radio Astronomy Laboratory, but he did not get support to conduct such an experiment in Crimea as he had proposed.
However, these ideas were supported by the participants of a seminar at the Sternberg State Astronomical Institute (SSAI) of Moscow State University, where
the SSAI director D. Ya. Martynov recommended taking out a patent due to the high scientific and technological importance of the proposal. Instead of a patent,
a scientific paper on the issue was published in the Soviet journal ``Radiophysics'' \cite{Matveenko_65}. In this paper the authors proposed independent recording of the signals and subsequent processing of the data. In the initial version of the paper the authors proposed to use a ground--space interferometer, but
the editorial board of the journal recommended removing this idea from the accepted version of the paper, as noted by \cite{Matveenko_07a,Matveenko_07}. We should mention, however, that earlier \cite{Papaleksi_47} had proposed to develop radio interferometers for geodesy and emphasized the importance of radio observations, in particular in the direction toward the Galactic Center, where a strong radio source, Sgr A*, had been discovered (\cite{Jansky_33}).
In the summer of 1963 the director of the Jodrell Bank Observatory, B. Lovell, visited the Soviet Union as a guest of the Soviet Academy of Sciences. Matveenko delivered a talk about the potential opportunities of interferometers with very long baselines, and Lovell noted that the idea looked feasible but he did not see any astronomical problem where such a resolution was needed \cite{Matveenko_07}. Both sides signed a memorandum of understanding about joint observations with the Crimean and British telescopes at the 32~cm wavelength. But these plans were not realized.
The VLBI technique for observations of compact bright radio sources was thus proposed in the USSR in the sixties of the last century, and these ideas were realized in the joint US--Soviet experiment proposed by M. Cohen and K. I. Kellermann, in which the 22-m Pushchino and 43-m Green Bank antennas were planned to be used; in the actual experiment the Pushchino antenna was substituted with the Simeiz one (\cite{Kellermann_92,Lovell_73}).
Results of observations with the first Soviet -- American interferometer at 2.8~cm and 6~cm in September and October 1969 were published by \cite{Broderick_71}.
The interferometer consisted of the 43-m radio telescope of the National Radio Astronomy Observatory at Green Bank (West Virginia) and the 22-m radio telescope at Simeiz (Crimea).
Results of the Soviet--American VLBI observations were discussed in the popular Soviet journal ``Science and Life'', where \cite{Matveenko_73} called VLBI a ``telescope with the size of the Earth''.
In June 1971 Soviet and American teams of radio astronomers carried out interferometric observations of the water vapour maser line at 1.35~cm. This interferometer was formed by the 37-m radio telescope of the Haystack Observatory at Westford (Massachusetts) and the 22-m telescope at Simeiz \cite{Burke_72}. Several observations were done with this interferometer in 1976--1977, as noted by \cite{Matveenko_78}.
On April 28 and May 6, 1976 the first multi-continental interferometric observations were carried out at 1.35~cm wavelength by \cite{Batchelor_76}. Astronomers from four radio telescopes participated in these observations. These telescopes were the 64-m NASA telescope in Tidbinbilla (Australia), the 40-m antenna of the Owens Valley Radio Observatory (OVRO), the 26-m antenna at the Maryland Point Observatory of the US Naval Research Laboratory (NRL) and the 22-m antenna of the Crimean Astrophysical Observatory (Simeiz, Crimea).
\section{\large Projects of ground--space interferometers}
As noted earlier, in the first paper on VLBI observations \cite{Matveenko_65} the authors discussed the opportunity of ground--space observations, but these sentences were removed at the request of the editorial board. Later, however, Soviet scientists and engineers formed a working group to develop a space radio antenna to act as the space component of a ground-based interferometer. As \cite{Matveenko_07a} recalled, the head of the project was V. P. Mishin, the scientific head was L. I. Matveenko, and the chief engineer was V. I. Kostenko. It was assumed that the ground--space interferometer would have the opportunity to observe compact maser sources and AGN at the 1.35~cm wavelength.
Taking into account the constraints of the space launcher on the mass and size of the payload, it was proposed to build a 3.1-m radio antenna. After optimizing many parameters of the space antenna, it was decided to use a parabolic reflector with a Cassegrain design for irradiation \cite{Matveenko_82}. The orbit apogee of the spacecraft with the radio antenna was expected to be around $80\times 10^3$~km, while the orbit perigee was planned to be around $30\times 10^3$~km. Many of the ideas introduced by Soviet scientists and engineers were successfully realised in the first ground--space project VSOP (VLBI Space Observatory Programme) with the HALCA satellite (Highly Advanced Laboratory for Communications and Astronomy; code name MUSES-B, for the second of the Mu Space Engineering Spacecraft series), as noted by \cite{Matveenko_07a}. The Japanese HALCA satellite moved along a highly elliptical orbit with an apogee altitude of 21,400 km and a perigee altitude of 560 km, with an orbital period of approximately 6.3 hours. HALCA was launched in February 1997 and made its final VSOP observations in October 2003. Initially it was planned
to observe in three frequency bands: 1.6 GHz, 5.0 GHz, and 22 GHz, but it was found that the sensitivity of the 22 GHz band had severely degraded after orbital deployment, so observations were done only at 1.6 GHz and 5.0 GHz. Clearly, the highest frequency band corresponds to the best angular resolution; therefore the expected highest resolution was not reached.
In the eighties the idea of a Soviet space--ground interferometer (Radioastron) started to be discussed again. It was expected that the interferometer would have an angular resolution at a level of a few microarcseconds at the shortest wavelength of 1.3~cm, as noted by \cite{Kardashev_88,Kardashev_01}. However, the space antenna was launched only in 2011, and Soviet astronomers lost the opportunity to build the first ground--space VLBI radio telescope and conduct observations at the 1.35~cm wavelength with the best angular resolution before the realization of the Japanese HALCA mission. The Radioastron mission was successfully launched in 2011 and operated until 2019; scientific results after five years of operation are given by \cite{Kardashev_17}.
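The resolution scales quoted for these missions follow from the diffraction limit $\theta \simeq \lambda/B$; a minimal numerical sketch (the baseline value is our assumption, of order the Radioastron orbit apogee):

```python
import math

# Diffraction-limited resolution of an interferometer: theta ~ lambda / B.
# The baseline below is an assumed value, of order the Radioastron orbit
# apogee; it only illustrates the microarcsecond resolution scale.
lam = 1.3      # cm, shortest Radioastron wavelength
B = 3.3e10     # cm, ground-space baseline (~3.3e5 km, assumed)

theta_rad = lam / B
theta_muas = theta_rad * 180 / math.pi * 3600 * 1e6
print(f"theta ~ {theta_muas:.1f} microarcseconds")  # ~ 8 muas
```

Shorter wavelengths or longer baselines improve the resolution proportionally, which is why the degraded 22 GHz band on HALCA and the mm-band plans for Millimetron matter so much for shadow-scale imaging.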
A perspective space--ground VLBI mission is Millimetron\footnote{https://millimetron.ru/en/general}. It will carry a cryogenic antenna of 10~m diameter with a wavelength coverage of $[70~\mu\mathrm{m}, 1~\mathrm{mm}]$. Its orbit is around the Lagrangian L2 point of the Sun--Earth two-body system (as for many other astronomical missions). The wavelength range for space--Earth VLBI observations is $[0.5, 10]$~mm, while that for single-dish observations is $[0.07, 3]$~mm. The expected launch date is 2029 or later.
\section{\large GR Effects in a strong gravitational field}
In the seventies of the last century \cite{Bardeen_73}\footnote{The remarkable relativist James Maxwell Bardeen passed away on June 20, 2022; he had the opportunity to see realizations of his theoretical picture for Sgr A* and M87*. It is a very nice case of a theoretical concept being confirmed in the skies.} presented a picture of a dark region (a shadow) for a gedanken observation in which a bright screen is located
behind a Kerr black hole and a distant observer is located in the equatorial plane. Later, \cite{Chandrasekhar_83} reproduced a similar picture in his book. However, neither Bardeen nor Chandrasekhar considered the shadow as a possible test of GR, perhaps because a) shadow sizes are extremely small to be detected for all known estimates of black hole masses and distances toward them, and b) there are no bright screens precisely behind black holes in astronomy. The authors represented the shadow shape as
a function $\beta(\alpha)$, where $\beta$ corresponds to the impact parameter in the rotation-axis direction while $\alpha$ corresponds to the impact parameter in the equatorial direction.
\cite{Falcke_00, Melia_01} simulated shadow formation for the Galactic Center in the framework of a toy model, where the authors took into account electron scattering
for radiation in the mm and cm bands. The authors concluded that it is possible to observe a dark region (shadow) around the black hole in the mm band, while it is not possible to see a shadow in the cm band due to electron scattering.
It was expected to create a global network acting at the 1.3~mm wavelength, whose best angular resolution would be around
$25~\mu as$ (similar to the resolution of the EHT network, as noted by \cite{Akiyama_19}), while the shadow diameter was estimated to be as small as $30~\mu as$,
assuming a black hole mass of $2.6 \times 10^6M_\odot$ as evaluated by \cite{Eckart_96,Ghez_98}. Expectations for shadow observations with these facilities were therefore not very optimistic; however, we now know that the black hole mass is more than $4 \times 10^6M_\odot$,
and the EHT Collaboration reconstructed the shadow at Sgr A* in 2022. The problem of shadow reconstruction from EHT Collaboration observations
is therefore very hard but realizable.
At the time when the Radioastron mission was being prepared for launch, it was known that its best angular resolution at the shortest wavelength of 1.3~cm would be around 8~$\mu as$, comparable with the Schwarzschild diameter of the black hole at the Galactic Center, whose mass was then evaluated to be as high as $5 \times 10^6 M_\odot$ (based on estimates by \cite{Rees_82}). It was therefore expected that observations with such precise angular resolution would give an opportunity to find signatures of general relativistic effects. \cite{Zakharov_05a,Zakharov_05b} proposed to use shadow observations around the Galactic Center as a test of the presence of a supermassive black hole at Sgr A*, since for a black hole mass around $4 \times 10^6 M_\odot$ and a distance of around 8~kpc toward the Galactic Center the shadow size is around $50~\mu as$.
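The $50~\mu as$ figure follows from the angular diameter of a Schwarzschild shadow, $\theta = 2\sqrt{27}\,GM/(c^2 D)$; a quick numerical check with the mass and distance quoted in the text (physical constants in cgs, rounded to four digits):

```python
import math

# Angular diameter of a Schwarzschild shadow: theta = 2*sqrt(27)*G*M/(c^2 D).
# Mass and distance are the values quoted in the text; constants in cgs.
G = 6.674e-8        # cm^3 g^-1 s^-2
c = 2.998e10        # cm/s
M_sun = 1.989e33    # g
kpc = 3.086e21      # cm

M = 4e6 * M_sun     # Sgr A* mass
D = 8 * kpc         # distance to the Galactic Center

theta_rad = 2 * math.sqrt(27) * G * M / (c**2 * D)
theta_muas = theta_rad * 180 / math.pi * 3600 * 1e6
print(f"shadow diameter ~ {theta_muas:.0f} microarcseconds")  # ~ 51
```

The factor $\sqrt{27} = 3\sqrt{3}$ is the critical impact parameter in units of $GM/c^2$, consistent with the Kerr-limit result discussed below for $a \to 0$.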
Usually there are no bright screens behind astrophysical black holes; however, following ideas proposed by \cite{Holz_02}, it was noted in \cite{Zakharov_05a} that a shadow should be surrounded by secondary images of many astrophysical sources, and the presence of these secondary images gives an opportunity to outline the shadow. It is important to note that the presence of a shadow depends only on the black hole metric and
does not depend on the uncertainties of our knowledge about accretion flows; only if the emitting regions are very close to
the horizons of rapidly rotating black holes may the shadow sizes and shapes differ from the standard case of a bright screen behind a black hole.
\cite{Zakharov_05a} showed that in the case of an equatorial-plane position of a distant observer the maximal impact parameter in the rotation-axis direction is always (independently of $a$) $\beta_{max}(\alpha_{max})=3 \sqrt{3}$,
while $\alpha_{max}=-2a$.
This claim was based on an analysis of the critical curve for the Chandrasekhar parameters $\eta (\xi)$, which separates scattering and capture of photons in the Kerr metric. This analysis was done by \cite{Zakharov_86} (see Fig. 2 in that paper and the discussion therein): if one considers the critical values corresponding to multiple roots of the polynomial describing radial photon motion as functions of the radial coordinate $r$ ($\xi(r)$, $\eta(r)$), then $\eta(\xi)$ is maximal at $\xi=-2a$, with $\eta(-2a) =27$ and $r(-2a)=3$ (see also the critical curve in Fig. 34, page 352, of the book by Chandrasekhar 1983). If $\eta(\xi)$ is known, one can obtain $\beta(\alpha)$. Therefore, the function $\eta(\xi)$ contains the information about shadows for any position angle.
\cite{Zakharov_05a} expressed a hope that the shadow might be detected with Radioastron facilities if electron scattering could be ignored, while the authors expressed a strong belief that the shadow could be detected with a VLBI network acting in the mm band or with the projected ground--space interferometer Millimetron. The recent results by \cite{Akiyama_22a}, where the shadow was reconstructed for Sgr A*, remarkably confirmed our predictions. Earlier, a shadow was reconstructed by \cite{Akiyama_19} for M87*. Later, polarization maps for M87* were presented by \cite{Akiyama_21_a},
and possible distributions of magnetic fields were given by \cite{Akiyama_21_b} (the polarization is connected with
synchrotron radiation of electrons accelerated in magnetic fields near M87*).
Based on the results of these studies, \cite{Kocherlakota_21} constrained the charges of several metrics, including the Reissner -- Nordstr\"om,
Frolov, and Kazakov -- Solodukhin ones, among others. We would like to note that the blue dotted line in the left panel of Fig. 2 of \cite{Kocherlakota_21} corresponds to the analytical expression for the shadow size as a function of charge derived by \cite{Zakharov_05a}.
\section{\large Shadow sizes for RN black holes with a tidal charge}
In \cite{Zakharov_05a} an analytical expression for the shadow radius was obtained as a function of the black hole charge; the derivation used
an algebraic condition of vanishing discriminant, applied earlier by \cite{Zakharov_91,Zakharov_94}.
However, since cosmic plasma is neutral, we do not expect astrophysical black holes to carry a significant electric charge; nevertheless,
in the framework of theories with extra dimensions there are solutions which look very similar to the Reissner -- Nordstr\"om one.
For instance, \cite{Dadhich_00} showed that if we
consider the Randall--Sundrum II braneworld scenario, the Reissner -- Nordstr\"om
metric may be a black hole solution in the model. Later, this solution was called the Reissner -- Nordstr\"om %
with a tidal charge.
Similar solutions may exist in scalar--tensor theories, now called Horndeski theories, as was shown by \cite{Babichev_17}.
The Reissner -- Nordstr\"om metric may be written in the following form, in natural
units ($G=c=1$):
\begin {equation}
ds^{2}=-\left(1-\frac{2M}{r}+\frac{\mathcal{Q}^{2}}{r^{2}}\right)dt^{2}+\left(1-\frac{2M}{r}+\frac{\mathcal{Q}^{2}}{r^{2}}\right)^{-1}dr^{2}+
r^{2}(d{\theta}^{2}+{\sin}^{2}\theta d{\phi}^{2}),
\label{RN_0}%
\end {equation}
where $M$ is the mass of the black hole and $\mathcal{Q}$ is its charge (in the case of an electric charge we have the ordinary Reissner -- Nordstr\"om %
metric, while \cite{Dadhich_00} showed that $\mathcal{Q}^2$ may be negative, in which case it is called a tidal charge).
We introduce notations $\hat {r}=r/M, \xi=L/(ME)$ and $\hat
{\mathcal{Q}}=\mathcal{Q}/M.$ Below we omit the hat symbol
for these quantities. We also introduce
$l=\xi^{2}, q=\mathcal{Q}^{2}$.
The polynomial $R(r)$ describes the motion along the $r$-coordinate, and it has a multiple root $r_{crit}$ if and only if the polynomial discriminant vanishes. As was shown by \cite{Zakharov_14},
the polynomial $R(r)$ has a multiple root for $ r\geq r_{+}$ if and only if
\begin {eqnarray}
l^{2}(1-q)+l(-8q^{2}+36q-27)-16q^{3}=0. \label{RN_D_8}
\end {eqnarray}
If $q=1$, then $l = 16$, or $\xi_{cr}=4$, as noted by \cite{Zakharov_05a,Zakharov_14}. These values also correspond to the blue curve
shown in Fig. 2 of
\cite{Kocherlakota_21}, where one can see that
for $\mathcal{Q}=1$ we have $\xi_{cr} (\mathcal{Q})=4$.
From Eq. (\ref{RN_D_8}), we have
\begin {eqnarray}
l_{\rm cr}=\frac{(8q^{2}-36q+27)+\sqrt{\left(9-8q\right)^3}}{2(1-q)}. \label{RN_D_9}
\end {eqnarray}
Therefore, we see from the last relation that unstable circular
photon orbits exist only for $q \le \dfrac{9}{8}$.
For $1< q \le \dfrac{9}{8}$
the metric describes a naked singularity: unstable circular photon orbits still exist, but no shadows are formed for these tidal charges.
As was shown by \cite{Zakharov_05a}, the set of Chandrasekhar parameters $\xi, \eta$ corresponding to unstable circular photon orbits
separates the capture and scattering regions for Kerr -- Newman black hole solutions; for generalizations of these solutions that include
naked singularities, however, this statement may fail: as noted above, for naked singularities of the Reissner -- Nordstr\"om metric with $1< q \le \dfrac{9}{8}$ unstable circular photon orbits exist, but shadows are not formed for these metrics.
Interesting cases of the naked singularities forming shadows were considered by \cite{Shaikh_18}.
As was first noted many years ago, the photon capture cross section of a charged
black hole has to be considerably smaller than the capture cross section of a
Schwarzschild black hole, as one can see in the corresponding figures presented by \cite{Zakharov_05b,Zakharov_14,Kocherlakota_21}. The critical value of the impact parameter,
characterizing the capture cross section of a Reissner -- Nordstr\"om black hole, is determined by equation (\ref{RN_D_9}), since $\xi=\sqrt{l}$.
As was shown by \cite{Zakharov_05b,Zakharov_14,Zakharov_22},
we can calculate the radius of the unstable
circular photon orbit (which is the same as
the minimum periastron distance for all orbits that are scattered).
Namely,
\begin {eqnarray}
r_{\rm crit}=2\sqrt{\frac{l_{\rm cr}}{6}} \cos{\frac{\alpha}{3}},
\label{RN_D_10}
\end {eqnarray}
where
\begin {eqnarray}
\cos \alpha={-\sqrt{\frac{27}{2 l_{\rm cr}}}}. \label{RN_D_11}
\end {eqnarray}
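As a consistency check, equations (\ref{RN_D_9})--(\ref{RN_D_11}) can be evaluated numerically. The short sketch below (ours, not from the cited papers) verifies that the Schwarzschild limit $q=0$ reproduces the classical values $\xi_{cr}=3\sqrt{3}$ and $r_{\rm crit}=3$ (in units of $M$), and that $q=1$ gives $\xi_{cr}=4$ as quoted above:

```python
import math

def l_cr(q):
    """Critical l = xi^2 from Eq. (RN_D_9); valid for q < 1.
    At q = 1 the quadratic Eq. (RN_D_8) degenerates and gives l = 16 directly."""
    if abs(1.0 - q) < 1e-12:
        return 16.0
    return ((8*q**2 - 36*q + 27) + math.sqrt((9 - 8*q)**3)) / (2*(1 - q))

def r_crit(q):
    """Radius of the unstable circular photon orbit, Eqs. (RN_D_10)-(RN_D_11)."""
    l = l_cr(q)
    alpha = math.acos(-math.sqrt(27.0 / (2.0 * l)))
    return 2.0 * math.sqrt(l / 6.0) * math.cos(alpha / 3.0)

# Schwarzschild limit q = 0: capture parameter xi = 3*sqrt(3), photon sphere at r = 3M
assert abs(math.sqrt(l_cr(0.0)) - 3*math.sqrt(3)) < 1e-9
assert abs(r_crit(0.0) - 3.0) < 1e-9
# Extremal charge q = 1: xi_cr = 4, matching the value quoted in the text
assert abs(math.sqrt(l_cr(1.0)) - 4.0) < 1e-9
```

The shrinking of the capture cross section with growing charge discussed below corresponds to $l_{\rm cr}$ (and hence $r_{\rm crit}$) decreasing monotonically on $q \in [0, 9/8]$.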
\section{\large Constraints on a tidal charge}
Based on the estimates of the shadow size of M87* done by \cite{Kocherlakota_21}, the tidal charge was estimated in \cite{Zakharov_22} using
the analytical expressions of \cite{Zakharov_14}: $q \in [-1.22, 0.814]$ at 68\% C.L., where the upper bound ($q_{upp}=0.814$) of the interval corresponds to the upper limit $\mathcal{Q}_{upp}=\sqrt{q_{upp}} \approx 0.902$, matching the value shown by the blue curve in Fig. 2 of \cite{Kocherlakota_21}.
\cite{Zakharov_22} also found the constraint $-0.25 < q$ on a tidal charge for Sgr A*, based on preliminary estimates of the ring width by \cite{Lu_18}.
Below we improve the upper limit on the tidal charge using the new EHT estimates of the shadow size for Sgr A*.
Following \cite{Akiyama_22a}, we adopt a shadow diameter $\theta_{\text{sh~Sgr A*}} \approx (51.8
\pm 2.3)\mu as$ at 68\% confidence level. In Fig.~\ref{Fig1} we show the allowed region for a tidal charge. Horizontal dashed lines correspond to the constraints on the shadow radius in units of $M$. The light green vertical strip corresponds to $q$ values ($-0.27 < q < 0.25$) that are currently consistent with the shadow size estimate done by the EHT collaboration for Sgr A*, while the yellow strips correspond to $q$ values that are not consistent with this estimate.
\section{\large Conclusions}
We have recalled the contributions of Russian scientists to the development of the theory of synchrotron radiation and its astrophysical applications to explain the spectra of
astronomical objects. We have also noted Matveenko's contribution to the development of the VLBI method for astronomical observations.
The recent remarkable results of the EHT on the reconstruction of the shadows of the black holes in Sgr A* and M87* demonstrated the high efficiency of this technique.
As we noted, there are analytical expressions for the dependence of shadow sizes on the (tidal) charge, and this parameter can be evaluated from shadow size estimates obtained with VLBI observations and data analysis.
Considerations of Randall -- Sundrum theories with an extra dimension led to the conclusion that Reissner -- Nordstr\"om solutions with a tidal charge exist. Based on our analytical expressions for shadow sizes as a function of charge, we constrained the tidal charge of Sgr A* and improved our previous results obtained in \cite{Zakharov_22}. Similar constraints on the electric charge were obtained recently by \cite{Akiyama_22b}.
\subsubsection*{Acknowledgements}
The author thanks the organizers of ICRAnet -- Isfahan Astronomy Meeting for their kind invitation to present a contribution for this activity.
|
Title:
A Machine Learning Approach to Predict Missing Flux Densities in Multi-band Galaxy Surveys |
Abstract: We present a new method based on information theory to find the optimal
number of bands required to measure the physical properties of galaxies with a
desired accuracy. As a proof of concept, using the recently updated COSMOS
catalog (COSMOS2020), we identify the most relevant wavebands for measuring the
physical properties of galaxies in a Hawaii Two-0 (H20)- and UVISTA-like survey
for a sample of $i<25$ AB mag galaxies. We find that with available $i$-band
fluxes, $r$, $u$, IRAC/$ch2$ and $z$ bands provide most of the information
regarding the redshift with importance decreasing from $r$-band to $z$-band. We
also find that for the same sample, IRAC/$ch2$, $Y$, $r$ and $u$ bands are the
most relevant bands in stellar mass measurements with decreasing order of
importance. Investigating the inter-correlation between the bands, we train a
model to predict UVISTA observations in near-IR from H20-like observations. We
find that magnitudes in $YJH$ bands can be simulated/predicted with an accuracy
of $1\sigma$ mag scatter $\lesssim 0.2$ for galaxies brighter than 24 AB mag in
near-IR bands. One should note that these conclusions depend on the selection
criteria of the sample. For any new sample of galaxies with a different
selection, these results should be remeasured. Our results suggest that in the
presence of a limited number of bands, a machine learning model trained over
the population of observed galaxies with extensive spectral coverage
outperforms template-fitting. Such a machine learning model maximally comprises
the information acquired over available extensive surveys and breaks
degeneracies in the parameter space of template-fitting inevitable in the
presence of a few bands.
| https://export.arxiv.org/pdf/2208.14781 |
\title{\textbf{A Machine Learning Approach to Predict Missing Flux Densities in Multi-band Galaxy Surveys}}
\correspondingauthor{Nima Chartab}
\email{nchartab@carnegiescience.edu}
\author[0000-0003-3691-937X]{Nima Chartab}
\affiliation{The Observatories of the Carnegie Institution for Science, 813 Santa Barbara St., Pasadena, CA 91101, USA}
\affiliation{Department of Physics and Astronomy, University of California, Irvine, CA 92697, USA}
\affil{Department of Physics and Astronomy, University of California, Riverside, 900 University Ave, Riverside, CA 92521, USA}
\author{Bahram Mobasher}
\affil{Department of Physics and Astronomy, University of California, Riverside, 900 University Ave, Riverside, CA 92521, USA}
\author{Asantha R. Cooray}
\affiliation{Department of Physics and Astronomy, University of California, Irvine, CA 92697, USA}
\author[0000-0003-2226-5395]{Shoubaneh Hemmati}
\affiliation{Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125, USA}
\author[0000-0002-0364-1159]{Zahra Sattari}
\affil{Department of Physics and Astronomy, University of California, Riverside, 900 University Ave, Riverside, CA 92521, USA}
\affiliation{The Observatories of the Carnegie Institution for Science, 813 Santa Barbara St., Pasadena, CA 91101, USA}
\author{Henry C. Ferguson}
\affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA}
\author{David B. Sanders}
\affiliation{Institute for Astronomy (IfA), University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA}
\author[0000-0003-1614-196X]{John R. Weaver}
\affiliation{Cosmic Dawn Center (DAWN)}
\affiliation{Niels Bohr Institute, University of Copenhagen, Jagtvej 128, DK-2200 Copenhagen, Denmark}
\author{Daniel K. Stern}
\affiliation{Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA}
\author{Henry J. McCracken}
\affiliation{Institut d'Astrophysique de Paris, UMR 7095, CNRS, and Sorbonne Universit\'e, 98 bis boulevard Arago, 75014 Paris, France}
\author{Daniel C. Masters}
\affiliation{Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125, USA}
\author{Sune Toft}
\affiliation{Cosmic Dawn Center (DAWN)}
\affiliation{Niels Bohr Institute, University of Copenhagen, Jagtvej 128, DK-2200 Copenhagen, Denmark}
\author{Peter L. Capak}
\affiliation{Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125, USA}
\author{Iary Davidzon}
\affiliation{Cosmic Dawn Center (DAWN)}
\affiliation{Niels Bohr Institute, University of Copenhagen, Jagtvej 128, DK-2200 Copenhagen, Denmark}
\author{Mark E. Dickinson}
\affiliation{National Optical Astronomy Observatories, 950 N Cherry
Avenue, Tucson, AZ 85719, USA}
\author{Jason Rhodes}
\affiliation{Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA}
\author{Andrea Moneti}
\affiliation{Institut d'Astrophysique de Paris, UMR 7095, CNRS, and Sorbonne Universit\'e, 98 bis boulevard Arago, 75014 Paris, France}
\author{Olivier Ilbert}
\affiliation{Aix Marseille Univ, CNRS, LAM, Laboratoire d'Astrophysique de Marseille, Marseille, France}
\author{Lukas Zalesky}
\affiliation{Institute for Astronomy (IfA), University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA}
\author{Conor J.R. McPartland}
\affiliation{Institute for Astronomy (IfA), University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA}
\author{Istv\'an Szapudi}
\affiliation{Institute for Astronomy (IfA), University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA}
\author[0000-0002-6610-2048]{Anton M. Koekemoer}
\affiliation{Space Telescope Science Institute, 3700 San Martin Dr.,
Baltimore, MD 21218, USA}
\author{Harry I. Teplitz}
\affiliation{Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125, USA}
\author{Mauro Giavalisco}
\affiliation{Department of Astronomy, University of Massachusetts, 710 North Plesant Street, Amherst, MA 01003, USA}
\keywords{\small{Astronomy data analysis (1858); Astronomy data visualization (1968); Galaxy evolution (594)}}
\section{Introduction}
\label{sec:Introduction}
Future ground-based and space-borne observatories, equipped with large aperture telescopes and sensitive large format detectors, will provide broad-band imaging data for more than a billion galaxies. These data are pivotal to a better understanding of the dark sectors of the Universe (i.e., dark matter and dark energy) as well as the evolution of galaxies and large-scale structures over cosmic time. The challenge, however, is to obtain wide waveband coverage to constrain the spectral energy distributions (SEDs) of millions of galaxies and estimate their redshifts and physical parameters such as stellar masses and star formation rates.
Template fitting is widely used to infer photometric redshifts of galaxies and their physical properties \cite[e.g.,][]{Arnouts99,Bolzonella2000,Ilbert06}. However, theoretical synthetic templates may not be representative of the real parameter space of galaxies. For example, templates can include SEDs which do not have an observational analog. This will cause degeneracy in parameter measurement, especially when we reconstruct SEDs with few bands. Many of these degeneracies are mitigated by obtaining data with wide spectral coverage (e.g., with a larger number of wavebands). An example of such a data set is the Cosmic Evolution Survey \cite[COSMOS;][]{Scoville07} that has been observed in more than 40 bands from X-ray to radio wavelengths. The wealth of information in this field provides very well-constrained SEDs for galaxies. However, not all surveys have as many photometric bands as the COSMOS field. For instance, \Euclid{} \citep{Laureijs11} will rely on near-infrared $Y$, $J$,
and $H$ bands ($960–2000\ \rm nm$), complemented by optical ground-based observations in $u$, $g$, $r$, $i$ and $z$ to measure photometric redshifts \citep{Euclid_photz20}. It is therefore instructive to use the extensive dataset in the COSMOS field to identify essential bands which carry most of the information regarding the physical properties of galaxies.
The aim of this study is to transfer the information gained in the COSMOS field to fields such as the \Euclid{} deep fields where such extensive photometry does not exist. Using the concepts of information theory, we can determine whether any information is shared between the bands and use these measurements to identify the most important bands (those that reveal most of the information about the physical properties of galaxies). Using machine learning techniques, we can then predict fluxes in the wavebands that are not observed in a survey but share information with other available (observed) bands. This allows us to carefully design future surveys and observe only in selected wavebands that contain most of the information, saving significant observing time.
Machine learning has become popular in recent years to build models based on spectroscopic redshifts \citep[e.g.,][]{Carrasco14,Masters17} and train models based on synthetic templates \citep[e.g.,][]{Hemmati19} or mock catalogs generated from galaxy simulations \citep[e.g.,][]{Davidzon19,Simet21}. These methods are particularly useful as machine learning algorithms can learn more complicated relations given a large and comprehensive training data set \citep{Mucesh21}. Moreover, these models speed up parameter measurement, which is an important characteristic with the flood of data imminent from upcoming surveys \citep{Hemmati19}.
In this paper, we develop a new technique based on information theory to quantify the importance of each waveband and identify essential bands to measure the physical properties of galaxies. We also develop a machine learning model to predict fluxes in missing bands and thereby improve the wavelength resolution of existing photometric data. To demonstrate the application of these techniques, we apply our methods to a sample of galaxies drawn from the latest version of the COSMOS survey \citep[COSMOS2020;][]{Weaver21}, analogous to that planned by the \Euclid{} deep fields. A new ground-based survey, Hawaii Two-0 (H20; McPartland et al. in prep), has been designed to provide complementary photometric data for the \Euclid{} mission. H20 will provide $u-$band observations from MegaCam instrument on the Canada-France-Hawaii telescope (CFHT) and $g-,r-,i-, z-$band imaging from Hyper Suprime-Cam (HSC) instrument on the Subaru telescope over 20 square degrees of the \Euclid{} deep fields. Spitzer/IRAC observations from the Spitzer Legacy Survey (SLS) are also available in the same fields \citep{Moneti21}. In this paper, we identify the importance of wavebands for an H20+UVISTA-like survey with similar wavelength coverage expected in \Euclid{} deep fields, incorporating the near-IR $YJH$ bands from UltraVista \citep{McCracken12} in addition to the H20 and SLS wavebands. We then predict fluxes in near-IR wavebands using the existing ground-based and mid-IR Spitzer/IRAC observations (H20-like) of the deep fields.
In Section \ref{sec:Data}, we briefly introduce the COSMOS2020 catalog, and use that to build a sample of H20+UVISTA-like galaxies. Section \ref{information} describes the concepts of information gain and quantifies the importance of each waveband based on that. In Section \ref{sec:Visualize}, we use dimensionality reduction techniques to visualize photometric data in 2-dimensional space to explore the feasibility of predicting fluxes in near-IR fluxes based on $ugriz$ and Spitzer/IRAC data. This is followed by Section \ref{sec:band_prediction} where we train a machine learning algorithm, Random Forest model, to predict fluxes in UVISTA/$YJH$ wavebands using data in wavebands similar to the existing H20. In Section \ref{sec:Photz-M}, we investigate the accuracy of the photometric redshifts and stellar masses given the limited number of bands available in H20-like and H20+UVISTA-like data. We discuss and summarize our results in Section \ref{sec:Discussion_Summary}.
Throughout this work, we assume flat $\Lambda$CDM cosmology with $H_0=70 \rm \ kms^{-1} Mpc^{-1}$, $\Omega_{m_{0}}=0.3$ and $\Omega_{\Lambda_{0}}=0.7$. All magnitudes are expressed in the AB system, and the physical parameters are measured assuming a \cite{Chabrier03} IMF.
\section{Data}
\label{sec:Data}
Here we use the updated version of the COSMOS catalog, COSMOS2020, to build a sample of galaxies analogous to those that will be observed in the \Euclid{} deep fields. Compared to COSMOS2015 catalog \citep{Laigle16}, COSMOS2020 provides much deeper near-IR and mid-IR (Spitzer) photometric data as well as two independent methods for photometric extraction - the conventional and a profile-fitting (\farmer{}; J. Weaver et al., in prep.) methods. We use \farmer{} photometry that contains consistent photometric data in 39 bands from FUV to mid-IR including broad, medium and narrow filters. All the data are reduced to the same scale with appropriate PSFs. Photometric redshifts are calculated using LePhare \citep{Arnouts99,Ilbert06} with a similar configuration described in \cite{Ilbert13}. Given the large number of bands with deep observations, photometric redshift solutions are accurate, reaching a normalized median absolute deviation \citep[$\sigma_{\rm NMAD}$;][]{Hoaglin83} of $0.02$ for galaxies as faint as $i\sim25$ AB mag \citep{Weaver21}. The redshifts of galaxies are then fixed on their estimated photometric redshifts and the stellar masses were estimated. In this paper, we consider COSMOS2020 photometric redshifts and stellar masses as a \enquote{ground truth} since spectroscopic redshifts are only available for a limited number of galaxies and using a mixture of photometric and spectroscopic redshifts can bias our sample towards specific populations of galaxies.
We use two sets of wavebands: 1) H20-like bands: ${\rm \mathbf{A}}\coloneqq\{u,g,r,i,z,ch1,ch2\}$, 2) H20+UVISTA-like bands: ${\rm \mathbf{B}}\coloneqq\{u,g,r,i,z,Y,J,H,ch1,ch2\}$. $u-$band observations are conducted by the MegaCam instrument at CFHT, and the other optical bands ($g,r,i$ and $z$) are available from Subaru's Hyper Suprime-Cam (HSC) imaging. Spitzer/IRAC channel 1,2 ($ch1,ch2$) data are compiled from all the IRAC observations of the COSMOS field \citep{Moneti21}. Near-IR photometry in the $Y$, $J$ and $H$ bands is obtained from the UltraVista survey \citep{McCracken12}. We select a subset of the COSMOS2020 galaxies that are observed, but not necessarily detected, in all the aforementioned bands and have $i-$band AB magnitude $\leq 25$ with $3\sigma$ detection. These selection criteria result in 165,807 galaxies out to $z\sim 5.5$. \new{Photometric measurements in the COSMOS2020 catalog are not corrected for Galactic extinction; we corrected them using the \cite{Schlafly11} dust map. Moreover,} some sources have negative fluxes in the desired bands, which is due to the variation of background flux across the image. We set these fluxes to zero.
\section{Information Gain}
\label{information}
Let us suppose that we do not have any prior information about the redshift distribution of galaxies selected with the criteria mentioned in Section \ref{sec:Data}. We therefore assume a uniform distribution for the redshift. As an example, if we define four redshift bins (\{$z_1$=(0,1], $z_2$=(1,2], $z_3$=(2,3], $z_4$=(3,4]\}) and want to identify which bin a galaxy belongs to, we can encode it in two bits, as below, \\
\begin{center}
\includegraphics[]{output-figure0.pdf}
\end{center}
Here, we need to ask two YES/NO questions to identify the bin a galaxy belongs to. However, based on the available observations of COSMOS2020, we know the redshift distribution of galaxies with $i\leq 25$ AB mag as background information. We, therefore, update the decision tree above, considering our prior information about the redshift distribution, to reduce the average number of questions we need to ask to identify the redshift bin of a galaxy. Based on the redshift distribution shown in Figure \ref{fig:z_PDF}, the probability of a galaxy being in each redshift bin is: $P(z_1)=0.56, P(z_2)=0.32, P(z_3)=0.09, P(z_4)=0.03$. Thus, one possible decision tree to identify the redshift bin of a galaxy can be built as follows,
\begin{center}
\includegraphics[]{output-figure1.pdf}
\end{center}
\noindent On average, $0.56\times 1+0.32\times 2+(0.09+0.03)\times 3=1.56$ questions (bits) are required to identify the redshift bin of a galaxy. We find that the number of bits (questions) is reduced from 2 to 1.56 when we add information regarding the redshift distribution of galaxies. This decrease shows that we are less surprised when we observe the redshift of a galaxy, given that we know what the redshift distribution looks like.
Given the above example, the optimal number of bits required to store a variable, called Shannon's entropy ($H$), is defined as \citep{Shannon48},
\begin{equation}
H(X)=-\sum_i P(x_i)\log_2 P(x_i),
\label{entropy}
\end{equation}
\noindent where $x_i$ is a possible outcome of the variable $X$, occurring with probability $P(x_i)$. In this formulation, $-\log_2 P(x_i)$ represents the number of bits required to encode that outcome. Using equation \ref{entropy}, Shannon's entropy of the redshift based on the probabilities in the four bins is 1.45 bits. This means that we can still make our tree more optimal and encode the redshift values in 1.45 bits instead of 1.56. One possible way would be to build the tree to identify the redshifts of two galaxies simultaneously, which makes the average number of questions per galaxy even less than 1.56. However, we do not aim to find the optimal compression algorithm to encode the redshift information; we just use Shannon's entropy to find the maximal compression rate.
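The worked example above can be checked numerically. The sketch below (our own illustration, not code from the paper) compares the average code length of the decision tree with the Shannon entropy of the four-bin distribution; note that with the two-decimal probabilities quoted here the entropy evaluates to about 1.46 bits, close to the 1.45 bits obtained from the unrounded distribution:

```python
import math

# P(z_i) for the four redshift bins, as quoted in the text
P = {"z1": 0.56, "z2": 0.32, "z3": 0.09, "z4": 0.03}

# Depth of each leaf in the decision tree above: 1 question for z1, 2 for z2, 3 for z3/z4
code_len = {"z1": 1, "z2": 2, "z3": 3, "z4": 3}
avg_questions = sum(P[b] * code_len[b] for b in P)   # average questions per galaxy

# Shannon entropy: the optimal average code length in bits
H = -sum(p * math.log2(p) for p in P.values())

assert abs(avg_questions - 1.56) < 1e-9
assert H < avg_questions < 2.0   # prior knowledge beats the flat 2-bit encoding
```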
In the presence of other information, such as observed fluxes in different bands, the entropy of the redshift decreases even more. The amount of uncertainty (entropy) remaining in $X$ after we have seen $Y$ is called conditional entropy and defined as,
\begin{equation}
H(X|Y)=-\sum_{x\in X,y \in Y} P(x,y)\log_2 \frac{P(x,y)}{P(y)},
\end{equation}where $P(x,y)$ is the joint probability distribution at $(x,y)$. Moreover, the mutual information between X and Y (i.e., the amount of uncertainty in X that is removed by knowing Y) is defined as,
\begin{equation} \label{eq:mi}
\begin{split}
I(X,Y)&=H(X)-H(X|Y) \\
& = H(X) + H(Y) - H(X,Y),
\end{split}
\end{equation}where $H(X,Y)$ is the joint entropy of a pair of variables $(X,Y)$. In other words, I$(X,Y)$ is a measure of the amount of information (in bits) one can acquire about $X$ by observing $Y$. This parameter can be used to identify the waveband that will be most useful for measuring galaxy properties (e.g., redshifts). For instance, the waveband with the highest I$(redshift,waveband)$ carries the most information and decreases the entropy of the redshift the most.
The mutual information in equation \ref{eq:mi} is defined for discrete variables. In the case of continuous variables (e.g., redshift, flux, stellar mass), we need to discretize the data properly. \cite{Kraskov04} (hereafter KSG) introduced a k-nearest neighbor estimator to compute the mutual information of continuous variables. This method detects the underlying probability distribution of the data by measuring distances to the $k^{th}$ nearest neighbors of points in the data set. There is nonzero mutual information when points cluster in the X-Y space, which allows us to predict $y\in Y$ given an $x\in X$ coordinate. We refer readers to the original KSG paper for details of the method. Figure \ref{fig:mi} shows the mutual information between redshift and each waveband based on the KSG algorithm with $k=100$ nearest neighbors. It suggests that, for the sample of $i<25$ AB mag galaxies, the $u-$band provides the most information about the redshift compared to the rest of the H20+UVISTA-like bands. However, our sample is selected based on $i-$band magnitudes, so we assumed that $i-$band data are already available. Suppose that for our sample the $u-$band fluxes are highly correlated with the $i-$band data. In this case, the $u-$band carries no additional information in the presence of $i-$band data. To take such an effect into account, we need to compute the conditional mutual information, defined as,
\begin{equation}
I(X,Y|Z)=H(X|Z)- H(X|Y,Z),
\label{eq:cmi}
\end{equation} where I$(X,Y|Z)$ is the mutual information of $X$ and $Y$ given that $Z$ is observed. Following the KSG algorithm, we compute the conditional mutual information to sort wavebands by their importance. We compute I($redshift,waveband|i-band$) and choose the waveband with the highest conditional mutual information as the most important band. The conditional mutual information estimates reveal that the $r-$band is the most important waveband given that $i-$band data are available. We continue computing the conditional mutual information, I($redshift,waveband|swaveband$), where $swaveband$ is the previously selected waveband.
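In practice, kNN-based mutual information estimators in the KSG family are available in standard libraries. The sketch below uses scikit-learn's \texttt{mutual\_info\_regression} on synthetic data (all variable names, noise levels, and the toy "bands" are our own assumptions, and this is not the exact KSG implementation used in the paper) to show that a band tightly correlated with redshift scores higher than a band carrying no redshift information:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 2000
z = rng.uniform(0.0, 5.0, n)              # stand-in for galaxy redshifts

band_r = z + 0.2 * rng.normal(size=n)     # toy band strongly correlated with z
band_x = rng.normal(size=n)               # toy band carrying no redshift information

X = np.column_stack([band_r, band_x])
mi = mutual_info_regression(X, z, n_neighbors=100, random_state=0)

assert mi[0] > mi[1]                      # the informative band ranks first
```

A greedy forward selection (pick the top-scoring band, condition on it, repeat) reproduces the ranking procedure described above.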
Figure \ref{Fig:cmi} shows the non-zero conditional mutual information as we select relevant wavebands. We find that for $i<25$ AB mag galaxies, $r, u, ch2$ and $z$ bands are the bands that provide most of the information about the redshift with decreasing importance from $r-$band to $z-$band. We repeat these analyses for stellar mass measurements. As shown in Figure \ref{fig:mi_mass}, we measure the mutual information between stellar mass and each waveband for the whole sample, and in Figure \ref{fig:mi_bands_z}, we measure the same quantity, I($\log(M_*/M_\odot),waveband|i-band$), in the bins of redshifts. As we expect, the role of short wavelength bands decreases as we approach higher redshifts. We further compute the important wavebands given the availability of $i-$band data in Figure \ref{Fig:cmi_mass}. We find that $ch2$, $Y$, $r$ and $u$ bands are the most relevant bands in the stellar mass measurements with decreasing order of importance. One can constrain the redshift and repeat analysis to find the optimal bands for stellar mass measurements in the desired redshift range given the availability of $i-$band data.
One should note that these conclusions depend on the selection criteria of the sample. This method provides a powerful tool in designing future surveys and quantifying the importance of each waveband. An efficient observation can be conducted by prioritizing important wavebands identified by the information gain-based method.
Moreover, different waveband fluxes can be inter-correlated for a specific sample of galaxies. For instance, the top left panel in Figure \ref{Fig:cmi_mass} shows that IRAC/$ch1$ and $ch2$ provide a comparable amount of information for stellar mass measurements, which suggests that these bands are inter-correlated for our sample with $i<25$ AB mag. Figure \ref{fig:mi_bands} visualizes the mutual information between different bands. A greater value of mutual information indicates that wavebands are more correlated. Inter-correlation between wavebands allows us to predict/simulate fluxes of galaxies in missing bands. In the following, we investigate the possibility of predicting/simulating near-IR UVISTA/$YJH$ fluxes based on H20-like data for a sample of galaxies with $i<25$ AB mag.
\vspace{1cm}
\section{Data Visualization}
\label{sec:Visualize}
Fluxes of galaxies in $N$ wavebands are used to measure the photometric redshifts and physical parameters of galaxies. For example, the H20-like data with $N=7$ bands occupy a 7-dimensional space, where the position of each galaxy is determined by its fluxes in the 7 bands. Therefore, galaxies with similar positions in the $N$-dimensional space are expected to have similar redshifts and physical parameters if $N$ is large enough to fully sample the observed SEDs of galaxies. Similarly, they are expected to have similar fluxes in an $(N+1)^{\rm th}$ waveband. However, visualizing galaxy fluxes in a high-dimensional space (e.g., a 7-dimensional space) is impossible, and thus we use dimensionality reduction techniques to present them in 2D space such that the information of the higher dimensions is maximally preserved. In this work, we use the Uniform Manifold Approximation and Projection \cite[UMAP;][]{McInnes18} technique to visualize our sample in a 2-dimensional space. UMAP is a non-linear dimensionality reduction technique that estimates the topology of the high-dimensional data and uses this information to construct a low-dimensional representation that preserves structure on local scales. It also outperforms other dimensionality reduction algorithms used in the literature \citep{Steinhardt20}, such as t-SNE \citep[t-Distributed Stochastic Neighbor Embedding;][]{vanDerMaaten2008}, since it preserves structures on global scales as well. In a simple sense, UMAP constructs a high-dimensional weighted graph by extending a radius around each data point and connecting points when their radii overlap. This radius varies locally based on the distance to the $n^{\rm th}$ nearest neighbor of each point. The number of nearest neighbors ($n$) is a hyper-parameter of UMAP that should be fixed to construct the high-dimensional graph. Small (large) values of $n$ preserve more local (global) structures.
Once the high-dimensional weighted graph is constructed, UMAP optimizes the layout of a low-dimensional map to be as similar as possible to the high-dimensional graph.
We use the UMAP Python library\footnote{https://github.com/lmcinnes/umap} to map the 7-dimensional flux space of the H20-like data to 2 dimensions, considering 50 nearest neighbors to balance the preservation of local and global structures. We map fluxes rather than magnitudes or colors, since non-detected values cannot be handled properly with the latter. Multi-waveband fluxes contain all the information regarding colors, whereas using colors discards information about fluxes or magnitudes; mapping the fluxes of galaxies is therefore preferable to using colors. Since the fluxes in different bands have fairly similar distributions, no normalization is needed before applying UMAP. In the case of significantly distinct distributions, normalization would be needed to avoid the dominance of a waveband with a larger dynamic range. Figure \ref{fig:H_umap} shows a 2-D visualization of the sample with H20-like bands using the UMAP algorithm. As an example, the mapped data are color-coded by the $H-$band fluxes in $\mu{\rm Jy}$; we note that the H20-like data set does not include $H-$band data. The smooth transition of the $H-$band fluxes in the 2D representation in Figure \ref{fig:H_umap} reassures us that galaxies with similar fluxes in the H20-like bands also have similar $H-$band fluxes.
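The radius-overlap graph construction described above can be sketched in a few lines. This is a toy simplification of UMAP's actual fuzzy-simplicial machinery, run on mock clusters in a 7-band flux space:

```python
import numpy as np

def knn_graph_edges(points, n_neighbors):
    """Toy version of UMAP's first step: a locally adaptive radius graph.

    Each point gets a radius equal to the distance to its n-th nearest
    neighbor; two points are connected if the sum of their radii exceeds
    their separation, i.e. if their neighborhoods overlap.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # distance to the n-th nearest neighbor (index 0 is the point itself)
    radii = np.sort(d, axis=1)[:, n_neighbors]
    overlap = d < radii[:, None] + radii[None, :]
    np.fill_diagonal(overlap, False)
    return {(i, j) for i in range(len(points))
            for j in range(i + 1, len(points)) if overlap[i, j]}

rng = np.random.default_rng(1)
cluster_a = rng.normal(0.0, 0.1, (20, 7))   # tight clump in 7-band flux space
cluster_b = rng.normal(5.0, 0.1, (20, 7))   # well-separated clump
edges = knn_graph_edges(np.vstack([cluster_a, cluster_b]), n_neighbors=5)

# With a small n, no edge should bridge the two distant clusters:
bridging = [e for e in edges if (e[0] < 20) != (e[1] < 20)]
```

The real library then optimizes a 2-D layout whose graph is as similar as possible to this high-dimensional one, which is the step that produces maps like Figure \ref{fig:H_umap}.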
Visualized data in Figure \ref{fig:H_umap} show qualitatively that the $H-$band fluxes are predictable to some extent using H20-like data. To perform a quantitative assessment of how accurately one can predict fluxes in the UVISTA $YJH$ bands given the H20-like observations, we train a Random Forest \cite[][]{Breiman01} model with half of our sample and evaluate the model's performance with the other half. A Random Forest consists of an ensemble of regression trees. The algorithm picks a subsample of the dataset, builds a regression tree based on the subsample, and repeats this procedure numerous times. The final value is the average of the values predicted by all the trees in the forest. Averaging numerous decision trees built on subsampled data makes this algorithm resistant to overfitting. Another advantage of this method is that the inputs do not need to be scaled before being fed into the model. In the following section, we train a Random Forest model and evaluate its accuracy.
\section{Flux predictions }
\label{sec:band_prediction}
We split the sample (described in Section \ref{sec:Data}) \new{randomly} into a training and a test sample. \new{To evaluate whether the training sample is representative, we construct a 2-D projection of H20-like fluxes similar to Figure \ref{fig:H_umap} for both the training and test samples. Figure \ref{fig:umap_train_test} shows the 2-D visualizations color-coded by the properties of galaxies (photometric redshift and stellar mass). We find that the training and test samples share the same properties, so the training sample is representative of the galaxies in the COSMOS field.} With 82,903 galaxies as a training sample, we build a Random Forest model with 100 regression trees to predict the UVISTA $YJH$ bands from the H20-like band fluxes. We use the Python implementation of the algorithm \cite[Scikit-learn;][]{scikit-learn}\footnote{https://scikit-learn.org/stable} with its default parameters to build the model. The true (observed) fluxes in the $YJH$ bands are available in the COSMOS2020 catalog. Using the trained Random Forest model, we then predict the expected fluxes for galaxies not included in the training set, with the results compared in Figure \ref{fig:Euclid_RF}. For each band, we compare the predicted magnitudes ($\rm Mag_{Predicted}$) with the true observed magnitudes ($\rm Mag_{True}$). We find that the Random Forest model predicts unbiased $YJH$ fluxes with high accuracy. The bottom panel in each figure shows the scatter of $\rm Mag_{Predicted}-Mag_{True}$ as a function of the true magnitudes. With a median magnitude discrepancy ($\Delta$) of $\sim 0.01$, we find that the offset is comparable to discrepancies that arise from different methods of photometric data reduction. \cite{Weaver21} found that the median tension between the magnitudes derived from aperture photometry and profile-fitting extraction is $\Delta\sim 0.002$ in the $YJ$ bands and $\Delta\sim 0.02$ in the $H-$band for sources brighter than the 3$\sigma$ depth of each band.
Thus, such small offsets in the Random Forest regressor are within the intrinsic uncertainties of the data reduction techniques. Green solid and dashed lines in the sub-panels of Figure \ref{fig:Euclid_RF} show the median of $\Delta$ and 1$\sigma$ (68\%) scatter, respectively. The scatter in the prediction is $<0.17$ mag for galaxies brighter than 24 AB mag. This shows that $YJH$ near-IR observations of UVISTA can be simulated with acceptable accuracy from the available observations of H20 for a sample of galaxies with $i<25$ AB mag. \new{Our results remain consistent when we rebuild a new Random Forest with different randomly selected training samples.} While our focus in this paper is on the UVISTA/$YJH$ and H20 bands, the method we present is general and directly applicable to other surveys.
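The statistics quoted here, the per-bin median offset $\Delta$ and the 68\% scatter as a function of true magnitude, can be computed as follows. The binning and the mock data are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def offset_and_scatter(mag_true, mag_pred, edges):
    """Per-magnitude-bin median of Delta = pred - true and its 68% half-width."""
    delta = mag_pred - mag_true
    med, sig = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        d = delta[(mag_true >= lo) & (mag_true < hi)]
        med.append(np.median(d))
        lo16, hi84 = np.percentile(d, [16, 84])
        sig.append(0.5 * (hi84 - lo16))      # symmetric 1-sigma estimate
    return np.array(med), np.array(sig)

rng = np.random.default_rng(2)
mag_true = rng.uniform(20.0, 25.0, 100_000)
# mock predictions: unbiased, with scatter growing toward faint magnitudes
mag_pred = mag_true + rng.normal(0.0, 0.02 + 0.03 * (mag_true - 20.0),
                                 mag_true.size)

med, sig = offset_and_scatter(mag_true, mag_pred, np.arange(20.0, 25.5, 1.0))
```

Assigning the dashed green curves of Figure \ref{fig:Euclid_RF} as per-object flux errors, as done later in the template fitting, corresponds to reading `sig` off at each galaxy's magnitude.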
\section{Photometric redshift and stellar mass}
\label{sec:Photz-M}
In the previous section, we showed that given the observations of the H20 survey, the near-IR observations of UVISTA can be constrained to some extent. In other words, observations of the COSMOS field provide valuable information regarding the distribution of galaxies in flux space, even if we do not observe galaxies as extensively, in terms of spectral coverage, as is done in the COSMOS field. When we use a template-fitting code with synthetic templates, we usually do not take this constraint into account. There are two approaches to incorporating this information into the measurements of photometric redshifts or physical parameters. The first is to add a prior on the fluxes in the bands that are not observed in the survey. For instance, when we perform SED fitting using H20-like bands, we can add priors on the $YJH$ bands based on a Random Forest model trained on the population of galaxies from the COSMOS observations. The second is to train a model based on SED-fitting results calculated with a large number of bands. In this case, when we feed our model with H20-like data, it will decide the best value of a parameter based on both the existence of similar observations in the COSMOS field (information from galaxy populations) and the SED-fitting solution for that galaxy.
In this section, we employ the latter approach to train a model to predict the photometric redshifts and the stellar masses of galaxies based on H20-like and H20+UVISTA-like bands. We train a Random Forest model based on a training sample of observed galaxies. The inputs of the model are the H20-like fluxes and the output is either the photometric redshift or the stellar mass computed from SED fitting over the 29 bands available in the COSMOS2020 catalog. We also train a similar model where the inputs are H20+UVISTA-like bands. Figure \ref{fig:z_RF} shows the performance of the trained models on the test sample of 82,904 galaxies. We find that both models recover photometric redshifts and stellar masses with comparable accuracy, the H20+UVISTA-like inputs being slightly more accurate. The normalized median absolute deviation ($\sigma_{\rm NMAD}$) of $\Delta z/(1+z)$ is $\sim 0.03$ for both models, with a $\sim 4\%$ outlier fraction. Outlier galaxies are defined as galaxies with $\Delta z/(1+z)>0.15$. The median absolute deviation of $\log (M_*/M_\odot)$ is $\sim 0.1$ dex for both models. We explain this similar performance using the results of Sections \ref{information} and \ref{sec:band_prediction}. The Random Forest model with H20-like bands captures most of the information contained in the UVISTA bands, as we trained the model with the population of observed COSMOS galaxies. Therefore, it recovers photometric redshifts and stellar masses nearly as accurately as the model which includes the near-IR ($YJH$) observations.
We repeat a similar analysis starting with only $i-$band data and adding the other important bands in the same order as identified in Section \ref{information}. Figure \ref{fig:RF_comp} shows the normalized median absolute deviation of $\Delta z/(1+z)$ and $\log(M_*/M_\odot)$ as a function of the bands used to measure each parameter. We find that the $i-,r-,u-,ch2-$ and $z-$bands are the minimal set of bands needed to reach an acceptable accuracy of $\sigma^{\rm NMAD}_{\Delta z/(1+z)}=0.03$ for the photometric redshifts of $i<25$ AB mag galaxies. For the same sample, the $i-,ch2-,Y-,r-$ and $u-$bands are the optimal bands for stellar mass measurements, reaching an accuracy of $\sigma^{\rm NMAD}_{\log(M_*/M_\odot)}=0.15$ dex.
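For reference, here is a sketch of the accuracy metrics used in this section, assuming the standard photo-z definitions (the paper does not spell out its exact conventions), applied to mock data with a 3\% Gaussian core and 4\% catastrophic failures:

```python
import numpy as np

def sigma_nmad(z_true, z_phot):
    """Normalized median absolute deviation of dz/(1+z)."""
    dz = (z_phot - z_true) / (1.0 + z_true)
    return 1.4826 * np.median(np.abs(dz - np.median(dz)))

def outlier_fraction(z_true, z_phot, threshold=0.15):
    """Fraction of catastrophic outliers, |dz|/(1+z) > threshold."""
    dz = np.abs(z_phot - z_true) / (1.0 + z_true)
    return float(np.mean(dz > threshold))

rng = np.random.default_rng(3)
z_true = rng.uniform(0.1, 3.0, 100_000)
# mock photo-z: Gaussian core of width 0.03*(1+z), plus 4% random failures
z_phot = z_true + 0.03 * (1.0 + z_true) * rng.normal(size=z_true.size)
bad = rng.random(z_true.size) < 0.04
z_phot[bad] = rng.uniform(0.1, 3.0, bad.sum())
```

Because the NMAD is a robust statistic, the catastrophic failures barely move $\sigma_{\rm NMAD}$ away from the core width of 0.03, while they dominate the outlier fraction; this is why the two numbers are quoted separately.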
\subsection{Synthetic templates}
In the following, we use UMAP to visualize the photometry of synthetic SED models commonly used in template-fitting procedures. We build a set of theoretical templates using the 2016 version of the \cite{Bruzual03} library, assuming a \cite{Chabrier03} initial mass function. Star formation histories are modeled with an exponentially declining function (${\rm SFR} \propto e^{-t/\tau}$), where $\tau$ is the star formation timescale. Dust attenuation is applied using the \cite{Calzetti} law, and solar stellar metallicity is assumed for all templates. We build $\sim750,000$ theoretical templates assuming $\tau\in(0.1,10)\ \rm Gyr$, $t\in(0.1,13.7)\ \rm Gyr$, $A_V\in(0,2)\ \rm mag$ and $z\in(0,5.5)$, where $t$ and $A_V$ are the stellar age and the extinction in the visual band, respectively. We then calculate the synthetic photometry in both the H20-like and H20+UVISTA-like bands by applying the corresponding filter response function.
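The last step, applying a filter response function to a template, amounts to a bandpass-weighted average of the SED. A schematic sketch with a hypothetical power-law template and a top-hat stand-in for a real response curve (real pipelines additionally handle unit conversions and template redshifting):

```python
import numpy as np

def synthetic_flux(wavelength, sed, filter_wl, filter_response):
    """Bandpass-averaged flux of a template through a filter curve.

    A schematic photon-weighted average on a uniform wavelength grid.
    """
    t = np.interp(wavelength, filter_wl, filter_response, left=0.0, right=0.0)
    w = t * wavelength                    # photon-counting weight
    return np.sum(sed * w) / np.sum(w)

wl = np.linspace(3000.0, 20000.0, 20000)   # uniform grid in Angstrom
sed = (wl / 1e4) ** -1.0                   # hypothetical power-law template
# top-hat stand-in for an H-band-like response near 1.6 micron
resp_wl = np.array([14500.0, 14600.0, 17800.0, 17900.0])
resp = np.array([0.0, 1.0, 1.0, 0.0])

f_h = synthetic_flux(wl, sed, resp_wl, resp)
```

Repeating this for every template and every filter of a survey yields the synthetic photometry that is then mapped into the learned UMAP space.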
Having learned the topology of the fluxes in the H20-like bands for real observed galaxies in the COSMOS2020 catalog (Figure \ref{fig:H_umap}), we can transform the H20-like band fluxes of the synthetic photometry into the learned space. Figure \ref{fig:H_umap_synthetic} shows the 2-D visualization of the theoretical templates with H20-like bands in that learned space. As an example, data points in the reduced dimension are color-coded by their synthetic $H-$band fluxes in $\mu{\rm Jy}$. Comparing the theoretical templates with the observed data shown in Figure \ref{fig:H_umap} reveals that the model galaxies suffer from degeneracies. In this specific example, templates with similar H20-like fluxes have more diverse $H-$band fluxes than the real observations, which can produce degenerate results when template fitting is performed based on H20-like bands alone. Adding the information of the COSMOS2020 observations as a prior imposes a strong correlation between the observed and missing bands and makes the theoretical templates less degenerate, as shown in Figure \ref{fig:H_umap}. For example, the dark blue arc on the left side of Figure \ref{fig:H_umap_synthetic} does not match its observational counterpart. In other words, the synthetic templates predict an $H-$band flux of $\sim 0.1$ $\mu{\rm Jy}$ for galaxies in that vicinity (i.e., the dark blue arc), but the real observations show that they have, in fact, an $H-$band flux of $\sim 10$ $\mu{\rm Jy}$. This shows that information already present in previous observations can add significant value to template-fitting analyses.
If one adds a predicted band to the template-fitting procedure, the errors should be assigned based on the $1\sigma$ scatter of the predicted flux (dashed green lines in Figure \ref{fig:Euclid_RF}). It is particularly important to properly take into account the systematic scatter of the predicted bands in template fitting and to ensure that the predicted bands are not over-weighted in the best-template selection. \new{In the following section, we perform a simple template fitting to evaluate the value added by the predicted fluxes.} However, it is worth highlighting that a better approach would be to use a machine learning model trained on the template-fitting results of a galaxy population with well-constrained SEDs, such as COSMOS2020 (Figure \ref{fig:z_RF}).
\subsection{Template-fitting}
\label{sec:Template-fitting}
\new{We perform template fitting for three cases, using 1) H20-like bands, 2) H20-like+predicted $YJH$ bands, and 3) H20+UVISTA-like bands. For this purpose, we split the test sample used in Section \ref{sec:band_prediction} in half to obtain a validation set as well as a new test sample. The validation sample is used to measure the $1\sigma$ scatter of the predicted flux (similar to the dashed green lines in Figure \ref{fig:Euclid_RF}). We assign errors to the predicted fluxes of the new test sample based on the $1\sigma$ scatter of the validation sample at a given magnitude. We use the template-fitting code \lephare{} with the same configuration as \cite{Ilbert15}. This configuration differs from the templates used for the COSMOS2020 redshift measurements: in the COSMOS2020 catalog, the photometric redshifts are measured based on the templates employed by \cite{Ilbert13}, followed by stellar masses measured in the same manner as \cite{Ilbert15} at fixed photometric redshifts, whereas here we fit both photometric redshifts and stellar masses simultaneously. Figure \ref{fig:z_SED} presents the results of the template fitting for these three cases. We find that the lack of observed near-IR fluxes in template fitting increases the $\sigma_{\rm NMAD}$ and outlier fraction by 50\% and 80\%, respectively. We also find that adding predicted fluxes improves the $\sigma_{\rm NMAD}$ and outlier fraction by 10\% and 25\%, respectively. The predicted fluxes also improve the scatter of the stellar mass measurements by 7\%.}
\new{The improvement in template-fitting results from adding predicted fluxes suggests that observationally driven priors on near-IR fluxes can help reduce both the scatter and the outlier fraction of SED-derived properties. Moreover, we find that adding observed near-IR data significantly ($\sim50\%$) improves the template-fitting results, but this is not the case for the Random Forest model shown in Figure \ref{fig:RF_comp} ($\sim10\%$ improvement). This suggests that machine learning models are able to fully incorporate the information gathered from extensive surveys and avoid the degeneracies in template-fitting parameters that are inevitable when only a few bands are available.}
\section{Discussion and Summary}
\label{sec:Discussion_Summary}
In this paper, we present an information gain-based method to quantify the importance of wavebands and find the optimal set of bands needed to constrain the photometric redshifts and physical properties of galaxies. To demonstrate the application of this method, we build a subsample of galaxies from the COSMOS2020 catalog with waveband coverage ($ugrizYJH$ and IRAC/$ch1,ch2$) similar to what will be available in the \Euclid{} deep fields. For a sample of galaxies with $i<25$ AB mag, we find that given the availability of $i$-band fluxes, the $r, u$, IRAC/$ch2$ and $z$ bands provide most of the information for measuring the photometric redshifts, with importance decreasing from the $r-$band to the $z-$band. We also find that for the same sample, the IRAC/$ch2$, $Y$, $r$ and $u$ bands are the most relevant bands for stellar mass measurements, in decreasing order of importance. We note that these results should be remeasured for any new sample with different selection criteria. Moreover, we present the relative importance of the wavebands for stellar mass measurements in bins of redshift, since their importance depends on the redshift. We also investigate the inter-correlation between the fluxes in different wavebands and use a machine learning technique to predict/simulate fluxes missing from a survey. As a proof of concept, we apply the method trained on the COSMOS2020 data to predict UVISTA near-IR observations based on H20-like survey data, which include $ugriz$ and Spitzer/IRAC observations. We find that the near-IR bands ($YJH$) can be predicted/simulated from ground-based ($ugriz$) and mid-IR Spitzer (IRAC/$ch1,ch2$) observations with an accuracy of $1\sigma$ mag scatter $\lesssim 0.2$ for galaxies brighter than $24$ AB mag in the near-IR bands. We demonstrate that theoretical templates lack this valuable information, which has already been observed through numerous bands in the COSMOS field.
We conclude that degeneracies in template-fitting can be alleviated if one trains a model based on template-fitting solutions for observed galaxies with extensive observations instead of using conventional SED fitting. We show that a model trained on H20-like bands has comparable accuracy to a model which is trained over H20+UVISTA-like bands, given that the model is trained over the observed galaxy population with a vast number of wavebands.
\cite{Masters15} mapped the high-dimensional color space of COSMOS galaxies in the UVISTA bands using the self-organizing map (SOM) technique \citep{Kohonen1982} and proposed a spectroscopic survey to fully cover regions of the reduced color space with no spectroscopic redshifts. This survey, C3R2, was awarded 44.5 nights on the Keck telescope to map the color-redshift relation necessary for weak lensing cosmology \citep{Masters17,Masters19}. Later on, \cite{Hemmati19} used a SOM to map the color space of theoretical models and used the reduced map as a fast template-fitting technique. \new{In the present work, we use a new technique, UMAP, to create a 2-dimensional representation of a high-dimensional flux distribution. This technique can also be utilized to map the color space of galaxies and study their physical properties (similar to Figure \ref{fig:umap_train_test}), providing an opportunity for further analyses in the future.}
Acquiring data for galaxy surveys over wide areas and a range of wavelengths with a large number of wavebands is costly. A new method based on machine learning algorithms is presented in this paper to supplement the present and future surveys in their missing bands with information from previous extensive surveys (e.g. COSMOS). It can be used to optimize observations of future surveys, as well as to predict photometry of observatories that have ceased operation \citep{Dobbels20}.
\section*{Acknowledgments}
\new{We thank the anonymous referee for providing insightful comments and suggestions that improved the quality of this work.} NC and AC acknowledge support from NASA ADAP 80NSSC20K0437. ID has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 896225.
\bibliography{Missing_bands} |
Title:
Indication of a Local Source of Ultra-High-Energy Cosmic Rays in the Northern Hemisphere |
Abstract: The Pierre Auger Observatory (PAO) and Telescope Array (TA) collaborations
report significant differences in the observed energy spectra of
ultra-high-energy cosmic rays (UHECRs) above 30 EeV. In this work, we present a
joint fit of TA and PAO data using the rigidity-dependent maximum energy model,
and including full marginalization over all relevant parameters. We show that
the presence of a local astrophysical source in the Northern Hemisphere, which
is only visible by the TA experiment, can reconcile PAO and TA measurements up
to the highest energies. We demonstrate that the presence of that local source
is favored at the 5.6$\sigma$ level compared to the scenario where both
experiments observe the same UHECR flux from a cosmological source
distribution. We also quantify that the astrophysical explanation can describe
the current data better than a scenario where the differences in the
observations are explained by experimental systematics (i.e., energy-dependent
shifts). Having tested different mass compositions emitted from the local
source, we conclude that the data are best described by a source lying at a
distance of about 14 Mpc that emits cosmic rays dominated by the silicon mass
group; we also discuss possible source candidates.
| https://export.arxiv.org/pdf/2208.12274 |
\title{Indication of a Local Source of Ultra-High-Energy Cosmic Rays in the Northern Hemisphere}
\author{Pavlo Plotko
\orcidlink{0000-0001-6975-5186}}
\affiliation{Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738 Zeuthen, Germany}
\author{Arjen van Vliet
\orcidlink{0000-0003-2827-3361}}
\affiliation{Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738 Zeuthen, Germany}
\affiliation{Department of Physics, Khalifa University, P.O. Box 127788, Abu Dhabi, United Arab Emirates}
\author{Xavier Rodrigues
\orcidlink{0000-0001-9001-3937}}
\affiliation{Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738 Zeuthen, Germany}
\affiliation{Astronomical Institute, Fakultät für Physik und Astronomie, Ruhr-Universität Bochum, 44780 Bochum, Germany}
\author{Walter Winter
\orcidlink{0000-0001-7062-0289}}
\affiliation{Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738 Zeuthen, Germany}
\keywords{astroparticle physics --- cosmic rays --- neutrinos --- methods: numerical}
\section{Introduction}
\label{sec:intro}
Ultra-high-energy cosmic rays (UHECRs) are the most energetic particles ever detected. These atomic nuclei with energies above $10^{18}$~eV are measured with increasing precision by the Pierre Auger Observatory~\citep[henceforth PAO,][]{PierreAuger:2015eyc}, located in Argentina, and the Telescope Array~\citep[TA,][]{TelescopeArray:2012uws,Tokuno:2012mi}, located in the state of Utah, USA.
Both PAO and TA employ a hybrid detection technique to detect the extensive air showers triggered in the atmosphere by the UHECRs: A surface detector array measures the charged secondaries that reach the ground level, while fluorescence detector stations measure the development of the air showers in the atmosphere. Located in the Southern Hemisphere, PAO observes the sky below a declination of 24.8$^\circ$~\citep{PhysRevD.102.062005}, while TA, located in the Northern Hemisphere, observes the sky above -16.0$^\circ$~\citep{Ivanov:20198M}. There are therefore large portions of the Northern and Southern Hemispheres that are observed exclusively by TA and PAO, respectively, but there is also a common declination band, $-16.0^\circ<\delta<24.8^\circ$, where the sky is observed by both experiments.
As quantified recently by a spectrum working group from PAO and TA~\citep{Ivanov:2017wH, Deligny:2019SC, Tsunesada:2021qO}, there are differences in the energy spectrum of the UHECRs as measured by both experiments, as shown in \Fig\ref{fig:comparison}, adapted from that report. In the left plot, we see that using the energy scales native to both experiments there is a difference in the overall flux normalization, as well as in the spectral shape at energies above $10^{19.5}$~eV, or about 30~EeV. Although \Fig\ref{fig:comparison} shows full-sky data, discrepancies are also present in the common declination band, albeit less significantly due to higher statistical uncertainties.
The total systematic uncertainty in the energy scale of PAO is estimated at 14\%~\citep{PhysRevD.102.062005}, while for TA it is 21\%~\citep{Ivanov:20198M}. Shifting the energy scale of the experiments within these uncertainty ranges leads to a change in both the shape and normalization of the UHECR spectra, since they are plotted here as $E^3J(E)$. As shown by~\citet{Deligny:2019SC}, shifting the energy scales of the experiments by a constant value can mitigate the spectral differences below 30~EeV, as we can see in the right plot of \Fig\ref{fig:comparison}. However, above that energy the spectra become again discrepant, with TA data showing an excess in flux compared to PAO.
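The effect of a constant energy-scale shift on an $E^3 J(E)$ plot can be checked with a toy spectrum: for a pure power law $J \propto E^{-\gamma}$, rescaling $E \to sE$ multiplies $E^3 J$ by the constant $s^{3-\gamma}$, a pure normalization change, while near a cutoff the multiplier becomes energy dependent, so the spectral shape changes as well. This is only an illustration, not the experiments' actual forward-folding:

```python
import numpy as np

def e3j(e, gamma=2.6, e_cut=5e19):
    """Toy UHECR spectrum J ~ E^-gamma with an exponential cutoff, as E^3 J(E)."""
    return e ** (3.0 - gamma) * np.exp(-e / e_cut)

s = 1.1                                 # a +10% constant energy-scale shift
e_low = np.logspace(18.5, 19.2, 10)     # power-law regime, E << cutoff
e_high = np.logspace(19.5, 20.0, 10)    # near the cutoff

ratio_low = e3j(s * e_low) / e3j(e_low)     # nearly constant: ~ s**(3 - gamma)
ratio_high = e3j(s * e_high) / e3j(e_high)  # energy dependent: shape changes
```

This is why a constant shift can reconcile the spectra below 30~EeV while leaving a residual discrepancy near the suppression region.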
For this discrepancy to be explained solely by systematic effects, energy-dependent shifts must be introduced in the energy scales of both experiments. \citet{Tsunesada:2021qO} have recently quantified the expressions for these shifts that lead to the best agreement between the data sets in the common declination band. At the same time, while the PAO spectrum is independent of declination, the TA spectrum seems to present different spectral features when considering events from the Northern Hemisphere compared to the common declination band (see also~\citet{Abbasi:2018ygn}). That means that the same energy-dependent shift in principle may not be sufficient to explain the difference in the entire sky~\citep{Ivanov:2017wH, Deligny:2019SC}. This suggests an astrophysical origin of this discrepancy, namely a local source whose flux is observed by TA at the highest energies.
The existence of a nearby UHECR source or group of sources is also supported by previous analyses on the arrival directions of the TA cosmic rays. For example, using five years of data,~\citet{Abbasi_2014} reported an intermediate-scale anisotropy at 5.1$\sigma$ Li-Ma significance in the arrival directions of UHECRs above 57~EeV. More recently, an update on that analysis with twice the exposure time has revealed that this hotspot is now significant at the 3.3$\sigma$ level~\citep{Kim:2021Aj}. Additionally, TA has also confirmed a new excess of events with energies above 25~EeV from a different direction~\citep{TelescopeArray:2021dfb}.
In this study, we investigate the hypothesis that a local source in the Northern Hemisphere is at least partially responsible for the discrepancy between the PAO and TA spectra.
Using a cutting-edge numerical model, we simulate the propagation of UHECRs from a hypothetical local source observed only by TA, as well as a cosmological distribution of sources observed by both experiments. We then perform a joint fit of the results to both TA and PAO data, considering both spectrum and composition observables and the relevant multi-parameter correlations. We then use the best-fit results to constrain the source properties.
The paper is structured as follows: in \Sec\ref{sec:methods} we briefly describe the numerical method and specify the fitting procedure used to constrain the model parameters. In \Sec\ref{sec:results} we report the results of the fit and discuss their interpretation. In \Sec\ref{sec:conclusion} we summarize our conclusions.
\section{Methods}
\label{sec:methods}
\subsection{Source parametrization}
\label{subse:source-prop_model}
We first simulate the UHECR emission from a cosmological population of UHECR sources (\textit{cosmo}). The sources are considered to be homogeneously distributed and have an emissivity in cosmic rays \lumcosmo, defined as the emitted cosmic-ray luminosity per cosmic volume, and obtained by integrating above $E>10^9$~GeV. Each emitted isotope of mass $A$ contributes a fraction $f_A$ of that emissivity. The set of all $f_A$ values defines the emitted composition, which we assume to also be the same for all sources. The contribution of the cosmological source distribution to the observed flux spectrum of each element $A$, $J_{A}^\cosmo$, can then be written as~\citep{Heinze:2019,Aab:2017,Batista:2019}:
\begin{equation}
J_A^\cosmo{}(E) \,=\ J_0^\cosmo~f_A\, f_\mathrm{cut}(E)~n(z,\text{\mcosmo})\left(\frac{E}{10^9~\text{GeV}} \right)^{-\text{\gammacosmo}},
\label{eq:emissivity}
\end{equation}
where $n\sim (1 + z)^\text{\mcosmo}$ is the cosmological source density (the index \mcosmo\ characterizes the evolution of the source density with redshift $z$), \gammacosmo~is the spectral index of the emitted cosmic rays, $J_0^\cosmo$ is the normalization of the spectra that corresponds to the total emissivity, and the factor $f_\mathrm{cut}$ introduces an exponential cutoff at the energy corresponding to the maximum rigidity $\text{\Rmaxcosmo} = E_\mathrm{max}/Z_\text{A}$:
\begin{align}
f_\text{cut}(E) = \begin{cases}
1
&, E < Z_\text{A} \text{\Rmaxcosmo} \\
\exp\left(1 - \frac{E}{Z_\text{A} \text{\Rmaxcosmo}} \right)
&, E > Z_\text{A} \text{\Rmaxcosmo}.
\end{cases}
\end{align}
The maximum rigidity \Rmaxcosmo~of all emitted isotopes is the same, as is typical of astrophysical sources optically thin to nuclear disintegration~\citep{Kotera_2015,Rodrigues:2017fmu,Biehl:2017zlw}.
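Equation~(\ref{eq:emissivity}) and the cutoff function translate directly into code. The following sketch uses schematic units (energies in GeV, rigidities in GV) and an arbitrary normalization, with the redshift-evolution factor folded in as a constant for a single shell:

```python
import math

def f_cut(energy, z_charge, r_max):
    """Exponential suppression above the maximum rigidity: E_max = Z * R_max."""
    e_max = z_charge * r_max
    return 1.0 if energy < e_max else math.exp(1.0 - energy / e_max)

def injected_flux(energy, j0, f_a, gamma, z_charge, r_max, n_evol=1.0):
    """Schematic J_A(E): normalization * fraction * cutoff * evolution * power law."""
    return j0 * f_a * f_cut(energy, z_charge, r_max) * n_evol \
        * (energy / 1e9) ** (-gamma)

# Example: silicon-like injection (Z = 14) with R_max = 10^9.5 GV
r_max = 10 ** 9.5
low = f_cut(1e9, 14, r_max)            # well below the cutoff: no suppression
high = f_cut(2 * 14 * r_max, 14, r_max)  # at twice E_max: suppressed by exp(-1)
```

Note that the two branches of $f_\text{cut}$ match continuously at $E = Z_\text{A}\,$\Rmaxcosmo, where $\exp(1 - E/E_\mathrm{max}) = 1$.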
Regarding the injection composition (i.e. the chemical composition of the cosmic rays as they are emitted by the sources), we consider a combination of five elements, each representative of a distinct mass group up to iron-56: $^1$H, $^4$He, $^{14}$N, $^{28}$Si and $^{56}$Fe. In terms of the propagation simulation, we start with a combination of these five isotopes at the source (whose fractions are given by $f_A$ as discussed above). As they interact with the cosmic photon backgrounds, they then produce nuclear cascades of hundreds of secondary isotopes with intermediate masses, all of which are taken into account.
We also assume the existence of a single local source in the Northern Hemisphere, which can be observed by TA but not by PAO. For simplicity, we explore scenarios where the local source emits only one of the five mass groups given above, which is done by propagating the respective representative isotope. Therefore, the emission from this source can be fully characterized by a maximum rigidity \Rmaxlocal, power-law index \gammalocal, emission luminosity \lumlocal, and one emitted cosmic-ray mass group. These parameters will be varied independently from those describing the cosmological source distribution. Finally, the comoving distance to the local source\footnote{We will generally refer to the comoving distance to the local source, while keeping in mind that the cosmic rays generally travel a longer distance due to magnetic field deflections, which cannot be included in a one-dimensional simulation. This distinction becomes more relevant the larger the value of \Dlocal.} will affect the UHECR spectrum observed at Earth, similar to the evolution parameter \mcosmo~in the case of the cosmological source distribution.
Overall, the cosmological source distribution can be fully characterized by eight parameters $\boldsymbol\lambda_\cosmo$, and the local source by five parameters $\boldsymbol\lambda_\local$:
\begin{align}
\begin{split}
\boldsymbol\lambda_\cosmo \,=\,
&( \text{\gammacosmo}, \text{\Rmaxcosmo}, \text{\mcosmo},\text{\lumcosmo},\boldsymbol{f}^\cosmo_{\text{A}}),\\
\boldsymbol\lambda_\local \,=\,
&( \text{\gammalocal}, \text{\Rmaxlocal}, \text{\Dlocal}, \text{\lumlocal}, A_\local).
\label{eq:parameter_sets}
\end{split}
\end{align}
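The two parameter sets above can be represented, purely for illustration, by simple containers (a Python sketch; field names and example values are hypothetical, not those of the actual fitting code):

```python
from dataclasses import dataclass, field

# Hypothetical containers mirroring the two parameter sets; the injection
# fractions f_A sum to unity, so they contribute four free parameters.
@dataclass
class CosmoSourceParams:          # eight free parameters in total
    gamma: float                  # spectral index
    r_max: float                  # maximum rigidity [EV]
    m: float                      # cosmological evolution index
    luminosity: float             # emissivity normalization
    f_A: dict = field(default_factory=dict)  # injection fractions

@dataclass
class LocalSourceParams:          # five free parameters
    gamma: float
    r_max: float                  # maximum rigidity [EV]
    distance: float               # comoving distance [Mpc]
    luminosity: float             # cosmic-ray luminosity [erg/s]
    mass_group: str               # single injected isotope, e.g. "Si-28"

# Illustrative values only:
cosmo = CosmoSourceParams(-1.0, 1.6, 3.0, 1.0,
                          {"H": 0.0, "He": 0.85, "N": 0.15, "Si": 0.0, "Fe": 0.0})
local = LocalSourceParams(-1.0, 1.4, 13.9, 1e42, "Si-28")
assert abs(sum(cosmo.f_A.values()) - 1.0) < 1e-9
```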
\subsection{Propagation model}
\label{subsec:model}
Once we characterize the emitted UHECR spectra from the cosmological source distribution and the local source with two sets of parameters $\boldsymbol\lambda_\cosmo$ and $\boldsymbol\lambda_\local$ (Eq.~\ref{eq:parameter_sets}), we then inject those cosmic rays into a numerical simulation to calculate their interactions as they propagate toward Earth. For this, we use the open-source software \prince~\citep{Heinze:2019}, which numerically solves the transport equations of the cosmic-ray spectra. We use \textsc{Talys}~\citep{Koning:2007} as the nuclear interaction model, and we adopt the Extragalactic Background Light (EBL) model by~\citet{Gilmore:2012}.
In the case of the cosmological source distribution, we consider the emission from a continuous distribution of sources up to $z=1$, since a source at any higher redshift will lie outside the cosmic-ray horizon for the energy range we focus on. In each simulation, we assume a certain value of the evolution parameter \mcosmo, which defines the strength of the cosmological evolution of the source distribution, as defined in Eq.~(\ref{eq:emissivity}).
We then add the contribution from the local source. In the \prince~framework, a single source lying at a comoving distance \Dlocal\ is equivalent to a source population whose distribution is described by a delta function, $n \propto \delta(D-\text{\Dlocal})$. Because the cosmological source distribution and the local source are independent, we simulate the propagation of the two separately. We then add the respective contributions at Earth to obtain the prediction for the total flux observed by TA, while for the PAO observables only the contribution from the cosmological source distribution is considered.
After simulating the cosmic ray propagation according to the above procedure, we obtain the energy spectrum for each individual isotope at the top of the atmosphere, as well as values of $\langle \ln{A} \rangle$ and $\sigma^2_{\ln{A}}$ for each energy of the numerical grid. We then compute the mean of the distribution of the depth of the shower maximum, $\langle \xmax \rangle$, as well as its second moment, $\sigma(\xmax)$, following the procedure by~\citet{Heinze:2019}. Throughout this work, we will discuss the results for three air shower models separately: \textsc{Sibyll~2.3c}~\citep{riehn:2016hO},
\textsc{Epos-LHC}~\citep{Pierog:2015}, and \textsc{QGSJET-II-04}~\citep{Ostapchenko:2011}.
These predictions will then be compared with data from both the TA and PAO experiments, resulting in a joint fit.
\subsection{Joint fit of PAO and TA data}
\label{subsec:fit}
We aim to describe the spectrum and composition of UHECRs above ${E_\text{min} = 6\times10^9 ~ \text{GeV}}$, originating from the entire field of view of TA and PAO. The data from PAO consist of spectrum measurements distributed over fourteen energy bins~\citep{Verzi:2019AO}, nine data points describing $\langle X_{\mathrm{max}}\rangle$, and nine more for $\sigma(X_{\mathrm{max}})$~\citep{Yushkov:2019J8}. The TA spectrum above our threshold is described by fifteen data points and one upper limit~\citep{Ivanov:20198M}, while the data on $\langle X_{\mathrm{max}}\rangle$ and $\sigma(X_{\mathrm{max}})$ consist of five data points each\footnote{While ideally we could draw additional information by making use of data subsets from the Northern Hemisphere, Southern Hemisphere, and the common declination band seen by both experiments, these data sets are currently not publicly available.}~\citep{Abbasi2018}.
Regarding the composition observables, i.e. $\langle X_{\mathrm{max}}\rangle$ and $\sigma(X_{\mathrm{max}})$, we adopt the values published by the two experiments independently. As argued by~\citet{deSouza:2017ZU}, a detailed comparison should take into account the different detector acceptances and resolutions, as well as the differences between the analysis techniques of the two groups. However, the tools and data for such a detailed treatment have not thus far been disclosed. Furthermore, as shown in \Sec\ref{sec:results} and previously by~\citet{Heinze:2019}, the three air shower models considered in this work lead to different predictions on the observed composition, which introduces an element of uncertainty that surpasses the uncertainty from these more detailed effects. We therefore compare the composition data at face value and leave a more precise analysis for future work.
As described in \Sec\ref{subse:source-prop_model}, we hypothesize that these data are explained by a cosmological source distribution, characterized by eight parameters $\boldsymbol\lambda_\cosmo$, and a local source in the Northern Hemisphere, characterized by five parameters $\boldsymbol\lambda_\local$. To account for the systematic uncertainties of TA and PAO, we further introduce six nuisance parameters $\boldsymbol\delta$: $\delta_E^\TA$ and $\delta_E^\PAO$ characterize the uncertainties in the energy scales of the TA and PAO spectra, respectively (see \Sec\ref{sec:intro}). Furthermore, we have $\delta^\TA_{\langle X_\mathrm{max}\rangle }$ and $\delta^\PAO_{\langle X_\mathrm{max}\rangle }$, which define systematic shifts in $\langle X_{\mathrm{max}}\rangle$, and $\delta^\TA_{\sigma(X_\mathrm{max})}$ and $\delta^\PAO_{\sigma(X_\mathrm{max})}$, which define systematic shifts in $\sigma(X_{\mathrm{max}})$.
As discussed in \Sec\ref{sec:intro}, different functional forms have been proposed for the energy shifts \deltaEPAO and \deltaETA. In the main part of this work we will limit ourselves to the case of energy-independent energy shifts, \deltaEPAO$=\mathrm{const.}$ and \deltaETA$=\mathrm{const.}$, given as a percentage of each energy bin. We explore values of $\delta$ within the uncertainty range of either experiment ($\pm$14\% for $\delta_E^\PAO$ and $\pm$21\% for $\delta_E^\TA$ as described in \Sec\ref{sec:intro}). In \App\ref{app:energy-dependent_shift} we will consider, additionally, the more complex scenario where the experiments have energy-dependent systematic energy shifts, and we show that even in that scenario a local source is still compatible with the data. Interestingly, however, our baseline model, i.e. energy-independent systematics and a local source in the Northern Sky, actually provides the best joint fit to the data of all the scenarios tested, including those with energy-dependent systematics.
Regarding the composition observables, there is no precedent in the literature for the treatment of their systematic shifts in a joint fit, because to date no such joint fit has been performed. We assume that $\delta_{\xmax}$ is given as a percentage of the systematic uncertainty of each $\xmax$ data point, and likewise $\delta_{\sigma(\xmax)}$ as a percentage of each $\sigma(\xmax)$ data point. We treat the $\boldsymbol\delta$ variables as nuisance parameters, independent from each other and from the source parameters $\boldsymbol\lambda_\cosmo$ and $\boldsymbol\lambda_\local$. We search the range $\pm100\%$, which represents the boundaries of the systematic uncertainties carried by the data from either experiment. Like for the energy shifts, we consider only energy-independent values of $\delta_{\xmax}=\mathrm{const.}$ and $\delta_{\sigma(\xmax)}=\mathrm{const.}$ in the main part of this text, while in \App\ref{app:energy-dependent_shift} that condition is relaxed.
The goodness of fit relative to the PAO and TA data is calculated by means of a $\chi^2$ test:
\begin{align}
\begin{split}
\chi^2_\PAO \,=\,&
\chi^2_\PAO(
\boldsymbol{\lambda}_\cosmo,\,
\delta^\PAO_E,\,
\delta^\PAO_{\langle X_\mathrm{max}\rangle },\,
\delta^\PAO_{\sigma(X_\mathrm{max})}), \\
\chi^2_\TA \,=\,&
\chi^2_\TA(
\boldsymbol{\lambda}_\cosmo,\,
\boldsymbol{\lambda}_\local,\,
\delta^\TA_E,\,
\delta^\TA_{\langle X_\mathrm{max}\rangle},\,
\delta^\TA_{\sigma(X_\mathrm{max})} ),
\end{split}
\end{align}
where we assume that the cosmological source distribution characterized by $\boldsymbol\lambda_\cosmo$ is observed by both experiments.
Finally, to compute the goodness of the joint fit, we combine the $\chi^2$ values from both experiments with the systematic uncertainties:\footnote{Here we follow \citet{Huber:2004ka, Huber:2007ji}, where similar methods have been successfully used to combine the data from multiple neutrino oscillation experiments.}
\begin{align}
\chi^2_\mathrm{global}(\boldsymbol\lambda_\cosmo,&\, \boldsymbol\lambda_\local,
\boldsymbol\delta) = \\
&\chi_\PAO^{2} + \left(\frac{\delta_E^\PAO}{\sigma_{\text{E}}^\PAO}\right)^2 \nonumber\\
&+ \left( \frac{\delta_{\langle X_\mathrm{max}\rangle} ^\PAO}{100\%}\right)^2+ \left(\frac{\delta_{\sigma(X_\mathrm{max})} ^\PAO}{100\%}\right)^2 \nonumber\\
&+ \chi_\TA^{2} + \left(\frac{\delta_E^\TA}{\sigma_{\text{E}}^\TA}\right)^2 \nonumber\\
&+ \left(\frac{\delta_{\langle X_\mathrm{max}\rangle} ^\TA}{100\%} \right)^2+ \left( \frac{\delta_{\sigma(X_\mathrm{max})} ^\TA}{100\%}\right)^2. \nonumber
\end{align}
As we can see, this $\chi^2$ value takes into account both the energy spectrum and the composition observables with energy above $E_\mathrm{min} = 6\times10^{9} \text{ GeV}$ from both experiments.
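The pull-term combination above can be sketched compactly in Python (a schematic, not the actual analysis code; the $\chi^2$ inputs and shift values below are placeholders, with the energy-scale pulls normalized to the quoted $\pm14\%$ and $\pm21\%$ uncertainties):

```python
# Minimal sketch of the pull-term combination of the two experiments'
# chi-squares; chi2_pao and chi2_ta stand in for the full spectrum +
# composition chi-square of each experiment.
def chi2_global(chi2_pao, chi2_ta, shifts, sigmas):
    """Add one Gaussian pull term (shift / sigma)^2 per nuisance parameter.
    shifts and sigmas are dicts with matching keys."""
    penalty = sum((shifts[k] / sigmas[k]) ** 2 for k in shifts)
    return chi2_pao + chi2_ta + penalty

# Illustrative values only:
shifts = {"E_PAO": 7.0, "E_TA": -10.0, "Xmax_PAO": 30.0, "Xmax_TA": 20.0,
          "sXmax_PAO": 0.0, "sXmax_TA": 0.0}
sigmas = {"E_PAO": 14.0, "E_TA": 21.0, "Xmax_PAO": 100.0, "Xmax_TA": 100.0,
          "sXmax_PAO": 100.0, "sXmax_TA": 100.0}
print(chi2_global(50.0, 40.0, shifts, sigmas))
```

The penalty terms prevent the fit from absorbing arbitrary data/model tension into the nuisance parameters: each shift costs $\chi^2$ in proportion to how far it strays from its quoted systematic uncertainty.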
A simultaneous scan of all parameters $\boldsymbol\lambda_\cosmo$, $\boldsymbol\lambda_\local$ and $\boldsymbol\delta$ would be computationally expensive, so instead we divide it into two steps. First, we perform a scan of $\boldsymbol\lambda_\cosmo$, assuming only a cosmological source distribution. We consider the spectral and composition data from PAO in our entire energy range, as well as TA data below 25~EeV. This allows us to constrain the source distribution parameters $\boldsymbol\lambda_\cosmo$, as well as the systematic variables $\boldsymbol\delta$ of both PAO and TA. We scan a three-dimensional parameter grid in \gammacosmo$\times$\Rmaxcosmo$\times$\mcosmo\ with 81$\times$61$\times$61 elements. For each set of parameter values, we numerically simulate the propagation of the five different primary isotopes from the entire source population, as described previously. The result of the propagation for different values of the other parameters (\lumcosmo~and the five $\boldsymbol{f}^\cosmo_{\text{A}}$ parameters) can be obtained by normalizing the spectra at Earth to PAO observations.
We then fit the TA data in the full energy range, assuming the experiment observes (1) the cosmological source distribution, with the parameter values $\boldsymbol\lambda_\cosmo$ previously obtained, and (2) a local source characterized by $\boldsymbol\lambda_\local$, which we now optimize. For that, we scan a fine grid in \gammalocal$\times$\Rmaxlocal$\times$\Dlocal~with 40$\times$61$\times$80 elements. For each set of parameter values we simulate the propagation of a UHECR spectrum composed of a single isotope $A_\local$ from the local source to Earth. The fluxes arriving at Earth from the local source are added to those from the best-fit cosmological source distribution, before fitting the TA data.
This two-step approach does not affect the results because, above 30~EeV, where the experiments differ, the overall fit is driven mainly by the PAO spectrum (due to its lower uncertainties), and therefore depends primarily on the cosmological source distribution. The local source parameters can then be searched separately in step two, in order to optimize the fit to the high-energy TA data.
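The two-step procedure can be illustrated with a toy brute-force scan (schematic only; the $\chi^2$ surfaces and the small grids below are stand-ins for the actual \prince-based simulations):

```python
import itertools

# Toy version of the two-step fit: step 1 fixes the cosmological parameters,
# step 2 scans the local-source grid with those parameters held frozen.
def fit_step(chi2_fn, grids):
    """Brute-force scan over the Cartesian product of per-parameter grids;
    returns (best chi2, best parameter tuple)."""
    return min((chi2_fn(p), p) for p in itertools.product(*grids))

# Toy chi-square surfaces with known minima (illustrative only):
chi2_cosmo = lambda p: (p[0] + 1.0) ** 2 + (p[1] - 1.6) ** 2
chi2_local = lambda p: (p[0] - 14.0) ** 2   # e.g. distance in Mpc

_, best_cosmo = fit_step(chi2_cosmo, [[-2.0, -1.0, 0.0], [1.4, 1.6, 1.8]])
_, best_local = fit_step(chi2_local, [[10.0, 14.0, 18.0]])
print(best_cosmo, best_local)  # step 2 is run with best_cosmo held fixed
```

The factorization is what makes the scan tractable: instead of one $8+5$-dimensional search, two much smaller grids are scanned in sequence.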
This method differs from that employed by~\citet{Heinze:2019} in three aspects: 1) that work considered only the PAO spectrum and composition data, while we include TA data in the same fit; 2) that work considered only a cosmological source distribution, while we consider also the presence of a local source observed by TA; and 3) that work took into account only uncertainties in the energy scale, while we include also the uncertainties on $\langle X_{\mathrm{max}}\rangle$ and $\sigma(X_\mathrm{max})$.
Furthermore, for completeness, we also performed a fit of the model of a cosmological source distribution to TA data only. The results of this fit are discussed in \App\ref{app:TA} and compared to the main results of this study as well as the previous work by~\citet{Heinze:2019}. As we will show, when fitting only TA data, a cosmological source distribution is less favored than when fitting only PAO data (best-fit $\chi^2$/d.o.f.=1.6 vs.~1.3). The joint fit obtained in the main part of this paper leads to a higher value of $\chi^2$/d.o.f.=1.7. As discussed in greater detail in \Sec\ref{sec:results}, this is simply due to the inclusion of data from both experiments, which does not allow for a lower chi-squared value regardless of the model being fitted.
\section{Results and Discussion}
\label{sec:results}
In order to judge our hypothesis of a local UHECR source, we test two scenarios: 1) the null hypothesis that both TA and PAO observe the same cosmological source distribution, and 2) the hypothesis that TA additionally observes a local source in the Northern Hemisphere. Both scenarios are evaluated using the joint fit method described in \Sec\ref{sec:methods}, considering both TA and PAO data, and assuming constant systematic energy shifts, which are also optimized as part of the fit.
In~\Fig\ref{fig:main_result_predictions} we show the best-fit results of both scenarios, obtained using Sibyll as the air shower model. On the left-hand side we show the case without a local source, and on the right-hand side the case where there is a local source in the Northern Hemisphere. In this case, the source emits silicon-28 and lies at a distance of 13.9~Mpc, which corresponds to the best fit found. The exact values and uncertainties of the best-fit parameters of the cosmological source distribution, the local source, and the systematic energy shifts are provided in \Tab\ref{tab:main_result_parameters} for silicon-28. For other emitted isotopes there is a dedicated table in \App\ref{app:other_isotopes}.
The upper plots of \Fig\ref{fig:main_result_predictions} show the predicted cosmic-ray spectra. Focusing first on the data, we plot as black data points the energy-shifted PAO spectrum, and as brown points the TA spectrum. The best-fit energy shift values are provided in \Tab\ref{tab:main_result_parameters}. As expected, the relative systematic shift between the energy scales, $\delta_E^\PAO-\delta_E^\TA$, is consistent with the previous analysis by~\citet{Tsunesada:2021qO}. The absolute values of $\delta_E^\PAO$ and $\delta_E^\TA$ differ from that study because there they were chosen to be symmetric, while here they were obtained through the joint fit to our model. Like in \Fig\ref{fig:comparison}, this shift leads to an agreement between the spectra from the two experiments up to about 30 EeV, while above that energy the TA fluxes are generally higher. For reference, we show as a dashed line the threshold energy of 25~EeV above which the new excess was found in the TA data~\citep{TelescopeArray:2021dfb}.
The dashed black curves represent the contribution of the cosmological source distribution, which is observed by both experiments. The contributions from the different mass groups are shown in different colors. In the top right plot, we show in addition as a solid black curve the total contribution from the local source that emits silicon-28. This contribution is almost entirely within the silicon mass group also at Earth, as indicated by the solid yellow curve that can be seen behind the black curve. Finally, the brown curve represents the sum of the cosmological source distribution and the local source, which is the total flux observed by TA.
The first thing to note is that the best-fit spectrum from the cosmological source distribution is similar in both cases, although in the left-hand plot the contribution from the cosmological source distribution is fitted to data from both experiments and in the right-hand plot only to PAO. This is because at higher energies, where the PAO and TA spectra are discrepant, the TA data have considerably larger uncertainties, and therefore the fit of the cosmological source distribution is driven by the PAO data in either scenario. This can also be seen by comparing the left- and right-hand columns of \Tab\ref{tab:main_result_parameters}, where the parameters of the source population are in fact similar in both scenarios.
Comparing the best-fit results with data, we can see that in the left panel the cosmological source distribution alone fails to explain the TA data above 30 EeV, and the overall joint fit has a high $\chi^2$ per degree of freedom (d.o.f.) of $109.1/44=2.5$, which also includes the composition data in the bottom plots.
In contrast, the model on the right-hand side has a lower value of $\chi^2$/d.o.f.=1.7, because the tension with TA data is explained by the additional flux from the local source. While in the scenario without a local source the large value of $\chi^2$/d.o.f. was mainly due to the tension with the TA spectrum data at high energies, in this case, the fit cannot be further improved due to the low uncertainties of the PAO and TA spectra at energies below $\sim20$~EeV. Even by optimizing the energy shift, the data from both experiments cannot be brought to a more precise agreement at these low energies, which necessarily limits the quality of any joint fit. This is in contrast with the PAO-only fit by~\citet{Heinze:2019}, who obtained a lower $\chi^2$/d.o.f.=1.3.
The composition observables are shown in the bottom plots of \Fig\ref{fig:main_result_predictions}. Regarding the cosmological source distribution, as reported in \Tab\ref{tab:main_result_parameters}, the best-fit composition is dominated by helium at the 80-90\% level, followed by nitrogen. This is consistent with previous fits to PAO data only by~\citet{Heinze:2019} and~\citet{Aab:2017}. That means that the addition of the TA data does not considerably change predictions on the composition emitted by the sources. On the other hand, the content of protons emitted directly by the sources cannot be constrained because their maximum energy lies below the minimum energy threshold of our fit. This is due to the assumption of a rigidity-dependent maximum energy for the emitted cosmic rays, as described in \Sec\ref{sec:methods}.
Regarding the local source, we can see that its contribution does not considerably change the expected values of $\langle X_\mathrm{max} \rangle$ and $\sigma(X_\mathrm{max})$ above our fitting threshold (compare the dashed black curves and solid brown curves in the bottom-right panels of \Fig\ref{fig:main_result_predictions}). Out of the five isotopes we tested, silicon provides the best overall fit quality for \textsc{Sibyll} and \textsc{Epos-LHC}, and nitrogen for \textsc{QGSJET}. However, scenarios where the local source emits other isotopes are also viable. As discussed later in this section and in more detail in \App\ref{app:other_isotopes}, heavier isotopes, such as iron, lead to results that fit the data only marginally worse than silicon, and require that the source lie farther from Earth. On the other hand, as shown in \App\ref{app:other_isotopes}, considerably lighter isotopes, such as protons and helium, provide comparatively poor fits to data, because the predicted composition at Earth is too light compared to TA data.
Because the two scenarios of \Fig\ref{fig:main_result_predictions} are essentially nested models (i.e. they are equal with the exception of a local source that adds a set of independent parameters), we can compare them directly using Wilks' theorem~\citep{Wilks:1938dza}. We then conclude that, under the assumption of a constant systematic energy shift in both TA and PAO, the existence of a local source in the Northern Hemisphere is favored at the 5.6$\sigma$ level compared to the model where there is only a cosmological source distribution. The other air shower models also lead to a result with high significance, as detailed in the bottom row of \Tab\ref{tab:main_result_parameters}.
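The nested-model comparison can be reproduced schematically with the standard library alone. The $\Delta\chi^2$ value below is illustrative, reconstructed from the quoted goodness-of-fit values ($\chi^2=109.1$ with 44 d.o.f. for the null model, $\chi^2/\text{d.o.f.}\approx1.7$ with five additional local-source parameters), and the helper-function names are our own:

```python
import math

# Wilks' theorem: the chi-square improvement from adding the local source is
# tested against a chi-square distribution with 5 d.o.f. (its 5 parameters).
def chi2_sf_5dof(x):
    """Survival function of a chi-square distribution with 5 d.o.f., built
    from the recurrence Gamma(a+1,t) = a*Gamma(a,t) + t^a * exp(-t),
    starting at Gamma(1/2, t) = sqrt(pi) * erfc(sqrt(t))."""
    t = x / 2.0
    g = math.sqrt(math.pi) * math.erfc(math.sqrt(t))  # Gamma(0.5, t)
    g = 0.5 * g + math.sqrt(t) * math.exp(-t)         # Gamma(1.5, t)
    g = 1.5 * g + t ** 1.5 * math.exp(-t)             # Gamma(2.5, t)
    return g / (0.75 * math.sqrt(math.pi))            # divide by Gamma(2.5)

def p_to_sigma(p):
    """One-sided Gaussian significance for a p-value, found by bisection."""
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

delta_chi2 = 109.1 - 1.7 * 39   # illustrative reconstruction, ~42.8
print(round(p_to_sigma(chi2_sf_5dof(delta_chi2)), 1))
```

With these rounded inputs the significance lands in the $5$--$6\sigma$ range, consistent in order with the quoted $5.6\sigma$.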
In \App\ref{app:energy-dependent_shift} we explore how our assumption on the experiment systematics may affect this result. Specifically, we test the more complex case where the energy shifts in both experiments are energy-dependent, as proposed recently by the TA and PAO joint working group~\citep{Tsunesada:2021qO}. We show that in that case, the presence of a local source is neither favored nor disfavored, because the discrepancies between the full f.o.v.~spectra at high energies become smaller, and the fit is no longer sensitive to the effect of the local source. However, as shown in \App\ref{app:energy-dependent_shift}, our baseline model with a local source and energy-independent experiment systematics can provide a better joint fit than a scenario involving energy-dependent systematics and is favored at the 4.4$\sigma$ level. This again supports the idea that the high-energy flux differences may originate at least partially from an astrophysical contribution, rather than purely from experiment systematics.
In \Fig\ref{fig:main_result_contours} we show the values of $\Delta\chi=\sqrt{\chi^2-\chi^2_\mathrm{min}}$ for a region of the parameter space, where $\chi^2_\mathrm{min}$ is the best-fit chi-squared value. On the left, we can see the parameter space of the cosmological source distribution for the scenario without a local source. The white dots represent the best-fit parameters, which correspond to the result in the left panel of \Fig\ref{fig:main_result_predictions}. The yellow, green, and blue contours represent the regions 1, 2, and 3$\sigma$ away from the best fit. These source population parameters are compatible with the result by~\citet{Heinze:2019}, although only PAO data were fitted in that work. As mentioned above, the reason for this is that at higher energies, where the experiments observe different fluxes, PAO data has significantly lower statistical uncertainties, and therefore drives the fit.
On the right panel of \Fig\ref{fig:main_result_contours} we show the parameter space of the local source in the full model, assuming silicon-28 emission, with the white dots representing the best-fit result (as on the right-hand side of \Fig\ref{fig:main_result_predictions}). As we can see, the strictest constraint obtained is on the maximum rigidity of the cosmic rays accelerated by the local source, \Rmaxlocal. As detailed further in \App\ref{app:other_isotopes}, the best-fit value of $E^\mathrm{local}_\mathrm{max}=Z_A\,R^\mathrm{local}_\mathrm{max}$ does not depend on the isotope (or mix of isotopes) accelerated by the source. Another interesting feature of \Rmaxlocal~is that there are two ranges that can provide a good fit. The lower range, centered around 20~EeV, is the best-fit case shown in \Fig\ref{fig:main_result_predictions} (white dots); for $E_\mathrm{max}>2000$~EeV, the model becomes viable again at the 2$\sigma$ level. The best fit in this energy range is shown as a red square. This second case represents a local source that is an extreme accelerator of cosmic rays up to the ZeV regime. In this scenario, the ZeV cosmic rays from the local source disintegrate efficiently into protons and simultaneously cool down to tens of EeV, leading to a pure-proton component at Earth that explains the TA excess. As we can see in the bottom-right plot, the local source must lie at a distance of at least 100~Mpc in order for this strong cooling to occur. Although for Sibyll this ``exotic'' scenario is at best $2\sigma$ away from the best-fit case, when considering \textsc{Epos-LHC} there is in fact a viable solution within the $1\sigma$ region. These details are discussed further in \App\ref{app:exotic}.
Regarding the spectral index of the cosmic rays emitted by the local source, we can see that our result can only provide an upper limit of \gammalocal$ < 0.5$. This is because softer spectra would lead to an additional flux at Earth below 20~EeV that would overshoot the observed TA flux. On the other hand, a lower limit of \gammalocal~cannot be obtained because for hard spectra the flux becomes dominated by cosmic rays with energy close to $E_\mathrm{max}$, and therefore the precise shape of the distribution cannot be constrained.
Finally, we can see that the distance to the local source is also constrained. To better illustrate this, we show in the left panel of \Fig\ref{fig:distance} the 1$\sigma$ and 3$\sigma$ uncertainty regions on the distance traveled by the cosmic rays from the local source to Earth, for five different emitted isotopes. All other parameters are kept at their best-fit values. Silicon-28 is the case discussed so far, which provides the best fit out of the five mass groups. Assuming Sibyll as the air shower model (blue, cf.~also \Tab\ref{tab:main_result_parameters}), the source may lie at any distance up to 23.1~Mpc for a result within the $1\sigma$ region, with the best fit obtained for 13.9~Mpc. This constraint on the distance arises because explaining the data requires optimally efficient photodisintegration of silicon nuclei at the highest energies.
In the right plot of \Fig\ref{fig:distance} we show the energy loss length for silicon (yellow curve) and, as a reference, the energy range relevant for our fit (orange band): from 6 EeV, the minimum energy of the joint fit, up to 224~EeV, the highest energy for which TA provides a flux measurement. As we can see, the energy loss length of a silicon nucleus with an energy of $\sim$200~EeV is roughly 10~Mpc, which is the same order of magnitude as our optimal local source distance of 14~Mpc. If the local source were to lie much closer to Earth, the emitted silicon nuclei would undergo less photodisintegration, leading to an observed TA flux with a harder spectrum and heavier composition. However, as we can see in the left-hand plot, in that case we are still able to explain observations within the 1$\sigma$ region of the best fit. On the other hand, if the source were to lie at a distance much larger than the energy loss length, photodisintegration at these energies would be too thorough, producing large amounts of secondary nuclei. These lighter isotopes would then be observed by TA at lower energies (due to their lower mass number), leading to an additional flux that is not supported by the data. For that reason, distances much larger than $\sim10$~Mpc are excluded for the case of silicon.
By the same token, for a lighter element like nitrogen, the maximum distance to the source becomes limited to about 1~Mpc, as we can see on the left-hand-side plot, while for heavier isotopes like iron it is much larger, of order $100$~Mpc. All these cases can be understood by noting the different curves of the right-hand plot, as the optimal distance to the source corresponds roughly to the energy loss length of the respective isotope at the highest energy.
As shown in \Fig\ref{fig:distance}, Andromeda (M31) lies at a distance of 752~kpc. Our neighboring galaxy is therefore in principle viable as a local UHECR source in the case of intermediate-mass isotopes like nitrogen and silicon. On the other hand, a source such as the Perseus-Pisces supercluster (PPS, containing the Perseus cluster, A~426), at 70~Mpc, would satisfy the distance criterion for isotopes in the iron group. Both these objects are also supported as possible local source candidates by current TA data, since their positions are compatible with the direction of the recently detected high-energy excess~\citep{Kim:2021Aj}. Other candidate sources may, of course, also exist; they may even lie outside the region of the new excess, due to cosmic-ray deflections by the Galactic magnetic fields (GMFs); see also the discussion below.
In terms of energetics, as detailed in \Tab\ref{tab:main_result_parameters} and \Tab\ref{tab:other_isotopes} in \App\ref{app:other_isotopes}, the necessary cosmic-ray luminosity of a source that emits nitrogen is $10^{39}$~erg/s. For an iron source, a luminosity of $5.5\times10^{43}$~erg/s is needed, which is higher due to the larger distance required. Both Andromeda and the PPS have higher total photon luminosities ($10^{44}$~erg/s and $8\times10^{44}$~erg/s~\citep{Boehringer:2021mix}, respectively), which means that they may be feasible candidates within this model from the energetics point of view. In the case of Andromeda, a very low cosmic-ray loading of $10^{-5}$ would be sufficient to explain observations. On the other hand, since Andromeda is a spiral galaxy, the question would still remain of what the acceleration sites of such energetic cosmic rays are; addressing it is, however, outside the scope of this work.
In general, large-scale structured GMFs will shift the apparent position of the UHECR source in the sky, while small-scale turbulent GMFs spread the arrival directions around the source position (see e.g.~\citet{Shaw:2022lqd}).
For the local UHECR source in the Northern Hemisphere, this spreading must be small enough not to affect our declination-dependent interpretation of the spectrum.
The amount of spread around the (potentially shifted) source position can be estimated as (see e.g.~\citet{Lee:1995tm}):
\begin{align}
\theta_\text{rms} \approx & \frac{2}{\pi} \sqrt{D \lambda_B} \frac{ZeB}{E} \nonumber\\
\approx & 0.35^\circ Z \left( \frac{D}{10~\text{kpc}} \right)^{1/2} \left( \frac{\lambda_B}{100~\text{pc}} \right)^{1/2} \nonumber\\
& \times \left( \frac{B}{1~\mu\text{G}} \right) \left( \frac{E}{100~\text{EeV}} \right)^{-1},
\label{eq:GMFspreading}
\end{align}
with $D$ the distance through the Galaxy where these turbulent magnetic fields are present, $\lambda_B$ the correlation length of the turbulent magnetic fields and $B$ the magnetic-field strength. For silicon-28 with an energy of 30~EeV, and typical order-of-magnitude magnetic-field parameters of $D = 10$~kpc, $\lambda_B = 100$~pc and $B = 1~\mu$G, this gives $\theta_\text{rms} \approx 16^\circ$, which is roughly consistent with the extent of the new excess found by TA. However, note that the extent of the turbulent magnetic fields in our Galaxy, their correlation length and their strength are not well known. If they happen to be significantly stronger or more widespread than the order-of-magnitude estimates given here, the expected UHECR spreading increases significantly. In such a scenario, a lighter composition than silicon might be favorable.
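Evaluating Eq.~(\ref{eq:GMFspreading}) for the quoted benchmark reproduces this spreading estimate (a short numerical check of the scaling form; the function name is ours):

```python
# Numerical evaluation of the turbulent-GMF spreading scaling:
# theta_rms ~ 0.35 deg * Z * (D/10 kpc)^1/2 * (lambda_B/100 pc)^1/2
#             * (B/1 uG) * (E/100 EeV)^-1
def theta_rms_deg(Z, D_kpc, lambda_pc, B_uG, E_EeV):
    """RMS spreading angle in degrees around the (shifted) source position."""
    return (0.35 * Z * (D_kpc / 10.0) ** 0.5 * (lambda_pc / 100.0) ** 0.5
            * B_uG * (100.0 / E_EeV))

# Silicon-28 (Z = 14) at 30 EeV with the benchmark field parameters:
print(round(theta_rms_deg(14, 10.0, 100.0, 1.0, 30.0), 1))  # ~16 degrees
```

The linear scaling with $Z$ makes explicit why a lighter composition would be favored if the turbulent fields turn out stronger than assumed: at fixed energy, halving the charge halves the spreading angle.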
\section{Summary and conclusions}
\label{sec:conclusion}
We have performed a joint fit to current PAO and TA data accounting for both spectral and composition observables. We have adopted the standard rigidity-dependent maximum-energy assumption, and considered all relevant parameters and their multi-parameter correlations. We have tested the hypothesis that both experiments observe a cosmological distribution of identical UHECR sources, while TA observes an additional local UHECR source located in the Northern Hemisphere. We have demonstrated that the presence of the local source is favored at the $5.6\sigma$ level compared to the null-hypothesis scenario where there is no local source.
Our best joint fit reveals a local source that emits cosmic rays dominated by the silicon-28 mass group, with a hard spectrum (\gammalocal~$<-1.0$) and a maximum energy of \Emaxlocal$=20$~EeV. Although the best fit is obtained with Sibyll~2.3c as the air shower model, good fits can also be obtained with QGSJET-II-04 and Epos-LHC, which we have also tested. Besides silicon, other isotopes with masses between nitrogen and iron are also viable within the 3$\sigma$ region relative to the best fit (cf.~\App\ref{app:other_isotopes}). In the silicon scenario, the source must lie within a distance of 14~Mpc, making Andromeda a viable candidate. For heavier elements such as iron, the local source should lie at a distance of the order of 100~Mpc, compatible with an object such as the Perseus-Pisces supercluster. Both source candidates have a photon luminosity higher than the cosmic-ray luminosity required by the model, making them energetically viable.
Furthermore, both lie within the angular uncertainty region of the flux excess recently reported by TA~\citep{Kim:2021Aj}; our model predicts that their contribution should be dominant above 30 EeV, which is approximately the energy range of the TA excess.
While we have derived our main result using standard systematic shifts of the energy grids of the PAO and TA experiments (independent of energy), we have also compared to a scenario where the differences between the measurements originate from energy-dependent systematic shifts of the two experiments' energy grids; see~\App\ref{app:energy-dependent_shift} for details. We have demonstrated that the astrophysical explanation (the local UHECR source) leads to a significantly better goodness of fit than the tested systematics explanation ($\chi^2/\mathrm{d.o.f.}=1.7$ versus $2.5$, compared to the null hypothesis of standard systematics and a cosmological source distribution only). Note, however, that if the energy-dependent shifts are present, the local UHECR source cannot be significantly established on top of them. One may speculate that for slightly adjusted systematics both hypotheses may work equally well; such a systematics model, however, would require a clear physical motivation.
We conclude that a local UHECR source provides a good description of the long-standing discrepancy between the spectrum and composition data of PAO and TA. While our $5.6\sigma$ significance with respect to the null hypothesis (cosmological source distribution only, standard systematics) is high, and this astrophysical explanation is clearly more attractive than a systematic one, the claim of a ground-breaking discovery probably requires a) a better understanding of possible energy-dependent systematics, b) a scrutinizing analysis performed by the experimental collaborations using updated data, and c) an unambiguous association with (possibly observed) anisotropies. On the modeling side, we have restricted ourselves to a single mass group from the local source due to the computational effort, while a more complex model involving a mix of isotopes may eventually provide a better joint fit and constrain the properties of the local source further. However, the higher number of parameters of such a model will require higher statistics from the Northern Hemisphere, which can only be provided by future experiments such as the planned TAx4 experiment~\citep{TelescopeArray:2021jim}.
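The quoted significance can be cross-checked with Wilks' theorem: the local source adds four free parameters (44 vs.~40 d.o.f.\ in the table below), so under the null hypothesis the $\chi^2$ improvement should follow a $\chi^2$ distribution with four d.o.f. A minimal sketch (pure standard library; the two-sided Gaussian conversion is a common convention we assume here):

```python
import math

# Best-fit chi-square values for Sibyll 2.3c (see table: 109.1/44 vs 67.6/40)
chi2_null, dof_null = 109.1, 44   # cosmological source distribution only
chi2_alt,  dof_alt  = 67.6, 40    # ... plus local source (4 extra parameters)

delta_chi2 = chi2_null - chi2_alt        # 41.5
delta_dof  = dof_null - dof_alt          # 4

# Wilks' theorem: delta_chi2 ~ chi-square with delta_dof d.o.f.
# For 4 d.o.f. the survival function is closed-form: P(X > x) = e^{-x/2}(1 + x/2).
p_value = math.exp(-delta_chi2 / 2.0) * (1.0 + delta_chi2 / 2.0)

# Convert to a two-sided Gaussian-equivalent significance by bisection on erfc.
lo, hi = 0.0, 10.0
while hi - lo > 1e-6:
    mid = (lo + hi) / 2.0
    if math.erfc(mid / math.sqrt(2.0)) > p_value:   # two-sided tail probability
        lo = mid
    else:
        hi = mid
print(f"{lo:.1f} sigma")   # ~5.6 sigma, matching the quoted significance
```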
\begin{deluxetable*}{c| l|r|r|r||r|r|r}
\centering
\caption{Best-fit parameters corresponding to the results of the joint fit to PAO and TA. The 1$\sigma$ uncertainty region is given for 1 d.o.f.}
\label{tab:main_result_parameters}
\tablehead{
&
& \multicolumn{3}{c||}{cosmological source distribution only}
& \multicolumn{3}{c}{cosmological source distribution + local source}
\\
&
& \textsc{Sibyll~2.3c}
& \textsc{Epos-LHC}
& \textsc{QGSJET-II-04}
& \textsc{Sibyll~2.3c}
& \textsc{Epos-LHC}
& \textsc{QGSJET-II-04}
}
\startdata
\multirow{9}{*}{\rotatebox[origin=c]{90}{cosmological source distrib.}}
&\gammacosmo
&$-0.75_{-0.15}^{+0.15}$
&$0.10_{-0.1}^{+0.05}$
&$-0.60_{-0.05}^{+0.03}$
&$-0.75_{-0.45}^{+0.15}$
&$-0.85_{-0.05}^{+0.05}$
&$-0.65_{-0.03}^{+0.05}$
\\
&\Rmaxcosmo~(GV)
&$1.8_{-0.2}^{+0.2}\times10^{9}$
&$2.5_{-0.2}^{+0.2}\times10^{9}$
&$2.5_{-0.2}^{+0.2}\times10^{9}$
&$1.8_{-0.2}^{+0.2}\times10^{9}$
&$2.0_{-0.2}^{+0.2}\times10^{9}$
&$2.5_{-0.2}^{+0.2}\times10^{9}$
\\
&\mcosmo
&$3.6_{-0.6}^{+0.6}$
&$<-4.8$
&$<-5.8$
&$3.8_{-0.6}^{+0.6}$
&$0.6_{-0.6}^{+0.6}$
&$<-5.8$
\\
&$f_A (\%)$
& $\,$
& $\,$
& $\,$
& $\,$
& $\,$
& $\,$
\\
&\multicolumn{1}{r|}{H}
& $0.004_{-0.004}^{+99.996}$
& $0.000_{-0.000}^{+86.756}$
& $0.002_{-0.002}^{+99.881}$
& $0.004_{-0.004}^{+99.928}$
& $0.001_{-0.001}^{+99.879}$
& $0.000_{-0.000}^{+84.659}$
\\
&\multicolumn{1}{r|}{He}
& $86.096_{-2.256}^{+1.986}$
& $88.799_{-0.338}^{+0.329}$
& $92.588_{-0.266}^{+0.258}$
& $80.504_{-4.948}^{+4.150}$
& $92.125_{-0.514}^{+0.485}$
& $92.471_{-0.270}^{+0.261}$
\\
&\multicolumn{1}{r|}{N}
& $13.324_{-0.696}^{+0.728}$
& $10.578_{-0.400}^{+0.414}$
& $7.222_{-0.281}^{+0.291}$
& $18.803_{-0.901}^{+0.936}$
& $7.738_{-0.298}^{+0.308}$
& $7.375_{-0.202}^{+0.207}$
\\
&\multicolumn{1}{r|}{Si}
& $0.567_{-0.094}^{+0.113}$
& $0.609_{-0.093}^{+0.110}$
& $0.181_{-0.028}^{+0.034}$
& $0.676_{-0.192}^{+0.266}$
& $0.133_{-0.034}^{+0.045}$
& $0.147_{-0.027}^{+0.033}$
\\
&\multicolumn{1}{r|}{Fe}
& $0.010_{-0.004}^{+0.008}$
& $0.015_{-0.008}^{+0.017}$
& $0.007_{-0.002}^{+0.003}$
& $0.012_{-0.006}^{+0.012}$
& $0.003_{-0.002}^{+0.003}$
& $0.005_{-0.002}^{+0.002}$
\\
\hline
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Loc. source}}}
& isotope
& $\,$
& $\,$
& $\,$
& silicon-28
& silicon-28
& nitrogen-14
\\
&\gammalocal
& $\,$
& $\,$
& $\,$
& $<-1.0$
& $<-1.1$
& $<-1.1$
\\
&\Rmaxlocal~(GV)
& $\,$
& $\,$
& $\,$
& $1.3_{-0.1}^{+0.2}\times 10^{9}$
& $2.3_{-0.1}^{+0.3}\times 10^{9}$
& $2.5_{-0.3}^{+0.3}\times 10^{9}$
\\
&\lumlocal~(erg~s$^{-1}$)
& $\,$
& $\,$
& $\,$
& $1.1_{-1.1}^{+2.0}\times 10^{42}$
& $7.3_{-7.3}^{+18.0}\times 10^{41}$
& $<1.0\times 10^{40}$
\\
&\Dlocal~(Mpc)
& $\,$
& $\,$
& $\,$
& $13.9_{-13.9}^{+9.2}$
& $11.3_{-11.3}^{+9.5}$
& $<1.4$
\\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{Systematics}}
&\deltaEPAO (\%)
&$-11.6_{-0.5}^{+2.1}$
&$-8.97_{-0.5}^{+1.1}$
&$10.8_{-0.3}^{+0.0}$
&$-11.7_{-1.5}^{+0.8}$
&$-9.5_{-0.6}^{+0.5}$
&$10.9_{-0.0}^{+0.9}$
\\
&\deltaETA (\%)
&$-20.5_{-0.5}^{+1.9}$
&$-18.3_{-0.4}^{+1.0}$
&$10.8_{-0.3}^{+0.0}$
&$-19.7_{-1.3}^{+0.7}$
&$-17.6_{-0.6}^{+0.5}$
&$1.1_{-0.0}^{+0.8}$
\\
&\deltaMeanXmaxPAO (\%)
&$-25_{-27}^{+25}$
&$-100_{-0}^{+0}$
&$-100_{-0}^{+0}$
&$-26_{-23}^{+26}$
&$-100_{-0}^{+0}$
&$-100_{-0}^{+0}$
\\
&\deltaMeanXmaxTA (\%)
&$18_{-12}^{+12}$
&$-18_{-3}^{+5}$
&$-47_{-0}^{+2}$
&$22_{-11}^{+13}$
&$-12_{-5}^{+4}$
&$-31_{-2}^{+0}$
\\
&\deltaSigmaXmaxPAO (\%)
&$50_{-30}^{+26}$
&$-59_{-9}^{+15}$
&$100_{-0}^{+0}$
&$56_{-24}^{+27}$
&$-73_{-11}^{+11}$
&$100_{-0}^{+0}$
\\
&\deltaSigmaXmaxTA (\%)
&$-41_{-9}^{+7}$
&$-90_{-2}^{+4}$
&$3_{-0}^{+3}$
&$-83_{-9}^{+10}$
&$-100_{-0}^{+0}$
&$-9_{-3}^{+0}$
\\
\hline
&$\chi^2$/d.o.f.
&109.1/44
&130.4/44
&269.6/44
&67.6/40
&87.8/40
&239.6/40
\\
\hline
&{\makecell{Favored vis-\`a-vis\\no local source}}
& $\,$
& $\,$
& $\,$
& 5.6$\sigma$
& 5.7$\sigma$
& 4.6$\sigma$
\\
\hline
\enddata
\end{deluxetable*}
\section*{Acknowledgments}
The authors would like to thank Domenik Ehlert, Simone Garrappa, Ioana Maris, and Andrew M. Taylor for helpful comments and discussions. PP was supported by the International Helmholtz-Weizmann Research School for Multimessenger Astronomy, largely funded through the Initiative and Networking Fund of the Helmholtz Association. XR was supported by the Deutsche Forschungsgemeinschaft SFB 1491.
\clearpage
\appendix
\section{Comparison with the hypothesis of an energy-dependent shift}
\label{app:energy-dependent_shift}
In the main text, we have accounted for systematic effects between the data from both experiments by introducing constant energy shifts to the spectrum and to the composition-related observables. We showed that the hypothesis that TA observes a local source in the Northern Hemisphere is significantly favored compared to the scenario where both experiments observe the same cosmological source distribution. We now address the hypothesis that there are systematic energy shifts in the two experiments that are themselves energy-dependent, $\delta_E^\PAO(E)$ and $\delta_E^\TA(E)$, and discuss the possible role of a local source in that scenario. Possible reasons for such energy-dependent systematic shifts are discussed in~\citet{WatsonECRS:2022}. As stated there, active discussions are currently underway within the joint spectrum working group from PAO and TA on this topic.
As shown by~\citet{Tsunesada:2021qO}, the discrepancy between the spectra observed by the two experiments, considering only data from their common declination band, can be eased if the PAO spectral data are shifted by $+4.5\%$ plus an additional $+10\%/\mathrm{decade}$ above $10^{19}$~eV, and the TA data by $-4.5\%$ plus an additional $-10\%/\mathrm{decade}$ above $10^{19}$~eV. A slightly different set of energy-dependent shifts has been proposed in~\citet{WatsonECRS:2022}, which is still being investigated by the joint spectrum working group of PAO and TA; we therefore implement the established energy-dependent shifts of~\citet{Tsunesada:2021qO}, which have been shown to reconcile the PAO and TA spectra in the common declination band.
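For concreteness, one plausible reading of these shifts (the exact parametrization used by the working group is not reproduced here, so this sketch is purely illustrative) is a constant fractional shift plus a term growing linearly in $\log_{10}E$ above the pivot energy:

```python
import math

def shifted_energy(E_eV, delta0, slope_per_decade, pivot_eV=1e19):
    """Apply a constant fractional energy shift delta0, plus an additional
    slope_per_decade above pivot_eV.  One plausible reading of
    '+4.5% and +10%/decade above 10^19 eV'; treat as illustrative,
    not as the working group's exact definition."""
    delta = delta0
    if E_eV > pivot_eV:
        delta += slope_per_decade * math.log10(E_eV / pivot_eV)
    return E_eV * (1.0 + delta)

E = 1e20  # eV, one decade above the pivot
print(shifted_energy(E, +0.045, +0.10))   # PAO-shifted: 1.145e20 eV
print(shifted_energy(E, -0.045, -0.10))   # TA-shifted:  0.855e20 eV
```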
We note that when these energy-dependent shifts are applied to the data from outside the common declination band of the two experiments, the spectra will be discrepant above 30 EeV. This is shown in \Fig\ref{fig:comparison_energy-dependent_shift}, where we plot three data sets: PAO data from its full field of view (f.o.v., black), TA data from its full f.o.v.~(brown), and TA data from the northernmost declination band (red). These data points already contain the above-mentioned energy-dependent shifts for their respective experiment. We can see that above roughly 40~EeV the TA fluxes (both from the full f.o.v.~and from the northernmost sky) are higher than those from PAO, indicating an excess just as with the simpler assumption of an energy-independent shift (\Fig\ref{fig:comparison}).
This suggests that either the experiment systematics are declination-dependent~\citep[which seems to be disfavored,][]
{Tsunesada:2021qO}, or this effect is in fact astrophysical, as we argue in this paper.
Using the same method as in the main section of this work, we now test the scenario where the systematics in the energy grids of the two experiments are described by an energy-dependent shift (thus explaining the spectral difference in the common declination band), but a local source in the Northern Hemisphere also contributes to the flux observed by TA (explaining the differences that remain outside the common declination band).
As in the main text, we start by testing the null hypothesis that there is no local source (i.e.~only a cosmological source distribution), and perform a joint fit of the population parameters to the PAO and TA data, assuming the energy-dependent shift suggested by~\citet{Tsunesada:2021qO}. Then, assuming the same energy-dependent shift, we test the hypothesis of a local source in the Northern Hemisphere, and optimize the parameters of both the cosmological source distribution and the local source using the same search method as described in \Sec\ref{subsec:fit}.
The best-fit results for these scenarios are shown in \Fig\ref{fig:energy-dependent_shift}: on the left for the scenario without a local source, and on the right including a local source. In both cases, we consider Sibyll as the air shower model, which, as in the main text, provides the best fit of the three models tested. The best-fit parameter values are given in \Tab\ref{tab:energy_dependend_result_parameters} for all three air-shower models. We also note a qualitative difference between this case and the main result of \Fig\ref{fig:main_result_predictions}: here the best fit is obtained when the local source emits cosmic rays of the iron-56 mass group, rather than silicon-28 as in our main result.
Visually, the right-hand-side model seems to fit the joint data at high energies better; however, the two fits are actually characterized by the same value of $\chi^2$/d.o.f.=2.2. This is because the fit is strongly driven by the low-energy spectral data, rather than by the high energies where the local source contributes significantly. This is also the case for the joint fit performed in the main text, and is simply due to the lower uncertainties of the data at lower energies. In this case, however, an additional complication arises from the fact that the energy-dependent shift, which is optimized for the common declination band of both experiments, leads to discrepancies between the data from the full f.o.v., most noticeably between 20 and 30~EeV (compare black and brown data points in the upper panels of \Fig\ref{fig:energy-dependent_shift}). As further discussed below, this discrepancy between the data sets is in fact one of the main factors behind the higher value of $\chi^2$/d.o.f. obtained here compared to the scenario discussed in the main text.
Let us now directly compare these results with the ones discussed in the main text. Turning our attention first to the left-hand panels of \Fig\ref{fig:main_result_predictions} and \Fig\ref{fig:energy-dependent_shift}, we see that under the assumption that there is no local source, the existence of an energy-dependent shift is favored compared to an energy-independent shift ($\chi^2$/d.o.f.=2.2 vs.~2.6). Even though the energy-dependent shift introduces two additional d.o.f.~as well as discrepancies below 30 EeV in the full f.o.v.~data, these factors are still outweighed by the large excess in the TA flux that is observed assuming an energy-independent shift.
We can then compare the right-hand-side panels of \Fig\ref{fig:main_result_predictions} and \Fig\ref{fig:energy-dependent_shift}, where a local source in the Northern Hemisphere is assumed. In this case, the energy-independent shift actually describes the full f.o.v.~data better than an energy-dependent shift ($\chi^2$/d.o.f.=1.7 vs.~2.2), due to three factors: 1) an energy-independent shift requires two fewer parameters than the energy-dependent one, which increases the effective number of d.o.f.~of the model; 2) the energy-independent shift also leads to a better agreement of the data below 30 EeV, where the spectral data has low uncertainties, allowing for the possibility of a better joint fit; and 3) with adequate parameters, the local source is capable of explaining the excess in the TA flux above 30 EeV equally well in either of the scenarios.
Overall, we can clearly state that among the four scenarios presented, the one with energy-independent systematics and a local source in the Northern Hemisphere provides the best joint fit to the full f.o.v.~data from TA and PAO ($\chi^2$/d.o.f.=1.7) and is favored at the 4.4$\sigma$ level compared to the case of an energy-dependent shift and no local source.
In \Fig\ref{fig:energy-dependent-local-source-north-Fe} we show a region of the parameter space of the model assuming an energy-dependent shift without (left), and with a local source (right). As we can see on the left, the best-fit minimum (white dot) lies in a different region of parameter space compared to the energy-independent scenario tested in the main text (cf.~left panel of \Fig\ref{fig:main_result_contours}). The best-fit parameters are listed in \Tab\ref{tab:energy_dependend_result_parameters} for all three air-shower models considered. Compared to the main text case, this result requires a softer emission spectrum and a mildly negative source evolution (\mcosmo$=-0.4$). As we can see by comparing the left-hand panels of Figs.~\ref{fig:main_result_predictions} and \ref{fig:energy-dependent_shift}, this negative source evolution means that photodisintegration is less effective, leading to a lower proton component; however, this fact does not affect the results since the proton component peaks below the energy threshold of this study.
Assuming \textsc{Epos-LHC} as the air-shower model we actually find three separate minima, and using \textsc{QGSJET-II-04} only one minimum. As in the case of Sibyll, for the sake of conciseness, we provide in \Tab\ref{tab:energy_dependend_result_parameters} only the best-fit parameters, even in the cases where more than one minimum is found.
Finally, as we can see by the white dots in the right panel of \Fig\ref{fig:energy-dependent-local-source-north-Fe}, and in \Tab\ref{tab:energy_dependend_result_parameters}, the local source has a similar value of the maximum rigidity \Rmaxlocal~in both the energy-independent and energy-dependent cases, which underlines the fact that the TA excess is present at a similar energy. Other parameters are different:~compared to the main text case we have iron instead of silicon, a much larger distance of $\sim200~\mathrm{Mpc}$ (necessary for the more thorough disintegration of the iron nuclei, cf.~\App\ref{app:other_isotopes}), and a higher luminosity of $\sim10^{44}~\mathrm{erg/s}$, which is simply due to the larger distance. As in \Fig\ref{fig:main_result_contours}, there is also a different local minimum corresponding to an extreme accelerator of ZeV cosmic rays (red squares), as discussed in more detail in \App\ref{app:exotic}.
\begin{deluxetable}{c| l|r|r|r||r|r|r}
\centering
\caption{Best-fit parameters of the joint fit to PAO and TA, assuming energy-dependent systematic shifts in both experiments. The 1$\sigma$ uncertainty ranges are given for 1 d.o.f. }
\label{tab:energy_dependend_result_parameters}
\tablehead{
&
& \multicolumn{3}{c||}{cosmological source, no local source}
& \multicolumn{3}{c}{cosmological source distribution + local source}
\\
&
& \textsc{Sibyll~2.3c}
& \textsc{Epos-LHC}
& \textsc{QGSJET-II-04}
& \textsc{Sibyll~2.3c}
& \textsc{Epos-LHC}
& \textsc{QGSJET-II-04}
}
\startdata
\multirow{9}{*}{\rotatebox[origin=c]{90}{cosmological source distrib.}}
&\gammacosmo
& $1.30_{-0.03}^{+0.15}$
& $-0.60_{-0.05}^{+0.05}$
& $-0.65_{-0.05}^{+0.05}$
& $-0.55_{-0.10}^{+0.45}$
& $-1.05_{-0.05}^{+0.45}$
& $-0.70_{-0.03}^{+0.03}$
\\
&\Rmaxcosmo~(GV)
& $4.5_{-0.3}^{+0.6}\times10^{9}$
& $2.2_{-0.3}^{+0.3}\times10^{9}$
& $2.5_{-0.3}^{+0.3}\times10^{9}$
& $2.0_{-0.3}^{+0.2}\times10^{9}$
& $2.0_{-0.2}^{+0.2}\times10^{9}$
& $2.5_{-0.2}^{+0.2}\times10^{9}$
\\
&\mcosmo
& $-0.4_{-1.6}^{+0.2}$
& $2.0_{-0.8}^{+0.6}$
& $<-6.0$
& $4.6_{-0.8}^{+0.4}$
& $3.4_{-1.8}^{+0.4}$
& $<-6$
\\
&$f_A (\%)$
& $\,$
& $\,$
& $\,$
& $\,$
& $\,$
& $\,$
\\
&\multicolumn{1}{r|}{H}
& $0.003_{-0.003}^{+99.996}$
& $0.002_{-0.002}^{+98.898}$
& $0.002_{-0.002}^{+99.646}$
& $0.016_{-0.016}^{+99.981}$
& $0.000_{-0.000}^{+97.389}$
& $0.001_{-0.001}^{+99.997}$
\\
&\multicolumn{1}{r|}{He}
& $23.324_{-9.683}^{+13.617}$
& $88.230_{-1.180}^{+1.085}$
& $92.692_{-0.265}^{+0.257}$
& $73.512_{-11.849}^{+9.213}$
& $86.910_{-2.506}^{+2.156}$
& $92.369_{-0.300}^{+0.290}$
\\
&\multicolumn{1}{r|}{N}
& $36.272_{-6.276}^{+6.781}$
& $11.055_{-0.573}^{+0.600}$
& $7.102_{-0.214}^{+0.220}$
& $23.306_{-1.990}^{+2.116}$
& $12.557_{-0.606}^{+0.633}$
& $7.450_{-0.222}^{+0.228}$
\\
&\multicolumn{1}{r|}{Si}
& $38.957_{-2.626}^{+2.691}$
& $0.693_{-0.086}^{+0.099}$
& $0.198_{-0.030}^{+0.035}$
& $3.072_{-0.366}^{+0.414}$
& $0.524_{-0.068}^{+0.079}$
& $0.174_{-0.030}^{+0.036}$
\\
&\multicolumn{1}{r|}{Fe}
& $1.444_{-0.620}^{+1.075}$
& $0.020_{-0.006}^{+0.008}$
& $0.007_{-0.002}^{+0.003}$
& $0.093_{-0.024}^{+0.032}$
& $0.009_{-0.003}^{+0.005}$
& $0.006_{-0.002}^{+0.003}$
\\
\hline
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Loc. source}}}
& isotope
& $\,$
& $\,$
& $\,$
& iron-56
& iron-56
& iron-56
\\
&\gammalocal
& $\,$
& $\,$
& $\,$
& $<-0.3$
& $<-0.2$
& $<-0.5$
\\
&\Rmaxlocal~(GV)
& $\,$
& $\,$
& $\,$
& $2.0_{-0.2}^{+1.2}\times 10^{9}$
& $1.8_{-0.7}^{+0.7}\times 10^{9}$
& $1.6_{-0.3}^{+0.7}\times 10^{9}$
\\
&\lumlocal~(erg~s$^{-1}$)
& $\,$
& $\,$
& $\,$
& $1.3_{-0.8}^{+1.1}\times 10^{44}$
& $1.2_{-1.0}^{+0.6}\times 10^{44}$
& $5.4_{-3.1}^{+7.7}\times 10^{43}$
\\
&\Dlocal~(Mpc)
& $\,$
& $\,$
& $\,$
& $176.2_{-58.7}^{+39.4}$
& $176.2_{-89.5}^{+18.7}$
& $130.1_{-34.1}^{+46.1}$
\\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{Systematics}}
&\deltaMeanXmaxPAO (\%)
& $-100_{-0}^{+0}$
& $-100_{-0}^{+0}$
& $100_{-0}^{+0}$
& $53_{-18}^{+24}$
& $-100_{-0}^{+0}$
& $-100_{-0}^{+0}$
\\
&\deltaMeanXmaxTA (\%)
& $-1_{-4}^{+7}$
& $-7_{-5}^{+4}$
& $46_{-3}^{+3}$
& $-3_{-8}^{+10}$
& $-7_{-4}^{+5}$
& $-43_{-0}^{+0}$
\\
&\deltaSigmaXmaxPAO (\%)
& $44_{-9}^{+38}$
& $-79_{-11}^{+11}$
& $-100_{-0}^{+0}$
& $53_{-18}^{+24}$
& $-91_{-9}^{+18}$
& $100_{-0}^{+0}$
\\
&\deltaSigmaXmaxTA (\%)
& $-62_{-2}^{+10}$
& $-99_{-1}^{+3}$
& $5_{-2}^{+2}$
& $-3_{-8}^{+10}$
& $-100_{-0}^{+0}$
& $6_{-0}^{+0}$
\\
\hline
& $\chi^2$ / d.o.f.
& 87.2 / 42
& 128.8 / 42
& 280.7 / 42
& 82.3 / 38
& 110.1 / 38
& 270.1 / 38
\\
\hline
\hline
\enddata
\end{deluxetable}
\clearpage
\section{Other isotopes}
\label{app:other_isotopes}
As discussed in \Sec\ref{sec:results}, the isotope or mix of isotopes emitted by the local source cannot be directly constrained by our model, because this parameter is degenerate with the distance to the source. In the main result of \Fig\ref{fig:main_result_predictions} we limited the discussion to the case of silicon-28, which provides the best fit. However, other isotopes are also possible, as shown already in \Fig\ref{fig:distance} and discussed in the text thereafter.
To further illustrate this, in \Fig\ref{fig:other_isotopes} we show the best-fit results for the case where the local source emits protons (upper left), helium-4 (upper right), nitrogen-14 (lower left), and iron-56 (lower right). These results were obtained using Sibyll as the air shower model. Their respective best-fit distances were shown in the main part of this paper as blue points in the left panel of \Fig\ref{fig:distance}. We list the remaining best-fit parameters in \Tab\ref{tab:other_isotopes}, as well as for the other two air shower models tested.
As we can see, elements from any mass group up to iron-56 can provide results that are overall compatible with the joint data sets of PAO and TA, provided the emission characteristics and the source distance are adjusted according to \Tab\ref{tab:other_isotopes}. At the same time, some caveats should be noted for the different mass groups, as discussed below.
Emission of either protons or helium-4 leads to an observed composition that is light above 30~EeV, as we can see in the respective composition plots in \Fig\ref{fig:other_isotopes}. However, this does not affect the fit significantly because of the lack of composition data from either TA or PAO at these high energies. The case of a pure proton emission would lead to a result that is qualitatively similar to helium, with even larger values of $\langle X_\mathrm{max}\rangle$ and $\sigma(X_\mathrm{max})$ predicted for TA above 30 EeV.
For intermediate-mass isotopes like nitrogen-14, as well as heavier isotopes with mass up to iron-56, the composition observables exhibit the same qualitative behavior with energy as in our baseline scenario involving silicon, i.e.,~the observed composition becomes heavier with energy.
In all cases, we observe that the spectral data, rather than the composition observables, are the main factor driving the fit, due to their overall lower uncertainties. This means that with current statistics, we cannot point to any particular isotope (from the five mass groups up to iron-56) that leads to a significantly better joint fit.
\begin{deluxetable}{l|r|r|r|r|r}[htpb!]
\centering
\caption{Best-fit parameters from the joint fit to PAO and TA data, assuming the emission of different isotopes from the local source. The best-fit values of the (energy-independent) systematic shifts are the same in all cases, and are given in \Tab\ref{tab:main_result_parameters}.}
\label{tab:other_isotopes}
\tablehead{
& H
& He
& N
& Si
& Fe
}
\startdata
\multicolumn{1}{c|}{\textsc{Sibyll~2.3c}}
& $\,$
& $\,$
& $\,$
& $\,$
& $\,$
\\
\gammalocal
&$<0.3$
&$<-1.2$
&$<-1.2$
&$<-1.0$
&$<0.3$
\\
\Rmaxlocal~(GV)
&$>9.0\times 10^{10}$
&$8.9_{-0.3}^{+1.1}\times 10^{9}$
&$2.2_{-0.3}^{+0.3}\times 10^{9}$
&$1.3_{-0.2}^{+0.2}\times 10^{9}$
&$7.9_{-0.1}^{+12.0}\times 10^{8}$
\\
\lumlocal~(erg~s$^{-1}$)
&$7.6_{-7.4}^{+14.7}\times 10^{45}$
&$<1.0\times 10^{39}$
&$<4.2\times 10^{39}$
&$1.1_{-0.1}^{+2.0}\times 10^{42}$
&$5.5_{-4.0}^{+22.0}\times 10^{43}$
\\
\Dlocal~(Mpc)
& $176.2_{-46.1}^{+18.7}$
&$<0.4$
&$<0.9$
&$13.9_{-13.4}^{+9.2}$
&$95.9_{-43.8}^{+99.0}$
\\
$\chi^2$ / d.o.f.
& 88.3 / 40
& 87.4 / 40
& 69.3 / 40
& 67.6 / 40
& 69.1 / 40
\\
\hline
\multicolumn{1}{c|}{\textsc{Epos-LHC} }
& $\,$
& $\,$
& $\,$
& $\,$
& $\,$
\\
\gammalocal
&$<0.5$
&$<-1.2$
&$<-1.1$
&$<-1.1$
&$<0.4$
\\
\Rmaxlocal~(GV)
&$>8.0\times 10^{10}$
&$8.9_{-0.2}^{+1.1}\times 10^{9}$
&$2.5_{-0.3}^{+0.3}\times 10^{9}$
&$1.3_{-0.1}^{+0.3}\times 10^{9}$
&$8.9_{-0.2}^{+11.0}\times 10^{8}$
\\
\lumlocal~(erg~s$^{-1}$)
&$6.7_{-6.4}^{+19.8}\times 10^{45}$
&$<1.0\times 10^{39}$
&$<1.9\times 10^{39}$
&$1.1_{-0.1}^{+2.0}\times 10^{42}$
&$7.0_{-5.0}^{+17.0}\times 10^{43}$
\\
\Dlocal~(Mpc)
& $176.2_{-32.3}^{+18.7}$
&$<0.4$
&$<1.2$
&$11.3_{-10.9}^{+9.5}$
&$106.2_{-48.5}^{+70.0}$
\\
$\chi^2$ / d.o.f.
& 100.3 / 40
& 97.9 / 40
& 87.9 / 40
& 87.8 / 40
& 89.4 / 40
\\
\hline
\multicolumn{1}{c|}{\textsc{QGSJET-II-04}}
& $\,$
& $\,$
& $\,$
& $\,$
& $\,$
\\
\gammalocal
&$<0.6$
&$<-1.2$
&$<-1.1$
&$<-1.0$
&$<-0.2$
\\
\Rmaxlocal~(GV)
&$>8.0\times 10^{10}$
&$8.9_{-0.2}^{+1.1}\times 10^{9}$
&$2.5_{-0.3}^{+0.3}\times 10^{9}$
&$1.4_{-0.2}^{+0.4}\times 10^{9}$
&$1.6_{-0.7}^{+0.7}\times 10^{9}$
\\
\lumlocal~(erg~s$^{-1}$)
&$2.7_{-2.1}^{+0.6}\times 10^{46}$
&$<1.2\times 10^{39}$
&$2.4_{-1.3}^{+7.7}\times 10^{39}$
&$1.6_{-1.5}^{+2.3}\times 10^{42}$
&$3.2_{-2.8}^{+1.5}\times 10^{44}$
\\
\Dlocal~(Mpc)
& $194.9_{-51.0}^{+10.2}$
&$<0.4$
&$0.7_{-0.2}^{+0.7}$
&$17.0_{-11.5}^{+8.6}$
&$194.9_{-108.3}^{+20.7}$
\\
$\chi^2$ / d.o.f.
& 252.8/ 40
& 247.7 / 40
& 239.6 / 40
& 242.3 / 40
& 246.1 / 40
\\
\hline
\hline
\enddata
\end{deluxetable}
\clearpage
\section{An ``exotic'' scenario: extreme local accelerator}
\label{app:exotic}
As mentioned in the main text, in addition to our main result the parameter space of the local source contains another region that can provide a good joint fit to PAO and TA data. This is a scenario where the local source emits cosmic rays with extremely high maximum energies, above $10^{12}$~GeV.
An example of this kind of ``exotic'' solution was represented as red squares in the right-hand panel of \Fig\ref{fig:main_result_contours}, for the case where the local source emits cosmic rays of the silicon mass group, and considering Sibyll as the air shower model. However, as we demonstrate in this appendix, the extreme source can emit a composition dominated by any mass group. In the left panel of \Fig\ref{fig:exotic}, we show as red squares the best-fit parameters of the extreme local source obtained for a pure proton composition and using \textsc{Epos-LHC} as the air shower model. In \Tab\ref{tab:exotic} we provide the complete list of the best-fit parameters of the extreme accelerator for different emitted mass groups and assuming different air shower models. As we can see, the best-fit parameters are close to those obtained in our baseline model (\Fig\ref{fig:main_result_contours}): a maximum energy of order $\sim$10~ZeV, a distance to the extreme local source of $\sim$100~Mpc, and a hard spectral index. The luminosity of the local source is similar regardless of the emitted isotope, as we can see in \Tab\ref{tab:exotic}.
To understand these results we turn to the right-hand panel of \Fig\ref{fig:exotic}, where we show the predictions for the spectrum and composition observables. These plots appear equal regardless of the emitted composition. As we can see, the ZeV cosmic rays emitted by the local source suffer strong photodisintegration due to the long distance traveled, to the point where the flux arriving at Earth is completely dominated by protons. These secondary protons carry approximately the same Lorentz factor as the primary nuclei emitted by the source, apart from energy loss processes like pair production and the adiabatic expansion of the Universe. Therefore, as long as the emitted cosmic rays peak at about 10~ZeV, the proton flux observed by TA will peak at a few tens of EeV, thus explaining the excess observed by TA like in the scenarios discussed previously.
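The per-nucleon energy argument can be made quantitative. The sketch below (a hypothetical helper, not from the paper's code) uses $E_p \simeq Z R_\text{max}/A$ for a nucleus accelerated to its rigidity-dependent maximum energy; it neglects the pair-production and adiabatic losses mentioned above, which bring the arriving energies down further:

```python
def secondary_proton_energy_EeV(Z, A, R_max_GV):
    """Peak energy of secondary protons from photodisintegration of a
    nucleus (Z, A) accelerated to rigidity R_max: E_p ~ Z * R_max / A.
    Losses during propagation are ignored, so this is an upper estimate."""
    E_nucleus_GeV = Z * R_max_GV        # rigidity-dependent maximum energy
    return E_nucleus_GeV / A / 1e9      # per-nucleon energy, in EeV

# Silicon-28 (Z = 14) accelerated to R_max ~ 1.8e11 GV ('exotic' scenario)
print(secondary_proton_energy_EeV(14, 28, 1.8e11))   # 90.0 EeV before losses
```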
Because in this scenario the local source contributes exclusively with protons to the TA spectrum, the result predicts a high value of $\langle X_\mathrm{max}\rangle$ and $\sigma(X_\mathrm{max})$ above a few tens of EeV in TA, as we can see in the bottom right-hand-side plots of \Fig\ref{fig:exotic}. The only contribution of heavier cosmic rays is then provided by the cosmological source distribution (also observed by PAO). The reason why this scenario can still provide an acceptable fit is that the TA composition observables are not well constrained at these very high energies. However, we must also note that the fit quality provided by an extreme accelerator is overall worse compared to our baseline model, as we can see by the values of $\chi^2$/d.o.f. listed in \Tab\ref{tab:exotic}.
\begin{deluxetable}{l|r|r|r|r|r}[htpb!]
\centering
\caption{Best-fit parameter values obtained from a joint fit to PAO and TA data, for the case of an extreme local accelerator in the ZeV regime. We show the results for different emitted mass groups, considering Sibyll as the air shower model, and assuming an energy-independent systematic shift in the experiment energy scales. The best-fit values of the systematic shifts are the same for all isotopes, as listed in \Tab\ref{tab:main_result_parameters}. We note that the parameters for silicon are depicted as red squares in the right panel of \Fig\ref{fig:main_result_contours} (baseline model), and the case of protons is shown in the left panel of \Fig\ref{fig:exotic}.}
\label{tab:exotic}
\tablehead{
& H
& He
& N
& Si
& Fe
}
\startdata
\multicolumn{1}{c|}{\textsc{Sibyll~2.3c}}
& $\,$
& $\,$
& $\,$
& $\,$
& $\,$
\\
\gammalocal
&$<0.3$
&$<0.0$
&$<0.0$
&$<0.4$
&$<0.1$
\\
\Rmaxlocal~(GV)
&$>1.0\times 10^{11}$
&$>3.0\times 10^{11}$
&$>2.2\times 10^{11}$
&$>1.8\times 10^{11}$
&$>1.6\times 10^{11}$
\\
\lumlocal~(erg~s$^{-1}$)
&$7.6_{-7.4}^{+14.7}\times 10^{45}$
&$1.8_{-1.7}^{+0.4}\times 10^{45}$
&$1.3_{-1.3}^{+0.3}\times 10^{45}$
&$3.5_{-3.2}^{+1.4}\times 10^{45}$
&$1.7_{-1.2}^{+0.7}\times 10^{44}$
\\
\Dlocal~(Mpc)
& $176.2_{-46.1}^{+18.7}$
& $176.2_{-46.1}^{+18.7}$
& $176.2_{-46.1}^{+18.7}$
&$13.9_{-13.4}^{+9.2}$
&$95.9_{-43.8}^{+99.0}$
\\
$\chi^2$ / d.o.f.
& 88.3 / 40
& 88.3 / 40
& 88.3 / 40
& 88.3 / 40
& 88.3 / 40
\\
\hline
\multicolumn{1}{c|}{\textsc{Epos-LHC} }
& $\,$
& $\,$
& $\,$
& $\,$
& $\,$
\\
\gammalocal
&$<0.5$
&$<0.2$
&$<0.2$
&$<0.4$
&$<1.3$
\\
\Rmaxlocal~(GV)
&$>8.0\times 10^{10}$
&$>1.8\times 10^{10}$
&$>1.9\times 10^{10}$
&$>1.8\times 10^{10}$
&$>1.6\times 10^{10}$
\\
\lumlocal~(erg~s$^{-1}$)
&$6.7_{-6.4}^{+19.8}\times 10^{45}$
&$9.1_{-8.8}^{+17.9}\times 10^{45}$
&$8.1_{-7.7}^{+12.1}\times 10^{45}$
&$3.4_{-2.1}^{+2.4}\times 10^{45}$
&$1.5_{-0.9}^{+1.5}\times 10^{44}$
\\
\Dlocal~(Mpc)
&$176.2_{-32.3}^{+18.7}$
&$176.2_{-46.1}^{+18.7}$
&$176.2_{-46.1}^{+18.7}$
&$176.2_{-46.1}^{+18.7}$
&$176.2_{-32.3}^{+18.7}$
\\
$\chi^2$ / d.o.f.
& 100.3 / 40
& 100.4 / 40
& 100.3 / 40
& 100.3 / 40
& 100.2 / 40
\\
\hline
\multicolumn{1}{c|}{\textsc{QGSJET-II-04}}
& $\,$
& $\,$
& $\,$
& $\,$
& $\,$
\\
\gammalocal
&$<0.6$
&$<0.3$
&$<0.2$
&$<0.4$
&$<1.4$
\\
\Rmaxlocal~(GV)
&$>8.0\times 10^{10}$
&$>1.6\times 10^{11}$
&$>1.8\times 10^{10}$
&$>1.6\times 10^{10}$
&$>1.4\times 10^{10}$
\\
\lumlocal~(erg~s$^{-1}$)
&$2.7_{-2.1}^{+0.6}\times 10^{46}$
&$1.3_{-0.8}^{+1.4}\times 10^{45}$
&$9.4_{-4.5}^{+11.1}\times 10^{35}$
&$3.4_{-0.6}^{+2.5}\times 10^{45}$
&$7.8_{-1.9}^{+9.7}\times 10^{43}$
\\
\Dlocal~(Mpc)
& $194.9_{-51.0}^{+10.2}$
&$176.2_{-46.1}^{+18.7}$
&$176.2_{-46.1}^{+18.7}$
&$176.2_{-46.1}^{+18.7}$
&$176.2_{-46.1}^{+18.7}$
\\
$\chi^2$ / d.o.f.
& 252.8/ 40
& 252.9 / 40
& 252.8 / 40
& 252.8 / 40
& 252.8 / 40
\\
\hline
\hline
\enddata
\end{deluxetable}
\clearpage
\section{Fitting TA data with a cosmological source distribution}
\label{app:TA}
For the sake of comparison, we now evaluate how well a cosmological source distribution can describe the TA data set above 5 EeV (i.e.,~now neglecting PAO measurements). The best-fit parameters of the cosmological source distribution are marked with brown squares in the left panel of \Fig\ref{fig:TA_only}. For comparison, we show as red dots the best-fit parameters obtained by~\citet{Heinze:2019} when fitting only PAO data. On the right-hand panel, we show the predicted observables for our TA-only fit. The respective parameter values are listed in \Tab\ref{tab:TA_only}.
As we can see, TA data can be fitted with a cosmological source distribution with a value of $\chi^2/\mathrm{d.o.f.}=22.9/15=1.5$, in the case where Sibyll is considered, while for the other two air shower models that value is higher, as listed in \Tab\ref{tab:TA_only}.
These results are similar to the second-best minimum obtained by~\citet{Bergman:2019/7}, who considered \textsc{Epos-LHC} and QGSJET-II-04 as air shower models. We do not obtain the same best-fit minimum obtained by \citet{Bergman:2019/7} because of differences in the analysis method:~firstly, we consider \mcosmo~as a free parameter, while~\citet{Bergman:2019/7} fixed its value to \mcosmo$=3$. Secondly, in that work the authors used the $\xmax$ distributions, while we based ourselves only on the mean and variance values of the $\xmax$ distribution, which are the only publicly available data. More information would be necessary for a more detailed analysis of the differences between these two results.
\begin{deluxetable}{l|r|r|r}[htpb!]
\centering
\caption{\label{tab:TA_only_results_parameters}Best-fit parameters obtained from the fit to the TA spectrum and composition data, assuming a cosmological source distribution (and no local source). The results are shown for the three different air shower models tested. The case where Sibyll was considered is shown in \Fig\ref{fig:TA_only}.}
\label{tab:TA_only}
\tablehead{
&Sibyll~2.3c
&\textsc{Epos-LHC}
&QGSJET-II-04
}
\startdata
\gammacosmo
& $1.40_{-0.15}^{+0.10}$
& $1.40_{-0.10}^{+0.15}$
& $0.10_{-0.30}^{+0.15}$
\\
\Rmaxcosmo~(GV)
& $7.1_{-1.5}^{+0.9}\times10^{9}$
& $7.9_{-0.9}^{+2.1}\times10^{9}$
& $3.2_{-0.3}^{+0.3}\times10^{9}$
\\
\mcosmo
& $-0.8_{-2.2}^{+1.0}$
& $<-4$
& $<-5.2$
\\
$f_A (\%)$
& $\,$
& $\,$
& $\,$
\\
\multicolumn{1}{r|}{H}
& $26.258_{-12.814}^{+18.687}$
& $28.512_{-12.879}^{+17.681}$
& $50.989_{-25.589}^{+25.081}$
\\
\multicolumn{1}{r|}{He}
& $0.034_{-0.034}^{+99.956}$
& $0.010_{-0.010}^{+99.965}$
& $36.110_{-9.623}^{+10.885}$
\\
\multicolumn{1}{r|}{N}
& $0.006_{-0.006}^{+99.936}$
& $30.493_{-7.894}^{+9.236}$
& $12.625_{-0.508}^{+0.526}$
\\
\multicolumn{1}{r|}{Si}
& $72.023_{-1.805}^{+1.737}$
& $40.357_{-6.113}^{+6.428}$
& $0.000_{-0.000}^{+82.967}$
\\
\multicolumn{1}{r|}{Fe}
& $1.680_{-1.325}^{+5.901}$
& $0.628_{-0.614}^{+21.744}$
& $0.277_{-0.021}^{+0.023}$
\\
\hline
\deltaETA (\%)
& $-17.6_{-3.4}^{+3.9}$
& $-14.1_{-3.9}^{+5.4}$
& $21.0_{-1.5}^{+0.0}$
\\
\deltaMeanXmaxTA (\%)
& 0 (fixed)
& 0 (fixed)
& 0 (fixed)
\\
\deltaSigmaXmaxTA (\%)
& 0 (fixed)
& 0 (fixed)
& 0 (fixed)
\\
\hline
$\chi^2$ / d.o.f.
& 22.9 / 15
& 30.2 / 15
& 50.9 / 15
\\
\hline
\enddata
\end{deluxetable}
\clearpage
|
Title:
Can We Fly to Planet 9? |
Abstract: Planet 9 is a hypothetical object in the outer Solar system, which is as yet
undiscovered. It has been speculated that it may be a terrestrial planet or
gas/ice giant, or perhaps even a primordial black hole (or dark matter
condensate). State-of-the-art models indicate that the semimajor axis of Planet
9 is $\sim 400$ AU. If the location of Planet 9 were to be confirmed and
pinpointed in the future, this object constitutes an interesting target for a
future space mission to characterize it further. In this paper, we describe
various mission architectures for reaching Planet 9 based on a combination of
chemical propulsion and flyby maneuvers, as well as more advanced options (with
a $\sim 100$ kg spacecraft payload) such as nuclear thermal propulsion (NTP)
and laser sails. The ensuing mission duration for solid chemical propellant
ranges from 45 years to 75 years, depending on the distance from the Sun for
the Solar Oberth maneuver. NTP can achieve flight times of about 40 years with
only a Jupiter Oberth maneuver whereas, in contrast, laser sails might engender
timescales as short as 7 years. We conclude that Planet 9 is close to the
transition point where chemical propulsion approaches its performance limits,
and alternative advanced propulsion systems (e.g., NTP and laser sails)
apparently become more attractive.
| https://export.arxiv.org/pdf/2208.10207 |
\title{\large{Can We Fly to Planet 9?}}
\correspondingauthor{Adam Hibberd}
\email{adam.hibberd@i4is.org}
\correspondingauthor{Manasvi Lingam}
\email{mlingam@fit.edu}
\author{Adam Hibberd}
\affiliation{Initiative for Interstellar Studies (i4is) 27/29 South Lambeth Road London, SW8 1SZ United Kingdom}
\author{Manasvi Lingam}
\affiliation{Department of Aerospace, Physics and Space Sciences, Florida Institute of Technology, Melbourne FL 32901, USA}
\affiliation{Department of Physics and Institute for Fusion Studies, The University of Texas at Austin, Austin, TX 78712, USA}
\author{Andreas M. Hein}
\affiliation{SnT, University of Luxembourg, 29 Avenue J.F Kennedy, L-1855, Luxembourg}
\affiliation{Initiative for Interstellar Studies (i4is) 27/29 South Lambeth Road London, SW8 1SZ United Kingdom}
\section{Introduction} \label{SecIntro}
Ever since the International Astronomical Union (IAU) modified the status of Pluto to that of a ``dwarf planet'' from its initial designation as one of nine planets in the Solar system,\footnote{\url{https://www.iau.org/public/themes/pluto/}} the Solar system has been held to comprise eight \emph{known} planets. The debate over what exactly constitutes a ``planet'' is still ongoing \citep[e.g.,][]{MGS22}, although many scientists adhere to the taxonomic classification propounded by the IAU.
Since Pluto's status was updated, the hunt for an elusive ``Planet Nine'' far beyond the orbit of Pluto -- alternatively known as ``Planet X'' -- has intensified, although it must be recognized that this endeavor long predates the aforementioned IAU decision. A historical overview of this field can be found in the recent publication by \citet{BAB19}. The most crucial modern development in this realm arguably concerns the work of \citet{BB16}: the authors theorized that the (ostensibly) improbable orbital clustering of certain Kuiper Belt Objects (KBOs) is strikingly explainable through the existence of a distant planet with mass $\sim 10\,M_\oplus$ on an eccentric orbit. Earlier notable developments in the twenty-first century relating to Planet 9 are reported in \citet{DD14}, \citet{TS14}, and \citet{GSB15}, among other papers.
This noteworthy proposal by \citet{BB16} initiated a plethora of publications centered on myriad aspects of the putative Planet 9. The areas explored in detail include: constraints on the location based on orbital dynamics and/or data from telescopes and spacecraft trajectories \citep{BBat16,DD16,FLMG,GSL16,HP16,MVW16,LI17,ML17,LHPH,CK20,DM20,FDB20,NAB21,BB22,BBB21,MRR22,SN22}, analyses of the clustering of KBOs \citep{BatB16,ST16,MEB17,BB19,DD22}, potential formation mechanisms \citep{BK16,KB16,MRD16}, avenues for detection \citep{CHK16,FML16,GSL16,LM16,JS17,RL19,RL20,AK21,SS22}, impact on dynamical stability and evolution of the Solar system \citep{DDA16,LA16,DV16,BM17,CS21}, effect(s) on solar and planetary obliquities \citep{DL16,BAK17,GDM17,LL22}, and other cognate topics \citep{PBV18,KBA20,LX20,BB21,OT21,NYG22}.
We caution, however, that the exact status and nature of the putative Planet 9 remains unresolved. Some groups have critically reassessed the evidence for the clustering of KBOs \citep{SKB17,BBS20,NGL21,BBS22}, while others have sought to explain the purported clustering through physical mechanisms independent of Planet 9 \citep{ZCT20,ZTC21}. It was even suggested that Planet 9 may be a primordial black hole \citep{SU20}, and methods for detecting this class of objects in the Solar system have been proposed and debated \citep{EW20,HL20,LR20,SL20,MAC22,NHW22}. The possibility that Planet 9 might represent a dark matter condensate of some kind was also advanced \citep{SKK16}.
Although the location and characteristics of Planet 9 -- and even its very existence -- are currently not confirmed, it is nevertheless advantageous to investigate the feasibility of a mission to this object for two major reasons. First, as described hereafter in Section \ref{SciReturn}, launching a mission to Planet 9 would yield a wealth of scientific information about its properties, plausibly more than what can be discerned through observations from ground- and space-based telescopes. Second, irrespective of whether Planet 9 as predicted by \citet{BB16} is real, formulating the specifics of such a mission could serve as a valuable starting point for missions to trans-Neptunian objects (TNOs) \citep{GV21} at similar locations (see also \citealt{MSD11,ZSF21,ZV22}) if promising targets for exploration of this kind are identified.
Therefore, the objective of this paper is to analyze various trajectories and mission designs to the hypothetical Planet 9. To err on the side of caution, we assume that \emph{current} technologies are deployed for the most part, although we comment briefly on the utility of near-future technologies in Sections \ref{SecResults} and \ref{SecDisc}. The orbital parameters for Planet 9 are adopted from \citet{BroB21}, which represents a recent estimate of this object's location. To the best of our knowledge, the only other study that analyzes trajectories to Planet 9 is \citet{CFP22}. Our paper differs from this work in the following major respects: (a) we adopt a conservative estimate for Planet 9's distance to account for the potential necessity of longer mission durations, (b) we do not restrict ourselves to just trajectories but also tackle concrete mission architectures, and (c) we incorporate alternative flyby maneuvers.
\section{Science return and benefits from missions to Planet 9}\label{SciReturn}
Before embarking on summarizing the science benefits that would accrue from sending a mission to Planet 9, it is helpful to draw on an analogy with Pluto. Even though Pluto was discovered in 1930 and much information about this object was garnered from telescope observations, a wealth of new data surpassing the old was yielded by the \emph{New Horizons} flyby of the Pluto–Charon system \citep{SGM18,SMG21}. In particular, the \emph{New Horizons} mission shed light on Pluto's geological diversity, especially its atmosphere, complex surface, and interior \citep{SBE15,GSE16,NMH16}. Additionally, the mission provided insights into Charon and Pluto's small satellites \citep{WBB16}, as well as how this system may have formed.
Note that Pluto has a semimajor axis of $a_P \approx 40$ AU, whereas the corresponding value for Planet 9 is presumed to be nearly ten times higher at $a_9 \sim 380$ AU \citep{BroB21}. The radius of Pluto is $R_P \approx 0.19\,R_\oplus$ while that of Planet 9 may be $R_9 \sim 1.5\,R_\oplus$ (refer to \citealt{BroB21}). Hence, if we posit that both these objects have a similar Bond albedo (which is however not guaranteed to hold true), the ratio of reflected light fluxes ($\delta_R$) from Planet 9 and Pluto at Earth's location can be estimated by using
\begin{equation}\label{deltaR}
\delta_R \sim \left(\frac{R_9}{R_P}\right)^2 \left(\frac{a_9}{a_P}\right)^{-4} \sim 7.7 \times 10^{-3}.
\end{equation}
The ratio of thermal emission fluxes ($\delta_E$) from Planet 9 and Pluto at Earth, assuming they possess similar temperatures (see \citealt{LM16}),\footnote{Despite the fact that Planet 9 lies far beyond the orbit of Pluto and might thus be expected to have a lower temperature, its larger size would translate to higher internal heating (thereby raising the temperature), and a non-negligible greenhouse effect cannot be dismissed altogether.} is
\begin{equation}\label{deltaE}
\delta_E \sim \left(\frac{R_9}{R_P}\right)^2 \left(\frac{a_9}{a_P}\right)^{-2} \sim 0.7.
\end{equation}
Although these calculations are evidently heuristic, they indicate that it is not just harder to detect Planet 9 (compared to Pluto), but also that it may be more challenging to infer its properties from Earth. Therefore, in view of the preceding paragraph (where Pluto was discussed), it does not seem unreasonable to contend that the scientific yield from a flyby mission to Planet 9 -- to say nothing of rendezvous or sample return missions -- could significantly exceed the scientific yield from telescope observations. As per (\ref{deltaR}) and (\ref{deltaE}), the relative gain in scientific return from sending a mission to Planet 9 might even exceed that of Pluto since the former is apparently harder to characterize via remote observations from Earth-based telescopes.
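The two ratios above are easily verified numerically; the following minimal sketch simply re-evaluates Eqs.~(\ref{deltaR}) and (\ref{deltaE}) with the radii and semimajor axes quoted in the text:

```python
# Flux ratios of Planet 9 relative to Pluto, as seen from Earth.
R_P, a_P = 0.19, 40.0   # Pluto: radius (Earth radii), semimajor axis (AU)
R_9, a_9 = 1.5, 380.0   # Planet 9: radius (Earth radii), semimajor axis (AU)

# Reflected sunlight suffers two inverse-square losses (out and back): a^-4.
delta_R = (R_9 / R_P) ** 2 * (a_9 / a_P) ** -4

# Thermal emission suffers only the one-way inverse-square loss: a^-2.
delta_E = (R_9 / R_P) ** 2 * (a_9 / a_P) ** -2

print(f"delta_R ~ {delta_R:.1e}, delta_E ~ {delta_E:.1f}")  # ~7.7e-3, ~0.7
```

This reproduces the quoted values of $\delta_R \sim 7.7 \times 10^{-3}$ and $\delta_E \sim 0.7$.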
As this work seeks to assess viable mission trajectories and designs, it is not our goal to furnish an exhaustive analysis of the myriad areas that could witness major advances from a mission to Planet 9. With this caveat in mind, we list a few potential avenues below.
\begin{itemize}
\item As per current estimates, Planet 9 is predicted to have a mass of $M_9 \sim 6\,M_\oplus$ \citep{BroB21}. Planets belonging to this class -- the mass range between Earth and Neptune -- are hitherto unknown in the Solar system, but have often been detected in extrasolar systems \citep{FP18}, albeit with typically high irradiances (and temperatures). There is considerable uncertainty surrounding their composition and structure, in particular whether they are endowed with massive H$_2$/He envelopes and/or substantial H$_2$O inventories \citep{JM18,ZJS19,MDA20,VGH20,HDS21,MPC21}. Hence, the characterization of Planet 9 could provide an unparalleled opportunity to investigate planets from this crucial category and acquire in-depth data.
\item Earth-based observations may conceivably yield information about Planet 9's internal heat budget \citep{CHK16} and its atmospheric composition \citep{FML16}, but obtaining spatially resolved data of its surface (assuming it is well-defined) and its interior seems unlikely. In contrast, the science payload of $\sim 30$ kg of \emph{New Horizons} \citep{WGT08} was sufficient to glean vital insights about the surface and interior of Pluto \citep{SGM18}, like its potential subsurface ocean \citep{NMH16}. With a similar payload and flyby distance, a great deal might be inferred about the physical, chemical, and geological processes and composition of Planet 9 specifically, as well as TNOs (see \citealt{HBS21}).
\item Many hypotheses have been propounded to explain how Planet 9 formed at its presumed location or migrated there \citep{BK16,KB16,MRD16}. Detailed spacecraft observations could enable us to not only differentiate between these hypotheses but also constrain the formation and dynamical evolution of the outer Solar system. For instance, if instrumentation aboard the spacecraft has the capacity to determine oxygen isotope ratios \citep{NG12,RNC03,IAG20}, such measurements might aid in distinguishing whether Planet 9 was assembled \emph{in situ} in the Solar system or was captured from another star; the latter scenario was theoretically studied by \citet{MRD16}. Likewise, isotopic measurements of hydrogen, nitrogen, carbon, and sulfur -- analogous to those undertaken for comets \citep{JMH09,BCC15} -- are valuable for unraveling the formation history as well as subsequent physical and chemical processes responsible for fractionation on Planet 9.
\item Last, but not least, depending on the scientific payload along with the nature of Planet 9's atmosphere and surface (if existent), it may be feasible to survey them for biosignatures. Although the majority of biosignatures have been formulated for Earth-like planets \citep{JLG17,LL21}, there is increasing emphasis on identifying biosignatures that could arise on ``exotic'' worlds \citep{ISM20,LL21}, such as those with anoxic atmospheres dominated by H$_2$ \citep{SBH13,SSR20,MPC21,WSG21,HSP22}, which is a conceivable composition for Planet 9 \citep{FML16}.
\end{itemize}
We underscore, in closing, that we have only described a select few avenues where a mission to Planet 9 is anticipated to substantially advance our knowledge of planetary science. Nonetheless, we hope that this primer serves as an adequate basis for motivating the rest of our technical exposition in the forthcoming sections.
\section{Approach}
The key components of our framework and approach are elucidated in this section.
\subsection{Orbital Mechanics}
Evidently, in order to perform any meaningful analysis, we must have some notion of Planet 9's present position; its velocity and orbit are less relevant. The latter two parameters are of secondary importance because Planet 9's great distance translates to a slow heliocentric speed ($\sim 1.4 \si{.km.s^{-1}}$) and a low mean motion ($\sim 0.04 \si{.\degree. yr^{-1}}$). Consequently, since the orbit of Planet 9 would only be required to extrapolate the trajectory forwards, such an extrapolation would yield dubious benefits, especially as the estimates of these parameters are subject to large uncertainties.
\begin{table}
\caption{Ephemeris and orbital elements adopted for calculating Planet 9 missions}
\label{table:Planet9}
\hspace{-0.8cm}
\begin{tabular}{|c|c|}
\hline
\textbf{Ephemeris} & \\ \hline
Sun Distance ($\si{AU}$) & 450 \\ \hline
RA ($\si{\degree}$) & 65 \\ \hline
DEC ($\si{\degree}$) & 20 \\ \hline
\begin{tabular}[c]{@{}c@{}}Mean Motion \\ ($\si{\degree .yr^{-1}}$)\end{tabular} & 0.04 \\ \hline
\textbf{Orbital Parameters} & Relative to Ecliptic \\ \hline
Perihelion ($\si{AU}$) & 450 \\ \hline
Eccentricity & 0.0 \\ \hline
\begin{tabular}[c]{@{}c@{}}Argument of \\ Perihelion ($\si{\degree}$)\end{tabular} & 246 \\ \hline
\begin{tabular}[c]{@{}c@{}}Longitude of \\ Ascending Node ($\si{\degree}$)\end{tabular} & 180 \\ \hline
Inclination ($\si{\degree}$) & 1.43 \\ \hline
Epoch of Perihelion & 2035 JAN 01 \\ \hline
\end{tabular}
\end{table}
To reinforce the prior argument, we comment briefly on Planet 9's change of position over the duration of the spacecraft's full journey. As demonstrated hereafter, we end up with overall flight times of around $60$ years for chemical propulsion, which from Table \ref{table:Planet9} we can calculate will result in a change of longitude of about $2.4 \si{\degree}$, well within the range of longitudinal uncertainty of Planet 9 as per current theoretical and empirical constraints. Nevertheless, for the purposes of the research conducted here, some fiducial orbital parameters are required, in addition to Planet 9's position. Values adopted for the position and velocity of Planet 9, as well as its orbital elements, are provided in Table \ref{table:Planet9}.
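The heliocentric speed, mean motion, and longitude drift quoted above can be cross-checked with Kepler's laws; the sketch below assumes a circular orbit at the tabulated 450 AU distance:

```python
import math

# Sanity checks on the Planet 9 figures quoted in the text, assuming a
# circular heliocentric orbit at r = 450 AU (Table values).
GM_SUN = 1.32712440018e20   # m^3 s^-2, solar gravitational parameter
AU = 1.495978707e11         # m
SECONDS_PER_YEAR = 3.15576e7
r = 450 * AU

# Heliocentric circular speed: ~1.4 km/s as stated.
v_circ = math.sqrt(GM_SUN / r) / 1e3   # km/s

# Mean motion: 360 deg per orbital period; ~0.04 deg/yr as stated.
T_yr = 2 * math.pi * math.sqrt(r**3 / GM_SUN) / SECONDS_PER_YEAR
n = 360.0 / T_yr                       # deg/yr

# Longitude drift over a ~60 yr chemical-propulsion flight.
drift = n * 60

# Close to the quoted ~1.4 km/s, ~0.04 deg/yr, and ~2.4 deg.
print(f"v = {v_circ:.2f} km/s, n = {n:.3f} deg/yr, drift = {drift:.1f} deg")
```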
For the analysis here, save for one exception, the earliest launch date for a mission is specified as 2030 JAN 01 and the latest as 2043 JAN 01, in order to permit ample opportunity for an entire Jovian cycle of 11.9 years to complete. Given that Jupiter completes about a twelfth of a cycle per year (approximately $30 \si{\degree}$), for an opportunity arising in a particular year, Jupiter may be displaced from optimum by up to $30 \si{\degree}$. For this reason it may occasionally be of interest to compare the results of a ``true'' Jupiter location within the range mentioned above against a ``theoretically optimal'' Jupiter position, were Jupiter able to occupy any location in its orbit irrespective of its positional dependency on time.
Note that the launch range above, 2030--2043, aligns well with the timeline elaborated in the \emph{Interstellar Probe} concept report \citep{9438249,MWG22}. This time interval implicitly assumes that astronomers have discovered Planet 9 by 2030 and have accurately determined its orbital elements, thus allowing at least 8 years for telescopes to (1) detect the planet and (2) conduct observations to determine its orbit.
\subsection{Mission Trajectories}
The trajectory studies here were conducted using Optimum Interplanetary Trajectory Software (OITS); refer to \citet{AH22}. At its core is an algorithm which solves the Lambert problem using the Universal Variable Formulation as elaborated in \cite{Bate1971}. Ignoring multiple orbital cycles, there are two solutions to this problem, ``short way'' (sw) and ``long way'' (lw). These constitute two different orbits, with different orbital parameters, but with the same plane, defined by the two position vectors. Thus if the change in true-anomaly along the short way is designated $\theta$ where $0 \leq \theta \leq \pi$, then the long way corresponds to $2\pi - \theta$. By exploiting the NASA SPICE toolkit,\footnote{\url{https://naif.jpl.nasa.gov/naif/toolkit.html}} in conjunction with the selection of appropriate ephemeris data available from the NASA Horizons service (in the form of binary SPICE kernel files), extremely accurate ephemerides of a particular object as a function of time can be determined.
When dealing with $n > 2$ celestial bodies, the Lambert problem can be solved for each consecutive pair of celestial bodies along the trajectory. This method leads to $2^{n-1}$ possible permutations of the interplanetary trajectory, comprising long ways and short ways. For each non-terminal encounter, there are four possible encounter trajectories with respect to the celestial body: an arrival pair of hyperbolic excess velocities (corresponding to sw \& lw of the preceding interplanetary trajectory) and a departure pair (sw \& lw for the subsequent one). Connecting hyperbolae with respect to the body are then computed for each arrival and departure combination, assuming a single $\Delta V$ is applied at the periapsis point, aligned with the plane defined by these hyperbolic excess velocities and tangential to the trajectory.
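The sw/lw bookkeeping described above can be illustrated with a few lines of Python; \texttt{lambert\_branches} is a hypothetical helper for enumerating branch assignments, not part of OITS:

```python
from itertools import product

def lambert_branches(bodies):
    """Enumerate all short-way/long-way assignments for a body sequence.

    With n bodies there are n-1 Lambert legs, each solvable the "short
    way" (sw) or "long way" (lw), giving 2**(n-1) candidate trajectories.
    Each result maps a (departure, arrival) leg to its chosen branch.
    """
    legs = list(zip(bodies, bodies[1:]))
    return [dict(zip(legs, choice))
            for choice in product(("sw", "lw"), repeat=len(legs))]

routes = lambert_branches(["Earth", "Jupiter", "P9"])
print(len(routes))  # 2**(3-1) = 4 candidate trajectories
```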
Two Non-Linear Programming (NLP) solvers were used for this study, namely, NOMAD and MIDACO.
To reach Planet 9 or alternatively a Sednoid, three distinct trajectory strategies are considered here, shown in Table \ref{table:Trajectory}. For OITS, a Solar Oberth maneuver (SOM) can be modeled as an ``Intermediate Point'' \citep{AH22} where the heliocentric radial distance is specified, but the heliocentric longitude and latitude are additional optimization parameters for OITS. This framework can also model a Deep Space Maneuver (DSM).
\begin{table*}[]
\caption{Trajectory Options for Planet 9}
\label{table:Trajectory}
\begin{tabular}{|c|c|c|c|}
\hline
& \textbf{Trajectory Option} & \textbf{Trajectory Description} & \textbf{Abbreviation} \\ \hline
1 & Passive Jupiter flyby & Pure Jupiter GA without thrust & PJGA \\ \hline
2 & Powered flyby of Jupiter & Combined Jupiter GA with Jupiter Oberth maneuver - JOM & JOM \\ \hline
3 & Powered flyby of the sun & Solar Oberth maneuver - SOM & SOM \\ \hline
\end{tabular}
\end{table*}
Furthermore, depending on the context, two different optimization criteria (i.e., the so-called ``objective functions'') can then be applied in OITS -- either minimizing $\Delta V$ or flight duration. With regards to the former, a constraint is required for the overall flight duration, and with respect to the latter, OITS allows for a separate $\Delta V$ constraint to be specified at each encounter.
The terminology adopted herein for abbreviating the different mission trajectory scenarios departs slightly from tradition as both home planet (Earth) and target planet (P9) are included in the sequence. Thus E-J-P9 refers to a trajectory from Earth to Jupiter to Planet 9. This extension of the current standard is so that the abbreviation more accurately reflects the precise inputs required by OITS to generate the trajectory. This trajectory, E-J-P9, is an example of a Jupiter Oberth maneuver (JOM) to Planet 9. As a further example, in the case of a SOM we have E-J-3SR-P9, where 3SR indicates the perihelion distance of the SOM from the center of the Sun -- to wit, $3$ Solar radii ($3\,R_\odot$); note that $1\,R_\odot$ (1SR) is equivalent to 0.00465 $\si{AU}$.
A note on $V_{\infty}$ Leveraging Maneuvers (VLMs) is warranted. These maneuvers represent a mechanism by which the $\Delta V$ necessary to travel to Jupiter can be reduced. In addition, VLMs permit longer duration launch windows as well as a lower magnitude $C_{3}$ at launch, although the latter can be offset to an extent by the increased $\Delta V$ required at the Earth return. They involve a launcher injection into a heliocentric elliptical orbit with a DSM at aphelion. A return to Earth then ensues, whereupon a powered gravity assist (GA) is executed to transfer to Jupiter. Two aphelion distances are investigated in this study, 2.2 \si{AU} and 3.2 \si{AU}, corresponding to Earth-return resonances of 2 years and 3 years, respectively.
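The correspondence between the resonance period and the aphelion distance quoted above follows from Kepler's third law; a minimal check, assuming a 1 AU perihelion for the leveraging orbit:

```python
def vlm_aphelion(k_years, perihelion_au=1.0):
    """Aphelion (AU) of a VLM leveraging orbit with a k-year Earth-return
    resonance: Kepler's third law gives a = k**(2/3) AU, and for a
    perihelion at 1 AU the aphelion is 2a - perihelion."""
    a = k_years ** (2.0 / 3.0)          # semimajor axis in AU
    return 2 * a - perihelion_au

for k in (2, 3):
    print(f"{k}-year resonance: aphelion ~ {vlm_aphelion(k):.1f} AU")
# ~2.2 AU and ~3.2 AU, matching the two cases studied in the text
```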
The minimum periapsis altitude at each planetary encounter is chosen as 200 \si{km}, thus implying that the NLP will seek trajectories for which the $\Delta V$ application occurs at altitudes greater than this.
\subsection{Heat Shield Calculations}\label{SSecHeatS}
An equation is required here that relates the heat shield mass, $M_{hs}$, to:
\begin{enumerate}
\item the total mass of the spacecraft, $M_{sc}$
\item the solar distance $R_{sc}$ of the spacecraft at perihelion
\end{enumerate}
To this end, the relevant variables can be scaled by benchmarking against a tried and tested reference for which data is readily available, such as the Parker Solar Probe (PSP) \citep{DEG19}.
Let us denote the mass of the PSP as $M_{psp}$ and its closest approach to the sun as $R_{psp}$. It is reasonable to suppose that the mass of the heat shield for a spacecraft, $M_{hs}$, must be proportional to the surface area of the spacecraft (as it must encompass a fixed fraction of the spacecraft), so we end up with
\begin{equation}\label{MassScalv1}
M_{hs} \propto \left( \frac{M_{sc}}{M_{psp}} \right)^{2/3}.
\end{equation}
Thus, by means of this heuristic scaling, we have partially accomplished item 1, with the rest of the procedure described hereafter. The exponent of $2/3$ appearing in the RHS of the above equation is a consequence of the area-volume scaling and the linear volume-mass relationship for fixed mass density.
For tackling item 2, let us now denote the temperature of the heat shield exposed to the sun as $T$ and the solar flux as $S$; the Stefan--Boltzmann law then implies
\begin{equation}
T \propto S^{1/4}.
\end{equation}
Furthermore, the solar flux $S$ falls off with the inverse square of the distance as $R_{sc}^{-2}$, so we arrive at
\begin{equation}\label{TRrel}
T \propto R_{sc}^{-1/2}.
\end{equation}
In addition, presuming that heat transfer through the shield is via conduction, the conducted flux scales as $T/X_{hs}$, where $X_{hs}$ is the heat shield thickness; holding this flux at the protected face fixed, it follows that
\begin{equation}
X_{hs} \propto T,
\end{equation}
for fixed thermal conductivity. Combining the above equation with (\ref{TRrel}) yields
\begin{equation}\label{ThickShield}
X_{hs} \propto R_{sc}^{-1/2}.
\end{equation}
For a given surface area, the mass of the heat shield is linearly proportional to its thickness $X_{hs}$, which leads us to
\begin{equation}\label{MassScalv2}
M_{hs} \propto X_{hs}.
\end{equation}
Thus, on combining (\ref{MassScalv1}), (\ref{ThickShield}), and (\ref{MassScalv2}), we finally obtain the relationship for the shield mass,
\begin{equation}\label{MassScal}
M_{hs} = M_{psp,hs} \left(\frac{R_{psp}}{R_{sc}} \right)^{1/2} \left(\frac{M_{sc}}{M_{psp}} \right)^{2/3},
\end{equation}
where $M_{psp,hs}$ is the PSP's heat shield mass, completing the implementation of items 1 and 2 delineated previously.
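Equation (\ref{MassScal}) is straightforward to apply in code. In the sketch below, the PSP reference values (total mass, heat shield mass, and perihelion) are approximate public figures adopted purely as illustrative assumptions; they are not taken from this paper:

```python
R_SUN_AU = 0.00465  # 1 solar radius in AU, as given in the text

def heat_shield_mass(m_sc, r_sc_au,
                     m_psp=685.0,                 # assumed PSP mass, kg
                     m_psp_hs=73.0,               # assumed PSP shield mass, kg
                     r_psp_au=9.86 * R_SUN_AU):   # assumed PSP perihelion
    """Scaled heat shield mass (kg) for a spacecraft of mass m_sc (kg)
    at perihelion r_sc_au (AU), benchmarked against the PSP via
    M_hs = M_psp,hs * (R_psp / R_sc)**(1/2) * (M_sc / M_psp)**(2/3)."""
    return m_psp_hs * (r_psp_au / r_sc_au) ** 0.5 * (m_sc / m_psp) ** (2.0 / 3.0)

# Example: a 1000 kg spacecraft performing a 3-solar-radii Oberth burn.
print(f"{heat_shield_mass(1000.0, 3 * R_SUN_AU):.0f} kg")  # 170 kg
```

Note that at the reference point ($M_{sc} = M_{psp}$, $R_{sc} = R_{psp}$) the function returns the PSP shield mass exactly, as the scaling demands.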
\subsection{Payload Masses for Chemical Propulsion}
To determine the corresponding payload masses, five parameters are required in total:
\begin{enumerate}
\item launch vehicle used
\item the ``characteristic energy'' $C_{3}$ at launch
\item the in-flight $\Delta V$ required leading up to the Oberth maneuver
\item the $\Delta V$ required at the Oberth maneuver (which might be either JOM or SOM)
\item the performance of the propulsion systems needed to generate items 3 \& 4 above
\end{enumerate}
As far as (1) \& (2) are concerned, the only launch vehicle capable of delivering a sufficiently massive payload to the Earth escape orbits studied here (thence allowing the installation of two rocket stages for the spacecraft's Oberth maneuver), and for which data is available, is NASA's Space Launch System (SLS) Block 2 variant.\footnote{\url{https://www.nasa.gov/exploration/systems/sls/fs/sls.html}} The calculations can be readily updated if and when data for viable alternatives becomes available.
Generally, where the trajectory option studied requires item (3) above, a dedicated restartable stage with a hypergolic liquid propellant combination of MMH and N$_2$O$_4$ is assumed for all in-flight maneuvers, with $I_{sp} = 341\,\si{s}$ and a dry-mass to wet-mass ratio of $p = 0.1$. With regard to (4) \& (5), either one solid propellant stage from the choice of STAR 75, STAR 63F or STAR 48B is selected, or, if there is sufficient spare capacity, two stages comprising a pair of these boosters are allocated. The precise combination depends on the mass available after accounting for items (1)-(3), as well as the required magnitude of (4) necessary for the Oberth maneuver.
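The staging arithmetic implied by items (3)-(5) reduces to repeated applications of the Tsiolkovsky rocket equation. The sketch below uses the stated $I_{sp} = 341$ s and $p = 0.1$; the particular 6000 kg / 3000 kg mass split is an illustrative assumption, not a figure from the text:

```python
import math

G0 = 9.80665  # m/s^2, standard gravity

def stage_delta_v(m_wet, m_payload, isp=341.0, p=0.1):
    """Tsiolkovsky delta-v (km/s) for one stage of wet mass m_wet (kg)
    pushing a payload m_payload (kg); the stage's dry mass is p * m_wet."""
    m0 = m_wet + m_payload            # ignition mass
    mf = p * m_wet + m_payload        # burnout mass (dry stage + payload)
    return isp * G0 * math.log(m0 / mf) / 1e3

# Example: a 6000 kg hypergolic stage delivering a 3000 kg spacecraft onward.
print(f"{stage_delta_v(6000.0, 3000.0):.2f} km/s")  # 3.06 km/s
```

Chaining such calls, stage by stage, is how the payload masses reported in Section \ref{SecResults} can be bookkept.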
Finally, it should be emphasized that only flyby missions of Planet 9 are considered here. A rendezvous mission, where the spacecraft applies an extra thrust to match velocities with Planet 9, would evidently require more $\Delta V$, which due to Planet 9's low speed would be roughly equal to the spacecraft's heliocentric hyperbolic excess speed. Where the objective function is to minimize flight duration, conducting a rendezvous would have little consequence on the optimal trajectories solved by OITS. Where the purpose is to minimize $\Delta V$, the effect would be to increase the optimal $\Delta V$ achieved by OITS by a magnitude nearly equal to the spacecraft's heliocentric hyperbolic excess speed.
\section{Results}\label{SecResults}
In this section, we describe the chief results for solid chemical propellant as well as some alternatives.
\subsection{Passive Jupiter GA (PJGA)}
The objective function specified here is flight duration. An example PJGA trajectory is provided in Figure \ref{fig:Passive2}. As in the \emph{Interstellar Probe} concept report \citep{9438249}, an SLS Block 2 is utilized to deliver the spacecraft to Jupiter. In \citet{9438249}, a combined Atlas V Centaur third stage and STAR 48B fourth stage is deployed as the baseline mission to leverage a $C_{3} = 304.07 \si{.km^{2}.s^{-2}}$, with an eventual payload mass to Jupiter of $\sim 860 \si{.kg}$. Figure \ref{fig:Passive1} depicts the total flight duration to Planet 9 in years against launch $C_{3}$. Observe that the flight duration is just over 60 years when the \emph{Interstellar Probe} baseline mission parameters are employed. Table \ref{table:Passive_Data} provides the numerical data for this trajectory, with an arrival speed at Planet 9 of $35.4 \,\si{km.s^{-1}}$, which amounts to $\sim 7.47\, \si{AU/yr}$ at $450 \si{AU}$.
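The unit conversion behind the quoted arrival speed (km/s to AU/yr) is a one-liner:

```python
# Convert a heliocentric speed from km/s to AU/yr (Julian year).
SECONDS_PER_YEAR = 3.15576e7
KM_PER_AU = 1.495978707e8

def kms_to_au_per_yr(v_kms):
    return v_kms * SECONDS_PER_YEAR / KM_PER_AU

# Arrival speed at Planet 9 quoted in the text:
print(f"{kms_to_au_per_yr(35.4):.2f} AU/yr")  # 7.47 AU/yr
```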
\subsection{JOM \& SOM}
The results of the SOM analysis are presented first. In this case, the objective function applied is flight duration. Two distinct scenarios are considered:
\begin{enumerate}
\item Figure \ref{fig:Pass_SOM} depicts optimal flight duration and heliocentric hyperbolic excess speed adopting a SOM with $\Delta V < 0.97 \si{.km.s^{-1}}$ preceded by a passive Jupiter flyby.
\item Figure \ref{fig:Pow_SOM} provides the same parameters for a SOM with $\Delta V < 3.0 \si{.km.s^{-1}}$ preceded by a powered Jupiter flyby with $\Delta V$ at Jupiter of $< 2.79 \si{.km.s^{-1}}$.
\end{enumerate}
The $C_{3}$ and $\Delta V$ values for the first scenario stated above are chosen to match those in Figure D-2 of \cite{9438249}, which reaches a SOM perihelion of 3SR. Note that in Figure \ref{fig:Pass_SOM}, the hyperbolic excess at 3SR is 4.4 $\si{AU/yr}$, lower than the approximately 4.8 $\si{AU/yr}$ outlined in Figure D-2. This discrepancy is most likely a consequence of the \emph{Interstellar Probe} report \citep{9438249} targeting an optimal solar latitude, whereas here we assume a latitude of Planet 9 of roughly -1.4$\si{\degree}$. For the 3SR case, the payload mass to Planet 9 -- invoked for generating Figure \ref{fig:Pass_SOM} -- is taken from the \emph{Interstellar Probe} report and equals 900 $\si{kg}$, after deducting the heat shield mass (653 $\si{kg}$). As the perihelion distance increases, the solar flux reduces accordingly, indicating that for perihelia $>$ 3SR, the spacecraft mass to Planet 9 will exceed 900 $\si{kg}$ (see Section \ref{SSecHeatS}).
Case 2 introduced above, namely Figure \ref{fig:Pow_SOM}, illustrates the benefit of delivering a higher kick at the SOM, i.e. 3.0 km/s. In order to leverage such a kick, we need a higher spacecraft mass at Jupiter, which ultimately necessitates a lower launch $C_{3}$, in this case 100 $\si{km^{2}.s^{-2}}$, delivering a mass of 9000 $\si{kg}$ to Jupiter (utilizing an SLS Block 2 with a Centaur D upper stage). A dedicated liquid propellant stage is exploited to conduct the burn at perijove of $\Delta V = 2.79 \si{.km.s^{-1}}$, leaving a mass of 3000 $\si{kg}$ for the SOM. A STAR 49B can subsequently provide the kick of 3.0 km/s, which leaves for the 3SR case a mass of $\sim 520 \si{kg}$ after the heat shield is deducted using (\ref{MassScal}). Larger masses are achievable for higher perihelia, but with commensurately longer mission durations.
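The perijove burn budget above can be sanity-checked with the Tsiolkovsky rocket equation. The specific impulse of the dedicated liquid stage is not stated in the text, so $I_{sp} = 320$ s is assumed here as a representative storable-bipropellant value:

```python
import math

# Hedged sketch of the perijove burn mass budget via the Tsiolkovsky
# rocket equation. Isp = 320 s is an ASSUMED storable-bipropellant
# value; it is not given in the text.
G0 = 9.80665          # standard gravity, m/s^2
ISP = 320.0           # assumed stage specific impulse, s
M0 = 9000.0           # mass delivered to Jupiter, kg (from the text)
DV = 2790.0           # perijove burn, m/s (from the text)

m_final = M0 * math.exp(-DV / (G0 * ISP))
print(round(m_final))  # ~3700 kg, including the stage's dry mass
```

The roughly 3700 kg result includes the spent stage's dry mass, so the quoted 3000 kg available for the SOM is consistent with a stage dry mass of a few hundred kg.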
A summary of all mission scenarios, JOM \& SOM is outlined in Figures \ref{fig:DV_JOMSOM}, \ref{fig:PM_JOMSOM} \& \ref{fig:HS_JOMSOM}. Referring to Figure \ref{fig:PM_JOMSOM}, observe that the fastest trajectories to Planet 9 are SOM scenarios that exploit a leveraging maneuver of some kind, though the corresponding price is a lower payload mass. The overall fastest trajectory to Planet 9 is E-2.2-E-J-3SR-P9, yielding arrival after only 37 years. However, it also has the lowest total payload mass of 133 $\si{kg}$, and when the required heat shield mass is deducted (consult Figure \ref{fig:HS_JOMSOM}), we find this mission is actually rendered infeasible using PSP heat shield technology. This result emphasizes the importance of comparing like with like when gauging JOM \& SOM trajectories. We shall thus reference Figure \ref{fig:HS_JOMSOM} hereon.
As seen from this figure, ignoring those missions which are dangerously close to negative payload masses, the best performance with respect to flight duration is E-3.2-E-J-7SR-P9, requiring 47 years, which is a full decade longer than the mission we excluded above. After this comes the trajectory E-J-3SR-P9 -- which is not the same case as that elucidated in Figure D-2 of the \emph{Interstellar Probe} report \citep{9438249}, as it corresponds to the alternative $C_{3}$ and $\Delta V$ allocation outlined in Figure \ref{fig:Pow_SOM} and delineated previously -- with a flight duration of approximately 50 years. For comparison, the SOM equivalent to \citet[Figure D-2]{9438249} is provided in Figure \ref{fig:Pass_SOM} and would take over 100 years.
\begin{table*}[]
\caption{Numerical Data for E-J-P9 (PJGA) with $C_{3} = 304.07 \si{.km^{2}.s^{-2}}$}
\label{table:Passive_Data}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& 1 & 2 & 3 \\ \hline
Encounter & Earth & Jupiter & Planet 9 @ 450 \si{AU} \\ \hline
Time & 2033 MAY 20 & 2034 FEB 18 & 2093 DEC 01 \\ \hline
\begin{tabular}[c]{@{}c@{}}Arrival Speed \\ (\si{km.s^{-1}})\end{tabular} & 0.0000 & 27.1881 & 35.4164 \\ \hline
\begin{tabular}[c]{@{}c@{}}Departure Speed \\ (\si{km.s^{-1}})\end{tabular} & 17.4376 & 27.1902 & 35.4164 \\ \hline
\begin{tabular}[c]{@{}c@{}}DeltaV \\ (\si{km.s^{-1}})\end{tabular} & 17.4376 & 0.0010 & 0.0000 \\ \hline
\begin{tabular}[c]{@{}c@{}}Cumulative DeltaV \\ (\si{km.s^{-1}})\end{tabular} & 17.4376 & 17.4386 & 17.4386 \\ \hline
\begin{tabular}[c]{@{}c@{}}Altitude Periapsis \\ (\si{km})\end{tabular} & N/A & 34110.2 & N/A \\ \hline
\end{tabular}
\end{table*}
\subsection{Using \texorpdfstring{$LH_2 \,\&\, LOX$}{LH2 and LOX} Propellants}\label{SSecCryo}
With a higher specific impulse than solid propellant ($I_{sp} = 451 \si{s}$ as opposed to $\sim 300 \si{s}$ for solid), the combination of $LH_{2}/LOX$ liquid cryogens evinces the potential to dramatically enhance the performance of interplanetary missions. The key issue is one of achieving sufficiently low temperatures for long-term storage in space where the solar environment can lead to significant heating and boil-off, particularly of $LH_{2}$. One potential solution requiring no mass-costly on-board cryocoolers is to subcool the $LH_{2}$ isobarically to temperatures of less than $16\, \si{K}$ whilst the spacecraft is installed on the launcher using compact ground support equipment \citep{Mustafi2009}. An outstanding issue insofar as this technology is concerned is achieving sufficient compactness to enable the required level of refrigeration to be achieved on the launch pad.
In the spirit of the analysis conducted in \cite{MDF16,CPTOPS}, we shall assume the required level of on-ground subcooling can indeed be performed to satisfaction and therefore as a first order estimate, we assume that no onboard cryocooler mass is necessary. \cite{CPTOPS} employed an ATLAS V AV551 launcher. However, when working with the launch timescales of the missions proposed here, we can suppose that the SLS Block 2 will be available, thereupon enabling a wholesale upscale of the spacecraft dimensions.
When this upscaling is taken into consideration, we estimate a total propellant mass of 11077 \si{kg} and a dry mass of 2083 \si{kg} together with a combined spacecraft payload mass of 100 \si{kg} (which includes science payload, high-gain antenna (HGA), and so forth); this mass breakdown translates to an available $\Delta V$ of 7.97 \si{km.s^{-1}}. Let us suppose we invoke a JOM scenario for this mission and this $\Delta V$ kick is exclusively applied at Jupiter. It should be noted that an SLS Block 2 with Centaur third stage can deliver the aforementioned total mass of 13260 \si{kg} to Jupiter with a $C_3 = 94\, \si{km^2.s^{-2}}$.
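As a consistency check, the quoted $\Delta V$ of 7.97 km/s follows directly from this mass breakdown via the Tsiolkovsky rocket equation; a minimal sketch:

```python
import math

# Reproduce the Delta-V quoted for the LH2/LOX stage from the mass
# breakdown in the text, via the Tsiolkovsky rocket equation.
G0 = 9.80665                      # standard gravity, m/s^2
ISP = 451.0                       # s, from the text
M_PROP, M_DRY, M_PAYLOAD = 11077.0, 2083.0, 100.0

m0 = M_PROP + M_DRY + M_PAYLOAD   # 13260 kg, the mass sent to Jupiter
mf = M_DRY + M_PAYLOAD            # 2183 kg after burnout
dv = G0 * ISP * math.log(m0 / mf) / 1000.0  # km/s
print(round(dv, 2))  # ~7.98 km/s, matching the quoted 7.97 km/s up to rounding
```

The small residual with respect to the quoted 7.97 km/s comes from rounding conventions (e.g. the value of $g_0$ used).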
Applying OITS with an optimally placed Jupiter encounter and employing the above parameters, we find an arrival at Planet 9 approximately 51 years after launch. This performance level is comparable to the SOMs using solid propellant and summarized in Figure \ref{fig:HS_JOMSOM}, and represents a notable improvement on the alternative E-J-P9 options using solid propellant, which required flight durations of $\sim 60$ years.
\subsection{Using NTP with \texorpdfstring{$LH_{2}$}{LH2} Propellant}
Nuclear Thermal Propulsion (NTP), as the name indicates, involves fission of uranium isotopes (typically $^{235}U$) wherein the energy released by fission into smaller isotopes can be utilized in various ways. One common method (NTP in essence) is to heat a cryogenic propellant like $LH_2$ (which also acts as a coolant). This $LH_2$ is then expelled from an engine nozzle with high exhaust velocity, thus giving rise to the nuclear thermal rocket. Using $LH_2$ as propellant has the added benefit of a low molecular mass because specific impulse is proportional to $1/\sqrt{\mathrm{Molar \, Mass}}$, indicating that its use is near-optimal for NTP systems. Specific impulses of at least twice that attainable by chemical rockets can be achieved in principle \citep{GH15}.
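The $1/\sqrt{\mathrm{Molar \, Mass}}$ scaling can be illustrated with a short sketch. At fixed chamber temperature the ideal exhaust velocity, and hence the specific impulse, of a thermal rocket scales as $1/\sqrt{M}$; comparing pure H$_2$ with a typical heavier combustion exhaust such as steam (the molar masses are standard values, not from the text) gives:

```python
import math

# Ratio of ideal Isp at equal chamber temperature for two exhaust
# species, from the 1/sqrt(molar mass) scaling mentioned in the text.
def isp_ratio(m_light, m_heavy):
    """Isp advantage of the lighter species over the heavier one."""
    return math.sqrt(m_heavy / m_light)

# H2 (~2 g/mol) vs steam (~18 g/mol), a typical LH2/LOX exhaust product
print(round(isp_ratio(2.0, 18.0), 1))  # ~3x higher Isp at equal temperature
```

In practice the achievable NTP advantage is closer to a factor of two (910 s vs 451 s above), since chamber temperatures and nozzle losses differ between the systems.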
As a consequence of extensive testing of NTP by the US government-sponsored Rover and NERVA programs from 1955 to 1972 \citep{WALTON1991}, NTP has a surprisingly high technology readiness level (TRL) of 5-6, even though there have been no in-flight NTP trials as yet; safety and the possibility of contamination are two major reasons for this absence. Despite this caveat, for the sake of comparison, and also taking advantage of the considerable literature on the subject of different types of NTP, we shall adopt and investigate a fiducial NTP system for a mission to Planet 9.
The type of NTP assumed is that of \cite{YOUINOU2022104237}, which attempts to derive a design in general conformance with previous Mars DRAs (Design Reference Architectures) \citep{5446736}. The NTP thrust is 66.7 \si{kN} with $I_{sp} \sim 910\si{s}$. We use the highest thrust-to-weight ratio derived in this study of 10.9, thus translating to an engine mass of $624 \si{kg}$. We assume the same subcooled $LH_2$ adopted earlier for the $LH_2$ \& $LOX$ analysis (this time without any LOX: all the propellant mass is $LH_2$), obviating the need for heavy Brayton-cycle cryocoolers. We further assume the same 100 \si{kg} spacecraft payload (including the HGA) and an SLS with Centaur 3rd stage; the trajectory adopted is that of a JOM with an optimally placed Jupiter encounter, with the $\Delta V$ available at Jupiter being 14.386 km/s.
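The quoted engine mass follows from the thrust and thrust-to-weight ratio; a one-line check:

```python
# Reproduce the quoted NTP engine mass from its thrust and
# thrust-to-weight ratio (both values from the text).
G0 = 9.80665      # standard gravity, m/s^2
THRUST = 66.7e3   # N
TW = 10.9         # engine thrust-to-weight ratio

engine_mass = THRUST / (TW * G0)
print(round(engine_mass))  # ~624 kg, matching the text
```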
Through the use of OITS, we determine that the resulting mission duration for this study is 41 years, a ten-year improvement over the cryogenic $LH_2$ \& $LOX$ chemical option outlined in Section \ref{SSecCryo}.
\subsection{Laser Sails}
The notion of employing radiation pressure for propellant-free propulsion (i.e., light sails), derived either from sunlight or laser arrays, is nearly a century old in its modern form \citep{LL21}. More specifically, laser-driven light sails have sprung to the forefront in recent times owing to multiple proposals.
The \emph{Breakthrough Starshot Initiative} \citep{PD17,Parkin_2018},\footnote{\url{https://breakthroughinitiatives.org/initiative/3}} inaugurated in 2016, intends to use powerful Earth-based lasers beamed towards miniature spacecraft of mass $\sim 1$ g, thereby achieving rapid acceleration to speeds that are a sizable fraction of the speed of light. Likewise, \cite{HEIN2019552} outlined the possibility of reaching the interstellar object 1I/'Oumuamua by employing this type of propulsion; detailed analyses of laser sail missions to 1I/'Oumuamua, assuming a speed of 300 km/s, are furnished in \cite{hibberd2020project}. Other recent proposals have suggested that subrelativistic laser sail technology may be exploited to access far-away objects in our solar system.
For instance, \citet{TML20} delineated a light sail architecture to attain a velocity of $0.001$c (i.e., 300 km/s) by utilizing laser arrays with a total power of 3 to 29 GW for spacecraft ranging from 1 $\si{kg}$ to 100 $\si{kg}$, respectively. \citet[Table 2]{KP22} formulated a comprehensive cost-optimal model and determined a 10 kg payload can be accelerated to 300 km/s with peak power of 2.5 GW, capital expenditure of $\$610$ M, and operational expenditure of $\$58$ M per mission.
In fact, \citet{EW20} suggested that laser sails could be harnessed to probe the nature of Planet 9 (viz., ascertain whether it is a primordial black hole) by measuring how its gravitational field affects the spacecraft's trajectory; see also \citet{CL17} and \citet{LR20} for related analyses. \citet{HL20} highlighted the complicating effects of drag and electromagnetic forces on such measurements if the laser sails are relativistic. However, if the location of Planet 9 were to be accurately determined through some other method, subrelativistic laser sails may be deployed (in isolation or en masse) to characterize Planet 9 \citep{TML20,HAH20}.
By implementing OITS with a departure velocity of 300 km/s, it is found that Planet 9 could be reached by laser-sail in a short span of 6.5-7 years, a dramatic improvement over the previous options elaborated in this paper. The downside, of course, is that the TRL of such laser-driven light sails is low because most proposals, including those elucidated in the preceding paragraphs, remain purely conceptual as of now.
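As a zero-order check, the coast time to 450 AU at a constant 300 km/s is about 7 years, broadly consistent with the 6.5-7 years found with OITS (which accounts for the full trajectory geometry):

```python
# Zero-order coast-time estimate to Planet 9 at the laser-sail cruise
# speed, neglecting the (short) acceleration phase and trajectory
# geometry handled by the full optimiser.
AU_KM = 1.495978707e8        # astronomical unit in km
YEAR_S = 365.25 * 86400.0    # Julian year in seconds

def coast_years(distance_au, v_kms):
    """Coast duration in years at constant speed v_kms over distance_au."""
    return distance_au * AU_KM / v_kms / YEAR_S

print(round(coast_years(450.0, 300.0), 1))  # ~7.1 yr at 0.001c to 450 AU
```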
\section{Discussion}\label{SecDisc}
The results, upon assuming canonical solid propellants, indicate that trip times for a SOM (preceded by a Jupiter encounter) start at 45 years for a perihelion distance of 2 Solar radii and increase to about 70-80 years for 10 Solar radii, a distance similar to the one associated with the Parker Solar Probe. Such trip times are considerably higher than those for existing space missions to survey planets in the outer Solar system. The Voyager probes have now exceeded 40 years of mission duration, but they surveyed the outer Solar system planets (i.e., their targets) in a span of $\sim 10$ years after launch.\footnote{Proposed missions to KBOs, which are much closer than Planet 9 to the Sun, accordingly necessitate shorter flight times of $\sim 25$ years \citep{ZFA19}.} Due to programmatic reasons, it might be unlikely that a mission to Planet 9 based on solid chemical propulsion would receive funding if missions predicated on alternative propulsion technologies were to promise a quicker and more substantial science return.
Bearing this last point in mind, there are three alternative mission designs analyzed in this paper: chemical cryogenic propellants $LH_2$ \& $LOX$, NTP using propellant $LH_2$, and a laser sail accelerated by a 29 $\si{GW}$ array to $0.1\%$ of the speed of light. All three alternatives commonly assume a 100 \si{kg} spacecraft payload is sent to Planet 9; in comparison, the \emph{New Horizons} spacecraft possessed a total launch mass of $\sim 500$ \si{kg} and $\sim 30$ \si{kg} science payload \citep{WGT08}. This trio is evaluated in the order of decreasing flight duration.
First, the $LH_2$ \& $LOX$ option provides a useful improvement over solid propellant by reaching Planet 9 in 51 years utilizing a JOM (i.e., \emph{without} requiring a SOM), thereby exceeding the performance of the corresponding solid propellant JOM scenarios by at least 16\%. The SOM was not investigated for this propulsion option due to the close approach to the Sun and the accompanying high solar flux, which would jointly be problematic for the storage and deployment of cryogenic propellants.
Second, moving to NTP (again using the JOM option outlined above), the flight duration is around 41 years, approximately 30\% shorter than with solid chemical propellants and around 20\% shorter than with cryogenic chemical propulsion. If the SOM option were feasible, the flight duration could be expected to decrease further.
Finally, along expected lines, laser sails have the superior performance by far, with a flight duration as low as 7 years. While this short timescale makes such mission designs an appealing option, the accompanying readiness level of laser sails is substantially lower than any of the alternative options investigated here. If the TRL of this propulsion scheme were to increase in the future, it is conceivable that laser-driven light sails would constitute the long-term future for the exploration of the outer Solar system and nearby interstellar space.
At this juncture, we will briefly examine the potential of other advanced (viz., non-chemical) propulsion systems, and how their performance may stack up against the mission architectures analyzed previously in this paper. We emphasize that this list is not exhaustive as it does not include, among others, nuclear fusion propulsion \citep{GK20,AGK21}, which might become viable in the future.
\begin{itemize}
\item Laser electric propulsion: Laser electric propulsion, as proposed by \citet{schmidt2018electric} and \citet{BPA18}, might enable velocities of $\sim 20$-$40$ AU/yr (entailing laser power of several hundred MW). If we assume such a terminal velocity for the bulk of the journey, this would permit fast trip times on the order of 10-20 years. Major technology development in connection with the laser array would, however, be required.
\item Solar sails: Solar sails, which operate on the same principle as laser sails except for using solar radiation as the power source instead of lasers, have been studied for several decades for deep space exploration \citep{McIn04}. Recent publications on the SunDiver mission concept of a kg-sized, low-cost solar sail mission estimate that, purely by using existing solar sail materials, hyperbolic excess velocities of up to 6 AU/yr could be achieved \citep{GFD22}. The mission duration would then be roughly the same as (or higher than) the architectures presented in this paper. However, the development of more sophisticated sail materials (see \citealt{ADI18}) could enable terminal velocities of $\sim 20$ AU/yr to be realized \citep{LL20}, which would then enable a mission to Planet 9 within approximately 23 years.\footnote{Even higher speeds for solar sails are attainable near high-energy astrophysical objects \citep{ML20}, but this scenario is obviously not applicable to the Solar system.}
\item Electric sails: Electric sails for deep space missions have previously been proposed in \citet{johnson2019electric}; this propulsion system was first introduced by \citet{Jan04}, and a modern review can be found in \citet{BNQ22}. The study by \citet{johnson2019electric} estimates that a 500 kg payload could be accelerated to a speed of 12 AU/yr via an electric sail; higher speeds of $\gtrsim 20$ AU/yr are feasible in theory \citep{JS07,LL20}. If this result is employed, this architecture would enable a mission to Planet 9 within $\sim 37$ years. However, scaling up an electric sail for such a mission constitutes an onerous challenge, given that in-orbit demonstration was only achieved for EstCube-1, a CubeSat-size spacecraft \citep{SPK15}.
\item Magnetic sails: Magnetic sails have been likewise proposed for deep space missions since their inception \citep{ZA91}, and are theoretically capable of attaining high velocities broadly comparable to electric sails \citep{LL21}. However, their performance has been subject to debates and is strongly dependent on the availability of robust advanced superconducting materials.
\end{itemize}
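The trip times quoted in the list above follow, to zeroth order, from dividing the assumed 450 AU distance by the terminal velocity, with the acceleration phase neglected; a minimal sketch:

```python
# Back-of-the-envelope trip times to Planet 9 at 450 AU for the
# terminal velocities quoted in the list above, assuming the probe
# coasts at that speed for the bulk of the journey.
def trip_years(v_au_per_yr, distance_au=450.0):
    """Coast duration in years at a constant speed in AU/yr."""
    return distance_au / v_au_per_yr

for name, v in [("solar sail (existing materials)", 6.0),
                ("electric sail", 12.0),
                ("advanced solar/electric sail", 20.0)]:
    print(f"{name}: {trip_years(v):.0f} yr")
```

This reproduces the $\sim 37$ years quoted for the electric sail and the $\sim 23$ years for an advanced solar sail.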
The consideration of advanced (non-chemical) propulsion systems studied hitherto in the paper shows that Planet 9 appears to be intriguingly poised at the transition point where chemical propulsion reaches its limit, indicating that advanced propulsion systems are rendered desirable and perhaps even necessary.
Given that some of the technologies outlined hitherto such as sophisticated solar sails are potentially just 5-10 years away from development, it is plausible that even if we could launch a mission to Planet 9 today based on chemical propellant(s), such a spacecraft might be overtaken by a solar sail probe launched later. It would, therefore, represent the interplanetary analog of the waiting paradox, which is usually evoked for interstellar propulsion (see, e.g., \citealt{RH17}).
In closing, we reiterate that a scientific mission to characterize Planet 9 (should it prove to be real) has tremendous scientific value, with some of the chief benefits summarized in Section \ref{SciReturn}. If Planet 9 is indeed confirmed to be a planet with a mass between that of Earth and Neptune, even a flyby mission would yield a wealth of information regarding planet formation and dynamical evolution, the history of the Solar system, astrobiology, and much more. It is unlikely, in contrast, that these fields would experience advancements to the same degree if studies of Planet 9 were only restricted to data garnered from Earth- and space-based telescopes.
\section{Conclusion}
There has been renewed interest in the existence of Planet 9 and its basic properties, ever since the well-known work of \citet{BB16}. However, in light of its great distance from Earth (viz., semimajor axis of $\sim 400$ AU), a detailed characterization of this putative object may be difficult to accomplish from Earth. Thus, with this crucial limitation in mind, we explore a variety of mission architectures and trajectories to Planet 9, with the purpose of carrying out a flyby.
The various mission architectures invoked for reaching Planet 9 entail a combination of chemical propulsion (both solid propellant and liquid cryogenic propellant) and flyby maneuvers; the spacecraft payload specified in the paper is typically $\sim 100$ kg. The resulting mission duration for solid propellant ranges from 45 years to 75 years, depending on the distance from the Sun for the Solar Oberth maneuver; and for cryogenic propellant, a simple Jupiter Oberth maneuver would be sufficient to reduce transit times to approximately 50 years. These timescales are generally shorter than the flight times of 48 to 67 years obtained by \citet{CFP22} using a Jupiter gravity assist because the latter study utilizes a clearly optimistic semimajor axis of $300$ AU (see \citealt{BroB21}), whereas we adopt a conservative value of $450$ AU (as stated in Table \ref{table:Planet9}).
Looking beyond chemical propulsion, we also examine the prospects for reaching Planet 9 via more advanced, yet-to-be fully developed propulsion schemes. We find that nuclear thermal propulsion can reduce the mission duration to approximately 40 years. Further down the road, if the huge potential of laser sails is unlocked by humanity, rapid journey times of $\sim 7$ years are realizable. We also indicate how other near-future technologies such as laser electric propulsion, solar sails, and electric sails can enable flight times of $\sim 10$-$40$ years.
Thus, we are led toward the conclusion that Planet 9 comprises an object near the critical transition point where chemical propulsion approaches its performance limits (in the sense of supporting non-negative payloads) and alternative sophisticated propulsion systems (e.g., light sails) become seemingly more attractive vis-\`a-vis flight duration. Future work will necessitate analyzing the advanced propulsion schemes not investigated herein and explicating their mission architectures.
\acknowledgments
\bibliographystyle{aasjournal}
\bibliography{Planet9}
|
Title:
The SVOM/ECLAIRs image trigger with wavelet-based background correction optimised with a one-year simulation of observations |
Abstract: The SVOM mission under development will carry four instruments, and in
particular the coded-mask telescope named ECLAIRs, with a large field of view
of about 2 sr, operating in the 4-150 keV energy band. The trigger software on
board ECLAIRs will search for high-energy transients such as gamma-ray bursts
and peculiar behaviour (e.g. strong outbursts) from known X-ray sources, in
order to repoint the satellite to perform follow-up observations with the
onboard narrow field of view instruments. The image trigger, one of the two
algorithms implemented in the software on board ECLAIRs, produces images over
periods of exposure ranging from 20 seconds to 20 minutes during which the
Earth can cross the field of view. The CXB and contributions from known X-ray
sources are expected to dominate the ECLAIRs astrophysical and instrumental
background and must be taken into account and corrected prior to coded-mask
image deconvolution in order to optimise the sensitivity to faint transients.
To correct these background components, we implemented and studied a
traditional fitting method and a new method based on wavelet decomposition of
the detector image. In order to study and to assess the performance of these
methods, we performed a one-year simulation of the image trigger on board
ECLAIRs. From the images produced during this realistic observation scenario of
the SVOM mission, we also defined a way to analyse the sky images to search for
new sources. We present the algorithms behind the image trigger on board
SVOM/ECLAIRs. We show that the wavelet method we implemented provides similar
results in terms of cleaning performance compared to the traditional fitting
method, and has the benefit of not requiring any assumption on the shape of the
background on the detector. We also calibrate the detection threshold to be
adaptive and based on the quality of the reconstructed sky image.
| https://export.arxiv.org/pdf/2208.12767 |
\title{The \textit{SVOM}/ECLAIRs image trigger with wavelet-based background correction optimised with a one-year simulation of observations}
\titlerunning{The \textit{SVOM}/ECLAIRs image trigger with wavelet-based background correction}
\author{N. Dagoneau\inst{\ref{inst1}} \and S. Schanne\inst{\ref{inst1}}}
\institute{
CEA Paris-Saclay/IRFU, F-91191 Gif-sur-Yvette, France\\\email{nicolas.dagoneau@cea.fr}. \label{inst1}
}
\date{Received XXX; accepted XXX}
\abstract
{The Space-based multi-band astronomical Variable Objects Monitor (\textit{SVOM}) mission under development will carry four instruments, and in particular the coded-mask telescope named ECLAIRs, with a large field of view of about 2 sr, operating in the 4–150 keV energy band. The trigger software on board ECLAIRs will search for high-energy transients such as gamma-ray bursts and peculiar behaviour (e.g. strong outbursts) from known X-ray sources, in order to repoint the satellite to perform follow-up observations with the onboard narrow field of view instruments.}
{The image trigger, one of the two algorithms implemented in the software on board ECLAIRs, produces images over periods of exposure ranging from 20 seconds to 20 minutes during which the Earth can cross the field of view. The Cosmic X-ray Background and contributions from known X-ray sources are expected to dominate the ECLAIRs astrophysical and instrumental background and must be taken into account and corrected prior to coded-mask image deconvolution in order to optimise the sensitivity to faint transients.}
{To correct these background components, we implemented and studied a traditional fitting method and a new method based on wavelet decomposition of the detector image. In order to study and to assess the performance of these methods, we performed a one-year simulation of the image trigger on board ECLAIRs. From the images produced during this realistic observation scenario of the \textit{SVOM} mission, we also defined a way to analyse the sky images to search for new sources.}
{We present the algorithms behind the image trigger on board \textit{SVOM}/ECLAIRs. We show that the wavelet method we implemented provides similar results in terms of cleaning performance compared to the traditional fitting method, and has the benefit of not requiring any assumption on the shape of the background on the detector.
We also calibrate the detection threshold to be adaptive and based on the quality of the reconstructed sky image.
}
{}
\keywords{Instrumentation: miscellaneous -- Telescopes -- Techniques: image processing}
\section{Introduction}
Transient astrophysical events include various high-energy phenomena such as gamma-ray bursts (GRBs) and flares from X-ray binaries or highly magnetized neutron stars. These short-duration phenomena in the X-ray and gamma-ray energy domains deserve fast detection to permit multi-wavelength follow-up observations, which will allow us to study the underlying physics and to determine the distance of the emitting source.
This strategy will be followed by \textit{SVOM} \citep{wei_deep_2016}, a French-Chinese mission currently under development and planned to be operational after 2022. The fast detection of transient sources needs to be carried out directly on board the satellite since it cannot rely on ground-based data processing due to the lack of real-time large-volume data transmission from low-Earth orbit to the ground.
The goal of the ECLAIRs coded-mask aperture telescope on board \textit{SVOM} \citep{takahashi_x-gamma-ray_2014} is to observe a large portion of the hard X-ray sky and automatically detect and localise GRBs and other kinds of transient sources, thanks to its trigger software \citep{schanne_svom_2019}. The software is implemented in the Scientific Trigger and Control Unit (UGTS, French for \textit{Unité de Gestion et de Traitement Scientifique}) \citep{schanne_scientific_2013,le_provost_scientific_2013}.
The ECLAIRs instrument is made up of the UGTS; the CdTe detector plane (80 $\times$ 80 pixels of active surface 4 $\times$ 4 mm$^2$, 1 mm thick, separated by 0.5 mm), which detects photons from 4 to 150 keV; and the self-supporting tantalum mask (54 $\times$ 54 cm$^2$ in size and 0.6 mm thick), which provides imaging capabilities up to 120 keV and is located at a distance of 46 cm above the detection plane.
The UGTS is composed of ten electronic boards that perform different functions (power management, data input-output management, and data processing). Two redundant boards are dedicated to data processing. Each of these includes an FPGA to preprocess the data for the trigger software and a Leon3 dual-core CPU running the complete ECLAIRs flight software, including the trigger.
The trigger software is divided into two different algorithms to reflect the diversity of transient sources in terms of duration. The first is a count-rate trigger, well suited to detecting short GRBs. It monitors the counts recorded by the detector plane on timescales between 10 ms and 20.48 s, and performs excess detection over background and imaging of those excesses. The second algorithm is an image trigger, well suited to detecting long GRBs. It analyses images built on timescales from 20.48 s to approximately 20 min. This image trigger is also well adapted to the detection of ultra-long GRBs \citep{dagoneau_ultra-long_2020}. The present paper focuses on the general description of the image trigger, including its algorithm and the processing implemented to optimise the detection of transient sources in the images that are produced.
\section{Image trigger}
\label{sec:image_trigger}
The image trigger is the name given to one of the two trigger algorithms implemented in the ECLAIRs onboard detection software. An illustration of this algorithm is given in Fig. \ref{fig:trigger_image}. This algorithm runs in cycles of 20.48 s in four configurable energy strips to reflect the spectral diversity of the transient sources; in this paper the four strips are 4--20 keV, 4--50 keV, 4--120 keV, and 20--120 keV. The 80 $\times$ 80 pixel images recorded by the detector plane within 20.48 s are called shadowgrams. The sky image is reconstructed from the shadowgram using the mask deconvolution method (as is currently used by the IBIS telescope on board the ESA INTEGRAL observatory; a description of the method can be found in \citealt{goldwurm_integral/ibis_2003}). This method permits the reconstruction of point-like sources in the energy band between 4 and 120 keV where the mask (Ta of 0.6 mm thickness) is opaque enough for such sources to produce shadowgrams with good contrast.
The sky image has a size of 200 $\times$ 200 pixels with a pixel angular size ranging from 33 arcmin in the centre of the field of view to 17.5 arcmin at the edge.
The deconvolution uses the detected number of counts per pixel $D_{\mathrm{cnt}}$ and, assuming a Poissonian distribution per detector pixel ($D_{\mathrm{var}}$ = $D_{\mathrm{cnt}}$), it produces reconstructed sky images in number of counts ($S_{\mathrm{cnt}}$) and variance ($S_{\mathrm{var}}$).
A signal-to-noise (S/N) sky image is also built: $S_{\mathrm{S/N}} = S_{\mathrm{cnt}} / \sqrt{S_{\mathrm{var}}}$.
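The S/N construction above is a pixel-wise operation on the deconvolved counts and variance images. A minimal sketch with toy data (this is not the onboard implementation, which operates on actual deconvolved 200 x 200 sky images):

```python
import numpy as np

# Sketch of the S/N sky-image construction: the deconvolved counts
# image divided pixel-wise by the square root of the deconvolved
# variance image. Toy arrays stand in for the real deconvolution output.
rng = np.random.default_rng(0)
S_cnt = rng.normal(0.0, 30.0, size=(200, 200))   # toy counts image
S_var = np.full((200, 200), 900.0)               # toy variance image

S_snr = S_cnt / np.sqrt(S_var)
print(S_snr.shape, round(float(S_snr.std()), 2))  # a clean image has std ~ 1
```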
Before the deconvolution, the shadowgrams need to be cleaned of the Cosmic X-ray Background (CXB) and the contributions from bright known X-ray sources. The cleaning step is described in Sect. \ref{sec:cleaning}. The background correction is performed on the detected counts image. In theory, this correction leads to an increase in variance in the shadowgram. However, this increase is negligible (less than $1\%$ in the corner of the shadowgram) compared to the initial number of counts in 20 s (and thus to the initial variance when modelling the distribution of the counts by a Poisson distribution). For this reason, the uncorrected count images are deconvolved to produce the variance of the sky.
The deconvolution leads to sky images that are subsequently added (separate summation of the counts and the variances) into a history of the most recent sky images. The resulting exposure time of the sky images produced by summation is $\Delta T_n=20.48\times 2^{n}$~s with $n=[0,6]$. Thus, the available scales range from $\Delta T_0$ = 20.48 s to $\Delta T_6$ = 1310.72 s (approximately 20 min, see Fig. \ref{fig:trigger_image} for an illustration). Each of the sky images is searched for a transient source. This step is described in Sect. \ref{sec:analyse}.
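The timescale ladder can be written out explicitly; a short sketch:

```python
# Image-trigger timescale ladder described above: sky images are summed
# in powers of two of the base 20.48 s cycle, giving n = 0..6.
BASE_S = 20.48
scales = [BASE_S * 2**n for n in range(7)]
print(scales[0], scales[-1])  # 20.48 s up to 1310.72 s (~20 min)
```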
Unlike the image triggers developed for previous missions like Swift/BAT, the ECLAIRs image trigger must be able to operate when the Earth passes through its field of view (2 sr), which occurs approximately every orbit (90 minutes). The Earth passages are the result of the strategy set up by SVOM to point the ECLAIRs field of view towards a roughly anti-solar direction in order to trigger on sources that are immediately observable by ground-based observatories located on the Earth's night hemisphere. The management of the Earth in the field of view is explained throughout the paper at each step of the image trigger.
The ECLAIRs telescope will also transmit all the counts recorded by the camera to the ground by X-band, where the detection software can be executed offline (typically with 6--12 hr of delay) to adjust the parameters, detect low significance sources, and permit the update of the onboard software configuration by telecommand.
\section{Methodology}
\label{sec:method}
In order to develop and study the performance of the image trigger, a simulation of a one-year \textit{SVOM} observation sequence carried out by the CNES is used \citep{jaubert_realistic_2017}. This simulation gives the satellite's position on its orbit and its attitude for every minute of this year. Figure \ref{fig:pointings} shows the positions of the pointing directions on the sky in Galactic coordinates in the one-year mission simulation. There are 1919 stable pointings.
For each pointing the shadowgrams are drawn by ray-tracing CXB and known-source photons in the field of view. A spatially and spectrally flat noise component is also added to the shadowgram at the level of 0.003 counts/s/cm$^2$/keV (taken from the spectrum given in \citealt{mandrou_wide-field_2008}, Fig. 2, integrated over the ECLAIRs energy band), which takes into account the mean contribution of particles, predominantly electrons, along the orbit outside of the South Atlantic Anomaly region. The catalogue of known sources is presented in \citealt{dagoneau_onboard_2021}. The image trigger processes the shadowgrams. In the following we consider a perfect efficiency of the detector between 4 and 150 keV, and we focus on the imaging process in the ECLAIRs full band of 4--120 keV.
In this paper we present the optimisations of the image trigger in terms of imaging performance, background modelling and subtraction. The simulations of GRBs used to estimate the GRB detection rate of ECLAIRs were presented previously \citep{wei_deep_2016,antier-farfar_detection_2016}, based on a prototype trigger implementation \citep{schanne_scientific_2013}.
In order to evaluate the quality of the S/N sky images produced in which the image trigger searches for new sources, we analysed the standard deviation of the S/N as well as the maximum of the S/N in each image at each timescale. In a cleaned image (i.e. in a sky image built from the deconvolution of a clean shadowgram, and possibly summed with other sky images) the S/N should follow a standard normal distribution $\cal{N}$(0,1).
\section{Before deconvolution: Cleaning}
\label{sec:cleaning}
The deconvolution may lead to some artefacts that mimic point sources when the background noise is not uniform in the shadowgram. Because of the geometry of the ECLAIRs telescope (and in particular the distance between the mask and the detector of 46 cm), an isotropic source such as the CXB manifests itself by a non-uniform distribution of the counts on the detector. The deconvolution of such a shape generates artefacts in the images of the sky that increase the width of the distribution of S/N and reduce the detection efficiency of faint sources. Before the deconvolution, the shadowgrams ($\Delta T = 20.48$~s) must therefore be corrected for the CXB, but also for the contribution of the known bright sources (the effects of not correcting bright sources are presented in \citealt{dagoneau_onboard_2021}).
\subsection{CXB correction}
To correct the CXB, two methods are implemented in the onboard software (with possible switching by telecommand). The method that will be used by default is the traditional way already used in the past (as an example for the \textit{Granat}/SIGMA data processing, \citealt{bouchet_sigmagranat_2001}), which consists in subtracting a 2D polynomial shape after a fit in the shadowgram. This 2D polynomial model is justified by the geometry of the instrument. For a large field-of-view instrument, an isotropic source in the sky such as the CXB leads to a 2D distribution of the counts in the shadowgram with more counts in the centre than in the corners. This non-flat distribution flattens out when the field of view decreases (e.g. when the distance between the mask and the detector increases). The model is defined by Eq. \ref{eq:2d_shape} where $i$ and $j$ refer to the pixel coordinates in the shadowgram:
\begin{align}
\label{eq:2d_shape}
M(i,j) = c_0 + c_1\cdot i + c_2 \cdot j + c_3 \cdot i^2 + c_4 \cdot j^2 + c_5 \cdot i \cdot j.
\end{align}
In the absence of the Earth in the field of view, this 2D shape appears as a curved shape with a maximum at the centre of the shadowgram.
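A least-squares fit of this 2D polynomial model can be sketched as follows. This is a pure-Python illustration via the normal equations; the function name is an assumption, and the onboard implementation is equivalent but optimised.

```python
def fit_background(shadow):
    """Least-squares fit of the 2D polynomial background model
    M(i,j) = c0 + c1*i + c2*j + c3*i^2 + c4*j^2 + c5*i*j
    to a shadowgram. Returns the fitted model image."""
    rows, cols = len(shadow), len(shadow[0])
    basis = lambda i, j: [1.0, i, j, i * i, j * j, i * j]
    # Build the normal equations A c = b
    A = [[0.0] * 6 for _ in range(6)]
    b = [0.0] * 6
    for i in range(rows):
        for j in range(cols):
            phi = basis(i, j)
            for a in range(6):
                b[a] += phi[a] * shadow[i][j]
                for c in range(6):
                    A[a][c] += phi[a] * phi[c]
    # Gaussian elimination with partial pivoting
    for p in range(6):
        piv = max(range(p, 6), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, 6):
            f = A[r][p] / A[p][p]
            for c in range(p, 6):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    coef = [0.0] * 6
    for p in range(5, -1, -1):
        s = b[p] - sum(A[p][c] * coef[c] for c in range(p + 1, 6))
        coef[p] = s / A[p][p]
    return [[sum(f * v for f, v in zip(coef, basis(i, j)))
             for j in range(cols)] for i in range(rows)]
```

When the shadowgram is itself an exact instance of the model, the fit recovers it to numerical precision, which is a convenient sanity check of the solver.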
However, in the case of \textit{SVOM}/ECLAIRs, this method should also operate with the presence of the Earth in the field of view. The 2D shape applied allows for this correction, and permits the modelling of a deficit of counts in the direction of the Earth (see Fig. \ref{fig:ex_image_fit_wt}) without explicitly injecting into the model the coordinates of the Earth in the field of view.
We have implemented another method using wavelets and the \textit{à trous} algorithm (French for `with holes') \citep{holschneider_real-time_1990, starck_astronomical_2002}. This is the first time that this algorithm will be used to correct the background in detector images from a coded-mask aperture telescope prior to deconvolution (it has already been applied after the deconvolution to remove systematics in sky images, \citealt{krivonos_integral/ibis_2010}). The approach exploits the fact that the background shape, produced by the large size of the field of view and modulated by the Earth's presence, appears as a large-scale structure on the shadowgram, while the point-source contributions are imprinted in small-scale structures because of the small size of the coded-mask elements.
Thus, the shadowgram can be decomposed into different scales and a background-corrected version of the shadowgram is reconstructed using only the smallest scales. The shadowgram $D_s$ at a scale $s \geq 1$ and pixel position $(i,j)$ is computed according to Eq. \eqref{eq:shds_scale_s}, which is a convolution of the shadowgram at the previous scale $D_{s-1}$ with a filter $H$ of size $(2l+1) \times (2l+1)$; $D_0$ corresponds to the raw shadowgram:
\begin{align}
\label{eq:shds_scale_s}
D_s(i,j) = \sum_{m=-l}^{l} \sum_{n=-l}^{l} H(m,n) D_{s-1}(i+2^{s-1}m, j+2^{s-1}n).
\end{align}
The algorithm is called \textit{à trous} because, as it computes the successive scales of the detector image, the distance between two pixels considered for the convolution by the filter increases by a factor of 2. To compute the pixels of $D_1$ the pixels taken into account are neighbours in $D_0$, whereas for the calculation at scale $s$ they are separated by a distance $2^{s-1}$, as shown in Fig. \ref{fig:algo_wavelet}. This allows larger and larger structures to be caught in the shadowgram as the scale increases. To keep the detector image size identical at all scales ($80 \times 80$), symmetric boundary conditions are used: the image of the pixel of index $-1$ (which would be outside the matrix) is the pixel of index 0; the image of the pixel of index 80 (also outside the matrix) is the pixel of index 79. Symmetric boundary conditions are appropriate for sources in the partially coded field of view (in contrast to cyclic boundary conditions).
The 2D filter $H$ is built from a 1D filter $h$ such that $H=hh^T$. There are two possible 1D filters that are commonly used in the \textit{Г trous} algorithm \citep{starck_undecimated_2007}: $h_3=(1/4, 1/2, 1/4)$ or $h_5=(1/16, 1/4, 3/8, 1/4, 1/16)$. The wavelet coefficients $W_s$ at a scale $s \geq 1$ and pixel position $(i,j)$ are given by Eq. \eqref{eq:w_s_diff}:
\begin{align}
\label{eq:w_s_diff}
W_s(i,j) = D_{s-1}(i,j) - D_{s}(i,j).
\end{align}
The image of the original shadowgram $D$ can be recovered by summing the different wavelet coefficients $W_s$ and the last scale $D_{s_{\mathrm{max}}}$ at the largest scale considered $s=s_{\mathrm{max}}$:
\begin{align}
D(i,j) = \sum_{s=1}^{s_{\mathrm{max}}} W_s(i,j) + D_{s_{\mathrm{max}}}(i,j).
\label{eq:rebuilt_wavelet}
\end{align}
The removal of the CXB is achieved by reconstructing a cleaned shadowgram $D_{\mathrm{cleaned}}$ from only the scales up to a threshold $s_{\mathrm{th}} < s_{\mathrm{max}}$. Given that each wavelet coefficient is the difference between two consecutive scales, the cleaned shadowgram can be computed by subtracting the shadowgram at scale $s_{\mathrm{th}}$ from the raw shadowgram $D_0$:
\begin{align}
D_{\mathrm{cleaned}}(i,j) = D_0(i,j) - D_{s_{\mathrm{th}}}(i,j).
\label{eq:rebuilt_cleaned}
\end{align}
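The smoothing step of Eq. \eqref{eq:shds_scale_s} and the cleaning of Eq. \eqref{eq:rebuilt_cleaned} can be sketched as follows. The helper names are illustrative assumptions; the default filter is $h_3$, the 2D filter $H = hh^T$ is applied in separable form, and the symmetric boundary conditions are those described in the text.

```python
def atrous_smooth(img, s, h=(0.25, 0.5, 0.25)):
    """One 'a trous' smoothing step: D_s from D_{s-1}, with the filter
    taps spaced 2**(s-1) pixels apart and symmetric (mirror) boundary
    conditions (index -1 -> 0, index n -> n-1). Default filter is h3;
    H = h h^T is applied as the separable product h[a]*h[b]."""
    n, m = len(img), len(img[0])
    l = len(h) // 2
    step = 2 ** (s - 1)

    def refl(k, size):  # symmetric boundary reflection
        while k < 0 or k >= size:
            k = -k - 1 if k < 0 else 2 * size - 1 - k
        return k

    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            v = 0.0
            for a in range(-l, l + 1):
                for b in range(-l, l + 1):
                    v += (h[a + l] * h[b + l]
                          * img[refl(i + step * a, n)][refl(j + step * b, m)])
            out[i][j] = v
    return out


def wavelet_clean(img, s_th):
    """Background-subtracted shadowgram D_cleaned = D_0 - D_{s_th}:
    only the scales smaller than s_th (point-source signal) survive."""
    d = img
    for s in range(1, s_th + 1):
        d = atrous_smooth(d, s)
    return [[img[i][j] - d[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]
```

Since the filter taps sum to 1, a flat shadowgram is invariant under smoothing and cleans to exactly zero, which is the desired behaviour for a structureless background.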
Thus, the algorithm needs two parameters to be chosen: one filter among the two presented previously ($h_3$ or $h_5$) and the scale $s_{\mathrm{th}}$ to be used in order to build the corrected shadowgram containing only the smallest scales, which contain most of the point-source coded signal while reducing the influence of the background. The preliminary choice of these parameters is made in such a way that the largest scale of the shadowgram $D_{s_{\mathrm{th}}}$ contains most of the non-uniform background. In order to evaluate this and to deduce the most adapted parameters, we simulated shadowgrams of 20 s containing only CXB (plus the additional flat noise). The shadowgrams are cleaned with the wavelet method using various combinations of the two parameters and then deconvolved to produce the sky images that are summed together to reach an exposure time of 20 min. The best parameters are those that lead to a distribution of the S/N with a standard deviation as close as possible to 1 in the images of the sky in 20 min. As with the fit method, the uncorrected counts shadowgram is used as the variance to produce the variance of the sky, and thus to compute the S/N.
Figure \ref{fig:wt_h_sth} gives the distribution of S/N in 20 min sky images for different combinations of the parameters. From this simulation, we propose initially using the filter $h_3$ and a maximum scale $s_{\mathrm{th}}=3$ for the cleaning with the wavelet method. The threshold scale $s_{\mathrm{th}}=3$ is also justified by the geometry of the instrument. It is equivalent to keeping, in the image to be deconvolved, the scales whose characteristic size is 1 ($s=0$), 2 ($s=1$), or 4 pixels ($s=2$). The ratio of the mask-element size ($m$) to the detector pixel size ($d$) is $m/d= 2.54$. The value $s=2$ is the smallest for which the characteristic size still covers the mask-element size, whereas with $s=1$ part of the point-source signal would remain in the large scales that are subtracted and would be lost in the image to be deconvolved. The two parameters can be modified after launch by a telecommand from the ground, based on the observation of the background in flight conditions.
Figure \ref{fig:ex_image_fit_wt} gives an example of the background correction in the presence of the Earth in the field of view, using either the traditional fit method or the wavelet method. The Earth in the field of view leads to a non-uniform distribution of the counts on the detector, including a lack of counts in the lower left corner (where no part of the detector is totally obscured). Both the traditional fit method and the wavelet method make it possible to correctly model this non-uniformity and to obtain a flat distribution after subtraction.
\subsection{Non-correction of the Galactic ridge X-ray emission}
In this section we study the influence of the Galactic ridge X-ray emission (GRXE) and show that its influence on the images produced by ECLAIRs is negligible, and that it is therefore not necessary to integrate it into our simulations nor to worry about its correction.
The GRXE is modelled by a diffuse emission region centred on the Galactic centre. Its spatial distribution is given by two perpendicular Lorentzian functions (Fig. \ref{fig:grxe}, top left) with a full width at half maximum of $21\deg$ along the Galactic plane and $1.2\deg$ perpendicularly \citep{turler_integral_2010}.
We modelled its spectrum, using as input the distribution shown in \cite{turler_integral_2010} (Fig. 10), which we approximate by a simple power law $E \cdot F_E = K \cdot E^{\alpha}$. Using the two points with coordinates ($E=4$ keV, $E \cdot F_E=10$ keV/cm$^2$/s/sr) and ($E=100$ keV, $E \cdot F_E=1$ keV/cm$^2$/s/sr), we derive the values for the power-law index $\alpha=-1.102$ and the normalization $K=46$ keV/cm$^2$/s/sr at 1 keV. This spectrum is given in a region of $60\times30\deg^2$, which corresponds to 0.54 sr. Hence, the total GRXE flux in that region is $0.549$ ph/cm$^2$/s in 4--10 keV.
We mimic the diffuse GRXE by 10000 point sources, distributed in the sky according to the two Lorentzian functions described above, where each source has a flux equal to 1/10000 of the total GRXE flux.
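Sampling such mock source positions from the two perpendicular Lorentzian (Cauchy) profiles can be sketched with the inverse-CDF method; the function name, default arguments, and seed are illustrative assumptions.

```python
import math
import random

def sample_grxe_sources(n=10000, fwhm_l=21.0, fwhm_b=1.2, seed=1):
    """Draw n mock point-source positions mimicking the diffuse GRXE:
    Galactic longitude/latitude offsets follow Lorentzian (Cauchy)
    profiles with the FWHMs quoted in the text. A Cauchy deviate with
    FWHM w is (w/2) * tan(pi * (u - 1/2)) for uniform u in (0, 1)."""
    rng = random.Random(seed)

    def cauchy(fwhm):
        return 0.5 * fwhm * math.tan(math.pi * (rng.random() - 0.5))

    return [(cauchy(fwhm_l), cauchy(fwhm_b)) for _ in range(n)]
```

Each mock source would then carry $1/n$ of the total GRXE flux, as in the simulation described above.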
In a first simulation (see Fig. \ref{fig:grxe}) the Galactic plane was aligned with one of the cross-arms of the coded mask\footnote{For mechanical reasons, the coded mask has no holes on the two perpendicular central strips forming what is called the `mask cross'. This cross is shown in Figure 1 in \cite{cordier_b_svom_2019}.}. As a result, we observe that in 4--10 keV the GRXE projects about 160 counts/s on the detector, while the CXB projects about 3000 counts/s. This case where the Galactic plane is aligned with a mask cross-arm is the worst: the superposition of the GRXE source projections casts the shadow of one of the cross-arms in the shadowgram (see bottom left panel of Fig. \ref{fig:grxe}). However, in the reconstructed sky images the GRXE is barely visible. Even in the longest exposure of 20 min, the reconstructed structure does not correspond to a point source, the S/N maximum is just above 6$\sigma$, and the standard deviation of the S/N rises to 1.1 (see bottom right panel of Fig. \ref{fig:grxe}), such that this structure remains below the trigger thresholds even in 20 min exposures.
We repeated the simulation with the Galactic plane tilted by $45\deg$ in the field of view (Fig. \ref{fig:grxe_roll}). In this case the projections of the simulated sources are no longer aligned with a cross-arm of the mask, which results in a detector shadowgram showing low-amplitude structures only, and a reconstructed sky image where the GRXE diffuse emission is not visible, even for exposures up to 20~min.
Therefore we conclude that the GRXE is negligible for the ECLAIRs image trigger, mainly because the count rate produced by the GRXE emission region is smaller than that of the CXB, and because the GRXE is a region of diffuse emission much larger than a point source for ECLAIRs, for which coded-mask imaging is not an appropriate reconstruction technique. Consequently, this emission is taken into account neither in our one-year simulation nor in the image trigger software.
Generally speaking, it should be noted that the data from ECLAIRs could be used to study CXB and GRXE via specific ground-based analyses.
\subsection{Correction of known sources}
For each pointing position of the spacecraft, we can build a catalogue of bright X-ray sources and compute their coordinates in the ECLAIRs local frame. The sources that need to be corrected are mainly bright X-ray binaries \citep{dagoneau_onboard_2021}.
The correction of these known sources in the onboard catalogue is done in the shadowgram by fitting a model of the sources on the shadowgram and subtracting it. The model consists of the weighted sum of the illumination functions of the sources in the field of view (and not obscured by the Earth).
The illumination function of a source is an array of the same size as the detector, which gives, for each pixel of the detector, the pixel fraction illuminated by this source for the given position in the sky.
Depending on how the background is corrected, the fit is either performed simultaneously with the 2D polynomial shape of the CXB (adding one parameter per source in Eq. \ref{eq:2d_shape}) or after the wavelet cleaning (because the projection of the coded-mask pattern on the detector by point sources predominantly contributes on small scales).
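The linear fit of the source weights can be sketched as follows. This is an illustrative sketch assuming a small onboard catalogue; the function name is hypothetical, and onboard the fit is either joint with the polynomial background or performed on wavelet-transformed models, as described above.

```python
def subtract_sources(shadow, illum):
    """Fit and subtract known-source contributions from a (cleaned)
    shadowgram. illum: list of illumination functions, same shape as
    shadow. The weights w solve the normal equations A w = b for the
    model sum_k w_k * illum[k]. Returns (residual shadowgram, weights)."""
    ns = len(illum)
    cells = [(i, j) for i in range(len(shadow)) for j in range(len(shadow[0]))]
    A = [[sum(illum[a][i][j] * illum[c][i][j] for i, j in cells)
          for c in range(ns)] for a in range(ns)]
    b = [sum(illum[a][i][j] * shadow[i][j] for i, j in cells) for a in range(ns)]
    # Gaussian elimination with partial pivoting
    for p in range(ns):
        piv = max(range(p, ns), key=lambda r: abs(A[r][p]))
        A[p], A[piv], b[p], b[piv] = A[piv], A[p], b[piv], b[p]
        for r in range(p + 1, ns):
            f = A[r][p] / A[p][p]
            for c in range(p, ns):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    w = [0.0] * ns
    for p in range(ns - 1, -1, -1):
        w[p] = (b[p] - sum(A[p][c] * w[c] for c in range(p + 1, ns))) / A[p][p]
    residual = [[shadow[i][j] - sum(w[k] * illum[k][i][j] for k in range(ns))
                 for j in range(len(shadow[0]))] for i in range(len(shadow))]
    return residual, w
```

When the shadowgram is an exact weighted sum of the illumination functions, the fit recovers the weights and the residual vanishes.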
Figure \ref{fig:ex_image_fit_wt_src} gives an example of the background correction with Earth's presence and bright sources in the field of view for the two methods. Here the field of view contains many bright sources, including the very bright Scorpius X-1 whose mask shadow is clearly visible in the bottom left corner of the detector image. In the model of the detector, the contribution from the CXB is not visible because the sources, and especially Scorpius X-1, are dominant. After the correction (for both methods), we can see that some counts from the very bright source Scorpius X-1 remain visible in the bottom left corner of the shadowgram. This is caused by the subtraction of an average level from a count distribution that has a large amplitude of variation in the case of a very bright source. However, the consequence is limited because most of the counts are subtracted and a possible residual at the position of the source in the sky will be masked when searching for a new source.
In the case where the CXB is cleaned with wavelets, the subtraction of the contribution from the known point sources is performed in a second step where the fit is based only on the models of the sources. Since the image has been transformed (Fig. \ref{fig:ex_image_fit_wt_bad_src_model}, top left), the models of the sources to be fitted must also be transformed using the \textit{à trous} algorithm with the same parameters (Fig. \ref{fig:ex_image_fit_wt_bad_src_model}, bottom row), to avoid the source contributions being only partially subtracted (Fig. \ref{fig:ex_image_fit_wt_bad_src_model}, top right).
\section{After deconvolution: Calibrating the detection threshold}
\label{sec:analyse}
After the cleaning of the detector images from the CXB and from the known bright-source contributions, the shadowgrams of the one-year simulation are ready to be deconvolved in order to reconstruct the sky images.
The sky pixels that are obscured by the Earth are set to 0 in the counts images $S_{\mathrm{cnt}}$, variance images $S_{\mathrm{var}}$, and S/N images $S_{\mathrm{S/N}}$ (only in the first scale of 20.48 s processed by the image trigger).
In order to prevent false triggers from a residual at the position of a corrected catalogued source, pixels within a radius of 6 pixels (configurable, see \citealt{dagoneau_onboard_2021}) around the position of the source are set to 0 in these images. For a source in the centre of the field of view, the pixels within a radius of 6 pixels cover a solid angle of about 0.01 sr. Some other sources, such as Hercules X-1, which are not bright enough to be corrected in the 20 s shadowgrams but are bright enough to appear in the images of the sky in 20 min, are also masked with zeros at all timescales.
In order to prevent some artefacts from the deconvolution in very partially coded directions of the sky, the sky pixels for which a mask pattern is projected in less than 50 pixels (configurable) of the detector are also set to 0 in these images.
This way, pixels with a S/N of exactly 0 are excluded for the computation of the standard deviation and for the new source search by the image trigger.
In our previous work \citep{dagoneau_onboard_2021}, we showed that, without any correction, the standard deviation of the S/N in 20 min sky images reaches values higher than 50 for pointings towards the Galactic centre and the region of Scorpius X-1, while with the correction it is reduced to values close to 1 near the Galactic poles. Figure \ref{fig:histo_std} gives the distribution of all the standard deviations in S/N images produced during the one-year simulation (in 20 s and 20 min). In 20 s it remains close to 1 in all the produced images, but in 20 min it increases because of the larger number of bright X-ray sources in the Galactic centre, even though they are corrected. The histograms only include images with at least ten significant pixels, excluding images almost totally obscured by the Earth. The extension to values below and above 1 corresponds to sky images where the Earth occupies a large part of the field of view, and thus few pixels are used to calculate the standard deviation. In the 20 min plot, there are no values much lower than 1 because in 20 min there are far fewer pixels that are permanently masked by the Earth.
While the standard deviation characterises how far the S/N distribution departs from a normal distribution, the quantity that actually needs to be examined is the maximum value in the S/N images. The image trigger looks for the maximum value in the S/N images and starts an alert sequence if that value is above a threshold. For a normal S/N distribution the threshold has been set to 6.5 in order to comply with the constraint of one false alert per day required by the alert system. This value can be obtained from simulations in which a large number of realisations of the background are simulated and corrected; the sky images are then produced by deconvolution and the maxima in S/N (or rather the maxima in S/N divided by the standard deviation of the S/N image, see below) are recorded. Integrating the distribution of these values from the right until we reach the desired false alarm rate (typically one per day), we find a threshold close to 6.5. This value is a little higher than the theoretical value (4.5) expected for 400000 sky pixels independent and identically distributed according to a normal distribution $\cal{N}$(0,1) with a threshold at 3$\sigma$ per pixel (in reality the sky pixels are not independent and background residuals may remain after the correction).
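The theoretical threshold for independent Gaussian pixels at a given false-alert rate can be estimated by inverting the normal tail probability, $P(x > t) = \tfrac{1}{2}\,\mathrm{erfc}(t/\sqrt{2})$. The sketch below is an illustration only (function name and arguments are assumptions): real sky pixels are correlated and carry background residuals, which is part of why the adopted onboard threshold exceeds simple i.i.d. estimates.

```python
import math

def false_alarm_threshold(n_pix, n_images_per_day, rate_per_day=1.0):
    """Solve n_trials * P(x > t) = rate for t, where x ~ N(0,1) and
    n_trials = n_pix * n_images_per_day, by bisection on the Gaussian
    tail probability P(x > t) = erfc(t / sqrt(2)) / 2."""
    n_trials = n_pix * n_images_per_day
    target = rate_per_day / n_trials
    lo, hi = 0.0, 20.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > target:
            lo = mid  # tail still too heavy: raise the threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with 400000 pixels per image and one 20.48 s image produced every 20.48 s (about 4200 images per day), the i.i.d. estimate lands around 6, close to the adopted value of 6.5.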
Thus, to keep the threshold at 6.5 and still detect faint transients, we have to limit the occurrence of spurious values at S/N $>$ 6.5. Figure \ref{fig:histo_max} gives the distribution of all the maxima in S/N images produced during the one-year simulation (in 20 s and 20 min). In 20 s, the distribution is compatible with a threshold in S/N of 6.5, with only a few images at S/N $>$ 6.5. However, in 20 min, there are many images in which the maximum S/N value is much higher than 6.5. This is mainly the case in the Galactic centre where there are many sources.
We also note that the two methods used to correct the CXB give similar results in terms of maximum values and standard deviation in the S/N sky images. This demonstrates that wavelets can correct the CXB without any assumption about its shape on the detector.
In order to dynamically adapt the detection threshold to the quality of the sky images, we propose applying a threshold of 6.5 times the standard deviation of the S/N in the sky image (floored at 6.5 when the standard deviation is less than 1). Figure \ref{fig:histo_maxOvStd} gives the distribution of all the maxima divided by the standard deviation in the S/N images produced during the one-year simulation (in 20 s and 20 min). In 20 s, the distribution is not modified much (because the standard deviation is close to 1). However, in 20 min, the spread of the distribution is reduced. It still extends beyond 6.5, and thus we can introduce a small dependence of the threshold on the timescale, from 6.5 in 20 s to 8 in 20 min.
We show as an example the longest timescale considered, the 20 min sky image in Fig. \ref{fig:ex_snr_image}, with the same pointing as for Fig. \ref{fig:ex_image_fit_wt_src}. In this image, away from the Earth and the positions of known sources, the S/N standard deviation is 2.08, which sets an adapted detection threshold for new sources of 6.5 $\times$ 2.08 $\approx$ 13.5. However, as the distribution of Fig. \ref{fig:histo_maxOvStd} shows, this can be too low (the maximum in the sky image is 14.1) and can raise false alerts. Increasing the threshold slightly to 8 $\times$ 2.08 $\approx$ 16.6 allows detection at values that are far enough from the bulk of the distribution (see Fig. \ref{fig:ex_snr_image_histo}).
In Fig. \ref{fig:ex_snr_image} the shape of the Earth may be surprising. This is caused by the fact that the part of the sky masked by the Earth is replaced by zero values only in the 20 s images. Thus, in the sky images summed up to 20 min, only the part of the sky continuously occulted by the Earth is masked. The regions that the Earth has only partially occulted during the 20 min remain unmasked. Figure \ref{fig:shape_earth} gives the exposure map for these 20 min, and shows the portion of the sky permanently obscured by the Earth and those that are only obscured during a fraction of the exposure time.
Finally, instead of considering separately the distributions of the measured quantities in the S/N images, the maximum and the maximum over the standard deviation, we now determine the 2D distribution of the two quantities, shown in Fig. \ref{fig:plot_2D_scale_0_fit} for the timescale of 20 s and in Fig. \ref{fig:plot_2D_scale_6_fit} for the timescale of 20 min.
On the short timescale, the two quantities are strongly correlated, which permits setting the threshold to 6.5 for both of them while limiting the false alert rate.
On the long timescale, the 2D distribution is much wider along the maximum axis, while the spread in the maximum over the standard deviation is mainly observed at low values of the maximum.
This permits lowering the thresholds for both quantities, requiring that the maximum be above 8.5 while, at the same time, the maximum over the standard deviation is above 6.5. This logical `and' condition accepts events located in the upper right corner of the figure and leaves more room for detection than the more stringent thresholds derived above from the distributions of the two quantities considered separately.
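The combined condition can be summarised in a small decision function. This is a sketch of the logic described above, with the function name and argument handling as illustrative assumptions and the thresholds as quoted in the text.

```python
def triggers(snr_max, snr_std, long_timescale=False):
    """Trigger decision sketch: on short timescales both the maximum S/N
    and the maximum over the standard deviation (floored at 1) must
    exceed 6.5; on the 20 min timescale the joint 2D cut is
    max > 8.5 AND max/std > 6.5."""
    ratio = snr_max / max(snr_std, 1.0)
    if long_timescale:
        return snr_max > 8.5 and ratio > 6.5
    return snr_max > 6.5 and ratio > 6.5
```

For instance, a 20 min image with maximum 9.0 and standard deviation 2.0 does not trigger (ratio 4.5), whereas a clean image with maximum 7.0 and standard deviation near 1 does.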
\section{Conclusion}
In this paper we have presented the algorithm behind the image trigger on board \textit{SVOM}/ECLAIRs. This trigger will build and analyse images of the sky on timescales of 20 s to 20 min. We described the two methods that are implemented to correct the CXB: a traditional fit method that has to work in the case of the Earth's presence in the field of view, and an alternative method based on the \textit{à trous} wavelet decomposition. The two methods give similar results in terms of correction of the detector images that are built from a one-year simulation of the background, including known X-ray sources from the catalogue we developed previously. In addition, we performed a calibration of the detection threshold that adapts to the quality of the reconstructed sky image according to the standard deviation of the S/N in the sky image, with a weak dependence on the timescale.
The traditional method of correcting the background by fitting a 2D shape will be used from the beginning of flight operations, while the wavelet-based method, never applied before the deconvolution for coded-mask aperture telescopes, will be validated on the ground with the first data. The background correction with the wavelet method does not require any assumption about the shape of the background. Thus, it could be more resilient, for example to spatial variations of the background on the detector when the Earth crosses the field of view, or following activation of the detector by high-energy particles during passages through the South Atlantic Anomaly of the Earth's magnetic field.
The different steps that we described will be implemented in the onboard software.
The code is executed on the onboard Leon3 processor, and part of it has been benchmarked with a few reference cases for computing performance and compared to execution times obtained on a Linux machine from which we derive the following estimates.
Without sources in the field of view, the background correction takes about 0.4 s for the polynomial fit of the CXB and the wavelet background correction takes about 0.3 s. With ten sources present, the combined fit of the CXB and the sources takes about 1.1 s, while the wavelet background correction still takes 0.3 s, to which 0.6 s need to be added for the fit of the ten sources alone. To these values, about 0.9 s need to be added for the sky deconvolution. The wavelet method is therefore slightly faster than the background correction by combined fit in all cases.
Once ECLAIRs is in flight, the same code implemented on board can also be executed on ground computers using all the photons recorded by the ECLAIRs camera, which will be sent to the ground through X-band with 6 h to 12 h of delay. Thus, the algorithm of the trigger and the different methods for correcting the background can be executed offline on real data to test the various parameters and to fine-tune them, in order to optimise the sensitivity of the instruments towards faint X-ray transients.
\begin{acknowledgements}
ECLAIRs is a cooperation between CNES, CEA and CNRS, with CNES acting as prime contractor. This work is supported by CEA and by the “IDI 2017” project of the French “Investissements d’Avenir” program, financed by IDEX Paris-Saclay, ANR-11-IDEX-0003-02. The authors would like to warmly thank the anonymous referee for very helpful comments and suggestions, including the question about the influence of the Galactic Ridge X-ray Emission on long duration image reconstruction.
\end{acknowledgements}
\bibliographystyle{aasjournal}
\bibliography{references}
|
Title:
Astrometric Microlensing of Primordial Black Holes with Gaia |
Abstract: The Gaia space telescope allows for unprecedented accuracy for astrometric
measurements of stars in the Galaxy. In this work, we explore the sensitivity
of Gaia to detect primordial black hole (PBH) dark matter through the
distortions that PBHs would create in the apparent trajectory of background
stars, an effect known as astrometric microlensing (AML). We present a novel
calculation of the lensing probability, and we combine this with the existing
publicly released Gaia eDR3 stellar catalog to predict the expected rate of AML
events that Gaia will see. We also compute the expected distribution of a few
event observables, which will be useful for reducing backgrounds. We argue that
the astrophysical background rate of AML-like events due to other sources is
negligible (except possibly for very long duration events), and we use this to
compute the potential exclusion that could be set on the parameter space of
PBHs with a monochromatic mass function. We find that Gaia is sensitive to PBHs
in the range of $0.4~M_\odot$ - $5\times10^7~M_\odot$, and has peak sensitivity
to PBHs of $\sim 10~M_\odot$ for which it can rule out as little as a fraction
$3\times10^{-4}$ of dark matter composed of PBHs. With this exquisite
sensitivity, Gaia has the potential to rule out a PBH origin for the
gravitational wave signals seen at LIGO. Our novel calculation of the lensing
probability includes for the first time, the effect of intermediate duration
lensing events, where the lensing event lasts for a few years, but for a period
which is still shorter than the Gaia mission lifetime. The lower end of our
predicted mass exclusion is especially sensitive to these types of lensing
events. As and when time-series data for Gaia is released, our prediction of
the lensing rate and event observable distributions will be useful to estimate
the true exclusion/discovery of the PBH parameter space utilizing this data.
| https://export.arxiv.org/pdf/2208.14460 |
\title{Astrometric Microlensing of Primordial Black Holes with \Gaia{}}
\author[a]{Himanshu Verma,}
\author[a]{Vikram Rentala}
\affiliation[a]{Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai, Maharashtra, 400076, India}
\emailAdd{verma.himanshu002@gmail.com}
\emailAdd{rentala@phy.iitb.ac.in}
\abstract{The \Gaia{} space telescope allows for unprecedented accuracy for astrometric measurements of stars in the Galaxy. In this work, we explore the sensitivity of \Gaia{} to detect primordial black hole (PBH) dark matter through the distortions that PBHs would create in the apparent trajectory of background stars, an effect known as astrometric microlensing~(AML). We present a novel calculation of the lensing probability, and we combine this with the existing publicly released \Gaia{} eDR3 stellar catalog to predict the expected rate of AML events that \Gaia{} will see. We also compute the expected distribution of a few event observables, which will be useful for reducing backgrounds.
We argue that the astrophysical background rate of AML-like events due to other sources is negligible (except possibly for events with very long durations, or equivalently for high-mass PBHs), and we use this to compute the potential exclusion that could be set on the parameter space of PBHs with a monochromatic mass function. We find that \Gaia{} is sensitive to PBHs in the range of $0.4~M_\odot$ - $5\times10^7~M_\odot$, and has peak sensitivity to PBHs of $\sim 10~M_\odot$, for which it can rule out as little as a fraction $3\times10^{-4}$ of dark matter composed of PBHs. With this exquisite sensitivity, \Gaia{} has the potential to rule out a PBH origin for the gravitational wave signals seen at LIGO/Virgo. Our novel calculation of the lensing probability includes, for the first time, the effect of intermediate duration lensing events, where the lensing event lasts for a few years but for a period which is still shorter than the \Gaia{} mission lifetime. The lower end of our predicted mass exclusion is especially sensitive to these types of lensing events. As and when time-series data for \Gaia{} is released, our prediction of the lensing rate and event observable distributions will be useful to estimate the true exclusion/discovery of the PBH parameter space utilizing this data.
}
\section{Introduction}
Dark matter (DM) makes up 25\% of the energy density of the universe today, yet what constitutes the DM remains unknown. One promising theory is that DM is made up of Primordial Black Holes (PBHs)~\cite{1975Natur.253..251C, Khlopov:2008qy, Villanueva-Domingo:2021spv}. Unlike astrophysical black holes, which form due to the collapse of stars at the end of their life cycle, PBHs are expected to form in the early universe from excess density perturbations that gravitationally collapse while overcoming resistance from pressure~\cite{1971MNRAS.152...75H}. In particular, this could happen due to either an excess in the primordial power spectrum on small scales~\cite{Garcia-Bellido:1996mdl,Clesse:2015wea, Choi:2022btl}, or due to sudden drops in pressure in the matter-radiation fluid~\cite{Garcia-Bellido:2017fdg}. For other PBH formation channels, see e.g.~\cite{Carr:2020xqk}.
There are a number of observational constraints on PBHs that make up a significant fraction of the DM (see refs.~\cite{Carr:2020gox, Green:2020jor} for overviews of these constraints). Usually, these constraints are phrased assuming that the PBHs have a monochromatic mass function -- where a fraction $f$ of the DM density is assumed to be made up of PBHs of mass $M$. The most stringent method of setting constraints on PBHs varies depending on the assumed value of $M$. For $M$ of $10^{-11} ~M_\odot$ and above, a combination of constraints from microlensing~\cite{Oguri:2017ock,Croon:2020ouk,Green:2020jor,MACHO:2000nvd,Zumalacarregui:2017qqd,Niikura:2019kqi}, gravitational wave signatures~\cite{Kavanagh:2018ggo, LIGOScientific:2019kan, Chen:2019irf}, accretion~\cite{Brandt:2016aco, 2014ApJ...790..159M}, and dynamical effects~\cite{Serpico:2020ehh,Hektor:2018qqw,Manshanden:2018tze,Lu:2020bmd} seem to rule out a fraction $f\gtrsim 0.1$ of the DM as being made up of PBHs. However, it is possible that PBHs in this mass range make up a smaller, sub-dominant component of the DM.
A large number of experiments have leveraged the technique of \textit{photometric microlensing} (PML) to set observational constraints on the PBH abundance~\cite{Niikura2019, Smyth:2019whb, Niikura:2019kqi, Griest:2013aaa, Griest:2013esa, EROS-2:2006ryy, MACHO:2000nvd}. PML relies on the prediction of general relativity that light bends around massive objects. If PBHs constitute the DM, then they are distributed throughout galaxies. If a star passes behind a PBH, the PBH acts like a gravitational lens, and the light from the star gets bent and focused, leading to a temporary brightening of the star. Thus, surveys with precision photometric sensitivity, i.e. sensitivity to small brightness variations, have been employed very fruitfully to set strong constraints on the existence of PBHs in the mass range $10^{-11}-30 ~M_\odot$.
In addition to the apparent magnification of a star due to a PBH in its foreground, general relativity also predicts that the apparent trajectory of the star would appear to be distorted relative to its true trajectory. Such an effect is called \textit{astrometric microlensing} (AML)~\cite{1995A&A...294..287H,1995ApJ...453...37W,1995AJ....110.1427M,Gould:1996nb}. Surveys which have precision astrometric sensitivity, i.e. sensitivity to the position and velocity of the star, would be able to detect such an effect. In principle AML should be more sensitive than PML to the presence of PBHs. This is because in AML the relevant observable, which is the deflection of a star's apparent position from its true position, falls off as $1/\theta$, where $\theta$ is the angle between the true position of the lens and the star on the celestial sphere~\cite{2000ApJ...534..213D}. This is in contrast to PML where the relevant observable is the change in magnification, and this falls off faster, as $1/\theta^4$~\cite{2000ApJ...534..213D}.
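The two scalings quoted above can be made concrete for a point lens in the wide-separation limit $\theta \gg \theta_E$, where the astrometric centroid shift falls as $\theta_E^2/\theta$ while the photometric excess magnification $A - 1 \approx 2(\theta_E/\theta)^4$. The sketch below (our own; the constants, function names, and example configuration of a $10~M_\odot$ lens at 1 kpc in front of an 8 kpc source are purely illustrative) computes the Einstein angle and the two observables:

```python
import math

# Physical constants in SI units (illustrative precision)
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8             # speed of light, m/s
M_SUN = 1.989e30        # solar mass, kg
KPC = 3.086e19          # kiloparsec, m
RAD_TO_MAS = 2.06265e8  # radians -> milliarcseconds

def einstein_angle_mas(m_lens_msun, d_lens_kpc, d_source_kpc):
    """Einstein angle theta_E of a point lens, in mas."""
    m = m_lens_msun * M_SUN
    d_l, d_s = d_lens_kpc * KPC, d_source_kpc * KPC
    theta_e = math.sqrt(4 * G * m * (d_s - d_l) / (C**2 * d_l * d_s))
    return theta_e * RAD_TO_MAS

def aml_deflection_mas(theta_mas, theta_e_mas):
    """Astrometric centroid shift for theta >> theta_E, falling as 1/theta."""
    return theta_e_mas**2 / theta_mas

def pml_excess_magnification(theta_mas, theta_e_mas):
    """Photometric excess magnification A - 1, falling as 1/theta^4."""
    return 2.0 * (theta_e_mas / theta_mas)**4

theta_e = einstein_angle_mas(10.0, 1.0, 8.0)  # a few mas for this setup
```

For a lens at $\theta = 10\,\theta_E$, the centroid shift is still $\theta_E/10$, while the excess magnification has already dropped to $2\times10^{-4}$, illustrating why AML retains sensitivity at much larger impact parameters than PML.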
In the past few years, AML techniques have already started to be employed to detect PBH and PBH-like candidates. Recently, there has been an exciting claim of discovery of an isolated stellar-mass black hole lens using the AML technique with HST data~\cite{OGLE:2022gdj}. AML searches have also been employed to detect signatures of intermediate mass black holes in the mass range $10^2-10^5~M_\odot$~\cite{2016MNRAS.460.2025K, Kains:2018vnd}. AML observations have also been proposed as follow-ups for PBH candidates detected via PML methods in order to measure the properties of the lens with greater precision~\cite{Wyrzykowski:2015ppa, 2016ApJ...830...41L, Han_2019}. Besides PBHs, AML can be used to set constraints on exotic ultra-compact mini-cluster halos (UCMCHs)~\cite{Li:2012qha, Malbet:2021rgr}, and also on more conventional DM sub-halos~\cite{Erickcek:2010fc, VanTilburg:2018ykj}.
Precision astrometry requires highly accurate position measurements and high cadence. The more accurate the position measurements, the more easily small deflections in the trajectory of a star can be detected. High cadence ensures that the star's position is tracked frequently enough that a lensing event is not missed. Both these factors affect sensitivity to lower mass PBHs. At the other extreme, for high mass PBHs, the lensing event durations can be extremely long, and if the mission lifetime is small compared to a typical event duration, then the detection of a distortion in the trajectory of a star can be difficult.
The \Gaia{} satellite, launched by ESA, has been surveying the sky since 2014. \Gaia{} is an optical all-sky survey satellite with a large field-of-view $\sim0.72^\circ \times 0.69^\circ$~\cite{2012A&A...538A..78L}. The data collected by this satellite provides an exciting opportunity for high precision astrometry. \Gaia{} is expected to observe over 1 billion stars and obtain sub-milliarcsecond precision position measurements in a single pass. Already, it has provided us with unprecedented views of the Milky Way. However, at the moment, time-series data, which would enable us to make high precision searches for AML events, has not been publicly released by the collaboration.
In this work, we set out to estimate the sensitivity of \Gaia{} to PBHs through their expected AML signatures. We first make a prediction for the expected rate of AML events that \Gaia{} will detect for a given assumption of PBH parameters $f$, $M$. We argue that lensing events can be classified into three types: short, intermediate, and long duration lensing events (SDLEs, IDLEs and LDLEs), only the latter two of which are detectable by \Gaia{}. Our prediction for the lensing rate for these classes of events involves a novel calculation of the lensing probability of a given background star. We combine this probability calculation with a catalog of stars for which \Gaia{} is expected to have high precision astrometric data, to obtain a lensing event rate prediction. Besides the total rate, we also make predictions for the distribution of expected event observables, such as location on the sky, event duration, maximum deflection angle, etc. These predictions will be useful for characterizing the space of observables for genuine signal events which can distinguish them from backgrounds.
There are three main types of backgrounds that \Gaia{} will see. The first is statistical fluctuations in the centroiding of the camera image of the background star. The second is instrumental systematics. The third is astrophysical backgrounds which can mimic PBH induced AML signatures. For example, besides AML events due to PBHs, \Gaia{} is also expected to see AML events due to microlensing caused by foreground stars~\cite{2002MNRAS.331..649B, 2018A&A...618A..44B}, brown dwarfs~\cite{Belokurov:2001vh, 2018A&A...620A.175K}, and free floating planets~\cite{2018Ap&SS.363..153H}. Another type of astrophysical background could be due to stars which do not follow rectilinear motion. Isolated stars are expected to approximately follow rectilinear motion in the Galactic rest frame. However, unresolved binary companions or local gravitational potential gradients may introduce glitches in the observed trajectories compared to the trajectories expected from rectilinear motion. These glitches could also be incorrectly interpreted as AML events, and could therefore lead to an additional source of background to PBH searches.
If we could characterize the space of observables of these backgrounds, then we could compare it to the predicted space of observables for genuine lensing signals, and use it to reduce backgrounds -- for example by cutting out regions of parameter space that are unlikely to occur due to PBH induced AML signals. The detailed backgrounds are hard to quantify precisely without full numerical simulations, but we use a combination of estimates of the statistical and astrophysical background to argue that we expect negligible background events, except for very long duration events which could mimic high mass PBHs.
Once we have the signal rate, and assuming negligible background rate, we can then predict the regions of PBH parameter space that are likely to be discovered/excluded with time-series data from \Gaia{}. While this time-series data has already been collected, it has not yet been publicly released. We expect that our prediction of the lensing signal rate and the event observable distributions will prove useful to analyze this data, as and when it becomes available, in order to compute the true exclusion bounds.
We find that \Gaia{} is most sensitive between $0.4~M_\odot$ - $5\times10^7~M_\odot$, with peak sensitivity to masses $\sim 10~M_\odot$, where fractions $f$ as low as $3\times10^{-4}$ of the DM relic density can be constrained. We expect that the true exclusion can differ from our prediction towards the higher end of the mass range if the statistical or astrophysical background rate for very long duration LDLEs is significant. Our work is the first attempt to make such a prediction for \Gaia{} by looking at deflections of the trajectories of individual stars. Our work is enabled in part by the existing \Gaia{} eDR3 data release~\cite{2021A&A...649A...1G}, which already gives an indication of the number of stars that \Gaia{} is able to track, and also lists their measured properties that are relevant for calculating the AML rate.
Our work builds upon the work of Dominik and Sahu~\cite{2000ApJ...534..213D} who estimated lensing probabilities for \Gaia{}. However, in this work the authors did not attempt to calculate the number of lensing events that \Gaia{} would see or set an exclusion limit on the PBH parameter space. Moreover, in their work, the authors only considered LDLE type events, with event durations longer than the \Gaia{} mission lifetime. In our work, we show for the first time the importance of IDLE type events, which have event durations of a few months to a few years, a time-scale which is intermediate between \Gaia{}'s sampling time and its mission lifetime. As we will show, searches for IDLE type events are important for sensitivity to relatively lower mass PBHs with masses as low as $\sim 0.1~M_\odot$\footnote{SDLE type events have event durations shorter than \Gaia{}'s sampling time and will thus not be seen by \Gaia{}.}.
We now provide some broader context for the impact of the work that we present in this paper. For one, an exclusion of PBHs with $\mathcal{O}(10)~M_\odot$ mass as making up less than a fraction $f\sim 10^{-3}$ of the DM would rule out the possibility that the binary black hole mergers being seen at LIGO/Virgo~\cite{LIGOScientific:2018glc} are due to PBHs~\cite{Carr:2020xqk}. If \Gaia{} were to \textit{discover} even a fraction of the DM in the form of PBHs in the mass range of $10^{-1}-10^7 ~M_\odot$, to which we have claimed the survey is sensitive, then this would also have profound implications. If PBHs in this mass range constitute even a small fraction of the DM density, it would rule out weakly interacting massive particles (WIMPs) as a candidate for the rest of the dark matter. Refs.~\cite{Lacki:2010zf, Boucenna:2017ghj, Adamek:2019gns, Kadota:2021jhg} have argued that annihilation of WIMP DM would be greatly enhanced due to accretion around PBHs in this mass range, and this would lead to a large indirect detection signal which is incompatible with present limits for a PBH density fraction $f>10^{-9}$.
A technical implication of our work is that our novel lensing probability calculation can be easily adapted to predict the rate of lensing events due to UCMCHs/DM sub-halos, or ordinary star-star lensing.
This paper is organized as follows. In sec.~\ref{sec:Gaia}, we describe the properties of the \Gaia{} satellite that are relevant for AML, and we also discuss the \Gaia{}~eDR3 data release from which we form a model of the stellar distribution in the Galaxy. In sec.~\ref{sec:LensingProb}, we describe the expected signals due to PBH induced AML events and we provide analytic expressions for the event durations and lensing probabilities for stars in the Galaxy. We also introduce the distinction between SDLE, IDLE, and LDLE type events in this section. In sec.~\ref{sec:lensingrate}, we combine the lensing probability calculation with the Galactic stellar model to numerically compute the expected total number of lensing events that \Gaia{} will see over its mission lifetime, and we also compute the expected distribution of some simplified AML observables. Next, in sec.~\ref{sec:background}, we describe various background processes to PBH searches. We also describe which backgrounds we expect to be important, and some background reduction techniques. In sec.~\ref{sec:Results}, assuming negligible astrophysical backgrounds, we then calculate an expected exclusion curve on the PBH parameter space. In sec.~\ref{sec:discussion}, we discuss some key assumptions of our rate estimation and how the robustness of our exclusion curve can be improved by future numerical studies.
Finally, we summarize and conclude in sec.~\ref{sec:Summary}. In the appendix, we derive some analytic expressions for the event observables.
\section{The \Gaia{} telescope and stellar catalog}
\label{sec:Gaia}
\emph{Gaia} is a space-based optical telescope designed to obtain precise positions and velocities of stars in our Galaxy. \emph{Gaia} started surveying the sky in July 2014 and has already surpassed its expected 5 year mission lifetime. It is expected that \Gaia{} will provide time-series data for astrometric position, proper motion, and parallax for more than a billion stars~\cite{2016A&A...595A...1G}. With such exquisite time-series data, \Gaia{} has the potential to be sensitive to glitches in the trajectories of stars due to astrometric microlensing events. \Gaia{} has not yet publicly released time-series data, although this data is expected to be released in the future~\cite{2021A&A...649A...1G}. Thus, at the moment we cannot directly analyze \Gaia{} data for potential AML signals.
The goal of our work is twofold. First, we would like to estimate the expected sensitivity of \Gaia{} to AML signals due to PBH induced lensing of background stars. Second, we would like to predict the distribution of event observables such as location on the sky, maximum astrometric deflection angle, and event durations, for different assumptions of the PBH mass and abundance. As and when time-series data of \Gaia{} is made available, we expect that our work will prove useful when analyzing this data.
In this work, we utilize information from \Gaia{} in two distinct ways. First, we make use of the known detector properties, such as how often \Gaia{} scans a particular region of the sky and the uncertainty on the astrometric position of a star in a single pass, to estimate the probability of \Gaia{} detecting a lensing signature. Second, although the \Gaia{} collaboration has not released \textit{time-series} data, they have already made several public releases of high precision, \textit{time-averaged} astrometric positions, proper motion, and parallaxes of nearly 1.5~billion stars~\cite{2021A&A...649A...1G}.
This catalog already gives us the most precise map of the Milky~Way. We will therefore use this catalog to predict the event rate for AML signals.
In sec.~\ref{subsec:GaiaDetProperties}, we discuss the detector properties that will be relevant for estimating the lensing probabilities. In sec.~\ref{subsec:GaiaEDR3Catalog}, we will then discuss the \Gaia{} data release and how we build a stellar catalog from the data to predict the expected AML event rate and event observable distributions.
\subsection{\Gaia{} detector properties}
\label{subsec:GaiaDetProperties}
\Gaia{} is located at the Lagrange point L$_2$. The satellite has two telescopes with independent fields-of-view, each of angular size $0.72^\circ \times0.69^\circ$~\cite{2012A&A...538A..78L}, with an angular separation of $106.5^\circ$ between the two telescopic arms. \Gaia{} spins about an axis inclined at an average angle of 45$^\circ$ with respect to the line joining the detector and the sun with an angular speed of $60 \textrm{ arcsec/sec}$. This rotation axis precesses at a rate of $5$ revolutions per year. \Gaia{} also executes a complicated motion around the Lagrange point L$_2$. As the satellite spins about its rotation axis, the fields-of-view sweep out different regions of the sky. The precession of the rotation axis ensures that after a single rotation the satellite is scanning a slightly different patch of the sky. Given the precession rate and the spinning rate, it is expected that on average a given star would be seen once in each telescopic arm as \Gaia{} rotates, and would then drop out of the fields-of-view. The observation of a star in a single telescope during a single rotation constitutes one ``pass''. The advantage of the two fields-of-view is that this allows \Gaia{} to perform \textit{global} astrometric measurements of the position of a star relative to other background stars in another field-of-view. Thus, in a single pass, \Gaia{} will be able to make one observation of the astrometric position of a star which can later be assigned a specific global astrometric co-ordinate (say, \{$\alpha, \delta$\} in Galactic co-ordinates).
\emph{Gaia}'s motion results in a complicated scanning law~\cite{2016A&A...595A...1G} which determines the time-interval between successive observations of the same star; this interval is not constant. Over an observational time scale of $t_\textrm{obs} = 5$~years, \Gaia{} is expected to observe each star approximately 70 times (35 times in each telescopic arm). Thus, at the end of the observational period \Gaia{} is expected to have non-uniformly sampled time-series data with about 70 astrometric positions for each star. For simplicity, we will treat the two successive observations of a star in each telescopic arm as effectively a single pass. We then take the time-interval between these effective passes to be uniform, with a spacing $t_s = 5 \textrm{ years}/35 \approx 52.2 \textrm{ days}$\footnote{In principle, the observation of a star in the two different telescopic arms can be considered to be two separate passes and could potentially be used to search for short duration microlensing events with time scales of a few hours. However, it seems unlikely given \Gaia{}'s sampling, that such events would occur at a significant enough rate to be detectable for realistic PBH distributions, so we will not discuss this possibility further in this work.}.
\Gaia{} is equipped with three color bands, the white-light G-band (330-1050 nm), the Blue prism Photometer (BP-band: 330-680 nm), and the Red prism Photometer (RP-band: 640-1050 nm). It is sensitive to stars as faint as 21 G-band magnitude. The uncertainty on the astrometric position of a star in a single pass depends upon the apparent brightness. In order to model the uncertainty, we can use a fitted model as a function of $m_G$ (G-band magnitude) given in ref.~\cite{2018A&A...618A..44B}. This model is based on the Monte Carlo centroiding simulations done in ref.~\cite{2018MNRAS.476.2013R}. Rather than a direct measurement of the uncertainty on the absolute position of a star in a single pass, the model reports different uncertainties across (AC) and along (AL) the scanning direction of \Gaia{}, as a star passes through the spinning fields-of-view of each telescope. These fitted uncertainties are given by the formula,
\begin{align}
\label{eq:AsigmaSim}
\widetilde{\sigma}_a = & \frac{1}{\sqrt{2}}\begin{cases}
\begin{cases}
0.200+0.483 e^{0.690(m_G-12)} \textrm{ mas} & \textrm{\, for \,} m_G \leq 13, \\
1.140+0.420 e^{0.771(m_G-12)} \textrm{ mas} & \textrm{\, for \,} m_G > 13
\end{cases} & ; \textrm{\, AC} \\
0.030 + 0.0111 e^{0.701 (m_G -12)} \textrm{ mas}& ; \textrm{\, AL}.
\end{cases}
\end{align}
The uncertainties inside the curly brackets are for a single telescopic arm. Since we are treating successive observations by both telescopic arms as a single pass, we have divided these uncertainties by a factor of $\sqrt{2}$ to characterize the global astrometric position error.
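As a cross-check, the fitted model of eq.~\eqref{eq:AsigmaSim} can be transcribed directly; the function below is our own sketch (its name and interface are not from any \Gaia{} software), returning the combined-arm uncertainty in mas:

```python
import math

def sigma_sim_mas(m_g, direction="AL"):
    """Single-pass astrometric uncertainty (mas) from the fitted model,
    across (AC) or along (AL) the scanning direction, already divided by
    sqrt(2) to combine the observations of the two telescopic arms."""
    if direction == "AC":
        if m_g <= 13:
            sigma = 0.200 + 0.483 * math.exp(0.690 * (m_g - 12))
        else:
            sigma = 1.140 + 0.420 * math.exp(0.771 * (m_g - 12))
    elif direction == "AL":
        sigma = 0.030 + 0.0111 * math.exp(0.701 * (m_g - 12))
    else:
        raise ValueError("direction must be 'AC' or 'AL'")
    return sigma / math.sqrt(2)
```

The exponential terms are negligible for bright stars, so the model is nearly flat for $m_G \lesssim 13$ and degrades rapidly thereafter, as discussed below.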
Alternatively, we can use the empirically observed uncertainty along the scanning direction as reported in \Gaia{}~DR2~\cite{2018A&A...616A...2L}. We use a fit to this empirically determined uncertainty using the fitting function of ref.~\cite{2020A&A...640A..83K} given by,
\begin{align}
\label{eq:Asigma}
\sigma_{a} = & \frac{\sqrt{-1.631 + 680.766 z + 32.732 z^2}\times7.75+100}{\sqrt{N_\textrm{CCD}}} \, \, \mu\textrm{as},
\end{align}
where,
\begin{equation}
z = 10^{0.4\left(\max(m_G,14) - 15\right)}.
\end{equation}
In this formula, we use $N_\textrm{CCD} = 18$ as the number of CCDs in both telescopes together in which each star is imaged along the scanning direction.
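A direct transcription of eq.~\eqref{eq:Asigma} (our own sketch; the normalization by $\sqrt{N_\textrm{CCD}}$ follows the text above) makes the behaviour of this fit explicit: because of the $\max(m_G,14)$ in the definition of $z$, the uncertainty is constant for all stars brighter than $m_G = 14$ and grows rapidly for fainter ones.

```python
import math

def sigma_empirical_uas(m_g, n_ccd=18):
    """Empirical single-pass AL uncertainty in micro-arcseconds,
    using the fitting function with N_CCD imaging CCDs."""
    z = 10.0 ** (0.4 * (max(m_g, 14.0) - 15.0))
    return (math.sqrt(-1.631 + 680.766 * z + 32.732 * z**2) * 7.75
            + 100.0) / math.sqrt(n_ccd)
```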
We have plotted the simulation based uncertainties (AC and AL) as well as the empirically determined uncertainty (AL) as a function of $m_G$ in fig.~\ref{fig:sigma}. From the figure we can see that all the uncertainties are roughly constant for bright stars with $m_G\lesssim 13$. However, for fainter stars with $m_G \gtrsim 13$ the astrometric uncertainty degrades rapidly with increasing apparent magnitude. We also note that there is a slight difference between the empirical AL uncertainty and the simulated AL uncertainty. In practice, the typical astrometric uncertainty will lie between the uncertainties of the along and across scanning directions.
We will compute the expected exclusion on the PBHs parameter space in sec.~\ref{sec:Results} and the expected event observable distribution in sec.~\ref{sec:lensingrate}, using a fiducial choice of the empirically determined AL uncertainty ($\sigma_a$) as a reference uncertainty. This model gives a roughly constant value of $\sigma_{a} = 0.076$~mas up to $m_G \sim 14$.
\subsection{\Gaia{} eDR3 and stellar catalog}
\label{subsec:GaiaEDR3Catalog}
\Gaia{} has released an early version of its 3rd data release (eDR3)~\cite{2021A&A...649A...1G}, which contains details of sources that it observed between July 2014 and May 2017 (34 months). This data release does not include time-series information of different passes for individual stars. Rather, the \Gaia{} collaboration fits the time-series data to ``astrometric solutions'', which are parametric trajectories for each star that are assumed to follow rectilinear motion relative to the sun.
There are three possible astrometric solutions which are used to fit the trajectories of stars in the catalog. The type of solutions used to fit a particular star's trajectory depends upon the quality of data available for that star~\cite{2012A&A...538A..78L}. In particular, good quality color information is important for correcting chromatic effects which can lead to an offset in the image centroid~\cite{2016A&A...595A...3F}.
For those sources for which color information is of high quality and chromatic effects can be well corrected for, \Gaia{} reports a 5-parameter (5-p) solution. This solution is parameterized by the parallax ($\omega$), right ascension (r.a.) ($\alpha$), declination (dec.) ($\delta$), angular velocity along r.a. ($\mu_{\alpha*}$), and angular velocity along dec. ($\mu_\delta$)\footnote{$\mu_{\alpha*}$ is defined as $\mu_{\alpha*} \equiv \frac{d\alpha}{d t}\cos{\delta}$, whereas $\mu_{\delta} \equiv \frac{d \delta}{d t}$.}.
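The cosine correction in this convention is simple but easy to get wrong; a minimal sketch (the function name is ours):

```python
import math

def mu_alpha_star(dalpha_dt, delta_deg):
    """Proper motion along r.a. corrected by cos(dec.):
    mu_alpha* = (d alpha / d t) * cos(delta); units follow the input."""
    return dalpha_dt * math.cos(math.radians(delta_deg))
```

For example, a star at $\delta = 60^\circ$ with $d\alpha/dt = 10$ mas/yr has $\mu_{\alpha*} = 5$ mas/yr.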
When there is lower quality color information \Gaia{} reports a 6-parameter (6-p) solution, which includes all the parameters of the 5-p solution, plus an additional parameter $\nu_{\textrm{eff}}$. This extra parameter is an effective wavenumber, which allows for chromatic effects to be estimated~\cite{2016A&A...595A...3F}.
Finally, when the data quality is insufficient to obtain a 5-p or a 6-p solution, \Gaia{} reports a 2-parameter (2-p) solution. This solution is parameterized only by the r.a. ($\alpha$) and dec. ($\delta$).
The data released by \Gaia{} in eDR3 only specifies the values of these fitted parameters for each star rather than the full time-series data. The number of sources which are fitted to a given type of solution, and their typical astrometric position uncertainties are shown in tab.~\ref{tab:eDR3content}. Note that these uncertainties are on the fitted $\alpha$, $\delta$, which are obtained for multiple epochs of observation; they do not describe the uncertainty of the astrometric position from a single pass.
\begin{table}[h]
\centering
\begin{tabular}{ |p{4cm}||p{3cm}|p{5cm}| }
\hline
Source type & Number of \newline sources & Uncertainty in position\\
\hline
Total & 1,811,709,771 & \\
5-parameter astrometry & 585,416,709 & 0.01-0.02 mas ; $m_G<$15, \newline 0.05 mas ; $m_G$=17,\newline 0.4 mas ; $m_G$=20,\newline 1.0 mas ; $m_G$=21 \\
6-parameter astrometry & 882,328,109 & 0.02–0.03 mas ; $m_G<$15, \newline 0.08 mas ; $m_G$=17, \newline 0.4 mas ;
$m_G$=20, \newline 1.0 mas ; $m_G$=21\\
2-parameter astrometry & 343,964,953 & 1-3 mas \\
\hline
\end{tabular}
\caption{This table shows the number of sources present in the \Gaia{} eDR3 catalog which are brighter than $m_G=21$. It also summarizes the distribution of the sources over 5-parameter, 6-parameter, and 2-parameter solutions. The typical uncertainties on the fitted angular positions of the sources in the catalog are also shown. These uncertainties correspond to the data taken by \Gaia{} over 34 months.}
\label{tab:eDR3content}
\end{table}
According to ref.~\cite{2021A&A...649A...1G}, time-series data will be available from the 4th data release (DR4) onwards. In order to estimate the total number of candidate lensing events expected to be seen by \Gaia{} over $t_\textrm{obs} = 5$ years of data collection, we attempt to build a simple mock catalog of the stars that \Gaia{} will see, \emph{and} which will also have high quality time-series data that can be used to identify potential lensing signals due to AML induced deflections in the apparent trajectories of stars. To this end, we make use of the catalog of stars already seen by \Gaia{} and reported in eDR3. While eDR3 only corresponds to 34 months of collected data, it is unlikely that there will be a significant number of new stars discovered by \Gaia{} which will have such a high quality of data so as to be sensitive to lensing signals. Thus we take the eDR3 catalog to be representative of all the stars (with data quality suitable for precision astrometric analysis) that \Gaia{} will see after 5 years of observation. We further select from among these stars to identify those which are likely to be suitable for detecting an AML signature.
For the purposes of estimating the expected number of lensing candidate events, we need to know the 3D position of each star ($\alpha$, $\delta$, $D_s$), where $D_s$ is the distance to the star, and the apparent $G$-band magnitude $m_G$ of the star. The position is needed to estimate the number of lenses between us and the star, and the $G$-band magnitude is needed to estimate the astrometric uncertainty in the position of a star from a single pass. $D_s$ can be inferred from the parallax and thus can only be obtained for those stars for which 5-p or 6-p solutions are available. Thus, we prune our selection of \Gaia{} eDR3 stars to include only those which have either 5-p or 6-p solutions. Also, since for a large number of stars the observed parallax has large uncertainties, we use distances from the GeDR3\_dist catalog~\cite{2021AJ....161..147B}. This catalog converts parallax to distance after taking into account priors from a stellar distribution model for the Galaxy\footnote{We are trying to build a model for the stars in the Galaxy; the prior model of the stellar distribution in ref.~\cite{2020PASP..132g4501R} is informed and updated by \Gaia{} eDR3 data. These distances are not used as predicted measurements; rather, we use them in later sections to estimate the AML probabilities and lensing rates.}.
Once we apply the above selection criteria, the total number of stars in our filtered catalog is $\sim1.47$ billion. We have plotted the distribution of these stars as a function of $\alpha$, $\delta$, $D_s$, and $m_G$ in fig.~\ref{fig:GeDR3grid50} (here, and in future sections, we will switch conventions and use $\alpha$, $\delta$ to represent Galactic longitude and latitude, respectively). We can see from the figure that most stars are unsurprisingly concentrated near the Galactic center at $\alpha = \delta = 0^\circ$ and $D_s = 8.5$ kpc. We can also see from the $m_G$ distribution of the stars in the filtered catalog (bottom-right panel in the figure) that the number of fainter objects increases up to $m_G=20.7$ and then cuts off sharply. Although \Gaia{} may yet discover more faint stars, omitting such stars from our mock catalog will not affect our final lensing event rate prediction, since such faint stars are expected to have very large astrometric errors on their positions, which will make detecting a lensing signal difficult. This can be seen by comparing the $m_G$ distribution of the catalog with fig.~\ref{fig:sigma}, which shows the astrometric positioning uncertainty as a function of $m_G$.
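The selection described above amounts to a simple row filter. In the sketch below the rows are toy stand-ins with invented values, and the field names and solution-type codes (31 for 5-p, 95 for 6-p, 3 for 2-p, patterned on the \Gaia{} archive's \texttt{astrometric\_params\_solved} bitmask) are assumptions that should be checked against the actual release documentation:

```python
# Toy stand-in rows for the eDR3 source table (all values invented).
catalog = [
    {"source_id": 1, "params_solved": 31, "m_g": 14.2, "dist_pc": 850.0},
    {"source_id": 2, "params_solved": 95, "m_g": 18.9, "dist_pc": 4200.0},
    {"source_id": 3, "params_solved": 3,  "m_g": 21.5, "dist_pc": None},
    {"source_id": 4, "params_solved": 31, "m_g": 20.3, "dist_pc": 7600.0},
]

def usable_for_aml(row, m_g_max=21.0):
    """Keep sources with a 5-p or 6-p solution (so that a parallax-based
    distance exists) that are bright enough for precision astrometry."""
    return (row["params_solved"] in (31, 95)
            and row["dist_pc"] is not None
            and row["m_g"] <= m_g_max)

selected = [row for row in catalog if usable_for_aml(row)]
```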
The catalog that we are using contains all point-like sources that \Gaia{} sees. This would include all types of stars such as variable stars, white dwarfs, etc., but also luminous non-stellar objects such as brown dwarfs, hot-luminous planets, etc. These objects can also be used as sources to detect AML signatures. The catalog also might contain several extra-galactic objects such as stars in the LMC and SMC, quasars, supernovae, etc. The distance to the extra-galactic objects taken from GeDR3\_dist would be incorrect because the prior model would attempt to place these objects in the Galaxy~\cite{2021AJ....161..147B}. However, since all of the extra-galactic objects only make up a tiny fraction of the catalog, we expect that this will not have a significant effect on our predictions of the AML signal rates\footnote{In ref.~\cite{2021A&A...649A...7G}, the authors attempted to extract LMC and SMC candidate objects in \Gaia{} eDR3 and found only 11,156,431 and 1,728,303, respectively. This is less than 0.1\% of the 1.8 billion objects in the full catalog.}.
In our analysis, we will consider all objects in the \Gaia{} catalog to be point sources (we will refer to them as stars) within the Galactic DM halo for the purpose of estimating the lensing rate due to PBHs.
\section{Lensing probability}
\label{sec:LensingProb}
In this section we will present analytic formulae for the probability of a particular source being lensed by a PBH DM candidate. We will first review some basics of AML in sec.~\ref{subsec:AMLbasics}. Then, in sec.~\ref{subsec:EventDurationsClassifications}, we will present a formula for the duration of AML signals that are potentially visible to \Gaia{}, and we will classify lensing events into three types based on their event durations: (i)~short duration lensing events (SDLEs) -- those with event durations shorter than \Gaia{}'s sampling time $t_s = 52.2 \textrm{ days}$; (ii)~intermediate duration lensing events (IDLEs) -- those with event durations longer than \Gaia{}'s sampling time but shorter than the mission lifetime $t_\textrm{obs} = 5$ years; and (iii)~long duration lensing events (LDLEs) -- those with event durations longer than $t_\textrm{obs}$. Previous literature~\cite{2000ApJ...534..213D} has primarily focused on LDLE detection by \Gaia{}. While SDLEs will almost always be missed by \Gaia{}, IDLEs will lead to glitches in the apparent trajectories of background stars which could possibly be detected by \Gaia{}. Our treatment of IDLEs is a novel feature of this work, and will be important for the detection of relatively low-mass PBHs ($0.1-100 ~M_\odot$). Finally, in sec.~\ref{subsec:AMLprob}, we will present analytic formulae for the lensing probabilities of both IDLEs and LDLEs in terms of a line-of-sight integral to a source of magnitude $m_G$ with co-ordinates $(D_s,\alpha,\delta)$.
\subsection{AML basics}
\label{subsec:AMLbasics}
When a star and a PBH lens are nearly aligned, gravitational lensing effects become important. Since both the stars and the PBH lenses have proper motion on the sky, lensing signals will be transient phenomena. Before discussing the transient behaviour and durations of such lensing events, we will first discuss the simple case of a source and a lens that are both static relative to us on the sky. A more detailed version of the discussion that follows can be found in refs.~\cite{1992grle.book.....S, 1996astro.ph..6001N, 1998LRR.....1...12W}.
We take the source to be located at a distance $D_s$ from us, and the lens, of mass $M$, to be at a distance $D_l<D_s$. We will denote the angular separation between the source and the lens as $\theta$. Gravitational lensing results in magnified multiple images of a source due to the presence of a massive lensing object in the foreground. We will focus on the case of a point-like source and a point-like lens, for which gravitational lensing results in two distinct images of the source, each with a different magnification. For any given source position relative to the lens, one of these images lies within the Einstein ring and the other lies outside of it. The Einstein ring is defined as a circle centered at the lens position on the celestial sphere with an angular size given by the Einstein angle,
\begin{align}
\label{theta_E}
\theta_E & = \sqrt{\frac{4G M }{c^2}\frac{D_s-D_l}{D_l D_s}}, \nonumber \\
& \approx 2.85 \text{ mas }\sqrt{ \frac{M}{1 ~M_{\odot}}\frac{1 \textrm{ kpc}}{D_s}\frac{D_s - D_l}{D_l}}.
\end{align}
In the case where the two images are not resolvable by a telescope, we only detect a single image located at the ``center-of-light'' (centroid) of the two images\footnote{The center-of-light is defined as the magnification-weighted average position of the two images.}. In general, this image is magnified and shifted relative to what we would have seen if there were no lens. The relative magnification of this image, as compared to the unlensed case, is given by
\begin{equation}
\label{eq:CentroidMag}
m = \frac{u^2+2}{u\sqrt{u^2+4}},
\end{equation}
where $u \equiv \theta / \theta_E$ is the source position relative to the lens, scaled by the Einstein angle.
We denote the angular shift in the image position on the celestial sphere as $\delta \theta = \theta_E \delta u $, where $\delta u$ is the angular shift in units of $\theta_E$ and is given by~\cite{1995ApJ...453...37W},
\begin{equation}
\label{eq:CentroidPos}
\delta u = \frac{u}{u^2 + 2}.
\end{equation}
The shift is along the line joining the lens to the true position of the source, and it is directed away from the lens.
Photometric microlensing attempts to detect the change in magnification, whereas astrometric microlensing attempts to detect the angular shift of the image. In fig.~\ref{fig:delta-u}, we show the astrometric shift, $\delta u$, and the change in photometric magnification, $m-1$, as a function of $u$, the scaled relative angle between the true position of the source and the lens.
Both the change in magnification and the angular shift are small for large separation between the source and the lens -- typically for a separation larger than a few $\theta_E$. However, as we can see from the figure and the formulas above, the change in magnification, $m-1$, falls off as $1 / u^4$, whereas the shift in angular position, $\delta u$, falls off as $1 / u$. This implies that, for large separation between the source and the lens, astrometric microlensing can be more sensitive to the presence of a lens. In practice, the relative sensitivity of these two microlensing techniques also depends on the instrumental sensitivity to changes in magnification versus sensitivity to astrometric position measurements. In the rest of this section, we will focus only on the astrometric microlensing technique.
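As an illustration of these scalings, eqs.~\ref{theta_E}, \ref{eq:CentroidMag}, and~\ref{eq:CentroidPos} can be transcribed directly into code (a minimal Python sketch; the unit-conversion constants and function names are ours, not part of the analysis pipeline):

```python
import math

# G/c^2 in kpc per solar mass (assumed conversion constant, ~1.48 km per M_sun)
G_OVER_C2 = 4.79e-17

def einstein_angle_mas(M, D_l, D_s):
    """Einstein angle (eq. theta_E) in mas, for M in M_sun and distances in kpc."""
    theta_rad = math.sqrt(4 * G_OVER_C2 * M * (D_s - D_l) / (D_l * D_s))
    return theta_rad * (180 / math.pi) * 3600 * 1e3  # rad -> mas

def magnification(u):
    """Centroid magnification m (eq. CentroidMag); u = theta / theta_E."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

def centroid_shift(u):
    """Astrometric shift delta_u in units of theta_E (eq. CentroidPos)."""
    return u / (u**2 + 2)
```

For $u = 10$, these functions give $m - 1 \approx 2\times10^{-4}$ but $\delta u \approx 0.1$, illustrating the $1/u^4$ versus $1/u$ falloffs quoted above.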
\subsection{Event durations and classifications of AML signals}
\label{subsec:EventDurationsClassifications}
Now we consider the effect of relative motion between the star and the lens on the apparent trajectory of the star. The relative motion of the star and the lens will change their relative separation parameter $u$. This in turn will lead to a change in the astrometric shift $\delta \theta$ of the apparent source position over time.
For simplicity, we consider only the motion of the source and neglect the motion of the lens. We will also ignore the effects of parallax due to the earth's motion, which we assume can be fitted for and subtracted from the apparent trajectory. Furthermore, we will assume that, over the observation period of \Gaia{}, the source executes purely rectilinear motion, with a tangential speed $v$ relative to us.
We define the lens plane as a plane perpendicular to our line-of-sight which contains the lens. The source's position projected on to this plane moves with an angular speed $\mu \approx v/D_l$, where $D_l$ is the distance to the lens.
The astrometric lensing effect can be treated as the instantaneous displacement of the apparent source position relative to the true position of the source, as the source moves across the sky. Up to an overall rotation of the source-lens configuration, a general trajectory of the source in the presence of a foreground lens can then be characterized by the speed $v$ and the impact parameter $u_0$. The impact parameter is defined as the angular separation at the point of closest approach between the source and the lens, when the source's position is projected onto the lens plane.
In fig.~\ref{fig:EventDuration}, we show in yellow, several rectilinear trajectories, with different impact parameters for a background star. Here the lens, indicated by the black dot, is located at the center of the figure. In this figure, we are showing relative angular positions $u_x, u_y$ on the celestial sphere, scaled to the Einstein angle $\theta_E$. The Einstein ring ($u = 1$) is indicated by a grey dashed circle. At every position the star undergoes an AML effect which leads to a shift in its apparent position relative to its true position. This deflection is along the line joining the lens to the source, and directed away from the lens. These deflections from the true trajectories are denoted with arrows in the figure. The locus of all the arrow-tips, shown in blue, indicates the apparent lensed trajectory that we would see. The effect of lensing is thus to create a distortion of the apparent trajectory away from the true simple rectilinear motion.
By examining the observed trajectory of a star and looking for deviations from rectilinear motion, we can detect signatures of PBH lenses. Next we discuss whether such deviations can be detected by \Gaia{}.
\vspace{3mm}
\Gaia{}'s sensitivity to AML distortion in the trajectory will depend on three factors:
\begin{enumerate}
\item The uncertainty in the position measurement of a star in a given pass.
\item The rate at which \Gaia{} makes observations of a particular star's trajectory (cadence).
\item \Gaia{}'s total observation time $t_\textrm{obs}$.
\end{enumerate}
As discussed in sec.~\ref{sec:Gaia}, \Gaia{} scans a particular region of the sky once every sampling time $t_s \sim 52.2$ days. In a given sampling, the uncertainty on the source position measurement is given by $\sigma_a(m_G)$ and depends on the G-band magnitude $m_G$ of the star, see sec.~\ref{subsec:GaiaDetProperties}. Thus, a minimum criterion for \Gaia{} to be sensitive to the deviation of the star's lensed trajectory from its true trajectory is that the astrometric shift $\delta \theta = \theta_E \delta u$ exceeds the astrometric resolution $\sigma_a(m_G)$. It is easy to show from eq.~\ref{eq:CentroidPos} that this condition is satisfied for angular separations between the star and the lens (expressed in $\theta_E$ units) which lie between $u_-$ and $u_+$, where
\begin{equation}
\label{eq:uplusminus}
u_{\pm} = \frac{1}{2} \left(\frac{\theta_E}{\sigma_a(m_G)} \pm \sqrt{\frac{\theta_E^2}{\sigma_a(m_G)^2} - 8}\right).
\end{equation}
In fig.~\ref{fig:EventDuration}, we also show in green, an annular region around the lens, where the inner radius of the annulus is $u_-$ and the outer radius is $u_+$. A star passing through this annular region will experience a deflection in its trajectory which is larger than \Gaia{}'s single pass astrometric resolution.
In the figure, we have shown a representative measure of the single-pass astrometric resolution, $\sigma_a(m_G)/\theta_E = 0.3$, as a red arrow annotation at the bottom right. The deflection arrows along each trajectory are color coded: arrows are shown in black if the deflection is smaller than this threshold, and in green if it is larger. For the portion of the trajectory that lies within the annular region, the deflection arrows are green, indicating that the deflection of the trajectory is above \Gaia{}'s detection threshold.
In the figure, trajectory $A$ enters the green annular region and thus produces an above-threshold deflection, after which it crosses into the region interior to the annulus ($u < u_-$), where the deflection falls below threshold. It then re-enters the annular region, where the deflection is again above threshold, before finally exiting the annulus, after which the deflection drops below threshold once more. Trajectory $B$ enters the annular region and produces an above-threshold deflection, but does not cross into the interior region before exiting the annulus. Finally, trajectories with a large impact parameter ($u_0 > u_+$), such as trajectory $C$, lie wholly outside the annulus and always have deflections below threshold. Such trajectories will always yield undetectable lensing signatures.
For $\theta_E<2\sqrt{2}\sigma_a(m_G)$, the annulus shrinks to zero size, and the lensing signal is too weak to be detectable for any impact parameter. Conversely, for $\theta_E>2\sqrt{2}\sigma_a(m_G)$ and trajectories with impact parameter $u_0 < u_+$, the AML deflection rises above \Gaia{}'s threshold sensitivity, yielding a potentially detectable lensing signature.
For a source moving with angular speed $\mu$, we can define a lensing duration $t_e$ for a given trajectory, during which the deflection of the apparent trajectory from the true trajectory is above \Gaia{}'s sensitivity threshold, i.e. $\delta \theta > \sigma_a(m_G)$. Thus, $t_e$ is just the time that would be spent by the star in the annular region. Note that this time $t_e$ need not be something that \Gaia{} actually observes, rather it should be thought of as a parameter associated with a source-lens configuration that will help us quantify whether the lensing signature is detectable by \Gaia{}.
We can divide lensing events into three categories based on the value of $t_e$,
\begin{itemize}
\item \textbf{Short duration lensing events (SDLEs)} -- those with $ t_e < t_s$. In this case, these events will most likely be missed by \Gaia{}, as the above threshold deflection is most likely to occur in between \Gaia{}'s samplings of the region of the sky containing the source and the lens.
\item \textbf{Intermediate duration lensing events (IDLEs)} -- those with $t_s< t_e < t_{\text{obs}}$. In this case, most of the trajectory that \Gaia{} observes will not suffer significant distortion due to lensing. However, there will be at least a few sample observations seen along the trajectory that show a lensing deflection $\delta \theta > \sigma_a(m_G)$. Such a trajectory can be easily distinguished from rectilinear motion because of the blip/glitch in the apparent trajectory.
\item \textbf{Long duration lensing events (LDLEs)} -- those with $ t_{\text{obs}} < t_e $. For these trajectories, almost the entire trajectory suffers from a significant distortion due to lensing, and in the extreme limit of large event duration, \Gaia{} will only see a fraction of the lensed trajectory. Counter-intuitively, this can make detection of such signals harder than the previous case since we will not have a reference for rectilinear motion against which we can detect a lensing signature.
\end{itemize}
In summary, detectable lensing events at \Gaia{} will require the following conditions to be met for the source-lens configuration.
\begin{enumerate}
\item First, the annular region of source positions around the lens, in which the deflection of the star's apparent position is above threshold, should exist. This condition requires $\theta_E > 2\sqrt{2} \sigma_a(m_G)$.
\item Second, a portion of the star's trajectory must pass through this annular region.
\item Third, the lensing event duration $t_e$, which is the time spent by the star in the annular region, must be greater than \Gaia{}'s sampling time $t_s$, i.e. the event should not be an SDLE. Depending on whether $t_e$ is greater than or less than $t_{\text{obs}}$, we will get IDLE or LDLE type events.
\item For IDLEs, we will assume that the lensing signal will be detected by \Gaia{} as long as the above three conditions are satisfied. For LDLEs, in addition to the conditions above, we will assume a threshold criterion for detection by \Gaia{}, requiring that the \textit{difference} between the deflections at the initial and final epochs of observation be greater than $\sigma_a(m_G)$ (see ref.~\cite{2000ApJ...534..213D}).
\end{enumerate}
Assuming that the annular region exists, and a star and lens come close enough to each other that the star passes through this region, we would like to estimate what the typical event durations would be. We compute the event duration averaged over viable impact parameters which we denote as $\langle t_e \rangle$. For a given impact parameter, the event duration is $t_e = l(u_0)/\mu$, where $l(u_0)$ is the (angular) length of the portion of the trajectory which lies within the annulus. Integrating this time over $u_0$ and dividing by the allowed range of $u_0$ gives the average event duration $\langle t_e \rangle$. The allowed range of impact parameters for which the star will pass through the annular region around the lens is $0<u_0<u_+$. The integration of $l(u_0)$ over $u_0$ therefore gives the area of the upper half of the annulus. Thus we have,
\begin{align}
\label{eq:eventduration}
\langle t_e \rangle = & \frac{\pi \theta_E (u_+^2 -u_-^2)}{2 u_+ \mu}, \nonumber\\
= & \frac{\pi D_l \theta_E}{ v} \frac{\frac{\theta_E}{\sigma_a(m_G)} \sqrt{\frac{\theta_E^2}{\sigma_a(m_G)^2} - 8}}{\frac{\theta_E}{\sigma_a(m_G)} + \sqrt{\frac{\theta_E^2}{\sigma_a(m_G)^2} - 8}},
\end{align}
where the quantity under the square root is positive for detectable lensing events. For a $10~M_{\odot}$ lens, with the source and lens at distances of $\mathcal{O}(1) \text{ kpc}$ from us, and an astrometric precision of 1~mas, the average event duration is $\sim 1$--$2$~years, giving us an IDLE-type event. For larger PBH masses, better astrometric precision (brighter sources), or larger distances to the source, we can get event durations longer than the \Gaia{} mission time $t_{\text{obs}}$, i.e., LDLE-type events.
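The estimate above can be reproduced directly from eq.~\ref{eq:eventduration} (a Python sketch with assumed unit-conversion constants; not part of our analysis pipeline):

```python
import math

KPC_KM = 3.086e16   # km per kpc
MAS_RAD = 4.848e-9  # radians per milliarcsecond
YEAR_S = 3.156e7    # seconds per year

def mean_event_duration_yr(theta_E_mas, sigma_a_mas, D_l_kpc, v_kms):
    """Average event duration <t_e> (eq. eventduration) in years, or None
    when theta_E < 2*sqrt(2)*sigma_a and the annulus does not exist."""
    R = theta_E_mas / sigma_a_mas
    if R * R <= 8:
        return None
    s = math.sqrt(R * R - 8)
    t_e = (math.pi * D_l_kpc * KPC_KM * theta_E_mas * MAS_RAD / v_kms) * R * s / (R + s)
    return t_e / YEAR_S

# A 10 M_sun lens at D_l = 1 kpc with a source at D_s = 2 kpc has
# theta_E = 2.85 * sqrt(10 * (1/2) * (2-1)/1) ~ 6.4 mas (eq. theta_E);
# with sigma_a = 1 mas and v = 200 km/s this gives <t_e> ~ 1.4 yr.
```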
\subsection{AML event probability due to PBH DM}
\label{subsec:AMLprob}
Given an assumption of the PBH parameters $(f, M)$, we will now derive an expression for the probability $P_\textrm{star}$ for a source with apparent magnitude $m_G$ and Galactic co-ordinates $(D_s, \alpha, \delta)$ to undergo a lensing event which is detectable by \Gaia{}. This probability will be the sum of the probabilities for a source to yield either an IDLE or an LDLE type event.
We split this calculation into two parts, first we will calculate the conditional probability of lensing of such a star $p_\textrm{lensing}^c$, assuming that there is a lens located at a distance $D_l$ between us and the source, such that the lens is localized within a solid angle $\Delta \Omega$. Then we will weight this conditional probability by the probability that such a lens is actually present, assuming that the lenses are in the form of PBHs of mass $M$, which make up a fraction $f$ of the DM in the Galactic halo.
We will first compute $p_\textrm{lensing}^c$. Let us consider a patch of sky that subtends a solid angle~$\Delta \Omega$. We assume that this patch contains a source located at a distance $D_s$ from us, and a lens located at a distance $D_l$ from us. As the source moves with speed $v$ on the sky over a time $t_{\text{obs}}$, it traverses an angular distance $\mu t_{\text{obs}}$, where $\mu$ is the angular speed of the source. We assume that $\Delta \Omega$ is sufficiently large that the entire trajectory of the star is well contained within this patch of the sky. If the lens lies within an impact parameter $u_+ \theta_E$ on either side of the star's trajectory, a portion of the trajectory will have an AML deflection larger than \Gaia{}'s threshold resolution $\sigma_a(m_G)$. However, the impact parameter $u_0$, and hence the event duration $t_e$, will depend upon the angular orientation of the source's velocity vector. We take our formula in eq.~\ref{eq:eventduration} for $\langle t_e \rangle$, averaged over impact parameters, to represent the typical event duration averaged over orientations of the source velocity vector.
For SDLEs, although the deflection is above threshold, the event duration $\langle t_e \rangle$ for which the signal lasts is smaller than the sampling time, and the event will therefore be missed. In such a case we assign $p_\textrm{lensing}^c = 0$.
For IDLEs ($t_s<\langle t_e \rangle<t_\textrm{obs}$), the bulk of the trajectory is sampled while the star is not undergoing any lensing signal and therefore the apparent motion appears rectilinear. However, for a small portion of the trajectory, during which it is sampled at least a few times, the trajectory will show a clear deviation from rectilinear motion, characteristic of an IDLE. In this case we assign $p_\textrm{lensing}^c = \frac{\delta \omega_{\textrm{IDLE}} }{\Delta \Omega}$, where $\delta \omega_{\textrm{IDLE}} =\mu t_\textrm{obs} \times 2 u_+ \theta_E $ is the angular area of an imaginary rectangle swept out by the source as it moves across the sky, of width $2 u_+ \theta_E$. If the lens lies within the rectangle, it would lead to an IDLE type event.
For LDLEs ($t_\textrm{obs} < \langle t_e \rangle$), although the event duration is large, and the trajectory will be sampled frequently during a period in which the deflection of the source's apparent position from its true position is above the \Gaia{} threshold, there is no reference rectilinear part of the apparent trajectory against which we can compare this deviation. In this case, a possible requirement for the threshold for detection could be to demand that the difference between the deflections at the start and at the end of the observational period is larger than $\sigma_a(m_G)$. In ref.~\cite{2000ApJ...534..213D}, the authors made exactly this assumption and computed $p_\textrm{lensing}^c = \frac{\delta \omega_{\textrm{LDLE}}}{\Delta \Omega}$, where $\delta \omega_{\textrm{LDLE}} = \mu t_\textrm{obs}\times 2 \sqrt{\frac{ v t_{\text{obs} } }{D_l} \frac{1}{\sigma_a(m_G)}}\theta_E $.
In summary we find,
\begin{equation}
\label{eq:plensing}
p_\textrm{lensing}^c = \begin{cases}
0 & ; \, \langle t_e \rangle < t_s , \\
\frac{2}{\Delta \Omega} \left(\frac{\theta_E}{\sigma_a(m_G)} + \sqrt{\frac{\theta_E^2}{\sigma_a(m_G)^2} - 8}\right) \theta_E \mu t_{\textrm{obs}} & ; \, t_s < \langle t_e \rangle < t_{\textrm{obs}}, \\
\frac{2}{\Delta \Omega} \sqrt{\frac{t_{\text{obs} } v}{D_l} \frac{1}{\sigma_a(m_G)}} \theta_E \mu t_{\textrm{obs}} & ; \, t_{\textrm{obs}} < \langle t_e \rangle.
\end{cases}
\end{equation}
This computation of $p_\textrm{lensing}^c$ is valid only when the astrometric deflection is above threshold. To be explicit, we should multiply the right-hand-side of the above formula by the step function $\Theta(\theta_E - 2\sqrt{2}\sigma_a(m_G))$, in order to ensure this condition. In computing this probability above, we have averaged over an assumed uniform distribution of the star and the lens positions within the angular window of size $\Delta \Omega$, and we have also averaged over different orientations of the source velocity vector.
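For concreteness, eq.~\ref{eq:plensing}, together with the step-function condition above, can be transcribed as follows (a Python sketch; the units and variable names are our own choices):

```python
import math

def p_lensing_c(theta_E, sigma_a, v, D_l, t_s, t_obs, delta_omega):
    """Conditional lensing probability (eq. plensing).
    theta_E, sigma_a in rad; v in km/s; D_l in km; t_s, t_obs in s;
    delta_omega in sr. The angular speed is mu = v / D_l."""
    R = theta_E / sigma_a
    if R * R <= 8:
        return 0.0  # Theta(theta_E - 2*sqrt(2)*sigma_a) = 0: never above threshold
    s = math.sqrt(R * R - 8)
    mu = v / D_l
    t_e = math.pi * (theta_E / mu) * R * s / (R + s)  # <t_e>, eq. eventduration
    if t_e < t_s:
        return 0.0  # SDLE: missed between samplings
    if t_e < t_obs:  # IDLE
        return 2 * (R + s) * theta_E * mu * t_obs / delta_omega
    # LDLE: start/end deflection-difference criterion
    return 2 * math.sqrt(v * t_obs / (D_l * sigma_a)) * theta_E * mu * t_obs / delta_omega
```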
\vspace{3mm}
Now, for a particular star located at a given ($D_s, \alpha, \delta$), we can estimate the probability $P_\textrm{star}$ of such a star undergoing a microlensing event caused by PBHs of mass $M$, that make up a fraction $f$ of the Galactic DM halo. We make the assumption that the probability of lensing of a particular star is small and the lens distribution is diffuse enough such that the apparent trajectory of the star is only affected by a single lens at a time. We call this the unilensing assumption.
$P_\textrm{star}$ can then be computed by weighting the conditional probability $p_\textrm{lensing}^c$ by the Poissonian probability of having a single lens in the angular window $\Delta \Omega$, located at a distance between $D_l$ and $D_l + \Delta D_l$ between us and the source. The Poissonian probability can simply be taken to be the average number of lenses in the aforementioned region. We then sum this over all distances $D_l$ between us and the source.
Thus we obtain,
\begin{align}
\label{eq:P}
P_\textrm{star} = & \int_{0}^{D_s} D_l^2 dD_l \Delta \Omega \frac{f}{M}\rho_{\textrm{DM}}(D_l,\alpha,\delta) p_\textrm{lensing}^c,
\end{align}
where we assume that the DM density is given by a standard spherically-symmetric NFW profile~\cite{NFWparameter} about the Galactic center,
\begin{equation}
\rho_{\textrm{DM}} = \frac{\rho_0}{\frac{r}{r_s}\left(1+ \frac{r}{r_s}\right)^2},
\end{equation}
where $\rho_0 =1.06\times10^7 ~M_{\odot}/\textrm{kpc}^3 $ is the characteristic density and $r_s = 12.5 \textrm{ kpc}$ is the DM scale radius. The distance $r$ of a lens from the center of the Galaxy can be written in terms of the distance of the lens from the earth's position, $D_l$, as $r = \sqrt{D_l^2 + r_e^2 - 2 D_l r_e \cos{\delta} \cos{\alpha}}$. Here, $r_e = 8.5 \textrm{ kpc}$ is the distance between the earth and the Galactic center, and $\alpha, \delta$ give the position of the star in Galactic co-ordinates. Note that the $\Delta \Omega$ factor cancels against the factor $1/\Delta \Omega$ in $p_\textrm{lensing}^c$. Thus, we are left with a single line-of-sight integral to the star in order to compute $P_\textrm{star}$.
For a given hypothesis of the PBH mass $M$ and density fraction $f$, the probability $P_\textrm{star}$ is a function of (i) the location $(D_s,\alpha, \delta)$, (ii) the velocity $v$, and (iii) the apparent magnitude $m_G$ (through the astrometric resolution $\sigma_a(m_G)$), of the star. The line-of-sight integral can be numerically evaluated for a given star in the \Gaia{} catalog, under a specific hypothesis for the PBH parameters. Care must be taken to evaluate $\langle t_e \rangle$ to check which case of lensing applies, and care must also be taken to check the value of the step function argument, to ensure that the lensing signal is above threshold. Both these conditions depend on the value of $D_l$ and this complicates the numerical integration of the expression in eq.~\ref{eq:P}.
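A minimal sketch of this line-of-sight integral (midpoint quadrature; here \texttt{p\_times\_dOmega} is a stand-in for $\Delta\Omega \, p_\textrm{lensing}^c$, which is independent of $\Delta\Omega$, and the function names are ours):

```python
import math

RHO_0 = 1.06e7  # M_sun / kpc^3, NFW normalization
R_S = 12.5      # kpc, NFW scale radius
R_E = 8.5       # kpc, earth-Galactic-center distance

def rho_dm(D_l, alpha, delta):
    """NFW density at distance D_l (kpc) along Galactic direction (alpha, delta) in rad."""
    r = math.sqrt(D_l**2 + R_E**2 - 2 * D_l * R_E * math.cos(delta) * math.cos(alpha))
    x = r / R_S
    return RHO_0 / (x * (1 + x)**2)

def P_star(D_s, alpha, delta, f, M, p_times_dOmega, n=200):
    """Line-of-sight integral of eq. P, evaluated with the midpoint rule."""
    dD = D_s / n
    total = 0.0
    for i in range(n):
        D_l = (i + 0.5) * dD  # midpoint avoids the r -> 0 divergence at the endpoints
        total += D_l**2 * (f / M) * rho_dm(D_l, alpha, delta) * p_times_dOmega(D_l) * dD
    return total
```

In practice, \texttt{p\_times\_dOmega} must switch between the IDLE, LDLE, and zero branches as a function of $D_l$, which is the complication noted above; linearity of $P_\textrm{star}$ in $f$ is manifest in the integrand.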
\vspace{3mm}
To minimize the computational time, instead of evaluating $P_\textrm{star}$ individually for each star in the \Gaia{} catalog, we tabulated its values. First, we fixed the values of PBH parameters $(f,M)$. We assumed a fixed tangential stellar velocity $v = 200 \textrm{ km/sec}$. Then, we numerically evaluated the expression in eq.~\ref{eq:P} on a grid of $(D_s,\alpha,\delta,m_G)$ values. We show the range of values of these parameters that we have scanned over in tab.~\ref{tab:GeDR3Coarse-Grained}.
We then repeated our tabulation of $P_\textrm{star}$ for different assumptions of the PBH parameters. The dependence on $M$ is complicated, because $\theta_E$ depends on $M$, and $P_\textrm{star}$ depends on $\theta_E$ in a non-trivial way. However, the dependence of the probability on $f$ can be found by just a trivial overall rescaling.
\begin{table}[h]
\centering
\begin{tabular}{ |p{3cm}||p{2.5cm}|p{2.5cm}| p{1.5cm}| }
\hline
Parameters& min value &max value& grid size\\
\hline
$\alpha$ & -180 deg & 180 deg&50\\
$\delta$ & -90 deg & 90 deg&100\\
$D_s$& 0.5 kpc& 20 kpc&100\\
$m_G$ & 10 & 21.6&50\\
\hline
\end{tabular}
\caption{Rather than evaluate $P_\textrm{star}$ individually for each star in the \Gaia{} catalog, we tabulate $P_\textrm{star}$ on a grid of ($D_s,~\alpha,~\delta,~m_G$) values. This table shows the minimum and maximum values of each parameter, and the grid size gives the number of samples that we take for each parameter, i.e., we evaluate $P_\textrm{star}$ on a $50\times100\times100\times50$ grid. To evaluate $P_\textrm{star}$ for a given star in the \Gaia{} catalog, we simply read off the value from the nearest point on our grid.}
\label{tab:GeDR3Coarse-Grained}
\end{table}
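The nearest-grid-point lookup described in the caption can be sketched as follows (illustrative Python; the table layout and all names are ours):

```python
def nearest_index(value, lo, hi, n):
    """Index of the nearest node on a uniform grid of n points spanning [lo, hi]."""
    t = (value - lo) / (hi - lo) * (n - 1)
    return min(max(int(round(t)), 0), n - 1)  # clamp out-of-range values

# Axis ranges and grid sizes from the table: alpha, delta, D_s, m_G
AXES = [(-180.0, 180.0, 50), (-90.0, 90.0, 100), (0.5, 20.0, 100), (10.0, 21.6, 50)]

def lookup_P_star(table, alpha, delta, D_s, m_G):
    """Read P_star for one star off the precomputed 50x100x100x50 grid."""
    i, j, k, l = (nearest_index(v, lo, hi, n)
                  for v, (lo, hi, n) in zip((alpha, delta, D_s, m_G), AXES))
    return table[i][j][k][l]
```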
We show the results of our numerical evaluation of $P_\textrm{star}(D_s,\alpha,\delta,m_G)$ for $f = 1$ and $M=14 ~M_\odot$ in fig.~\ref{fig:LensingEvents}. To visualize this probability we have plotted its distribution as a function of two parameters at a time, while uniformly averaging over the other parameters. We have also shown the 1D distributions of this probability as function of each of the individual parameters.
From the figure, we can see several interesting features. First, the probability of lensing of a star is largest in the direction of the Galactic center $\alpha = 0^\circ$, $\delta = 0^\circ$. This is expected because of the large DM density, and hence large number of potential lenses in that direction. Second, we see that $P_\textrm{star}$ increases as a function of $D_s$ before eventually saturating beyond the NFW scale radius. This saturation is expected, as there is no further significant gain in the number of lenses along any line-of-sight at large Galactic radii. Finally, we see that $P_\textrm{star}$ is almost constant as a function of $m_G$ and then decreases sharply for $m_G\gtrsim15$. This behaviour can be attributed to \Gaia{}'s astrometric position measurement uncertainty which becomes large for $m_G>14$, see eq.~\ref{eq:Asigma}.
We also note here that it is possible to define $P_\textrm{star}^\textrm{IDLE}$ and $P_\textrm{star}^\textrm{LDLE}$ as the probabilities for a given star to undergo either an IDLE or LDLE, respectively. These can be computed by using eq.~\ref{eq:P}, with a selection of the appropriate case for $p_\textrm{lensing}^c$ in eq.~\ref{eq:plensing}. $P_\textrm{star}$ is then just the sum of these individual probabilities. If we vary the mass $M$ of the PBHs, this changes the typical event duration. As a result, in addition to the overall probability $P_\textrm{star}$ changing as a function of $M$, the relative probabilities of IDLE and LDLE type events also change. We will utilize the individual computations of $P_\textrm{star}^\textrm{IDLE}$ and $P_\textrm{star}^\textrm{LDLE}$ in sec.~\ref{sec:obs_dist}, when we try to compute the expected distribution of event observables for each type of lensing event.
\section{Lensing signal rate due to PBHs}
\label{sec:lensingrate}
Now, we are ready to numerically compute the expected total number of lensing events that \Gaia{} will see over a time period $t_\textrm{obs}$, for a given hypothesis of the PBH parameters $(f,M)$. This result will be shown in sec.~\ref{sec:lensingrate_numerical}. Later, in sec.~\ref{sec:obs_dist}, we will also introduce some simplified event observables for IDLE- and LDLE-type events, and compute the expected distributions of these observables.
\subsection{Computing the number of lensing events that \Gaia{} will see}
\label{sec:lensingrate_numerical}
We start by fixing a hypothesis for the PBH parameters $(f,M)$.
In order to calculate the expected total number of lensing events ($N_l$) that \Gaia{} will see over $t_\textrm{obs}= 5$~years, we need to compute the average number of lensing events that we expect to see for each \emph{individual} star in the \Gaia{} catalog, and then add up all these values.
For each star in our reduced \Gaia{} catalog, described in sec.~\ref{subsec:GaiaEDR3Catalog}, we have $D_s$, $\alpha$, $\delta$, and $m_G$ values. We can then numerically calculate the probability $P_\textrm{star}$ (eq.~\ref{eq:P}) that a lensing event will occur and will be detected by \Gaia{} in $t_\textrm{obs} = 5$ years. This probability can also be interpreted as the average number of lensing events associated with a given star. Since the computation of $P_\textrm{star}$ is numerically challenging, we determine its value from the nearest grid point in $D_s$, $\alpha$, $\delta$, and $m_G$ at which we have tabulated $P_\textrm{star}$. We then sum over all the stars in the \Gaia{} catalog to determine the average expected number of detectable lensing events. This is equivalent to weighting $P_\textrm{star}$ by the distribution of stellar locations and apparent magnitudes in the Galaxy; equivalently, for the choice of $f = 1$ and $M = 14~M_\odot$, it is a weighting of fig.~\ref{fig:LensingEvents} by fig.~\ref{fig:GeDR3grid50}. The result of this procedure is shown in fig.~\ref{fig:NlGeDR3Coarse-Grained}, in which we plot the total number of detectable lensing events as a function of $D_s$, $\alpha$, $\delta$, and $m_G$.
From fig.~\ref{fig:NlGeDR3Coarse-Grained}, we can see that the location on the sky where the maximal number of lensing events will occur is in the direction towards the Galactic center. This is because of a combination of a large number of stars along with a large DM density integrated along the line-of-sight in this direction. A more interesting trend is the dependence of the number of lensing events on $m_G$. As $m_G$ increases, \Gaia{}'s astrometric positioning errors also increase (see eq.~\ref{eq:Asigma}), and hence the lensing probability $P_\textrm{star}$ decreases. However, there are a larger number of stars with higher values of $m_G$ in the \Gaia{} catalog (see fig.~\ref{fig:GeDR3grid50}). The competition between these two opposing effects is what leads to the non-trivial dependence on $m_G$, leading to a peak in the distribution of number of lensing events near $m_G \simeq 19$~mag.
We can also examine how the distribution of event numbers changes as a function of the PBH mass $M$. Since the number of events is still expected to peak in the direction of the Galactic center, we focus our attention on the $m_G$ dependence of the expected number of events. In fig.~\ref{fig:mGDist}, we show, for different PBH masses $M$, the 1D dependence of the expected number of events on $m_G$, after marginalizing over all $D_s$, $\alpha$, and $\delta$ values. At low values of $M \lesssim~10^{-2} M_\odot$, the expected number of events drops rapidly to zero. This is because (i) the typical AML event duration drops below \Gaia{}'s sampling time $t_s$ and (ii) the typical AML deflection drops below \Gaia{}'s sensitivity threshold $\sigma_a(m_G)$. Together, these two factors make AML signals unobservable with \Gaia{} for low PBH masses. For large masses the expected number of events drops more smoothly, as can be seen in the figure. This drop at large masses occurs because there are fewer lenses for a fixed PBH mass density, and hence the probability of AML lensing decreases.
\subsection{Expected distribution of observables for AML events}
\label{sec:obs_dist}
We have seen in sec.~\ref{sec:LensingProb} that there are two kinds of AML signals that can be detected by \Gaia{}, IDLEs and LDLEs.
In the case of IDLEs, one would observe a rectilinear trajectory with a glitch, i.e. a deviation from apparent rectilinear motion lasting for the duration of the event $t_e$. Two simplified observables characterize such trajectories: the event duration $t_e$, during which the deflection from rectilinear motion is above \Gaia{}'s threshold, and the maximum deflection $(\delta \theta)_\textrm{max}$ of the trajectory away from rectilinear motion. For LDLE type events it is harder to find a simplified observable that truly characterizes the event. However, we can examine the relative deflection difference between the start point and the end point of the trajectory, measured away from the true rectilinear trajectory. Note that for LDLE type events, the event duration $t_e$ will not be an easily reconstructable observable. Given our computation of $P_\textrm{star}$ in the previous section, and more specifically of $P_\textrm{star}^\textrm{IDLE}$ and $P_\textrm{star}^\textrm{LDLE}$, we can plot the expected distribution of these simplified observables for a given hypothesis of PBH parameters.
\subsubsection{IDLE observables}
First, we need to compute, for a star located at $D_s, \alpha, \delta$, and with apparent brightness $m_G$, the average expected values of the observables $t_e$ and $(\delta \theta)_\textrm{max}$.
We had earlier evaluated the average event duration $\langle t_e \rangle$ over impact parameters in eq.~\ref{eq:eventduration}. We further average this over all candidate IDLE lensing events that such a star will undergo, due to lenses at various distances $D_l$ between us and the star along the line-of-sight. This gives us,
\begin{align}
\label{eq:te_avg_Dl}
\langle\langle t_e \rangle\rangle & = \frac{\int_0^{D_s} \langle t_e \rangle \frac{dP_\textrm{star}^\textrm{IDLE}}{d D_l}dD_l }{\int_0^{D_s} \frac{dP_\textrm{star}^\textrm{IDLE}}{d D_l}dD_l},
\end{align}
where the double angular brackets around $t_e$ denote averaging with respect to both the impact parameter and the candidate lens distance.
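A minimal numerical sketch of this lens-distance average follows; the two input functions are toy stand-ins for $\langle t_e \rangle(D_l)$ and $dP_\textrm{star}^\textrm{IDLE}/dD_l$, whose true forms come from the computation of sec.~\ref{sec:LensingProb}:

```python
import numpy as np

def double_average(obs_of_Dl, dP_dDl, Ds, n=1000):
    """Average an impact-parameter-averaged observable <obs>(D_l) over
    lens distances between us and the source star, weighted by the
    differential probability dP_star^IDLE / dD_l, as in the
    double-average formula of the text."""
    Dl = np.linspace(1e-3 * Ds, Ds, n)   # lens distances (avoid D_l = 0)
    w = dP_dDl(Dl)
    # Discrete weighted mean (uniform spacing cancels in the ratio).
    return float(np.sum(obs_of_Dl(Dl) * w) / np.sum(w))

# Toy stand-ins: <t_e>(D_l) falling with distance, and a weight that
# peaks at intermediate lens distances (neither is the real input).
te_avg = lambda Dl: 3.0 / (1.0 + Dl)      # years
weight = lambda Dl: Dl**2 * np.exp(-Dl)   # arbitrary normalization
te_double_avg = double_average(te_avg, weight, Ds=8.0)
```

Because the same weight appears in the numerator and denominator, the normalization of $dP_\textrm{star}^\textrm{IDLE}/dD_l$ drops out of the average.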
We then compute a similar double average for $(\delta \theta)_\textrm{max}$. The averaging over impact parameters can be performed analytically, and is given by,
\begin{align}
\langle (\delta \theta)_\textrm{max} \rangle & = \theta_E\frac{1 + \log \left[\frac{1}{8}K(K+\sqrt{K^2 - 8})\right]}{K+\sqrt{K^2 - 8}},
\end{align}
where $K=\frac{\theta_E}{\sigma_a(m_G)}$. The derivation of this equation is given in appendix~\ref{sec:appendixObs}. We can then perform a similar further averaging over lens distances $D_l$, as we did for $t_e$.
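For reference, the closed-form impact-parameter average above can be evaluated directly; the function below is a straightforward transcription, valid for $K > \sqrt{8}$ (events comfortably above threshold):

```python
import math

def avg_max_deflection(theta_E, sigma_a):
    """Impact-parameter-averaged maximum IDLE deflection
    <(delta theta)_max>, transcribed from the closed-form expression,
    with K = theta_E / sigma_a.  Requires K > sqrt(8)."""
    K = theta_E / sigma_a
    root = math.sqrt(K**2 - 8.0)
    return theta_E * (1.0 + math.log(K * (K + root) / 8.0)) / (K + root)
```

As a sanity check, the result always lies between the detection threshold $\sigma_a$ and the Einstein angle $\theta_E$; e.g. for $K = 10$ it evaluates to $\simeq 0.21\,\theta_E$.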
We now compute these doubly averaged observables for each star in the \Gaia{} catalog and construct a 2D histogram of $\langle\langle t_e \rangle\rangle$ and $\langle\langle (\delta \theta)_\textrm{max} \rangle\rangle$. While populating the histogram, we weight each star's contribution by $P_\textrm{star}^\textrm{IDLE}$, i.e. the probability that the star will undergo an IDLE type event. The resulting histogram, shown in fig.~\ref{fig:sub-first} for $M=14~M_\odot$, gives the expected distribution of the observables for IDLE type events. We also show with vertical dashed lines the sampling time $t_s = 52.2$~days and the observation time $t_\textrm{obs}=5$~years.
From the figure we can see that the typical IDLE event durations lie between 1.5 and 4~years at this mass, i.e. between $t_s$ and $t_\textrm{obs}$. For lower PBH masses, the distribution of event durations drifts to lower values, and for sufficiently small masses, $M\lesssim 10^{-2}~M_\odot$, it falls below the sampling time of \Gaia{}. Such low mass PBHs would therefore typically not induce an observable lensing signal. As we shall see in the next section, the statistical backgrounds are significant for event durations $t_e \lesssim 2$~years. We will demand that the IDLE signals we are looking for have event durations greater than 2 years, so that this background is negligible; as a result, \Gaia{} is only sensitive to PBHs with masses $M\gtrsim 0.1~M_\odot$.
\subsubsection{LDLE observables}
In the case of LDLE type events, the relative deflection difference observable, averaged over impact parameters, is computed in appendix~\ref{sec:appendixObs}, and is given by,
\begin{equation}
\langle \Delta_\textrm{LDLE} \rangle = \frac{v t_\textrm{obs} }{D_l} \frac{2\log(u_T/u_\textrm{min})}{u^2_T - u^2_\textrm{min}},
\end{equation}
where $u_T = \sqrt{ \frac{ v t_\textrm{obs} }{D_l \sigma_a(m_G)} } $ and $u_\textrm{min}=\sqrt{2}$. We can then further average this over $D_l$ values, using a formula similar to eq.~\ref{eq:te_avg_Dl}, this time with $P_\textrm{star}^\textrm{IDLE}$ replaced by $P_\textrm{star}^\textrm{LDLE}$.
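A direct transcription of this expression is given below; it assumes consistent units, i.e. $\sigma_a$ in radians and $v t_\textrm{obs}/D_l$ expressed as an angle in radians:

```python
import math

def avg_ldle_deflection(v, t_obs, Dl, sigma_a):
    """Impact-parameter-averaged relative deflection difference
    <Delta_LDLE> for a long-duration lensing event, transcribed from
    the closed-form expression.  sigma_a is in radians; v*t_obs/Dl
    must likewise be dimensionless (an angle in radians)."""
    uT = math.sqrt(v * t_obs / (Dl * sigma_a))
    u_min = math.sqrt(2.0)
    return (v * t_obs / Dl) * 2.0 * math.log(uT / u_min) / (uT**2 - u_min**2)
```

The expression is positive whenever $u_T > u_\textrm{min}$, i.e. whenever the event is above threshold at the minimum impact parameter.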
We could now build a 1D histogram for this observable by populating it with all the stars in the \Gaia{} catalog, weighting each star's contribution by $P_\textrm{star}^\textrm{LDLE}$, i.e. the probability for that star to undergo an LDLE. However, instead of simply computing the 1D distribution of $\Delta_\textrm{LDLE}$, we compute the 2D distribution of $\Delta_\textrm{LDLE}$ \textit{and} $t_e$. Although $t_e$ is not a genuine direct observable for LDLEs, it can still be thought of as a parameter characterizing the trajectory, albeit one that is much harder to extract, and it allows a straightforward comparison to $t_e$ for IDLE type events. The 2D histogram of these observables for LDLE type events is shown in fig.~\ref{fig:sub-second} for several values of the PBH mass $M$, ranging from $14~M_\odot$ to $5.9 \times 10^5~M_\odot$.
Higher values of the PBH mass lead to larger event durations, but they give a similar distribution of the deflection observable. However, the total number of events decreases at large PBH masses because of the decrease in number of PBHs in the Galaxy for a fixed DM density. This decrease will limit the sensitivity of \Gaia{} to very high mass PBHs.
Note that for a given PBH mass, such as the $14~M_\odot$ shown in the figure, it is possible to have both IDLE and LDLE type events. Ideally, the distribution of $t_e$ would be continuous across the transition from IDLE to LDLE type events in figs.~\ref{fig:sub-first} and~\ref{fig:sub-second}; the lack of smooth continuity is a consequence of our assumption of very large event durations, $t_e \gg t_\textrm{obs}$, when deriving the probabilities and event rates for LDLE type events.
\section{Background rate}
\label{sec:background}
In this section we discuss various sources of background that can mimic the AML signal caused by PBHs. These backgrounds can lower the statistical significance of a potential PBH signal, and therefore careful modelling is required to estimate the background rate. We can categorize the backgrounds into three types: statistical, systematic, and astrophysical. The statistical background is due to uncertainty in the centroiding of the image of a given star. The systematic backgrounds are due to instrumental systematics. The astrophysical backgrounds are due to i)~astrophysical lenses which can mimic the effects of PBH lensing, and ii)~deviations from our assumption of rectilinear motion, for example due to a binary companion. A detailed study of all the backgrounds is beyond the scope of this work; we will, however, discuss the statistical and astrophysical backgrounds. For the statistical background we estimate the rate for the case of IDLEs and compare it to our predicted event rates from PBHs. We do not estimate the astrophysical background rates, but we discuss which sources of this background we expect to be relevant, and some possible ways to reduce them.
In the previous section we showed that the lensing signals can be characterized by a few observables. For example, for IDLEs, we would have the event duration $t_e$ during which the deflection is above \Gaia{}'s threshold sensitivity, and the maximum astrometric shift $(\delta \theta)_\textrm{max}$, whereas for LDLEs, we would have $\Delta_\textrm{LDLE}$. In addition, for both these types of events, we would also have the location in the sky $(D_s,\alpha, \delta)$ of the background star, where $D_s$ could be measured through parallax. Under a specific hypothesis of the PBH parameters $(f,M)$, we have computed the expected distribution of a few of these simplified observables for genuine AML signals from PBHs.
In principle, one could also augment these astrometric observables with \Gaia{}'s relatively crude photometric measurements to also characterize the PML magnification signal and predict the expected distribution. Each candidate lensing event can thus be thought of as a single point in the multi-dimensional space of these observables.
In general we expect that the backgrounds that could mimic an AML signal will have a different distribution in this space as compared to that of genuine AML signals. Thus, by applying suitable cuts on an observed distribution in this space, we should be able to enhance the signal-to-background ratio.
\subsection{Statistical Backgrounds}
\label{sec:background_stat}
Let us first discuss the case of statistical background for IDLEs. For an unlensed trajectory, since the centroiding of a star in \Gaia{}'s detectors has an intrinsic uncertainty of $\sigma_a (m_G)$, this would lead to statistical fluctuations in the observed position of a star. If these statistical fluctuations mimic the lensing signal due to PBHs, then they would form a statistical background.
While estimating the lensing probability for a given star $P_\textrm{star}$ due to PBHs, we assumed that the instantaneous astrometric deflection of a star's trajectory $\delta \theta$ is greater than \Gaia{}'s threshold sensitivity $\sigma_a (m_G)$. If the event duration is $ t_e $, then there would be $N_s \simeq t_e /t_s$ observations of the star's position with an above threshold lensing signal, where $t_s$ is \Gaia{}'s sampling time.
We can thus estimate the rate at which the statistical fluctuations of an unlensed trajectory generate a fake IDLE type lensing signature using the criteria above. For the statistical fluctuations to mimic a lensing signal with event duration $t_e$ (where $t_s < t_e < t_\textrm{obs}$), we need $N_s$ consecutive fluctuations in the centroiding, each of which is greater than $1$-$\sigma$ away from the true rectilinear trajectory. Here $N_s = t_e/t_s$ is the number of passes that \Gaia{} would make of the star during a time $t_e$. Moreover, these fluctuations all need to be in the same direction to mimic a lensing signal. If we assume that, for a given star in the \Gaia{} catalog, the centroiding fluctuations in a single pass are governed by a gaussian distribution with standard deviation $\sigma_a (m_G)$, then the probability of a greater than $1$-$\sigma$ single-sided deviation is $1-\textrm{erf}(1) \simeq 0.16$, where $\textrm{erf}$ is the error function. Thus, for a typical trajectory with $35$ samples of the star's position as measured by \Gaia{} over $t_\textrm{obs}=5$~years, the probability for a star to undergo such a fake event due to statistical fluctuations is,
\begin{equation}
P_\textrm{star}^\textrm{stat} = 2\times(0.16)^{N_s}(35-N_s),
\end{equation}
where $0.16^{N_s}$ is the probability for $N_s$ $1$-$\sigma$ single-sided deviations in the star's apparent trajectory, the factor of $2$ takes into account that the deviations can be on either side of the trajectory, and the factor of $35-N_s$ gives the number of ways of choosing the starting pass for which the fake lensing event begins. $P_\textrm{star}^\textrm{stat}$ does not depend on the specific properties of the star such as its location and apparent magnitude. Thus, we can directly multiply it by the number of stars in the \Gaia{} catalog to obtain the expected number of statistical background events.
The fake rate is very small for longer event durations ($t_s \ll t_e \lesssim t_\textrm{obs}$, or $1\ll N_s \lesssim 35$) because multiple consecutive passes must all show large deflections; e.g.\ for $t_e \simeq 2$~years, we find $N_s = 14$ and $P_\textrm{star}^\textrm{stat} = 3\times 10^{-10}$. Multiplying this by the number of stars in the \Gaia{} catalog ($1.47\times10^9$), we obtain $\sim 0.4$ background events with a 2~year event duration. For shorter event durations, we would find a much larger number of background events. Thus, to reduce this statistical background to negligible levels, we make the simple choice to only look at IDLEs with event durations greater than 2 years.
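This estimate is simple enough to check directly. The sketch below uses the exact one-sided tail probability $1-\textrm{erf}(1)\simeq 0.157$ rather than the rounded value $0.16$, and so gives a slightly smaller but comparable $\sim 0.3$--$0.4$ expected fake events:

```python
import math

def P_star_stat(Ns, n_samples=35):
    """Probability that N_s consecutive, same-sided, >1-sigma centroiding
    fluctuations fake an IDLE, out of n_samples passes over t_obs.
    Factor 2: either side of the trajectory; factor (n_samples - Ns):
    choices of starting pass."""
    p1 = 1.0 - math.erf(1.0)           # one-sided >1-sigma tail, ~0.157
    return 2.0 * p1**Ns * (n_samples - Ns)

# t_e ~ 2 years with t_s = 52.2 days gives N_s ~ 14:
P = P_star_stat(14)        # ~2.4e-10 (the text quotes ~3e-10 using 0.16)
N_bg = P * 1.47e9          # ~0.35 expected fake 2-year events in the catalog
```

The steep $p_1^{N_s}$ suppression is what makes the 2-year duration cut so effective.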
Here, we have looked at the dependence of the background rate on only one observable, the event duration $t_e$, and discussed the distribution of this observable due to the statistical background. It is conceivable that, by examining the multi-dimensional space of observables, the background could be reduced further, allowing nearly background-free searches for IDLE signals with event durations even shorter than 2 years.
For LDLE type events, estimating the statistical background is trickier, because there is no reference rectilinear motion against which to study a deviation. Instead, a numerical study could simulate several rectilinear trajectories with statistical fluctuations and compute the probability that some of these trajectories are statistical outliers, perhaps by quantifying their $\chi^2$ fit to a pure rectilinear trajectory. Such a detailed numerical study is beyond the scope of the present work, but we hope to take it up in future work.
\subsection{Astrophysical background}
\label{sec:background_astro}
There are several astrophysical sources of background that can mimic the lensing signatures of PBHs: i)~astrophysical lenses which can mimic the effects of PBH lensing, and ii)~deviations from our assumption of rectilinear motion, for example due to a binary companion. We will only briefly discuss these here, along with some suggestions for background reduction techniques.
Besides AML events due to PBHs, \Gaia{} is also expected to see AML events due to microlensing caused by foreground stars~(including objects formed from stellar collapse)~\cite{2002ApJ...576L.131A,2002MNRAS.331..649B, 2018A&A...618A..44B}, brown dwarfs~\cite{Belokurov:2001vh, 2018A&A...620A.175K, 2018AcA....68..351N}, and free-floating planets~\cite{2018Ap&SS.363..153H}. Such events form a background to the detection of PBHs through their AML signatures. These objects would predominantly lie in the disk and thus the spatial distribution of the lensing event could in principle be used to partially reduce this background. However, since both genuine AML signatures due to PBHs and such background events would peak in the direction of the Galactic center, we expect the spatial distribution of such events to provide only a modest background reduction.
Brown dwarfs and free-floating planets would have masses less than $\sim10^{-2}~M_\odot$. In practice, these objects would be indistinguishable from PBHs of similar mass. As we will see in the next section, our expected exclusion on the PBH parameter space is relevant only for PBH masses $M>10^{-2}~M_\odot$, with lower mass objects giving rise either to SDLEs, or below threshold lensing. Thus, we do not expect brown dwarfs and free-floating planets to be a relevant background.
Lensing signals due to foreground stars form a reducible background when the foreground star can be resolved by \Gaia{} and thus the source of lensing can be clearly identified. In such a case, either a portion of the trajectory or the entire trajectory could be excluded from an analysis. However, if the foreground star is too dim, for example if it is a black hole or a neutron star formed from stellar collapse, then lensing due to such objects would form an irreducible background. We would need to model the rate of such lensing events and their observable distribution to predict the expected background rate. The rate of AML signals due to unresolved foreground stars can be predicted in a way similar to that in which we predicted the event rate due to PBHs. One would have to perform a similar calculation as in sec.~\ref{sec:LensingProb}, where instead of using the DM density to characterize the lens distribution we would use a model distribution density for the dark lenses in eq.~\ref{eq:P}. In addition, we would have to modify the monochromatic mass function assumption used for the PBHs, and replace it by a distribution based on a mass model for these faint stellar objects.
A recent study along these lines using a population synthesis code was performed in ref.~\cite{2020ApJ...889...31L}. The authors constructed the distribution of a similar set of observables as those for our IDLEs ($(\delta \theta)_\textrm{max}$ and $t_e$)\footnote{The authors of ref.~\cite{2020ApJ...889...31L} actually used the Einstein ring crossing time, $\theta_E/\mu$, instead of the event duration $t_e$ as one of the simplified observables. To approximately relate the former to the latter, one can multiply it by $\theta_E/\sigma_a(m_G)$, see eq.~\ref{eq:eventduration}.}, but for lensing due to stellar objects. An examination of their results suggests that the main backgrounds that could populate a similar region of observable space as PBHs (compare our fig.~\ref{fig:ObsDist} with fig.~14 in ref.~\cite{2020ApJ...889...31L}) arise from neutron stars, which could mimic IDLE type events, and black holes from stellar collapse, which could mimic very long duration LDLE type events. From a naive comparison by eye, it appears that they expect $\sim 1$ neutron star event which could mimic an IDLE signal. Although this background seems negligible, a detailed study specifically with \Gaia{} observables is needed to estimate it more precisely.
The second source of astrophysical background is due to a violation of our assumption of rectilinear motion. Isolated stars are expected to approximately follow rectilinear motion in the Galactic rest frame over \Gaia{}'s observation period. However, gravitational potential gradients can alter this expected behaviour. Such gradients can arise from localized effects, e.g.\ a binary companion or a nearby star cluster, or from steeper global gravitational gradients, such as near the center of the Galaxy.
Of these possibilities, binary companions are probably the most significant effect, with nearly a third of main sequence stars in the Galaxy expected to be in binary systems~\cite{Lada:2006dc}. If the binary companion is clearly identified, its effect on the stellar trajectory could perhaps be modelled and used as a baseline reference, on top of which one could look for an AML signature due to PBHs. Similar modelling could perhaps be performed when there are detectable sources of local gravitational gradients. A more conservative approach would be to exclude such stars entirely from the analysis.
Next, we discuss various scenarios where the binary companion cannot be identified. In close binaries, with periods shorter than the \Gaia{} observation time, a companion that is too faint or unresolved could still induce a wobble in the apparent trajectory of the star, mimicking features of IDLE type events. However, such events are unlikely to constitute a sizeable background for PBH-induced IDLEs, since they give rise to repeating features in the apparent trajectory, unlike AML events, which are non-repeating. For wider binaries, with periods greater than the \Gaia{} observation time, the companion may again go unidentified because it is too faint. In this case, such binaries are more likely to mimic LDLE type events. If the period of the binary is $\mathcal{O}(5\text{--}10)$~years, follow-up surveys could search for repeating features to rule out an AML origin. For even longer-period binaries, however, they are likely to constitute a genuine background to LDLE type events. A detailed study of simulated trajectories of such binaries, and of their expected event observable distributions, could characterize how well this kind of background can be separated from signal.
We can also estimate the typical time-scale for deviations in trajectories induced by the gravitational potential in a stellar cluster, by computing the ratio of velocity and gravitational acceleration of a star in a typical cluster. Assuming that $v = 200$~km/s as before, and a cluster mass of $10^6~M_\odot$, with a distance of 100~pc between the star and the cluster, we would find a time scale of $\sim 10^8$~years. Therefore, such gravitational gradients would typically only mimic very long duration LDLE type events. Such anomalous accelerations will not exhibit any photometric brightening signals, and this could perhaps be one way to separate them from genuine AML signals.
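This order-of-magnitude estimate can be reproduced as follows; the cluster mass and separation are the representative values quoted in the text, not a fit to any particular cluster:

```python
# Order-of-magnitude timescale for trajectory deviations induced by a
# star cluster's gravity: t ~ v / a, with acceleration a = G M / r^2.
G = 6.674e-11             # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30          # solar mass [kg]
pc = 3.086e16             # parsec [m]
yr = 3.156e7              # year [s]

v = 200e3                 # relative velocity assumed in the text [m/s]
M_cluster = 1e6 * M_sun   # typical cluster mass
r = 100 * pc              # star--cluster separation

a = G * M_cluster / r**2  # gravitational acceleration from the cluster
t_dev = v / a / yr        # timescale for an O(1) velocity change [yr]
```

The result is a few $\times 10^8$ years, far longer than $t_\textrm{obs}$, which is why such gradients can only mimic very long duration LDLEs.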
We can briefly summarize the preceding discussion on astrophysical backgrounds as follows. For IDLEs we expect negligible backgrounds. For LDLEs, we expect fake lensing background from black holes from stellar collapse, unresolved long duration binary companions, and local gravitational gradients. These backgrounds, if present, are more likely to be confused with signals from very long duration LDLEs.
\section{\Gaia{}'s expected exclusion on the PBH parameter space}
\label{sec:Results}
For a given value of $(f,M)$, we have shown that we can calculate the expected number of IDLE and LDLE type events that \Gaia{} will be able to detect. We have also given a procedure to calculate the distributions of some simplified event observables. As discussed in sec.~\ref{sec:background}, there are various types of backgrounds which can mimic genuine PBH induced lensing signals. Given the distribution of expected observables for both signal and background, one could set cuts on the observable parameter space to reduce the amount of background, and maximize the signal-to-background ratio. Then, assuming that \Gaia{} takes $t_\textrm{obs}$ worth of time-series data and sees events consistent with a background-only rate, we could then find the expected exclusion on the regions of PBH parameter space that \Gaia{} can set due to non-observation of any excess event rate over background.
In this work we have only calculated the signal rate and the observable distribution from signal, however we have not computed the full background rates and the observable distributions for all the backgrounds. Thus, we can not give a fully accurate projected exclusion on the PBH parameters that can be set by \Gaia{}.
In order to get some indication of \Gaia{}'s expected sensitivity in the absence of a full background rate calculation, we make some simple assumptions about the background rate and calculate the projected exclusion. We assume that we only study AML events with event durations larger than 2 years, in order to reject the IDLE statistical background (see sec.~\ref{sec:background_stat}). We further assume that, after setting this cut, the total background from all sources is negligible in the region of observable space where we expect genuine PBH induced AML signatures. This latter assumption seems justified for IDLE events based on our discussion in sec.~\ref{sec:background_astro}, but there might be a non-negligible background for LDLEs at long event durations.
Thus, with the above assumptions, assuming that we see zero candidate lensing events, consistent with background only, we expect to rule out regions of the PBH parameter space that predict $N_l>2.3$ events at the 90\% confidence level, assuming Poissonian statistics.
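The threshold $N_l > 2.3$ follows from requiring that the Poisson probability of observing zero events be at most 10\%:

```python
import math

# With zero observed events, the 90% CL Poisson upper limit lambda_90
# solves P(0 events | lambda) = exp(-lambda) = 0.10, i.e.
lam_90 = math.log(10.0)   # = 2.303..., the "N_l > 2.3" threshold
```

For $n$ observed events the limit would instead solve $\sum_{k\le n} \lambda^k e^{-\lambda}/k! = 0.10$, but only the $n=0$ case is needed here.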
In fig.~\ref{fig:exclusion}, we show the PBH parameter space $(f,M)$ and shaded contours which indicate the average total number of lensing events $N_l$ due to PBHs, for both IDLE and LDLE types, for each point in the parameter space. The boundary of these contours is the green curve, which corresponds to $N_l =2.3$. The region of shaded parameter space interior to the green curve can be excluded at the 90\% confidence level.
We have also shown on the same plot, the curves corresponding to regions of parameter space that give rise to 2.3 IDLE type events (blue curve), or 2.3 LDLE type events (orange curve). The regions above these curves can be interpreted as expected exclusions purely due to null search results for either one of these classes of lensing events.
We can see from the figure that \Gaia{} is sensitive to PBHs with masses between $0.4~M_\odot$ and $5\times10^7~M_\odot$. \Gaia{} is most sensitive to PBH masses near $10~M_\odot$, where a fraction $f$ as low as $3\times10^{-4}$ of the DM relic density can be ruled out. For PBH masses lower than $10^3~M_\odot$, most of the detectable lensing events correspond to IDLEs with average event durations between 2 and 5 years. For sufficiently low PBH masses, two conditions cause the number of detectable lensing events to drop rapidly to zero. Firstly, the Einstein angle $\theta_E$ decreases to such an extent that many of the lensing signals fall below \Gaia{}'s detection threshold. Secondly, the average event duration $t_e$ decreases below 2 years, where the statistical backgrounds to IDLEs would be large. This makes the exclusion bound weaker for low PBH masses. For PBH masses greater than $10^3~M_\odot$, most of the detectable lensing events correspond to LDLEs, with average event durations much greater than the observation time ($t_e > t_\textrm{obs}$). For larger PBH masses, the number of lenses decreases with increasing PBH mass, and thus relatively fewer signal events are expected. This makes the bound weaker for larger values of the PBH mass $M$.
Our assumption of negligible astrophysical background is most likely to break down for long duration LDLE type events, thus, we also expect that with a realistic background calculation, we would find a reduced sensitivity for high mass PBHs as compared to what we have shown in the figure.
We have shown several other exclusion bounds from the literature in fig.~\ref{fig:exclusion}, which can be compared against our projected exclusion bound. Among the existing constraints, lensing constraints are the most robust, since they rely on fewer assumptions. We find that the AML bound on PBH density fraction $f$ that can be set by \Gaia{} for masses $\gtrsim~0.4~M_\odot$ is stronger than that of the current best bounds from PML surveys, possibly by several orders of magnitude. We have also shown in the figure, a best-fit region of the PBH parameter space corresponding to PBH mergers giving rise to all the observed gravitational wave signals seen at LIGO~\cite{Vaskonen:2019jpv}. We find that our projected exclusion could rule out this region, thus ruling out PBHs as the origin for the observed LIGO events.
A more realistic study of a possible PBH origin for the LIGO/Virgo signals was performed in refs.~\cite{DeLuca:2021wjr,Franciolini:2021tla}. In these works, the authors studied mixed models with some fraction of the LIGO/Virgo signals arising from astrophysical black hole (ABH) mergers, and some from PBHs. Using a Bayesian analysis, they found that only 20\% of the LIGO/Virgo signals are likely to have a PBH origin. This would imply a best fit fraction $f$ of PBH dark matter which is lower than that of~\cite{Vaskonen:2019jpv} by a factor of $\sim$~5~\cite{DeLuca:2021wjr}. However, the exact fraction would depend on the ABH model which is highly uncertain. We note that the expected \Gaia{} exclusion that we have found is strong enough to rule out even this smaller value of $f$, thus potentially ruling out even the best-fit prediction of~\cite{DeLuca:2021wjr}\footnote{Although no other present day constraints exist on such a low fraction of PBHs in this mass range, future gravitational wave searches for PBH mergers at higher redshift $z \gtrsim 30$ with the Cosmic explorer and Einstein telescopes could rule out a fraction $f\gtrsim 10^{-5}$ of PBHs at $10~M_\odot$ in the next decade~\cite{Ng:2022agi} .}.
\section{Discussion}
\label{sec:discussion}
The computation of the expected exclusion curve that \Gaia{} will set based on the AML technique, crucially depended on the calculation of the lensing probability $P_\textrm{star}$. There are a number of assumptions that we made while calculating $P_\textrm{star}$, some of which were made for convenience of calculation. We list some ways in which the calculation can be improved.
\begin{itemize}
\item We made the assumption of a constant relative tangential velocity $v = 200$~km/s. This can be improved by taking into account the stellar and DM velocities. Stellar velocities can potentially be obtained from the \Gaia{} catalog itself, whereas the DM velocities could perhaps be obtained from Eddington inversion of the Galactic rotation curve~(see, e.g.,~\cite{Mandal:2018efq}). A more accurate prediction of the relative velocities is needed because our prediction of the typical event durations (and hence separability from background) depends inversely on this velocity.
\item We made the assumption of the rectilinear motion of target stars. However, the parallax motion of these stars would result in non-rectilinear motion. Moreover, a large fraction of stars in our Galaxy are in binary systems and this would result in wobbles in the star's trajectory due to its bound motion. In order to more carefully take into account these effects in the computation of $P_\textrm{star}$, we would need to perform a numerical simulation of trajectories to study the effects of lensing and how it can be distinguished from other kinds of motion.
\item We have also assumed uniformly spaced sampling in time by \Gaia{}. However, in a numerical study one could also take into account \Gaia{}'s scanning law which would give us the correct time sampling rate for stars in a given region of sky.
\item For simplicity, we have taken the astrometric error $\sigma_a(m_G)$ to be the error along the scanning direction of \Gaia{}. In a numerical study, a better estimate of $P_\textrm{star}$ could be obtained using both the across and along astrometric errors depending upon \Gaia{}'s scanning strategy.
\item The probability $P_\textrm{star}$ depends upon the assumed DM density profile. We computed $P_\textrm{star}$ for the NFW profile, but one could also try other profiles such as Burkert~\cite{Burkert:1995yz}, Einasto~\cite{Graham:2005xx,Navarro:2008kc}, or isothermal~\cite{Begeman:1991iy, Bahcall:1980fb} profiles. These DM density profiles differ from one another near the Galactic center, and are in general less cuspy than the NFW profile. Since the Galactic center region is where we found the maximum probability for detecting lensing events, we expect that these alternate profiles could have a sizeable effect on our estimate of the expected number of AML events.
\item We have ignored blending effects due to foreground stars which could affect the centroiding precision when detecting AML events. A realistic simulation is needed to estimate the size of such effects.
\item We have assumed that our exclusion is set by using only the AML signature detected by \Gaia{}. However, \Gaia{}'s relatively crude photometry could also be used in conjunction with the AML measurements to measure a PML signal which could potentially improve the sensitivity of the search.
\end{itemize}
In addition to improvements in the signal rate calculation, a detailed background study is needed to obtain an accurate exclusion curve. In particular, numerical studies of the LDLE statistical background are important, as this background could affect the expected bound at high PBH masses, where we expect the longest event durations. Also, as outlined in sec.~\ref{sec:background_astro}, the astrophysical backgrounds that need a more careful study are due to i) lensing by dark stellar objects, and ii) wobbles in a stellar trajectory due to an unresolved binary companion or local gravitational potential gradients, which could mimic long-duration LDLEs.
\section{Summary: Main takeaway}
\label{sec:Summary}
\Gaia{}'s unprecedented astrometric precision has ushered in a new era in our understanding of the Galaxy. With its milliarcsecond level precision in a single pass, \Gaia{} has the capability to be extremely sensitive to any unusual proper motion of stars. Primordial black holes are expected to form a significant fraction of the dark matter density of the universe in many cosmological models. PBH dark matter in our own Galaxy could lead to astrometric microlensing signals of background stars which could be detected by \Gaia{}.
In this work, we have attempted to make a prediction for the expected sensitivity of \Gaia{} to the PBH parameter space ($f,M$), and to compute the expected AML event observable distributions from PBH induced lensing.
In order to estimate the potential exclusion limit that \Gaia{} can set on the PBH parameter space with 5 years of observational data, we needed to estimate both the rate of genuine PBH induced AML signals, as well as the rate of background events that could mimic such signals.
The present work has primarily focused on a precise prediction of the signal rate and associated observables. In order to compute the signal rate, we used the existing \Gaia{} eDR3 catalog as a model of the stars in the Galaxy which are potential AML targets. We then combined this with a novel probability calculation to estimate the likelihood that a given star in the \Gaia{} catalog would undergo an AML event which is detectable at \Gaia{}. While estimating this probability we argued that there would be two different classes of detectable lensing events, Intermediate Duration Lensing Events (IDLEs) and Long Duration Lensing Events (LDLEs), depending upon whether the duration of the event is smaller or larger than the total observational period of \Gaia{}. We also suggested appropriate simplified observables that can be extracted from the apparent trajectory, which characterize the signatures of AML events. For a given set of PBH parameters, our calculations showed a)~the regions of the sky which are likely to yield a large event rate and b)~the distribution of the simplified observables for both types of lensing events.
Although a full background study was beyond the scope of this work, we have also discussed various sources of background, and highlighted which backgrounds we expect to be important and deserving of further investigation. We did perform a slightly more quantitative estimate of the background for IDLE events due to statistical fluctuations induced by centroiding uncertainties. We found that events with durations greater than $\sim 2$~years would show negligible background of this type. We also argued that the astrophysical backgrounds are likely important only for very long duration LDLEs. Thus, if we only consider events with durations greater than 2 years, we expect negligible background except possibly for very long duration lensing events, which would constitute a background for high mass PBHs.
We then computed the expected exclusion curve for \Gaia{} using our predicted signal rate, and the \textit{prima facie} reasonable assumption of a negligible background rate. We found that \Gaia{} is sensitive to PBHs with masses between $0.4~M_\odot$ and $5\times 10^7~M_\odot$, with peak sensitivity to PBH masses of $10~M_\odot$, for which we can rule out such PBHs making up as little as a fraction $f =3 \times 10^{-4}$ of the DM density. The lower end of the mass sensitivity window is set by IDLE type events, which yield event durations close to 2 years, near where the statistical background becomes significant. At the higher end of the mass range of interest, \Gaia{} is sensitive to LDLE type events, but loses sensitivity at high masses because the number of lenses decreases, which in turn decreases the expected event rate. We expect that a more realistic treatment of the astrophysical backgrounds will only affect our projected bound at higher PBH masses, which give rise to very long duration LDLEs, whereas we expect our projected IDLE bound to be more robust.
As compared to other existing bounds on the PBH parameter space, we have found that \Gaia{} will potentially have the best sensitivity to PBHs with masses near $10~M_\odot$. This region of PBH masses is particularly intriguing since it overlaps with the range of PBH masses that current gravitational wave detectors are sensitive to. An exciting implication of our work is that we expect that \Gaia{} can potentially exclude a PBH origin for the LIGO/Virgo black hole merger events.
Our work is the first attempt to combine a detailed signal rate estimation with a rudimentary background estimate to calculate the expected sensitivity of \Gaia{} to the PBH parameter space. Previous works in the literature have estimated the lensing probabilities, but not the total expected event rate or the exclusion curve. Moreover, these studies have only focused on the case of LDLEs. Our work also shows for the first time that IDLE type events can also play a significant role in excluding or discovering regions of the PBH parameter space with \Gaia{}.
We have also discussed a number of ways in which our calculation of the expected exclusion could be further improved. Most of these improvements require detailed numerical studies which we plan to pursue in the future. Once time-series data of \Gaia{} is publicly available, we expect to be able to analyze this data for potential lensing signals, and detection/non-detection of such signals can then be used to place constraints on the PBH parameter space using our expected event rate calculations.
\acknowledgments
We acknowledge useful discussions and correspondence with Jeffery J. Andrews, Varun Bhalerao, Surhud More, Jan Rybizki, and Sourav Chatterjee. The work of VR was supported by a DST-SERB Early Career Research Award (ECR/2017/000040).
\appendix
\section{Derivation of astrometric lensing rate}
\label{sec:appendixObs}
In this appendix, we derive expressions for the simplified event observables defined in sec.~\ref{sec:obs_dist}, averaged over impact parameters.
For IDLEs, the simplified observable is given by the maximum astrometric deflection $(\delta \theta)_\textrm{max}$. We first find the maximum astrometric shift $\delta u$, in $\theta_E$ units, using eq.~\ref{eq:CentroidPos}. For a rectilinear trajectory, we can take the lens-source separation to be of the form $u = \sqrt{u_0^2 + \xi^2(t)}$, where $u_0$ is the impact parameter of the trajectory and $\xi(t) =\mu t$ is due to the rectilinear motion of the source.
The behavior of the astrometric shift $\delta u$ as a function of $u$ was shown in fig.~\ref{fig:delta-u}. For $u_0>\sqrt{2}$, the maximum shift occurs when $u=u_0$, i.e. $\xi=0$. However, for $u_0< \sqrt{2}$, the maximum shift occurs when $u=\sqrt{2}$, i.e. when $\xi = \pm\sqrt{2 - u_0^2}$.
Hence, the maximum deflection for a given impact parameter is given by,
\begin{align}
(\delta \theta)_\textrm{max} & = \begin{cases}
\frac{u_0\theta_E}{u_0^2 + 2} & u_0> \sqrt{2},\\
\frac{\theta_E}{2\sqrt{2}} & u_0 \leq \sqrt{2}.
\end{cases}
\end{align}
We can now average this over all viable impact parameters for which the AML signal would be above threshold, $0<u_0<u_+$ (see eq.~\ref{eq:uplusminus}), to obtain
\begin{align}
\langle (\delta \theta)_\textrm{max} \rangle & = \theta_E\frac{1 + \log \left[\frac{1}{8}K(K+\sqrt{K^2 - 8})\right]}{K+\sqrt{K^2 - 8}},
\end{align}
where $K=\frac{\theta_E}{\sigma_a(m_G)}$.
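As a sanity check on this averaging (not part of the original analysis), the closed-form expression above can be verified numerically. Here $K=10$ is an illustrative value, and $u_+ = \big(K+\sqrt{K^2-8}\big)/2$ is the detectability threshold implied by the expression above:

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def delta_theta_max(u0, theta_E=1.0):
    """Piecewise maximum astrometric deflection for impact parameter u0."""
    return np.where(u0 > np.sqrt(2), u0 * theta_E / (u0**2 + 2),
                    theta_E / (2 * np.sqrt(2)))

def mean_delta_theta(K, theta_E=1.0):
    """Closed-form average over 0 < u0 < u_+, with K = theta_E / sigma_a."""
    S = K + np.sqrt(K**2 - 8)
    return theta_E * (1 + np.log(K * S / 8)) / S

K = 10.0                              # illustrative threshold ratio
u_plus = (K + np.sqrt(K**2 - 8)) / 2  # largest detectable impact parameter
u0 = np.linspace(0.0, u_plus, 200001)
numeric = trapezoid(delta_theta_max(u0), u0) / u_plus
assert abs(numeric - mean_delta_theta(K)) < 1e-6
```

The brute-force average over impact parameters agrees with the closed form to the accuracy of the quadrature.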
For LDLEs, we take the simplified observable to be the absolute value of the relative deflection difference between the start point and the end point of the trajectory, away from the true rectilinear trajectory. This was computed in ref.~\cite{2000ApJ...534..213D} and was found to be,
\begin{equation}
\Delta_\textrm{LDLE} = \frac{v t_\textrm{obs} }{D_l}\frac{1}{u^2},
\end{equation}
where $u$ is the angular separation of the source and lens, which is assumed not to change significantly over $t_\textrm{obs}$ for an LDLE. The observable $\Delta_\textrm{LDLE}$ will be above \Gaia{}'s astrometric threshold sensitivity $\sigma_a(m_G)$ for $u<u_T$, where
$u_T = \sqrt{ \frac{ v t_{\text{obs} } }{D_l \sigma_a(m_G)} } $.
If we further take random positions of the source relative to the lens and assume that $u$ varies in the range $u_\textrm{min} <u<u_T$, we obtain,
\begin{equation}
\langle \Delta_\textrm{LDLE} \rangle =\frac{\int_{u_\textrm{min}}^{u_T} \Delta_\textrm{LDLE} \, d^2u}{\int_{u_\textrm{min}}^{u_T} d^2u}= \frac{v t_\textrm{obs} }{D_l} \frac{2\log(u_T/u_\textrm{min})}{u^2_T - u^2_\textrm{min}}.
\end{equation}
Here, the averaging is performed over an annular patch of the sky with the integration limits defining the inner and outer radii of the annulus. We choose $u_\textrm{min} = \sqrt{2}$, corresponding to the point at which the instantaneous deflection is maximum, to be the lower limit of integration.
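The annulus average above admits the same kind of numerical check (again illustrative; the prefactor $v t_\textrm{obs}/D_l$ drops out of the comparison and is set to unity, and $u_T=10$ is a placeholder):

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Average Delta_LDLE = A / u^2 over the annulus with measure d^2u = 2*pi*u du;
# the common factor A = v * t_obs / D_l cancels between numerator and result.
u_min, u_T = np.sqrt(2), 10.0
u = np.linspace(u_min, u_T, 200001)
numeric = trapezoid(1 / u, u) / trapezoid(u, u)
closed = 2 * np.log(u_T / u_min) / (u_T**2 - u_min**2)
assert abs(numeric - closed) < 1e-9
```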
\bibliography{astroMLGAIA} |
Title: Observational Signatures of Galactic Turbulent Dynamos

Abstract: We analyse the observational signatures of galactic magnetic fields that are self-consistently generated in magnetohydrodynamic simulations of the interstellar medium through turbulence driven by supernova (SN) explosions and differential rotation. In particular, we study the time evolution of the Faraday rotation measure (RM), synchrotron radiation, and Stokes parameters by characterising the typical structures formed in the plane of observation. We do this by defining two distinct models for both thermal and cosmic ray (CR) electron distributions. Our results indicate that the maps of RM have structures which are sheared and rendered anisotropically by differential rotation and that they depend on the choice of thermal electrons model as well as the SN rate. Synchrotron maps are qualitatively similar to the maps of the mean magnetic field along the line of sight and structures are only marginally affected by the CR model. Stokes parameters and related quantities, such as the degree of linear polarisation, are highly dependent on both frequency and resolution of the observation.

PDF: https://export.arxiv.org/pdf/2208.14178
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
ISM: magnetic fields -- radio continuum: ISM -- galaxies: ISM -- galaxies: magnetic fields -- (magnetohydrodynamics) MHD -- methods: data analysis
\end{keywords}
\section{Introduction}
The polarised radio synchrotron radiation
emitted from the interstellar medium (ISM) of nearby disc
galaxies and its associated Faraday rotation probe the
topology and strength of the magnetic field hosted by them.
The strength of the component along the line of sight (LOS)
of the observer is estimated from the Faraday rotation
measure (RM), while the strength of the planar
field component can be inferred from the intensity of
the radiation and its angle of polarisation
\citep[e.g.][and references therein]{BeWi}.
As such, it has been established that regular magnetic
fields coherent on kiloparsec scales, as well as
irregular or small-scale magnetic fields, are abundantly
present in nearby
\citep[e.g.][]{sofue1986global,thompson2006magnetic,krause2008magnetic,fletcher}
and also in high-redshift disc galaxies
\citep[e.g.][]{Bernet2008}.
Typical strengths of these fields
are a few to a few tens of $\mu$G in Milky Way-like galaxies \citep{Beck_2004},
which corresponds to approximate equipartition between the magnetic and
turbulent kinetic energies.
The mechanism behind the growth and
sustenance of large-scale magnetic fields in galaxies, although
not fully understood, is very likely a turbulent dynamo operating
in their ISM: an induction effect generated
by the combined action of supernova (SN) driven turbulence,
differential rotation, and density stratification
\citep{radler1969,Parker,radler1980,ZeldovichBook,Brandenburg2005}.
This process explains the amplification of very weak
initial fields
(which might have been generated in the early Universe
\citep{KandusEtAl2011, Subramanian2016} or through
astrophysical mechanisms
\citep{Biermann1950,subramanian1994thermal,KulsrudEtAl1997})
to a strength roughly in equipartition
with the turbulent kinetic energy density. This has also been
demonstrated both in studies employing empirical turbulent
transport coefficients
\citep[see e.g.][and references therein]{shukurov2006galactic}
and in direct numerical
simulations with varying setups
\citep[see e.g.][]{gressel2008direct,hanasz2009global,gent2013supernova}.
A better understanding of cosmic magnetic fields
and of the mechanisms of their origin would benefit numerous
domains of astrophysics, since these fields are believed
to play an important role in, for example, angular
momentum transport in accretion discs \citep{shaku},
the morphology of the ISM and the star
formation process \citep{price,KrumholzFederrath2019},
and the propagation of cosmic rays \citep{yan}.
Probing the observational signatures of the dynamo
mechanism and being able to distinguish between
various scenarios of generating the galactic magnetic
field is therefore an important open problem.
Scenarios of magnetic field evolution
include different kinds of dynamos,
such as the small-scale or
fluctuation dynamo, which is fast but only produces magnetic fields on length scales
smaller than the injection scale; the large-scale or mean-field dynamo; and
combinations of the two.
Complications in observing cosmic magnetism arise primarily
because the observables depend not
only upon the field itself but also on the
spatial distribution of thermal and non-thermal electrons, as well as on
their statistical correlations with the magnetic
field, which are usually not well understood. Typically
observational studies of galactic magnetic fields
rely upon theoretically inferred models of these
distributions, and it is thus important to understand
the extent to which these observational inferences depend upon
the models of electron distributions.
In the current study, we address the aforementioned issue and
estimate observables associated with galactic magnetic
fields from numerical simulations of the mean-field
dynamo with the goal of improving the interpretation
of observations of real galaxies. Similar synthetic
observational analyses have been performed for
interpreting the magnetic fields in young galaxies
\citep{bhat2013fluctuation,sur2018}, and also for the simulations of
clusters of galaxies \citep{Sur2021}, where the
small-scale or fluctuation dynamo is thought to be the
prevalent mechanism for field generation \citep{SchoberEtAl2013}. In the
present analysis we focus on the ISM
magnetic fields generated as a result of a large-scale
dynamo.
We base this study on the data of the
MHD simulations of galactic ISM performed by
\citet{Bendre2015}, the relevant details of which are
also described in the following section. We process this
data to perform a series of synthetic radio observations,
and use physically motivated models for the
distributions of thermal and CR electrons which are also
crucial for modelling the continuum radio emission
spectrum. We further explore how the
polarisation angle, the two-point correlation function
of the radio emission, and related quantities depend upon these distributions.
A similar analysis of ISM simulations was presented
in \cite{RappazEtAL2022}. It was likewise based
on MHD simulations of a local patch of galactic
ISM stirred by SN driven turbulence, but those simulations focused
on realistically capturing the multi-phase morphology
of the ISM along with a detailed chemical network, and not
specifically on the dynamo mechanism itself. The
distribution of magnetic fields in those simulations
was therefore the result of the asymptotic turbulent decay
of an initially imposed uniform magnetic field. The
simulations that we use here, however, were focused on
self-consistently generating the large-scale dynamo
effect from direct simulations of SN driven turbulence
and differential shear.
The paper is organised as follows: In Sec.~\ref{s2} we summarise the
basic setup and the key results of the galactic dynamo simulations.
In Sec.~\ref{sec_observables}, we present the post-processing of the
simulations that allows us to extract various mock observables.
Telescope effects will be discussed in Sec.~\ref{tel_eff}.
In Sec.~\ref{sec_discussion} we comment on our different assumptions
and we draw our conclusions in Sec.~\ref{sec_conclusions}.
\section{Simulations}
\label{s2}
In this section we briefly describe the setup of the
direct numerical simulations (DNS) we
analyse in
the following sections. Detailed
discussion of the DNS setup and its various outcomes are also presented in \citet{Bendre2015}.
These were non-ideal MHD simulations of a local
box of the ISM in a typical spiral galaxy, performed with
the \texttt{NIRVANA} MHD code \citep{Nirvanacode}. The simulation domain
spanned $\sim$ 0.8 \kpc by 0.8 \kpc in the radial
($x$) and azimuthal ($y$) directions, while in the
vertical ($z$) direction it extended from $\sim -2.1$ to 2.1
\kpc below and above the galactic mid-plane. The
domain was resolved with $96\times 96 \times 512$
cells, amounting to a uniform Cartesian grid with a
resolution of $\delta \sim 8$\pc. Shearing-periodic
boundary conditions were used at the radial ($x$) boundaries, while
periodic ones were used at the azimuthal ($y$) boundaries, to
capture respectively the radial shear and the axisymmetry
of the azimuthal galactic flow.
A flat galactic rotation curve was also imposed
by letting the angular velocity decrease with radius $R$ as $\Omega
\propto 1/R$, with $\Omega_0 = 100~$\kms\kpc$^{-1}$ at
the centre of the domain.
The simulated ISM was
composed only of hydrogen, although its multi-phase
morphology was still captured
in a rudimentary way
by incorporating temperature dependent rates of
heat transfer, with a piece-wise power law
for the radiative cooling \citep[similar to][]{sachez_salcedo}.
The initial mass density $\rho$ was vertically
stratified and balanced hydrostatically against
gravity. It had a scale height of $\sim 300$\pc
(and a midplane value of $\sim10^{-24}$\g \cm$^{-3}$).
Outflow boundary conditions were used at the vertical
boundaries which allowed the outflow of the matter but
restricted its inflow.
SN explosions were simulated as spontaneous local
injections of thermal energy at random
locations weighted by the density, at predefined
rates of 25\%, 50\%, and 100\% of the average Milky Way
SN rate
\citep[$\sim30~$\Myr$^{-1}$ \kpc$^{-2}$, e.g.,][]{ferrier}.
We analyse all three of these models
and refer to them as Run25, Run50, and Run100,
respectively, in the following sections.
This setup led to a quasi-steady state of the kinetic
and thermal
energies in all models within the first few \Myr, and the
ISM segregated into multiple phases, such as cold dense
clouds in the midplane, the warm ionised phase, and
hot ISM bubbles in the outer halo of the galaxy.
The magnetic energy, on the other hand, grew
exponentially for about a \Gyr, until it reached
approximate equipartition with the turbulent
kinetic energy; thereafter it either saturated or
kept growing at a drastically slower rate
(depending on the SN rate, see \fref{fig:Mag_energy}).
We refer to these phases as the kinematic and dynamical
phases, respectively, in the following sections.
Large-scale magnetic fields of strengths $1-3$\muG and
scale heights of $\sim500-800$\pc were also generated in all models, which
have previously been analysed and explained as self-consistent
solutions of the mean-field dynamo \citep{Bendre2015}.
In \fref{fig:Mag_energy} we illustrate the two
different regimes of the dynamo regarding the
magnetic energy growth, and in \fref{fig:Mag_proj}
we show the average along the $z$ axis of the
total magnetic field map in the $x-y$ plane, taken
in the dynamical phase (at $T\simeq 1508$ Myr)
for Run25 as a comparison for the observables that
we will present later.
\section{Observables}
\label{sec_observables}
In this section we summarise the main results of our
analysis, describe how they are obtained, as well as the
assumptions involved.
\subsection{Rotation measure and Faraday depth}
Any linearly polarised signal can be decomposed into
two circularly polarised components with opposite
handedness. In a magnetised plasma, these
two components propagate at different phase
velocities and therefore accumulate a relative phase,
leading
to a rotation of the plane of
polarisation.
This is known as Faraday rotation,
and in the context of the ISM it is characterised by the
strength of the magnetic field component parallel
to the wave vector, the wavelength of the radiation, and
the density of the thermal electrons.
The projected polarisation angle on the plane of
observation is given by
\begin{equation}
\label{theta}
\theta = \theta_0 + \lambda^2 \mathrm{FD},
\end{equation}
\noindent where $\theta_0$ is the initial angle,
$\lambda$ the wavelength of the photon and
$\mathrm{FD}$ is the Faraday depth which
is defined by \citet{burn1966depolarization} as
\begin{equation}
\label{FD eq}
\mathrm{FD} = K \int n_e \mathbf{B} \cdot \mathrm{d}\mathbf{l} ,
\end{equation}
where the integration is
along the
line of sight (LOS) from the source to the observer.
$K$ is a combination of
natural constants, $K = e^3 / (2\uppi m_e^2 c^4)
\simeq 0.812$ rad m$^{-2}$ cm$^3$ $\mu$G$^{-1}$ \pc$^{-1}$,
while
$n_e$ is the thermal electron density.
The rotation measure ($\mathrm{RM}$) is then
defined as
\begin{equation}
\mathrm{RM} \equiv \frac{\mathrm{d} \theta(\lambda)}{\mathrm{d}\lambda^2},
\label{eq:rm}
\end{equation}
and is equivalent to the FD in the case of a single source along the LOS without
any internal Faraday rotation and
beam depolarisation.
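On a simulation grid, \eref{FD eq} reduces to a discrete sum of $n_e B_\parallel$ along each LOS. A minimal sketch (with illustrative values, not the actual analysis pipeline):

```python
import numpy as np

K_FD = 0.812  # rad m^-2 cm^3 muG^-1 pc^-1, the constant K of the FD equation

def faraday_depth(n_e, B_par, dl_pc):
    """Discretised FD = K * sum(n_e * B_parallel * dl) along one LOS.
    n_e in cm^-3, B_par in muG, dl_pc = cell size in pc."""
    return K_FD * float(np.sum(n_e * B_par)) * dl_pc

# Toy uniform LOS: n_e = 0.1 cm^-3 and B_par = 1 muG over 1000 pc
fd = faraday_depth(np.full(128, 0.1), np.full(128, 1.0), 1000 / 128)
# expected: 0.812 * 0.1 * 1 * 1000 = 81.2 rad m^-2
```

For a uniform medium the sum reproduces $K\, n_e B_\parallel L$ exactly, which provides a quick consistency check of the units.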
Throughout this paper we
only consider LOSs parallel to one of the
three coordinate axes of the simulations, with the observer far away in
the positive direction,
so that all LOSs
along a given axis are parallel to one another.
Furthermore,
in this section
the sources are taken to be far away in the
negative
direction.
As the thermal electron density was not explicitly
computed in the simulations, we model it using the following
two different prescriptions similar to \citet{RappazEtAL2022}.
\begin{itemize}
\item Mod1:
the thermal electron density is proportional to the density of the ISM
$n_e = c_n n$
with $c_n$\footnote{Note that $c_n$ is constant as long as the mean density of the simulation is preserved.
At late times of our simulations a small amount of matter can be lost and thus $c_n$ is adjusted accordingly.}
taken such that the mean thermal electron density is $0.1$ cm$^{-3}$ at each time step.
\item Mod2:
the thermal electron density is taken as constant and set to $n_e = 0.1$ cm$^{-3}$.
\end{itemize}
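As a concrete illustration of the two prescriptions (applied to a mock density cube, not the simulation data; the lognormal density field and grid size are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.lognormal(sigma=1.0, size=(48, 48, 128))  # mock ISM number density

# Mod1: n_e proportional to the gas density, with c_n recomputed at each
# snapshot so that <n_e> = 0.1 cm^-3 (this absorbs any late-time mass loss).
c_n = 0.1 / n.mean()
n_e_mod1 = c_n * n

# Mod2: spatially constant thermal electron density.
n_e_mod2 = np.full_like(n, 0.1)
```

Both models share the same mean electron density by construction, so differences in the resulting RM maps isolate the effect of the spatial $n_e$ distribution.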
Using \eref{FD eq}, we then compute and compare the RM maps
obtained with the two aforementioned models of thermal electron densities.
In \fref{fig:RMz with 2 models} we plot the contours
of $\mathrm{RM}$ along the $z$ axis
for
Mod1 and Mod2 of $n_e$.
With Mod2
the typical values of $\mathrm{RM}$ are almost an
order of magnitude smaller than those from Mod1; the
root mean square (RMS) of the $\mathrm{RM}$
with Mod1 is about
76 rad m$^{-2}$,
compared to
20 rad m$^{-2}$ with Mod2.
With both models of $n_e$ the mean RM is close to
zero.
Qualitatively, the observed structures in RM
maps are larger with the second model as it is sensitive
only to the magnetic field variations.
Thus, we expect that the variations of $\mathrm{RM}$
in the plane of observation would be important on a larger
scale
if Mod2 represented the distribution of $n_e$
correctly.
We expect the occurrence of finer structures in the
RM maps for more complex models of $n_e$, depending upon
the cross correlations between magnetic fields and $n_e$,
and their individual distributions as well.
As such, for Mod1, in which $n_e$ scales
directly with the mass density of hydrogen, we do in
fact see more small-scale structures (upper panel of
\fref{fig:RMz with 2 models}).
In principle,
the scaling between the local distribution of thermal
electrons and density could also depend on the local phase
of the ISM.
A qualitative assessment of the shapes of these structures
in the RM maps suggests that they are
anisotropic, in the sense that their correlation lengths
are larger in one direction than in the other.
anisotropy in the magnetic field introduced by
the background shear in the simulations. To analyse this
assertion more systematically and to quantify the
anisotropy, we compute the correlation lengths
$\ell$ from two point correlation functions of RM maps,
which we define for an arbitrary function $f(\mathbf{r})$ as
\begin{equation}
C_f(\boldsymbol{R})=\langle f(\boldsymbol{r}) f(\boldsymbol{r}') \rangle = \mathcal{F}^{-1} (|\mathcal{F}(f)|^2),
\end{equation}
where $\boldsymbol{R}=\boldsymbol{r}-\boldsymbol{r}'$ and $\mathcal{F}$ is the usual Fourier transform.
This results in elliptical structures centred at $\boldsymbol{r}=\boldsymbol{r}'$ (see e.g. \fref{fig:RMz_corrfct}), which
we normalise and integrate along any particular
line starting from the centre to obtain the correlation
length in that direction.
We thus obtain the distribution of correlation lengths
as a function of angle with respect to the radial direction.
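The procedure above, a correlation map via the Wiener--Khinchin theorem followed by integration along rays from the centre, can be sketched as follows (a simplified stand-alone version, not the analysis code; the anisotropic test field is synthetic):

```python
import numpy as np

def correlation_map(f):
    """Two-point correlation C_f(R) via the Wiener-Khinchin theorem,
    normalised to C_f(0) = 1 and shifted so R = 0 sits at the map centre."""
    F = np.fft.fft2(f - f.mean())
    C = np.fft.ifft2(np.abs(F)**2).real
    return np.fft.fftshift(C / C.flat[0])

def correlation_length(C, angle_deg, dx=1.0):
    """Integrate the normalised correlation along a ray from the centre."""
    ny, nx = C.shape
    r = np.arange(min(nx, ny) // 2)
    th = np.deg2rad(angle_deg)
    ix = (nx // 2 + r * np.cos(th)).astype(int)
    iy = (ny // 2 + r * np.sin(th)).astype(int)
    return float(C[iy, ix].sum() * dx)  # simple Riemann sum along the ray

# Synthetic anisotropic field: identical rows, so it is perfectly
# correlated along y and noise-like along x.
rng = np.random.default_rng(1)
f = np.tile(rng.standard_normal(64), (64, 1))
Cmap = correlation_map(f)
# correlation length along y (90 deg) greatly exceeds that along x (0 deg)
```

Evaluating the ray integral over a grid of angles yields the distribution of correlation lengths with direction, from which the orientation and axis ratio of the correlation ellipse follow.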
We summarise in Table~\ref{Table:orientation of RMz}
the orientations of these ellipses, that is, the angle,
in degrees, between the major axis of the two-point correlation
contour and the
$x$ axis. We have computed these angles separately for the
kinematic and dynamical phases of the evolution,
along with their $1\sigma$ variances.
The model of the $n_e$ distribution does not seem to
have any significant impact on
these orientations in the dynamical phase; they appear
to be determined entirely by the background shear. The angle
along which the correlation length is maximal
matches roughly the one obtained for the background shear
in \cite{bendre2022}.
Nevertheless, with increasing SN
rate the orientation of the ellipses shows larger variations
about the mean value.
We note, however, that the variance estimates for Run50
and Run100 could suffer from a lack of data points, their sampling
being almost half that of Run25, and Run25 being sampled
more densely at the very beginning of the dynamical phase.
We also show the time evolution of the ratio $\ell_\mathrm{min}
/\ell_\mathrm{max}$ (where $\ell_\mathrm{min}$ and
$\ell_\mathrm{max}$ correspond roughly to the minor and
major axis respectively of the two point correlations of
RM maps)
in \fref{fig:RMz_corr_length} and
list its averages
over the dynamical phase in Table~\ref{Table:corr_length_ratio} (see also \fref{fig:RMz_corrlength_integral}).
We also study the correlation length of the $\mathrm{RM}$ maps
in the plane of observation in more detail.
In particular the ratio $\ell_\mathrm{min}/\ell_\mathrm{max}$ shows an interesting behaviour.
We clearly see in \fref{fig:RMz_corr_length} that for both $n_e$ models in
the early stage of the simulations $\ell_\mathrm{max}$ and $\ell_\mathrm{min}$ are comparable.
This is mainly due to the initial magnetic field that is uniform
along the $z$ axis.
As the system evolves,
the points farther away from each other
no longer stay
correlated and both $\ell_\mathrm{max}$ and $\ell_\mathrm{min}$ are reduced.
However $\ell_\mathrm{min}$ decreases faster than $\ell_\mathrm{max}$ in the
kinematic phase, indicating that the ellipses are stretched more and
more with time due to differential rotation.
Once the dynamical phase is reached the ratio is approximately
constant; mean values of $\ell_\mathrm{min}/\ell_\mathrm{max}$
can be found in Table~\ref{Table:corr_length_ratio}.
These
average values grow non-linearly with the SN rate with a
scaling slightly steeper for Mod2 than for Mod1.
Furthermore,
these ratios depend on the model of $n_e$ as well: specifically,
the mean values of $\ell_\mathrm{max}$ are systematically larger
for the constant-$n_e$ model (Mod2) than for Mod1. This is
probably because of an approximate correlation
between the magnetic field and the mass density ($\mathbf{\it B}\sim
\rho^{0.5}$), which makes the integrand of \eref{FD eq} scale
as $\sim \rho^{1.5}$ in Mod1, while it scales only as $\sim\rho^{0.5}$
for Mod2 (as $n_e$ is constant).
Regarding the mean values of $\ell_\mathrm{max}$ and
$\ell_\mathrm{min}$ in the dynamical phase
(see Table~\ref{Table:corr_length_ratio}), an increase of the SN rate
reduces the elongation of the RM ellipses.
The maximum correlation length is particularly affected by the choice of $n_e$.
We note that $\ell_\mathrm{max}$ compares well with the maximum correlation length of the anisotropic magnetic field reported in \cite{bendre2022}.
Far from its origin, the two-point correlation function
becomes negative (note
the blue regions in \fref{fig:RMz_corrfct}, outside
the central red ellipse, along $\ell_\mathrm{min}$), and at some
time steps these anti-correlations contribute overwhelmingly to
the integration along $\ell_\mathrm{min}$, rendering the correlation length
negative. A more robust measure of the correlation length in
such cases is analogous to the Taylor microscale, which is based on
the second-order moment of the correlation function. It can be
obtained by fitting a parabola to the curves in
\fref{fig:RMz_corr_length} and noting where the fit crosses the
$y=0$ line.
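One way to implement this Taylor-microscale estimate, assuming the normalised correlation $C(r)$ has been sampled along a given direction (the Gaussian test profile below is illustrative):

```python
import numpy as np

def taylor_microscale(r, C, n_fit=5):
    """Fit C(r) ~ 1 - (r/lambda)^2 near the origin and return lambda,
    the radius where the osculating parabola crosses C = 0."""
    # least-squares slope of C with respect to r^2 over the first n_fit points
    a = np.polyfit(r[:n_fit]**2, C[:n_fit], 1)[0]
    return float(np.sqrt(-1.0 / a))

# Example: a Gaussian correlation profile C(r) = exp(-r^2 / (2 s^2)),
# for which the Taylor microscale is sqrt(2) * s.
s = 3.0
r = np.linspace(0, 1.0, 50)
lam = taylor_microscale(r, np.exp(-r**2 / (2 * s**2)))
```

Unlike the ray integral, this estimate uses only the curvature of $C(r)$ near zero lag, so it is insensitive to the anti-correlated wings of the correlation function.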
Throughout our analysis of the $\mathrm{RM}$ maps we also
noticed that the $z$ axis is rather peculiar.
It is
the only axis for which the $\mathrm{RM}$ distributions
are more or less Gaussian;
their standard deviations increase with time and
are systematically
higher for Mod1 than for Mod2.
The $z$ axis is not directly affected by differential rotation, so the random fluctuations of the magnetic field and the thermal electron density along the LOS play a central role in determining the global structure of the map. More importantly, these structures form on smaller scales than the ones induced by the mean magnetic field and electron density.
The $\mathrm{RM}$ maps along the $x$ and $y$ axes also
display elliptical structures elongated along the
$x$ and $y$ axes, respectively
(see Figs.~\ref{fig:RMx_appendix} and \ref{fig:RMy_appendix}).
Typical structures are
usually on length scales that are comparable to the
entire domain for the $y$ axis but not for the $x$ axis (see Table~\ref{Table:corr_length_x_appendix} and Table~\ref{Table:corr_length_y_appendix}).
These maps are particularly peaked in the galactic
plane where the magnetic field and density reach their maximum.
As for the RM along the $z$ axis, we
also observe some bubble structures that could be due
to the propagation of matter from SN explosions (shock
fronts characterised by overdense regions).
The evolution of the maximal correlation length is
highly affected by the sign flipping of the $x$ and
$y$ components of the magnetic field.
\begin{table}
\centering
\begin{tabular}{c|cccc}
\hline
& & Run25 & Run50 & Run100\\ \hline
\\[-1em]
\multirow{2}{*}{Kinematic} & Mod1 & $172\pm20$ & $118\pm26$ & $121\pm38$ \\
& Mod2 & $120\pm23$ & $120\pm32$ & $115\pm43$\\\hline
\multirow{2}{*}{Dynamical} & Mod1 & $119\pm10$ & $125\pm13$ & $126\pm30$ \\
& Mod2 & $120\pm14$ & $123\pm18$ & $125\pm30$\\
\end{tabular}
\caption{Comparison for $\mathrm{RM}_z$ of the time average
of the orientation of the maximum correlation length in
degrees for the two models of the free electron density and the three simulation data sets.
The errors are taken as one standard deviation of the distribution.}
\label{Table:orientation of RMz}
\end{table}
\begin{table}
\centering
\begin{tabular}{c|cccc}
\hline
& & Run25 & Run50 & Run100\\ \hline
\\[-1em]
\multirow{2}{*}{$\ell_{\mathrm{min}}/\ell_{\mathrm{max}}$} & Mod1 & $0.077\pm0.020$ & $0.124\pm0.027$ & $0.200\pm 0.018$ \\
& Mod2 & $0.062\pm0.42$ & $0.077\pm0.045$ & $0.144\pm0.044$\\\hline
\multirow{2}{*}{$\ell_{\mathrm{min}}$ [pc]} & Mod1 & $6.8\pm1.7$ & $10.2\pm2.3$ & $14.8\pm 1.5$ \\
& Mod2 & $6.7\pm5.8$ & $8.2\pm5.1$ & $15.9\pm5.4$\\ \hline
\multirow{2}{*}{$\ell_{\mathrm{max}}$ [pc]} & Mod1 & $97\pm8$ & $92\pm10$ & $80\pm 3$ \\
& Mod2 & $152\pm14$ & $145\pm14$ & $131\pm10$\\
\end{tabular}
\caption{Comparison for $\mathrm{RM}_z$ of the
time average in the dynamical phase
of the correlation lengths ratio
($\ell_\mathrm{min}/\ell_\mathrm{max}$), the minimum correlation length ($\ell_\mathrm{min}$) and the maximum one ($\ell_\mathrm{max}$)
for the two models and the three simulation data sets.
The errors are taken as one standard deviation of the distribution.}
\label{Table:corr_length_ratio}
\end{table}
\subsection{Synchrotron radiation}
\label{synchr theory}
Relativistic charged particles emit synchrotron radiation
when accelerated. In the magnetised ISM, they are subject to the
Lorentz force, so the acceleration is related to the mass and
the velocity of the particle, as well as to the local magnetic field.
Most of the synchrotron emission is caused by CR electrons,
as they are relatively abundant, fast, and light.
In this analysis we
only consider these CR electrons as a source of the synchrotron
radiation from our simulation box.
In the most general case,
an emitted photon is elliptically polarised in the plane of
observation and for a group of highly relativistic
electrons it can be shown
\cite[see e.g.][]{Westfold1959}
that their total resulting emission is linearly polarised in
the plane of observation, with a plane of polarisation
perpendicular to the projected magnetic field.
The intrinsic
polarisation angle of the synchrotron radiation is given by
\begin{equation}
\theta_\mathrm{i} = \frac{\uppi}{2} + \arctan{\frac{\mathrm{B}_2}{\mathrm{B}_1}},
\end{equation}
where $\mathrm{B}_1$ and $\mathrm{B}_2$ denote the
magnetic field components along two orthogonal axes
perpendicular to the LOS.
As these photons pass
through the ISM, they also undergo Faraday rotation;
see \eref{theta}.
The polarisation angle of a cell of synchrotron radiation
emission is given by
\begin{equation}
\label{eq:total_ang}
\theta = \theta_\mathrm{i} + \lambda^2 \mathrm{FD'}.
\end{equation}
Note that the cell of emission suffers from its own Faraday rotation.
\citet{Sokoloff1998} showed that in this case we need to subtract
from the total $\mathrm{FD}$ one half of the $\mathrm{FD}$
of the emission cell; we call this quantity $\mathrm{FD'}$ to
distinguish it from the Faraday rotation of a non-emitting cell.
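A minimal numerical sketch of this half-cell prescription, assuming a discretised LOS in which each cell both rotates and emits (the per-cell Faraday depths and the ordering from the observer are illustrative assumptions):

```python
import numpy as np

def half_cell_faraday_depth(fd_cells):
    """Faraday depth FD' seen by each emitting cell: the cumulative FD
    accumulated between the observer and the cell, minus one half of
    the cell's own contribution (prescription of Sokoloff et al. 1998)."""
    fd_cells = np.asarray(fd_cells, dtype=float)
    return np.cumsum(fd_cells) - 0.5 * fd_cells

fd = [1.0, 2.0, 3.0]                    # per-cell FD, illustrative units
fd_prime = half_cell_faraday_depth(fd)  # -> [0.5, 2.0, 4.5]
```
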
Due to the $\uppi$ ambiguity of the linearly
polarised synchrotron emission angle, it is impossible to distinguish
between coherent magnetic fields, with a constant direction
along the LOS, and anisotropic magnetic fields, which reverse
their sign.
We assume that the CR electron energy spectrum follows
a power law of the form $\mathrm{d}N(E)
\propto E^{-\gamma} \mathrm{d}E$. Under such assumptions,
the intrinsic fractional
polarisation (which does not include any depolarisation
effect) can be expressed as \citep{LongairBook}
\begin{equation}
p_\mathrm{i} = \frac{\alpha - 1}{\alpha - \frac{5}{3}},
\label{he}
\end{equation}
where $\alpha = (1-\gamma)/2$
is the spectral index of the synchrotron radiation. In this paper we consider $\gamma = 2.7$ \citep{Kotera2011}, which gives $p_\mathrm{i} \simeq 0.74$.
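As a quick numerical check of \eref{he} with the adopted spectral index (a sketch, not part of the original analysis):

```python
gamma = 2.7                          # CR electron energy spectral index
alpha = (1 - gamma) / 2              # synchrotron spectral index, -0.85
p_i = (alpha - 1) / (alpha - 5 / 3)  # intrinsic fractional polarisation
# p_i evaluates to about 0.735, i.e. ~0.74 as quoted in the text
```
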
Following an approach similar to \citet{Sur2021} and
\citet{Basu2019} we define the total synchrotron intensity
map as
\begin{equation}
\label{eq synchr int}
I_{\nu} = \int N_0 n_\mathrm{CR} \mathrm{B}_{\perp}^{1-\alpha} \nu^{\alpha} ~\mathrm{d}l,
\end{equation}
where the integration is performed along the LOS,
$n_\mathrm{CR} = \int N(E)~\mathrm{d}E / \delta^3$ is the
density of CR electrons, $N_0$ is a
proportionality constant in the CR power law,
and $\mathrm{B}_{\perp}$ is the component of the
magnetic field perpendicular to the LOS.
Note that along the $z$ axis we compute the RM
through $\mathrm{B}_z$ which is expected to be dominated
by random variations in our set-up. However, the synchrotron emission
is obtained through $\mathrm{B}_x$ and $\mathrm{B}_y$ which capture the
effects of the mean-field dynamo. As such, any quantity derived from
this emission can also be used to characterise the effect of
the mean-field over the system.
Since the simulations
did not include CRs, we prescribe the CR electron
distribution, as for the density of thermal electrons,
with two different models, namely:
\begin{itemize}
\item CRMod1: Here, the CR
electron density is proportional to the magnetic energy density.
This model assumes that equipartition of
CR and magnetic energy is achieved at all scales, and
is preserved throughout the evolution.
\item CRMod2: This model is based on a constant CR electron density.
The assumption involved here is
that the CR density variations are either too small
or occur on the length scales that are too large to
be relevant.
\end{itemize}
In order to generalise the discussion we normalise the synchrotron intensity map by its total flux, thereby obtaining a result that is independent of the frequency and of the CR normalisation $N_0$.
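A schematic implementation of \eref{eq synchr int} for the two CR models is sketched below; the grid shape, field values, and function name are assumptions for illustration only (after normalising by the total flux, $N_0$ and $\nu^\alpha$ drop out):

```python
import numpy as np

ALPHA = -0.85  # synchrotron spectral index adopted in the text

def normalised_intensity(b_perp, b_par, cr_model, dl=1.0):
    """Integrate n_CR * B_perp^(1 - alpha) along the LOS (axis 0) and
    normalise by the total flux, so N0 and nu^alpha drop out.
    CRMod1: n_CR proportional to the magnetic energy density B^2.
    CRMod2: constant n_CR."""
    if cr_model == "CRMod1":
        n_cr = b_perp**2 + b_par**2
    else:
        n_cr = np.ones_like(b_perp)
    i_map = np.sum(n_cr * b_perp ** (1 - ALPHA), axis=0) * dl
    return i_map / i_map.sum()

rng = np.random.default_rng(0)
b_perp = rng.uniform(0.5, 1.5, size=(32, 16, 16))  # (LOS, sky, sky), illustrative
b_par = rng.uniform(-1.0, 1.0, size=(32, 16, 16))
i_mod1 = normalised_intensity(b_perp, b_par, "CRMod1")
i_mod2 = normalised_intensity(b_perp, b_par, "CRMod2")
```
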
The maps of synchrotron intensity show similar structures
(see \fref{fig:Iz_maps}, \fref{fig:Ix_appendix} and \fref{fig:Iy_appendix}) with both models,
although CRMod1 tends to produce a larger range of values
in synchrotron intensity.
We illustrate this in \fref{fig:Iz distrib} where we
plot the probability distributions of total synchrotron intensity
maps integrated along the $z$ axis for both CR electron models, at
various times. CRMod2 results in narrower
distributions compared to CRMod1, which could also be ascribed
to the cross-correlations between the density and the magnetic field inherent for CRMod1.
For both models, the synchrotron intensity can be directly
related to the magnetic field.
With
the models used here
$I_{\nu} \propto \int \left(\mathrm{B}_{\perp}^{3-\alpha} +
\mathrm{B}_{\perp}^{1-\alpha} \mathrm{B}_{\parallel}^2\right)~\mathrm{d}l$ for CRMod1 and
$I_{\nu} \propto \int \mathrm{B}_{\perp}^{1-\alpha} ~\mathrm{d}l$
in the case of CRMod2.
So the structures in the synchrotron map trace the
magnetic field lines, which can be stretched by the
shearing and randomly twisted by the SN explosions.
From
the very similar qualitative aspects of the two maps
with the two different models of CRs, we conclude
that it is mainly the magnetic field
strength
that characterises
the structures.
In fact, most of the structures in the
synchrotron intensity maps are also present in the mean
magnetic field strength over $x-y$ plane
(see \fref{fig:Mag_proj}).
The $z$ axis is also peculiar regarding the distributions of the
synchrotron radiation (see \fref{fig:Iz distrib}) as a lognormal
distribution of synchrotron intensity is observed only for this axis.
It is clear from this plot that the distributions of $I_\nu$
start out with a peak at the lower values and with a flat
tail, and eventually tend to an approximately symmetric shape.
The mean value shifts with time as the magnetic field energy
is growing.
Contrary to the $\mathrm{RM}$, the synchrotron radiation
associated with both models of CR electron distribution is
not a linear function of
the magnetic field.
Therefore, any random fluctuations therein
would not lead to a Gaussian but rather to a lognormal distribution.
We also note that when we
take the telescope effects (\sref{tel_eff}) into account,
we observe a reduction of the distribution tails,
since this amounts to filtering over
the small scale structures.
The inclusion of telescope effects is also accompanied
by a slight shift of the distributions to the lower normalised synchrotron intensities.
\subsection{Stokes parameters}
The Stokes parameters allow one to characterise the polarisation
state of a beam
\citep[e.g.][]{taylor2003canadian,haverkorn2006southern,ade2015planck,clark2019mapping}.
In this paper we
consider synchrotron emission as the only
source of the observed radio beam
and its total intensity is
given by $I_{\nu}$ as defined in \eref{eq synchr int}.
The $Q$ parameter is the relative degree of linear
polarisation along two arbitrary orthogonal axes
and the $U$ parameter is the relative degree of linear
polarisation along the same two
axes rotated by $\uppi / 4$.
Here, we ignore the
Stokes $V$ parameter
as we are only considering a linearly polarised
signal. Since we are free
to choose the orthogonal axes,
we can conveniently define
\begin{align}
Q_{\nu} &= \int p_\mathrm{i} N_0 n_\mathrm{CR} \mathrm{B}_{\perp}^{1-\alpha} \nu^{\alpha} \cos(2\theta) ~\mathrm{d}l, \\
U_{\nu} &= \int p_\mathrm{i} N_0 n_\mathrm{CR} \mathrm{B}_{\perp}^{1-\alpha} \nu^{\alpha} \sin(2\theta) ~\mathrm{d}l,
\end{align}
with $\theta$ given in \eref{eq:total_ang}.
The polarisation state of the complete beam is affected
by Faraday effects,
and the properties of its polarised
component such as the intensity of the linearly polarised signal
and the angle of polarisation are defined in terms of the
Stokes parameters as \citep{gardner1966polarization}
\begin{align}
\centering
\mathrm{PI}_{\nu} = \sqrt{Q_{\nu}^2 + U_{\nu}^2}, && \Gamma_\nu = \frac{1}{2} \arctan \left( \frac{U_{\nu}}{Q_{\nu}} \right).
\end{align}
We note that, in practice, there is an ambiguity in
determining this angle since with the ratio $U_{\nu}/
Q_{\nu}$ we lose the information about the
signs of $U_{\nu}$ and $Q_{\nu}$ separately.
We thus
use the function $\textit{arctan2}$, taken modulo
$\uppi$, to compute the polarisation angle such that $\Gamma_\nu \in [0,\uppi]$.
The degree of linear polarisation (DOP) is then
simply given by $p_\nu = \mathrm{PI}_{\nu}/I_{\nu}$
and in general decreases with wavelength due to Faraday effects \citep[see][]{Sokoloff1998}.
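The chain from Stokes $Q$ and $U$ to polarised intensity and angle can be sketched as follows (illustrative values; \texttt{arctan2} retains the individual signs of $U$ and $Q$, and the result is wrapped into $[0,\uppi)$):

```python
import numpy as np

def polarisation_from_stokes(q, u):
    """Polarised intensity PI = sqrt(Q^2 + U^2) and polarisation angle
    Gamma = 0.5 * arctan2(U, Q), wrapped into [0, pi) since the plane
    of polarisation is only defined modulo pi."""
    pi_nu = np.hypot(q, u)
    gamma = (0.5 * np.arctan2(u, q)) % np.pi
    return pi_nu, gamma

q = np.array([1.0, -1.0, 0.0])
u = np.array([0.0, 0.0, -1.0])
pi_nu, gamma = polarisation_from_stokes(q, u)
# gamma -> [0, pi/2, 3*pi/4]; a plain arctan(U/Q) could not separate these cases
```
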
In \fref{fig:mean_depo_frequency} we show the depolarisation
which is
the ratio of the
observed DOP and the intrinsic polarisation of synchrotron
radiation $p_\nu/p_\mathrm{i}$.
We observe
that in the limits of large and small frequencies it reaches
constant values
that depend on the CR model.
The depolarisation
we calculate
here results only from the differential rotation of the plane of polarisation
induced by the Faraday effects and the addition of cells of
emission along the LOS.
When the observation frequency
is very large the Faraday effects are very small and thus
the depolarisation corresponds to the intrinsic state of
the magnetic field along the LOS. The value should depend
upon the distribution of synchrotron intensity and its
intrinsic angle of polarisation along the LOS. In the very
low frequency regime the Faraday effects are so large
that any correlation between the different intrinsic
angle of polarisation is lost, which leads to a large
decrease in the DOP. Its value then tends to the one obtained with a
completely random distribution of angles. In the
intermediate regime however, the evolution of the
mean depolarisation with the frequency is mainly
dependent on the thermal electron distribution model.
Since we have a certain distribution of FD along
the LOS, the frequency at which the term $\mathrm{FD}
\lambda^2$ becomes negligible is not the same for each
cell of synchrotron emission.
In particular, the frequency at which the mean depolarisation starts
to increase, and thus the transition from one constant regime to the other
(see \fref{fig:mean_depo_frequency}), will give the order of magnitude
of the lowest absolute values of FD encountered
along the LOS in the plane where the average is performed.
The same is also true for the upper bound
of the transition but this time for the highest values of FD.
We find that the typical structures in the total synchrotron
intensity maps are very similar to the ones observed for the
polarised intensity for very high frequency. In the first
row of \fref{fig:maps_1} we show the observed polarised
intensity along the $z$ axis at, respectively,
70~MHz, 1.4~GHz and 5~GHz
using the configuration of models that
is perhaps the most realistic (Mod1 and CRMod1).
We observe throughout these typical maps that the nature of
observed structures depends strongly on the frequency of
observation. In particular, at low frequencies we completely
lose the details on small spatial scales.
We also note
that our normalisation of the total incoming flux due to the
synchrotron effects is arbitrarily fixed, so the typical
values that we obtained should not be directly compared to
real observations, which is why we normalise each map by its total flux.
The spatial distribution of these structures, however, is
unaffected by this normalisation.
The DOP and the observed polarisation angle are also affected by
the frequency
(row 2 and lines in row 1 of \fref{fig:maps_1}). Contrary to
the polarised intensity maps (in row 1), the structures are
no longer smoothed when the frequency is decreased but are
mostly destroyed.
In the very low frequency regime, the typical
structures are of the order of a pixel, which we
also confirm from
the evolution of the correlation length as a function of the
frequency of observation.
There, a similar transition to the
one presented in \fref{fig:mean_depo_frequency}
is observed.
Furthermore, the DOP maps in the intermediate frequency
regime show new elongated structures that are oriented
with respect to the radial ($x$) axis, which could be due
to the differential rotation.
In the high frequency regime the DOP is relatively uniform
due mainly to the correlation of the orientation of the
projected magnetic field along the LOS.
In particular, at 5~GHz the observed angle of polarisation
is almost always perpendicular to the orientation of the
projected mean magnetic field (see \fref{fig:Mag_proj}) since the Faraday
effects are small. However, when the frequency of observation is decreased
this statement no longer holds and the correlation between the polarisation
state and the magnetic field is lost.
In Appendix \sref{Sec:appendix_obs} we also display the equivalent to \fref{fig:maps_1} for the $x$ and $y$ axes (\fref{fig:few_maps_appendix_x} and \fref{fig:few_maps_appendix_y}).
\section{Telescope effects}
\label{tel_eff}
In this section we
describe the effects of a finite telescope resolution on
the observables extracted from the simulations.
In order to make a comparison with actual observations we smooth
the observed maps with a Gaussian kernel \citep{telescopeBook},
with the aim of simulating the aforementioned effect of
finite telescope resolution.
We use a convolution kernel
with a full width at half-maximum (FWHM) being the same in all
directions.
It mainly leads to a deletion of the small scale
structures while preserving the larger ones.
However, we need to treat the DOP and polarisation angle maps
more carefully as they are obtained through a transformation
of the actual observables, i.e. the Stokes parameters $Q$ and
$U$. In practice, observations
thus depend upon the frequency of observation as well as the
resolution of the telescope used.
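A sketch of this procedure, smoothing the Stokes maps first and deriving the DOP only afterwards; the FFT-based Gaussian beam (periodic boundaries) and the random test maps are assumptions of the sketch, not the paper's pipeline:

```python
import numpy as np

FWHM_PIX = 5.0
SIGMA = FWHM_PIX / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM = 2 sqrt(2 ln 2) sigma

def gaussian_smooth(field, sigma=SIGMA):
    """Convolve a 2D map with a Gaussian beam via FFT (periodic boundaries).
    The Fourier transform of a unit-area Gaussian of width sigma (pixels)
    is exp(-2 pi^2 sigma^2 k^2)."""
    ky = np.fft.fftfreq(field.shape[0])[:, None]
    kx = np.fft.fftfreq(field.shape[1])[None, :]
    beam_ft = np.exp(-2.0 * (np.pi * sigma) ** 2 * (ky**2 + kx**2))
    return np.real(np.fft.ifft2(np.fft.fft2(field) * beam_ft))

def smoothed_dop(i_map, q_map, u_map):
    """Beam-smooth the observables (I, Q, U) first, then form the DOP."""
    return np.hypot(gaussian_smooth(q_map), gaussian_smooth(u_map)) / gaussian_smooth(i_map)

rng = np.random.default_rng(1)
i_map = 1.0 + rng.random((64, 64))   # illustrative maps, not simulation data
q_map = rng.normal(size=(64, 64))
u_map = rng.normal(size=(64, 64))
dop = smoothed_dop(i_map, q_map, u_map)
```
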
We set the FWHM of the Gaussian kernel to 5 pixels, which
corresponds to a smoothing scale of $\sim 42$\pc. If we
consider, for example, a telescope that can achieve a
resolution of 0.5 arcsec, a smoothing scale of
$\sim42$\pc would locate the galaxy
at $\sim 17$ Mpc.
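The small-angle conversion behind these numbers can be checked directly (values as quoted above):

```python
import numpy as np

ARCSEC_RAD = np.pi / (180.0 * 3600.0)     # one arcsecond in radians
beam = 0.5 * ARCSEC_RAD                   # assumed angular resolution
smoothing_pc = 42.0                       # physical smoothing scale
distance_mpc = smoothing_pc / beam / 1e6  # d = l / theta (small angle)
# distance_mpc is roughly 17, matching the estimate in the text
```
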
When comparing the properties
of RM maps with (\fref{fig:RMz with 2 models telescope})
and without (\fref{fig:RMz with 2 models}) the telescope
effects we see that we have lost the very small scale
details like the bubble structures in the upper left
corner of Mod1. The RMS values are reduced to
57 and 18 rad m$^{-2}$ for Mod1 and Mod2, respectively.
On the other hand, we recover qualitatively the overall
information encoded in the large scale ellipses of
the maps, like the sign of the mean magnetic field
along the LOS or the orientation of the structures. Note
however that the peaked areas of the RM
map associated with
Mod1 have been erased. The length scales are not affected
significantly but are
increased by $\sim 5$ pc,
which is about 5\% of their original value.
We also note that a study of the RM at a significantly
higher
redshift would require a modification of \eref{FD eq}
\citep[see e.g.][]{2011ApJ...738..134A}.
In \fref{fig:maps_2} we present the exact same maps as in
\fref{fig:maps_1} but corrected by telescope effects. We
clearly see that the filaments and arc structures in
the polarised intensity contours are still identifiable
but thickened, thus the small scale details are
not
distinguishable from the background.
There are also
differences in the evolution regarding the frequency of
observation.
If we focus on the lowest values of the maps
we see that they are spatially distributed in such a way that
they delimit structures of higher values without agglomerating
or forming large scale structures themselves.
It is then even harder to distinguish
between the original peaked structures and the new structures induced by
the Gaussian smoothing.
The effects on the DOP and angle of polarisation are even
more pronounced as we cannot achieve the same random
pixel-wise maps at frequencies $\nu \sim 70$~MHz.
In particular, telescope effects lead to the creation
of structures that we do not observe in maps with the
maximum resolution of the simulation, i.e.~$\delta
\sim 8~\mathrm{pc}$. However, it seems that at very
low frequencies the isotropy observed at the maximum
resolution is still observed with telescope effects for these two quantities.
Furthermore, we observe that the typical values of DOP
and normalised polarised intensity are reduced by
Gaussian smoothing.
The overall shapes of the curves in \fref{fig:mean_depo_frequency}
are not affected by the telescope effects;
indeed the constant value of the mean depolarisation
in the high and low frequency limit still depends
mainly on the
CR distribution model, and the intermediate transition region
still depends on the model of thermal electrons.
However, we can still notice differences.
The mean depolarisation at low frequencies is significantly
lower and closer to zero.
The frequency range of the transition is narrower; in particular,
the low-frequency end of the transition occurs at a higher frequency
($\nu\sim 0.5$~GHz compared to $\nu\sim 0.1$~GHz without telescope effects).
This trend is also confirmed in \fref{fig:mean_depo_telescope},
where we show the evolution of the mean depolarisation as we
increase the FWHM of the Gaussian kernel and the frequency of observation.
This evolution is not linear and highly depends on the frequency of observation.
The telescope effects are particularly important
at low and intermediate observation frequencies,
as we could directly see from the DOP maps
(see second row of \fref{fig:maps_2}).
It also supports the idea that with a finite resolution
observation we preferentially lose the information generated by the
lowest values of FD along the LOS. This is different from the RM maps, where we lost most of the information on the structures generated by the highest values of RM.
\section{Discussion}
\label{sec_discussion}
In this section we discuss the main limitations of
this work.
First, we would like to
point out that the simulation domain is
much smaller than a typical disc galaxy in the $x$
and $y$ directions. For example, the Milky Way has a
radial length scale of $\sim 30$\kpc, as opposed to
the $\sim 0.8\,\times\, 0.8$ \kpc spanned by the simulation
domain in the $x$ and $y$ directions. Any extracted observable
would therefore
reveal the local correlations in the ISM turbulence, and
any correlations exceeding a \kpc scale would not be
captured. Furthermore, the reconstruction of RM maps
of an entire galaxy would also need the radial dependence
of the gas distribution and any azimuthal asymmetries to be
taken into account \citep{gasdistrib}.
That is the main reason we performed our analysis along
the $z$ direction.
We already mentioned that the simulations analysed only include
neutral hydrogen, but in reality several species could make
non-negligible contributions to the ISM dynamics.
In particular, this treatment does not allow a self-consistent
evaluation of the thermal and CR electron densities.
We had to define very simple models for both of them.
The mean free electron density used here is set to
0.1 cm$^{-3}$; with Mod2 in particular we neglect any
kind of spatial and temporal variations.
From observations we know that in spiral galaxies the
mean thermal electron density can range between
$\sim$ 0.1-0.01 cm$^{-3}$ in the disc plane, depending
on whether we consider arm or inter-arm regions \citep{schnitzler,Yao,beck2019synthesizing}.
However, the free electron density is expected to be even smaller outside the
disc plane. As the scale height of the simulation box is larger than that of a typical disc,
the mean free electron density could be overestimated.
With Mod1 we neglect contributions from other parameters, such as temperature, to
evaluate the local thermal electron density.
Even stronger assumptions were made regarding the CR models. A constant CR model would hold only on very small scales and not at all on a galactic scale.
On the contrary, equipartition seems not to hold on scales smaller than $\sim 1$ kpc \citep{equipart1kpc,equipart1kpc2}, whereas in this work our maximum resolution for any quantity is about 8.3 pc.
We would need a more rigorous model that lies in between our two hypotheses.
As an example a more complete description is proposed by \citet{SchoberCR} that includes contributions to the CR spectrum from the main interactions of electrons in galaxies.
However, this model does not include the diffusion of cosmic rays \citep[see e.g.][for an analysis of the CR diffusion]{sampson2022turbulent}.
Treating the CRs directly in the simulations can also affect
the magnetic field itself \citep{10.1093/mnras/staa3509} which eventually affects all the observables presented.
We would also like to review effects that can modify the slope of the synchrotron emission spectrum; we used a constant value of $\alpha = -0.85$, which corresponds to a typical value at GHz frequencies \citep{Platania_1998}.
In practice, when going to frequencies $\nu \ll$ GHz the spectrum is modified by contributions from synchrotron self-absorption and free-free absorption.
Conversely, above a few GHz we should include the effects of free-free emission, and at very high frequencies thermal emission could also be considered.
These different interactions together contribute to flattening the synchrotron spectrum \citep{Guzman,kogut2012synchrotron}.
It would be a straightforward extension of this work to see how the observables and observed structures are affected by these effects at high and low frequencies.
\section{Conclusions}
\label{sec_conclusions}
In this paper we have explored the typical observational
signatures of a galactic magnetic field that has been
self-consistently generated by a large-scale turbulent
dynamo. In particular we looked at three types of
observables namely the Faraday RM, the synchrotron
radiation, and the Stokes parameters, as well as their
related quantities such as the DOP (\sref{sec_observables}).
In the second part we applied simple telescope effects
(\sref{tel_eff}) to mimic radio observations.
The magnetic field was directly obtained from a simulation
of a galactic dynamo, while other relevant quantities for
galactic radio emission were not included in the simulation.
In particular, we had to model the distribution of thermal and
non-thermal electrons. We defined two models of thermal
electrons, one with a constant electron density and one in which
the electron density scales with the local mass density.
Similarly, for the CR electron density we used one model with a
constant CR density and another in which it scales with the
magnetic energy density.
We found that the RM structures are consistent with the
direction of differential rotation in the $x-y$ plane.
The SN rate does not seem to have an influence on the
overall orientation of the structures in the dynamical
phase.
However it was noticed that the stability and
the typical sizes of these structures are affected
by the SN rate.
Our results also indicate that the RM is sensitive to the
choice of the thermal electron model both in terms of qualitative
aspect of the structures and typical values of the maps.
From the RM distribution we found that along an axis not
directly
influenced by differential rotation (i.e.~the $z$ axis in our
setup) it is the magnetic field and thermal electron density
variations that dominate
the resulting maps.
Synchrotron radiation intensity maps showed the same structures
as in the mean magnetic field maps.
We found that the final qualitative aspect of the maps hardly depends
on the choice of the CR electron model.
However, the distributions
showed a significantly increased degree
of symmetry with the constant
CR model along the $z$ axis.
Maps of quantities that are derived from the Stokes parameters maps are highly dependent
on the frequency of observation. We found that decreasing the
frequency introduces something similar to a background noise
over the maps, deleting structures. It was also observed that
at an intermediate frequency completely new structures could
appear; this frequency depends on the typical FD encountered
in the photon's path.
Under telescope effects at a resolution of FWHM $\sim 42$ pc, we observed mainly deletion of small scale spatial structures in every
map.
We also found that these effects depend non-linearly on the
frequency of observation.
In particular when we applied telescope effects, completely new
structures
in the contours of various quantities related to the Stokes parameters were observed at low frequencies.
In the low frequency regime, the typical structures are mainly
induced by FD values for which the term $\lambda^2 \mathrm{FD'}$
(with $\mathrm{FD'}$ being the FD for a self-emitting cell) is
neither
too large nor negligible.
When this term is very large the cell of emission loses its correlation
with the other cells along the LOS, thus a tiny variation in the frequency
of observation will result in a random modification of Stokes $Q$ and $U$.
On the other hand when it is small, the cell of emission will not modify Stokes
$Q$ and $U$.
Finally, if FD is such that $\lambda^2 \mathrm{FD'}$ is somewhere in-between,
modifications in Stokes $Q$ and $U$ due to a frequency variation are smooth.
As such, the telescope effects seem stronger at the
lower frequencies; the Gaussian smoothing tends to alter
the information given by the lowest values of FD more than
by the high values. However, we observed in RM maps that the
deletion of small scale details happened primarily for
the highest values of RM.
With this work we would like to motivate
observational analyses that study the RM and Stokes
parameters simultaneously, as they give complementary
information about the magnetic field.
We would like to emphasise that at low frequencies
observations might suffer from the effect of a finite
resolution and may therefore lead to
wrong conclusions.
With the new generation of telescopes,
such as the Square Kilometre Array (SKA), however, the
resolution of radio observations will increase substantially,
which might allow lower frequencies to be explored, improving
our comprehension of galactic magnetic fields.
\section*{Acknowledgements}
We are grateful to Kandaswamy Subramanian and Yoan Rappaz for providing very
useful comments on our manuscript.
A.B.~and J.S.~acknowledge the support by the Swiss National
Science Foundation under Grant No.\ 185863.
\section*{Data Availability}
The data and analysis scripts used in this work will be provided upon reasonable request to the corresponding author.
\bibliographystyle{mnras}
\bibliography{example} %
\appendix
\section{Two-point correlation of the RM}
\label{Sec:appendix_RMz}
In this section we illustrate the way we compute the correlation length.
\fref{fig:RMz_corrfct} represents an example of two-point correlation maps for the RM along the $z$ axis.
\fref{fig:RMz_corrlength_integral} shows the correlation function along the axes of maximum and minimum correlation length, which are simply obtained by integration.
\section{Observables along radial and azimuthal directions}
\label{Sec:appendix_obs}
We display in this section the results obtained for the other two axes of the simulation, namely the radial $x$ and azimuthal $y$ directions. Typical correlation lengths of the RM in the dynamical phase can be found in Table~\ref{Table:corr_length_x_appendix} for the $x$ axis and in Table~\ref{Table:corr_length_y_appendix} for the $y$ axis. The equivalent maps to \fref{fig:RMz with 2 models} for the other two axes are shown in \fref{fig:RMx_appendix} and \fref{fig:RMy_appendix}. The normalised synchrotron intensity along, respectively, the $x$ and $y$ directions can be found in \fref{fig:Ix_appendix} and \fref{fig:Iy_appendix}. Finally, in \fref{fig:few_maps_appendix_x} and \fref{fig:few_maps_appendix_y} the six contour plots of \fref{fig:maps_1} can be found for a LOS along the radial and azimuthal directions.
\begin{table}
\centering
\begin{tabular}{c|cccc}
\hline
& & Run25 & Run50 & Run100\\ \hline
\\[-1em]
\multirow{2}{*}{$\ell_{\mathrm{min}}$ [pc]} & Mod1 & $54\pm8$ & $45\pm4$ & $60\pm 3$ \\
& Mod2 & $44\pm7$ & $40\pm5$ & $68\pm4$\\ \hline
\multirow{2}{*}{$\ell_{\mathrm{max}}$ [pc]} & Mod1 & $116\pm11$ & $96\pm5$ & $92\pm 6$ \\
& Mod2 & $124\pm10$ & $104\pm4$ & $103\pm5$\\
\end{tabular}
\caption{Comparison for $\mathrm{RM}_x$ of the
time average over the last 500 Myr of each run
of the minimum correlation length ($\ell_\mathrm{min}$) and the maximum one ($\ell_\mathrm{max}$)
for the two models and the three simulation data sets.
The errors are taken as one standard deviation of the distribution.}
\label{Table:corr_length_x_appendix}
\end{table}
\begin{table}
\centering
\begin{tabular}{c|cccc}
\hline
& & Run25 & Run50 & Run100\\ \hline
\\[-1em]
\multirow{2}{*}{$\ell_{\mathrm{min}}$ [pc]} & Mod1 & $224\pm16$ & $235\pm7$ & $158\pm 10$ \\
& Mod2 & $355\pm2$ & $345\pm4$ & $262\pm7$\\ \hline
\multirow{2}{*}{$\ell_{\mathrm{max}}$ [pc]} & Mod1 & $355\pm11$ & $330\pm4$ & $228\pm 11$ \\
& Mod2 & $378\pm5$ & $364\pm2$ & $307\pm5$\\
\end{tabular}
\caption{Comparison for $\mathrm{RM}_y$ of the
time average over the last 500 Myr of each run
of the minimum correlation length ($\ell_\mathrm{min}$) and the maximum one ($\ell_\mathrm{max}$)
for the two models and the three simulation data sets.
The errors are taken as one standard deviation of the distribution.}
\label{Table:corr_length_y_appendix}
\end{table}
\bsp %
\label{lastpage} |
Title:
Broadband multi-layer anti-reflection coatings with mullite and duroid for half-wave plates and alumina filters for CMB polarimetry |
Abstract: A broadband two-layer anti-reflection (AR) coating was developed for use on a
sapphire half-wave plate (HWP) and an alumina infrared (IR) filter for cosmic
microwave background (CMB) polarimetry. Measuring tiny CMB B-mode signals
requires maximizing the number of photons reaching the detectors and minimizing
spurious polarization due to reflection with an off-axis incident angle.
However, a sapphire HWP and an alumina IR filter have high refractive indices
of about 3.1, and an AR coating must be applied to them. Thermally sprayed
mullite and Duroid 5880LZ were selected in terms of index and coefficient of
thermal expansion for use at cryogenic temperatures. With these materials, the
reflectivity was reduced to about 2% at 90/150 GHz and <1% at 220/280 GHz. The
design, fabrication, and optical performance evaluation of the AR coatings are
described. The coatings were used in a current ground-based CMB experiment
called the Simons Array. They could also be applied to next-generation CMB
experiments, such as the Simons Observatory.
| https://export.arxiv.org/pdf/2208.09209 |
\abovedisplayskip=2pt
\belowdisplayskip=2pt
\title{Broadband multi-layer anti-reflection coatings with mullite and duroid \Erase{used}for half\Add{-}wave plate\Add{s} and alumina filter\Add{s} for CMB polarimetry %
}
\author{Kana Sakaguri${}^{1}$ \and Masaya Hasegawa${}^{2}$ \and Yuki Sakurai${}^{3}$ \and Charles Hill${}^{4,5}$ \and Akito Kusaka${}^{1,3,4}$ %
}
\institute{${}^{1}$Department of Physics, University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan\\
\email{ksakaguri@cmb.phys.s.u-tokyo.ac.jp} \\
${}^{2}$High Energy Accelerator Research Organization, Tsukuba, Ibaraki 305-0801, Japan\\
${}^{3}$Kavli IPMU, University of Tokyo, Kashiwa, Chiba 277-8583, Japan\\
${}^{4}$Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA\\
${}^{5}$Department of Physics, University of California, Berkeley, CA 94720, USA
}
\date{Received: date / Accepted: date}
\section{Introduction}
\label{intro}
\Add{The} cosmic microwave background (CMB) has a variety of information that is useful for understanding the early universe \cite{PhysRevD.60.043504,PhysRevLett.78.2058}. In particular, B-mode polarization is a unique pattern of parity-odd polarization that derives from primordial gravitational waves and gravitational lensing. The observation of B-mode polarization from primordial gravitational waves would provide strong evidence of inflation.
For high-precision CMB experiments, \Erase{there have been}recent advances in the development of devices to reduce systematic errors originating from optical systems and to improve sensitivity \Add{have occurred}. Optical elements\Add{,} such as \Erase{a}continuously rotating half-wave plate\Add{s} (HWPs) and \Erase{a}filter\Add{s,} have been introduced. A\Erase{n} HWP modulates the polarization signal, and the filter removes infrared (IR) \Add{signals}. However, the sapphire and alumina used for HWPs and IR filters have high refractive indices of $\simeq$ 3.1 \cite{Inoue:16,Lamb}. Thus, they reflect much of the incident light and increase systematic and statistical uncertainties.
\Erase{In order to}\Add{To} solve this problem, an anti-reflection (AR) coating is \Erase{particularly}critical for these materials. \Add{Several ways to apply AR coatings, such as layering dielectrics or machining sub-wavelength structures, have been reported \cite{Raut2011,Takaku_2021}. Among these methods, layering dielectrics was chosen in view of the hardness of the substrates (sapphire or alumina), the machining speed for large-diameter substrates, and the applicability to the high-frequency bands.} Our AR coating consists of two dielectric layers and can reduce reflectivity using materials with different refractive indices on the surface of optical elements. By adjusting the thickness of the coating materials, an AR coating can be optimized for a specific frequency band.
In this paper, \Erase{we present}the development of broadband multi-layer AR coatings for CMB experiments with 90/150~GHz and 220/280~GHz dichroic detectors that each cover approximately 30\% of the fractional bandwidth \Add{is presented}. To realize this broadband frequency coverage, \Erase{we developed}two-layer coatings with an average reflectivity of less than 3\% \Add{were developed}. \Erase{There are}Two challenges regarding AR coatings for CMB observations \Add{should be mentioned}.
First, it is difficult to apply a coating on 50-cm-diameter sapphire or alumina with uniformity of tens of microns.
Second, the AR-coated optical elements are cooled down to 40~\Erase{K}or 4~K \Erase{in order}to reduce thermal radiation on the cryogenic detectors, which creates challenges associated with differential thermal contraction.
For the first layer, thermally sprayed mullite \cite{Inoue:16}, a ceramic material (Tocalo Corporation) \cite{TOCALO}, was used, while the second layer was Duroid 5880LZ, a composite material (Rogers Corporation) \cite{ROGERS}. These materials were selected both for their refractive index and coefficient of thermal expansion. \Erase{We discuss the}AR coating's design, fabrication, and its optical performance at room temperature \Add{are discussed}.
\section{Design and fabrication}
\label{sec:design and fab}
The target\Add{s} of our AR coating \Erase{is}\Add{are} ground-based CMB experiments, and the requirements of the HWP and IR filter are \Add{described below}\Erase{as follows}:
\begin{itemize}
\item The diameters of the HWP and the filter are \Erase{as large as}about 50~cm, and the AR coating needs to be applied evenly to this large diameter \cite{Hill:SPIE,Ali_2020}.
\item The reflectance should be reduced to a few percent at the detector bands.
Roughly, 80-110/130-170~GHz and 200-260/260-320~GHz were chosen to calculate the average reflectance for each one \cite{Abitbol2020,10.1117/12.2312821,abazajian2019cmbs4}.
\item The AR coatings should not delaminate when cooled because the filter and the HWP are used at 40-50 K, and $\sim$4 K in some cases \cite{Ali_2020}.
\end{itemize}
\Add{Multiple reports have been made so far \cite{Golec2020,Nadolski2018,Rosen:13}, but producing repeatable coatings at large diameters is still challenging.}
To achieve such broadband and low-reflectivity coatings, \Erase{we developed}two-layer coatings \Add{were developed in our study}.
\Add{In our coating, the reflectance was minimized at a specific frequency band by layering dielectrics with different refractive indices, which step down to a refractive index of 1 for air. First, the optimal indices of the layers were calculated. Using the condition that the optical thickness of each layer was a quarter of the incident wavelength~$\lambda$, the optimal index relation could be derived:}
\begin{equation}
\Erase{1 = \frac{n_1^2 n_{\rm{Sapphire\, or\, Alumina}}}{n_2^2},}
\Add{n_s = \frac{n_1^2}{n_2^2}},
\end{equation}
\Add{for which}\Erase{where} $n_1$, $n_2$, and $n_{\Add{s}}$ are the indices of the first layer, second layer, and sapphire or alumina, respectively. This \Add{order originates}\Erase{comes} from the transmitted and reflected waves being 180 degrees out of phase, and the interference \Add{causes a reduction in} the reflectance \cite{1989itcm.book.....M}.
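As a quick numerical cross-check (illustrative Python, not part of the paper's analysis; the indices are the room-temperature values quoted in Table~\ref{tab:AR properties}, and the 117~GHz design frequency is an assumed round number), with the quoted indices the stack matches a substrate index of $n_1^2/n_2^2 \approx 3.19$, close to the $\simeq 3.1$ of sapphire or alumina:

```python
# Quick cross-check of the quarter-wave matching with the quoted indices
# (illustrative only; 117 GHz is an assumed round design frequency).
n1 = 2.52   # mullite, first layer (on the substrate)
n2 = 1.41   # Duroid 5880LZ, second layer (facing air)

n_matched = n1**2 / n2**2          # substrate index matched by the stack
print(f"matched substrate index: {n_matched:.2f}")  # ~3.19, close to 3.1

c = 299792458.0                    # speed of light, m/s
lam = c / 117e9 * 1e3              # free-space wavelength in mm, ~2.56 mm
d1 = lam / (4 * n1)                # quarter-wave mullite thickness, mm
d2 = lam / (4 * n2)                # quarter-wave Duroid thickness, mm
print(f"d_mullite = {d1:.3f} mm, d_Duroid = {d2:.3f} mm")
```

The quarter-wave mullite thickness reproduces the fabricated 0.254~mm; the fabricated Duroid layer (0.385~mm) deviates from the single-frequency quarter-wave value because the final design minimizes the band-averaged reflectance rather than the reflectance at one frequency.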
Considering the index, coefficient of thermal expansion, and compressive modulus, \Erase{we chose}coating materials \Add{were chosen} to satisfy the \Add{study} requirements. \Erase{mentioned.}Figure \ref{fig:AR conceptual diagram} shows a diagram of our AR coatings. \Erase{We selected} Mullite ceramic~\cite{TOCALO} \Add{was selected} as the first layer and Duroid 5880LZ \cite{ROGERS} aluminosilicate microspheres as the second layer.
\Add{Mullite had been used before \cite{Inoue:16}, but the combination of mullite and Duroid was used for the first time.}
The properties of the coating layers are listed in Table \ref{tab:AR properties}.
\begin{table}[tbp]
\caption{Basic properties of our AR coating materials. The AR indices, $n$, and thicknesses, $d$, for 90/150~GHz and 220/280~GHz coatings are shown. The thicknesses are the fabricated values, and the errors represent the production errors. All values are at room temperature.}
\centering
\begin{tabular}{c|c|c|c} \hline
Material & $n$ & $d$~[mm] (90/150~GHz) & $d$~[mm] (220/280~GHz) \\ \hline
Mullite & 2.52 $\pm$ 0.02 \cite{Inoue:16} & 0.254 $\pm$ 0.01 & 0.147 $\pm$ 0.01 \\ \hline
Duroid & 1.41 $\pm$ 0.01 \cite{ROGERS} & 0.385 $\pm$ 0.01 & 0.155 $\pm$ 0.01 \\ \hline
\end{tabular}
\label{tab:AR properties}
\end{table}
After selecting the coating materials, small test samples were fabricated after calculating the thicknesses of layers by minimizing the average reflectance between the frequency bands.
\Add{The predicted reflectance was calculated as an infinite plane wave incident on a homogeneous dielectric thin layer.}
The middle part of Figure \ref{fig:AR conceptual diagram} shows a photo of the fabricated AR coating sample. \Erase{We used}A 10-cm square of alumina with a thickness of 4~mm \Add{was used}.
The first layer of mullite was thermally sprayed on both alumina surfaces \cite{Inoue:16} by the Tocalo Corporation.
\Add{This process uses an established and available technology, and the layer can be fabricated to a high precision of 10~{\textmu}m.}
\Add{Because mullite has a coefficient of thermal expansion matched to that of alumina and adheres well to it, the coating is not expected to delaminate under cryogenic conditions \cite{Inoue:16}.}
The second layer of Duroid was glued on with \Add{40-{\textmu}m-thick} Epo-Tek, a type of epoxy.
\Add{Duroid thickness was machined to 385$\pm$10~{\textmu}m at Suzuno Giken \cite{Suzuno}.}
\Add{To model the measured reflectance accurately, it was calculated with three layers, including the thin layer of Epo-Tek.}
\Add{The thickness of the Epo-Tek layer is 20-40~{\textmu}m. While this may appear to be a rather large uncertainty, it has relatively little impact on the transmission because the Epo-Tek layer is thin and its index is almost exactly the mean of the indices of mullite and Duroid.}
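A minimal sketch of this plane-wave model (the standard characteristic-matrix method; not the authors' code — the Epo-Tek index of 1.97 is an assumed value, taken as the mean of the mullite and Duroid indices as suggested above) shows that varying the Epo-Tek thickness across its 20--40~{\textmu}m range barely moves the reflectance:

```python
import numpy as np

def reflectance(layers, n_sub, freq_hz, n0=1.0):
    """Normal-incidence power reflectance of a dielectric stack.

    layers: list of (index, thickness_m) ordered from the air side
    toward the substrate. Standard characteristic-matrix method.
    """
    lam = 299792458.0 / freq_hz
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / lam
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

# 90/150 GHz coating on alumina (n ~ 3.1); thicknesses as quoted in the text.
mullite = (2.52, 0.254e-3)
duroid = (1.41, 0.385e-3)

for t_epo in (20e-6, 40e-6):            # Epo-Tek thickness range quoted above
    epotek = (1.97, t_epo)              # assumed index: mean of mullite/Duroid
    stack = [duroid, epotek, mullite]   # ordered from the air side
    R = reflectance(stack, n_sub=3.1, freq_hz=120e9)
    print(f"Epo-Tek {t_epo * 1e6:.0f} um: R(120 GHz) = {R:.4f}")
```

In this sketch the reflectance near the band centre stays at the percent level for both Epo-Tek thicknesses, illustrating why the glue-layer uncertainty is benign.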
\Erase{We checked}The optical performance at 300~K \Add{was checked,} and \Erase{reoptimized}the thickness \Add{was reoptimized} while considering the thickness trend resulting from the optical measurement. This process was repeated until the best coating was finally obtained.
A large-diameter sample was fabricated in the same way as the small sample after determining the optimal thickness with a small test sample, thus completing the development of coatings for 90/150~GHz. A\Erase{n} HWP sandwiched between AR-coated alumina (Figure \ref{fig:AR conceptual diagram}) and an IR filter with our coatings were used in an experiment with the telescope of the Simons Array.
\section{Optical performance and cryogenic validation}
\label{sec:opt.meas.}
The optical performance of the small samples was analyzed using a vector network analyzer (VNA) at the Kavli Institute for the Physics and Mathematics of the Universe (IPMU). The reflectivity at an incident angle of 45 degrees and the transmissivity were measured from 55 to 330~GHz, the frequency range required for CMB observation. The measurement setup at room temperature is shown in Figure~\ref{fig:setup roomtemp}.
\Add{We measured an alumina slab without AR before measuring AR coating samples as a validation of the setup and confirmed that the same fringe patterns were measured as predicted.}
Figure \ref{fig:ref result} shows the measured reflectivity (an example from the 220/280~GHz sample).
\Add{First, the reflection measured at 45-degree incidence was fitted with the thickness and index of each layer as free parameters.
Optical measurements were performed once after spraying the mullite but before applying the Duroid coating, so the resulting thickness and index were reliable. The fit values were then substituted into the model with a zero-degree incident angle. Finally, the average reflectance at zero degrees was calculated over the detector bands.}
Some samples were also measured at a different incidence angle (18~degrees) to confirm that this analytical process was accurate.
Table \ref{tab:relfectivity} shows the average reflectance results of the small samples. It was possible to make samples with an average reflectance of $<$3\% for 90/150~GHz and 1\% for 220/280~GHz. The difference between the sample and \Erase{the}design mainly \Add{originates}\Erase{comes} from the thickness of each layer and the production tolerance. This error is roughly in the 2$\sigma$ range. However, the overall performance is important, and both the 90/150~GHz and 220/280~GHz samples were well constructed.
\Add{For the 90/150~GHz band, another AR coating using different dielectrics was available, and the performance of our coating was comparable to it.}
\begin{table}[tp]
\centering
\caption{Average reflectivity (on-axis performance) of the sample in each band. Differences between samples and designs mainly come from the thicknesses of the coating materials. These errors are largely due to production tolerances. The overall performance is nevertheless reasonable.}
\label{tab:relfectivity}
\begin{tabular}{c|cc|cc} \hline
& \multicolumn{2}{|c|}{90/150~GHz} & \multicolumn{2}{|c}{220/280~GHz} \\ \cline{2-5}
& 90 & 150 & 220 & 280 \\ \hline
measured & 1.7\% & 3.4\% & 0.38\% & 0.87\% \\ \hline
design & 3.6\% & 0.9\% & 0.92\% & 0.18\% \\ \hline
\end{tabular}
\end{table}
\Erase{We also measured}The transmission to estimate the AR coating's loss tangent, tan$\delta$, \Add{was also measured}. By measuring the decrease in transmittance and combining it with the reflectance measurement, tan$\delta$ \Add{could}\Erase{can} be determined. The loss tangents of materials increase as the frequency increases, and it is necessary to estimate them, especially for the 220/280~GHz sample. At lower temperatures, the loss tangent is expected to decrease. Therefore, \Erase{we can place}an upper limit on the loss tangent \Add{could be placed} by measuring the transmittance at room temperature.
Figure \ref{fig:transUHF} shows the measured data of the 220/280~GHz sample. For prediction, the index and thickness fit values obtained from the reflectance measurement were substituted. First, it was confirmed that the fringe patterns of reflectance and transmittance were consistent, and the results showed that the measurement system was valid.
Then,\Erase{we fit} the materials' loss tangent \Add{was fitted} while fixing the indices and thickness to the values obtained from the reflectance measurement. After fitting the measured transmission, the transmission was estimated at about 80~K, \Add{the temperature} at which the HWPs and alumina filters operate. The AR coating’s loss tangent decreased at low temperature, and transmission \Add{was expected to} increase by $\sim$20\% at 80~K.
\Add{To estimate how much the average transmittance increases (i.e., how much absorption decreases) at cryogenic temperature, the alumina values were fixed according to Inoue et al. \cite{Inoue:16}, while the other indices and thicknesses were fixed to the values measured at room temperature.}
Table \ref{tab:loss values} shows the fit values and prediction of the materials' loss tangents at 80~K.
\Add{Since these estimates rely on values extrapolated from other frequencies or temperatures, actual measurements must be taken to confirm them.}
The performance at low temperature will be evaluated in future work.
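As a back-of-the-envelope illustration of how the loss tangent translates into absorption (a single pass through one layer, surface reflections neglected; not the paper's full fit), the absorbed power fraction in a slab of index $n$, thickness $d$ and loss tangent $\tan\delta$ is roughly $1-e^{-2\pi n d \tan\delta/\lambda_0}$:

```python
import math

def single_pass_absorption(n, d_m, tan_delta, freq_hz):
    """Fractional power absorbed in one pass through a weakly lossy slab
    (thin-loss approximation; surface reflections neglected)."""
    lam0 = 299792458.0 / freq_hz
    alpha = 2 * math.pi * n * tan_delta / lam0   # power absorption coeff., 1/m
    return 1.0 - math.exp(-alpha * d_m)

# One mullite layer of the 90/150 GHz coating at 150 GHz, using the 300 K
# fit value and the 80 K prediction of tan(delta) quoted in the table above.
for T, tand in ((300, 423e-4), (80, 53e-4)):
    a = single_pass_absorption(2.52, 0.254e-3, tand, 150e9)
    print(f"{T} K: ~{100 * a:.1f}% absorbed per mullite layer")
```

With two coated surfaces per optical element, the per-layer absorption dropping from the several-percent level at 300~K to the percent level at 80~K is consistent in magnitude with the expected $\sim$20\% transmission increase on cooling.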
As a cryo-mechanical test, a cooldown test was also conducted to make sure that the fabricated samples \Add{did} not delaminate when cooled.
\Add{The Duroid was diced into $4\times4$~cm square islands after the Epo-Tek had cured, while the mullite layer was not diced,} to prevent peeling due to heat shrinkage. \Erase{We cooled}The \Add{large-diameter} sample \Add{was cooled} three times to 30~K at \Add{the} High Energy Accelerator Research Organization.
\Erase{There was}No delamination after cooldown \Add{occurred,} and the optical performance was the same as before cooling.
\Add{This cooldown test was performed every time a large sample was fabricated.}
\Add{After the cooldown test, an optical test was performed to confirm that the optical performance was unchanged compared to the earlier stage, indicating that no delamination had occurred. For the optical measurements, a total of nine locations were measured (the center and eight peripheral locations) to ensure that the coating was uniform. The details of the verification process will be described in a future manuscript.}
\begin{table}[tb]
\caption{Loss tangent, tan$\delta$ ($\times10^{-4}$), of the coating materials at 300 K (fit values) and 80 K (prediction). \Add{Systematic errors at 300~K were estimated from the measurement of the reference sample, alumina slab.} At lower temperatures, the loss tangent was expected to decrease, while the transmittance was expected to increase.}
\centering
\begin{tabular}{c|c|c|c} \hline
Temperature & Alumina & Mullite & Duroid \\ \hline
300 K & 7.0 $\pm$ 0.1 & 423 $\pm$ 3 & 39 $\pm$ 15 \\ \hline
80 K & 3.0 $\pm$ 1.1 \cite{Inoue:16} & 53 $\pm$ 10 \cite{Inoue:16} & $<$ 21 \cite{ROGERS} \\ \hline
\end{tabular}
\label{tab:loss values}
\end{table}
\begin{comment} %
\section{Low temperature measurement}
\label{sec:lowtemp}
\subsection{Setup}
\label{sec:setup l}
\subsection{Result}
\label{sec:result l}
\end{comment}
\section{Conclusion}
\Erase{We have presented}The design\Add{s}, fabrication, and optical performance of a 2-layer AR coating for CMB polarimetry at 90/150~GHz and 220/280~GHz \Add{are presented}. Our coatings with mullite and Duroid 5880LZ \Add{led to a reduction in} the reflectivity to about 2.7\% at 90/150~GHz and $<1$\% at 220/280~GHz.
A HWP and an IR filter with our 90/150~GHz coating were used in the Simons Array experiment. This technique could also be applied to other CMB experiments, \Add{such as} \Erase{like}the Simons Observatory.
\begin{comment}
Text with citations \cite{RefB} and \cite{RefJ}.
\subsection{Subsection title}
\label{sec:2}
as required. Don't forget to give each section
and subsection a unique label (see Sect.~\ref{sec:1}).
\paragraph{Paragraph headings} Use paragraph headings as needed.
\begin{equation}
a^2+b^2=c^2
\end{equation}
\begin{table}
\caption{Please write your table caption here}
\label{tab:1} %
\begin{tabular}{lll}
\hline\noalign{\smallskip}
first & second & third \\
\noalign{\smallskip}\hline\noalign{\smallskip}
number & number & number \\
number & number & number \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\end{comment}
\begin{acknowledgements}
We would like to thank Junji Yumoto and Kuniaki Konishi for the X-ray CT measurements used to understand the thicknesses of the small samples.
This work was supported by JSPS Core-to-Core program grant number JPJSCCA20200003, World Premier International Research Center Initiative (WPI), MEXT, Japan, and JSPS KAKENHI Grant Number 19H00674 and 19K14732. This research was supported by FoPM, WINGS Program, the University of Tokyo.
\end{acknowledgements}
\bibliographystyle{spphys} %
\bibliography{reference} %
|
Title:
Evaluation of the potential of a gamma-ray observatory to detect astrophysical neutrinos through inclined showers |
Abstract: We assess the capabilities of a ground-based gamma-ray observatory to detect
astrophysical neutrinos with energies in the $100\,{\rm TeV}$ to $100\,{\rm
PeV}$ range. The identification of these events would be done through the
measurement of very inclined extensive air showers induced by downward-going
and upward-going neutrinos. The discrimination of neutrino-induced showers in
the overwhelming cosmic-ray background is achieved by analysing the balance of
the total electromagnetic and muonic signals of the shower at the ground. We
demonstrate that a ${\rm km^2}$-scale wide field-of-view ground-based gamma-ray
observatory could detect a couple of Very-High to Ultra-High energy (VHE-UHE)
neutrino events per year with a reasonable pointing accuracy, making it an
interesting facility for multi-messenger studies with both photons and
neutrinos.
| https://export.arxiv.org/pdf/2208.11072 |
\title{Evaluation of the potential of a gamma-ray observatory to detect astrophysical neutrinos through inclined showers}%
\author{Jaime Alvarez-Mu\~niz}
\address{Instituto Galego de F\'\i{}sica de Altas Enerx\'\i{}as (IGFAE), Universidade de Santiago de Compostela, 15782 Santiago de Compostela, Spain}
\author{Ruben Concei\c{c}\~{a}o}
\email{ruben@lip.pt}
\address{Laborat\'{o}rio de Instrumenta\c{c}\~{a}o e F\'{i}sica Experimental de Part\'{i}culas (LIP) - Lisbon, Av.\ Prof.\ Gama Pinto 2, 1649-003 Lisbon, Portugal}
\address{Instituto Superior T\'ecnico (IST), Universidade de Lisboa, Av.\ Rovisco Pais 1, 1049-001 Lisbon, Portugal}
\author{Pedro J. Costa}
\address{Laborat\'{o}rio de Instrumenta\c{c}\~{a}o e F\'{i}sica Experimental de Part\'{i}culas (LIP) - Lisbon, Av.\ Prof.\ Gama Pinto 2, 1649-003 Lisbon, Portugal}
\address{Instituto Superior T\'ecnico (IST), Universidade de Lisboa, Av.\ Rovisco Pais 1, 1049-001 Lisbon, Portugal}
\author{M\'ario Pimenta}
\address{Laborat\'{o}rio de Instrumenta\c{c}\~{a}o e F\'{i}sica Experimental de Part\'{i}culas (LIP) - Lisbon, Av.\ Prof.\ Gama Pinto 2, 1649-003 Lisbon, Portugal}
\address{Instituto Superior T\'ecnico (IST), Universidade de Lisboa, Av.\ Rovisco Pais 1, 1049-001 Lisbon, Portugal}
\author{Bernardo Tom\'e}
\address{Laborat\'{o}rio de Instrumenta\c{c}\~{a}o e F\'{i}sica Experimental de Part\'{i}culas (LIP) - Lisbon, Av.\ Prof.\ Gama Pinto 2, 1649-003 Lisbon, Portugal}
\address{Instituto Superior T\'ecnico (IST), Universidade de Lisboa, Av.\ Rovisco Pais 1, 1049-001 Lisbon, Portugal}
\date{\today}%
\section{Introduction}
\label{sec:intro}
The multi-messenger approach to astroparticle physics has the potential to address fundamental problems, such as those related to physics in extreme phenomena, the origin of ultra-high-energy cosmic rays, the nature of dark matter, the possibility of Lorentz invariance violation, and even the existence of undiscovered particles.
Numerous experiments resort to extensive air shower (EAS) arrays to study very-high-energy gamma-rays, such as HAWC~\cite{HAWC}, LHAASO~\cite{LHAASOLayout}, and the Southern Wide-field Gamma-ray Observatory (SWGO)~\cite{SWGOFuture}, currently in its planning stage. The recent observation of gamma-rays with energies above $1\,$PeV by LHAASO~\cite{LHAASOPeV} puts pressure on the construction of a facility surveying the Southern hemisphere sky. This experiment should have an effective area of the order of ${\rm km^2}$ and excellent gamma/hadron discrimination capabilities to cope with the low fluxes reported by LHAASO.
On the other hand, experiments such as IceCube have been successfully operating over the years, demonstrating the presence of a very-high-energy neutrino flux of astrophysical origin. This flux has been seen to extend up to a few PeV with no sign of a cutoff~\cite{IceCubeFlux}.
The simultaneous measurement of gamma-rays and neutrinos coming from the same astrophysical source, known as multi-messenger measurements, is highly desirable, and in recent years it has been reshaping the experimental panorama with the addition of new, more ambitious upgrades and new experiments (see, for instance, \cite{IceCube-Gen2,CTAFundPhys,FermiGW}).
In this work, we have used shower simulations to determine whether ground-based gamma-ray EAS arrays can be used to detect neutrinos and estimate their expected sensitivity. Our study is restricted to neutrinos with energies ranging from $100\,{\rm TeV}$ to $100\,{\rm PeV}$. Signal events correspond to inclined EAS (zenith angle $\theta>60^\circ$) induced by downward and upward-going neutrinos. The main background source for this measurement is very inclined EAS resulting from the interaction of cosmic rays with the atmosphere.
The article is organized as follows: In Section~\ref{sec:strategy}, the experimental strategy employed to distinguish showers induced by neutrinos from the cosmic ray background is presented. Next, in Section~\ref{sec:simulation}, the simulation framework and the sets of simulated showers are given. In Section~\ref{sec:discrimination}, the discrimination methodology is presented. In Section~\ref{sec:method}, we discuss the method to estimate the sensitivity of a ground array observatory to astrophysical neutrinos, focusing on electron neutrinos $\nu_e$. Our results on the sensitivity obtained for downward-going and upward-going neutrino-induced events are given in Sections~\ref{sec:resultsdown} and~\ref{sec:resultsup}, respectively. In Section~\ref{sec:resultsdown}, the impact of the density of detector units in the array (fill factor), of the experimental reconstruction resolution, and of the simulation statistics is studied. Finally, in Section~\ref{sec:resultsall}, an estimate of the sensitivity considering all neutrino flavours is presented. We end the article in Section~\ref{sec:conclusions} with some final remarks and conclusions.
\section{Experimental strategy}
\label{sec:strategy}
In this work, we investigate the sensitivity of a ground-based wide field-of-view gamma-ray observatory, such as the LHAASO experiment~\cite{LHAASOLayout} or the future SWGO~\cite{SWGOFuture}, for the detection of astrophysical neutrinos in the energy range of hundreds of TeV up to hundreds of PeV. These experiments cover large effective areas of $\sim 1\,{\rm km^2}$ with a relatively high fill factor\footnote{In this context, the fill factor is the total detector sensitive area over the shower sampling area (size of the array).} ($\sim 4\%$ for LHAASO) to boost the detection of the very-low photon fluxes at $> \mathrm{PeV}$ energies.
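For scale, the quoted fill factor fixes the number of detector units (simple arithmetic with the numbers used in this work; the $\sim 12\,{\rm m^2}$ unit area is the one assumed for the array in Section~\ref{sec:simulation}):

```python
area_array = 1.0e6   # m^2, ~1 km^2 shower sampling area
fill_factor = 0.04   # ~LHAASO-like fill factor, as quoted in the text
unit_area = 12.0     # m^2 of sensitive area per water-Cherenkov unit

# fill factor = total sensitive area / sampling area
n_units = fill_factor * area_array / unit_area
print(f"~{n_units:.0f} detector units")  # ~3333 units
```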
The main source of background for these observatories is the overwhelming cosmic-ray flux, which exceeds the gamma-ray flux by a factor $\sim 10^4$ above $100\,$TeV energy. To mitigate this background, experimental data is often analysed to extract the muon content of the shower, which is higher for hadron-induced showers. However, the distinction between vertical (zenith angle $\theta\lesssim60^\circ$) neutrino-induced and cosmic-ray-induced showers is complicated, as the events exhibit similar signatures. The discrimination is enhanced for inclined showers ($\theta\gtrsim60^\circ$) due to the larger depth of atmosphere between the point of first interaction and the ground~\cite{PierreAuger:2011cpc}. As the proton-air interaction cross-section is seven orders of magnitude larger than the neutrino-air one, protons typically interact in the upper layers of the atmosphere, and a proton-induced inclined shower has to cross a large amount of matter before reaching the ground level. As a consequence, most of the electromagnetic component gets absorbed, and only muons can reach the ground. As a result, ground-based array detectors sample what is commonly called an \emph{old} shower.
Neutrinos, on the other hand, can interact much closer to the detector stations, and both the electromagnetic and muonic components will be detected, yielding what is commonly called a \emph{young} shower. Thus, the balance between the amount of measured signal due to muons and that due to electromagnetic particles can be used to discriminate neutrino-induced from cosmic-ray-induced showers.
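The geometric origin of this old/young distinction can be illustrated with a flat-Earth slant-depth estimate (a sketch only: the $550\,{\rm g/cm^2}$ vertical depth at high-altitude sites is an assumed round number, and the $1/\cos\theta$ scaling breaks down near the horizon, where Earth curvature matters):

```python
import math

X_vert = 550.0  # g/cm^2, assumed vertical atmospheric depth at a high-altitude site

for theta_deg in (0, 60, 70, 80):
    # Flat-Earth approximation, adequate up to ~80 degrees zenith angle.
    X_slant = X_vert / math.cos(math.radians(theta_deg))
    print(f"theta = {theta_deg:2d} deg: X ~ {X_slant:6.0f} g/cm^2")
```

Already at $\theta=60^\circ$ the slant depth doubles, so a proton shower starting near the top of the atmosphere is sampled long after its electromagnetic maximum, while a neutrino interacting near the ground produces a young shower at any zenith angle.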
This strategy has also been used by the surface detector array of the Pierre Auger Observatory to place limits on the neutrino flux at EeV energies~\cite{PAOUHENus,InclinedNusPAO}.
Hence, the neutrino signatures that we investigate in this work are those of very inclined showers ($\theta$ in the range $60^\circ$ to $88^\circ$) initiated close to the ground. Neutrinos with energies in the $100~{\rm TeV}-100~{\rm PeV}$ range are taken as signal, while the background is mainly attributed to very inclined EAS induced by cosmic rays. We initially focus on studying the detection of electron neutrinos $\nu_e$ only. When these particles interact with the atmosphere, they can generate both a hadronic and an electromagnetic shower, maximizing the detection probability. Upon reaching the ground, the inclined cascade may have undergone a substantial development producing a large footprint and facilitating its detection with a surface detector array.
The key observables to discriminate between neutrino and proton-induced showers are the total amount of signal produced by electromagnetic particles (\Sem) and by muons (\Smu). The existing and planned gamma-ray experiments should be able to access both quantities. The electromagnetic signal is essential to estimate the primary energy, while \Smu is typically used to discriminate gamma from proton-induced showers. In this work, we assume that both quantities are readily available instead of performing a dedicated experiment-dependent reconstruction (see, for instance, the LHAASO experiment~\cite{LHAASOLayout} to see how these quantities can be accessed). Afterwards, in Section~\ref{sec:reconstruction}, the impact of a possible reconstruction uncertainty on the sensitivity to VHE neutrinos is discussed. This study allows for the extraction of the experimental resolution needed to allow the detection of neutrino events.
\section{Simulation Framework and Data Analysis}
\label{sec:simulation}
We have simulated the development of air showers with dedicated Monte Carlo codes, and assumed a flat EAS array composed of cylindrical water-Cherenkov detector (WCD) units with area $\sim 12\,{\rm m^2}$, spanning over an area of $1\,{\rm km^2}$. The response of the station unit is modeled using a parameterisation of the average signal as a function of the energy of the particle crossing the detector. An example of the average air-shower footprint at the ground is displayed in Fig.~\ref{fig:DetectArray}.
CORSIKA (COsmic Ray Simulations for KAscade - version 7.7410)~\cite{CORSIKA} was used to generate downward-going extensive air showers initiated by protons and neutrinos.
Neutrino-induced air showers were simulated at fixed interaction points from the ground level up to $12\,000\,{\rm m}$ in vertical height, while for proton-induced showers, the starting points were sampled taking into account the proton-air cross-section. Showers generated by upward-going neutrinos interacting within the Earth's crust and developing in the ground were simulated using the AIRES framework, version 2.8.4a~\cite{AIRES}. Simulations were performed at fixed values of energy and zenith angle, while the azimuth angle ($\phi$) was sampled from a $2\pi$ uniform distribution. The magnetic field and the observation level of the WCD array remained unchanged in all simulations. The ground was placed at $5\,200\, {\rm m}$ above sea level, corresponding to the approximate altitude of some of the sites being considered for SWGO~\cite{SWGOFuture}.
The Earth's magnetic field was fixed to the value at the ALMA site, in Chile.
The response of the WCD stations was emulated with a parameterisation of the signal as a function of particle energy obtained with the Geant4 toolkit~\cite{Geant4}. The signals induced by shower particles were obtained by injecting them at the centre of the detector in the vertical direction.
A sketch of a WCD unit is shown in Fig.\,\ref{fig:WCDStation}. The single-layered WCD unit with multiple photo-sensors at the bottom is one of the candidate designs for the stations being considered for SWGO~\cite{wcd4pmt}. The parameterization of the average response of the WCD is obtained for electrons, muons and protons, representative of the electromagnetic, muonic and hadronic components of the shower, respectively.
It is important to note that the discrimination is performed using two shower quantities: \Smu and \Sem. As such, the absence of fluctuations in the parameterisation, due to light collection and particle trajectories, would have an impact on the resolution of the reconstructed \Smu and \Sem. The impact of the experimental resolution on the reconstruction of these shower parameters is discussed in Section~\ref{sec:reconstruction}.
With these simulations, we have computed \Sem and \Smu for each simulated neutrino and background proton shower at the ground array. The simulated values of \Sem and \Smu for signal and background events are fed into ROOT's Toolkit for Multivariate Data Analysis (TMVA)~\cite{TMVA} to separate the two classes of events as described in the next section.
\section{Discriminating signal and background}
\label{sec:discrimination}
The aim of this work was to minimise the background so that any neutrino candidate would be significant, at the expense of a smaller neutrino identification efficiency. This was achieved with a Fisher linear discriminant analysis performed in the parameter space of $\log_{10} (S_{\rm \mu})$ vs $\log_{10} (S_{\rm em})$. The cut in the Fisher discriminant is derived independently for each simulated zenith angle considering all the simulated proton energies ($10\,$TeV-$10\,$EeV) and neutrinos with fixed energy from $100\,$TeV to $10\,$PeV. An example is shown in Fig.~\ref{fig:FishCut} for the case of $\theta=70^\circ$. It was found that the optimal Fisher cut varies with the zenith angle, but not with the primary energy.
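The Fisher construction can be illustrated with a minimal two-dimensional implementation. The snippet below is only a sketch on toy Gaussian clusters (the analysis itself uses ROOT's TMVA, and the cluster positions here are arbitrary); it shows how the projection axis is obtained from the class means and the within-class scatter in the $\log_{10} S_\mu$ vs $\log_{10} S_{\rm em}$ plane:

```python
import numpy as np

def fisher_axis(signal, background):
    """Return the normalised Fisher projection axis for two 2-D samples."""
    mu_s = signal.mean(axis=0)
    mu_b = background.mean(axis=0)
    # Within-class scatter matrix: sum of the two per-class scatters.
    s_w = np.cov(signal, rowvar=False) * (len(signal) - 1)
    s_w = s_w + np.cov(background, rowvar=False) * (len(background) - 1)
    # Fisher direction maximises between-class over within-class scatter.
    w = np.linalg.solve(s_w, mu_s - mu_b)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
# Toy clusters in [log10(S_mu), log10(S_em)]: neutrino showers are muon-poor
# compared to proton showers of similar electromagnetic signal.
nu = rng.normal([3.0, 6.0], 0.3, size=(500, 2))
p = rng.normal([5.0, 6.5], 0.3, size=(500, 2))
w = fisher_axis(nu, p)
```

Projecting both samples onto `w` and cutting on the projected value is then equivalent to the linear cut drawn in the $\log_{10} S_\mu$--$\log_{10} S_{\rm em}$ plane.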
Two additional cuts were introduced to achieve a background-free discrimination.
Neutrino events have Fisher values predominantly above $\sim 0.5$. However, a small fraction of low-energy proton events, typically characterised by small values of \Sem, can also fulfil the Fisher cut. For all values of zenith angle, a cut on $\log_{10}(S_{\rm em}/{\rm p.e.}) > 5.3$ removes the majority of these background events while minimising the loss of neutrino events. An example is shown in Fig.\,\ref{fig:FishCut}.
A second, zenith-dependent cut on \Smu was introduced to remove the contamination due to the highest-energy proton background showers. Cascades induced by protons with energies above $1\,{\rm PeV}$ produce larger muonic signals than those induced by neutrinos with energies in the $100\,{\rm TeV}\,-\,10\,{\rm PeV}$ range. By limiting the maximum value of \Smu, these background events are eliminated with minimal loss of neutrino events as can be seen in the example in Fig.\,\ref{fig:FishCut}.
Within the rectangular region defined by the \Sem and \Smu cuts, the value of the Fisher cut can be further adjusted to remove all background events.
\section{Sensitivity of a ground array to neutrinos}
\label{sec:method}
To estimate the sensitivity of a gamma-ray ground-based observatory to neutrinos we have calculated the expected neutrino event rate $\d N_\nu/\d t$ given by the following equation,
\begin{eqnarray}
\frac{\d N_\nu}{\d t} &=& \int^{E_{\nu,{\rm max}}}_{E_{\nu,{\rm min}}} \frac{\d \Phi_\nu}{\d E_\nu}(E_\nu) \, \frac{1}{m} \, \sigma(E_\nu) \, M_{\rm eff}(E_\nu) \, \d E_{\nu} \,,
\label{eq:Sensitivity}
\end{eqnarray}
where $\d \Phi_\nu/\d E_\nu$ denotes the differential flux of incoming neutrinos, $m$ is the mass of an air nucleon and $\sigma(E_\nu)$ is the neutrino-nucleon cross section. $M_{\rm eff}(E_\nu)$ is the effective mass of the detector (see below), while $E_{\nu,{\rm min}}$ and $E_{\nu,{\rm max}}$ denote the integration limits used for the sensitivity calculation.
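The structure of this integral can be sketched numerically. In the toy evaluation below, the flux normalisation follows the IceCube-like power law discussed in the next subsection (halved to keep only $\nu_e$), but the cross-section scaling and the energy-independent effective mass are illustrative placeholders, not the values used in this work:

```python
import numpy as np

N_A = 6.022e23               # Avogadro's number [mol^-1]
m = 1.0 / N_A                # mass of an air nucleon [g]

def flux(e_nu):              # [GeV^-1 cm^-2 s^-1 sr^-1], e_nu in GeV
    return 0.5 * 4.98e-18 * (e_nu / 1e5) ** -2.53

def sigma(e_nu):             # assumed CC+NC cross-section scaling [cm^2]
    return 1.0e-33 * (e_nu / 1e6) ** 0.36

def m_eff(e_nu):             # effective mass, frozen at its 1 PeV value [g sr]
    return 2.97e14

e = np.geomspace(1e5, 1e8, 2000)             # 100 TeV - 100 PeV [GeV]
integrand = flux(e) / m * sigma(e) * m_eff(e)
# Trapezoidal integration in energy gives events per second.
rate = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e))
per_year = rate * 3.156e7
```

With these placeholder inputs the rate comes out at the level of a few tenths of an event per year, the same order of magnitude as the rates reported later in the text.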
In this Section we study the sensitivity to electron neutrinos only. The sensitivity to all neutrino flavors will be addressed in Section~\ref{sec:resultsall}.
\subsection{Electron Neutrino Flux}
An astrophysical flux of VHE electron neutrinos and anti-neutrinos was measured by the IceCube neutrino observatory up to a few PeV~\cite{IceCubeFlux}. The flux
of $\nu_{\rm e}$ and $\bar\nu_{\rm e}$ can be approximated by:
\begin{equation}
\frac{\d \Phi_\nu}{\d E_\nu}(E_\nu) = k^\prime \left( \frac{E_\nu}{E_0} \right)^{-2.53},
\label{eq:NuFlux}
\end{equation}
where $E_0=10^5\,{\rm GeV}$, and $k^\prime= k E_0^{-2.53} \equiv 4.98 \times 10^{-18}\,{\rm GeV^{-1}\,cm^{-2}\,s^{-1}\,sr^{-1}}$. In this work, we discuss the detection of neutrinos with energy above $100\,$TeV, where the astrophysical neutrino flux dominates over the atmospheric one. As such, we use for electron neutrinos the flux given in Eq.\,(\ref{eq:NuFlux}) reduced by a factor of two, assuming an equal content of $\nu_e$ and $\bar\nu_e$ at Earth. Moreover, since this work aims only at an estimate of the number of neutrinos that a generic gamma-ray observatory could detect through the use of inclined showers, we consider only the mean values reported by IceCube, i.e., we neglect in the following calculations the experimental uncertainties reported by the experiment.
\subsection{Neutrino-nucleon Cross-section}
In Eq.\,(\ref{eq:Sensitivity}) we use the values of the neutrino-nucleon cross-section as a function of energy from~\cite{NuXSecs}, distinguishing between charged current (CC) and neutral current (NC) neutrino interactions, as shown in Fig.\,\ref{fig:NuNucXSection}.
\subsection{Neutrino efficiency and effective mass}
The effective mass represents the amount of matter within which an interacting neutrino can be identified. Eq.\,(\ref{eq:effMass}) gives the effective mass as a function of the zenith angle $\theta$, and the energy of the incoming neutrino $E_\nu$:
\begin{equation}
M_{\rm eff}^\theta(E_\nu,\theta)= 2\pi A \sin \theta \cos\theta \int_D \varepsilon_\nu(E_\nu, \theta, D) \, \d D \, .
\label{eq:effMass}
\end{equation}
The function $\varepsilon_\nu(E_\nu, \theta, D)$ denotes the probability of identifying a neutrino given the cuts introduced in Section~\ref{sec:discrimination}. It is a function of the slant depth $D$ of the neutrino's first-interaction point (expressed in ${\rm g\,cm^{-2}}$ and measured from the ground), the energy of the neutrino $E_\nu$ (given in GeV), and the angle of incidence $\theta$ (in radians). The surface area of the array is denoted as $A$, and was fixed at $A=1\, {\rm km}^2$.
The neutrino identification efficiency $\varepsilon_\nu(E_\nu,\theta,D)$ is obtained as the ratio of the number of neutrino points within the area delimited by the cuts (white region in Fig.\,\ref{fig:FishCut}) to the total number of simulated neutrino points for a given zenith angle, energy and interaction depth. An example is depicted in Fig.\,\ref{fig:effCurves} for $E_\nu=1\,{\rm PeV}$ and several zenith angles as a function of $D$. As expected, the neutrino identification efficiency decreases for showers initiated far from the ground, since those are more similar to showers induced by protons, which typically interact in the upper layers of the atmosphere.
For each primary neutrino energy, five values of $\theta$ are considered: $60^\circ$, $70^\circ$, $75^\circ$, $80^\circ$ and $88^\circ$. The integration in $D$ of Eq.\,(\ref{eq:effMass}) is done using a cubic spline interpolation to the discrete values of $\varepsilon_\nu(E_\nu,\theta, D)$\footnote{$E_\nu$ and $\theta$ are fixed for each case.}. This results in the effective mass values for each value of $\theta$ reported in Table~\ref{tab:EffMass}.
\begin{table}[ht]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{c | c}\hline
\textbf{$\theta$} & \textbf{$M_{\rm eff}^\theta(E_\nu=1\,{\rm PeV},\theta)\,[{\rm g}]$} \\
\hline
$60^\circ$ & $9.73\times 10^{12}$ \\
$70^\circ$ & $1.27\times 10^{13}$ \\
$75^\circ$ & $1.65\times 10^{13}$ \\
$80^\circ$ & $9.09\times 10^{12}$ \\
$88^\circ$ & $2.21\times 10^{12}$ \\
\hline
\end{tabular}
\caption{Effective mass as given in Eq.\,(\ref{eq:effMass}) for neutrino-induced showers with $E_\nu=1~{\rm PeV}$ and several values of $\theta$.}
\label{tab:EffMass}
\end{table}
The total effective mass for a given neutrino energy is obtained by integrating the effective mass over the zenith angle, $\theta \in [60^\circ, 89^\circ]$.
The integration in zenith angle is performed by applying a cubic spline interpolation to the $M_{\rm eff}^\theta(E_\nu,\theta)$ values listed in Table\,\ref{tab:EffMass} for the case of $E_\nu = 1\, {\rm PeV}$. This yields a total effective mass at the reference energy $E_\nu=1\,{\rm PeV}$ of $M_{\rm eff}\simeq 2.97\times 10^{14}\,{\rm g\, sr}$.
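The spline-then-integrate step can be sketched for the zenith-angle stage using the per-angle values of Table~\ref{tab:EffMass}. Assuming (as the quoted total suggests) that the tabulated values are to be integrated over $\theta$ expressed in degrees, a cubic spline reproduces the quoted $M_{\rm eff}$ closely:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Per-angle effective masses for E_nu = 1 PeV (values of Table 1).
theta = np.array([60.0, 70.0, 75.0, 80.0, 88.0])                   # [deg]
m_theta = np.array([9.73e12, 1.27e13, 1.65e13, 9.09e12, 2.21e12])  # [g]

# Cubic-spline interpolation of the discrete values, then integration over
# theta in [60, 89] deg (slightly extrapolating past the last simulated
# angle, as in the text).
spline = CubicSpline(theta, m_theta)
m_eff_total = spline.integrate(60.0, 89.0)  # [g sr], assuming deg-units
```

The same interpolate-and-integrate pattern applies to the slant-depth integration of Eq.\,(\ref{eq:effMass}); here the result lands close to the quoted $M_{\rm eff}\simeq 2.97\times 10^{14}\,{\rm g\, sr}$.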
\subsection{Electron Neutrino Interactions}
The neutrino detection efficiency and the effective mass depend on the neutrino interaction channel. In Fig.\,\ref{fig:effCurves} and Table\,\ref{tab:EffMass}, the interaction channel, either CC or NC, was randomly chosen according to their relative weights in the total cross-section. However, in CORSIKA simulations, the interaction can be chosen so that neutrinos only interact via CC or NC, allowing the estimation of the sensitivity for each interaction channel. An example of the resulting neutrino identification efficiency is presented in Fig.\,\ref{fig:NCCCEff80}, for $E_\nu=1\,{\rm PeV}$ and $\theta=80^\circ$.
As seen in Fig.\,\ref{fig:NCCCEff80}, the electron neutrino identification efficiency considering only CC interactions has non-zero values at a larger distance from the ground than the one obtained using only NC interactions. This happens because, in CC interactions, the total energy of the $\nu_e$ is transferred to an electromagnetic shower, initiated by the energetic electron produced in the interaction, and to a hadronic shower from the collision with the atmospheric nucleon.
In NC interactions, an electron neutrino is produced instead of an electron. Hence, only the typically less energetic hadronic shower can be detected, reducing the efficiency. Fig.\,\ref{fig:NCCCEff80} also shows the more realistic case in which CC and NC interactions are chosen at random according to their relative weights in the total neutrino-nucleon cross section.
As expected, the curve NC+CC is in between the CC and NC curves.
Integrating Eq.\,(\ref{eq:effMass}) in zenith angle for a fixed energy yields the
effective masses reported in Table\,\ref{tab:EffMasses} for $E_\nu=1\,{\rm PeV}$.
\begin{table}[ht]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{c | c}\hline
Interaction & $M_{\rm eff}(E_\nu=1\,{\rm PeV})\,[{\rm g\,sr}]$ \\
\hline
CC & $3.60\times 10^{14}$ \\
NC & $2.27\times 10^{14}$ \\
Total & $2.97\times 10^{14}$ \\
\hline
\end{tabular}
\caption{Effective mass for the different neutrino interaction channels CC and NC, with $E_\nu=1~{\rm PeV}$. Total corresponds to the case where CC or NC are chosen randomly.}
\label{tab:EffMasses}
\end{table}
\section{Sensitivity to downward-going $\nu_e$}
\label{sec:resultsdown}
Eq.\,(\ref{eq:Sensitivity}) can be integrated over energy to obtain the electron neutrino event rate.
This is achieved by applying a cubic spline interpolation to estimate the effective mass values for neutrino energies between $100\,{\rm TeV}$ and $10\,{\rm PeV}$. The effective mass for energies outside this range is approximated via extrapolation.
The estimated electron neutrino event rates are given in Table\,\ref{tab:Sensitivities}. Different values of $E_{\nu,{\rm min}}$ and $E_{\nu,{\rm max}}$ were used in Eq.\,(\ref{eq:Sensitivity}) to study the dependence of the event rate both on the minimum energy above which the flux can be considered purely astrophysical, with negligible contamination from atmospheric neutrinos, and on the maximum energy to which the astrophysical flux could extend without a cutoff. As can be seen in Table\,\ref{tab:Sensitivities}, an event rate of up to $\sim 0.3$ electron neutrinos per year is expected.
\begin{table}[ht]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{c | c}\hline
$E_{\nu,{\rm min}} - E_{\nu,{\rm max}}$ & $\frac{dN}{dt}(E_{\nu})[{\rm yr^{-1}}]$ \\
\hline
$100\,{\rm TeV}-1\,{\rm PeV}$ & $1.30 \times 10^{-1}$ \\
$100\,{\rm TeV}-10\,{\rm PeV}$ & $2.06\times 10^{-1}$ \\
$100\,{\rm TeV}-100\,{\rm PeV}$ & $3.01 \times 10^{-1}$ \\ \hline
$1\,{\rm PeV}-10\,{\rm PeV}$ & $1.06 \times 10^{-1}$ \\
$1\,{\rm PeV}-100\,{\rm PeV}$ & $1.72 \times 10^{-1}$ \\
\hline
\end{tabular}
\caption{Event rate, given by Eq.\,(\ref{eq:Sensitivity}), for electron neutrinos only in a wide-field ground-based gamma-ray observatory ($A=1\,{\rm km^2}$), for different integration ranges $E_{\nu,{\rm min}}$--$E_{\nu,{\rm max}}$.
}
\label{tab:Sensitivities}
\end{table}
The estimates of sensitivity given in Table\,\ref{tab:Sensitivities} can be extrapolated linearly to other values of detector surface area $A$. In Fig.\,\ref{fig:NusPerYear} we depict the electron neutrino event rates as a function of $A$ for different values of $E_{\nu,{\mathrm min}}$ and $E_{\nu,{\mathrm max}}$.
\subsection{Impact of the array fill factor}
The fill factor is defined as the ratio between the sum of the area of individual detectors and the total area of the array $A$.
To infer the impact of the fill factor on the event rate, the procedure described previously is applied to a detector array of equal surface area ($1\,{\rm km^2}$) and variable fill factor. In this work we have studied the sensitivity for fill factors of $1, 3, 5, 50$ and $80\%$, yielding the results in Fig.\,\ref{fig:NusVsFF}. All the cuts described in Section~\ref{sec:discrimination} were recomputed to ensure that all the simulated proton background events are rejected.
Taking as a reference LHAASO's fill factor of $4\%$~\cite{LHAASOLayout}, the estimated neutrino event rate decreases by a factor of $\approx 3$ when compared to the initially assumed $80\%$ fill factor. It is interesting to see that the event rate increases rather slowly for fill factors between 1\% and $\sim 5\%$ and more rapidly between $\sim 10\%$ and $\sim 50\%$.
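For a sense of scale, the fill factor directly fixes the number of stations. A back-of-the-envelope estimate using the $\sim 12\,{\rm m^2}$ unit area quoted earlier (and ignoring packing constraints):

```python
# Number of ~12 m^2 WCD units needed to reach a given fill factor over
# an array area A = 1 km^2.
area = 1.0e6   # array area [m^2]
unit = 12.0    # single-WCD area [m^2]
stations = {ff: ff * area / unit for ff in (0.01, 0.05, 0.80)}
```

A $5\%$ fill factor thus already corresponds to several thousand stations, while the nominal $80\%$ array requires tens of thousands.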
\subsection{Impact of experimental resolution}
\label{sec:reconstruction}
We have also studied the impact of the experimental resolution on the expected event rate. Gaussian smearings of widths $\sigma_{S_{\rm em}}$ and $\sigma_{S_{\rm \mu}}$ were applied to the electromagnetic (\Sem) and muonic (\Smu) signals, respectively, of both neutrino and background events.
After applying the smearing, the cuts on the Fisher discriminant described in Section~\ref{sec:discrimination} were recomputed to ensure that all simulated background events are rejected. Assuming again an array area of $A=1\,{\rm km}^2$ with an 80\% fill factor, the resulting neutrino event rates are presented in Fig.\,\ref{fig:SmearVsNusLargeScale}. Larger values of $\sigma_{S_\mu}$ and/or $\sigma_{S_{\rm em}}$ result in progressively lower event rates and hence lower sensitivity, as expected. However, the expected number of neutrinos degrades by a factor of $2$ only when the smearing applied to the electromagnetic or muonic signal reaches an extreme value of about $200\%$, whereas at PeV energies the reconstruction resolutions of \Sem and \Smu are expected to be a few tens of percent. This reduced impact on the event rate reflects the robustness of the methodology against a possible degradation of the signal due to reconstruction.
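The smearing procedure amounts to multiplying each signal by a Gaussian factor of the chosen fractional width before re-deriving the cuts. A minimal sketch (the resolution value and the toy signal distribution are illustrative):

```python
import numpy as np

def smear(signals, resolution, rng):
    """Gaussian smearing of fractional width `resolution` applied to an
    array of station-summed signals (stand-in for S_em or S_mu)."""
    smeared = signals * rng.normal(1.0, resolution, size=signals.shape)
    # Signals are positive quantities; clip the rare negative draws that
    # can appear for very large smearings.
    return np.clip(smeared, 0.0, None)

rng = np.random.default_rng(42)
s_em = rng.lognormal(mean=12.0, sigma=0.5, size=10_000)  # toy S_em sample
s_em_20 = smear(s_em, 0.20, rng)  # ~20% resolution, roughly as expected at PeV
```

The smearing broadens the signal distribution without shifting its mean, which is why the discriminant cuts must be re-optimised rather than the signals re-scaled.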
The ability to reconstruct the geometry (arrival direction and core position) of the neutrino-induced shower events was also investigated using a simple reconstruction algorithm. The reconstruction is performed by fitting the arrival times of the first particles reaching each WCD station to a conic shower front. The curvature of the front was taken from~\cite{conic_geom}, without any further optimisation. This test was done considering an array of $A=1\,{\rm km^2}$ and a fill factor of $5\%$.
In Fig.~\ref{fig:XNstationSigmaTheta}, we show a density plot of the angular reconstruction resolution, $\sigma_\theta$, as a function of the neutrino interaction slant depth and the number of active stations. The resolution $\sigma_\theta$ is defined as the $68\%$ containment of the difference between the simulated and reconstructed angles. From this figure, it can be seen that the precision of the shower-axis reconstruction depends both on the distance of the neutrino interaction point to the ground and on the number of triggered stations. If the interaction happens close to the ground, the shower footprint is small, leading to a poor reconstruction. However, if the interaction happens at $\gtrsim 100\,{\rm g\,cm^{-2}}$, it is possible to achieve angular resolutions better than $\sim 1^\circ$.
Experimentally, one could apply a cut on the number of active stations. For instance, requiring at least $\sim 30$ active stations would ensure a reconstruction resolution better than $\sim 5^\circ$. The introduction of such a condition would lead to a small, $\sim 10\%$, decrease in the neutrino identification efficiency and effective mass, resulting in a proportionately lower neutrino event rate.
In Fig.~\ref{fig:XNstationSigmaTheta}, it can also be seen that for showers with large slant depths ($D\gtrsim 1000\,{\rm g\,cm^{-2}}$), the number of active stations can vary significantly, as it is intrinsically connected to the shower development. However, the plot also displays the median and the standard deviation of the number of events, showing that most of the showers lead to a large number of active stations. It should also be pointed out that, while the number of active stations affects the quality of the reconstruction, better resolutions can be attained for neutrino-induced showers that interact higher in the atmosphere. This happens because, even though fewer particles reach the ground, the shower footprint is more extended due to the longer shower development through the atmosphere, easing the reconstruction of the geometry.
It was verified that the order of magnitude of the claimed geometric reconstruction resolution is the same for all the energies and angles considered in this work.
Finally, it is important to note that the provided values on the reconstruction resolutions should be taken as upper limits. Dedicated reconstructions of inclined showers are expected to improve the angular resolution~\cite{AugerInclined}.
\subsection{Impact of the limited simulation statistics}
The flux of background proton-induced showers greatly exceeds the expected flux of neutrinos, implying that a reliable observation of neutrino events requires a large background rejection factor.
Simulations are needed to establish the cuts and to assess possible contamination by proton showers, so that a neutrino candidate, if observed, would constitute a significant detection. However, the available simulations are limited in statistics by the computational resources and computing time.
To overcome this difficulty, we have applied the following procedure. For all sets of simulated proton showers at fixed energy and zenith angle, we have obtained the Fisher distributions of proton showers within the region of interest delimited by the cuts on \Sem and \Smu defined in Section\,\ref{sec:discrimination} (see also Fig.\,\ref{fig:FishCut}). The cumulative distributions (number of events above a given Fisher value) are then obtained and normalised to one. This procedure gives the proton background selection efficiency $\varepsilon_p$, i.e. the proton contamination fraction, as a function of the Fisher value. A few examples are shown in Fig.\,\ref{fig:four_graphs} in the Appendix. An exponential fit to the tail of the cumulative proton distributions is performed and used to extrapolate to higher background rejection factors (smaller contamination fractions $\varepsilon_p$), where the limited statistics of the proton simulations did not populate the tails of the distributions.
The Fisher value cumulative distribution for each zenith angle is then obtained by combining the cumulative distributions for all proton energies, weighting according to their relative contribution to the cosmic-ray flux assuming a power-law $E^{-3}$ spectrum.
For each proton selection efficiency $\varepsilon_{p}$, the matching Fisher value is extracted from the cumulative of the corresponding zenith angle and taken as the Fisher cut value.
In this way, the neutrino event rate above the Fisher cut is estimated as a function of $\varepsilon_{p}$, ranging from $10^{-14}$ to $10^{-1}$, as shown in Fig.~\ref{fig:EventRatesVsEpsilonP}. The plot suggests that an electron neutrino event rate of $\sim 0.3$ per year can be achieved with a proton background contamination smaller than $\sim 0.005$ events per year. The $1\sigma$ uncertainty of the exponential fit can be used to evaluate the corresponding uncertainty on the number of neutrinos as a function of $\varepsilon_{p}$, shown as a band in the top panel of Fig.\,\ref{fig:EventRatesVsEpsilonP}. From this exercise, it can be seen that, while the uncertainty on the number of expected neutrinos increases as $\varepsilon_{p}$ decreases, it is at most a factor of four for a quasi background-free ($\varepsilon_p\rightarrow 0$) experiment. In any case, for values of $\varepsilon_p$ lower than $\approx 10^{-14}$, the neutrino event rate is higher than that of the background.
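The extrapolation step can be sketched as follows: fit $\log \varepsilon_p$ linearly against the Fisher value on the tail of the cumulative distribution, then invert the fit to place the cut at a target contamination. The sample below is a toy exponential distribution standing in for the simulated proton Fisher values:

```python
import numpy as np

def tail_fit(fisher, tail_start):
    """Fit log(eps_p) = intercept + slope * F on the tail F > tail_start,
    where eps_p(F) is the fraction of events with Fisher value above F."""
    f = np.sort(fisher)
    eps = 1.0 - np.arange(len(f)) / len(f)  # survival fraction at each f
    mask = f > tail_start
    slope, intercept = np.polyfit(f[mask], np.log(eps[mask]), 1)
    return slope, intercept

def cut_for(eps_target, slope, intercept):
    """Fisher cut giving contamination eps_target under the fitted tail."""
    return (np.log(eps_target) - intercept) / slope

rng = np.random.default_rng(7)
fisher = rng.exponential(scale=1.0, size=200_000)  # toy proton Fisher values
slope, intercept = tail_fit(fisher, tail_start=2.0)
f_cut = cut_for(1e-14, slope, intercept)
```

For an exactly exponential tail the fitted slope recovers the decay constant, so the cut for $\varepsilon_p = 10^{-14}$ extrapolates far beyond the most extreme simulated event, which is precisely the regime the fit is meant to cover.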
\section{Estimate of sensitivity for all neutrino flavours}
\label{sec:resultsall}
Until this point, this work has focused exclusively on the contribution of electron neutrinos to the estimated event rate. By neglecting the muon and tau neutrino flavors and all anti-neutrinos, the estimate presented constitutes a lower limit on the number of neutrinos a ground-based gamma-ray array may be capable of detecting. The estimated event rate for all neutrino and anti-neutrino flavors presented here is obtained by taking advantage of the effective mass of the array for electron neutrinos computed in Section~\ref{sec:method} explicitly for charged-current (CC) and neutral-current (NC) interactions, denoted here as $\cc$ and $\nc$ respectively. Combining these quantities with the corresponding neutrino-air interaction properties allows us to conservatively estimate the number of expected neutrinos for all flavors and interaction channels. As we are considering astrophysical neutrinos, the expected numbers of electron, muon and tau neutrinos are assumed to be in the ratio 1:1:1 after oscillation over cosmological distances, with an equal amount of anti-neutrinos. What might change is the ability to distinguish a given species of neutrino-induced shower from the background, i.e. the identification efficiency $\varepsilon$ and hence the effective mass, which can be assessed through qualitative arguments about the characteristics of the neutrino interaction with the Earth's atmosphere.
Firstly, the effective mass of the array when accounting only for neutral-current interactions is expected to be the same for all neutrino flavors, and hence equal to that of $\nu_e$ NC interactions. All neutrino flavors produce the same type of hadronic shower in a NC interaction, carrying on average the same fraction of the neutrino energy. Moreover, the only difference in the Feynman diagrams responsible for the bulk of the cross-section is the neutrino mass, which can be considered negligible at the very high energies involved. As a consequence, for all neutrino flavors the expected event rate is assumed to be proportional to $\snc\,\nc$, with $\snc$ the NC-interaction cross-section.
For the case of the muon neutrino, the charged-current interaction will induce a hadronic shower and an energetic muon. A single muon is unlikely to be detected in a sparse array, so we will only consider the hadronic cascade. Again, given the extreme primary energies, the energy distribution of the secondaries arising from the hadronic vertex of the interaction is very similar to that of an electron-neutrino CC interaction (with an emerging fast electron) or of a neutral-current interaction. Hence, conservatively, we assume that the effective mass of the array for muon neutrinos in the CC channel is the same as that of electron neutrinos in the neutral-current channel estimated before. This yields an expected number of CC-interacting muon neutrinos proportional to $\scc\,\nc$, with $\scc$ the CC-interaction cross-section.
It should be noted again that this is a conservative assumption, as the muon produced in a CC interaction could radiate an energetic photon via bremsstrahlung leading to the production of an electromagnetic shower that would increase the detection probability.
The tau-neutrino charged-current interaction produces a hadronic cascade plus a high-energy tau. In the atmosphere, the tau lepton travels on average between $\sim 5$ m and $\sim 5$ km at energies between 100 TeV and 100 PeV before decaying. The decay of the tau can produce hadrons ($\sim 65\%$ of the time) or electrons ($\sim 17\%$ of the time), which lead to \emph{young} cascades of particles. Muons can also be produced in the decay ($\sim 17\%$ of the time), but these are essentially undetectable, as discussed before. In this work, we have assumed that only the hadronic particles directly emerging from the collision of the tau neutrino with the atmosphere produce a detectable shower, i.e. we neglect the decay of the $\tau$ lepton and assume that the effective mass of the detector is the same as in the case of neutral-current interactions, with the expected number of CC-interacting tau neutrinos being proportional to $\scc\,\nc$.
We stress that this is conservative and that a more accurate calculation of the number of expected tau neutrinos would be clearly above this estimate.
The assumptions above can be applied to anti-neutrinos $\bar\nu$, given the high energy of the interactions involved. The $\bar\nu$-air interaction properties will be similar, producing air showers with essentially the same general properties and hence similar \Sem and \Smu, the main parameters of this analysis. Additionally, above $100\,$TeV, the neutrino-air and anti-neutrino-air cross-sections are very close; nonetheless, we have used the exact values at each energy.
Consequently, the inclusion of anti-neutrinos approximately doubles the expected event rate for all neutrinos.
The total expected event rate would be additionally increased due to the resonant channel for the electron anti-neutrinos $\bar{\nu}_e$. Around $E_{\bar\nu} \sim 6.3\,$PeV, electron anti-neutrinos can interact with the air atomic electrons producing a real $W^-$ boson -- the so-called Glashow resonance. This resonance has in fact been observed by the IceCube neutrino observatory~\cite{IceCubeGlashow}, and represents an important contribution to the expected neutrino event rate around such energies.
In this case, the total number of expected $\bar\nu_e$-induced events can be assumed to be proportional to $\snc\,\nc + \scc\,\cc + \sigma_G\,M_{\bar\nu_e}(\mathrm{W})$, where $M_{\bar\nu_e}(\mathrm{W})$ denotes the effective mass for resonant anti-neutrino interactions and $\sigma_G(E_{\bar\nu})$ is the Glashow-resonance cross-section, a function of the anti-neutrino energy. The $W$ boson decays into hadronic particles or a lepton. Following the above considerations, $M_{\bar\nu_e}(\mathrm{W})$ can be approximated as
\begin{eqnarray}
M_{\bar\nu_e}(\mathrm{W}) &\simeq& \sfrac{1}{9} \cc + \sfrac{2}{3} \nc \nonumber \\
&& +\, \sfrac{1}{9} \left(BR_{\tau \rightarrow e} \cc + BR_{\tau\rightarrow\rm had} \nc \right),
\label{eq:MeffGlashow}
\end{eqnarray}
where we have used the approximation that the effective mass of the array for the electron produced in the decay of the $W$ is equal to $\cc$, while for hadronic final states it follows $\nc$. The fractions accompanying the effective masses in Eq.\,(\ref{eq:MeffGlashow}) account for the approximate branching ratios ($BR$) of the $W$ boson to electrons ($\sim 0.11$), hadrons ($\sim 0.68$) and $\tau$ leptons ($\sim 0.11$), with $BR_{\tau \rightarrow e}\sim 0.17$ and $BR_{\tau\rightarrow\mathrm{had}}\sim 0.65$ denoting the tau branching ratios into electrons and hadronic particles, respectively. The decay of the $W$ boson to a muon is neglected, since a single muon is assumed not to produce a detectable shower, as explained before.
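As an arithmetic cross-check of Eq.\,(\ref{eq:MeffGlashow}), the branching-ratio combination can be evaluated with the $1\,$PeV effective masses of Table~\ref{tab:EffMasses} (illustrative only, since the resonance actually sits at $6.3\,$PeV, where the effective masses differ):

```python
# CC and NC effective masses at 1 PeV [g sr] (Table 2).
m_cc, m_nc = 3.60e14, 2.27e14
# Approximate W branching ratios to e, hadrons, tau, and the tau decay
# fractions to electrons and hadrons.
br_e, br_had, br_tau = 1.0 / 9.0, 2.0 / 3.0, 1.0 / 9.0
br_tau_e, br_tau_had = 0.17, 0.65

m_w = (br_e * m_cc + br_had * m_nc
       + br_tau * (br_tau_e * m_cc + br_tau_had * m_nc))
```

Because the muon channel (and part of the tau decay width) is dropped, the result sits slightly below $\nc$, consistent with the conservative spirit of the approximation.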
With all the assumptions and approximations above, we have estimated the expected number of neutrinos per year, considering an extensive air shower array with an area of $1\,{\rm km^2}$ and a fill factor of $80\%$. This is shown in Fig.~\ref{fig:AllNeutrinoEventRates} as a function of neutrino energy and for the different neutrino flavors and channels. Accounting for the Glashow resonance of $\overline{\nu}_e$ has a noticeable impact on the total number of expected neutrinos in the energy region around $\approx 6\,$PeV. The integrated number of events per year above a given energy is also shown in Fig.\,\ref{fig:AllNeutrinoEventRates} as a red line. Integrating from $100\,$TeV up to 100 PeV, one would conservatively expect $\sim 2$ neutrino events per year. As discussed before, a more realistic array with a fill factor of $\sim 5\%$ would reduce the event rates by a factor $\lesssim 3$.
\section{Sensitivity to upward-going Electron Neutrinos}
\label{sec:resultsup}
We have also studied whether upward-going neutrino events could contribute to the estimated event rate in a ground-based gamma-ray array of WCDs. The AIRES framework was used to simulate the development of upward-going showers,
as the version of the CORSIKA code used throughout this work is unable to treat showers in dense homogeneous media such as the Earth's crust.
We simulated upward-going showers induced by electron neutrinos, although our conclusions below apply to any type of upward-going shower. Since the electron neutrino is not a default primary particle in AIRES, we obtained the secondary products of the $\nu_e$ interaction with CORSIKA and injected them into AIRES to obtain the longitudinal and lateral development of the shower underground. The composition of the Earth's crust is emulated in AIRES by setting the atmosphere's composition to match that of standard soil, which, according to~\cite{TUEROS2010380}, is characterised by $\rho=1.8\,{\rm g\, cm^{-3}}$ and an effective atomic number $Z=11$.
This simulation setup was used to simulate inclined and very inclined up-going showers, with $\theta$ ranging from $92^\circ$ to $120^\circ$, where the Earth is not opaque to neutrinos of PeV energies. We generated neutrinos with energy $E_\nu = 1\,{\rm PeV}$. The vertical depth of the first interaction took values between $2\,{\rm m}$ and $5\, {\rm m}$ below the observation level, as showers starting deeper were severely attenuated, while shallower ones were not sufficiently developed. For each set of conditions, $1000$ showers were simulated.
The average footprint of the showers was inferred for each combination of $\theta$ and vertical depth underground. An example is presented in Fig.\,\ref{fig:UpG80} for showers with $\theta=100^\circ$ initiated at a vertical depth of $3\,{\rm m}$.
As can be seen in Fig.\,\ref{fig:UpG80}, the small dimensions of the footprints produced (of the order of a few tens of ${\rm m^2}$ in all cases) make their detection at a typical gamma-ray observatory such as LHAASO very difficult, particularly in the sparse array. Detection would eventually be possible in a compact array of a gamma-ray observatory with a larger fill factor. For our nominal array with an $80\%$ fill factor, $\sim 50\%$ of the simulated events in the example shown in Fig.~\ref{fig:UpG80} have fewer than 5 triggered WCD stations, as seen in the inset panel. Even in this case, the involved effective areas would not be sufficient to perform a competitive measurement, since the shower has to be produced at less than $\sim 10$ m vertical depth below the array for it to develop before attenuating in the Earth. This limitation implies a small effective detection volume in comparison to other detection techniques, such as the observation of an emerging $\tau$ decay in the atmosphere~\cite{InclinedNusPAO}. We conclude that showers induced by up-going neutrinos do not contribute significantly to the estimated event rate in the PeV energy range explored in this work.
The Earth-skimming tau neutrino detection method consists of the observation of a shower induced by the decay of a tau lepton in the atmosphere. The tau is produced by a quasi-horizontal tau neutrino interacting in the Earth, with zenith angle between $\theta=90^\circ$ and typically $\theta\simeq 93^\circ$, corresponding to the zenith-angle range where the shower can trigger an array of detectors. At the energies of interest in this work, $\sim$ PeV, the decay length of a tau lepton is of the order of 50 m, and for this reason the production of a tau-induced shower would be around 10 times more likely at PeV energies than the generation of a more upward-going shower inside the Earth, which needs to be initiated between 2 and 5 m depth, as explained above. However, this is partly compensated by the smaller solid angle in which the shower can trigger the detector: $\sim 0.22~{\rm sr}$ for $\theta\in(90^\circ,92^\circ)$ compared to $\sim 2.92~{\rm sr}$ for $\theta\in(92^\circ,120^\circ)$. On the other hand, the tau-decay induced shower produced in the atmosphere generates a footprint that depends strongly on the exit angle, altitude of decay, and trigger conditions. One can roughly estimate a footprint of $\sim {\rm km}$ length on the array, which would be more efficiently detected than the small and narrow upward-going shower produced in the denser medium inside the Earth. As a result, the Earth-skimming technique would be, in principle, more efficient in relative terms than the detection of the upward-going showers discussed here. A more quantitative evaluation of the impact of the Earth-skimming tau neutrino channel on the total neutrino event rate requires a detailed simulation of the trigger efficiency of the EAS array to quasi-horizontal atmospheric showers, possibly considering the topography of the site, which is beyond the scope of this work. Our results in this respect should be regarded as conservative.
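The solid angles quoted above follow from $\Omega = 2\pi(\cos\theta_1 - \cos\theta_2)$ for a band in zenith angle; a quick numerical check:

```python
import math

def solid_angle_band(theta1_deg, theta2_deg):
    """Solid angle (sr) of the zenith-angle band between theta1 and theta2."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    return 2.0 * math.pi * (math.cos(t1) - math.cos(t2))

omega_skimming = solid_angle_band(90.0, 92.0)   # Earth-skimming band
omega_upgoing = solid_angle_band(92.0, 120.0)   # up-going band
```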
\section{Final remarks and Conclusions}
\label{sec:conclusions}
In this work, we have investigated the possibility of using gamma-ray wide field-of-view observatories to detect showers induced by astrophysical neutrinos in the 100 TeV to 100 PeV energy range. The discrimination from the overwhelming cosmic-ray-induced background is achieved through the detection of inclined showers and inspection of the balance between the electromagnetic and muonic content of the shower at the ground, two observables that are typically accessible in gamma-ray experiments and used for photon-hadron discrimination.
An end-to-end simulation procedure, emulating the detector response, was applied to electron neutrino events and conservatively extrapolated to the remaining neutrino and anti-neutrino species and interaction channels. The expected number of neutrinos observed through this method in an array with an effective area of $1\,{\rm km^2}$, for energies above $100\,$TeV, is around 2 per year. This is not a large number, particularly when compared with dedicated experiments working in the same energy range, such as IceCube, which sees a few tens of events per year. Nonetheless, in the context of multi-messenger science, and the pursuit of these events, it is not negligible either. Note that gamma-ray observatories are already operating, or will be in the near future, so these additional events come essentially for free.
Moreover, this estimate was made assuming a diffuse neutrino flux, implying that the detected neutrinos could be used to alert other experiments with a latency of a few minutes.
In this work, it was also demonstrated that, while being a very sensitive detection channel at very high energies, the use of upward-going events does not add much to the expected neutrino event rate, due to the reduced size of the shower footprint and the relatively shallow depths of neutrino interaction required for the shower developing underground to reach the array.
The number of expected neutrinos could benefit from the topography surrounding the experiments, such as mountains, as suggested in~\cite{mountains_auger, mountains_hawc}. These experiments are usually placed at high altitudes on plateaus at the foot of mountains. A shower whose reconstructed direction is compatible with emerging from inside a mountain is clean evidence of a neutrino-induced event, although the estimated rates are small.
Finally, this work aims to be a proof-of-concept, and more sophisticated analyses that could lead to higher counts are naturally envisaged. These analyses are experiment dependent, and this work shows that they are a compelling line of research to be pursued by $\mathrm{km}^2$-scale, ground-based gamma-ray observatories such as those pursuing PeV gamma-ray astronomy.
\section*{Acknowledgments}
We would like to thank Sofia Andringa and Enrique Zas for useful discussions and suggestions, and Ioana Mari\c{s} for carefully reading the manuscript.
This work has been financed by national funds through FCT - Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, I.P., under project PTDC/FIS-PAR/4300/2020. R.~C.\ is grateful for the financial support by OE - Portugal, FCT, I.P., under DL57/2016/cP1330/cT0002. This work has received financial support from Xunta de Galicia (Centro singular de investigaci\'on de Galicia accreditation 2019-2022), by European Union ERDF, by the ``Mar\'ia de Maeztu'' Units of Excellence program MDM-2016-0692 and the Spanish Research State Agency, and from Ministerio de Ciencia e Innovaci\'on PID2019-105544GB-I00 and RED2018-102661-T (RENATA).
\bibliography{references}%
\appendix
\section{Fits to the Proton Fisher cumulative distribution tail}
A few examples of the normalized cumulative Fisher-value distributions for proton showers within the region of interest are presented here. The formula of the exponential fit performed to the tail of each cumulative is given in the corresponding figure. The exponential fit is shown as a solid black line, and the shaded area represents its 1-sigma uncertainty. This fit was used to extrapolate to higher background rejection factors (smaller contamination fractions $\varepsilon_p$).
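The extrapolation amounts to a straight-line fit to the logarithm of the cumulative tail. A minimal sketch with synthetic data (the Fisher values and tail heights below are hypothetical, not taken from the actual proton simulations):

```python
import math

# Hypothetical tail of a normalized cumulative Fisher distribution: assume
# it decays exponentially, C(F) = A * exp(-b * F), beyond some Fisher value.
fisher_vals = [2.0, 2.2, 2.4, 2.6, 2.8, 3.0]
tail_vals = [0.05 * math.exp(-3.0 * (f - 2.0)) for f in fisher_vals]

# Least-squares straight-line fit to log C(F) = log A - b * F
xs, ys = fisher_vals, [math.log(c) for c in tail_vals]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = -sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
logA = ybar + b * xbar

def extrapolate(fisher_cut):
    """Contamination fraction eps_p extrapolated to a harder Fisher cut."""
    return math.exp(logA - b * fisher_cut)
```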
|
Title:
FERIA: Flat Envelope Model with Rotation and Infall under Angular Momentum Conservation |
Abstract: Radio observations of low-mass star formation in molecular spectral lines
have rapidly progressed since the advent of the Atacama Large
Millimeter/submillimeter Array (ALMA). The gas distribution and its kinematics
within a few 100 au around a Class 0-I protostar are spatially resolved,
and the region where a protostellar disk is being formed is now revealed in
detail. In such studies, it is essential to characterize the complex physical
structure around a protostar consisting of an infalling envelope, a
rotationally-supported disk, and an outflow. For this purpose, we have
developed a general-purpose computer code `{\tt FERIA}' (Flat Envelope model
with Rotation and Infall under Angular momentum conservation) generating the
image cube data based on the infalling-rotating envelope model and the
Keplerian disk model, both of which are often used in observational studies. In
this paper, we present the description and the usage manual of {\tt FERIA} and
summarize caveats in actual applications. This program outputs cube {\tt FITS}
files, which can be used for direct comparison with observations. It can also
be used to generate mock data for machine/deep learning. Examples of these
applications are described and discussed to demonstrate how the model analyses
work with actual observational data.
| https://export.arxiv.org/pdf/2208.04581 | null |
Title:
Constraining Axions with ZTF J1901+1458 |
Abstract: The axion-nucleon coupling enables the production of axions through the decay
of excited ${}^{57}\textrm{Fe}$ isotopes, and axions produced in the Sun
through this process are often a target of helioscope searches. We show for the
first time that hot, highly magnetic white dwarfs such as ZTF J1901+1458 are a
viable target to search for the X-ray signature of axions that were produced by
the ${}^{57}\textrm{Fe}$ transition in the core and then converted to photons
in the magnetosphere. We calculate that a 100 ks observation of ZTF J1901+1458
with NuSTAR would constrain the coupling of axions to nucleons and photons at a
level below the bounds of both current and future planned helioscopes.
| https://export.arxiv.org/pdf/2208.00405 |
\title{Constraining Axions with ZTF~J1901+1458}%
\author{Leesa Fleury}%
\email{lfleury@phas.ubc.ca}
\affiliation{%
Department of Physics and Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada
}%
\author{Ilaria Caiazzo}%
\email{ilariac@caltech.edu}
\affiliation{%
TAPIR, Walter Burke Institute for Theoretical Physics, Mail Code 350-17, Caltech, Pasadena, CA 91125, USA
}%
\author{Jeremy Heyl}%
\email{heyl@phas.ubc.ca}
\affiliation{%
Department of Physics and Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada
}%
\date{\today}%
\section{\label{sec:intro}Introduction}
The X-ray emission from magnetic hot white dwarfs may reveal evidence for axions or axion-like particles \cite{2019PhRvL.123f1104D,2021arXiv210412772D}, which have been a major focus of studies to go beyond the Standard Model and to explain dark matter. The QCD axion, proposed to solve the strong CP problem \cite{Peccei:1977hh,Peccei:1977ur,Weinberg:1977ma,Wilczek:1977pj}, is a well motivated addition to the Standard Model.
Axion-like particles, which are pseudo-scalar particles with properties similar to the QCD axion but that do not necessarily relate to the strong CP problem,
also arise naturally in many other extensions to the Standard Model, such as compactified string theories \cite{Witten:1984dg,Conlon:2006tq,Arvanitaki:2009fg,Acharya:2010zx,Higaki:2011me,Cicoli:2012sz,Demirtas:2018akl,Mehta:2021pwf}.
Interactions of axions with photons and nucleons are generic features of QCD axion models, including the benchmark KSVZ \citep{Kim:1979if,Shifman:1979if} and DFSZ \citep{Dine:1981rt,Zhitnitsky:1980tq} models, and are common features of axion-like particle models (see e.g. \citep{DiLuzio:2020wdo} for a recent review).
Many axion models also include a coupling of axions to electrons, including the DFSZ model.
Axions couple to photons through the interaction term $\mathcal{L} \supset - g_{a\gamma\gamma} a F \tilde{F}$, where $g_{a\gamma\gamma}$ is the axion-photon coupling constant, $a$ is the axion field, and $F$ is the electromagnetic field strength tensor.
The axion interaction with a fermion species occurs through the operator $\mathcal{L} \supset - i g_{aff} a \bar{f} \gamma_5 f$, where $g_{aff}$ is the axion-fermion coupling constant and $f$ is the fermion field, such as for electrons, protons, or neutrons (i.e. $f=e,p,n$).
An effective coupling constant $g_{aNN}^\mathrm{eff}$ can also be defined for the axion interaction with the nucleon doublet field $N=(p,n)^T$ in terms of the axion-proton and axion-neutron couplings.
We define this coupling as $g_{aNN}^\mathrm{eff} = 0.16 g_{app} + 1.16 g_{ann}$ following \cite{2022EPJC...82..120D}.
The axion couplings to photons and fermions each have a model-dependent relation to the axion mass, $m_a$.
To summarize, typical values for the QCD axion couplings are \citep{2007JPhA...40.6607R}
\begin{eqnarray*}
g_{aNN}^{\textrm{\scriptsize eff}} &=& 1.6 \times 10^{-8} \left (\frac{m_a}{1\textrm{eV}} \right ) ~~\textrm{(KSVZ)}
\label{eq:gaNNKSVZ} \\
g_{a\gamma\gamma} &=& 4 \times 10^{-10} \left ( \frac{m_a}{1\textrm{eV}} \right ) \textrm{GeV}^{-1} ~\textrm{(KSVZ)}
\label{eq:gaggKSVZ} \\
g_{aee} &\approx& 3 \times 10^{-11} \left ( \frac{m_a}{1\textrm{eV}} \right )~~\textrm{(DFSZ)},
\label{eq:gaggDFSZ}
\end{eqnarray*}
but axion-like particles can have couplings much larger than these \citep{DiLuzio:2020wdo}.
The axion-nucleon coupling for the DFSZ model of the QCD axion can also be up to a factor of $\sim 4$ larger than for the KSVZ model \cite{2022EPJC...82..120D}.
For both benchmark QCD axion models, the axion-nucleon coupling is typically three orders of magnitude larger than the DFSZ axion-electron coupling.
As X-ray observations of white dwarfs searching for evidence of axions have focused on axions produced through the axion-electron interaction, the greater strength of the axion-nucleon interaction highlights the potential power of similar X-ray searches for evidence of axions produced through nuclear interactions.
Astrophysical observations are often used to search for signatures indicative of the various possible axion interactions with Standard Model particles and to constrain the axion coupling constants.
The axion-nucleon coupling has been probed indirectly through the observed neutrino emission from SN 1987A \cite{Turner:1987by,Burrows:1988ah,Raffelt:1987yt,Raffelt:1990yz,Carenza:2019pxu,Carenza:2020cis,Fischer:2021jfm} and the cooling of neutron stars \cite{Keller:2012yr,Sedrakian:2015krq,Hamaguchi:2018oqw,Beznogov:2018fda,Sedrakian:2018kdm,Leinson:2014ioa}.
Helioscope experiments, which search for axions produced in the Sun, have also been used to constrain the axion-nucleon coupling.
Axions can be produced in the Sun through the decay of excited nuclear states such as the first excited state of \fe.
The CERN Axion Solar Telescope (CAST \citep{Arik:2013nya,Arik:2015cjv}) has searched for axions produced in this way, setting current constraints for the axion-nucleon coupling \cite{CAST:2009jdc,2022EPJC...82..120D}.
A similar search has been proposed for the future planned International Axion Observatory (IAXO \citep{2022EPJC...82..120D,BabyIAXO:2020mzw,IAXO:2019mpb}), along with improved calculations for the axion flux from the \fe\ transition \cite{2022EPJC...82..120D} using updated nuclear matrix elements \cite{Avignone:2017ylv}.
White dwarfs are another popular target of searches for axions produced from astrophysical sources and have typically been used to probe the axion-electron coupling, which enables the production of axions within a white dwarf through axion bremsstrahlung.
This extra source of energy loss would modify the cooling of white dwarfs and thus the white dwarf luminosity function, which in comparison to the observed luminosity function has been used to constrain the axion-electron coupling \cite{2015ApJ...809..141H,2016ApJ...821...27G,Isern:2008nt,Isern:2008fs,Bertolami:2014wua}.
Furthermore, in the strong field surrounding magnetic white dwarfs, axions can convert to photons and vice versa.
Axions produced by bremsstrahlung which then convert to photons in the surrounding magnetic field produce a black-body-like spectrum in the X-ray, which was the focus of a 100-ks observation of the magnetic white dwarf RE~J0317-853 \citep{2021arXiv210412772D} with \chandra.
In this work, we show that hot, highly magnetized white dwarfs are also ideal targets to probe the axion-nucleon coupling via the \fe\ transition, and can in fact give bounds below those of both current and planned future helioscope constraints of axions produced in the Sun through nuclear transitions.
If the temperature of the white dwarf is sufficiently high, low-lying nuclear states may be excited within the star, and these states may decay through the emission of an axion.
Among all of the low-lying nuclear states, the excited state of \fe\ at 14.4~keV stands out through the combination of low energy and relatively large abundance within the star \cite{SolarElementalAbundances}.
In general, these axions would stream out of the star unimpeded and unnoticed. However, in the case of a strongly magnetized white dwarf like \ztf\ \cite{2021Natur.595...39C}, these axions have a chance to transform into X-ray photons at 14.4~keV as they pass through the magnetosphere of the white dwarf.
White dwarfs are ideal targets for measurements of this signature because their thermal emission in the hard X-rays is negligible, and therefore they provide the chance for a very clean detection.
We demonstrate the potential constraining power of X-ray searches for the \fe\ axion signal from magnetic white dwarfs by calculating the projected constraints for the axion-nucleon and axion-photon couplings that could be obtained with a 100~ks observation of \ztf\ by \nustar.
\section{\label{sec:calculations}Calculations}
The spectrum of axions produced through the \fe\ nuclear transition is a narrow peak at the nuclear excitation energy, $E^* = 14.4$~keV.
The resultant photon number flux at Earth induced by axions being produced via the nuclear transition process in the core of a white dwarf and then converting to photons in the magnetosphere is given by \cite{2019PhRvL.123f1104D}
\begin{equation} \label{eq:flux}
\Phi_{\gamma} = \mathcal{N}_a \ M_\mathrm{WD} \times p_{a \rightarrow \gamma} \times \frac{1}{4 \pi d_\mathrm{WD}^2}
\quad ,
\end{equation}
where $\mathcal{N}_a$ is the emission rate per unit mass of axions produced in the white dwarf interior (which we consider to be isothermal),
$M_\mathrm{WD}$ is the white dwarf mass, $p_{a \rightarrow \gamma}$ is the probability of an axion with energy $E^*$ converting into a photon in the magnetosphere of the white dwarf, and $d_\mathrm{WD}$ is the distance to the white dwarf from the point of observation.
The product $\mathcal{N}_a M_\mathrm{WD}$ is the total number of axions produced by the white dwarf per unit time, and multiplying this quantity by $E^*$ yields the luminosity of axion production under the assumption of an isothermal white dwarf core.
Modeling the core of the white dwarf as isothermal, the emission rate per unit mass of axions produced from the \fe\ nuclear transition in the white dwarf core is \cite{2022EPJC...82..120D}
\begin{equation} \label{eq:La}
\mathcal{N}_a = \mathcal{N} \ \omega_1(T_c) \ \frac{1}{\tau_0} \frac{1}{\left(1+\alpha\right)} \ \frac{\Gamma_a}{\Gamma_\gamma}
\quad ,
\end{equation}
where $\mathcal{N}$ is the \fe\ number density per unit mass of stellar matter, $\omega_1$ is the occupation number of the first excited state at the temperature $T_c$ of the white dwarf core, $\tau_0$ is the lifetime of the excited state, $\alpha$ is the internal conversion coefficient, and $\Gamma_a / \Gamma_\gamma$ is the branching ratio of axion emission relative to photon emission.
The axion emission rate depends on the axion-nucleon coupling through the term $\Gamma_a / \Gamma_\gamma$.
For axions produced by the \fe\ transition, the axion flux from the Sun has recently been calculated by \cite{2022EPJC...82..120D} using the updated nuclear matrix elements of \cite{Avignone:2017ylv}.
This yielded an updated axion-to-photon branching ratio for ultrarelativistic axions of
\begin{equation}
\frac{\Gamma_a}{\Gamma_\gamma} = 2.32 \ \left( g_{aNN}^\mathrm{eff} \right)^2 \quad ,
\end{equation}
where $g_{aNN}^\mathrm{eff}$ is the effective axion-nucleon coupling constant, defined as $g_{aNN}^\mathrm{eff} = 0.16 g_{app} + 1.16 g_{ann}$ in terms of the axion couplings to protons and neutrons \cite{2022EPJC...82..120D}.
The core temperature of the white dwarf is another important parameter for the calculation of $\mathcal{N}_a$, as the occupation number $\omega_1$ is temperature-dependent.
We determined the core temperature of \ztf\ using the published photometry and fitting technique of \cite{2021Natur.595...39C}, with $T_c$ used as one of the free parameters instead of the effective temperature and implementing the relation between the core temperature and photon luminosity \cite{2019PhRvL.123f1104D}
\begin{equation}
k T_c \simeq \left( 0.3 \ \mathrm{keV} \right) \left( \frac{L_\gamma}{10^{-4} \ L_\odot} \right)^{0.4}
\quad ,
\end{equation}
where $k$ is the Boltzmann constant.
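This relation is straightforward to evaluate and invert; for instance, one can recover the photon luminosity implied by a given core temperature (a sketch; the luminosity value is derived from the relation above, not an independent measurement):

```python
def kTc_keV(L_over_Lsun_val):
    """Core temperature in keV from the photon luminosity in solar units."""
    return 0.3 * (L_over_Lsun_val / 1e-4) ** 0.4

def L_over_Lsun(kTc):
    """Inverse relation: photon luminosity in solar units from kT_c in keV."""
    return 1e-4 * (kTc / 0.3) ** 2.5

L_ztf = L_over_Lsun(2.85)  # luminosity implied by the fitted core temperature
```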
The results of the joint fit for $T_c$ along with the radius of the white dwarf, $R_*$, and colour excess due to interstellar reddening, $E(B-V)$, are shown in Fig.~\ref{fig:core_temp}.
Based on these results, we use a core temperature of
$k T_c = 2.85^{+1.58}_{-0.63}~\textrm{keV}$
for our calculation of the axion emission rate.
The thermal occupation number of an excited state with excitation energy $E^*$ as a function of temperature $T$ is
$\omega_1 = (2 J_1 + 1) e^{-E^*/kT} \ / \ [(2 J_0 + 1) + (2 J_1 + 1) e^{-E^*/kT}]$,
where $J_0$ and $J_1$ are the angular momenta of the ground and excited state, respectively \cite{2022EPJC...82..120D}.
These angular momenta are $J_0=1/2$ and $J_1 = 3/2$ for the \fe\ ground and first excited state, respectively \cite{Roehlsberger:2004}, giving a thermal occupation number of $\omega_1 = 2 e^{-E^*/kT} / \left(1 + 2 e^{-E^*/kT}\right)$.
This further simplifies to approximately $\omega_1 \sim 2 e^{-E^*/kT_c}$ for the core temperature of \ztf\ and an isothermal core.
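At these temperatures the exact two-level expression and its Boltzmann approximation agree to better than two percent; a quick numerical check:

```python
import math

E_STAR_KEV = 14.4  # 57Fe excitation energy

def omega1(kT_keV):
    """Exact thermal occupation of the 14.4 keV level (J0 = 1/2, J1 = 3/2)."""
    x = math.exp(-E_STAR_KEV / kT_keV)
    return 2.0 * x / (1.0 + 2.0 * x)

def omega1_boltzmann(kT_keV):
    """Boltzmann-suppressed approximation, omega_1 ~ 2 exp(-E*/kT)."""
    return 2.0 * math.exp(-E_STAR_KEV / kT_keV)

w_exact = omega1(2.85)             # at the fitted core temperature
w_approx = omega1_boltzmann(2.85)
```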
A large core temperature can also broaden the width of the expected axion signal.
In the case of the Sun, the energy of the axion is broadened by the thermal motion of the nuclei, yielding a width of a few eV \cite{2022EPJC...82..120D}.
As the temperature in the core of the white dwarf is larger than in the Sun, so is the thermal broadening (5~eV for ZTF~J1901+1458); however, the varying gravitational potential through the core has a larger effect through the gravitational redshift, which is about 280~km/s at the surface and 980~km/s at the centre, yielding a width of 33~eV that dominates over the thermal effects.
These broadening effects are negligible compared to the spectral resolution of \nustar\ ($\approx 400$~eV), so we work in terms of the total flux rather than the spectral flux and approximate that all of the axions are emitted with the same energy $E^*$.
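The gravitational width quoted here follows from the spread in redshift (expressed as a velocity) between the centre and the surface; a one-line check:

```python
E_STAR_EV = 14.4e3   # 57Fe line energy in eV
C_KM_S = 2.998e5     # speed of light in km/s

# Gravitational redshift velocities at the surface and at the centre
v_surface_km_s = 280.0
v_centre_km_s = 980.0

# Spread in redshifted line energy across the isothermal core
width_eV = (v_centre_km_s - v_surface_km_s) / C_KM_S * E_STAR_EV
```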
The values for all of the other parameters used in the calculation of $\mathcal{N}_a$ are taken from the literature.
For the parameters characterizing the first excited state of \fe, we use values from \cite{Roehlsberger:2004} of $\tau_0 = 141~\textrm{ns}$ and $\alpha = 8.56$.
To get the number density of \fe\ nuclei, we use the abundances reported by \cite{SolarElementalAbundances}.
We use a proto-solar hydrogen mass fraction of 0.71 and
a number fraction of \fe\ nuclei relative to protons of $7.34 \times 10^{-7}$ (see Table 9 of \cite{SolarElementalAbundances}).
The corresponding \fe\ number density per unit mass is $\mathcal{N} = 6.24 \times 10^{50}~M_\odot^{-1}$.
Furthermore, we performed stellar evolution simulations using Modules for Experiments in Stellar Astrophysics (MESA \cite{Paxton2011,Paxton2013,Paxton2015,Paxton2018,Paxton2019}) to verify that stellar evolution processes do not alter the iron abundance in the stellar core.
Finally, for the white dwarf parameters needed in the rest of the flux calculation, we use the values for \ztf\ reported in \cite{2021Natur.595...39C}, for which the mass is $M_\mathrm{WD} = 1.346\pm0.019~M_\odot$ and the distance is $d_\mathrm{WD} = 41.4\pm0.1$~pc.
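Putting the emission-rate expression together with the parameter values quoted above gives the total axion production rate of the star; a sketch using an arbitrary reference value for $g_{aNN}^\mathrm{eff}$ (illustrative, not a fitted number):

```python
import math

# 57Fe level data and ZTF J1901+1458 parameters quoted in the text
E_STAR_KEV = 14.4      # excitation energy
TAU0_S = 141e-9        # lifetime of the excited state
ALPHA_IC = 8.56        # internal conversion coefficient
N_PER_MSUN = 6.24e50   # 57Fe nuclei per solar mass of stellar matter
M_WD_MSUN = 1.346      # white dwarf mass
KTC_KEV = 2.85         # core temperature

g_aNN = 1e-9           # reference effective axion-nucleon coupling (illustrative)

x = math.exp(-E_STAR_KEV / KTC_KEV)
omega1 = 2.0 * x / (1.0 + 2.0 * x)    # thermal occupation of the 14.4 keV level
branching = 2.32 * g_aNN ** 2         # Gamma_a / Gamma_gamma

# Emission rate per unit mass, and total axion production rate of the star
N_a = N_PER_MSUN * omega1 * branching / (TAU0_S * (1.0 + ALPHA_IC))
rate_total = N_a * M_WD_MSUN          # axions per second
```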
To determine the observable photon flux $\Phi_{\gamma}$ that would be induced by axions produced at the rate $\mathcal{N}_a$ in the white dwarf, we must also calculate the probability of the axions converting into photons as they propagate outward through the magnetic field surrounding the white dwarf.
Under the approximation that the axions travel along radial trajectories (relative to the center of the white dwarf), the propagation of the axion-photon field is described by the equations \cite{Raffelt:1987im}
\begin{equation}
\left[ i \partial_r + E +
\begin{pmatrix}
\Delta_\parallel & \Delta_B\\
\Delta_B & \Delta_a
\end{pmatrix}
\right]
\begin{pmatrix} A_\parallel \\ a \end{pmatrix}
= 0
\quad ,
\label{eq:axion-photon}
\end{equation}
\begin{align*}
&\Delta_{\parallel}(r) = \left(7/2\right) \left(\alpha_\mathrm{EM} / 45\pi \right) E \left[ B(r) / B_\mathrm{crit} \right]^2 \sin^2\Theta \\
&\Delta_B(r) = (1/2) \ g_{a\gamma\gamma} \ B(r) \sin\Theta \\
&\Delta_a = - m_a^2 / (2 E)
\quad ,
\end{align*}
where $r$ is the radial component, $E \approx E^*$ is the axion energy, $a(r)$ is the axion field,
$A_{\parallel}(r)$ is the component of the electromagnetic vector potential that is parallel to the external magnetic field (in the plane whose normal vector is aligned with the direction of propagation),
$\alpha_\mathrm{EM} = 1 / 137$ is the electromagnetic fine structure constant,
$B_\mathrm{crit} = 4.414 \times 10^{13}$~G is the quantum electrodynamics critical field strength,
$\Theta$ is the angle between the magnetic field and the radial propagation direction, and
$B(r)$ is the magnetic field strength at radius $r$.
For details on the origin and solution of the axion-photon propagation equations for magnetic stars, see e.g. \cite{Raffelt:1987im,2006PhRvD..74l3003L}.
To calculate the probability of an axion converting to a photon in the external white dwarf magnetic field, we solve Eq.~\ref{eq:axion-photon} numerically for an initial pure axion state.
The probability $p_{a \rightarrow \gamma}$ is given by the squared magnitude of the asymptotic solution for $A_\parallel$.
For this calculation, we model the external magnetic field of the white dwarf as a magnetic dipole, $B(r) = B_0 \left(R_\mathrm{WD} / r \right)^3$, where $B_0$ is the magnetic field at the surface of the white dwarf and $R_\mathrm{WD}$ is the white dwarf radius.
For this form of the magnetic field, $\sin\Theta$ has a fixed value along the radial trajectory which we take to be unity.
The values of the relevant magnetic field parameters for \ztf\ are $B_0 \sim 800$~MG and $R_\mathrm{WD} = 2,140^{+160}_{-230}$~km \cite{2021Natur.595...39C}.
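The propagation equation above can be integrated numerically. Below is a minimal pure-Python sketch (no external libraries): it works in natural units ($1~\mathrm{G} \approx 1.95\times10^{-2}~\mathrm{eV}^2$, $1~\mathrm{km} \approx 5.07\times10^{9}~\mathrm{eV}^{-1}$), drops the common phase $E$, takes $\sin\Theta = 1$ and a massless axion, and assumes an illustrative coupling $g_{a\gamma\gamma} = 10^{-12}~\mathrm{GeV}^{-1}$; it is a demonstration of the method, not the production calculation of this work.

```python
import math

# Natural-unit conversions (hbar = c = 1)
G_TO_EV2 = 1.95e-2        # 1 gauss in eV^2
KM_TO_INV_EV = 5.068e9    # 1 km in eV^-1

E = 14.4e3                            # axion energy in eV
g_agg = 1e-12 * 1e-9                  # illustrative coupling, GeV^-1 -> eV^-1
B0 = 8.0e8 * G_TO_EV2                 # 800 MG surface field in eV^2
R = 2.1e3 * KM_TO_INV_EV              # stellar radius in eV^-1
B_CRIT = 4.414e13 * G_TO_EV2          # QED critical field in eV^2
ALPHA_EM = 1.0 / 137.0

def deltas(r):
    """Mixing-matrix entries for a dipole field, sin(Theta) = 1, m_a = 0."""
    B = B0 * (R / r) ** 3
    d_par = 3.5 * (ALPHA_EM / (45.0 * math.pi)) * E * (B / B_CRIT) ** 2
    d_B = 0.5 * g_agg * B
    return d_par, d_B

def deriv(r, psi):
    """d psi / dr = i M psi, psi = (A_parallel, a), common phase E dropped."""
    A, a = psi
    d_par, d_B = deltas(r)
    return (1j * (d_par * A + d_B * a), 1j * d_B * A)

# Fixed-order RK4 with a step limited by the local oscillation phase
r, psi = R, (0j, 1.0 + 0j)            # pure axion state at the surface
r_max = 30.0 * R
while r < r_max:
    d_par, d_B = deltas(r)
    h = min(0.05 / max(d_par, abs(d_B)), 0.5 * R)
    last = r + h >= r_max
    if last:
        h = r_max - r
    k1 = deriv(r, psi)
    k2 = deriv(r + h / 2, tuple(p + h / 2 * k for p, k in zip(psi, k1)))
    k3 = deriv(r + h / 2, tuple(p + h / 2 * k for p, k in zip(psi, k2)))
    k4 = deriv(r + h, tuple(p + h * k for p, k in zip(psi, k3)))
    psi = tuple(p + h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
                for p, a1, a2, a3, a4 in zip(psi, k1, k2, k3, k4))
    r += h
    if last:
        break

p_conv = abs(psi[0]) ** 2             # axion-to-photon conversion probability
```

The evolution is unitary, so $|A_\parallel|^2 + |a|^2$ should remain 1 to within the integration error, which provides a built-in sanity check on the step-size choice.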
\begin{table}
\centering
\begin{tabular}{l | l}
Parameter \ & \ Value \\
\hline
$E^*$ & \ 14.4~keV \\
$J_0$ & \ 1/2 \\
$J_1$ & \ 3/2 \\
$\tau_0$ & \ 141~ns \\
$\alpha$ & \ 8.56 \\
$\mathcal{N}$ & \ $6.24 \times 10^{50}~M_\odot^{-1}$ \\
$M_\mathrm{WD}$ & \ 1.34~$M_\odot$ \\
$d_\mathrm{WD}$ & \ 41.44~pc \\
$B_0$ & \ 800~MG \\
$R_\mathrm{WD}$ & \ 2,100~km \\
$k T_c$ & \ $2.85^{+1.58}_{-0.63}$~keV \\
\end{tabular}
\caption{Parameter values used to calculate the X-ray flux induced by axions produced in the \fe\ nuclear transition process for \ztf.
The properties of the first excited state of \fe\ come from \cite{Roehlsberger:2004}.
The abundances used to determine $\mathcal{N}$ come from \cite{SolarElementalAbundances}.
The \ztf\ white dwarf parameters come from \cite{2021Natur.595...39C}, except for $T_c$ which is calculated in this work.}
\label{tab:params}
\end{table}
The key parameter values used to calculate the X-ray photon flux at Earth are summarized in Table~\ref{tab:params}.
The observable photon flux induced by the \fe\ transition depends on both the axion-nucleon coupling and the axion-photon coupling.
The emission rate of axions produced by the \fe\ nuclear transition in the white dwarf interior goes as $(g_{aNN}^\mathrm{eff})^2$, while the probability of the axions converting into photons in the external magnetic field goes as $(g_{a\gamma\gamma})^2$.
Thus, observations of the hard X-ray emission from magnetic white dwarfs can constrain the product of the couplings: $g_{aNN}^\mathrm{eff} g_{a\gamma\gamma}$.
Furthermore, the probability of an axion converting to a photon depends on the axion mass, so the constraints on $g_{aNN}^\mathrm{eff} g_{a\gamma\gamma}$ from X-ray observations will be a function of axion mass, which we evaluate numerically over a grid of axion mass values.
We calculate the constraints that could be obtained for $g_{aNN}^\mathrm{eff} g_{a\gamma\gamma}$ from \nustar\ observations with a total exposure time of 100~ks.
Using XSPEC simulations, we determined the minimum flux for a three-sigma detection of a narrow (width of 0.03~keV), Gaussian spectral line at 14.4~keV above the background (30-arcsecond extraction region and one arcminute off-axis) to be $1.6\times 10^{-6}$~photons~s$^{-1}$~cm$^{-2}$. In the background, we also included the thermal emission from the white dwarf atmosphere modelled as a blackbody at a temperature $k T_\textrm{eff}=3.88$~eV with a radius of 2,100~km.
For a given axion mass, the value of $g_{aNN}^{\textrm{\scriptsize eff}} g_{a\gamma\gamma}$ for which the photon number flux given by Eq.~\ref{eq:flux} equals (or exceeds) the detector threshold sets the constraint.
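Because $\Phi_\gamma$ scales as $(g_{aNN}^\mathrm{eff} g_{a\gamma\gamma})^2$, the limit at each axion mass follows from a single reference evaluation of the flux; a sketch with hypothetical reference numbers (not values from this work):

```python
import math

PHI_THRESHOLD = 1.6e-6   # 3-sigma NuSTAR line-detection threshold, photons/s/cm^2

def coupling_limit(product_ref, phi_ref):
    """Upper limit on g_aNN^eff * g_agamgam, given the flux phi_ref predicted
    at a reference coupling product product_ref (the flux scales as the
    square of the product)."""
    return product_ref * math.sqrt(PHI_THRESHOLD / phi_ref)

# Hypothetical example: if a reference product of 1e-19 GeV^-1 predicted a
# flux of 1e-4 photons/s/cm^2, the resulting limit would be:
limit = coupling_limit(1e-19, 1e-4)
```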
\section{\label{sec:results}Results}
The potential \nustar\ signature of axion production through nuclear processes is a narrow emission line at 14.4~keV (the excitation energy of the \fe\ nucleus).
The projected axion constraints that would be obtained from a 100~ks \nustar\ observation of the white dwarf \ztf\ are depicted in Fig.~\ref{fig:constraint}, where they are compared to constraints from current and future observatories.
These constraints would be more stringent than the existing constraints from CAST for all axion masses, and, for small axion masses, would even be an order of magnitude more stringent than those expected from the future proposed IAXO.
We show the limits for the four different configurations of both the proposed intermediate-stage BabyIAXO and the proposed fully operational IAXO (see Tab.~I of Ref.~\cite{2022EPJC...82..120D} for a summary of these configurations).
In addition to constraining $g_{aNN}^\mathrm{eff} g_{a\gamma\gamma}$, X-ray observations of \ztf\ could also constrain $g_{aee} g_{a\gamma\gamma}$ as white dwarfs can produce axions through electron bremsstrahlung, which yields a blackbody-like spectrum of axions.
This was the focus of a 100-ks observation of a cooler magnetic white dwarf, RE~J0317-853 \citep{2021arXiv210412772D}, with \chandra.
These axions are typically produced at lower energies (near the temperature of the core at a few keV), so the resulting X-rays lie squarely in the energy range probed by \chandra, but constraints could be derived from \nustar\ observations as well.
\ztf\ is a prime target to search for axion signatures in the X-rays because it is one of the hottest and most strongly magnetized white dwarfs known.
Both properties increase the predicted strength of the \fe\ nuclear transition line and bremsstrahlung emission, and therefore the chance of detection, or the significance of the constraints in case of a non-detection.
Recently published photometry and spectroscopy \citep{2021Natur.595...39C} yield somewhat broad constraints on the core temperature of the white dwarf (see Fig.~\ref{fig:core_temp}) that result in a broad range for the implied constraints on the axion (the shadowed blue area in Fig.~\ref{fig:constraint}). Fortunately, follow-up ultraviolet spectroscopy observations are scheduled for the current HST cycle \citep{2021hst..prop16753C} that should provide stronger constraints on the effective temperature and therefore the core temperature of the white dwarf, which would narrow the band of constraints that could be achieved with \nustar\ observations.
\section{\label{sec:conclusions}Conclusions}
White dwarfs are a popular target of indirect searches for axions.
While observations of white dwarfs have been used to constrain the coupling of axions to electrons, white dwarfs have not previously been identified as a target for probing the axion coupling to nucleons.
In this work, we have shown for the first time that observations of white dwarfs can be used to probe the axion-nucleon coupling, through searches for the X-ray signal from hot, highly magnetized white dwarfs that would be induced by axions being produced in the core via the nuclear transition of the first excited state of \fe\ and then converting into photons in the magnetosphere.
The recently discovered white dwarf \ztf\ is a compelling target to search for an X-ray signal arising from axions produced via the \fe\ transition.
We have shown that a 100~ks observation of \ztf\ by \nustar\ would constrain the coupling of axions with nucleons and photons at a level below current constraints for all axion masses and below the constraints that would be obtained by planned future terrestrial experiments at small masses.
This would provide a dramatic improvement in our knowledge of these particles that are critical to our understanding of the Standard Model and possibly dark matter as well.
\begin{acknowledgments}
This work has been supported by the Natural Sciences and Engineering Research Council of Canada through the Discovery Grants program and Compute Canada. I.C. is a Sherman Fairchild Fellow at Caltech and thanks the Burke Institute at Caltech for supporting her research.
\end{acknowledgments}
\bibliography{main}
|
Title:
Subspace identification of low-dimensional Structural-Thermal-Optical-Performance (STOP) models of reflective optics |
Abstract: In this paper, we investigate the feasibility of using subspace system
identification techniques for estimating transient Structural-Thermal-Optical
Performance (STOP) models of reflective optics. As a test case, we use a
Newtonian telescope structure. This work is motivated by the need for the
development of model-based data-driven techniques for prediction, estimation,
and control of thermal effects and thermally-induced wavefront aberrations in
optical systems, such as ground and space telescopes, optical instruments
operating in harsh environments, optical lithography machines, and optical
components of high-power laser systems. We estimate and validate a state-space
model of a transient STOP dynamics. First, we model the system in COMSOL
Multiphysics. Then, we use LiveLink for MATLAB software module to export the
wavefront aberrations data from COMSOL to MATLAB. This data is used to test the
subspace identification method that is implemented in Python. One of the main
challenges in modeling and estimation of STOP models is that they are
inherently large-dimensional. The large-scale nature of STOP models originates
from the coupling of optical, thermal, and structural phenomena and physical
processes. Our results show that large-dimensional STOP dynamics of the
considered optical system can be accurately estimated by low-dimensional
state-space models. Due to their low-dimensional nature and state-space forms,
these models can effectively be used for the prediction, estimation, and
control of thermally-induced wavefront aberrations. The developed MATLAB,
COMSOL, and Python codes are available online.
| https://export.arxiv.org/pdf/2208.02333 |
\keywords{Adaptive optics, structural-thermal-optical-performance (STOP) models, system identification, model-based control, telescopes}
\section{INTRODUCTION}
\label{sec:intro} %
Thermally-induced mechanical deformations, wavefront aberrations, and large focal shifts can negatively affect performance and significantly limit the resolution of both refractive and reflective optical systems. For example, thermal phenomena and thermally-induced aberrations can limit the achievable resolution and performance of optical lithography systems~\cite{choi2013lens,ravensbergen2013deformable,haber2013predictive,zhao2018active,habets2016multi,bikcora2014thermal,heertjes2020control,haber2013identification,bikcora2016parameter,bikcora2012lens},
space and ground telescopes~\cite{yoder2017opto,holzlohner2022structural,brooks2022precision,havey2019challenges,segato2011method,brooks2017predictive,zhang2020optimization,stahl2020advanced,stahl2020predictive,banyal2013opto,buleri2019structural,blaurock2005structural,gu2019thermal},
gravitational wave detectors~\cite{loriette2003absorption,zhao2006compensation,ramette2016analytical}, high power lasers~\cite{lyu2021stop,schmidt2019energy,abt2008temporal}, and other optical systems~\cite{turella2019structural,koppen2018topology,nordera2021methodology,li2022multilayer}. In the case of refractive optical systems consisting of lenses, absorbed thermal energy and non-uniform temperature distributions across optical elements induce mechanical deformations and variations of refractive indices. These effects can in turn induce large focal shifts and wavefront aberrations. On the other hand, in the case of reflective optical elements, thermally created mechanical deformations are the main cause of thermally-induced wavefront aberrations. Here it should be noted that even if all internal optical elements are properly thermally insulated, thermally induced deformations of enclosures, supports, and other devices that are in direct mechanical contact with optics can cause significant optical misalignments.
To design effective control strategies for the compensation of thermally-induced wavefront aberrations or to design novel wavefront correction devices that are based on thermo-mechanical actuation, it is often necessary to develop high-fidelity models of thermally-induced mechanical deformations and wavefront aberrations. Apart from this, high-fidelity models are important for performance prediction and worst-case analysis of optical systems under the negative influence of thermal effects. To model thermally-induced wavefront aberrations it is necessary to couple structural and thermal partial differential equations with optical parameters and ray propagation equations. These models are often referred to as Structural-Thermal-Optical-Performance (STOP) models. The development of accurate STOP models is a challenging task. First of all, STOP models involve different time scales of physical processes, as well as different types of partial differential equations and boundary conditions. Consequently, STOP models can often be numerically stiff and difficult to discretize and simulate. Secondly, for the development of efficient prediction and control algorithms, it is crucial to obtain low-dimensional models. However, discretized STOP models obtained by applying finite-element methods lead to state-space models with state dimensions of the order of $10^{5}$ or even $10^{6}$. Such large-scale models are impractical for real-time prediction or control. Finally, it is often the case that the parameters describing the STOP models are not accurately known, or there are other model uncertainties. Consequently, it is often necessary to estimate the models directly from experimentally collected data. All these facts call for the development of data-driven estimation and model validation approaches capable of estimating low-dimensional STOP models. This paper aims at developing and testing such approaches.
In our previous work~\cite{haber2020modeling}, we investigated the potential of using a subspace system identification method~\cite{verhaegen2007filtering,haber2020modelingHaberVerhaegen,haber2014subspace,haber2013identification} for estimating STOP models of refractive optical systems. In~\cite{haber2020modeling}, we considered a test case consisting of a single lens with an optomechanical support structure. By using the simulation data, we demonstrated that the subspace system identification method shows promising potential for accurately estimating low-order transient STOP models. However, the feasibility of the subspace identification method for estimating low-dimensional STOP models of reflective optics has not been investigated. Then, in~\cite{haber2021modeling}, we derived and experimentally verified a model of transient thermal dynamics of an 8-inch aluminum mirror prototype. In the same paper, we used model-order reduction techniques to develop low-order state-space models of thermal dynamics. The results reported in~\cite{haber2021modeling} indicate that the transient thermal dynamics of reflective optics can be approximated by low-order models. However, in~\cite{haber2021modeling}, we only considered thermal dynamics, without coupling the heat equation with the other equations mathematically describing thermal deformation and optical ray propagation. Consequently, it is not clear whether the integrated transient STOP dynamics of reflective optics can be approximated by low-dimensional models.
Motivated by the promising results presented in~\cite{haber2020modeling,haber2021modeling}, and by the open research questions described above, in this paper we investigate the feasibility and performance of the subspace system identification method for estimating STOP models of reflective optical systems. As a test case, we use a Newtonian telescope structure. We estimate and validate a state-space model of a transient STOP dynamics. First, we model the system in COMSOL Multiphysics. Then, we use the LiveLink for MATLAB software module to export the wavefront aberrations data from COMSOL to MATLAB. This data is used to test the subspace identification method that is implemented in Python. Our results show that the large-dimensional STOP dynamics of the considered optical system can be accurately approximated by a low-dimensional state-space model. Due to its low-dimensional nature and state-space form, the estimated model can effectively be used for the prediction, estimation, and control of thermally-induced wavefront aberrations. Furthermore, the estimation and validation procedures can also be used for the development of feedforward adaptive optics compensation methods~\cite{haber2022dual,Roddier1999,tyson2010principles,Haber:13,Bonora2006,haber2021general,vogel2010modeling,polo2013linear}. The developed MATLAB, COMSOL, and Python codes are available online~\cite{stopCodesHaber2022,stopSubspaceCodesHaber2022}.
A few comments about the synergistic approach presented in this paper are in order. Since the purpose of this paper is to test the feasibility of the subspace identification method, we use simulated STOP data to test the identification approach. The next development stage is to experimentally verify the presented approach. This is a future research direction. In our accompanying article~\cite{haberMLSTOP2022}, we test the potential of using machine learning techniques for estimating low-order STOP models of the Newtonian telescope structure. The system identification approach presented in this paper and the machine learning approach presented in the accompanying paper~\cite{haberMLSTOP2022} complement each other.
This paper is organized as follows. In Section~\ref{sec:systemSTOPmodel}, we present the STOP model and perform step response analysis. In Section~\ref{sec:systemIdentification}, we present the system identification approach and results. Finally, in Section~\ref{sec:conclusions}, we present conclusions and briefly discuss future research directions.
\section{SYSTEM STOP MODEL}
\label{sec:systemSTOPmodel}
In this section, we develop the system STOP model. Figure~\ref{fig:Graph1} shows the system structure. This is a conceptual design obtained by combining a Newtonian telescope structure with a primary mirror support. We use ray-tracing parameters and dimensions from~\cite{comsolNewtonian2022} to perform a ray tracing analysis in COMSOL Multiphysics. Table~1 summarizes the most important geometrical and ray tracing parameters. The primary mirror, denoted by 1 in Fig.~\ref{fig:Graph1}, has holes on the back side that are used to place cooler/heater devices and thermocouples for observing the temperature. Holes, denoted by 7, are distributed over a 9 by 9 grid. The motivation for introducing the heaters/coolers originates from our previous work on designing feedback temperature control systems for optical components~\cite{haber2020modeling,haber2021modeling}. In our STOP simulations, heaters (heat inputs) are used to provide the heat power that increases the primary mirror and support structure temperatures, and consequently, introduces wavefront aberrations. Also, in our STOP simulations, we introduce external heat-flux disturbances acting on one side of the primary mirror. We are interested in developing a STOP model that relates the time series of the applied heat inputs and external heat-flux disturbances with the time-series of observed wavefront aberrations expressed in the Zernike basis.
The support structure of the primary mirror is denoted by 2 in Fig.~\ref{fig:Graph1}. A more detailed view of the mirror support structure is shown in Fig.~\ref{fig:Graph3}(b). We assume that the primary mirror and the support structure are made of an aluminum alloy, with the thermal and structural parameters given in Table~\ref{symbolGlossaryOptical}. Our modeling approach can easily be generalized to other mirror materials, mirror geometries, and mount structures. Although we followed some guidelines~\cite{lockwoodOptics2022} for designing and modeling the support structure, the mirror support structure is not optimized from the structural and thermo-mechanical perspectives. The purpose of this paper is not to propose an optimized support structure; rather, it is to test the ability of subspace identification techniques to estimate STOP models. The geometry of the primary mirror support structure does not have a significant influence on our estimation results. The secondary mirror is denoted by 3 in Fig.~\ref{fig:Graph1}. The ray propagation obstruction is denoted by 4. The image (focal plane) is denoted by 5. Arrow 6 denotes the direction of rays entering the telescope.
\begin{table}[H]
\centering
\begin{tabular}{|l| l |}
\hline
Entrance pupil diameter & $0.25\;\; [m]$ \\
\hline
Primary mirror focal length & $1\;\; [m]$ \\
\hline
Primary mirror conic constant & $-1$ \\
\hline
Primary mirror focal ratio & $4$ \\
\hline
Image plane position (relative to optical axis) & $0.2\;\; [m]$ \\
\hline
Secondary mirror diameter & $0.05\;\; [m]$ \\
\hline
Secondary mirror offset (relative to optical axis) & $0.0044194\;\; [m]$ \\
\hline
Image plane diameter & $0.05\;\; [m]$ \\
\hline
Number of extra azimuthal points & $50$ \\
\hline
Primary mirror surface diameter & $0.26\;\; [m]$ \\
\hline
Primary mirror full diameter & $0.275\;\; [m]$ \\
\hline
Primary mirror thickness & $0.035\;\; [m]$ \\
\hline
Secondary mirror thickness & $0.01\;\; [m]$ \\
\hline
Wavelength & $550\;\; [nm]$ \\
\hline
Mirror emissivity & $0.1$ \\
\hline
Ambient temperature & $293.15\;\; [K]$ \\
\hline
Heat transfer coefficient - convection & $5\;\; [W/(m^{2}\cdot K)]$ \\
\hline
Heat capacity at constant pressure & $900\;\; [J/(kg\cdot K)]$ \\
\hline
Thermal conductivity & $238\;\; [W/(m\cdot K)]$ \\
\hline
Coefficient of thermal expansion & $23 \cdot 10^{-6}\;\; [1/K]$ \\
\hline
Density & $2700\;\; [kg/m^{3}]$ \\
\hline
Young's modulus & $70 \cdot 10^{9}\;\; [Pa]$ \\
\hline
Poisson's ratio & $0.33$ \\
\hline
\end{tabular}
\caption{Optical, thermal, and structural parameters that are used to model the STOP system.\label{symbolGlossaryOptical}}
\end{table}
Figure~\ref{fig:Graph2}(a) shows a ray release grid and Fig.~\ref{fig:Graph2}(b) shows some of the simulated ray trajectories. For clarity, we show the trajectories of only a small portion of the released rays.
\subsection{Control heater STOP results}
We simulate a step response of the system, where the inputs are control heaters. We assume that the central and four neighboring heaters placed in the holes on the back of the primary mirror are active. We assume that every heater generates $4\;[W]$ of power. The heater power is constant during simulations. To perform the STOP analysis, we couple Geometrical Optics, Solid Mechanics, and Heat Transfer in Solids COMSOL Multiphysics modules. We first define and run a COMSOL study consisting of Solid Mechanics and Heat Transfer in Solids modules. This simulation run produces time-dependent temperature and displacement fields. Then, the results of this simulation are used in ray tracing simulations. To perform ray tracing simulations, we use the Geometrical Optics module. The COMSOL and MATLAB codes used to perform STOP analysis are posted online~\cite{stopCodesHaber2022}. Fig.~\ref{fig:Graph3}(a) shows the meshed geometry. The mesh contains around $109\cdot 10^{3}$ elements. Fig.~\ref{fig:Graph3}(b) shows the support structure of the primary mirror. The green marks denote the fixed constraints (the displacement is set to zero) for performing the STOP analysis.
We simulate the transient STOP dynamics for $2\cdot 10^{4}$ seconds with a step size of $100$ seconds. The simulated temperature distributions are shown in Fig.~\ref{fig:Graph4} for $10^{3}$ and $10^{4}$ seconds. The simulated displacement distributions are shown in Fig.~\ref{fig:Graph5} for $10^{3}$ and $10^{4}$ seconds. Figure~\ref{fig:Graph6} shows (a) transient temperature and (b) displacement responses at the spatial locations defined by points P1, P2, ..., P7 that are shown in Fig.~\ref{fig:Graph1}. This graph can be used to estimate transient response parameters, such as rise time, settling time, and time constants. From Figs.~\ref{fig:Graph4} and \ref{fig:Graph5}, we can observe that the simulated temperature and displacement fields spatially correlate with the locations of the heat inputs. On the other hand, from Fig.~\ref{fig:Graph6}, we can observe that there are spatial gradients of temperature and displacement fields at the top surface of the primary mirror. At first glance, the magnitudes of these gradients do not seem significant. However, the wavefront aberration results presented in the sequel reveal that even these moderate gradients can cause significant wavefront aberrations and spot-diagram divergences in the focal plane.
Next, we present spot diagrams and wavefront aberrations at the focal (image) plane. Fig.~\ref{fig:Graph7}(a) shows the spot diagrams at time instants $500$, $10^{3}$, $5\cdot 10^{3}$, and $10^{4}$ seconds. Fig.~\ref{fig:Graph7}(b) shows the wavefront aberrations at the same time instants.
\subsection{Heat disturbance STOP results}
Here, we present step response results where the inputs are the external heat-flux disturbances acting on one side of the mirror. The spatial location of the external disturbances is shown in Fig.~\ref{fig:Graph8} (light blue). We simulate the STOP dynamics for $2\cdot 10^{4}$ seconds with a step size of 100 seconds. In our STOP simulations, the total heat flux power of the external disturbances is $50\;[W]$. Figure~\ref{fig:Graph2disturbance}(a) and (b) show simulated temperature spatial distributions of the primary mirror at time instants $5 \cdot 10^2$ and $2 \cdot 10^4$ seconds. The panels (c) and (d) in the same figure show simulated displacement spatial distributions at identical time instants.
Figure~\ref{fig:Graph3disturbance}(a) shows the calculated spot diagrams at the image plane at the time instants $5\cdot 10^{2}$, $5\cdot 10^{3}$, and $2\cdot 10^{4}$ seconds. Figure~\ref{fig:Graph3disturbance}(b) shows the calculated wavefront aberrations at identical time instants.
\section{SYSTEM IDENTIFICATION}
\label{sec:systemIdentification}
In this section, we briefly summarize the system identification method that we use and present estimation and model validation results. Additional technical details on the subspace identification method can be found in~\cite{haber2019subspace,haber2019identification,haber2020modelingHaberVerhaegen}. Our goal is to estimate the following state-space model:
\begin{align}
\mathbf{x}_{k+1}& =A\mathbf{x}_{k}+B\mathbf{z}_{k} \label{ssModel1} \\
\mathbf{y}_{k} & = C\mathbf{x}_{k} \label{ssModel2}
\end{align}
where $\mathbf{x}_{k}\in \mathbb{R}^{n}$ is the system state vector, the subscript $k=0,1,2,\ldots$ of all vectors denotes a discrete-time instant, $\mathbf{z}_{k} \in \mathbb{R}^{10}$ is the input vector (consisting of control inputs and disturbances) at the discrete-time instant $k$, $\mathbf{y}_{k}\in \mathbb{R}^{r}$ is the vector consisting of selected Zernike coefficients, and $A\in \mathbb{R}^{n\times n}$, $B\in \mathbb{R}^{n\times 10}$, and $C\in \mathbb{R}^{r\times n}$ are the system matrices. The input vector $\mathbf{z}_{k}$ consists of control inputs $u_{1,k},u_{2,k},\ldots, u_{9,k}\in \mathbb{R}$ and the external heat disturbance $d_{10,k}\in \mathbb{R}$:
\begin{align}
\mathbf{z}_{k}=\begin{bmatrix}u_{1,k} & u_{2,k} & \ldots & u_{9,k} & d_{10,k} \end{bmatrix}^{T}
\end{align}
The control inputs $u_{1,k},u_{2,k},\ldots, u_{9,k}$ represent the heat power generated by the heaters, and external heat disturbance represents the power of the external heat flux acting on the mirror. Figure~\ref{fig:Graph8} shows the physical locations of the control inputs (red circles) and the external heat disturbance (blue area on the mirror side).
The identification problem is to estimate the state order $n$ and the state-space matrices $A$, $B$, and $C$ of the state-space model \eqref{ssModel1}-\eqref{ssModel2} by using the time series of collected input-output data $\{\mathbf{y}_{k},\mathbf{z}_{k}\}_{k=0,1,\ldots,N}$. That is, by using the time series of the collected Zernike coefficients together with the time series of the control inputs and the external heat disturbance, we want to estimate the model order and the state-space matrices.
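To make the model structure concrete, the following Python sketch propagates a discrete-time state-space model of the form \eqref{ssModel1}-\eqref{ssModel2}. The matrices, dimensions, and input sequence below are illustrative placeholders, not the identified STOP model.

```python
import numpy as np

# Illustrative dimensions: n = 2 states, 10 inputs (9 heaters + 1
# disturbance), r = 3 outputs (e.g., piston, horizontal tilt, defocus).
rng = np.random.default_rng(0)
n, m, r, N = 2, 10, 3, 31

# Placeholder system matrices (not the identified STOP model).
A = np.array([[0.9, 0.05],
              [0.0, 0.8]])              # stable discrete-time dynamics
B = 0.01 * rng.standard_normal((n, m))
C = rng.standard_normal((r, n))

def simulate(A, B, C, Z, x0=None):
    """Propagate x_{k+1} = A x_k + B z_k, y_k = C x_k over inputs Z (N x m)."""
    x = np.zeros(A.shape[0]) if x0 is None else np.asarray(x0, float)
    Y = np.empty((Z.shape[0], C.shape[0]))
    for k, z in enumerate(Z):
        Y[k] = C @ x          # output at time k
        x = A @ x + B @ z     # state update
    return Y

Z = rng.integers(0, 2, size=(N, m)).astype(float)  # binary input sequence
Y = simulate(A, B, C, Z)
```

With a zero initial state, the first output sample is zero and each subsequent sample reflects the accumulated effect of past inputs, which is exactly the input-output map the identification problem inverts.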
We use a version of the subspace identification method derived and summarized in our previous papers~\cite{haber2019subspace,haber2019identification,haber2020modelingHaberVerhaegen,haber2020modeling}. We implemented the method in Python and used the LiveLink for MATLAB module to generate the data sets for testing it. The Python codes, together with the LiveLink for MATLAB and COMSOL Multiphysics codes, are provided online~\cite{stopSubspaceCodesHaber2022}. In the sequel, we briefly describe the identification steps and present the results.
\subsection*{Step 1: Generate identification and validation data sets}
In this paper, we generate input-output data sets for testing the subspace identification method by simulating the system STOP model. This is common practice when developing and testing estimation approaches: in the development phase, the performance of an approach is first tested on simulation data; once this initial testing phase is completed and the approach is iteratively refined, the next phase is to test it on experimentally collected data. In this paper, we focus on testing the identification method with simulation data; experimental verification of the developed approach is a future research direction.
First, we generate input sequences for system identification. Generally speaking, input sequences have to be sufficiently rich such that they excite the system modes in the desired frequency range. We generate input sequences (control inputs and the external heat disturbance) as binary pseudo-random numbers drawn from a uniform discrete distribution. We use the MATLAB function randi() to generate these sequences. Figure~\ref{fig:Graph9}(a) shows an example of the generated input sequence.
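A numpy analogue of the MATLAB randi() call described above can generate the same kind of binary pseudo-random excitation; the power scaling P_on is an assumed value for illustration, matching the 4~W heater level used in the step-response simulations.

```python
import numpy as np

# Binary pseudo-random excitation sequences, analogous to MATLAB randi().
rng = np.random.default_rng(42)
N, m = 31, 10            # number of samples and inputs (9 heaters + 1 disturbance)
P_on = 4.0               # assumed "on" power in watts (illustrative)

Z = rng.integers(0, 2, size=(N, m)) * P_on  # each entry is either 0 or P_on
```

Binary sequences of this kind are persistently exciting over a broad frequency band, which is why they are a common default choice for identification inputs.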
The input sequences are applied to the STOP model, and by simulating it we obtain the output data: the time series of the first $21$ Zernike modes (coefficients). However, there are only 3 dominant Zernike modes whose magnitudes are significantly larger than those of the other modes. These modes are piston, horizontal tilt, and defocus, and they are shown in Fig.~\ref{fig:Graph9}(b). Consequently, we only include these modes in the output vector; that is, in our case, $r=3$ ($r$ is the dimension of the output vector).
We generate two independent sets of inputs. The first input set is used to generate the model outputs used for system identification; this data set is called the identification data set. The second input set, which is statistically independent of the first, is used to generate the outputs used for model validation; this data set is called the validation data set. To generate both data sets, we simulate the STOP model for a total duration of $9 \cdot 10^{3}$ seconds with a discretization step of $300$ seconds. This gives in total $301$ data samples. The main obstacle to generating larger data sets is the time it takes to simulate the STOP model. We performed simulations on a desktop computer with 64 GB RAM and an Intel i9-10900 CPU. Generating one data set on this computer takes at least 6 hours of computation time for a moderately sized discretization mesh; for denser meshes, it might take several days to obtain a single data set.
\subsection*{Step 2: Estimation of the state-space model}
First, we estimate a Vector AutoRegressive eXogenous (VARX) model. The VARX model postulates that the output of the system at the time instant $k$ is a linear combination of the past inputs and outputs from $k-1$ until $k-p$; for more details, see~\cite{haber2019subspace,haber2019identification,haber2020modelingHaberVerhaegen,haber2020modeling}. Here, $p$ is referred to as the past window. The first step of the subspace identification method is to estimate the parameters of the VARX model and the past window $p$. We use a simple least-squares technique to estimate the VARX model parameters, and the Akaike Information Criterion (AIC) to estimate the value of the past window $p$~\cite{lutkepohl2005new}. Figure~\ref{fig:Graph10}(a) shows the AIC as a function of the past window $p$. We select the past window $p=39$, which produces the smallest AIC value.
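A minimal least-squares VARX fit and AIC evaluation in the spirit of this step might look as follows; the regressor layout and the multivariate AIC form are standard textbook choices, and not necessarily identical to the implementation in the posted codes.

```python
import numpy as np

def fit_varx(Y, Z, p):
    """Least-squares fit of y_k = sum_{i=1..p} (M_i y_{k-i} + N_i z_{k-i})."""
    N = Y.shape[0]
    # One regressor row per time step: stacked past outputs and inputs.
    Phi = np.array([
        np.concatenate([np.concatenate([Y[k - i], Z[k - i]])
                        for i in range(1, p + 1)])
        for k in range(p, N)
    ])
    Theta, *_ = np.linalg.lstsq(Phi, Y[p:], rcond=None)
    E = Y[p:] - Phi @ Theta          # one-step-ahead residuals
    return Theta, E

def aic(E, n_params):
    """A standard multivariate AIC: log-det of residual covariance + penalty."""
    N = E.shape[0]
    Sigma = (E.T @ E) / N + 1e-12 * np.eye(E.shape[1])  # regularize the det
    return np.log(np.linalg.det(Sigma)) + 2.0 * n_params / N
```

On noise-free data generated by a VARX(1) system, this fit recovers the true coefficients and the residuals vanish to machine precision; on noisy data, sweeping `p` and picking the AIC minimum mirrors the selection of $p=39$ described above.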
We use the estimated VARX model and the input-output identification data to form a data matrix. We estimate the state sequence of the system by performing a singular value decomposition of this matrix~\cite{verhaegen2007filtering}. After the state sequence is estimated, we estimate the state-space matrices by solving a least-squares problem. During these estimation steps, we also estimate the state order $n$. The state order is estimated by detecting gaps in the plot of singular values; namely, it can be taken as the singular value index immediately after a significant gap. Figure~\ref{fig:Graph10}(b) shows the singular values. We can observe that the candidates for the state order are $n=2,3,6$, as well as higher state orders, such as $n=35$.
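The gap-based order selection can be illustrated with a synthetic singular-value spectrum; the ranking rule below is one simple way to formalize "the index immediately after a significant gap".

```python
import numpy as np

def order_candidates(s, max_order=20):
    """Candidate state orders, ranked with the largest singular-value gap first."""
    s = np.asarray(s, float)[:max_order]
    ratios = s[:-1] / s[1:]                    # relative drop after each index
    return (1 + np.argsort(ratios)[::-1]).tolist()

# Synthetic spectrum with an obvious gap after the second singular value.
s = [10.0, 5.0, 0.01, 0.009, 0.008]
candidates = order_candidates(s)               # first candidate is n = 2
```

In practice, several candidate orders are kept (here the analogues of $n=2,3,6,35$) and the final choice is deferred to the validation step.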
\subsection*{Step 3: Model validation and quality check}
In practice, several models are estimated and the final model is chosen by testing their performance on the validation data set. Following this practice, once we have estimated models for different state orders, we simulate them using the input sequence from the validation data set and compare the simulated model outputs with the output from the validation data set. The error between the validation output (also called the real output) and the simulated output (also called the predicted output) is called the validation error or the model prediction error. The final model is selected as the one that produces the smallest validation error. Panels (a), (b), and (c) in Figure~\ref{fig:Graph11} show the real and predicted piston, defocus, and horizontal tilt Zernike coefficients (outputs), respectively, for the estimated state order $n=2$. Panels (a), (b), and (c) of Fig.~\ref{fig:Graph12} show the predicted and real values of the same coefficients for the estimated order $n=35$.
On the other hand, panels (d) in these two figures show the correlation values of the validation error for predicting the defocus term. The red dashed lines represent the bounds of the interval for testing the white-noise hypothesis of the validation error; this interval provides an additional check of model quality. Ideally, if the final model captures all the information available in the data, the validation error should have a white-noise property (this also holds when the data are corrupted by white measurement noise). If more than $95$ percent of the correlation values lie within the interval, we can assume that the validation error is white. From panels (d), we can observe that this is not the case, since there is a strong correlation at smaller lag values. This implies that the estimation results can be improved by changing the model structure or choosing a different model order. Nevertheless, from panels (a) and (b), we can observe that the estimated final models for both $n=2$ and $n=35$ accurately predict the piston and defocus modes. The prediction is worse for the horizontal tilt mode, because this mode is more oscillatory and thus more difficult to estimate; furthermore, the piston and defocus mode values are significantly larger than those of the horizontal tilt mode. These results can be improved by some form of data scaling and by detrending the piston and defocus modes; improving our simulation results in this way is a future research direction.
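The whiteness check on the validation error can be sketched as follows; the $\pm 1.96/\sqrt{N}$ bound is the standard 95\% confidence interval for the sample autocorrelations of a white-noise sequence.

```python
import numpy as np

def autocorr(e, max_lag):
    """Sample autocorrelation of a residual sequence at lags 1..max_lag."""
    e = np.asarray(e, float) - np.mean(e)
    denom = np.dot(e, e)
    return np.array([np.dot(e[:-l], e[l:]) / denom for l in range(1, max_lag + 1)])

def passes_whiteness(e, max_lag=20, z=1.96):
    """True if at least 95% of autocorrelations lie within +/- z/sqrt(N)."""
    rho = autocorr(e, max_lag)
    bound = z / np.sqrt(len(e))
    return bool(np.mean(np.abs(rho) <= bound) >= 0.95)
```

A strongly correlated residual, such as an AR(1) sequence, fails this test, mirroring the correlated validation errors visible at small lags in panels (d).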
As a final model quality check, we investigate the stability of the estimated models. Figure~\ref{fig:Graph13} shows the eigenvalues of the estimated models for (a) $n=2$ and (b) $n=35$. We can observe that both models are stable, since all the eigenvalues lie inside the unit circle. Another important observation is that for $n=35$ the eigenvalues cluster close to the unit circle; a thorough analysis of this phenomenon is left for future research.
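The stability check amounts to verifying that the spectrum of the estimated $A$ matrix lies inside the unit circle; the matrices below are illustrative examples, not the identified models.

```python
import numpy as np

def is_stable(A):
    """Discrete-time stability: all eigenvalues of A lie inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1.0))

A_ok = np.array([[0.9, 0.05],
                 [0.0, 0.8]])      # eigenvalues 0.9 and 0.8: stable
A_bad = np.array([[1.1, 0.0],
                  [0.0, 0.5]])     # eigenvalue 1.1: unstable
```

Eigenvalues clustered just inside the unit circle, as observed for $n=35$, indicate slowly decaying modes and a model close to the stability boundary.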
\section{Conclusion and Future Work}
\label{sec:conclusions}
In this paper, we investigated the feasibility of using the subspace identification method for estimating state-space models of the transient Structural-Thermal-Optical-Performance (STOP) dynamics of reflective optical systems. We tested the method on a Newtonian telescope structure and obtained identification and validation data sets by simulating the STOP model in COMSOL Multiphysics. Our results demonstrate that the subspace identification method is capable of estimating low-order STOP models of the dominant wavefront aberrations. Future research should focus on improving the estimation performance through proper data preprocessing and method tuning, as well as on experimental verification of the subspace identification method.
\bibliography{sample} %
\bibliographystyle{spiebib} %
|
Title:
Precise mass determination for the keystone sub-Neptune planet transiting the mid-type M dwarf G 9-40 |
Abstract: Context. Despite being a prominent subset of the exoplanet population
discovered in the past three decades, the nature and provenance of
sub-Neptune-sized planets are still one of the open questions in exoplanet
science. Aims. For planets orbiting bright stars, precisely measuring the
orbital and planet parameters of the system is the best approach to distinguish
between competing theories regarding their formation and evolution. Methods. We
obtained 69 new radial velocity observations of the mid-M dwarf G 9-40 with the
CARMENES instrument to measure for the first time the mass of its transiting
sub-Neptune planet, G 9-40 b, discovered in data from the K2 mission. Results.
Combined with new observations from the TESS mission during Sectors 44, 45, and
46, we are able to measure the radius of the planet to an uncertainty of 3.4%
(Rb = 1.900 +- 0.065 Re) and determine its mass with a precision of 16% (Mb =
4.00 +- 0.63 Me). The resulting bulk density of the planet is inconsistent with
a terrestrial composition and suggests the presence of either a water-rich core
or a significant hydrogen-rich envelope. Conclusions. G 9-40 b is referred to
as a keystone planet due to its location in period-radius space within the
radius valley. Several theories offer explanations for the origin and
properties of this population and this planet is a valuable target for testing
the dependence of those models on stellar host mass. By virtue of its
brightness and small size of the host, it joins L 98-59 d as one of the two
best warm (Teq ~ 400 K) sub-Neptunes for atmospheric characterization with
JWST, which will probe cloud formation in sub-Neptune-sized planets and break
the degeneracies of internal composition models.
| https://export.arxiv.org/pdf/2208.07287 |
\begin{sidewaystable}
\begin{tiny}
\begin{center}
\caption{Radial velocities and spectral activity indicators measured from CARMENES-VIS spectra with \texttt{serval} and \texttt{raccoon}.
\label{table-complete_serval_output}}
\begin{tabular}{rrrrrrrrrrrrrrrrrrrrrrrrrrr}
\hline
\hline
\noalign{\smallskip}
\multicolumn{1}{c}{BJD$_\mathrm{TDB}$} &
\multicolumn{2}{c}{RV} &
\multicolumn{2}{c}{BIS} &
\multicolumn{2}{c}{CCF\_FWHM} &
\multicolumn{2}{c}{CCF\_CTR} &
\multicolumn{2}{c}{CRX} &
\multicolumn{2}{c}{dlW} &
\multicolumn{2}{c}{$\mathrm{H_{\alpha}}$} &
\multicolumn{2}{c}{$\mathrm{NaD_{1}}$} &
\multicolumn{2}{c}{$\mathrm{NaD_{2}}$} &
\multicolumn{2}{c}{$\mathrm{CaIRT_{1}}$} &
\multicolumn{2}{c}{$\mathrm{CaIRT_{2}}$} &
\multicolumn{2}{c}{$\mathrm{CaIRT_{3}}$} &
\multicolumn{1}{c}{SNR} &
\multicolumn{1}{c}{$\mathrm{T_{exp}}$}\\
\multicolumn{1}{c}{-2457000} &
\multicolumn{2}{c}{($\mathrm{m\,s^{-1}}$)} &
\multicolumn{2}{c}{($\mathrm{m\,s^{-1}}$)} &
\multicolumn{2}{c}{($\mathrm{km\,s^{-1}}$)} &
\multicolumn{2}{c}{(\%)} &
\multicolumn{2}{c}{($\mathrm{m\,s^{-1}\,Np^{-1}}$)} &
\multicolumn{2}{c}{($\mathrm{m^2\,s^{-2}}$)} &
\multicolumn{2}{c}{---} &
\multicolumn{2}{c}{---} &
\multicolumn{2}{c}{---} &
\multicolumn{2}{c}{---} &
\multicolumn{2}{c}{---} &
\multicolumn{2}{c}{---} &
\multicolumn{1}{c}{(@737nm)} &
\multicolumn{1}{c}{(s)}\\
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{$\sigma$} &
\multicolumn{1}{c}{Val.} &
\multicolumn{1}{c}{Val.}\\
\hline
1453.68494 & 3.934 & 2.855 & -1.262 & 11.462 & 3.404 & 0.097 & 19.740 & 0.481 & 69.678 & 27.782 & -0.013 & 3.696 & 0.898 & 0.005 & 0.362 & 0.020 & 0.252 & 0.020 & 0.609 & 0.004 & 0.462 & 0.004 & 0.435 & 0.004 & 37.2 & 1800.0\\
1453.70640 & 3.565 & 3.069 & -10.428 & 15.480 & 3.406 & 0.098 & 19.913 & 0.489 & 57.342 & 31.028 & -27.949 & 5.715 & 0.904 & 0.007 & 0.449 & 0.037 & 0.278 & 0.037 & 0.616 & 0.005 & 0.459 & 0.006 & 0.432 & 0.005 & 28.4 & 1800.0\\
1455.70680 & 2.484 & 1.960 & -2.482 & 8.207 & 3.402 & 0.096 & 19.795 & 0.478 & 5.249 & 19.534 & -4.899 & 1.957 & 0.908 & 0.004 & 0.252 & 0.012 & 0.168 & 0.011 & 0.620 & 0.003 & 0.465 & 0.003 & 0.438 & 0.003 & 50.7 & 1800.6\\
1455.72952 & 3.860 & 1.669 & -9.820 & 8.302 & 3.410 & 0.097 & 19.757 & 0.477 & -17.757 & 16.071 & -3.921 & 2.707 & 0.912 & 0.004 & 0.216 & 0.011 & 0.182 & 0.011 & 0.615 & 0.003 & 0.462 & 0.003 & 0.439 & 0.003 & 50.2 & 1799.9\\
1458.66661 & -0.491 & 1.837 & -3.324 & 7.473 & 3.405 & 0.096 & 19.869 & 0.477 & 4.126 & 17.180 & -10.637 & 2.385 & 0.899 & 0.003 & 0.178 & 0.010 & 0.144 & 0.010 & 0.620 & 0.003 & 0.467 & 0.003 & 0.435 & 0.003 & 55.5 & 1800.2\\
1462.67440 & 6.042 & 1.649 & -11.165 & 7.027 & 3.425 & 0.096 & 19.665 & 0.469 & 13.442 & 13.563 & 3.018 & 2.238 & 0.904 & 0.003 & 0.209 & 0.008 & 0.169 & 0.009 & 0.615 & 0.003 & 0.454 & 0.003 & 0.427 & 0.002 & 59.3 & 1800.0\\
1468.64998 & 2.301 & 2.567 & -6.104 & 10.735 & 3.416 & 0.097 & 19.725 & 0.476 & -14.309 & 24.064 & -4.347 & 3.064 & 0.886 & 0.005 & 0.248 & 0.019 & 0.226 & 0.018 & 0.606 & 0.004 & 0.464 & 0.004 & 0.425 & 0.004 & 39.5 & 1802.0\\
1468.67382 & 4.749 & 2.143 & -1.174 & 9.442 & 3.412 & 0.095 & 19.920 & 0.475 & 29.988 & 17.744 & -15.645 & 3.750 & 0.890 & 0.004 & 0.165 & 0.016 & 0.115 & 0.016 & 0.606 & 0.003 & 0.459 & 0.004 & 0.429 & 0.003 & 44.3 & 1800.0\\
1469.66456 & -4.790 & 2.083 & -13.102 & 9.446 & 3.408 & 0.095 & 19.761 & 0.472 & 34.807 & 20.745 & 1.094 & 2.397 & 0.891 & 0.004 & 0.255 & 0.015 & 0.233 & 0.015 & 0.620 & 0.003 & 0.458 & 0.003 & 0.434 & 0.003 & 44.6 & 1799.9\\
1469.68703 & -2.067 & 2.050 & -12.305 & 8.480 & 3.411 & 0.096 & 19.923 & 0.479 & -1.522 & 20.987 & -15.542 & 3.672 & 0.881 & 0.004 & 0.202 & 0.013 & 0.146 & 0.013 & 0.614 & 0.003 & 0.462 & 0.003 & 0.438 & 0.003 & 48.9 & 1800.0\\
1471.73246 & 4.247 & 2.944 & -25.163 & 13.118 & 3.388 & 0.096 & 20.126 & 0.484 & -8.525 & 30.811 & -27.297 & 4.992 & 0.893 & 0.006 & 0.397 & 0.031 & 0.259 & 0.031 & 0.600 & 0.005 & 0.468 & 0.005 & 0.430 & 0.005 & 32.2 & 1799.9\\
1493.57395 & -10.080 & 2.933 & -8.657 & 11.742 & 3.416 & 0.095 & 19.854 & 0.471 & -6.538 & 31.237 & -5.357 & 2.938 & 0.898 & 0.005 & 0.397 & 0.022 & 0.251 & 0.021 & 0.613 & 0.004 & 0.466 & 0.004 & 0.428 & 0.004 & 36.3 & 1799.9\\
1493.59887 & -8.050 & 2.566 & 7.337 & 10.538 & 3.422 & 0.094 & 20.037 & 0.471 & 0.482 & 26.691 & -19.292 & 4.036 & 0.892 & 0.005 & 0.301 & 0.020 & 0.205 & 0.019 & 0.613 & 0.004 & 0.455 & 0.004 & 0.436 & 0.004 & 39.8 & 1800.0\\
1495.56685 & 6.340 & 2.116 & -10.725 & 8.760 & 3.422 & 0.095 & 19.933 & 0.474 & 4.660 & 20.531 & -11.125 & 3.026 & 0.892 & 0.004 & 0.240 & 0.014 & 0.162 & 0.014 & 0.613 & 0.003 & 0.460 & 0.003 & 0.438 & 0.003 & 47.5 & 1799.9\\
1495.59638 & 2.097 & 1.952 & -9.143 & 8.964 & 3.428 & 0.096 & 19.877 & 0.476 & -20.237 & 18.711 & -13.786 & 3.071 & 0.898 & 0.004 & 0.281 & 0.014 & 0.200 & 0.014 & 0.611 & 0.003 & 0.457 & 0.003 & 0.435 & 0.003 & 46.9 & 1800.0\\
1496.55884 & 2.142 & 1.971 & 1.761 & 8.704 & 3.427 & 0.096 & 19.784 & 0.472 & -13.410 & 19.751 & -3.428 & 2.373 & 0.896 & 0.004 & 0.233 & 0.013 & 0.165 & 0.013 & 0.612 & 0.003 & 0.461 & 0.003 & 0.430 & 0.003 & 48.2 & 1800.0\\
1496.58177 & 4.676 & 1.965 & -4.574 & 8.820 & 3.429 & 0.096 & 19.876 & 0.474 & -12.731 & 19.843 & -11.867 & 2.903 & 0.894 & 0.004 & 0.284 & 0.015 & 0.210 & 0.015 & 0.615 & 0.003 & 0.461 & 0.003 & 0.438 & 0.003 & 47.3 & 1799.9\\
1497.56210 & -2.145 & 2.081 & -15.810 & 8.426 & 3.418 & 0.095 & 19.910 & 0.473 & 2.411 & 20.756 & -11.467 & 3.041 & 0.900 & 0.004 & 0.174 & 0.013 & 0.141 & 0.014 & 0.613 & 0.003 & 0.456 & 0.003 & 0.436 & 0.003 & 49.3 & 1799.9\\
1498.60095 & -2.301 & 2.154 & -13.043 & 7.602 & 3.419 & 0.096 & 19.725 & 0.472 & 25.277 & 21.081 & 0.421 & 1.727 & 0.894 & 0.003 & 0.259 & 0.011 & 0.188 & 0.010 & 0.609 & 0.003 & 0.461 & 0.003 & 0.434 & 0.003 & 54.9 & 1800.0\\
1510.54963 & -4.643 & 1.675 & -3.885 & 7.771 & 3.422 & 0.095 & 19.681 & 0.468 & -15.013 & 16.717 & 4.863 & 2.199 & 0.896 & 0.004 & 0.121 & 0.010 & 0.126 & 0.010 & 0.614 & 0.003 & 0.462 & 0.003 & 0.436 & 0.003 & 54.0 & 1799.9\\
1510.57188 & -4.519 & 2.183 & -5.865 & 8.328 & 3.406 & 0.096 & 19.764 & 0.473 & 20.908 & 22.499 & -2.565 & 2.970 & 0.900 & 0.004 & 0.166 & 0.012 & 0.137 & 0.012 & 0.612 & 0.003 & 0.466 & 0.003 & 0.437 & 0.003 & 50.4 & 1800.0\\
1524.42078 & -3.329 & 2.489 & -16.819 & 9.790 & 3.412 & 0.095 & 19.804 & 0.472 & 27.452 & 23.816 & -7.132 & 2.430 & 0.891 & 0.004 & 0.082 & 0.016 & 0.109 & 0.016 & 0.604 & 0.003 & 0.458 & 0.004 & 0.437 & 0.003 & 43.1 & 1800.0\\
1524.44385 & -3.844 & 2.532 & 5.286 & 8.441 & 3.418 & 0.096 & 19.787 & 0.475 & 17.561 & 24.384 & -2.292 & 2.464 & 0.887 & 0.004 & 0.094 & 0.012 & 0.097 & 0.012 & 0.606 & 0.003 & 0.457 & 0.003 & 0.433 & 0.003 & 49.5 & 1800.0\\
1527.58635 & -2.822 & 2.238 & -15.832 & 9.011 & 3.418 & 0.097 & 19.821 & 0.478 & 30.120 & 22.789 & -4.964 & 2.287 & 0.893 & 0.004 & 0.104 & 0.013 & 0.126 & 0.014 & 0.606 & 0.003 & 0.457 & 0.003 & 0.434 & 0.003 & 46.4 & 1800.0\\
1527.60927 & -5.754 & 1.636 & -10.574 & 8.847 & 3.417 & 0.096 & 19.750 & 0.474 & 14.332 & 16.506 & 0.864 & 2.091 & 0.883 & 0.004 & 0.105 & 0.013 & 0.103 & 0.013 & 0.606 & 0.003 & 0.453 & 0.003 & 0.427 & 0.003 & 47.4 & 1800.0\\
1528.48608 & -9.979 & 1.485 & -8.419 & 7.124 & 3.419 & 0.095 & 19.713 & 0.469 & 9.027 & 14.158 & 0.938 & 1.624 & 0.896 & 0.003 & 0.106 & 0.008 & 0.104 & 0.008 & 0.611 & 0.003 & 0.460 & 0.003 & 0.430 & 0.002 & 58.2 & 1800.0\\
1528.50840 & -6.686 & 1.836 & -10.476 & 7.026 & 3.418 & 0.096 & 19.738 & 0.472 & 1.728 & 17.848 & 3.558 & 2.138 & 0.889 & 0.003 & 0.107 & 0.008 & 0.093 & 0.008 & 0.615 & 0.003 & 0.459 & 0.003 & 0.433 & 0.002 & 59.0 & 1800.0\\
1541.40341 & 2.723 & 3.411 & -4.260 & 12.354 & 3.428 & 0.097 & 19.734 & 0.474 & 17.556 & 35.773 & -6.001 & 3.379 & 0.958 & 0.006 & 0.151 & 0.023 & 0.201 & 0.023 & 0.623 & 0.004 & 0.471 & 0.005 & 0.446 & 0.004 & 34.7 & 1800.0\\
1541.42598 & -2.652 & 2.548 & -3.678 & 11.092 & 3.408 & 0.095 & 19.818 & 0.471 & 2.750 & 25.992 & -10.871 & 2.745 & 0.927 & 0.005 & 0.134 & 0.019 & 0.131 & 0.019 & 0.620 & 0.004 & 0.472 & 0.004 & 0.437 & 0.004 & 38.5 & 1800.4\\
1542.48875 & 0.497 & 1.985 & 1.013 & 10.246 & 3.425 & 0.096 & 19.723 & 0.471 & 0.176 & 19.490 & -1.541 & 1.941 & 0.890 & 0.005 & 0.116 & 0.015 & 0.107 & 0.016 & 0.612 & 0.004 & 0.458 & 0.004 & 0.433 & 0.004 & 41.5 & 1800.0\\
1542.51829 & -0.511 & 2.309 & -7.523 & 8.143 & 3.418 & 0.096 & 19.809 & 0.474 & -13.236 & 23.209 & -2.717 & 2.122 & 0.885 & 0.004 & 0.083 & 0.011 & 0.097 & 0.012 & 0.612 & 0.003 & 0.457 & 0.003 & 0.432 & 0.003 & 51.2 & 1800.0\\
1545.51141 & -6.181 & 2.749 & 12.005 & 9.215 & 3.423 & 0.097 & 19.780 & 0.477 & 32.136 & 28.068 & -9.727 & 2.988 & 0.874 & 0.004 & 0.107 & 0.013 & 0.126 & 0.014 & 0.610 & 0.003 & 0.467 & 0.003 & 0.435 & 0.003 & 45.9 & 1799.9\\
1545.53366 & -6.836 & 2.155 & -11.380 & 9.504 & 3.405 & 0.096 & 19.860 & 0.476 & 5.250 & 21.492 & -8.689 & 2.374 & 0.881 & 0.004 & 0.076 & 0.015 & 0.145 & 0.015 & 0.612 & 0.003 & 0.465 & 0.004 & 0.435 & 0.003 & 44.0 & 1800.0\\
1546.44637 & 2.874 & 1.832 & -14.310 & 7.226 & 3.432 & 0.096 & 19.687 & 0.470 & -14.799 & 18.504 & -3.907 & 2.263 & 0.894 & 0.003 & 0.105 & 0.009 & 0.106 & 0.009 & 0.613 & 0.003 & 0.461 & 0.003 & 0.434 & 0.002 & 58.0 & 1799.9\\
1546.46921 & -0.324 & 2.008 & -8.556 & 7.598 & 3.424 & 0.095 & 19.770 & 0.469 & -6.655 & 20.932 & -5.786 & 2.276 & 0.891 & 0.003 & 0.083 & 0.010 & 0.115 & 0.010 & 0.614 & 0.003 & 0.461 & 0.003 & 0.437 & 0.003 & 54.6 & 1799.9\\
1553.45292 & 3.326 & 1.998 & -7.656 & 8.941 & 3.418 & 0.095 & 19.818 & 0.469 & -7.130 & 20.218 & -7.257 & 2.427 & 0.873 & 0.004 & 0.088 & 0.013 & 0.109 & 0.013 & 0.615 & 0.003 & 0.459 & 0.003 & 0.433 & 0.003 & 47.0 & 1800.0\\
1553.47930 & 1.644 & 2.231 & -6.603 & 8.706 & 3.418 & 0.095 & 19.786 & 0.467 & -23.826 & 22.529 & -6.552 & 2.739 & 0.878 & 0.004 & 0.097 & 0.012 & 0.106 & 0.012 & 0.616 & 0.003 & 0.464 & 0.003 & 0.432 & 0.003 & 48.5 & 1800.0\\
1554.43673 & 3.499 & 1.965 & -1.681 & 8.717 & 3.436 & 0.096 & 19.702 & 0.469 & -11.389 & 18.913 & -5.435 & 2.324 & 0.890 & 0.004 & 0.109 & 0.012 & 0.095 & 0.013 & 0.616 & 0.003 & 0.462 & 0.003 & 0.430 & 0.003 & 48.4 & 1799.9\\
1554.45899 & 2.932 & 2.053 & -5.111 & 8.420 & 3.410 & 0.095 & 19.827 & 0.471 & -19.910 & 20.244 & -0.860 & 2.590 & 0.882 & 0.004 & 0.092 & 0.012 & 0.097 & 0.012 & 0.617 & 0.003 & 0.460 & 0.003 & 0.434 & 0.003 & 49.5 & 1800.0\\
1555.42633 & -1.363 & 2.268 & -19.839 & 8.336 & 3.408 & 0.095 & 19.765 & 0.471 & -34.289 & 22.240 & -9.587 & 2.555 & 0.884 & 0.004 & 0.108 & 0.011 & 0.116 & 0.012 & 0.609 & 0.003 & 0.455 & 0.003 & 0.432 & 0.003 & 50.1 & 1800.0\\
1555.44973 & -2.117 & 2.318 & 1.826 & 8.942 & 3.428 & 0.096 & 19.748 & 0.470 & -11.573 & 23.197 & -1.949 & 2.083 & 0.884 & 0.004 & 0.117 & 0.013 & 0.100 & 0.013 & 0.616 & 0.003 & 0.463 & 0.003 & 0.430 & 0.003 & 46.7 & 1800.0\\
1571.43095 & 1.152 & 3.243 & -7.333 & 15.310 & 3.402 & 0.098 & 19.838 & 0.489 & -12.931 & 33.665 & -5.394 & 3.942 & 0.916 & 0.007 & 0.098 & 0.030 & 0.129 & 0.031 & 0.603 & 0.005 & 0.453 & 0.006 & 0.438 & 0.005 & 28.6 & 1799.9\\
1572.46112 & -3.621 & 2.205 & -6.265 & 7.791 & 3.408 & 0.095 & 19.819 & 0.470 & -4.081 & 21.809 & -3.106 & 2.481 & 0.912 & 0.004 & 0.088 & 0.010 & 0.109 & 0.011 & 0.617 & 0.003 & 0.455 & 0.003 & 0.437 & 0.003 & 53.3 & 1800.0\\
1572.48350 & 1.494 & 2.072 & -4.866 & 9.258 & 3.406 & 0.095 & 19.776 & 0.473 & -3.231 & 19.942 & -6.222 & 3.108 & 0.898 & 0.004 & 0.096 & 0.014 & 0.092 & 0.015 & 0.610 & 0.003 & 0.453 & 0.003 & 0.434 & 0.003 & 45.5 & 1799.9\\
1601.37774 & -1.545 & 2.143 & -0.962 & 8.404 & 3.421 & 0.094 & 19.841 & 0.467 & -5.257 & 21.345 & -5.489 & 2.123 & 0.870 & 0.004 & 0.115 & 0.012 & 0.128 & 0.012 & 0.601 & 0.003 & 0.464 & 0.003 & 0.431 & 0.003 & 49.8 & 1799.9\\
1601.40002 & -2.308 & 2.022 & -10.244 & 8.113 & 3.421 & 0.095 & 19.793 & 0.468 & 14.596 & 19.841 & -1.655 & 1.709 & 0.884 & 0.004 & 0.121 & 0.012 & 0.110 & 0.012 & 0.604 & 0.003 & 0.460 & 0.003 & 0.431 & 0.003 & 51.5 & 1799.9\\
1827.60895 & 6.979 & 2.407 & -5.749 & 8.624 & 3.445 & 0.095 & 19.676 & 0.463 & -25.325 & 22.626 & 1.775 & 1.851 & 0.899 & 0.004 & 0.377 & 0.014 & 0.284 & 0.013 & 0.619 & 0.003 & 0.468 & 0.003 & 0.435 & 0.003 & 48.9 & 1800.0\\
1827.63081 & 6.889 & 1.836 & -6.143 & 7.999 & 3.438 & 0.095 & 19.711 & 0.465 & -32.825 & 14.021 & 2.311 & 2.438 & 0.911 & 0.004 & 0.346 & 0.012 & 0.269 & 0.012 & 0.619 & 0.003 & 0.468 & 0.003 & 0.441 & 0.003 & 52.6 & 1799.9\\
1828.62056 & 12.000 & 2.811 & -1.472 & 13.215 & 3.431 & 0.095 & 19.499 & 0.459 & -1.933 & 27.093 & 11.371 & 4.017 & 0.899 & 0.006 & 0.808 & 0.027 & 0.529 & 0.025 & 0.620 & 0.004 & 0.482 & 0.005 & 0.439 & 0.004 & 33.6 & 1800.0\\
1828.64288 & 13.495 & 2.143 & -12.437 & 8.630 & 3.425 & 0.095 & 19.722 & 0.466 & -18.398 & 17.964 & 2.227 & 2.066 & 0.894 & 0.004 & 0.471 & 0.014 & 0.308 & 0.013 & 0.612 & 0.003 & 0.471 & 0.003 & 0.434 & 0.003 & 48.9 & 1800.0\\
1832.64836 & -1.695 & 2.344 & -12.792 & 10.171 & 3.443 & 0.092 & 17.846 & 0.405 & -26.194 & 21.367 & 146.373 & 9.681 & 0.999 & 0.004 & 0.354 & 0.011 & 0.328 & 0.011 & 0.643 & 0.003 & 0.501 & 0.003 & 0.465 & 0.003 & 46.3 & 1800.0\\
1832.67223 & 7.014 & 3.580 & 2.522 & 11.558 & 3.450 & 0.090 & 17.235 & 0.383 & -33.277 & 34.301 & 203.736 & 12.673 & 0.902 & 0.004 & 0.373 & 0.012 & 0.339 & 0.012 & 0.621 & 0.004 & 0.475 & 0.004 & 0.447 & 0.003 & 42.7 & 1800.0\\
1844.58560 & 8.039 & 4.891 & 10.633 & 17.710 & 3.441 & 0.094 & 19.661 & 0.457 & -9.657 & 42.971 & -9.952 & 4.268 & 0.908 & 0.008 & 0.800 & 0.039 & 0.524 & 0.038 & 0.609 & 0.006 & 0.467 & 0.007 & 0.430 & 0.006 & 25.8 & 1800.0\\
1844.60982 & 8.094 & 4.010 & -0.344 & 15.149 & 3.422 & 0.095 & 19.661 & 0.468 & 28.181 & 30.200 & -11.293 & 3.691 & 0.887 & 0.007 & 0.657 & 0.031 & 0.454 & 0.030 & 0.616 & 0.005 & 0.456 & 0.006 & 0.438 & 0.005 & 29.5 & 1800.0\\
1845.62972 & 4.933 & 2.094 & -11.276 & 7.303 & 3.428 & 0.095 & 19.851 & 0.469 & 17.611 & 18.500 & -12.969 & 2.928 & 0.911 & 0.003 & 0.233 & 0.011 & 0.164 & 0.010 & 0.618 & 0.003 & 0.472 & 0.003 & 0.445 & 0.003 & 56.5 & 1799.9\\
1845.65236 & 6.175 & 2.004 & -8.235 & 7.891 & 3.430 & 0.095 & 19.781 & 0.468 & 27.083 & 17.036 & -0.760 & 2.098 & 0.900 & 0.004 & 0.303 & 0.011 & 0.227 & 0.011 & 0.613 & 0.003 & 0.469 & 0.003 & 0.436 & 0.003 & 52.7 & 1800.0\\
1847.65595 & -1.586 & 2.057 & 4.082 & 7.815 & 3.421 & 0.096 & 20.098 & 0.480 & 24.496 & 20.146 & -33.364 & 2.615 & 0.893 & 0.004 & 0.009 & 0.011 & 0.015 & 0.011 & 0.615 & 0.003 & 0.464 & 0.003 & 0.430 & 0.003 & 52.6 & 1800.0\\
1847.67883 & 2.018 & 2.162 & -6.840 & 8.339 & 3.428 & 0.096 & 20.089 & 0.480 & -43.030 & 20.129 & -33.534 & 2.762 & 0.887 & 0.004 & -0.047 & -0.012 & 0.000 & 0.013 & 0.608 & 0.003 & 0.462 & 0.003 & 0.428 & 0.003 & 49.6 & 1800.0\\
1848.58131 & 2.437 & 1.923 & -8.688 & 7.864 & 3.425 & 0.094 & 19.806 & 0.465 & -4.819 & 18.387 & 0.211 & 1.974 & 0.881 & 0.004 & 0.289 & 0.011 & 0.200 & 0.011 & 0.608 & 0.003 & 0.459 & 0.003 & 0.433 & 0.003 & 53.0 & 1800.0\\
1848.60345 & -2.489 & 1.791 & -5.797 & 7.755 & 3.439 & 0.095 & 19.733 & 0.467 & 25.893 & 16.353 & 0.811 & 1.915 & 0.888 & 0.003 & 0.272 & 0.011 & 0.194 & 0.010 & 0.617 & 0.003 & 0.464 & 0.003 & 0.437 & 0.003 & 53.7 & 1800.0\\
1849.61852 & -3.601 & 2.249 & -7.824 & 8.402 & 3.436 & 0.095 & 19.723 & 0.465 & 13.208 & 22.473 & -2.125 & 2.412 & 0.877 & 0.004 & 0.333 & 0.012 & 0.277 & 0.012 & 0.606 & 0.003 & 0.464 & 0.003 & 0.434 & 0.003 & 50.0 & 1800.0\\
1849.64087 & -8.947 & 1.868 & -13.159 & 7.697 & 3.428 & 0.094 & 19.778 & 0.463 & -21.078 & 17.379 & -2.301 & 2.094 & 0.888 & 0.003 & 0.326 & 0.011 & 0.247 & 0.010 & 0.605 & 0.003 & 0.466 & 0.003 & 0.439 & 0.003 & 54.2 & 1800.0\\
\hline
\end{tabular}
\end{center}
\end{tiny}
\end{sidewaystable}
|
Title:
Zwicky Transient Facility and Globular Clusters: The Period-Luminosity and Period-Wesenheit Relations for Type II Cepheids |
Abstract: We present the first gri-band period-luminosity (PL) and period-Wesenheit
(PW) relations for 37 Type II Cepheids (hereafter TIIC) located in 18 globular
clusters based on photometric data from the Zwicky Transient Facility. We also
updated BV IJHK-band absolute magnitudes for 58 TIIC in 24 globular clusters
using the latest homogeneous distances to the globular clusters. The slopes of
g/r/i and B/V/I band PL relations are found to be statistically consistent when
using the same sample of distance and reddening. We employed the calibration of
ri-band PL/PW relations in globular clusters to estimate a distance to M31
based on a sample of ~270 TIIC from the PAndromeda project. The distance
modulus to M31, obtained using calibrated ri-band PW relation, agrees well with
the recent determination based on classical Cepheids. However, distance moduli
derived using the calibrated r- and i-band PL relations are systematically
smaller by ~0.2 mag, suggesting there are possible additional systematic error
on the PL relations. Finally, we also derive the period-color (PC) relations
and for the first time the period-Q-index (PQ) relations, where the Q-index is
reddening-free, for our sample of TIIC. The PC relations based on (r-i) and
near-infrared colors and the PQ relations are found to be relatively
independent of the pulsation periods.
| https://export.arxiv.org/pdf/2208.03404 |
\shorttitle{Type II Cepheid PL \& PW relations}
\shortauthors{Ngeow et al.}
\title{Zwicky Transient Facility and Globular Clusters: The Period-Luminosity and Period-Wesenheit Relations for Type II Cepheids}
\correspondingauthor{C.-C. Ngeow}
\email{cngeow@astro.ncu.edu.tw}
\author[0000-0001-8771-7554]{Chow-Choong Ngeow}
\affil{Graduate Institute of Astronomy, National Central University, 300 Jhongda Road, 32001 Jhongli, Taiwan}
\author[0000-0001-6147-3360]{Anupam Bhardwaj}
\affil{INAF-Osservatorio astronomico di Capodimonte, Via Moiariello 16, 80131 Napoli, Italy}
\author{Jing-Yi Henderson}
\affil{Graduate Institute of Astronomy, National Central University, 300 Jhongda Road, 32001 Jhongli, Taiwan}
\author[0000-0002-3168-0139]{Matthew J. Graham}
\affiliation{Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125, USA}
\author[0000-0003-2451-5482]{Russ R. Laher}
\affiliation{IPAC, California Institute of Technology, 1200 E. California Blvd, Pasadena, CA 91125, USA}
\author[0000-0002-7226-0659]{Michael S. Medford}
\affiliation{University of California, Berkeley, Department of Astronomy, Berkeley, CA 94720, USA}
\affiliation{Lawrence Berkeley National Laboratory, 1 Cyclotron Rd., Berkeley, CA 94720, USA}
\author[0000-0003-1227-3738]{Josiah Purdum}
\affiliation{Caltech Optical Observatories, California Institute of Technology, Pasadena, CA 91125, USA}
\author[0000-0001-7648-4142]{Ben Rusholme}
\affiliation{IPAC, California Institute of Technology, 1200 E. California Blvd, Pasadena, CA 91125, USA}
\section{Introduction}\label{sec1}
The evolved and low-mass Type II Cepheids \citep[hereafter TIIC; for a general review, see][]{welch2012} are among the old-population distance indicators. Similar to the young Type I, or classical, Cepheids, TIIC also exhibit a period-luminosity (PL, or Leavitt Law) relation. However, TIIC are $\sim 2$~mag less luminous than the classical Cepheids. Nevertheless, TIIC are a few magnitudes more luminous, depending on the pulsation periods and filters, than the popular RR Lyrae -- another old-population distance indicator. Therefore, TIIC are useful for probing more distant stellar systems (such as dwarf galaxies and elliptical galaxies) independently of RR Lyrae stars. Comprehensive reviews of TIIC as distance indicators can be found, for example, in \citet{wallerstein2002}, \citet{sandage2006}, \citet{beaton2018}, \citet{bhardwaj2020}, and \citet{bhardwaj2022}.
Some of the earlier derivations of $BVI$-band (or a subset of these filters) PL relations for TIIC can be found, for example, in \citet{demers1971}, \citet{nemec1994}, \citet{alcock1998}, and \citet{pritzl2003}. Other works on the optical PL relations included a color term \citep{breger1975,alcock1998} to derive a period-luminosity-color (PLC) relation, or used the Wesenheit index to derive the equivalent period-Wesenheit (PW) relation \citep{kubiak2003,matsunaga2011,groenewegen2017}. Recently, the optical-band PL and PW relations were extended to the filters specific to the {\it Gaia} mission \citep{ripepi2019,ripepi2022}. In addition, \citet{groenewegen2017} also derived a bolometric PL relation based on a combined sample of TIIC in the Magellanic Clouds.
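For reference, the Wesenheit index underlying these PW relations combines a magnitude and a color so that interstellar extinction cancels for an adopted reddening law. In generic two-band form (a standard definition; the specific $gri$ coefficients used later in this work are not reproduced here):

```latex
% Wesenheit index for bands \lambda_1, \lambda_2 with extinctions A_{\lambda_i}:
W = m_{\lambda_2} - R \left( m_{\lambda_1} - m_{\lambda_2} \right),
\qquad R \equiv \frac{A_{\lambda_2}}{A_{\lambda_1} - A_{\lambda_2}} .
% Writing m_\lambda = m_{\lambda,0} + A_\lambda shows that the extinction
% terms cancel exactly, so W is reddening-free for a fixed reddening law.
```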
Compared to the optical PL relations, more studies in the past two decades have derived TIIC PL and PW relations in the near-infrared $JHK$ bands, or a subset of these filters. These near-infrared PL/PW relations were derived using TIIC located in various stellar systems, including globular clusters \citep{matsunaga2006}, the Galactic bulge \citep{groenewegen2008,bhardwaj2017a,braga2018}, the Large and/or Small Magellanic Cloud \citep{matsunaga2009,ciechanowska2010,matsunaga2011,ripepi2015,bhardwaj2017b,wiegorski2021}, and the nearby Milky Way field \citep{wiegorski2021}. Some of the derived $K$-band PL relations in the Galactic bulge also included an additional dependence on the Galactic longitude and latitude \citep{groenewegen2008,braga2018}.
To our knowledge, there are no $ugrizY$-band PL and PW relations available in the literature; these will be important in the era of the Vera C. Rubin Observatory Legacy Survey of Space and Time \citep[LSST,][]{lsst2019}. Therefore, the goal of this work is to derive the $gri$-band PL and PW relations for TIIC located in globular clusters, utilizing time-series observations from the Zwicky Transient Facility \citep[ZTF,][]{bellm2017,bellm2019,dec20,gra19} project and archival data compiled in \citet[][because ZTF cannot observe the southern sky]{bhardwaj2022}. TIIC in globular clusters have been used to derive PL relations in the past: \citet{demers1974} derived the $V$-band PL relation based on 17 TIIC found in 4 globular clusters, while \citet{pritzl2003} derived $BVI$-band PL relations using the two globular clusters (NGC 6388 and NGC 6441) that host the most TIIC (10 in total). Optical and near-infrared PL relations were also derived from larger samples of TIIC in \citet[][with $\sim 40$ TIIC in 15 globular clusters]{nemec1994} and \citet[][with 46 TIIC in 26 globular clusters]{matsunaga2006}, respectively. Note that the PL relations presented in \citet{matsunaga2006} were updated in \citet{braga2020} and \citet{bhardwaj2022}.
Section \ref{sec2} describes the TIIC sample and their ZTF light-curve data used in this work. In Section \ref{sec3}, we refine the pulsation periods and determine the mean magnitudes for our sample of TIIC. The derivation of the PL relations is presented in Section \ref{sec4}, and that of the multi-band relations (PW and period-color relations) in Section \ref{sec5}. We test our derived PL/PW relations on a sample of M31 TIIC in Section \ref{sec6}, followed by the conclusions of our work in Section \ref{sec7}.
\section{Sample and Data} \label{sec2}
\subsection{Selecting TIIC in Globular Clusters} \label{sec2.1}
We started the compilation of TIIC in globular clusters using the ``Updated Catalog of Variable Stars in Globular Clusters'' \citep[][hereafter Clement's Catalog]{clement2001,clement2017}, by selecting globular clusters that can be observed with ZTF ($\delta_{J2000} > -30^\circ$) and variable stars marked as ``CW'', ``CWA'', ``CWB'', ``RV'', or ``RVB'' in the Clement's Catalog.\footnote{Classifications of variable stars in the Clement's Catalog were based on the GCVS (General Catalog of Variable Stars) classification, available at \url{http://www.sai.msu.su/gcvs/gcvs/vartype.htm}. In brief, ``CW'' refers to the W Virginis type; ``CWA'' and ``CWB'' are subtypes of ``CW'' with pulsation periods separated at 8~days. ``RV'' refers to the RV Tauri type, and ``RVB'' is a subtype of ``RV'' that exhibits long-term periodic variations. Both W Virginis and RV Tauri stars are subtypes of TIIC.} The known or suspected foreground TIIC in the Clement's Catalog (marked with an ``f'' or ``f?''), however, were excluded. The preliminary list of TIIC was augmented with the catalogs presented in \citet{pritzl2003} and \citet{matsunaga2006}. We also searched the literature for new TIIC, and updated the equatorial coordinates, periods, and classifications of TIIC in our preliminary list. We identified five new, or re-classified, TIIC: V24 in M10 \citep{rozyczka2018}, V167 in M14 \citep{yepez2022}, V34 and ZK3 in M15 \citep{bhardwaj2021}, and V24 in M22 \citep{rozyczka2017}.
Similarly, we rejected the TIIC that were re-classified as other types of variable stars in recent works; these include V1 in M10 \citep[identified as a semi-regular variable in][]{rozyczka2018}, V72 and V142 in M15 \citep[identified as an RR Lyrae and an anomalous Cepheid, respectively, in][]{bhardwaj2021}, V21 and V22 in M28 \citep[identified as a long-period variable and an RR Lyrae, respectively, in][]{prieto2012}, V8 in M79 \citep[identified as a semi-regular variable in][]{bond2016}, and V7 in M92 \citep[identified as an anomalous Cepheid in][]{osborn2012}. We also excluded S7 in M3 because the position of this variable star coincides with V254, a known RR Lyrae. Altogether, our preliminary list contains 50 TIIC located in 23 globular clusters.
\subsection{Extracting ZTF Light-Curves} \label{sec2.2}
ZTF is a wide-field synoptic survey of the northern sky observed in the $gri$ filters. Combining the Samuel Oschin 48-inch Schmidt telescope (located at Palomar Observatory) with a dedicated wide-field mosaic CCD camera, ZTF achieves a field-of-view of $47$~square degrees while maintaining a pixel scale of $1.01\arcsec$/pixel. ZTF carries out three high-level surveys: the partner surveys, the public surveys, and the Caltech (California Institute of Technology) surveys. Imaging data from all of these high-level surveys were processed through a dedicated reduction pipeline \citep{mas19}, and the photometry was calibrated to the Pan-STARRS1 \citep[Panoramic Survey Telescope and Rapid Response System 1,][]{chambers2016,magnier2020} AB magnitude system. The preliminary TIIC sample was cross-matched to the PSF (point-spread function) catalogs generated by the reduction pipeline using a $1\arcsec$~search radius. The extracted $gri$-band (whenever available) light curves for these TIIC were based on the ZTF Public Data Release 10 (DR10) data and partner-survey data until 2022 March 31. Of the preliminary sample of 50 TIIC, 48 have ZTF light curves in at least two of the $gri$ filters (1 and 11 TIIC lack the $g$- and $i$-band light curves, respectively). The number of data points per light curve varies from 1 to $\sim1500$, with medians of $158$, $504$, and $51$ in the $gri$ bands, respectively. The two TIIC without ZTF light curves are V1 and V2 in M19.
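The $1\arcsec$ cross-match described above can be sketched with a plain-NumPy angular-separation computation. The coordinates below are hypothetical placeholders, not actual catalog entries; production work would query the ZTF PSF catalogs themselves:

```python
import numpy as np

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation (arcsec) between points given in degrees,
    using the Vincenty formula, which is stable at small separations."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = np.hypot(np.cos(dec2) * np.sin(dra),
                   np.cos(dec1) * np.sin(dec2)
                   - np.sin(dec1) * np.cos(dec2) * np.cos(dra))
    den = np.sin(dec1) * np.sin(dec2) + np.cos(dec1) * np.cos(dec2) * np.cos(dra)
    return np.degrees(np.arctan2(num, den)) * 3600.0

# Hypothetical TIIC position and mock PSF-catalog entries (degrees)
tiic_ra, tiic_dec = 322.49304, 12.16700
cat_ra = np.array([322.49306, 322.49500, 100.0])
cat_dec = np.array([12.16701, 12.16900, 20.0])

sep = angular_sep_arcsec(tiic_ra, tiic_dec, cat_ra, cat_dec)
match = sep < 1.0  # the 1-arcsec search radius used in the text
print(sep.round(3), match)
```

Only the first mock entry falls inside the search radius; the second is separated by several arcseconds and is rejected.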
\section{Periods and Mean Magnitudes} \label{sec3}
Since it is well known that TIIC undergo period changes \citep[for example, see][roughly in the range of $\sim10^{-8}$ to $\sim10^{-11}$~days/day]{wehlau1982,percy1997,percy2000,schmidt2004,schmidt2005a,schmidt2005b,rabidoux2010,osborn2012,soszynski2018,karmakar2019,berdnikov2021}, we re-determined the periods of our sample of TIIC with ZTF light curves instead of adopting the published periods.
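The period re-determination relies on a periodogram search over sparsely sampled light curves. The sketch below recovers a known period from synthetic data with a single-band least-squares periodogram in plain NumPy, a minimal stand-in for the multiband Lomb-Scargle used in the text; the sampling, amplitude, and noise values are illustrative only:

```python
import numpy as np

# Synthetic, sparsely sampled light curve with a period similar to
# V1 in M15 (1.4378 d); values are illustrative, not ZTF data.
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 300.0, 400))          # 400 epochs over ~300 d
true_period = 1.4378
mag = 15.0 + 0.4 * np.sin(2 * np.pi * t / true_period) \
      + rng.normal(0.0, 0.02, t.size)

def ls_power(t, y, periods):
    """Least-squares power of a single sinusoid at each trial period:
    total variance minus the residual sum of squares of the best fit."""
    y = y - y.mean()
    power = np.empty(periods.size)
    for k, p in enumerate(periods):
        phase = 2 * np.pi * t / p
        A = np.column_stack([np.sin(phase), np.cos(phase)])
        coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
        power[k] = (y @ y) - (res[0] if res.size else ((y - A @coef)**2).sum())
    return power

periods = np.linspace(1.3, 1.6, 6000)               # trial-period grid
best = periods[np.argmax(ls_power(t, mag, periods))]
print(round(best, 4))
```

With a 300-day baseline the frequency resolution is fine enough that the recovered period agrees with the input to a few times $10^{-3}$ d.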
\begin{deluxetable*}{llllrrrrrrrrl}
\tabletypesize{\scriptsize}
\tablecaption{Basic Information and Mean Magnitudes for ZTF Sample of TIIC in Globular Clusters\label{tab_t2cep}}
\tablewidth{0pt}
\tablehead{
\colhead{G. C.} &
\colhead{Var. Name} &
\colhead{$P_{\mathrm{lit}}$\tablenotemark{a} (days)} &
\colhead{$P$ (days)} &
\colhead{$N_g$} &
\colhead{$N_r$} &
\colhead{$N_i$} &
\colhead{$\langle g\rangle$} &
\colhead{$\langle r\rangle$} &
\colhead{$\langle i\rangle$} &
\colhead{$D$\tablenotemark{b} (kpc)} &
\colhead{$E$\tablenotemark{c}} &
\colhead{Note\tablenotemark{d}}
}
\startdata
M15 & V1 & 1.43781 & 1.437812 & 540 & 665 & 135 & 15.029 & 14.837 & 14.754 & $10.71\pm0.10$ & $0.068\pm0.002$ & 1 \\
M13 & V1 & 1.45902 & 1.459040 & 1067 & 1103 & 215 & 14.173 & 14.037 & 14.011 & $7.42\pm0.08$ & $0.000\pm0.000$ & 3 \\
M56 & V1 & 1.51000 & 1.509997 & 392 & 851 & 31 & 15.685 & 15.232 & 15.049 & $10.43\pm0.14$ & $0.202\pm0.002$ & 8 \\
NGC2419& V18 & 1.57870 & 1.578572 & 379 & 1078 & 63 & 19.032 & 18.733 & 18.633 & $88.47\pm2.40$ & $0.144\pm0.004$ & 8 \\
M22 & V11 & 1.69050 & 1.690401 & 77 & 581 & 0 & 12.835 & 12.230 & $\cdots$ & $3.30\pm0.04$ & $0.419\pm0.006$ & 8 \\
M22 & V24 & 1.71485 & 1.715079 & 76 & 581 & 0 & 13.746 & 13.130 & $\cdots$ & $3.30\pm0.04$ & $0.419\pm0.006$ & 5 \\
M15 & ZK3 & 1.74634 & 1.746591 & 537 & 667 & 134 & 15.361 & 15.000 & 14.799 & $10.71\pm0.10$ & $0.162\pm0.004$ & 1 \\
NGC6401& V3 & 1.74870 & 1.747028 & 83 & 343 & 71 & 17.092 & 15.947 & 15.317 & $8.06\pm0.24$ & $0.926\pm0.002$ & 8 \\
M14 & V76 & 1.88990 & 1.890065 & 182 & 569 & 1 & 16.329 & 15.508 & $\cdots$ & $9.14\pm0.25$ & $0.540\pm0.000$ & 7 \\
M13 & V6 & 2.11286 & 2.112920 & 1049 & 1077 & 215 & 14.271 & 13.962 & 13.854 & $7.42\pm0.08$ & $0.000\pm0.000$ & 3 \\
M10 & V24 & 2.30746 & 2.307591 & 71 & 142 & 1 & 14.355 & 13.728 & $\cdots$ & $5.07\pm0.06$ & $0.312\pm0.002$ & 6 \\
M19 & V4 & 2.43260 & 2.432354 & 62 & 411 & 0 & 15.555 & 14.943 & $\cdots$ & $8.34\pm0.16$ & $0.488\pm0.005$ & 8 \\
M14 & V2 & 2.79490 & 2.794852 & 182 & 582 & 1 & 15.955 & 15.093 & $\cdots$ & $9.14\pm0.25$ & $0.540\pm0.000$ & 7 \\
NGC6284& V4 & 2.81870 & 2.818707 & 63 & 486 & 0 & 16.029 & 15.446 & $\cdots$ & $14.21\pm0.42$ & $0.318\pm0.002$ & 8 \\
NGC6749& V1 & 4.48100 & 4.477411 & 125 & 296 & 2 & 18.515 & 16.633 & $\cdots$ & $7.59\pm0.21$ & $1.346\pm0.007$ & 8 \\
NGC6284& V1 & 4.48120 & 4.484024 & 66 & 493 & 0 & 15.806 & 15.131 & $\cdots$ & $14.21\pm0.42$ & $0.318\pm0.002$ & 8 \\
M13 & V2 & 5.11078 & 5.111326 & 1071 & 1097 & 216 & 13.157 & 12.882 & 12.787 & $7.42\pm0.08$ & $0.000\pm0.000$ & 3 \\
M14 & V167 & 6.20100 & 6.205786 & 182 & 564 & 1 & 16.046 & 14.965 & $\cdots$ & $9.14\pm0.25$ & $0.560\pm0.003$ & 7 \\
NGC6325& V2 & 10.74400 & 10.748907 & 66 & 498 & 0 & 16.533 & 14.938 & $\cdots$ & $7.53\pm0.32$ & $0.966\pm0.005$ & 8 \\
M14 & V17 & 12.07580 & 12.092216 & 184 & 582 & 1 & 15.189 & 14.123 & $\cdots$ & $9.14\pm0.25$ & $0.540\pm0.000$ & 7 \\
NGC6325& V1 & 12.51600 & 12.522716 & 65 & 497 & 0 & 16.299 & 14.716 & $\cdots$ & $7.53\pm0.32$ & $0.928\pm0.006$ & 8 \\
M28 & V4 & 13.46200 & 13.480377 & 136 & 909 & 144 & 13.532 & 12.558 & 12.093 & $5.37\pm0.10$ & $0.458\pm0.004$ & 8 \\
M14 & V7 & 13.58970 & 13.592731 & 185 & 581 & 1 & 15.222 & 14.104 & $\cdots$ & $9.14\pm0.25$ & $0.560\pm0.003$ & 7 \\
M79 & V7 & 13.99950 & 14.057529 & 114 & 136 & 0 & 13.824 & 13.304 & $\cdots$ & $13.08\pm0.18$ & $0.014\pm0.002$ & 2 \\
NGC6229& V8 & 14.84600 & 14.844260 & 1469 & 1484 & 431 & 15.699 & 15.117 & 14.939 & $30.11\pm0.47$ & $0.092\pm0.002$ & 8 \\
M2 & V1 & 15.56470 & 15.542598 & 61 & 70 & 6 & 13.596 & 13.075 & $\cdots$ & $11.69\pm0.11$ & $0.000\pm0.000$ & 8 \\
M80 & V1 & 16.28134 & 16.306309 & 62 & 74 & 0 & 13.734 & 13.097 & $\cdots$ & $10.34\pm0.12$ & $0.220\pm0.003$ & 4 \\
M19 & V3 & 16.50000 & 16.686135 & 66 & 421 & 0 & 14.128 & 13.157 & $\cdots$ & $8.34\pm0.16$ & $0.488\pm0.005$ & 8 \\
M15 & V86 & 16.84211 & 16.833319 & 514 & 650 & 133 & 13.112 & 12.553 & 12.353 & $10.71\pm0.10$ & $0.162\pm0.004$ & 1 \\
M2 & V5 & 17.55700 & 17.574309 & 132 & 215 & 53 & 13.572 & 13.015 & 12.831 & $11.69\pm0.11$ & $0.004\pm0.004$ & 8 \\
M10 & V2 & 19.47099 & 18.713201 & 146 & 333 & 2 & 12.211 & 11.504 & $\cdots$ & $5.07\pm0.06$ & $0.312\pm0.002$ & 6 \\
M14 & V1 & 19.74110 & 18.749399 & 184 & 581 & 1 & 14.762 & 13.692 & $\cdots$ & $9.14\pm0.25$ & $0.568\pm0.002$ & 7 \\
M5 & V42 & 25.73500 & 25.710120 & 199 & 316 & 75 & 11.457 & 11.123 & 10.927 & $7.48\pm0.06$ & $0.090\pm0.000$ & 8 \\
M2 & V6 & 19.29900 & 38.581288 & 156 & 257 & 61 & 13.438 & 12.892 & 12.696 & $11.69\pm0.11$ & $0.000\pm0.000$ & 8 \\
M5 & V84 & 53.95000 & 52.934619 & 245 & 424 & 100 & 11.626 & 11.231 & 11.039 & $7.48\pm0.06$ & $0.112\pm0.002$ & 8 \\
M2 & V11 & 67.00000 & 66.453838 & 132 & 218 & 50 & 12.300 & 11.933 & 11.755 & $11.69\pm0.11$ & $0.000\pm0.000$ & 8 \\
M56 & V6 & 90.00000 & 89.320054 & 391 & 857 & 31 & 13.278 & 12.386 & 11.827 & $10.43\pm0.14$ & $0.202\pm0.002$ & 8 \\
\enddata
\tablenotetext{a}{Period published in the literature.}
\tablenotetext{b}{Distance of the globular clusters adopted from \citet{baumgardt2021}.}
\tablenotetext{c}{Reddening returned from the {\tt Bayestar2019} 3D reddening map \citep{green2019} at the location of the TIIC with distance $D$ from \citet{baumgardt2021}.}
\tablenotetext{d}{Literature period adopted from the following reference: 1 = \citet{bhardwaj2021}; 2 = \citet{bond2016}; 3 = \citet{osborn2019}; 4 = \citet{plachy2017}; 5 = \citet{rozyczka2017}; 6 = \citet{rozyczka2018}; 7 = \citet{yepez2022}; 8 = Clement's Catalog.}
\end{deluxetable*}
Given that the majority of our sample of TIIC have ZTF light curves in two or three filters, we employed the {\tt LombScargleMultiband} module available in the {\tt astroML/gatspy}\footnote{\url{https://github.com/astroML/gatspy}, also see \citet{vdp2016}.} package \citep{vdp2015} to refine the periods for our sample of TIIC in a two-step process. In the first step, the ZTF light curves were folded using the periods identified from the first pass of {\tt LombScargleMultiband}, and then fit with a low-order Fourier expansion of the following form \citep[for example, see][]{deb2009}:
\begin{eqnarray}
m(\Phi) & = & m_0 + \sum^n_{j=1} \left[ a_j \cos (2 \pi j \Phi) + b_j \sin (2 \pi j \Phi)\right],
\end{eqnarray}
\noindent where $\Phi \in [0,1]$ is the pulsational phase. Note that we only fit equation (1) to light curves with more than 30 data points. Outliers beyond $3\sigma$ were excluded, where $\sigma$ represents the dispersion about the fitted light curve, and {\tt LombScargleMultiband} was run again in a second pass to obtain the final adopted periods. The periods obtained from {\tt LombScargleMultiband} needed to be doubled for three TIIC (V11 in M2, V84 in M5, and V6 in M56) in order to match the published periods. We found that the period of V6 in M2 also needs to be doubled, because alternating minima can be seen in its light curves (as displayed in Figure \ref{fig_m2v6}).
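The Fourier fit with iterative $3\sigma$ outlier rejection described above can be sketched in plain {\tt numpy}; this is a simplified, single-band illustration (the actual multiband period search used {\tt gatspy}):

```python
import numpy as np

def fit_fourier(phase, mag, order=4, clip_sigma=3.0, max_iter=5):
    """Fit equation (1), m(Phi) = m0 + sum_j [a_j cos(2 pi j Phi)
    + b_j sin(2 pi j Phi)], by linear least squares, iteratively
    rejecting outliers beyond clip_sigma times the dispersion."""
    j = np.arange(1, order + 1)
    arg = 2.0 * np.pi * np.outer(phase, j)
    # design matrix: constant term, then cosine and sine harmonics
    X = np.hstack([np.ones((phase.size, 1)), np.cos(arg), np.sin(arg)])
    keep = np.ones(phase.size, dtype=bool)
    for _ in range(max_iter):
        coeff, *_ = np.linalg.lstsq(X[keep], mag[keep], rcond=None)
        resid = mag - X @ coeff
        sigma = resid[keep].std()
        new_keep = np.abs(resid) < clip_sigma * sigma
        if sigma == 0 or np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return coeff, keep  # coeff = [m0, a_1..a_n, b_1..b_n]
```

Because the model is linear in the Fourier coefficients, each pass is a single least-squares solve; only the outlier mask is iterated.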
We visually inspected all light curves folded with the final adopted periods. We removed 9 TIIC (V1 in M12, V12 in M13, V34 in M15, V17 and V32 in M28, V22 in NGC6229, V2 in NGC6293, V4 in NGC7492, and V4 in Pal3) from our sample because they exhibit evidence of blending (such as no variations or large scatter in the ZTF light curves). We further removed 2 TIIC (V154 in M3 and V3 in M10) that have only 19 data points in the $r$-band light curve (and 30 or fewer data points in total across all three filters). Finally, 37 TIIC remained in our sample, and their intensity mean magnitudes were obtained from the fitted low-order Fourier expansion given in equation (1). The final adopted periods and intensity mean magnitudes of these TIIC are listed in Table \ref{tab_t2cep}. Examples of the ZTF light curves are presented in Figure \ref{fig_lc}.
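The intensity mean magnitude is a flux-averaged mean: the fitted model light curve is converted to flux, averaged over one pulsation cycle, and converted back to a magnitude. A minimal sketch:

```python
import numpy as np

def intensity_mean(model_mag):
    """Intensity (flux-averaged) mean magnitude of a model light curve
    sampled on an evenly spaced phase grid over one pulsation cycle."""
    flux = 10.0 ** (-0.4 * np.asarray(model_mag, dtype=float))
    return -2.5 * np.log10(flux.mean())
```

For a variable star the intensity mean is always at least as bright as the straight magnitude mean, since the flux average weights the bright phases more heavily.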
\section{The PL Relations} \label{sec4}
\subsection{Preliminary PL Relations} \label{sec4.1}
Homogeneous and accurate distances to the globular clusters were adopted from \citet{baumgardt2021}, who combined various distance measurements based on {\it Gaia} and/or {\it Hubble Space Telescope} data, as well as literature distances, to obtain averaged distances via a likelihood analysis. Using these distances, we queried the {\tt Bayestar2019} 3D reddening map \citep{green2019}\footnote{See \url{http://argonaut.skymaps.info/usage}\label{fn2}} via the {\tt dustmaps}\footnote{\url{https://dustmaps.readthedocs.io/en/latest/}} code \citep{green2018} to obtain the reddening $E$ toward each TIIC, and corrected the mean magnitudes for extinction using $A_g = 3.518E$, $A_r = 2.617E$, and $A_i=1.971E$ \citep{green2019}. A linear regression was fitted to the extinction-corrected absolute magnitudes of 37 and 17 TIIC in the $gr$- and $i$-band, respectively. While fitting the PL relations, we did not separate the TIIC into the three sub-types (BL Herculis, W Virginis, and RV Tauri), mainly due to the small number of stars in each subtype.
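The per-star correction can be sketched as follows, using the $gri$ extinction coefficients quoted above (the reddening query itself would go through the {\tt dustmaps} package):

```python
import numpy as np

# extinction coefficients A_x = R_x * E for the Bayestar2019 reddening E
R_COEFF = {"g": 3.518, "r": 2.617, "i": 1.971}

def absolute_mag(m, band, E, D_kpc):
    """Extinction-corrected absolute magnitude:
    M = m - R_band * E - 5 log10(D / 10 pc)."""
    mu = 5.0 * np.log10(D_kpc * 1.0e3 / 10.0)  # distance modulus
    return m - R_COEFF[band] * E - mu
```

For example, a star with $m_g = 16$ in a cluster at 10~kpc ($\mu = 15$) and zero reddening has $M_g = 1$.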
We compare our preliminary $gri$-band PL relations to the Johnson-Cousins $BVI$-band and 2MASS $JHK$-band (hereafter collectively referred to as $BVIJHK$-band) PL relations, taken from \citet{bhardwaj2022}, in the left panel of Figure \ref{fig_compare}. The slopes of the $gri$-band PL relations follow the trend of becoming steeper at longer wavelengths; however, these $gri$-band PL slopes are shallower than expected from the trend portrayed by the $BVIJHK$-band PL slopes. Similar to our work, the $BVIJHK$-band PL relations were derived by \citet{bhardwaj2022} using a sample of 36 to 50 TIIC in globular clusters compiled from the literature. The distance moduli of these globular clusters were collected in \citet{braga2020}. In contrast to our work, these distance moduli were compiled from various publications \citep[see the references listed in Table 4 of][]{braga2020}. In the next subsection, we demonstrate that after updating the multi-band PL relations, the $gri$-band PL slopes are consistent with the $BVI$-band PL slopes, as shown in the upper-right panel of Figure \ref{fig_compare}. Similarly, the dispersions of the preliminary $gri$-band PL relations were larger (especially in the $i$-band), and improvements were evident after updating the PL relations.
\begin{deluxetable*}{lllllllllrcl}
\tabletypesize{\scriptsize}
\tablecaption{Basic Information and Mean Magnitudes for B22 Sample of TIIC in Globular Clusters\label{tab_b22}}
\tablewidth{0pt}
\tablehead{
\colhead{G. C.} &
\colhead{Var. Name} &
\colhead{$P$ (days)} &
\colhead{$B$} &
\colhead{$V$} &
\colhead{$I$} &
\colhead{$J$} &
\colhead{$H$} &
\colhead{$K$} &
\colhead{$D$\tablenotemark{a} (kpc)} &
\colhead{$E(B-V)$\tablenotemark{b}} &
\colhead{Reference\tablenotemark{c}}
}
\startdata
NGC5139 & V43 & 1.1569 & 14.139 & 13.759 & 13.149 & 12.730 & 12.492 & 12.426 & $5.43\pm0.05$ & 0.14 & 3 \\
NGC5139 & V92 & 1.346 & 14.480 & 13.946 & 13.199 & 12.700 & 12.340 & 12.313 & $5.43\pm0.05$ & 0.13 & 3 \\
NGC5139 & V60 & 1.3495 & 14.028 & 13.624 & 13.001 & 12.584 & 12.295 & 12.281 & $5.43\pm0.05$ & 0.14 & 3 \\
M15 & V1 & 1.4377 & 15.412 & 14.954 & 14.362 & 13.94 & $\cdots$ & 13.65 & $10.71\pm0.10$ & 0.11 & 8 \\
M56 & V1 & 1.51 & 16.01 & 15.46 & $\cdots$ & 13.99 & 13.66 & 13.57 & $10.43\pm0.14$ & 0.25 & 18 \\
M62 & V73 & 1.7 & 16.147 & 15.243 & 13.966 & $\cdots$ & $\cdots$ & $\cdots$ & $6.41\pm0.10$ & 0.45 & 6, 17 \\
NGC2808 & V10 & 1.7653 & 15.91 & 15.28 & 14.47 & 13.89 & 13.54 & 13.43 & $10.06\pm0.11$ & 0.22 & 12 \\
M14 & V76 & 1.8903 & 16.881 & 15.978 & 14.750 & 13.78 & 13.30 & 13.16 & $9.14\pm0.25$ & 0.48 & 7 \\
M15 & V34 & 2.03355 & $\cdots$ & $\cdots$ & $\cdots$ & 13.756 & $\cdots$ & 13.340 & $10.71\pm0.10$ & 0.11 & \\
NGC5139 & V61 & 2.2736 & 14.293 & 13.661 & 12.821 & 12.190 & 11.811 & 11.771 & $5.43\pm0.05$ & 0.14 & 3 \\
M19 & V4 & 2.4326 & 14.75 & $\cdots$ & 13.947 & 13.28 & 12.85 & 12.77 & $8.34\pm0.16$ & 0.31 & 4, 17 \\
NGC6441 & V132 & 2.5474 & 17.218 & 16.478 & 15.241 & $\cdots$ & $\cdots$ & $\cdots$ & $12.73\pm0.16$ & 0.61 & 13 \\
M14 & V2 & 2.7947 & 16.596 & 15.629 & 14.337 & 13.45 & 12.98 & 12.85 & $9.14\pm0.25$ & 0.48 & 7 \\
NGC6284 & V4 & 2.8187 & 16.04 & $\cdots$ & 14.786 & 14.15 & 13.71 & 13.67 & $14.21\pm0.42$ & 0.31 & 5, 17 \\
NGC5139 & V48 & 4.4752 & 13.528 & 12.924 & 12.092 & 11.59 & 11.14 & 11.15 & $5.43\pm0.05$ & 0.14 & 3 \\
NGC6749 & V1 & 4.481 & $\cdots$ & $\cdots$ & $\cdots$ & 13.38 & 12.62 & 12.34 & $7.59\pm0.21$ & 1.75 & \\
NGC6284 & V1 & 4.4812 & 15.88 & $\cdots$ & 14.504 & 13.68 & 13.24 & 13.18 & $14.21\pm0.42$ & 0.30 & 5, 17 \\
M10 & V3 & 7.831 & 13.62 & 12.75 & 11.721 & 11.02 & 10.55 & 10.36 & $5.07\pm0.06$ & 0.27 & 2, 15 \\
NGC6441 & V153 & 9.89 & $\cdots$ & $\cdots$ & 13.72 & $\cdots$ & $\cdots$ & $\cdots$ & $12.73\pm0.16$ & 0.62 & 16 \\
M62 & V2 & 10.59 & 14.408 & 13.418 & 12.065 & 11.22 & 10.64 & 10.53 & $6.41\pm0.10$ & 0.47 & 6, 17 \\
NGC6325 & V2 & 10.744 & $\cdots$ & $\cdots$ & 13.632 & 12.14 & 11.43 & 11.22 & $7.53\pm0.32$ & 0.96 & 17 \\
NGC6441 & V154 & 10.83 & $\cdots$ & $\cdots$ & 13.57 & $\cdots$ & $\cdots$ & $\cdots$ & $12.73\pm0.16$ & 0.61 & 16 \\
M14 & V17 & 12.091 & 15.846 & 14.676 & 13.182 & $\cdots$ & $\cdots$ & $\cdots$ & $9.14\pm0.25$ & 0.47 & 7 \\
NGC6256 & V1 & 12.447 & $\cdots$ & $\cdots$ & 13.402 & 11.86 & 11.15 & 10.85 & $7.24\pm0.29$ & 1.71 & 17 \\
NGC6325 & V1 & 12.516 & $\cdots$ & $\cdots$ & 13.436 & 11.97 & 11.25 & 11.02 & $7.53\pm0.32$ & 0.95 & 17 \\
M28 & V4 & 13.462 & 14.21 & $\cdots$ & 11.734 & 10.78 & 10.18 & 10.01 & $5.37\pm0.10$ & 0.49 & 17, 19 \\
NGC6441 & V128 & 13.519 & 16.475 & 15.257 & 13.795 & $\cdots$ & $\cdots$ & $\cdots$ & $12.73\pm0.16$ & 0.61 & 13 \\
M14 & V7 & 13.6038 & 16.051 & 14.745 & 13.224 & 12.04 & 11.46 & 11.29 & $9.14\pm0.25$ & 0.48 & 7 \\
M19 & V2 & 14.139 & 14.15 & $\cdots$ & 12.242 & 11.53 & 11.06 & 10.92 & $8.34\pm0.16$ & 0.32 & 4, 17 \\
HP1 & V17 & 14.42 & $\cdots$ & $\cdots$ & $\cdots$ & 11.91 & 11.09 & 10.78 & $7.00\pm0.14$ & 2.32 & \\
NGC5139 & V29 & 14.7338 & 12.776 & 12.015 & 11.049 & 10.43 & 10.03 & 9.93 & $5.43\pm0.05$ & 0.14 & 3 \\
M3 & V154 & 15.29 & 12.79 & 12.33 & 11.68 & 11.45 & 11.06 & 10.99 & $10.18\pm0.08$ & 0.01 & 14 \\
M12 & V1 & 15.527 & $\cdots$ & $\cdots$ & $\cdots$ & 10.24 & 9.79 & 9.64 & $5.11\pm0.05$ & 0.18 & \\
M2 & V1 & 15.5647 & 13.97 & 13.36 & $\cdots$ & 11.93 & 11.54 & 11.45 & $11.69\pm0.11$ & 0.04 & 9 \\
M80 & V1 & 16.3042 & 14.19 & 13.365 & $\cdots$ & 11.65 & 11.23 & 11.10 & $10.34\pm0.12$ & 0.21 & 11, 20 \\
HP1 & V16 & 16.4 & $\cdots$ & $\cdots$ & $\cdots$ & 11.77 & 10.99 & 10.70 & $7.00\pm0.14$ & 2.39 & \\
M19 & V3 & 16.5 & 13.70 & $\cdots$ & 12.417 & $\cdots$ & $\cdots$ & $\cdots$ & $8.34\pm0.16$ & 0.31 & 4, 17 \\
M15 & V86 & 16.829 & 14.368 & 13.659 & 12.646 & 11.70 & 11.32 & 11.19 & $10.71\pm0.10$ & 0.11 & 8 \\
M19 & V1 & 16.92 & 13.85 & $\cdots$ & 12.260 & 11.37 & 10.88 & 10.75 & $8.34\pm0.16$ & 0.32 & 4, 17 \\
M2 & V5 & 17.557 & 13.89 & 13.28 & $\cdots$ & 11.80 & 11.40 & 11.31 & $11.69\pm0.11$ & 0.04 & 9 \\
NGC6441 & V129 & 17.832 & 16.395 & 15.128 & 13.610 & 12.14 & 11.61 & 11.65 & $12.73\pm0.16$ & 0.62 & 13 \\
M10 & V2 & 18.7226 & 13.01 & 12.05 & 10.934 & 10.05 & 9.61 & 9.47 & $5.07\pm0.06$ & 0.29 & 2, 15 \\
M14 & V1 & 18.729 & 15.429 & 14.210 & 12.633 & 11.63 & 11.10 & 10.89 & $9.14\pm0.25$ & 0.48 & 7 \\
Terzan1 & V5 & 18.85 & $\cdots$ & $\cdots$ & 14.576 & 11.97 & 10.93 & 10.61 & $5.67\pm0.17$ & 6.86 & 17 \\
M2 & V6 & 19.299 & 13.74 & 13.14 & $\cdots$ & 11.72 & 11.33 & 11.25 & $11.69\pm0.11$ & 0.04 & 9 \\
NGC6441 & V127 & 19.773 & 16.398 & 15.048 & 13.441 & $\cdots$ & $\cdots$ & $\cdots$ & $12.73\pm0.16$ & 0.61 & 13 \\
NGC6441 & V126 & 20.625 & 16.282 & 14.997 & 13.402 & $\cdots$ & $\cdots$ & $\cdots$ & $12.73\pm0.16$ & 0.61 & 13 \\
NGC6441 & V6 & 21.365 & 16.117 & 14.885 & 13.231 & 12.16 & 11.64 & 11.49 & $12.73\pm0.16$ & 0.61 & 13 \\
M5 & V42 & 25.735 & 11.82 & 11.659 & 10.740 & 10.16 & 9.85 & 9.82 & $7.48\pm0.06$ & 0.04 & 1, 14 \\
M5 & V84 & 26.87 & 12.11 & 11.287 & 10.451 & 10.20 & 9.80 & 9.71 & $7.48\pm0.06$ & 0.04 & 1, 14 \\
NGC6453 & V2 & 27.1954 & $\cdots$ & 14.231 & 12.375 & 11.35 & 10.75 & 10.59 & $10.07\pm0.22$ & 0.66 & 17 \\
NGC5139 & V1 & 29.3479 & 11.488 & 10.829 & 10.058 & 9.40 & 9.05 & 8.99 & $5.43\pm0.05$ & 0.13 & 3 \\
NGC6453 & V1 & 31.0476 & $\cdots$ & 14.601 & 12.789 & 11.51 & 10.85 & 10.66 & $10.07\pm0.22$ & 0.66 & 17 \\
M2 & V11 & 33.4 & 12.67 & 12.11 & $\cdots$ & 10.87 & 10.53 & 10.44 & $11.69\pm0.11$ & 0.04 & 9 \\
NGC5986 & V13 & 40.62 & $\cdots$ & $\cdots$ & $\cdots$ & 10.90 & 10.22 & 10.07 & $10.54\pm0.13$ & 0.34 & \\
M56 & V6 & 45.0 & 13.7 & 12.9 & $\cdots$ & 10.86 & 10.37 & 10.21 & $10.43\pm0.14$ & 0.25 & 18 \\
M28 & V17 & 48.0 & $\cdots$ & $\cdots$ & $\cdots$ & 9.55 & 8.95 & 8.75 & $5.37\pm0.10$ & 0.49 & \\
NGC6569 & V16 & 87.5 & 16.55 & $\cdots$ & $\cdots$ & 10.56 & 9.74 & 9.45 & $10.53\pm0.26$ & 0.43 & 10 \\
\enddata
\tablenotetext{a}{Distance of the globular clusters adopted from \citet{baumgardt2021}.}
\tablenotetext{b}{Reddening returned from the ``SFD'' dust map \citep{sfd1998}.}
\tablenotetext{c}{Sources for the $BVI$-band mean magnitudes: 1 = \citet{af2016}; 2 = \citet{af2020}; 3 = \citet{braga2020}; 4 = \citet{clement1978}; 5 = \citet{clement1980}; 6 = \citet{conteras2010}; 7 = \citet{cp2018}; 8 = \citet{corwin2008}; 9 = \citet{demers1969}; 10 = \citet{hl1985}; 11 = \citet{kopacki2013}; 12 = \citet{kunder2013}; 13 = \citet{pritzl2003}; 14 = \citet{rabidoux2010}; 15 = \citet{rozyczka2018}; 16 = \citet{skottfelt2015}; 17 = \citet{udalski2018}; 18 = \citet{wehlau1985}; 19 = \citet{wehlau1990a}; 20 = \citet{wehlau1990}.}
\end{deluxetable*}
\subsection{Updated PL Relations} \label{sec4.2}
\begin{deluxetable}{ccrcc}
\tabletypesize{\scriptsize}
\tablecaption{The Derived Period-Luminosity Relations for TIIC in the Globular Clusters \label{tab_pl}}
\tablewidth{0pt}
\tablehead{
\colhead{Band} &
\colhead{$a$} &
\colhead{$b$} &
\colhead{$\sigma$} &
\colhead{$N$}
}
\startdata
$B$ & $-1.64\pm0.14$ & $0.39\pm0.14$ & 0.42 & 42 \\
$V$ & $-1.88\pm0.10$ & $0.13\pm0.11$ & 0.31 & 37 \\
$I$ & $-2.09\pm0.08$ & $-0.39\pm0.08$ & 0.24 & 41 \\
\hline
$J$ & $-2.23\pm0.04$ & $-0.83\pm0.04$ & 0.13 & 45 \\
$H$ & $-2.36\pm0.03$ & $-1.07\pm0.04$ & 0.10 & 43 \\
$K$ & $-2.41\pm0.03$ & $-1.09\pm0.03$ & 0.10 & 48 \\
\hline
$g$ & $-1.63\pm0.10$ & $-0.07\pm0.10$ & 0.38 & 55 \\
$r$ & $-1.84\pm0.08$ & $-0.25\pm0.08$ & 0.30 & 55 \\
$i$ & $-1.96\pm0.08$ & $-0.26\pm0.08$ & 0.28 & 41 \\
\enddata
\tablecomments{The PL relation takes the form of $m=a\log P + b$, and $\sigma$ is the dispersion of the fitted PL relation. $N$ represents the number of TIIC used in the fitting.}
\end{deluxetable}
We updated the $BVIJHK$-band PL relations for the TIIC sample compiled in \citet[hereafter the B22 sample]{bhardwaj2022} by adopting the homogeneous distances from \citet{baumgardt2021} to their host globular clusters. We also adopted homogeneous reddenings $E(B-V)$, queried from the same all-sky ``SFD'' dust map \citep{sfd1998} using the {\tt dustmaps} code, for the TIIC in the B22 sample. The compiled $BVIJHK$-band mean magnitudes (whenever available), as well as the adopted distances and reddenings, for the B22 sample are presented in Table \ref{tab_b22}. Mean magnitudes in the $BVI$-band were adopted from the various sources listed in the last column of Table \ref{tab_b22}. The majority of the $JHK$-band mean magnitudes were taken from \citet{matsunaga2006}, except for V34 in M15 \citep{bhardwaj2021} and V43, V60, V61, and V92 in NGC 5139 \citep{braga2020}. We excluded V1 in M10 and V8 in M79 from the B22 sample for the reasons mentioned in Section \ref{sec2.1}.
The $JHK$ photometry from the aforementioned three studies was homogeneously calibrated to the 2MASS \citep[2 Micron All Sky Survey,][]{2mass2006} system. However, the optical photometric data are very heterogeneous, having been taken from several different studies as is evident from the last column of Table \ref{tab_b22}. Since most of the mean magnitudes do not have associated photometric measurement errors and are likely to suffer from systematic uncertainties, we adopted an error of 0.05~mag on the mean magnitudes. The available mean magnitudes listed in Table \ref{tab_b22} were converted to absolute magnitudes using the adopted distances. Extinction corrections on the $BVIJHK$-band mean magnitudes were done using $A_{BVIJHK}=R_{BVIJHK}E(B-V)$, where $R_{BVIJHK}=\{3.626,\ 2.742,\ 1.505,\ 0.793,\ 0.469,\ 0.303\}$ \citep{schlafly2011,green2019}. We then fit the PL relations using an iterative $3\sigma$-clipping linear regression (where $\sigma$ is the dispersion of the regression), implemented in {\tt astropy}, to exclude a few obvious outliers. The updated $BVIJHK$-band PL relations are shown in Figure \ref{fig_pl} and provided in Table \ref{tab_pl}.
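The iterative $3\sigma$-clipping linear regression (implemented in the paper with {\tt astropy}) can be sketched in plain {\tt numpy} for illustration:

```python
import numpy as np

def clipped_pl_fit(logP, M, clip_sigma=3.0, max_iter=10):
    """Fit M = a*logP + b, iteratively dropping points whose residuals
    exceed clip_sigma times the dispersion of the regression."""
    keep = np.ones(logP.size, dtype=bool)
    a = b = 0.0
    for _ in range(max_iter):
        a, b = np.polyfit(logP[keep], M[keep], 1)
        resid = M - (a * logP + b)
        sigma = resid[keep].std()
        new_keep = np.abs(resid) < clip_sigma * sigma
        if sigma == 0 or np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return a, b, resid[keep].std(), keep
```

The returned dispersion is computed from the surviving points only, matching the convention of quoting $\sigma$ for the clipped fit.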
There are 33 TIIC in the B22 sample that are not included in Table \ref{tab_t2cep}. The majority of these TIIC are located south of $\delta_{J2000} = -30^\circ$ (i.e., outside the ZTF footprint), and the remaining TIIC either did not have ZTF light-curve data or were excluded (e.g., due to blending). The $BVI$-band mean magnitudes of these TIIC, whenever available, were transformed to the $gri$-band using the transformations provided in \citet{tonry2012}. Extinction corrections were done using the {\tt Bayestar2019} 3D reddening map if available; otherwise, the ``SFD'' dust map was used together with the conversion $E=E(B-V)/0.884$ (see footnote \ref{fn2}). Similarly, for the 25 TIIC common to the B22 sample and Table \ref{tab_t2cep},\footnote{We checked the consistency of the transformed $gri$-band mean magnitudes using the 25 TIIC common to the B22 sample and Table \ref{tab_t2cep}. The averaged differences $m_{ZTF}-m_T$ in the $gri$-band are $0.014$, $-0.123$, and $-0.020$~mag, respectively, where $m_{ZTF}$ and $m_T$ represent the ZTF and transformed mean magnitudes. The corresponding standard deviations in the $gri$-band are $0.092$, $0.163$, and $0.160$~mag, respectively. Note that after removing an extreme outlier, the number of TIIC in both samples with mean magnitudes available to calculate the averaged difference is 16 in the $gr$-band and 3 in the $i$-band. The revised $r$-band PL relation, $M_r=-1.83(\pm0.08)\log P - 0.29(\pm0.08)$ with $\sigma=0.31$~mag, is consistent with Table \ref{tab_pl} after taking the averaged difference of $-0.12$~mag into account.} the $BVI$-band mean magnitudes from the B22 sample were transformed to the $i$-band for those TIIC without $i$-band data. Open circles in the right panels of Figure \ref{fig_pl} represent the TIIC in the B22 sample transformed from the $BVI$-band photometry.
Combining the TIIC in Table \ref{tab_t2cep} and those transformed from the B22 sample, we derived the updated $gri$-band PL relations using the same iterative $3\sigma$-clipping linear regression. The results are listed in the bottom part of Table \ref{tab_pl}. With the updated PL relations, derived using the homogeneous distances, the $gri$-band PL relations were found to be consistent with the $BVI$-band PL relations, as demonstrated in the right panel of Figure \ref{fig_compare}.
Most previous studies have suggested that the PL relations for TIIC are insensitive to metallicity \citep[for examples, see][and references therein]{matsunaga2006,dic2007,matsunaga2009,ciechanowska2010,ripepi2015,groenewegen2017,braga2018,bhardwaj2020,bhardwaj2022}. In contrast, significant metallicity terms were found for the $UB$-band and $JHK$-band PL relations in the theoretical work of \citet{das2021} and the empirical investigation of \citet{wiegorski2021}, respectively. Following \citet{matsunaga2006} and \citet{wiegorski2021}, we fit a linear regression to the residuals of the PL relations as a function of metallicity for our sample of TIIC, where the metallicities $\mathrm{[Fe/H]}$ of the host globular clusters were taken from the GOTHAM (GlObular clusTer Homogeneous Abundances Measurements) survey\footnote{\url{http://www.sc.eso.org/~bdias/files/dias+16\_MWGC.txt}} \citep{dias2015, dias2016a, dias2016b, vasquez2018}. The metallicities of these host globular clusters range from $-2.27$~dex (M15) to $-0.47$~dex (NGC6441). The slopes of these linear regressions, denoted $\gamma$, are displayed as a function of filter in Figure \ref{fig_feh}. Except in the $B$-band, the values of $\gamma$ are consistent with zero in all filters, implying that the corresponding PL relations are insensitive to metallicity. This is consistent with the theoretical predictions of \citet{das2021}. For the $B$-band, fitting a period-luminosity-metallicity relation to the data yields:
\begin{eqnarray}
M_B & = & 0.68 (\pm0.25) - 1.67 (\pm0.14)\log P \nonumber \\
& & + 0.19 (\pm0.14) \mathrm{[Fe/H]},\ \ \sigma=0.41.\nonumber
\end{eqnarray}
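The period-luminosity-metallicity fit above amounts to a multi-linear least-squares regression; a minimal sketch of fitting $M = b_0 + b_1\log P + b_2\,\mathrm{[Fe/H]}$ directly:

```python
import numpy as np

def fit_plz(logP, feh, M):
    """Least-squares fit of the period-luminosity-metallicity relation
    M = b0 + b1*logP + b2*[Fe/H]; returns (b0, b1, b2)."""
    X = np.column_stack([np.ones_like(logP), logP, feh])
    coeff, *_ = np.linalg.lstsq(X, M, rcond=None)
    return coeff
```

Fitting the residuals of the PL relation against [Fe/H], as done for the $\gamma$ slopes, gives an equivalent estimate of the metallicity term when $\log P$ and [Fe/H] are uncorrelated in the sample.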
\section{The Multi-Band Relations} \label{sec5}
\begin{deluxetable}{lcrcc}
\tabletypesize{\scriptsize}
\tablecaption{The Derived Period-Wesenheit Relations for TIIC in the Globular Clusters \label{tab_pw}}
\tablewidth{0pt}
\tablehead{
\colhead{Wesenheit Index} &
\colhead{$a$} &
\colhead{$b$} &
\colhead{$\sigma$} &
\colhead{$N$}
}
\startdata
$W^{BV}_V = V - 3.102 (B-V)$ & $-2.62\pm0.05$ & $-1.00\pm0.05$ & 0.13 & 30 \\
$W^{VI}_V = V - 2.217 (V-I)$ & $-2.43\pm0.07$ & $-0.99\pm0.08$ & 0.20 & 30 \\
$W^{BI}_B = B - 1.710 (B-I)$ & $-2.42\pm0.05$ & $-0.98\pm0.05$ & 0.14 & 31 \\
\hline
$W^{JH}_J = J - 2.448 (J-H)$ & $-2.49\pm0.03$ & $-1.44\pm0.04$ & 0.11 & 46 \\
$W^{HK}_K = K - 1.825 (H-K)$ & $-2.51\pm0.03$ & $-1.10\pm0.04$ & 0.11 & 45 \\
$W^{JK}_K = K - 0.618 (J-K)$ & $-2.46\pm0.03$ & $-1.28\pm0.03$ & 0.08 & 46 \\
\hline
$W^{ri}_r = r - 4.051 (r-i)$ & $-2.26\pm0.10$ & $-0.34\pm0.10$ & 0.34 & 41 \\
$W^{gr}_r = r - 2.905 (g-r)$ & $-2.43\pm0.11$ & $-0.77\pm0.11$ & 0.42 & 55 \\
$W^{gi}_g = g - 2.274 (g-i)$ & $-2.33\pm0.07$ & $-0.48\pm0.07$ & 0.26 & 41 \\
\enddata
\tablecomments{The PW relation takes the form of $W=a\log P + b$, and $\sigma$ is the dispersion of the fitted relation. $N$ represents the number of TIIC used in the fitting.}
\end{deluxetable}
\begin{deluxetable}{ccrcc}
\tabletypesize{\scriptsize}
\tablecaption{The Derived Period-Color and Period-Q-index Relations for TIIC in the Globular Clusters \label{tab_pcq}}
\tablewidth{0pt}
\tablehead{
\colhead{Color} &
\colhead{$a$} &
\colhead{$b$} &
\colhead{$\sigma$} &
\colhead{$N$}
}
\startdata
$(B-V)$ & $0.24\pm0.04$ & $0.35\pm0.04$ & 0.11 & 34 \\
$(V-I)$ & $0.26\pm0.04$ & $0.50\pm0.04$ & 0.11 & 30 \\
$(B-I)$ & $0.50\pm0.11$ & $0.75\pm0.11$ & 0.31 & 35 \\
\hline
$(J-H)$ & $0.08\pm0.02$ & $0.27\pm0.02$ & 0.05 & 41 \\
$(H-K)$ & $0.05\pm0.01$ & $0.02\pm0.01$ & 0.04 & 42 \\
$(J-K)$ & $0.14\pm0.02$ & $0.29\pm0.02$ & 0.06 & 42 \\
\hline
$(g-r)$ & $0.21\pm0.04$ & $0.18\pm0.04$ & 0.15 & 55 \\
$(r-i)$ & $0.09\pm0.02$ & $0.04\pm0.02$ & 0.06 & 39 \\
$(g-i)$ & $0.29\pm0.04$ & $0.17\pm0.04$ & 0.15 & 41 \\
\hline
$Q_{BVI}$ & $0.12\pm0.03$ &$-0.04\pm0.03$ & 0.07 & 27 \\
$Q_{JHK}$ & $0.02\pm0.02$ & $0.21\pm0.03$ & 0.08 & 45 \\
$Q_{gri}$ & $0.06\pm0.05$ & $0.11\pm0.05$ & 0.17 & 41 \\
\enddata
\tablecomments{The PC and PQ relations take the form of $c=a\log P + b$ (where $c$ is for colors or $Q$-index), and $\sigma$ is the dispersion of the fitted relation. $N$ represents the number of TIIC used in the fitting.}
\end{deluxetable}
In addition to the PL relations, the updated B22 sample can be used to derive the period-Wesenheit (PW), period-color (PC), and period-$Q$-index (PQ) relations in the $BVIJHK$-band. The Wesenheit index, $W$, is analogous to a magnitude but is extinction-free by construction \citep{madore1982,madore1991}. Similarly, the $Q$-index is analogous to a color but is reddening-free by construction, inspired by the classical work of \citet[][who defined the $Q$-index in the $UBV$-band]{johnson1953}. The combined sample of TIIC listed in Table \ref{tab_t2cep} and those photometrically transformed from the B22 sample can also be used to derive the $gri$-band PW, PC, and PQ relations. The $gri$-band Wesenheit indices were defined in \citet{ngeow2021}, while the various $BVIJHK$-band Wesenheit indices are defined in Table \ref{tab_pw}. For the PQ relations, we have $Q_{BVI} = (B-V) - 0.715(V-I)$ and $Q_{JHK} = (J-H) - 1.952(H-K)$ in the $BVIJHK$-band, while the $gri$-band $Q$-index was adopted from \citet{ngeow2022} as $Q_{gri} = (g-r) - 1.395(r-i)$. The fitted PW and PC/PQ relations are summarized in Tables \ref{tab_pw} and \ref{tab_pcq}, respectively, and presented in Figures \ref{fig_pw} and \ref{fig_pcq}.
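As an illustration of the construction, the Wesenheit and $Q$-index coefficients are ratios of extinction coefficients, so that applying an arbitrary reddening leaves the index essentially unchanged. A sketch using the $gri$ definitions above (extinction coefficients from the text):

```python
def wesenheit_gr_r(g, r):
    """gri-band Wesenheit index W^{gr}_r = r - 2.905 (g - r)."""
    return r - 2.905 * (g - r)

def q_index_gri(g, r, i):
    """Reddening-free Q-index Q_{gri} = (g - r) - 1.395 (r - i)."""
    return (g - r) - 1.395 * (r - i)
```

Reddening a star by any $E$ with $A_g = 3.518E$, $A_r = 2.617E$, and $A_i = 1.971E$ shifts both indices by only a fraction of a millimagnitude, the residual coming from rounding of the published coefficients.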
The $(H-K)$ and $(r-i)$ PC relations have relatively flat slopes with zero-points almost consistent with zero. This explains why the $HK$-band and $ri$-band PL relations are quite similar; in particular, their PL zero-points are identical within the uncertainties (see Table \ref{tab_pl}). We also see that the redder colors, in the $JHK$-band and in $(r-i)$, tend to have smaller PC dispersions. In contrast, the $(B-I)$ PC relation displays the largest dispersion among all the PC relations. In the case of the PQ relations, the slopes of both the $Q_{JHK}$ and $Q_{gri}$ PQ relations are statistically consistent with zero, in contrast to the RR Lyrae \citep{ngeow2022}. The $Q_{BVI}$ PQ relation is also much shallower than the $BVI$-band PC relations, and has the smallest dispersion among the three PQ relations.
\section{Comparison with M31 TIIC} \label{sec6}
The Pan-STARRS1 survey of Andromeda, known as the PAndromeda project, reported 278 TIIC in the (halo of the) M31 galaxy \citep{kodric2018}. This sample of M31 TIIC can be used to test the applicability of our derived PL/PW relations. Numerous distance measurements to M31, via various techniques and distance indicators, can be found in the literature. \citet{dgb2014} summarized the distance estimates prior to 2013 and recommended a distance modulus of $\mu=24.46\pm0.10$~mag for M31. A recent distance measurement to M31 can be found in \citet{li2021}, who give $\mu=24.407\pm0.032$~mag based on Hubble Space Telescope observations of classical Cepheids.
\citet{kodric2018} provided the pulsation periods as well as the extinction-corrected $gri$-band mean magnitudes for this sample of M31 TIIC. We first removed six TIIC that have period errors larger than 1~day (or fractional errors larger than 1\%; the remaining TIIC have fractional period errors of less than 0.64\%). The reddening-corrected colors for the remaining 272 TIIC are plotted against their logarithmic periods in the left panel of Figure \ref{fig_m31pc}, overlaid with the PC relations taken from Table \ref{tab_pcq}. The $(r-i)$ colors of the M31 TIIC are in remarkably good agreement with the $(r-i)$ PC relation derived from our sample of TIIC in globular clusters. In contrast, outliers can be seen in the $(g-r)$ and $(g-i)$ PC relations, suggesting there could be some problems in the $g$-band. Indeed, the number of $g$-band observations was $\sim5$ to $\sim10$ times smaller than in the $ri$-band \citep{kodric2018}, such that the $g$-band light curves are not of as good quality as those in the other two bands. As a result, of the remaining 272 TIIC, 50 do not have mean $g$-band magnitudes, and 161 carry a non-zero bit flag \citep[see Table 2 of][]{kodric2018} indicating problems associated with the $g$-band data. For these reasons, we focused only on the $ri$-band mean magnitudes of this sample of TIIC in the subsequent analysis.
The right panels of Figure \ref{fig_m31pc} present the $ri$-band PL/PW relations for the M31 TIIC. We over-plotted the PL/PW relations from Tables \ref{tab_pl} and \ref{tab_pw}, together with their respective $\pm3\sigma$ boundaries, on the right panels of Figure \ref{fig_m31pc} after shifting these PL/PW relations vertically by $\mu=24.407$~mag \citep[][black lines]{li2021}. Except for five TIIC that appear to be brighter in the $ri$-band PL relations (marked as crosses in the right panels of Figure \ref{fig_m31pc}), almost all of the TIIC are confined within $\pm3\sigma$ of the respective PL/PW relations. Furthermore, the scatter of these TIIC around the PL/PW relations confirms the rather large dispersions of the $ri$-band PL/PW relations reported in Tables \ref{tab_pl} and \ref{tab_pw}.
Our derived PL/PW relations can also be used to determine the distance modulus of M31 from this sample of TIIC (after excluding the five TIIC marked as crosses in the right panels of Figure \ref{fig_m31pc}). By fitting the data with the $ri$-band PL/PW relations given in Tables \ref{tab_pl} and \ref{tab_pw}, weighted by the quadrature sums of the errors on the mean magnitudes and the PL/PW dispersions, we obtained $\mu_r = 24.180\pm0.021$~mag, $\mu_i = 24.249\pm0.020$~mag, and $\mu_W = 24.423\pm0.026$~mag from the $ri$-band PL and PW relations, respectively. The quoted errors on $\mu$ are statistical errors only. The $\mu_W$ obtained from fitting the PW relation is in good agreement with, and lies between, the measurement of $\mu=24.407\pm0.032$~mag from \citet{li2021} and the recommended value of $\mu=24.46\pm0.10$~mag from \citet{dgb2014}. This suggests that our derived $ri$-band PW relation is robust. On the other hand, the distance moduli obtained from the $ri$-band PL relations are $\sim0.2$~mag smaller than $\mu_W$, hinting that there could be an additional systematic error, of the order of $\sim0.2$~mag, in the derived PL relations. The distances to the globular clusters adopted from \citet{baumgardt2021} are unlikely to be the source of this systematic, because the same distances were used in deriving both the PL and PW relations. Other possible sources of systematic error include the samples used, the extinction maps used, and the extinction law assumed in deriving the $ri$-band PL relations.
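The weighted distance-modulus determination described above can be sketched as a weighted mean offset between the apparent magnitudes and the calibrated PL relation:

```python
import numpy as np

def fit_mu(logP, m, m_err, a, b, sigma_pl):
    """Weighted mean offset between apparent magnitudes m and the
    absolute PL relation M = a*logP + b; each star is weighted by the
    inverse quadrature sum of its magnitude error and PL dispersion."""
    w = 1.0 / (np.asarray(m_err) ** 2 + sigma_pl ** 2)
    mu = np.sum(w * (m - (a * logP + b))) / np.sum(w)
    mu_err = np.sqrt(1.0 / np.sum(w))  # statistical error only
    return mu, mu_err
```

The quoted error is purely statistical, consistent with the treatment in the text; systematic errors in the PL/PW calibration enter separately.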
The derivation of the $ri$-band PL relations included the TIIC sample transformed from the $BVI$-band photometry. Therefore, we first excluded the TIIC with transformed photometry, used only the TIIC that have ZTF $ri$-band mean magnitudes, and re-derived the $ri$-band PL relations. Using the re-derived PL relations, the distance moduli of M31 we obtained are $\mu_r = 24.096\pm0.021$~mag and $\mu_i = 24.156\pm0.020$~mag. Similarly, we had used the ``SFD'' dust map for TIIC located outside the footprint of the {\tt Bayestar2019} reddening map. If we re-derive the $ri$-band PL relations by applying the same ``SFD'' dust map to all TIIC in the sample and re-determine the distance moduli of M31, we obtain $\mu_r = 24.000\pm0.021$~mag and $\mu_i = 24.115\pm0.020$~mag. Finally, adopting the same extinction law as in \citet{kodric2018}, i.e., $A_r = 2.554E$ and $A_i=1.893E$, we obtained $\mu_r = 24.150\pm0.021$~mag and $\mu_i = 24.229\pm0.020$~mag. These distance moduli are all smaller than those obtained from the $ri$-band PL relations derived in Table \ref{tab_pl}. Hence, there could be hidden systematic errors in the derivation of the PL relations, and independent samples and calibrations of the TIIC PL relations are desirable.
\section{Conclusions} \label{sec7}
In this work, we present the first $gri$-band and updated $BVIJHK$-band PL and PW relations for TIIC located in globular clusters. Altogether, there are 70 TIIC in 30 globular clusters (with ages spanning from $\sim11.0$ to $\sim13.2$~Gyr) in our sample, and only three of them have complete nine-band photometry. Homogeneous distances to the globular clusters, ranging from 3.30~kpc (M22) to 88.47~kpc (NGC2419), adopted from a single source \citep{baumgardt2021}, and consistent reddening maps, either the {\tt Bayestar2019} 3D reddening map or the ``SFD'' dust map, were used to calibrate the absolute magnitudes of this sample of TIIC. We demonstrated that the PL relations are consistent in the $BVI$ and $gri$ bands. We also derived nine sets of PW relations based on combinations of these filters. Among the PL/PW relations, those in the $JHK$-band exhibit the smallest dispersions, making them preferable for future distance-scale work. Finally, our sample of TIIC also allows the derivation of PC and PQ relations in these filters. We found that the slopes of the PC relations in the $JHK$-band and in the $(r-i)$ color, as well as the slopes of the PQ relations, are quite shallow or flat.
We tested our PL/PW relations, at least in the $ri$ band, with a sizable sample of TIIC in M31. The scatter of the M31 TIIC about the PL/PW relations is similar to the dispersions presented in Tables \ref{tab_pl} and \ref{tab_pw}, confirming that the derived PL/PW dispersions are intrinsic. Using our derived $ri$-band PW relation, the distance modulus of M31 we obtained is in agreement with the latest measurement based on classical Cepheids. However, distance moduli derived using the $ri$-band PL relations are smaller by $\sim0.2$~mag, suggesting there could be hidden systematics in the derived PL relations. Therefore, additional work in the near future is required to independently cross-check these PL relations. Nevertheless, our derived PW relations can be applied in ongoing and upcoming synoptic time-series sky surveys, such as LSST or other surveys employing similar $gri$ filters.
\acknowledgments
We are thankful for the useful discussions and comments from an anonymous referee that improved the manuscript. We are thankful for funding from the Ministry of Science and Technology (Taiwan) under the contracts 107-2119-M-008-014-MY2, 107-2119-M-008-012, 108-2628-M-007-005-RSP and 109-2112-M-008-014-MY3. AB acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 886298.
Based on observations obtained with the Samuel Oschin Telescope 48-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grants No. AST-1440341 and AST-2034437 and a collaboration including current partners Caltech, IPAC, the Weizmann Institute of Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, IN2P3, University of Warwick, Ruhr University Bochum, Northwestern University and former partners the University of Washington, Los Alamos National Laboratories, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.
This research has made use of the SIMBAD database and the VizieR catalogue access tool, operated at CDS, Strasbourg, France. This research made use of Astropy,\footnote{\url{http://www.astropy.org}} a community-developed core Python package for Astronomy \citep{astropy2013, astropy2018}.
\facility{PO:1.2m}
\software{{\tt astropy} \citep{astropy2013,astropy2018}, {\tt dustmaps} \citep{green2018}, {\tt gatspy} \citep{vdp2015}, {\tt Matplotlib} \citep{hunter2007}, {\tt NumPy} \citep{harris2020}, {\tt SciPy} \citep{virtanen2020}.}
|
Title:
Collisional Pumping of H$_2$O and CH$_3$OH Masers in C-Type Shock Waves |
Abstract: The collisional pumping of H$_2$O and CH$_3$OH masers in magnetohydrodynamic
nondissociative C-type shocks is considered. A grid of C-type shock models with
speeds in the range $5-70$ km s$^{-1}$ and preshock gas densities $n_{\rm
H_2,0} = 10^4-10^7$ cm$^{-3}$ is constructed. The large velocity gradient
approximation is used to solve the radiative transfer equation in molecular
lines. The para-H$_2$O 183.3 GHz and ortho-H$_2$O 380.1 and 448.0 GHz
transitions are shown to be inverted and to have an optical depth along the
shock velocity $\vert \tau \vert \sim 1$ at relatively low gas densities in the
maser zone, $n_{\rm H_2} \gtrsim 10^5-10^6$ cm$^{-3}$. Higher gas densities,
$n_{\rm H_2} \gtrsim 10^7$ cm$^{-3}$, are needed for efficient pumping of the
remaining H$_2$O masers. Simultaneous generation of H$_2$O and class I CH$_3$OH
maser emission in a shock is possible at preshock gas densities $n_{\rm H_2,0}
\approx 10^5$ cm$^{-3}$ and shock speeds in the range $u_{\rm s} \approx
17.5-22.5$ km s$^{-1}$. The possibility of detecting class I CH$_3$OH and
para-H$_2$O 183.3 GHz masers in star-forming regions and near supernova
remnants is investigated.
| https://export.arxiv.org/pdf/2208.00201 |
\thispagestyle{firststyle}
\begin{center}
\Large Collisional Pumping of H$_2$O and CH$_3$OH Masers in C-Type Shock Waves
\vspace{0.5cm}
\large A.V. Nesterenok
\vspace{0.5cm}
\normalsize Ioffe Physical-Technical Institute, Politekhnicheskaya St. 26, Saint~Petersburg, 194021 Russia
e-mail: alex-n10@yandex.ru
\end{center}
Keywords: \textit{cosmic masers, radiative transfer, shocks, star-forming regions.}
\smallskip
DOI: 10.1134/S1063773722060044
\section{Introduction}
Shocks in the interstellar medium are observed at the formation stage of stars, during their evolution, and at the final evolutionary stage of massive stars -- supernova explosions. At the star formation stage, protostellar bipolar outflows interact with the protostellar envelope and the parent molecular cloud to produce shocks. After the explosion of a supernova, its outer layers expand into the interstellar medium at enormous velocity, sweeping up the interstellar gas and forming a shock. In this paper we consider magnetohydrodynamic (MHD) nondissociative C-type shocks propagating in dense molecular clouds. The chemical reactions in the shock-heated gas and the sputtering of icy grain mantles significantly change the chemical composition of the gas and, at the same time, make it possible to diagnose the physical conditions through observations of molecular and atomic lines. The more transitions of various molecules are observed in the shock-heated gas, the more accurate the determination of the physical parameters -- the shock speed and the gas density and temperature. In this paper we investigate the physical conditions under which intense H$_2$O and CH$_3$OH maser emission arises in shocks.
Shocks can be nondissociative (C-type, there is no dissociation of molecules at the shock front) and dissociative (J-type) (Draine and McKee 1993). The shock type depends on the magnetic field strength, the gas flow speed, and the gas ionization fraction. H$_2$O maser emission can be generated in the postshock region of shocks of both types. If the gas flow speed is higher than the propagation speed of perturbations in the medium (the speed of sound and the magnetosonic speed), then a J-type shock is formed. In J-type shocks the physical parameters change in a narrow region of space with a size of the order of the mean free path of atoms and molecules. The gas in such shocks is heated to temperatures $T_{\rm g} \sim 10^5$~K, and there is a complete dissociation of molecules. Behind the shock front H$_2$ molecules are formed on dust grains, and the release of thermal energy during H$_2$ formation maintains a gas temperature of $300-400$~K. Elitzur et al. (1989) and Hollenbach et al. (2013) showed that the generation of an intense H$_2$O 22.23~GHz maser emission is possible in the warm gas behind the front of a J-type shock. However, gas temperatures $T_{\rm g} > 400$~K are needed for efficient pumping of most H$_2$O maser lines in the millimeter and submillimeter wavelength ranges (Gray et al. 2016).
If the gas flow speed is lower than the magnetosonic speed, but higher than the speed of sound for the neutral gas component, then a C-type shock is formed. In such shocks the changes in physical parameters at the shock front are determined by the diffusion of ions (and charged dust grains) and neutral gas through one another, and the gas parameters (temperature and density) undergo gradual changes. Since the kinetic energy of the gas flows is converted to thermal energy in a vast shock region, the gas is heated to temperatures much lower than those in J-type shocks: $T_{\rm g} \sim 10^3-10^4$~K. Passing through the front of a C-type shock, the gas remains molecular. In this case, the gas temperature at and behind the shock front, where the pumping of masers occurs, can be higher than that in the maser zone of J-type shocks: $T_{\rm g} \gtrsim 1000$~K (Kaufman and Neufeld 1996a). Previously it has been shown that relatively high gas densities, $n_{\rm H_2} \gtrsim 10^7$~cm$^{-3}$, are needed to pump H$_2$O masers (Neufeld and Melnick 1991; Kaufman and Neufeld 1996a; Yates et al. 1997; Gray et al. 2016, 2022). The sizes of 22.23~GHz maser emission sources also point to high gas densities in maser spots (Kaufman and Neufeld 1996a). However, Cernicharo et al. (1994, 1999) and Daniel and Cernicharo (2013) showed that some H$_2$O maser transitions (183.3, 325.1, and 380.1~GHz) could be inverted at relatively low gas densities, $n_{\rm H_2} \sim 10^5-10^6$~cm$^{-3}$.
Methanol masers are divided into two classes: class I masers with a collisional pumping mechanism and class II masers with a radiative pumping mechanism. In star-forming regions class I CH$_3$OH maser emission is generated in shocks -- the regions of interaction of protostellar flows with the surrounding interstellar medium -- and in expanding HII regions (Voronkov et al. 2014). Class I CH$_3$OH masers are also observed in clouds of the central molecular zone of our Galaxy and near supernova remnants (Salii et al. 2002; Pihlstr\"{o}m et al. 2014). Methanol is formed in dark molecular clouds on dust grains in CO hydrogenation reactions (Watanabe and Kouchi 2002). At the shock front methanol is released into the gas phase as a result of the sputtering of icy grain mantles. Methanol has no efficient formation channels in the gas phase and, therefore, methanol maser emission is generated in nondissociative C-type shocks. If the gas temperature at the shock front is sufficiently high, $T_{\rm g} \gtrsim 2000$~K, then methanol is destroyed in collisional dissociation reactions (Nesterenok 2022). A preshock gas density $n_{\rm H_2,0} \sim 10^4-10^5$~cm$^{-3}$ was shown in Nesterenok (2022) to be the most favorable condition for the emergence of intense class I CH$_3$OH maser emission (the gas density in the maser zone is several times higher than the preshock gas density $n_{\rm H_2,0}$ due to gas compression in the shock). The pumping of CH$_3$OH masers can also occur at higher gas densities (McEwen et al. 2014; Leurini et al. 2016). It was shown in Nesterenok (2022) that the optical depth for CH$_3$OH transitions in a shock is small for preshock gas densities $n_{\rm H_2,0} \gtrsim 10^6$~cm$^{-3}$. At these preshock gas densities and at shock speeds for which the sputtering of icy grain mantles occurs ($u_{\rm s} \gtrsim 17.5$~km~s$^{-1}$), the gas temperature at the shock front is $T_{\rm g} \gtrsim 2000$~K and there is (partial) dissociation of methanol molecules.
Thus, class I CH$_3$OH masers and some H$_2$O transitions have a pumping regime at relatively low gas densities.
This paper is a continuation of our study of maser pumping in shocks begun in Nesterenok (2020, 2021, 2022). The collisional pumping of OH masers at 1720~MHz in shocks near supernova remnants was considered in Nesterenok (2020). In Nesterenok (2021, 2022) we considered the pumping of class I CH$_3$OH masers and studied the coexistence of CH$_3$OH and OH masers in the same source. In this paper we investigate the collisional pumping of H$_2$O masers in C-type shocks for preshock gas densities $n_{\rm H_2,0} = 10^4-10^7$~cm$^{-3}$ and consider the coexistence of H$_2$O and class I CH$_3$OH masers in the same source.
\section{C-type shock model}
The model of a steady-state C-type shock propagating in a dense molecular cloud was developed in Nesterenok (2018) and Nesterenok et al. (2019). The numerical simulations consist of two parts: (1) the simulations of the chemical evolution of the dark molecular cloud and (2) the simulations of the shock propagation. At the start of the simulations of the cloud's chemical evolution the H atoms are assumed to be bound into H$_2$ molecules, while all of the remaining elements are in the atomic or ionized state (Nesterenok 2022). A detailed description of all the chemical processes that are taken into account in the numerical simulations and a description of the dynamics of the gas components in the shock (neutral gas, ions, electrons, and dust grains) are given in Nesterenok (2018). The chemical reactions that determine the methanol concentration are discussed in Nesterenok (2022).
As a starting point for the shock simulations we chose the age of the molecular cloud $t_0$ at which the methanol abundance relative to the hydrogen nuclei in the icy mantles of dust grains is 10$^{-5}$. This age depends on the gas density and the cosmic-ray ionization rate. At the same time, the relative abundance of H$_2$O molecules at $t_0$ slightly differs for different gas densities and gas ionization rates and is $5 \times 10^{-5}-10^{-4}$. According to the observational data, the relative abundance of H$_2$O and CH$_3$OH molecules in the icy mantles of dust grains in molecular clouds lies in the range $(1-8) \times 10^{-5}$ and $(0-1.5) \times 10^{-5}$, respectively (Boogert et al. 2015). The relative CH$_3$OH abundance adopted in our calculations corresponds to the upper limit of the observed values.
To estimate the preshock magnetic field, we used a power-law dependence of the magnetic field on gas density (Dudorov 1991; Crutcher et al. 2010):
\begin{equation}
B = \beta B_0 \left( n_\mathrm{H,tot}/n_0 \right)^{\alpha},
\label{eq_magn_field}
\end{equation}
\noindent
where the values of the parameters are as follows: $n_0 = 300$~cm$^{-3}$, $B_0 = 10$~$\mu$G, $\alpha = 0.65$, and the number density of hydrogen nuclei $n_{\rm H,tot} \geq n_0$. According to the Zeeman molecular line splitting observations, the magnetic field in molecular clouds varies in a wide range, $0 < \beta \leq 1$ (Crutcher et al. 2010). In most of our calculations we use $\beta = 1$ and the direction of the magnetic field is perpendicular to the shock velocity. The results of our shock model computations in which $\beta = 0.5$ are also presented.
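The power-law estimate of Eq.~(\ref{eq_magn_field}) is straightforward to evaluate; a minimal sketch with the parameter values quoted above (the example density is hypothetical):

```python
# Sketch of the preshock magnetic-field estimate, B = beta * B0 * (n/n0)^alpha,
# with the parameter values quoted in the text.
def preshock_B(n_H_tot, beta=1.0, B0=10.0, n0=300.0, alpha=0.65):
    """Magnetic field in microgauss for a hydrogen-nuclei density in cm^-3."""
    if n_H_tot < n0:
        raise ValueError("power law applies only for n_H,tot >= n0")
    return beta * B0 * (n_H_tot / n0) ** alpha

# e.g. a preshock density n_H2,0 = 1e5 cm^-3 corresponds to
# n_H,tot = 2e5 cm^-3 in hydrogen nuclei (an illustrative choice).
B = preshock_B(2.0e5)
```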
In cold molecular clouds the ortho-H$_2$-to-para-H$_2$ conversion time scale can be larger than the cloud evolution time, and the ortho-/para-H$_2$ ratio has no time to reach its equilibrium value. The initial ortho-/para-H$_2$ ratio was chosen to be 0.1, an arbitrarily low value. In our calculations we take into account the following processes through which para-/ortho-H$_2$ interconversion occurs: H$_2$--H collisions, H$_2$--H$^+$ collisions, and H$_2$ formation on dust grains (Nesterenok et al. 2019). During the simulated chemical evolution of a cold molecular cloud, the ortho-/para-H$_2$ ratio changes slowly toward its equilibrium value. In this case, collisions with H$^+$ are the main para-/ortho-H$_2$ interconversion channel.
The cosmic-ray ionization rate in most of our calculations was set equal to $\zeta_{\rm H_2} = 3 \times 10^{-17}$~s$^{-1}$, corresponding to the gas ionization rate in cold molecular clouds away from the sources of ionizing radiation (Dalgarno 2006). We also present the results of our shock model computations in which the gas ionization rate was set equal to $\zeta_{\rm H_2} = 3 \times 10^{-15}$~s$^{-1}$. This value may be considered as a typical gas ionization rate in molecular clouds in the vicinity of supernova remnants and in clouds in the central molecular zone of our Galaxy (Shingledecker et al. 2016). The shock speeds varied from 5~km~s$^{-1}$ to the limiting C-type shock speed. The limiting speed is determined from the condition of almost complete H$_2$ dissociation. The limiting speeds are approximately 70, 45, 30, and 30~km~s$^{-1}$ for the preshock gas densities $n_{\rm H_2,0} = 10^4$, $10^5$, $10^6$, and $10^7$~cm$^{-3}$, respectively (for the gas ionization rate $\zeta_{\rm H_2}= 3 \times 10^{-17}$~s$^{-1}$). Table~1 gives the parameters that were used in our numerical simulations of shocks.
~\\
~\\
\begin{tabular}{l@{\qquad\qquad}l}
\multicolumn{2}{l}{\large\bf Table 1. Shock parameters} \\ [5pt]
\hline \\ [-2ex]
Preshock gas density, $n_{\rm H_2,0}$ & $10^4 - 10^7$~cm$^{-3}$ \\ [5pt]
Shock speed, $u_{\rm s}$ & $5-70$~km~s$^{-1}$ \\ [5pt]
Cosmic-ray ionization rate, $\zeta_{\rm H_2}$ & $3 \times 10^{-17}$, $3 \times 10^{-15}$~s$^{-1}$ \\ [5pt]
Initial ortho-H$_2$/para-H$_2$ ratio & 0.1 \\ [5pt]
Parameter characterizing the magnetic field strength, $\beta$ & 0.5, 1 \\ [5pt]
Turbulent velocity, $v_{\rm turb}$ & 0.3~km~s$^{-1}$ \\ [5pt]
Initial relative CH$_3$OH abundance in the icy mantles of dust grains & $10^{-5}$ \\ [5pt]
Initial relative H$_2$O abundance in the icy mantles of dust grains & $(5-10) \times 10^{-5}$ \\ [5pt]
\hline
\end{tabular}
~\\
~\\
The parameter $\beta$ is defined in Eq. (\ref{eq_magn_field}).
~\\
\section{Calculation of molecular energy level populations}
\subsection{Collisional Rate Coefficients and Spectroscopic Data}
The energies of rotational levels and the Einstein coefficients for the H$_2$O molecule were taken from the HITRAN 2020 database (Gordon et al. 2022). In our calculations we took into account 150 rotational energy levels of the para-H$_2$O molecule and 150 energy levels of the ortho-H$_2$O molecule belonging to the ground and first excited vibrational states of the molecule. The energy of the highest H$_2$O level considered is 4500~K. The collisional rate coefficients for transitions between H$_2$O energy levels in collisions of H$_2$O with H$_2$ and electrons were calculated in Faure et al. (2007) and Faure and Josselin (2008). Four data sets for collisions between ortho-/para-H$_2$O and ortho-/para-H$_2$ for the gas temperature range $20-2000$~K are given in Faure et al. (2007), with the transitions between 45 lower energy levels of each H$_2$O spin isomer being considered. For the remaining transitions in our calculations we used data from Faure and Josselin (2008). In the calculations of these collisional rate coefficients the ortho-/para-H$_2$ ratio was initially set equal to 3. The collisional rate coefficients for transitions between H$_2$O levels in collisions of H$_2$O with He atoms were taken from Green et al. (1993) and Nesterenok (2013). The collisional rate coefficients for H$_2$O transitions in collisions with H atoms for 45 lower rotational H$_2$O levels were taken from Daniel et al. (2015). The collisions of H$_2$O with H atoms become significant when the shock speed is close to the limiting C-type shock speed and there is a partial dissociation of H$_2$ molecules at the shock front. The abundance of electrons relative to the hydrogen nuclei in the molecular gas is $x_e \sim 10^{-8}-10^{-7}$ for $\zeta_{\rm H_2} \sim 10^{-16}$~s$^{-1}$ and $n_{\rm H_2} \sim 10^4-10^5$~cm$^{-3}$, and decreases with increasing gas density. The collisions of H$_2$O with electrons are insignificant.
The spectroscopic data and the data on the collisional rate coefficients that were used in our calculations of the CH$_3$OH energy level populations are described in Nesterenok (2016). In our calculations we do not use the extrapolation of the collisional rate coefficients for high gas temperatures -- the rate coefficients are assumed to be constant at temperatures above the maximum temperature for which data are available. The sensitivity of the results of our calculations of the CH$_3$OH energy level populations to the collisional rate coefficients at high temperatures was analyzed in Nesterenok (2022). The spin-isomer abundance ratio was assumed to be the following: ortho-/para-H$_2$O = 3 (Emprechtinger et al. 2013) and A-/E-CH$_3$OH = 1 (Nesterenok 2022).
\subsection{Basic Formulas}
In this paper we use the same method of calculating the molecular level populations in a shock as that in Nesterenok (2020, 2022). Below, we briefly outline the ideas of the method. The shock profile obtained as a result of our numerical simulations is divided into layers. For each layer we calculate the H$_2$O and CH$_3$OH energy level populations. The system of equations for the energy level populations of a molecule at some distance $z$ in the shock is
\begin{equation}
\begin{array}{c}
\displaystyle
\sum_{k=1, \, k \ne i}^M \left( R_{ki} + C_{ki} \right) n_k(z) - n_i(z) \sum_{k=1, \, k \ne i}^M \left( R_{ik} + C_{ik} \right)=0, \quad i=1,...,M-1, \\
\displaystyle
\sum_{i=1}^M n_i(z)=1,
\end{array}
\label{eq1}
\end{equation}
\noindent
Here, $M$ is the total number of energy levels, $R_{ik}$ is the probability of radiative transitions from level $i$ to level $k$, and $C_{ik}$ is the probability of collisional transitions. The probabilities of radiative transitions are as follows:
\begin{equation}
\begin{array}{c}
\displaystyle
R_{ik}^{\downarrow}=B_{ik}J_{ik}+A_{ik}, \quad i > k, \\[10pt]
\displaystyle
R_{ik}^{\uparrow}=B_{ik}J_{ik}, \quad i < k,
\end{array}
\label{eq_rad_prob}
\end{equation}
\noindent
where $A_{ik}$ and $B_{ik}$ are the Einstein coefficients for spontaneous and stimulated emission, respectively, and $J_{ik}$ is the radiation intensity averaged over the direction and the line profile. To calculate the radiation intensity, we used the large velocity gradient method (Hummer and Rybicki 1985). This method gives a good approximation if the length scale of the change in physical parameters is much greater than the Sobolev length:
\begin{equation}
\Delta z_\mathrm{S} = u_\mathrm{D} \left\vert \frac{\mathrm{d}u(z)}{\mathrm{d}z} \right\vert^{-1},
\end{equation}
\noindent
where $u(z)$ is the gas velocity and $u_\mathrm{D}$ is the line profile width in velocity units (Nesterenok 2020, 2022). The overlap between CH$_3$OH and H$_2$O lines was ignored, since for the masers under consideration line overlap has a minor effect on the pumping (McEwen et al. 2014; Gray et al. 2016). The absorption of radiation in molecular lines by dust was taken into account (Hummer and Rybicki 1985; Nesterenok 2016). The dust temperature behind the shock front, where the pumping of masers occurs, is much lower than the gas temperature, $T_{\rm d} \ll T_{\rm g}$. The maximum dust temperature is reached at the shock peak and is 65~K for the model with parameters $n_{\rm H_2,0} = 10^7$~cm$^{-3}$ and $u_{\rm s} = 30$~km~s$^{-1}$. The dust radiation was ignored in our calculations of the radiation intensity in molecular lines. The system of equations (\ref{eq1}) was solved by the iteration method.
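Per layer, the system of Eqs.~(\ref{eq1}) is linear in the populations once the radiative probabilities are fixed at the current iteration, so each iteration reduces to a dense linear solve with one balance equation replaced by the normalization condition. A minimal sketch with a toy $3\times3$ rate matrix (the numbers are made up; the actual rates come from the collisional and spectroscopic data described above):

```python
# Sketch of one iteration of the statistical-equilibrium solve: transition
# probabilities R_ik + C_ik are collected in a matrix W (toy numbers, not
# actual H2O/CH3OH rates), and the last balance row is replaced by sum(n) = 1.
import numpy as np

def level_populations(W):
    """W[i, k] = total transition probability i -> k (s^-1); diagonal unused."""
    M = W.shape[0]
    A = np.zeros((M, M))
    for i in range(M):
        A[i, :] = W[:, i]                    # gains into level i from all k
        A[i, i] = -W[i, :].sum() + W[i, i]   # losses out of level i
    A[-1, :] = 1.0                           # normalization replaces last row
    b = np.zeros(M)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

W = np.array([[0.0, 1.0, 0.1],
              [2.0, 0.0, 0.5],
              [0.3, 1.5, 0.0]])
n = level_populations(W)   # fractional populations, summing to 1
```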
Once the system of equations (\ref{eq1}) for the molecular energy level populations had been solved, we calculated the gain for transitions with level population inversion. The expression for the gain (which is equal to the absorption coefficient with the opposite sign) for an $i \to k$ transition in the case of a plane-parallel gas-dust cloud is
\begin{equation}
\displaystyle
\gamma_{ik}\left( z, \mu, \nu \right)=\frac{\lambda^2}{8 \pi} A_{ik} n_{\rm m}(z) \, \left(n_i(z) - \frac{g_i}{g_k}n_k(z) \right) \phi \left( z, \mu, \nu \right) - \kappa_{\rm c}(z),
\end{equation}
\noindent
where $\mu$ is the cosine of the angle between the gas flow direction in the shock and the line of sight, $n_{\rm m}(z)$ is the number density of molecules (ortho- or para-H$_2$O, A- or E-type CH$_3$OH spin isomers) at distance $z$ in the shock, $g_i$ and $g_k$ are the statistical weights of the energy levels, and $\kappa_{\rm c}(z)$ is the dust absorption coefficient. The spectral profile of the emission and absorption coefficients in the laboratory frame of reference is given by the expression
\begin{equation}
\phi(z,\mu, \nu) = \tilde{\phi}_\mathrm{ik} \left[ \nu - \nu_\mathrm{ik} \mu u(z) / c \right],
\label{eq_profile_lab}
\end{equation}
\noindent
where $\nu_{ik}$ is the transition frequency and $\tilde{\phi}_\mathrm{ik}(\nu)$ is the normalized spectral profile in the frame of reference associated with the gas flow. For the ortho-H$_2$O $6_{16} \to 5_{23}$ transition at 22.23~GHz it is necessary to take into account the additional line profile broadening due to the hyperfine splitting of energy levels (Varshalovich et al. 2006; Nesterenok and Varshalovich 2011). The spectral profile of the emission and absorption coefficients in this line is the sum of six components with different intensities. At a gas temperature $T_{\rm g} \gtrsim 150$~K the components merge into a single asymmetric profile. For ortho-H$_2$O transitions in the millimeter and submillimeter wavelength ranges the splitting is small compared to the Doppler line profile width.
The optical depth in the line for which there is level population inversion is
\begin{equation}
\displaystyle
\vert \tau_{\mu}(\nu) \vert = \frac{1}{\mu}\int \, \mathrm{d}z \, \gamma_\mathrm{ik}(z,\mu, \nu).
\label{eq_tau}
\end{equation}
\noindent
The parameter $a = 1/\mu$ is equal to the ratio of the amplification path length along the line of sight to the shock width. If the shock is seen edge-on, then $a$ is large and the maser emission is most intense. In theoretical works $a \sim 10$ is assumed to explain the emission of bright H$_2$O masers (see, e.g., Kaufman and Neufeld 1996a). The optical depth increases with $a$ faster than $\propto a$ due to the dependence of the spectral line profile on $\mu$, see Eq. (\ref{eq_profile_lab}) (Nesterenok 2021). The maximum of (\ref{eq_tau}) at fixed $\mu$ is reached at the line center.
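Equation~(\ref{eq_tau}) is a line-of-sight integral of the gain through the shock. A minimal numerical sketch with a made-up Gaussian gain profile (note that the $\mu$-dependence of the spectral line profile, which makes the optical depth grow faster than $\propto a$, is neglected in this simple sketch):

```python
# Sketch of the optical-depth integral |tau_mu| = (1/mu) * int gamma dz.
# The Gaussian gain profile below is a toy stand-in for the actual
# gamma_ik(z) computed from the level populations.
import numpy as np

def optical_depth(z, gamma, mu):
    """Trapezoidal estimate of |tau_mu| for a sampled gain gamma(z)."""
    return np.sum(0.5 * (gamma[1:] + gamma[:-1]) * np.diff(z)) / mu

z = np.linspace(0.0, 1.5e15, 2000)                    # cm, postshock zone
gamma = 1.0e-15 * np.exp(-((z - 5.0e14) / 2.0e14) ** 2)  # cm^-1, toy profile
tau_edge_on = optical_depth(z, gamma, mu=0.1)  # a = 1/mu = 10, edge-on shock
tau_face_on = optical_depth(z, gamma, mu=1.0)
```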
For the brightness temperature in a maser line one can write
\begin{equation}
T_{\rm b} = T_{\rm bg} \exp\left(\vert \tau \vert \right),
\label{eq_br_temp}
\end{equation}
\noindent
where $T_{\rm bg}$ is the background radiation temperature and $\vert \tau \vert$ is the absolute value of the optical depth in the maser line. As the radiation intensity in the maser line increases, the rate of induced transitions (the term proportional to $J_{ik}$ in Eqs. (\ref{eq_rad_prob})) becomes comparable to the rates of collisional and radiative transitions to other levels. In this case, the maser passes to the regime of saturation, and the exponential amplification law changes to a linear one (Strelnitskii 1975). In our paper we did not consider the maser amplification in the saturated regime.
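The unsaturated amplification law of Eq.~(\ref{eq_br_temp}) can be evaluated directly; a trivial sketch with hypothetical input numbers:

```python
# Sketch of the unsaturated maser amplification law T_b = T_bg * exp(|tau|).
# Valid only while the maser is unsaturated, as noted in the text.
import math

def brightness_temperature(T_bg, tau_abs):
    """Brightness temperature for background T_bg (K) and optical depth |tau|."""
    return T_bg * math.exp(tau_abs)

# e.g. a ~3 K background amplified through |tau| = 10 (illustrative numbers)
T_b = brightness_temperature(3.0, 10.0)
```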
\section{Results}
\subsection{Physical Conditions in the Maser Formation Region}
Figure~1 shows plots of the temperature, the relative abundance of H$_2$O and CH$_3$OH molecules, and the gain in maser lines as functions of the distance along the gas flow in the shock. The results are presented for two shock models with speeds $u_{\rm s} = 20$ and 30~km~s$^{-1}$. The preshock gas density for both shock models is $n_{\rm H_2,0} = 10^5$~cm$^{-3}$. The gas temperature rises rapidly to its maximum value (2100 and 3800~K for $u_{\rm s} = 20$ and 30~km~s$^{-1}$, respectively). Subsequently, the gas temperature falls slowly as a result of the reduction in the gas heating rate and of the gas cooling through radiation in molecular lines. At the shock front the icy mantles of dust grains are sputtered, and a sharp rise in the relative abundance of H$_2$O and CH$_3$OH in the gas phase is observed. Behind the shock front the relative abundance of molecules in the gas falls slowly as a result of adsorption on dust grains. The time scale of this process is $t \sim 10^3$~yr for $n_{\rm H_2} = 10^6$~cm$^{-3}$ and $T_{\rm g} = 50$~K. In the hot gas at the shock front there is destruction of methanol molecules in reactions with H atoms and collisional dissociation reactions. For the shock speed $u_{\rm s} = 20$~km~s$^{-1}$ methanol is destroyed incompletely and the CH$_3$OH-to-H$_2$O ratio in the postshock region is 0.03. At a higher speed, $u_{\rm s} = 30$~km~s$^{-1}$, the methanol molecules are destroyed completely in the hot gas at the shock front. On the other hand, the higher the shock speed, the higher the relative H$_2$O abundance in the cooling gas behind the shock front: the H$_2$O column density from the shock peak to the region where the gas temperature drops below 30~K is $N_{\rm H_2O} = 3 \times 10^{17}$ and $5 \times 10^{17}$~cm$^{-2}$ for $u_{\rm s} = 20$ and 30~km~s$^{-1}$, respectively. 
This is because O and OH forming in reactions of molecules with H atoms and collisional dissociation reactions (the destruction of CH$_3$OH, CO$_2$, and other molecules) turn into H$_2$O.
In Figs. 1e and 1f the gain is plotted against the distance for the methanol E~$4_{-1} \to 3_0$ 36.1~GHz and A$^+$~$7_0 \to 6_1$ 44.0~GHz lines, the para-H$_2$O $3_{13} \to 2_{20}$ 183.3~GHz line, and the ortho-H$_2$O $4_{14} \to 3_{21}$ 380.1~GHz and $4_{23} \to 3_{30}$ 448.0~GHz lines. The size of the shock region where the gain in the para-H$_2$O line at 183.3~GHz drops by a factor of 2 from its maximum value is $1.5 \times 10^{15}$~cm for the shock speed $u_{\rm s} = 20$~km~s$^{-1}$ and half as much for $u_{\rm s} = 30$~km~s$^{-1}$. Within this region the gas temperature drops by a factor of 10 (from $T_{\rm g} \approx 1000$~K to 100~K), whereas the gas density increases by a factor of 2 (from $3 n_{\rm H_2,0}$ to $6n_{\rm H_2,0}$ for $u_{\rm s} = 20$~km~s$^{-1}$) and the absolute value of the gas velocity gradient decreases by a factor of 5. The gas temperature is the parameter with the largest gradient and, therefore, the position of the gain peak for the maser transitions is determined mainly by the change in gas temperature. In the region where the gain in the 183.3~GHz line reaches its maximum, the gas temperature is $250-300$~K and the gas density is $n_{\rm H_2} \approx 5n_{\rm H_2,0}$. Higher temperatures are needed for efficient pumping of the 380.1~GHz transition, and the 380.1~GHz maser emission region must be more compact. In the region where the gain in the 380.1~GHz line reaches its maximum the gas temperature is $\approx 600$~K. The population inversion in the 380.1~GHz line vanishes as soon as the gas temperature drops below 100~K. For the shock speed $u_{\rm s} = 20$~km~s$^{-1}$ there is energy level population inversion for the CH$_3$OH 36.1 and 44.0~GHz transitions in the wide region from the shock front to the far postshock zone, where the gas temperature drops to 30~K.
The dust absorption coefficient at 183.3~GHz is $\kappa_{\rm c} \approx (5-10) \times 10^{-21}$~cm$^{-1}$ in the shock region, where the gain in the 183.3~GHz maser line is at a maximum (for $n_{\rm H_2,0} = 10^5$~cm$^{-3}$). Thus, the dust absorption for the maser transitions is negligible.
\subsection{Optical Depth in H$_2$O and CH$_3$OH Maser Lines}
Figure~2 shows the results of our calculations of the optical depth at the line center along the gas flow $\vert \tau_{\rm \mu=1, l.c.}\vert$ for the H$_2$O and CH$_3$OH maser transitions. The calculations are presented for the preshock gas densities $n_{\rm H_2,0} = 10^4$, $10^5$, and $10^6$~cm$^{-3}$. This figure shows all of the inverted H$_2$O transitions for which, according to our calculations, the optical depth $\vert \tau_{\rm \mu=1, l.c.}\vert > 0.1$. According to our calculations, an optimal condition for the pumping of methanol masers is the gas density range $n_{\rm H_2,0} = 10^4 - 10^5$~cm$^{-3}$. At such gas densities the optical depth $\vert \tau_{\rm \mu=1, l.c.}\vert \sim 1$ for the para-H$_2$O transition at 183.3~GHz and the ortho-H$_2$O transitions at 380.1 and 448.0~GHz. At gas densities $n_{\rm H_2,0} \geq 10^6$~cm$^{-3}$ the optical depth for the CH$_3$OH maser transitions is small due to the destruction of methanol molecules in the hot dense gas at the shock front (Nesterenok 2022). At the same time, for many H$_2$O transitions the optical depth $\vert \tau_{\mu=1, l.c.}\vert \geq 0.1$ at such a gas density.
Figure 3 shows the results of our calculations of the optical depth for the H$_2$O maser transitions for the preshock gas density $n_{\rm H_2,0} = 10^7$~cm$^{-3}$. In this case, there is energy level population inversion for a much larger number of H$_2$O transitions than in the case of lower gas densities. The list of transitions is given in Table~2; all transitions belong to the ground vibrational H$_2$O state. According to our calculations, the para-H$_2$O 183.3, 325.1, and 970.3~GHz transitions and the ortho-H$_2$O 22.23, 380.1, 448.0, 620.7, 1296, and 1322~GHz transitions have an optical depth $\vert \tau_{\rm \mu=1, l.c.}\vert > 1$ at least in one of the shock models. If the shock is seen edge-on ($a \sim 10$), then these transitions are strong masers. The well-known ortho-H$_2$O 321.2 and 439.1~GHz maser transitions have an optical depth $\vert \tau_{\rm \mu=1, l.c.}\vert \sim 0.1$ (Fig. 3). In our calculations we obtained a small optical depth $\vert \tau_{\rm \mu=1, l.c.}\vert < 0.1$ (or the absence of energy level population inversion) for the maser transitions of excited vibrational H$_2$O states.
In Fig.~4 the optical depth $\vert \tau_{\rm \mu=1, l.c.}\vert$ for the H$_2$O and CH$_3$OH maser transitions is plotted against the preshock gas density $n_{\rm H_2,0}$; the shock speed in all calculations was set equal to $u_{\rm s} = 20$~km~s$^{-1}$. As the preshock gas density increases, the optical depth in the CH$_3$OH lines decreases, while for the H$_2$O transitions it increases. At gas densities $n_{\rm H_2,0} \approx 10^5$~cm$^{-3}$ a coexistence of class I CH$_3$OH masers and H$_2$O 183.3 and 380.1~GHz masers is possible. The higher the preshock gas density, the narrower the shock front. The length of the postshock region over which the gain in the 183.3~GHz line drops by a factor of 2 from its maximum value is $\approx 10^{14}$~cm for $n_{\rm H_2,0} = 10^7$~cm$^{-3}$ -- an order of magnitude less than that for a preshock gas density of $10^5$~cm$^{-3}$.
~\\
~\\
\begin{tabular}{llll}
\multicolumn{4}{l}{\large\bf Table 2. H$_2$O transitions.} \\ [5pt]
\hline
\multicolumn{2}{c}{ortho-H$_2$O} & \multicolumn{2}{c}{para-H$_2$O} \\ [3pt]
\hline
$6_{1\,6} \to 5_{2\,3}$ & 22.23* & $3_{1\,3} \to 2_{2\,0}$ & 183.3* \\ [3pt]
$10_{2\,9} \to 9_{3\,6}$ & 321.2* & $5_{1\,5} \to 4_{2\,2}$ & 325.1* \\ [3pt]
$4_{1\,4} \to 3_{2\,1}$ & 380.1* & $5_{3\,3} \to 4_{4\,0}$ & 474.6* \\ [3pt]
$6_{4\,3} \to 5_{5\,0}$ & 439.1* & $4_{2\,2} \to 3_{3\,1}$ & 916.1 \\ [3pt]
$4_{2\,3} \to 3_{3\,0}$ & 448.0* & $5_{2\,4} \to 4_{3\,1}$ & 970.3* \\ [3pt]
$5_{3\,2} \to 4_{4\,1}$ & 620.7* & $7_{2\,6} \to 6_{3\,3}$ & 1440 \\ [3pt]
$3_{1\,2} \to 2_{2\,1}$ & 1153 & $6_{3\,3} \to 5_{4\,2}$ & 1541 \\ [3pt]
$6_{3\,4} \to 5_{4\,1}$ & 1158 & & \\ [3pt]
$7_{4\,3} \to 6_{5\,2}$ & 1278* & & \\ [3pt]
$8_{2\,7} \to 7_{3\,4}$ & 1296* & & \\ [3pt]
$6_{2\,5} \to 5_{3\,2}$ & 1322 & & \\ [3pt]
$8_{3\,6} \to 7_{4\,3}$ & 2244 & & \\ [3pt]
\hline
\end{tabular}
~\\
~\\
The table lists the transitions for which the optical depth $\vert \tau_{\rm \mu=1, l.c.}\vert > 0.1$ in at least one of the shock models for the preshock gas density $n_{\rm H_2,0} = 10^7$~cm$^{-3}$. All transitions belong to the ground vibrational H$_2$O state. The H$_2$O transitions whose emission has been observed in astrophysical objects are marked by an asterisk (Neufeld et al. 2017; Pereira-Santaella et al. 2017). The frequencies are given in GHz; ``truncated'' values are used.
~\\
\subsection{Effect of the ortho-/para-H$_2$ Ratio on Maser Pumping}
In the hot gas at the shock front, H$_2$--H collisions are the main para-H$_2$/ortho-H$_2$ interconversion mechanism. If the gas temperature and the number density of H atoms are sufficiently high, then the ortho-/para-H$_2$ ratio has time to reach its equilibrium value determined by the gas temperature (Nesterenok 2019). For shock speeds below some value $u_0$ the para-H$_2$-to-ortho-H$_2$ conversion in the heated gas at the shock front is inefficient; in this case, the main collisional partner of molecules is para-H$_2$. The value of $u_0$ is $\approx 30$ and 20~km~s$^{-1}$ for the preshock gas densities $n_{\rm H_2,0} = 10^4$ and $10^7$~cm$^{-3}$, respectively ($\zeta_{\rm H_2} = 3 \times 10^{-17}$~s$^{-1}$). In particular, in a shock with parameters $n_{\rm H_2,0} = 10^5$~cm$^{-3}$ and $u_{\rm s} = 22.5$~km~s$^{-1}$ the ortho-/para-H$_2$ ratio increases from 0.02 to 0.5 as the gas passes through the shock front. We also performed calculations in which the ortho-/para-H$_2$ ratio was initially set equal to 3 in the shock model with these parameters. The resulting optical depth in the para-H$_2$O 183.3~GHz line differs by only about 3\%. This is explained by the fact that the H$_2$O--H$_2$ collisional rate coefficients differ little between ortho- and para-H$_2$ for gas temperatures above 300~K (Faure et al. 2007). For the CH$_3$OH 36.1, 44.0, and 95.1~GHz transitions the optical depth is smaller by $\approx 30$\% for an ortho-/para-H$_2$ ratio of 3. Thus, the influence of the ortho-/para-H$_2$ ratio on the pumping of H$_2$O masers is minor, while it is significant for CH$_3$OH masers.
\subsection{Effect of the Gas Ionization Rate and the Magnetic Field Strength on the Generation of H$_2$O and CH$_3$OH Maser Emission}
In Fig.~5 the optical depth $\vert \tau_{\rm \mu=1, l.c.}\vert$ is plotted against the shock speed for the H$_2$O, CH$_3$OH, and OH maser transitions; the preshock gas density is $n_{\rm H_2,0} = 10^5$~cm$^{-3}$. Figure~5a presents the results of our calculations for two values of the cosmic-ray ionization rate: $\zeta_{\rm H_2} = 3 \times 10^{-17}$ and $3 \times 10^{-15}$~s$^{-1}$. The results of our calculations for the OH transition between the sublevels of the ground rotational state $^2\Pi_{3/2}$~$j = 3/2$ at 1720 MHz were taken from Nesterenok (2022). At high cosmic-ray ionization rates methanol is destroyed in the postshock region in ion--molecule reactions and in photodissociation by cosmic-ray-induced ultraviolet radiation. The optical depths in the CH$_3$OH maser transitions are considerably smaller at high values of the gas ionization rate than at low ones (Fig.~5a) (see also Nesterenok (2022)). In contrast, the OH molecule is formed in H$_2$O photodissociation reactions and ion--molecule reactions involving H$_3$O$^+$. High gas ionization rates, $\zeta_{\rm H_2} = 10^{-15}$~s$^{-1}$, are needed for the existence of OH 1720~MHz maser emission. The fraction of the H$_2$O molecules destroyed in the maser zone as a result of these reactions is $\sim 5$\% (for $\zeta_{\rm H_2} = 3 \times 10^{-15}$~s$^{-1}$). Therefore, the influence of the gas ionization rate on the optical depth for the H$_2$O maser transitions is minor. Figure~5b presents the optical depths in the H$_2$O and CH$_3$OH maser lines for two values of the magnetic field ($\beta = 0.5$ and 1), while the gas ionization rate in both cases is $\zeta_{\rm H_2} = 3 \times 10^{-17}$ s$^{-1}$. The lower the value of the magnetic field, the narrower the shock front and the higher the gas temperature at the shock front. As a result, the optical depths in the H$_2$O and CH$_3$OH lines are smaller in the case of a weaker magnetic field.
The icy mantles of dust grains are sputtered at lower shock speeds. Therefore, the curve of the dependence of the optical depth on the shock speed is shifted leftward for $\beta = 0.5$ (Fig.~5b).
Table~3 gives the brightness temperature of masers calculated from Eq.~(\ref{eq_br_temp}) for the shock parameters $n_{\rm H_2,0} = 10^5$~cm$^{-3}$, $u_{\rm s} = 17.5$~km~s$^{-1}$, $\beta = 1$, and two values of the gas ionization rate, $\zeta_{\rm H_2} = 3 \times 10^{-17}$ and $3 \times 10^{-15}$ s$^{-1}$. The parameter $a = 1/\mu$ was chosen to be 5 in these estimates (where $\mu$ is the cosine of the angle between the line of sight and the shock velocity direction). The background radiation temperature was set equal to $T_{\rm bg} = 3$~K for CH$_3$OH and H$_2$O masers and $T_{\rm bg} = 50$~K for an OH 1720~MHz maser (Hoffman et al. 2005). According to our estimates, the CH$_3$OH masers pass to the saturation regime when the brightness temperature becomes $T_{\rm b,sat} \sim 10^7$~K, while the para-H$_2$O 183.3~GHz maser becomes saturated at $T_{\rm b,sat} \sim 10^9$~K. The lower limits on the brightness temperature given in Table~3 are equal to the maximum brightness temperature of a maser in the unsaturated regime $T_{\rm b,sat}$. In this case, the maser saturation should be taken into account in the brightness temperature calculations, which is beyond the scope of this study. Thus, for the gas ionization rate $\zeta_{\rm H_2} = 3 \times 10^{-15}$~s$^{-1}$, the preshock gas density $n_{\rm H_2,0} = 10^5$~cm$^{-3}$, and the shock speed $u_{\rm s} \approx 20$~km~s$^{-1}$ a coexistence of class I CH$_3$OH, H$_2$O (183.3~GHz), and OH (1720~MHz) masers in one source is possible (provided that the shock velocity direction is perpendicular to the line of sight, $a \sim 5$).
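The scaling behind these estimates can be illustrated with a minimal sketch, assuming simple unsaturated exponential amplification of the background radiation along the line of sight. This is an illustrative simplification, not the transfer calculation actually used in the paper; the function name and form are hypothetical.

```python
import math

def maser_brightness_temp(tau_lc, t_bg, a):
    """Unsaturated maser brightness temperature: background temperature t_bg
    amplified exponentially along the line of sight. The line-center optical
    depth along the gas flow, |tau_lc|, is scaled by a = 1/mu for a shock
    viewed at an angle whose cosine is mu (illustrative simplification)."""
    return t_bg * math.exp(a * abs(tau_lc))

# A larger a (shock closer to edge-on) lengthens the amplification path and
# raises T_b exponentially, which is why a ~ 5-10 yields very high values.
print(maser_brightness_temp(1.0, 3.0, 5))  # ~445 K for |tau| = 1, a = 5
```

In this simplified picture, reproducing the $2 \times 10^8$~K entry for the para-H$_2$O 183.3~GHz line in Table~3 from a 3~K background would require an effective amplification $a\vert\tau\vert \approx 18$.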
~\\
{{\bf Table 3.} Brightness temperature of OH, CH$_3$OH, and H$_2$O masers} \\ [5pt]
\label{table3}
\vspace{5mm}
\begin{tabular}{lcc}
\hline
Transition & $\zeta_{\rm H_2} = 3 \times 10^{-17}$~s$^{-1}$ & $\zeta_{\rm H_2} = 3 \times 10^{-15}$~s$^{-1}$ \\ [3pt]
\hline
1720~MHz (OH) & -- & $5 \times 10^4$~K \\ [3pt]
44.0~GHz (CH$_3$OH) & $> 10^7$~K & $\sim 10^7$~K \\ [3pt]
36.1~GHz (CH$_3$OH) & $> 10^7$~K & $10^4$~K \\ [3pt]
183.3~GHz (H$_2$O) & $2 \times 10^8$~K & $5 \times 10^8$~K \\ [3pt]
\hline
\end{tabular}
~\\
~\\
The shock parameters are $n_{\rm H_2,0} = 10^5$~cm$^{-3}$, $u_{\rm s} = 17.5$~km~s$^{-1}$, and $\beta = 1$; the ratio of the maser amplification path length to the shock width is $1/\mu = 5$. The lower limit on the brightness temperature for CH$_3$OH masers implies that the masers are saturated.
~\\
\section{Discussion}
\subsection{Comparison with the Results of Previous Studies}
Kaufman and Neufeld (1996a) published a C-type shock model and studied the generation of H$_2$O maser emission in shocks of this type. They considered preshock gas densities $n_{\rm H_2,0} = 10^7-10^{9.5}$~cm$^{-3}$, but ignored the interaction of dust grains and gas (adsorption, desorption, the sputtering of icy grain mantles). The gas-phase O-to-H$_2$O conversion reactions were the source of H$_2$O in the gas in their model, whereas in our model H$_2$O is formed on dust grains. At the shock front H$_2$O ends up in the gas as a result of the sputtering of icy grain mantles; the gas-phase H$_2$O formation reactions also make a contribution. Figure 6 shows the average Sobolev optical depth $\bar{\tau}_{\rm S}$ for the H$_2$O 183.3 and 380.1~GHz maser transitions obtained in our calculations and in Kaufman and Neufeld (1996a) (see Fig.~8 in their paper). The results are presented for the preshock gas density $n_{\rm H_2,0} = 10^7$~cm$^{-3}$ (for the determination of the Sobolev optical depth and the method of averaging this parameter in the maser zone, see Kaufman and Neufeld 1996a and 1996b). The relative H$_2$O abundance in the postshock gas is $x_{\rm H_2O} \approx 4 \times 10^{-4}$ in the model of Kaufman and Neufeld (1996a, 1996b). In our model the maximum relative H$_2$O abundance in the cooling postshock gas is $x_{\rm H_2O} \approx 10^{-4}$ -- a factor of 4 lower. In addition, according to our calculations, the width of the postshock region where the maser emission is generated is a factor of $1.5-5$ smaller than that in the model of Kaufman and Neufeld (1996a, 1996b). The lowest shock speed at which the icy mantles of dust grains are sputtered is $17.5-20$~km~s$^{-1}$ for $n_{\rm H_2,0} = 10^7$~cm$^{-3}$ -- this explains the absence of maser emission at low shock speeds in our model. 
At low shock speeds no evaporation of the icy mantles of dust grains due to grain heating occurs, since the dust temperature is not high enough in our shock model ($T_{\rm d} \lesssim 30$~K for $u_{\rm s} = 10$~km~s$^{-1}$). However, this effect can take place for other dust model parameters or at a higher preshock gas density (Hartquist et al. 1995). At shock speeds $u_{\rm s} > 30$~km~s$^{-1}$ the dissociation of H$_2$ molecules occurs at the shock front, and the shock becomes a J-type shock. These effects explain the difference between the results of our calculations and the results of Kaufman and Neufeld (1996a). Flower and Pineau des For\^{e}ts (2010) also studied the H$_2$O excitation and emission in C-type shocks, but the results for inverted transitions were not discussed in their paper.
Cernicharo et al. (1994) performed numerical simulations of the pumping of para-H$_2$O masers using the large velocity gradient method to solve the radiative transfer equation. They showed that there is level population inversion for the para-H$_2$O 183.3 and 325.1~GHz transitions at relatively low temperatures and gas densities: $T_{\rm g} \approx 100$~K and $n_{\rm H_2} \gtrsim 10^5$~cm$^{-3}$. It also follows from our calculations that efficient pumping of para-H$_2$O 183.3~GHz and ortho-H$_2$O 380.1 and 448.0~GHz masers occurs at low gas densities: $\vert \tau_{\rm \mu=1, l.c.}\vert \sim 1$ for the preshock gas density $n_{\rm H_2,0} = 10^5$~cm$^{-3}$ (Fig.~2). There is energy level population inversion for the para-H$_2$O 325.1~GHz transition for $n_{\rm H_2,0} = 10^5$~cm$^{-3}$, but the optical depth is small, $\vert \tau_{\rm \mu=1, l.c.}\vert < 0.1$. High gas densities, $n_{\rm H_2,0} \gtrsim 10^6$~cm$^{-3}$ (Figs.~2 and 3), are needed for the generation of an intense ortho-H$_2$O 22.23~GHz maser emission ($\vert \tau_{\rm \mu=1, l.c.}\vert \sim 1$).
\subsection{Observations of H$_2$O 183.3, 380.1, and 448.0~GHz Masers}
Cernicharo et al. (1990, 1994) discovered spatially extended para-H$_2$O 183.3~GHz maser emission in Orion A-IRc2 using observations with the IRAM telescope. The size of the region from which the emission comes is $80''$, or $\approx 0.2$~pc, which is much larger than the sizes of the maser spots observed in the 22.23~GHz line in the same object, $\sim 10^{14}-10^{15}$~cm (Genzel et al. 1981). Doty (2000) used the model of a molecular core with a protostar at the center to study the excitation of the para-H$_2$O 183.3~GHz maser line and showed that a high relative H$_2$O abundance, $x_{\rm H_2O} \sim 10^{-5}$, much higher than the relative gas-phase H$_2$O abundance in the cold parts of molecular cores, $x_{\rm H_2O} \lesssim 10^{-7}$ (van Dishoeck et al. 2013), is needed to explain the observational data in Orion A-IRc2. This implies that mechanisms of H$_2$O liberation from the icy mantles of dust grains, such as shocks, need to be invoked.
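The quoted angular-to-linear conversion is easy to check; a minimal sketch, assuming a distance to Orion of $\approx 414$~pc (an assumed value; the paper does not state the distance used):

```python
import math

ARCSEC_RAD = math.pi / (180 * 3600)   # one arcsecond in radians

def arcsec_to_pc(theta_arcsec, dist_pc):
    """Linear size (pc) subtended by an angle theta_arcsec at distance dist_pc."""
    return theta_arcsec * ARCSEC_RAD * dist_pc

# 80 arcsec at an assumed ~414 pc gives ~0.16 pc, consistent with the ~0.2 pc
# quoted for the 183.3 GHz emission region in Orion A-IRc2.
size_pc = arcsec_to_pc(80.0, 414.0)
```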
The observations of the para-H$_2$O 183.3~GHz emission were carried out toward the low-mass protostars HH7-11, L1448-mm, and Serpens SMM1 (Cernicharo et al. 1996; van Kempen et al. 2009). Cernicharo et al. (1996) published the observations of the para-H$_2$O 183.3~GHz emission toward the group of Herbig-Haro objects HH7-11 performed with the IRAM telescope. The emission is generated in a vast region ($>10''$ or $>0.015$~pc) that spatially coincides with the flows near the close binary system SVS~13 visible in CO lines. The emission spectrum has a high-velocity component; the spatial emission peak of this component coincides with HH11. The variability of the radiation intensity points to the maser effect. The brightness temperature of the emission component is $\sim 10$~K. If the emission is assumed to be generated in compact sources whose sizes are much smaller than the telescope's angular resolution ($15''$ or 0.02~pc), then the brightness temperature of the 183.3~GHz masers is $T_{\rm b} \gg 10$~K. The H$_2$O masers at 22.23~GHz are located near SVS~13 within $0.3''$ (Rodr{\'{\i}}guez et al. 2002).
Van Kempen et al. (2009) published the observations in the 183.3~GHz line toward the low-mass protostar Serpens SMM1 performed with SMA. The SMA beam sizes were $3'' \times 4''$, corresponding to a linear distance in the source of $\approx 1500$~AU (if the distance to the object is taken to be 440~pc; Ortiz-Le{\'o}n et al. 2017). Three spatial emission components that coincide with the flow from the protostar and lie at a distance of $1500-3500$~AU from it were detected in Serpens SMM1. The brightness temperature of the components lies in the range $1000-2000$~K, where it was assumed in the estimates that the emission completely fills the SMA beam. If the size of the 183.3~GHz emission region is assumed to be $10^{15}$~cm (corresponding to the shock model with the preshock gas density $n_{\rm H_2,0} = 10^5$~cm$^{-3}$), then the brightness temperature in the maser line for the brightest component is $T_{\rm b} \approx 10^6$~K. Such brightness temperatures of the H$_2$O 183.3~GHz maser are reproduced in a C-type shock model with parameters $n_{\rm H_2,0} = 10^5$~cm$^{-3}$, $u_{\rm s} \gtrsim 17.5$~km~s$^{-1}$, and $a \approx 3$. Moscadelli et al. (2006) observed H$_2$O 22.23~GHz masers toward Serpens SMM1 with VLBA. The 22.23~GHz maser emission sources are located at a distance of $\approx 10-20$~AU from the protostar (probably inside an accretion disk), while the sizes of the maser spots are $<5$~AU. Thus, the para-H$_2$O 183.3~GHz masers, along with the class I CH$_3$OH masers, are indicators of the gas flows interacting with the protostar envelope and the interstellar medium, whereas the H$_2$O 22.23~GHz masers emerge in the immediate vicinity of protostars.
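The rescaling of the beam-averaged brightness temperature to a compact source is a solid-angle correction. The arithmetic can be sketched as follows, approximating the beam by its $3'' \times 4''$ extent and neglecting Gaussian shape factors (an illustrative simplification, not the authors' procedure):

```python
import math

PC_CM = 3.086e18                      # one parsec in cm
ARCSEC_RAD = math.pi / (180 * 3600)   # one arcsecond in radians

def angular_size_arcsec(linear_cm, dist_pc):
    """Angular size (arcsec) of a source of extent linear_cm at distance dist_pc."""
    return linear_cm / (dist_pc * PC_CM) / ARCSEC_RAD

def beam_corrected_tb(t_obs, beam_maj, beam_min, src_arcsec):
    """Scale a beam-averaged brightness temperature by the beam-to-source
    solid-angle ratio (all angles in arcsec; shape factors neglected)."""
    return t_obs * (beam_maj * beam_min) / src_arcsec**2

theta = angular_size_arcsec(1e15, 440.0)          # ~0.15 arcsec
t_b = beam_corrected_tb(2000.0, 3.0, 4.0, theta)  # ~1e6 K, as quoted in the text
```

A $10^{15}$~cm source at 440~pc subtends only $\sim 0.15''$, so the $3'' \times 4''$ beam dilutes its brightness temperature by a factor of $\sim 500$, raising the observed $\sim 2000$~K to $\sim 10^6$~K.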
The generation of para-H$_2$O 183.3~GHz and ortho-H$_2$O 380.1 and 448.0~GHz maser emission is possible at lower gas densities than those for the ortho-H$_2$O 22.23~GHz transition. The emission in these lines provides an additional possibility for diagnosing the physical conditions in astrophysical objects (for example, K\"{o}nig et al. 2017). The ortho-H$_2$O 380.1 and 448.0~GHz transitions cannot be observed with ground-based telescopes in astrophysical objects of our Galaxy due to absorption in the Earth's atmosphere. However, these transitions can be observed toward galaxies in the local Universe and at cosmological distances, where the emission in these lines is shifted to a frequency range accessible to observation (Pereira-Santaella et al. 2017; Kuo et al. 2019; Yang et al. 2020). In particular, using the ALMA radio interferometer, Kuo et al. (2019) observed ortho-H$_2$O 380.1~GHz emission toward the lensed quasar QSO MG J0414+0534 at redshift $z = 2.639$. The recorded emission may be maser in nature: according to the estimates by Kuo et al. (2019), the isotropic (unlensed) line luminosity is $\approx 5 \times 10^6 L_{\odot}$.
\subsection{Absence of H$_2$O 22.23~GHz Masers Associated with Supernova Remnants}
Claussen et al. (1999) searched for H$_2$O 22.23~GHz emission toward three supernova remnants in which the OH maser emission at 1720~MHz was known: W28, W44, and IC 443. Woodall and Gray (2007) searched for 22.23~GHz emission toward 18 supernova remnants (their sample also included supernova remnants where no OH emission was recorded). The 22.23~GHz emission was not detected in any of the sources. Using the C- and J-type shock models, Woodall and Gray (2007) performed numerical simulations of the pumping of H$_2$O 22.23~GHz masers. The preshock gas density in their numerical simulations varied in the range $n_{\rm H_2,0} = 10^3-10^5$~cm$^{-3}$ -- the collisional pumping of OH masers at 1720~MHz is efficient precisely at these preshock gas densities. Woodall and Gray (2007) showed that there is no H$_2$O 22.23~GHz maser emission at these gas densities. The same conclusions follow from our calculations -- the optical depth in the 22.23~GHz line is $\vert \tau_{\rm \mu=1, l.c.}\vert \lesssim 0.05$ for the preshock gas density $n_{\rm H_2,0} = 10^5$~cm$^{-3}$. At the same time, the generation of para-H$_2$O 183.3~GHz maser emission is possible at this preshock gas density (as is emission in the ortho-H$_2$O 380.1 and 448.0~GHz lines, but these lines are difficult to observe in Galactic objects due to absorption in the Earth's atmosphere). Note that OH 1720~MHz and CH$_3$OH 36.1 and 44.0~GHz maser emission was observed near the supernova remnants W28 and W44 (Pihlstr\"{o}m et al. 2014; McEwen et al. 2016).
\section{Conclusions}
We investigated the collisional pumping of H$_2$O and CH$_3$OH masers in C-type shocks. Within the models considered here we showed that the para-H$_2$O 183.3~GHz and ortho-H$_2$O 380.1 and 448.0~GHz transitions could be inverted at relatively low preshock gas densities, $n_{\rm H_2,0} \approx 10^5$~cm$^{-3}$. The generation of H$_2$O and CH$_3$OH maser emission in the same postshock region is possible at these gas densities and shock velocities $u_{\rm s} = 17.5-22.5$~km~s$^{-1}$. We showed that the effect of the ortho-/para-H$_2$ ratio on the pumping of H$_2$O masers in a shock is minor, while it is significant for the pumping of CH$_3$OH masers. No H$_2$O 22.23~GHz maser emission associated with supernova remnants has been detected previously. The relatively low gas densities in shocks in supernova remnants are most likely responsible for the absence of 22.23~GHz maser emission. According to our calculations, for preshock gas densities $n_{\rm H_2,0} \leq 10^5$~cm$^{-3}$ the optical depth in the 22.23~GHz line along the gas flow in the shock is small, $< 0.05$. Our numerical simulations suggest that para-H$_2$O 183.3~GHz emission can be detected in those supernova remnant regions where the 1720~MHz OH and class I CH$_3$OH maser emission is generated. The para-H$_2$O 183.3~GHz maser emission provides an additional possibility to investigate the physical conditions in protostellar flows in star-forming regions and near supernova remnants.
For me it is a great honor to dedicate this paper to the memory of my teacher and scientific adviser, academician Dmitrii Aleksandrovich Varshalovich (1934--2020) of the Russian Academy of Sciences. Under his leadership I began to investigate the interstellar medium and cosmic masers. Dmitrii Aleksandrovich will always remain in memory as an outstanding scientist and a remarkable man.
\section{References}
\noindent
1. A. C. A. Boogert, P. A. Gerakines, and D. C. B. Whittet,
Ann. Rev. Astron. Astrophys. {\bf 53}, 541 (2015).
\noindent
2. J. Cernicharo, C. Thum, H. Hein, D. John, P. Garcia, and F. Mattioco, Astron. Astrophys. {\bf 231}, L15 (1990).
\noindent
3. J. Cernicharo, E. Gonz\'{a}lez-Alfonso, J. Alcolea, R. Bachiller, and D. John, Astrophys. J. {\bf 432}, L59 (1994).
\noindent
4. J. Cernicharo, R. Bachiller, and E. Gonz\'{a}lez-Alfonso, Astron. Astrophys. {\bf 305}, L5 (1996).
\noindent
5. J. Cernicharo, J. R. Pardo, E. Gonz\'{a}lez-Alfonso, E. Serabyn, T. G. Phillips, D. J. Benford, and D. Mehringer, Astrophys. J. {\bf 520}, L131 (1999).
\noindent
6. M. J. Claussen, W. M. Goss, and D. A. Frail, Astron. J. {\bf 117}, 1387 (1999).
\noindent
7. R. M. Crutcher, B. Wandelt, C. Heiles, E. Falgarone, and T. H. Troland, Astrophys. J. {\bf 725}, 466 (2010).
\noindent
8. A. Dalgarno, Proc. Natl. Acad. Sci. U. S. A. {\bf 103}, 12269 (2006).
\noindent
9. F. Daniel and J. Cernicharo, Astron. Astrophys. {\bf 553}, A70 (2013).
\noindent
10. F. Daniel, A. Faure, P. J. Dagdigian, M.-L. Dubernet, F. Lique, and G. Pineau des For\^{e}ts, Mon. Not. R. Astron. Soc. {\bf 446}, 2312 (2015).
\noindent
11. E. F. van Dishoeck, E. Herbst, and D. A. Neufeld, Chem. Rev. {\bf 113}, 9043 (2013).
\noindent
12. S. D. Doty, Astrophys. J. {\bf 535}, 907 (2000).
\noindent
13. B. T. Draine and C. F. McKee, Ann. Rev. Astron. Astrophys. {\bf 31}, 373 (1993).
\noindent
14. A. E. Dudorov, Sov. Astron. {\bf 35}, 342 (1991).
\noindent
15. M. Elitzur, D. J. Hollenbach, and C. F. McKee, Astrophys. J. {\bf 346}, 983 (1989).
\noindent
16. M. Emprechtinger, D. C. Lis, R. Rolffs, P. Schilke, R. R. Monje, C. Comito, C. Ceccarelli, D. A. Neufeld, et al., Astrophys. J. {\bf 765}, 61 (2013).
\noindent
17. A. Faure, N. Crimier, C. Ceccarelli, P. Valiron, L. Wiesenfeld, and M. L. Dubernet, Astron. Astrophys. {\bf 472}, 1029 (2007).
\noindent
18. A. Faure and E. Josselin, Astron. Astrophys. {\bf 492}, 257 (2008).
\noindent
19. D. R. Flower and G. Pineau des For\^{e}ts, Mon. Not. R. Astron. Soc. {\bf 406}, 1745 (2010).
\noindent
20. R. Genzel, M. J. Reid, J. M. Moran, and D. Downes, Astrophys. J. {\bf 244}, 884 (1981).
\noindent
21. I. E. Gordon, L. S. Rothman, R. J. Hargreaves, R. Hashemi, E. V. Karlovets, F. M. Skinner, E. K. Conway, C. Hill, et al., J. Quant. Spectrosc. Radiat. Transfer {\bf 277}, 107949 (2022).
\noindent
22. M. D. Gray, A. Baudry, A. M. S. Richards, E. M. L. Humphreys, A. M. Sobolev, and J. A. Yates, Mon. Not. R. Astron. Soc. {\bf 456}, 374 (2016).
\noindent
23. M. D. Gray, S. Etoka, A. M. S. Richards, and B. Pimpanuwat,
Mon. Not. R. Astron. Soc. {\bf 513}, 1354 (2022).
\noindent
24. S. Green, S. Maluendes, and A. D. McLean, Astrophys. J. Suppl. Ser. {\bf 85}, 181 (1993).
\noindent
25. T. W. Hartquist, K. M. Menten, S. Lepp, and A. Dalgarno, Mon. Not. R. Astron. Soc. {\bf 272}, 184 (1995).
\noindent
26. I. M. Hoffman, W. M. Goss, C. L. Brogan, and M. J. Claussen, Astrophys. J. {\bf 627}, 803 (2005).
\noindent
27. D. Hollenbach, M. Elitzur, and C. F. McKee, Astrophys. J. {\bf 773}, 70 (2013).
\noindent
28. D. G. Hummer and G. B. Rybicki, Astrophys. J. {\bf 293}, 258 (1985).
\noindent
29. M. J. Kaufman and D. A. Neufeld, Astrophys. J. {\bf 456}, 250 (1996a).
\noindent
30. M. J. Kaufman and D. A. Neufeld, Astrophys. J. {\bf 456}, 611 (1996b).
\noindent
31. T. A. van Kempen, D. Wilner, and M. Gurwell, Astrophys. J. {\bf 706}, L22 (2009).
\noindent
32. S. K\"{o}nig, S. Mart{\'{\i}}n, S. Muller, J. Cernicharo, K. Sakamoto, L. K. Zschaechner, E. M. L. Humphreys, T. Mroczkowski, et al., Astron. Astrophys. {\bf 602}, A42 (2017).
\noindent
33. C.-Y. Kuo, S. H. Suyu, V. Impellizzeri, and J. A. Braatz, Publ. Astron. Soc. Jpn. {\bf 71}, 57 (2019).
\noindent
34. S. Leurini, K. M. Menten, and C. M. Walmsley, Astron. Astrophys. {\bf 592}, A31 (2016).
\noindent
35. B. C. McEwen, Y. M. Pihlstr\"{o}m, and L. O. Sjouwerman, Astrophys. J. {\bf 793}, 133 (2014).
\noindent
36. B. C. McEwen, Y. M. Pihlstr\"{o}m, and L. O. Sjouwerman, Astrophys. J. {\bf 826}, 189 (2016).
\noindent
37. L. Moscadelli, L. Testi, R. S. Furuya, C. Goddi, M. Claussen, Y. Kitamura, and A. Wootten, Astron. Astrophys. {\bf 446}, 985 (2006).
\noindent
38. A. V. Nesterenok, Astron. Lett. {\bf 39}, 717 (2013).
\noindent
39. A. V. Nesterenok, Mon. Not. R. Astron. Soc. {\bf 455}, 3978 (2016).
\noindent
40. A. V. Nesterenok, Astrophys. Space Sci. {\bf 363}, 151 (2018).
\noindent
41. A. V. Nesterenok, Astron. Lett. {\bf 46}, 449 (2020).
\noindent
42. A. V. Nesterenok, J. Phys.: Conf. Ser. {\bf 2103}, 012012 (2021).
\noindent
43. A. V. Nesterenok, Mon. Not. R. Astron. Soc. {\bf 509}, 4555 (2022).
\noindent
44. A. V. Nesterenok and D. A. Varshalovich, Astron. Lett. {\bf 37}, 456 (2011).
\noindent
45. A. V. Nesterenok, D. Bossion, Y. Scribano, and F. Lique, Mon. Not. R. Astron. Soc. {\bf 489}, 4520 (2019).
\noindent
46. D. A. Neufeld and G. J. Melnick, Astrophys. J. {\bf 368}, 215 (1991).
\noindent
47. D. A. Neufeld, G. J. Melnick, M. J. Kaufman, H. Wiesemeyer, R. G\"{u}sten, A. Kraus, K. M. Menten, O. Ricken, et al., Astrophys. J. {\bf 843}, 94 (2017).
\noindent
48. G. N. Ortiz-Le\'{o}n, S. A. Dzib, M. A. Kounkel, L. Loinard, A. J. Mioduszewski, L. F. Rodr{\'{\i}}guez, R. M. Torres, G. Pech, et al., Astrophys. J. {\bf 834}, 143 (2017).
\noindent
49. M. Pereira-Santaella, E. Gonz{\'a}lez-Alfonso, A. Usero, S. Garc{\'{\i}}a-Burillo, J. Mart{\'{\i}}n-Pintado, L. Colina, A. Alonso-Herrero, S. Arribas, et al., Astron. Astrophys. {\bf 601}, L3 (2017).
\noindent
50. Y. M. Pihlstr\"{o}m, L. O. Sjouwerman, D. A. Frail, M. J. Claussen, R. A. Mesler, and B. C. McEwen, Astron. J. {\bf 147}, 73 (2014).
\noindent
51. L. F. Rodr{\'{\i}}guez, G. Anglada, J. M. Torrelles, J. E. Mendoza-Torres, A. D. Haschick, and P. T. P. Ho, Astron. Astrophys. {\bf 389}, 572 (2002).
\noindent
52. S. V. Salii, A. M. Sobolev, and N. D. Kalinina, Astron. Rep. {\bf 46}, 955 (2002).
\noindent
53. C. N. Shingledecker, J. B. Bergner, R. Le Gal, K. I. \"{O}berg, U. Hincelin, and E. Herbst, Astrophys. J. {\bf 830}, 151 (2016).
\noindent
54. V. S. Strelnitskii, Sov. Phys. Usp. {\bf 17}, 507 (1975).
\noindent
55. D. A. Varshalovich, A. V. Ivanchik, and N. S. Babkovskaya, Astron. Lett. {\bf 32}, 29 (2006).
\noindent
56. M. A. Voronkov, J. L. Caswell, S. P. Ellingsen, J. A. Green, and S. L. Breen, Mon. Not. R. Astron. Soc. {\bf 439}, 2584 (2014).
\noindent
57. N. Watanabe and A. Kouchi, Astrophys. J. {\bf 571}, L173 (2002).
\noindent
58. J. M. Woodall and M. D. Gray, Mon. Not. R. Astron. Soc. {\bf 378}, L20 (2007).
\noindent
59. C. Yang, E. Gonz{\'a}lez-Alfonso, A. Omont, M. Pereira-Santaella, J. Fischer, A. Beelen, and R. Gavazzi, Astron. Astrophys. {\bf 634}, L3 (2020).
\noindent
60. J. A. Yates, D. Field, and M. D. Gray, Mon. Not. R. Astron. Soc. {\bf 285}, 303 (1997).
~\\
~\\
Translated by V. Astakhov