\subsection{Density of localised states}
Let $i_{0}$ denote a generic end site of the semi-infinite chain considered (denoted simply by $0$ in the main text).
Recall first that $\Delta_{i_{0}}(\omega)=\pi J^{2}D_{i_{0}+1}(\omega)$, with $D_{i_{0}+1}(\omega)=-\mathrm{Im}G_{i_{0}+1}^{(i_{0})}(\omega)$ the LDoS for site $i_{0}+1$. Our self-consistent theory for the extended phase yields $\Delta_\mathrm{typ}(\omega) \geq 0$, with $\Delta_\mathrm{typ}(\omega)=\pi J^{2}D_{\mathrm{typ}}(\omega)$ thus a direct measure of the typical LDoS. $\Delta_\mathrm{typ}(\omega)$ can vanish for two reasons. The first, physically non-trivial, reason is that $\omega$ approaches a ME; the second corresponds simply to $\omega$ approaching a spectral band edge. Each of these is contained in the solutions to Eq.\ \eqref{eq:deltat-leading} (with the resultant MEs, and band edges $\bed{\pm}$, given in the main text).
In the localised phase by contrast, the central quantity is $y_\mathrm{typ}(\omega)$, which diverges as $\omega$ approaches a ME. The band edges in this regime (denoted by $\bel{\pm}$) can be inferred from the averaged DoS, given by
\begin{equation}
\label{eq:A1}
D(\omega)~=~\big\langle\big\langle
D_{i_{0}}(\omega)
\big\rangle\big\rangle
~=~\frac{1}{N}\sum_{i_{0}}\int_{0}^{2\pi}\frac{d\phi}{2\pi}~D_{i_{0}}(\omega;\phi)
\end{equation}
with the average over both $\phi \in [0,2\pi]$ and end sites $i_{0}$ (the number of which is denoted by $N$). For the leading-order theory considered here, the local propagator $G_{i_{0}}^{(i_{0}-1)}(\omega) =[\omega+i\eta-V\epsilon_{i_{0}}+i\Delta_\mathrm{typ}(\omega)]^{-1}$.
Hence, for the regime of localised states in which $\Delta_\mathrm{typ}(\omega) \propto \eta =0^{+}$, we have $D(\omega)\equiv D_{L}(\omega) =\langle\langle\delta(\omega -V\epsilon_{i_{0}})\rangle\rangle$, i.e.\
\begin{equation}
\label{eq:A2}
D_{L}^{\phantom\dagger}(\omega) ~=~ \frac{1}{N}\sum_{i_{0}}^{}\int_{0}^{2\pi}\frac{d\phi}{2\pi}~\delta\big(\omega -V\epsilon_{i_{0}}^{\phantom\dagger}(\phi)\big).
\end{equation}
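For reference, performing the $\phi$-integral over the $\delta$-function in Eq.\ \eqref{eq:A2} in the standard way gives
\begin{equation}
D_{L}^{\phantom\dagger}(\omega) ~=~ \frac{1}{N}\sum_{i_{0}}\sum_{\phi^{\ast}}\frac{1}{2\pi V\,\big\vert\partial_{\phi}\epsilon_{i_{0}}^{\phantom\dagger}(\phi)\big\vert_{\phi=\phi^{\ast}}},
\end{equation}
with the inner sum running over the solutions $\phi^{\ast}\in[0,2\pi)$ of $V\epsilon_{i_{0}}(\phi^{\ast})=\omega$.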
For the $\beta$-model~\cite{ganeshan2015nearest} (with $\epsilon_{i_{0}}$ from Eq.\ \eqref {eq:beta-model}), the $\phi$-integral in Eq.\ \eqref{eq:A2} is independent of the site index $i_{0}$ (whence the site average $N^{-1}\sum_{i_{0}}$ is in effect redundant); and evaluation of Eq.\ \eqref{eq:A2} gives
\begin{equation}
\label{eq:A3}
D_{L}^{\phantom\dagger}(\omega) ~=~\frac{1}{\pi\left(1+\frac{\beta \omega}{V}\right)\sqrt{
\big(\beta \omega+V \big)^{2} -\omega^{2}}},
\end{equation}
holding for $\omega^{2} \leq (\beta\omega+V)^{2}$, with the equality giving the band edges $\bel{\pm}=\pm V/(1\mp \beta)$. Note that for $V=V_{-}$ [$V_{+}$], below [above] which all states are extended [localised] and at which the ME coincides with a band edge, the band edge $\bel{+}$ [$\bel{-}$] correctly coincides with $\bed{+}$ [$\bed{-}$] arising from the vanishing of Eq.\ \eqref{eq:deltat-leading} for $\Delta_\mathrm{typ}(\omega)$. For the $l=2$ mosaic model~\cite{wang2020onedimensional} (Eq.\ \ref{eq:mosaic-model}), half the end sites $i_{0}$ have an $\epsilon_{i_{0}}$ of AAH form while the remainder have $\epsilon_{i_{0}}=0$, with the $\phi$-integrals in each case again independent of $i_{0}$; yielding
\begin{equation}
\label{eq:A5}
D_{L}^{\phantom\dagger}(\omega) ~=~\frac{1}{2\pi\sqrt{V^{2} -\omega^{2}}}~+~\tfrac{1}{2}\delta (\omega)
\end{equation}
with $\bel{\pm}=\pm V$. Once again, for $V=V_{-}$ the MEs and band edges coincide, with
$\omega_{\mathrm{ME},\pm} =\bel{\pm}=\bed{\pm}$ (recall that $V_{+}\to \infty$ for this model, as some states are
always extended).
\subsection{Analytic results for $P_{y}(y)$}
In the main text, we demonstrated graphically that the distributions $P_y(y)$ have a signature $\propto y^{-3/2}$ L\'evy tail (see Fig.~3). Here we give the analytical derivations and resulting expressions for $P_y(y)$.
The starting point is Eq.\ \eqref{eq:Pydistloc}, which can be re-expressed as
\begin{equation}
P_y(y) = \frac{1}{2\pi N}\sum_{i_0}\sum_{\phi^\ast}\vert\partial_\phi^{\phantom\dagger} f\vert^{-1}\big\vert_{\phi=\phi^\ast},
\label{eq:py-sm}
\end{equation}
where
\eq{
f = \frac{J^2(1+y_\mathrm{typ})}{(\omega-V\epsilon_1^{\phantom\dagger}(\phi))^2}
\label{eq:f}
}
and the set of $\phi^\ast$ values are defined as the solutions to $f(\phi^\ast)=y$. In the following we set $J=1$ for brevity. Note that the functional form of $f$ in Eq.~\eqref{eq:f} in general generates two solutions, $\phi^\ast_\pm$, given by
\eq{
V\epsilon_1^{\phantom\dagger}(\phi_\pm^\ast)=\omega\mp \sqrt{\frac{1+y_\mathrm{typ}}{y}},
\label{eq:phistar}
}
such that $P_{y}(y)$ can be written as a sum of two contributions,
\eq{
P_y^{\phantom\dagger}(y) &= P_y^+(y) + P_y^-(y)
\label{eq:Pplusminus}
}
with $P_y^\pm(y)=\frac{1}{2\pi N}\sum_{i_0}\vert\partial_\phi f\vert^{-1}\vert_{\phi=\phi^\ast_\pm}$. It is also useful to notice that since the magnitude of $\epsilon_1(\phi)$ is bounded from above, the support of the distribution $P_y(y)$ is bounded from below. $P_y(y)$ is thus cut off at small values of $y$, as can be seen in Fig.~3.
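Explicitly, since $f(\phi^\ast)=y$ (and with $J=1$), the derivative appearing in Eq.~\eqref{eq:py-sm} evaluates to
\eq{
\vert\partial_\phi^{\phantom\dagger} f\vert\big\vert_{\phi=\phi^\ast_\pm} ~=~ \frac{2V\,\vert\partial_\phi^{\phantom\dagger}\epsilon_1^{\phantom\dagger}(\phi^\ast_\pm)\vert}{\sqrt{1+y_\mathrm{typ}}}\,y^{3/2},
}
so the characteristic $y^{-3/2}$ dependence of $P_y^{\pm}(y)$ arises directly from Eq.~\eqref{eq:py-sm}, provided $\partial_\phi\epsilon_1(\phi^\ast_\pm)$ remains finite as $y\to\infty$; the model-specific prefactors below follow from the explicit forms of $\epsilon_1(\phi)$.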
We start with the $\beta$-model, where we give the results explicitly only for the band-centre, $\omega=0$. Using Eqs.~\eqref{eq:py-sm}-\eqref{eq:phistar} with $\epsilon_i(\phi)$ given by Eq.\ \eqref{eq:beta-model}, we obtain
\begin{align}
\begin{split}
P_y^\pm(y) = &\frac{y^{-3/2}}{2\pi}\frac{V\sqrt{1+y_\mathrm{typ}}}{V\pm\beta\sqrt{\frac{1+y_\mathrm{typ}}{y}}} \\ &\times\left\{ \left(V\pm\beta\sqrt{\frac{1+y_\mathrm{typ}}{y}}\right)^2 - \frac{1+y_\mathrm{typ}}{y} \right\}^{-1/2},
\end{split}
\label{eq:beta-dist}
\end{align}
with the support of $P_y^\pm(y)$ residing in
\eq{
y\geq \frac{(1+y_\mathrm{typ})}{V^2}(1\mp\beta)^2.
}
Most importantly, for $y \gg y_\mathrm{typ}$, Eq.~\eqref{eq:beta-dist} gives
\eq{
P_y^\pm(y)\overset{y\gg y_\mathrm{typ}}{\sim}\frac{\sqrt{1+y_\mathrm{typ}}}{2\pi V}y^{-3/2},
}
which clearly shows the characteristic L\'evy tail.
Turning to the $l=2$ mosaic model, we note that there can be two kinds of end-sites, with odd and even $i_0$ respectively. Hence, each of the terms $P_y^\pm$ can be expressed as a sum of two terms
\eq{
P_y^\pm(y) = P_y^{\mathrm{even},\pm}(y) + P_y^{\mathrm{odd},\pm}(y).
}
For even $i_0$, $\epsilon_1 = 0$, and
\eq{
P_y^{\mathrm{even},\pm}(y) = \frac{1}{4}\delta\left(y-\frac{1+y_\mathrm{typ}}{\omega^2}\right).
\label{eq:mosaic-py-even}
}
This is the $\delta$-function contribution shown by the red dashed vertical line in Fig.~3(b).
Using the potential in Eq.\ \eqref{eq:mosaic-model} for odd $i_0$ in Eqs.~\eqref{eq:py-sm}-\eqref{eq:phistar} we obtain
\eq{
P_y^{\mathrm{odd},\pm}(y) = \frac{1}{4\pi}y^{-3/2}\sqrt{\frac{1+y_\mathrm{typ}}{V^2-R_\pm^2}}~;~~R_\pm=\omega\pm\sqrt{\frac{1+y_\mathrm{typ}}{y}},
\label{eq:mosaic-py-odd}
}
with the distributions supported on $y\geq (1 +y_\mathrm{typ})/(\omega\mp V)^2$.
The sum of the even and odd contributions, Eqs.~\eqref{eq:mosaic-py-even} and \eqref{eq:mosaic-py-odd}, comprises the full distribution. Again, for $y\gg y_\mathrm{typ}$, it is readily seen from Eqs.~\eqref{eq:mosaic-py-even} and \eqref{eq:mosaic-py-odd} that $P_y(y)$ has the form
\eq{
P_y^{\phantom\dagger}(y) \overset{y\gg y_\mathrm{typ}}{\sim} \frac{1}{2\pi}\sqrt{\frac{1+y_\mathrm{typ}}{V^2-\omega^2}}~~y^{-3/2},
}
likewise displaying the characteristic L\'evy tail.
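As a simple numerical cross-check of this tail (a sketch, not part of the derivation above), one can sample $\phi$ uniformly, histogram $y=f(\phi)$, and fit the slope of the resulting distribution on log-log axes. The snippet below assumes the illustrative AAH-like choice $\epsilon_1(\phi)=\cos\phi$ (as for the odd end sites of the mosaic model), $J=1$, and arbitrary parameter values:
\begin{verbatim}
import numpy as np

# Sample y = f(phi) = (1 + y_typ)/(omega - V*cos(phi))^2 for uniform phi,
# and fit the tail exponent of P_y(y); the Levy tail gives a slope of -3/2.
rng = np.random.default_rng(0)
V, omega, ytyp = 1.0, 0.3, 0.05            # illustrative values, |omega| < V
phi = rng.uniform(0.0, 2.0 * np.pi, 2_000_000)
y = (1.0 + ytyp) / (omega - V * np.cos(phi)) ** 2

hist, edges = np.histogram(y, bins=np.logspace(2, 5, 40), density=True)
centres = np.sqrt(edges[1:] * edges[:-1])
mask = hist > 0
slope = np.polyfit(np.log(centres[mask]), np.log(hist[mask]), 1)[0]
print(f"fitted tail exponent: {slope:.2f} (expected -1.5)")
\end{verbatim}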
\end{document}
{
"timestamp": "2020-12-04T02:00:42",
"yymm": "2012",
"arxiv_id": "2012.01450",
"language": "en",
"url": "https://arxiv.org/abs/2012.01450"
}
\section{INTRODUCTION}
Distributed Constrained Optimization Problems (DCOPs) are a popular framework for coordinating interactions in cooperative multi-agent systems. A number of real-world problems, such as distributed event scheduling \cite{maheswaran2004taking} and the distributed RLFA problem \cite{cabon1999radio}, can be modelled with this framework~\cite{rashik2020speeding,khan2018near,mahmud2019aed,choudhury2020particle,khan2018generic,khan2018speeding}. The constraints among the participating agents in these applications, and many others besides, can be both hard and soft. The algorithms that have been proposed to solve DCOPs can be broadly classified into exact and non-exact algorithms. The former (e.g.\ \cite{modi2005adopt,petcu2005scalable}) always find a globally optimal solution. In contrast, the latter (e.g.\ \cite{farinelli2008decentralised,zhang2005distributed}) trade solution quality for reduced computation and communication costs.
Among the non-exact approaches, Generalized Distributive Law based algorithms, such as Max-Sum \cite{farinelli2008decentralised} and Bounded Max-Sum (BMS) \cite{rogers2011bounded}, have received particular attention. Specifically, Bounded Max-Sum is an extremely attractive variant of Max-Sum that produces good approximate solutions for DCOPs with cycles. However, BMS does not actively exploit hard constraints, even though a number of real-life applications contain them. We observe that
the presence of hard constraints can be utilized to further improve BMS's solution quality by removing inconsistent values from the agents' domains, thereby reducing the upper bound on the global solution. It is worth noting that, due to hard constraints, the traditional BMS algorithm can leave each agent with a set of allowable assignments in place of a single assignment. Every combination of these assignments yields the same profit on the tree-structured (acyclic) graphical representation of a given DCOP (e.g.\ a factor graph or junction tree), but different profits on the original cyclic DCOP. To the best of our knowledge, there exists no method for choosing the best one. In this paper, we propose a novel approach that enforces consistency and selects the most preferable combination of agents' assignments.
%
\vspace{-5mm}
\section{PROBLEM FORMULATION}
\par A DCOP model can be formally expressed as a tuple $\langle$\textbf{A}, \textbf{X}, \textbf{D}, \textbf{F}, $\alpha\rangle$, where \textbf{A} = \{$a_1,a_2,....,a_n$\} is a set of agents, \textbf{X} = \{$x_1,x_2,....,x_n$\} is a set of variables, \textbf{D} = \{$d_1,d_2,....,d_n$\} is a set of domains for the variables in \textbf{X}, and \textbf{F} = \{$f_1,f_2,....,f_m$\} is a set of constraint functions.
$f_j(\mathbf{x_i})$ denotes the utility of each possible assignment of the variables $\mathbf{x_i} \subseteq \textbf{X}$ on which it depends. The dependencies between the functions and variables can be represented graphically by a factor graph $\textbf{FG}$. Finally, the mapping of variable nodes to agents is represented by $\alpha$ : \textbf{X} $\rightarrow$ \textbf{A}, where each variable is assigned to exactly one agent.
\par Within this model, the main objective of DCOP algorithms such as BMS is to find an assignment of the variables, $\mathbf{\tilde{x}}$, and the corresponding approximate solution $\tilde{V}$ by maximizing the sum of all functions, that is \begin{math}\tilde{V}= \sum_{j=1}^{m} f_{j}(\mathbf{\tilde{x}_{i}}) \end{math}. After removing appropriate dependencies from $\textbf{FG}$ according to the phases of BMS, an acyclic graph $\mathbf{\overline{FG}}$ is formed. Additionally, the maximum impact $B$ is calculated, which is used to compute the upper bound on the value of the unknown optimal solution as $\tilde{V}^m+B$, where $\tilde{V}^m$ is the solution found by executing BMS on the corresponding acyclic graph.
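As a concrete illustration of this formulation (an illustrative sketch only; the variable names and utility values below are arbitrary and not taken from our experiments), a binary-constraint DCOP instance can be represented compactly in Python, with violated hard constraints encoded as utility $-\infty$:
\begin{verbatim}
# Illustrative DCOP instance with binary constraint functions.
variables = ["x0", "x1", "x3"]
domains = {v: list(range(3)) for v in variables}        # d_i = {0, 1, 2}
INF = float("inf")
constraints = {                               # f_j : assignments -> utility
    ("x0", "x1"): {(a, b): 10 * a + b
                   for a in domains["x0"] for b in domains["x1"]},
    # Hard constraint x0 != x3, encoded with utility -inf when violated.
    ("x0", "x3"): {(a, b): (5 if a != b else -INF)
                   for a in domains["x0"] for b in domains["x3"]},
}

def total_value(assignment):
    """V = sum_j f_j(x_i) for a complete assignment (dict variable -> value)."""
    return sum(table[tuple(assignment[v] for v in scope)]
               for scope, table in constraints.items())

print(total_value({"x0": 1, "x1": 2, "x3": 0}))          # 12 + 5 = 17
\end{verbatim}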
%
%
Now, the first objective of our approach is to update the domain of each variable so that the maximum impact is minimized (Equation~\ref{CE}). Here, $\tilde{d_{i}}$ denotes the set of inconsistent domain values of variable $x_i$.
\vspace{-4mm}
\begin{equation}
\begin{aligned}
D=\argminA_{d_1,...,d_n} B \\
\text{s.t.}~ \forall d_i \in D ,~ d_i= d_i \setminus \tilde{d_{i}} \\
\end{aligned}
\label{CE}
\end{equation}
\vspace{-0.5mm}
Due to the presence of hard constraints, the BMS algorithm can end up with tied variable assignment(s) after executing Max-Sum on its acyclic graph. Our second objective is to select the variable assignments that provide the most preferable solution for the original constraint graph (Equation~\ref{tie}). Here, $T_i \subseteq d_i$ is the set of tied assignments for variable $x_i$.
\vspace{-5mm}
\begin{equation}
\begin{aligned}
\tilde{\textbf{x}}=\argmax_{x_1,...,x_n} \sum_{j=1}^{m} f_j(\mathbf{x_i}) \\
\text{s.t.}~ (x_1,x_2,...,x_n) \in T_1\times T_2\times \dots \times T_n
\end{aligned}
\label{tie}
\end{equation}
\begin{figure*}[ht]
\centering
\begin{subfigure}[H]{.3\textwidth}
\centering
\includegraphics[scale=0.5]{Figures/graph1.pdf}
\caption{Arc consistency is enforced and the domains are pruned. The maximum impact is reduced from 1473 to 1457. A spanning tree (acyclic graph) is created from the cyclic factor graph.
} \label{graph1}
\end{subfigure}
\hfill
\begin{subfigure}[H]{.3\textwidth}
\centering
\includegraphics[scale=0.5]{Figures/graph2.pdf}
\caption{Tie assignments are found for each variable. Next, a new DCOP problem is created including the blue-colored nodes, and BMS is executed on these nodes.
} \label{graph2}
\end{subfigure}
\hfill
\begin{subfigure}[H]{.3\textwidth}
\centering
\includegraphics[scale=0.5]{Figures/graph3.pdf}
\caption{
Priority for each tie assignment is found. Using this information, BMS is executed on the main acyclic graph.
} \label{graph3}
\end{subfigure}
\vspace{-3mm}
\caption{Worked example of the HBMS algorithm. Here the square nodes and round nodes represent function nodes and variable nodes, respectively. The thick edges of the graph are our main points of interest. The third phase is executed on the sub-graph consisting of the blue-colored nodes.
} \label{hbms_simulation}
\end{figure*}
\vspace{-5mm}
\section{HARD CONSISTENCY ENFORCED BOUNDED Max-Sum (HBMS)}
With the motivation of utilizing hard constraints, our objective is to
decrease the upper bound on the optimal solution and also to increase the solution quality of BMS. Our first contribution, the reduction of the upper bound, is obtained by lowering the maximum impact $B$. In our \textbf{first phase}, consistency enforcement, we update the variable domains (Equation~\ref{CE}) by enforcing arc-consistency on the constraint graph. This step also speeds up the execution of the algorithm. For example, in Figure~\ref{graph1}, $F_{01}$\footnote{In this paper, we consider all constraints $f_i$ as binary and write them as $F_{x_i,x_j}$ for illustration, where $x_i,x_j$ are the variables on which $f_i$ depends.} calculates its maximum impact $B_{01}$ using Equation~\ref{upperbound}. According to this equation, $x_1$ selects the value $x_{1h}$ to maximize and $x_{1l}$ to minimize the function $F_{01}$. After the consistency enforcement phase, each variable's domain is pruned, and the pair ($x_{1h},x_{1l}$) changes to ($x_{1h}^\prime,x_{1l}^\prime$), thereby reducing $B$. The same holds for $F_{03}$ and $F_{23}$.
\vspace{-3mm}
\begin{equation}
B_{01}=\max_{x_0} \Bigg[ \max_{x_{1}}F_{01}(x_0,x_1) -\min_{x_{1}}F_{01}(x_0,x_1) \Bigg]
\label{upperbound}
\end{equation}
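To make Equation~\ref{upperbound} concrete, a minimal sketch is shown below (with a hypothetical utility table rather than the values of Figure~\ref{graph1}); since pruning can only shrink the domains, the recomputed impact after the consistency enforcement phase can never exceed the original one:
\begin{verbatim}
# B_01 = max_{x0} [ max_{x1} F_01(x0,x1) - min_{x1} F_01(x0,x1) ]
def max_impact(F, dom_x0, dom_x1):
    return max(max(F[(a, b)] for b in dom_x1) - min(F[(a, b)] for b in dom_x1)
               for a in dom_x0)

F01 = {(a, b): abs(3 * a - 2 * b) for a in range(4) for b in range(4)}
print(max_impact(F01, range(4), range(4)))   # 6 on the full domains
print(max_impact(F01, [0, 1], [0, 1]))       # 2 on the pruned domains
\end{verbatim}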
In the \textbf{second phase}, we generate a spanning tree (i.e.\ an acyclic graph) from the factor graph by removing the most suitable dependencies, in the same way as the BMS algorithm, and then run Max-Sum on the acyclic factor graph. If the hard constraints are satisfied, each of them contributes equally to profit maximization. This increases the possibility that each variable has multiple assignments with the same profit on the acyclic graph ($T_i$ is the set of assignments allowed for variable $x_i$).
For example, in Figure~\ref{graph2}, after executing Max-Sum, $x_2$ is assigned multiple values, namely $\{0,6,8,17\}$.
However, we need to choose the variable assignments in such a way that the profit for the main constraint graph is maximized (Equation~\ref{tie}). For this purpose, we utilize the removed dependencies and the set of variables $\mathbf{x_i^c}$ that are dependent on function $F_i$ but are not part of the acyclic graph $\mathbf{\overline{FG}}$. In the \textbf{third phase}, we model a smaller DCOP problem $\langle \mathbf{A}', \mathbf{X}', \mathbf{D}', \mathbf{F}', \alpha\rangle$ such that $\mathbf{F}'$ = \{$f_1,f_2,....,f_k$\} where $\forall F_i, \mathbf{x_i^c} \neq \emptyset$, and select the set of agents $\mathbf{A}'$ and variables $\mathbf{X}'$ dependent on those functions accordingly. Finally, each $d_i' \in \mathbf{D}'$ is set equal to $T_i$. Following Figure~\ref{graph2}, we create this new DCOP and represent it as a factor graph, but this time it includes the dependencies that were removed in the previous phase. It consists of $F_{01}, F_{03}, F_{23}$ and their corresponding variables ($x_0,x_1$), ($x_0,x_3$) and ($x_2,x_3$). In this phase, we execute BMS on the smaller graph and thereby obtain the priority of the tied assignments of each variable found in the second phase. In Figure~\ref{graph3}, we can see the profit for each assignment received from the third phase (e.g.\ for $x_2$: $(0:633),(6:715),(8:1209),(17:943)$). Finally, in the \textbf{fourth phase}, we use this information to execute Max-Sum on $\mathbf{\overline{FG}}$ again. This step finally selects the preferable variable assignment, which improves solution quality. For instance, the resulting variable assignment is $(x_0:14),(x_1:1),(x_2:8)$ and $(x_3:13)$. The complexity of HBMS is twice that of the BMS algorithm, since we execute the algorithm twice; the computational cost for the smaller graph in the third phase is negligible. Finally, the complexity of the arc-consistency enforcement phase is $O(ed^3)$, where $e$ is the number of edges of the constraint graph and $d$ is the average domain size.
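For concreteness, a minimal sketch of the arc-consistency pruning used in the first phase is given below (a standard AC-3-style propagation over binary hard constraints; this is an illustrative sketch, not the implementation used in our experiments):
\begin{verbatim}
from collections import deque

def revise(domains, allowed, xi, xj):
    """Remove values of xi that have no supporting value in xj."""
    pruned = [vi for vi in domains[xi]
              if not any((vi, vj) in allowed[(xi, xj)] for vj in domains[xj])]
    for vi in pruned:
        domains[xi].remove(vi)
    return bool(pruned)

def enforce_arc_consistency(domains, allowed):
    """AC-3 over binary hard constraints; 'allowed' maps each directed arc
    (xi, xj) to the set of permitted value pairs (both directions included).
    Worst-case cost is O(e d^3) for e arcs and average domain size d."""
    queue = deque(allowed.keys())
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, allowed, xi, xj):
            # The domain of xi changed, so re-examine arcs pointing at xi.
            queue.extend(arc for arc in allowed if arc[1] == xi)
    return domains
\end{verbatim}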
\section{EMPIRICAL EVALUATION}
In this section, we empirically evaluate the improvement in solution quality of HBMS in comparison to the Bounded Max-Sum algorithm. To benchmark the results, we run experiments on random constraint graphs. In Figure~\ref{performance}, we vary the number of nodes from 5 to 30, set the variable domains to $[0,\ldots,40]$, use 30\% hard constraints along with soft constraints, and draw the functions' utility values from 0 to 500. We observe an improvement in solution quality of around 5-30\% on average. However, we obtain negative results in some instances; in future work, we would like to explore the reasons behind this behaviour.
\begin{figure}[t]
\centering
\vspace{-5mm}
\includegraphics[scale=0.4]{performanceUP2.pdf}
\vspace{-3.4mm}
\caption{Empirical results for constraint graphs, varying the number of nodes from 4 to 30. Improvement is calculated as a percentage with respect to Bounded Max-Sum.}
\label{performance}
\end{figure}
\vspace{-1mm}
\section{CONCLUSIONS AND FUTURE WORK}
The major finding of this paper is that, by taking advantage of hard constraints, we can significantly improve the solution quality of the Bounded Max-Sum algorithm. Another notable contribution is the reduction of the upper bound. Our empirical evidence shows that it is possible to improve the solution quality by around 10-30\% relative to BMS. In future work, we would like to study the impact of different forms of consistency enforcement, and to address the negative results observed in the evaluation. A final research direction is extending the range of potential application domains.
\vspace{-3mm}
{
"timestamp": "2020-12-03T02:29:23",
"yymm": "2012",
"arxiv_id": "2012.01369",
"language": "en",
"url": "https://arxiv.org/abs/2012.01369"
}
\section{Introduction} \label{sec:intro}
A challenge of modern galaxy evolution is to understand the formation of massive and quiescent galaxies. Stellar archaeology indicates that massive galaxies (log$_{10}$\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace$>$11) form their stars in a rapid burst in the first 1-3 billion years of the universe ($z>2$) \citep[e.g.][]{Thomas2010, McDermid2015}.
After this rapid growth phase, their star formation halted (quenched) through unknown processes, and most remain dormant, without significant star formation for $>$10 billion years \citep[e.g.][]{Renzini2006,Citro2016}.
The observed rapid growth and early death of quenched galaxies are longstanding problems in our theoretical understanding of galaxy formation. This is particularly true for high-mass galaxies at high redshifts, which have caused the largest tension with simulations, both in reproducing their observed numbers \citep[e.g.][]{Santini2012,Cecchi2019} and in halting and preventing further star formation \citep[e.g.][]{Croton2006, NaabOstriker2017, Forrest2020}.
Recent improvements to the physical models implemented in cosmological simulations are well matched to the observed properties of massive quiescent galaxies, at least at $z<2.5$, as well as to the growth of their black holes and the maintenance of quiescence across cosmic time \citep[e.g.][]{Schaye2015,Feldmann2017, Nelson2018, Pillepich2018}. However, even with recent advances, simulations generally require some form of poorly understood, yet extreme, feedback to truncate star formation and reproduce the properties of observed massive galaxies across cosmic time \citep[for a review see][]{SomervilleDave2015}.
A key unknown is the evolution of the cold gas reservoirs, the fuel for star formation, in massive galaxies as they transition from star forming to quiescent. While the majority of massive galaxies quench around cosmic noon \citep[$1<z<3$;][]{Whitaker2011, Muzzin2013mf, Tomczak2014,Davidzon2017}, surveys characterizing the molecular gas reservoirs using rotational transitions of CO generally find that cold gas is abundant in massive star forming galaxies during this era. The continuity of the star forming sequence implies high accretion rates from the intergalactic medium \citep[for reviews, see][]{Tacconi2020, HodgeDaCunha2020}.
To quench, galaxies must break this equilibrium. To sufficiently deplete, expel, or heat the
abundant gas supply in massive galaxies, theoretical quenching
models favor strong feedback from supermassive black holes \citep[]{Choi2017, Weinberger2017, Weinberger2018}
or extreme star formation \citep{Hopkins2010, Grudic2019}.
These may be driven by efficient and rapid growth \citep{Wellons2015, Williams2014, Williams2015}, mergers \citep{diMatteo2005,Hopkins2006} or disk-instabilities \citep{DekelBurkert2014,Zolotov2015}.
Additional theories exist to stabilize existing cold gas from collapse, e.g. through the growth of a stellar bulge \citep{Martig2009}, thereby decreasing the star formation efficiency to quench. However, this likely requires that accretion be halted \citep[through shock-heating at the virial radius for massive dark matter halos with log$_{10}$\ensuremath{M_{\rm{halo}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace$>$12; e.g.][]{BirnboimDekel2003,Keres2005, DekelBirnboim2006}.
To first order, these different mechanisms (destruction by feedback, consumption, or stabilization) yield different predictions for the rate at which cold gas disappears from galaxies relative to ceasing star formation.
The evolution of molecular gas reservoirs in quiescent galaxies is therefore an important constraint on the possible mechanisms halting star formation.
Several surveys have characterized molecular gas in massive quiescent galaxies using CO at $z\sim0$ \citep[][]{Young2011, Saintonge2011, Saintonge2012, Saintonge2017,Davis2016}, generally finding that galaxies maintain low gas fractions ($<$0.1-1\%) after $\gtrsim$10 Gyr of quiescence. However, the peak epoch of the transition to quiescence for massive galaxies is at $z\sim2$, where few observations have been made to date.
Characterizing the distribution of molecular gas reservoirs in quenched galaxies would be a major step forward in understanding the pathways massive galaxies take to quiescence. With this work, we conduct the first survey targeting molecular gas traced by CO(2--1) in a sample of quiescent galaxies above $z>1$. These build on samples studied at $z\sim0$ \citep{Rowlands2015, French2015, Alatalo2016}, at intermediate redshifts \citep{Spilker2018, Suess2017}, and at $z>1$, single galaxies \citep{Sargent2015, Bezanson2019}, and average properties through stacking dust emission \citep{Gobat2018}. From these studies emerges a wide diversity of molecular gas properties in quenched and quenching galaxies. Key informative constraints on the molecular gas reservoirs include 1) their variation with properties related to quiescence, such as compact stellar density \citep[e.g.][]{Whitaker2017, Lee2018}, in light of recent reports that age and quenching timescale varies with size and stellar density \citep{Williams2017,Wu2018,Belli2019}
and 2) the amount of gas left over relative to the time since galaxies stopped forming stars, tracing the timescale for consumption.
In this paper, we present a new survey with the Atacama Large Millimeter/submillimeter Array (ALMA) targeting the CO(2--1) emission in quiescent galaxies at $z>1$. In Section \ref{sec:data} we present our sample, their stellar population properties, and our ALMA observations of the CO(2--1) emission line.
In Section \ref{sec:results}, we present our ALMA measurements in the context of other molecular gas surveys, and in Section \ref{sec:discussion}, we discuss our results in the context of theoretical ideas about the formation of quiescent galaxies. We assume a $\Lambda$CDM cosmology with H$_0$=70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_M$ = 0.3, $\Omega_\Lambda$ = 0.7, and a \citet{Chabrier2003} initial mass function (IMF).
\section{Sample and Data}\label{sec:data}
We select ALMA targets from the literature of spectroscopically confirmed quiescent galaxies at redshifts $1<z<1.74$, where the CO(2--1) molecular transition is observable in ALMA Band 3, and also have state-of-the-art ancillary data (deep rest-frame UV to mid-IR coverage, including high-resolution {\it Hubble Space Telescope (HST)} WFC3 imaging).
We identified the most massive of those published (log$_{10}\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace>11.3$) that have quiescent stellar populations based on their rest-frame optical spectroscopy, U-V/V-J rest-frame colors (Figure \ref{fig:props}), and UV+IR star formation rates.
Our final sample includes five galaxies in the COSMOS field, which are all confirmed to be quiescent on the basis of deep Balmer absorption features, strong Dn4000, and a lack of strong emission lines, using deep spectroscopy from Keck/LRIS \citep{Bezanson2013,vandeSande2013, Belli2014a, Belli2015} and Subaru/MOIRCS \citep{Onodera2012}. In this study, we combine these five targets with one galaxy from our pilot observation published in \citet{Bezanson2019}.
\begin{figure*}[th]
\includegraphics[scale=.58, trim=50 50 50 50,clip]{fig1.pdf}
\caption{ Our ALMA targets (red squares) compared to star-forming and quiescent galaxies from 3DHST with log$_{10}$\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace$>9.5$ at $1<z<1.5$ \citep[blue/red contours][]{Skelton2014}.
Top Left: U-V vs V-J rest-frame colors and quiescent galaxy selection (black).
Top Right: Star formation rate vs stellar mass. Our sample is more than 3$\times$ below the main sequence at $1<z<1.5$ \citep[][]{Whitaker2014}. Black points show CO observations at $z>0.7$ for star-forming galaxies (see Section \ref{sec:results}).
Bottom Left: Size vs mass distributions and mean relations at $z\sim1.25$ \citep{Mowla2019}.
Bottom Right: sSFR vs stellar surface density, $\Sigma_* = \ensuremath{M_{\rm{*}}}\xspace/2\pi R_{e}^2$.
Stellar density higher than the dotted line indicates quiescent galaxies that are compact as defined in \citet{Cassata2013, Williams2017}. Bottom panels demonstrate that our ALMA sample spans the full range of quiescent galaxy sizes and densities at this redshift. }
\label{fig:props}
\end{figure*}
\subsection{Optical and infrared data}
The five galaxies confirmed with Keck were originally selected for spectroscopy from the NEWFIRM Medium Band Survey \citep[NMBS;][]{Whitaker2011}. NMBS includes multi-wavelength photometry from the UV to 24 $\mu$m, and in particular, medium-band near-IR filters that sample the Balmer/4000 \AA\ break at our target redshifts. The Subaru target from the sample of \citet{Onodera2012} was selected from the BzK color-selected catalog published in \citet{Mccracken2010}.
To identify our target galaxies, we used the stellar masses measured from the photometric spectral energy distribution (SED) as published in the original studies. These works fit the photometry using FAST \citep{Kriek2009} to estimate stellar masses assuming \citet[][]{Bruzual2003} stellar population models with exponentially declining star formation history (SFH), a \citet[][]{Chabrier2003} IMF and \citet{Calzetti2000} dust attenuation. Where relevant, we convert literature measurements to Chabrier IMF for comparison to our measurements. Not all targets had stellar ages measured in the literature (based on the UV to IR photometry), but where available they indicate old stellar ages (1-1.5 Gyr), with the exception of our pilot galaxy 21434, whose published stellar age is 800 Myr \citep{Bezanson2019}. The pilot galaxy's rest-frame colors are also the closest to the bluer post-starburst region of the UVJ-quiescent diagram \citep{Whitaker2012,Belli2019}.
For results presented herein, we re-fit the UV to near-IR photometry uniformly using the SED-fitting code \textsc{prospector} \citep{Johnson2019}, which uses the Flexible Stellar Populations Synthesis (FSPS) code \citep{Conroy2009,ConroyGunn2010}. We fit using the \textsc{prospector}-$\alpha$ model framework \citep{Leja2017} which includes a non-parametric SFH that has been shown to be more realistic and physically representative of massive galaxies \citep{Leja2019a}. For the purpose of fitting the stellar population properties probed by the UV to near-IR photometry, we augment the \textsc{prospector}-$\alpha$ model by removing emission due to active galactic nuclei (AGN; which contribute primarily at mid-IR wavelengths) and the dust emission.
Re-fitting the galaxies uniformly in this way also enables us to measure the mass-weighted age, which is more directly comparable to the cosmological simulations we present in Section \ref{sec:discussion}.
We use the NMBS photometric catalog
that includes medium band near-infrared photometry for all galaxies, with the exception of one (ID 307881) which lies outside the NMBS footprint. For this galaxy we use the UltraVISTA catalog with broad-band photometry in the near-infrared \citep{Muzzin2013}. We present the stellar population properties measured with \textsc{prospector} using the modified \textsc{prospector}-$\alpha$ model and default priors in Table \ref{tab:sedfit}. Using these fits instead of the literature values results in an average difference of $\sim0.1$dex higher stellar mass. We find a similar difference in stellar masses
between using a non-parametric SFH and an exponentially declining SFH within \textsc{prospector}. This difference in mass with assumed SFH is consistent with that characterized for massive log$_{10}$\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace$>$11 galaxies \citep{Leja2019b}. Our results do not significantly depend on the choice of SFH or its impact on measured stellar mass, which affect our measurements of \ensuremath{f_{\rm{H_{2}}}}\xspace\ by less than a factor of 1.5.
A second impact of the non-parametric SFH is that the mass-weighted age of the galaxies are typically older than that derived using parametric SFH \citep{Leja2019b}. While the ages measured assuming an exponentially declining model are typically of order 1-3 Gyr, the non-parametric model returns ages of order 2-3 Gyr. These imply that the major star formation episodes in our sample happened above $z>3$. We list both values in Table \ref{tab:sedfit}. In the rest of this work, stellar age will refer to mass-weighted age, and we adopt the older ages from the non-parametric model because it is the more conservative constraint, as we will discuss in Section \ref{sec:timescale}.
\subsubsection{Estimation of the star formation rates}\label{sec:sfr}
In this work we consider star formation rates (SFRs) measured using two different methods: from the SED fitting outlined in the previous section, and from modeling the obscured and unobscured flux, SFR$_{\rm UV+IR}=$SFR$_{\rm UV,uncorr}+$SFR$_{\rm IR}$, as published in the UltraVISTA catalog \citep{Muzzin2013}. The SFR$_{\rm UV,uncorr}$ is calculated using the conversion of \citet[][]{Kennicutt1998} and the IR component is extrapolated from the observed 24$\mu$m flux following \citet[][]{Wuyts2008}.
SFRs from either method, SED fitting or extrapolation from 24$\mu$m, are uncertain for quiescent galaxies. In particular, the SFR$_{\rm UV+IR}$ should be considered an upper limit, because of significant contributions to the mid- and far-infrared flux that do not trace ongoing star formation in older galaxies \citep[e.g.\ asymptotic giant branch (AGB) stars, AGN, dust heated by older stars;][]{Salim2009, Fumagalli2014, Utomo2014, Hayward2014}. We list the SFR measured using both indicators in Table \ref{tab:sedfit}, and in the rest of this work we adopt SFR$_{\rm UV+IR}$ when measuring sSFR, which we explicitly consider to be an upper limit. Our upper limits to the sample sSFR span $-12 < \rm log_{10}(sSFR/yr^{-1}) < -10$.
\begin{deluxetable*}{lccccccccccc}[t!]
\tablecaption{Properties of ALMA targets }
\tablecolumns{10}
\tablewidth{0pt}
\tablehead{
\colhead{ID$^a$} &
\colhead{RA} &
\colhead{Dec} &
\colhead{$z_{spec}$} &
\colhead{Mass} &
\colhead{SFR$_{\rm UV+IR}$$^b$} &\colhead{SFR$_{\rm 30Myr}$$^c$} & \colhead{Re[kpc]$^d$} & \colhead{Age$^e$} & \colhead{Age$^f$} &\colhead{Reference} }
\startdata
22260 & 149.818229 & 2.561610 & 1.240 & 11.51 $^{+ 0.04 }_{- 0.03 }$ & 3.6 & 5.3 $^{+ 3.41 }_{- 1.91 }$ & 7.6 & 3.4 & 4.6 & Bezanson+2013 \\
20866 & 149.800931 & 2.537990 & 1.522 & 11.46 $^{+ 0.03 }_{- 0.03 }$ & 12.8 & 0.7 $^{+ 2.69 }_{- 0.68 }$ & 2.4 & 2.4 & 1.7 & Bezanson+2013 \\
34879 & 150.131380 & 2.523800 & 1.322 & 11.32 $^{+ 0.04 }_{- 0.04 }$ & 22.9 & 1.4 $^{+ 2.30 }_{- 1.20 }$ & 5.5 & 2.5 & 2.1 & Belli+2015 \\
34265 & 150.170160 & 2.481100 & 1.582 & 11.51 $^{+ 0.03 }_{- 0.03 }$ & 7.4 & 0.3 $^{+ 1.61 }_{- 0.34 }$ & 0.9 & 2.1 & 1.3 & Belli+2015 \\
21434 & 149.816230 & 2.549250 & 1.522 & 11.39 $^{+ 0.03 }_{- 0.03 }$ & 19.1 & 0.5 $^{+ 1.79 }_{- 0.49 }$ & 1.9 & 2.1 & 1.2 & Bezanson+2013,2019 \\
307881 & 150.648487 & 2.153990 & 1.429 & 11.63 $^{+ 0.03 }_{- 0.03 }$ & 5.0 & 0.7 $^{+ 1.73 }_{- 0.66 }$ & 2.7 & 2.7 & 3.2 & Onodera+2012 \\
\enddata \label{tab:sedfit}
\tablenotetext{a}{We adopt IDs as published in the source reference. ID for galaxy 34265 is from \cite{Belli2015} but is referred to as NMBS-COSMOS18265 in \citet{vandeSande2013}. }
\tablenotetext{b}{SFR$_{\rm UV+IR}$ values correspond to those published by the ULTRAVISTA survey \citet[][]{Muzzin2013}.}
\tablenotetext{c}{Corresponds to the average SFR over the past 30 Myr as derived from our SED fitting with \textsc{prospector}.}
\tablenotetext{d}{Circularized half light radius, defined as $R_{e} = r_{e}\sqrt{b/a}$ where $r_{e}$ is the semi-major axis and $b/a$ is the axis ratio. Measured with GALFIT \citep{Peng2002}}
\tablenotetext{e}{Mass-weighted stellar age as derived from fitting with non-parametric SFH \textsc{prospector} (in Gyr).}
\tablenotetext{f}{Mass-weighted stellar age assuming an exponentially declining SFH (in Gyr).}
\end{deluxetable*}
\subsubsection{Hubble Space Telescope imaging}
Our ALMA target selection includes the requirement of high-resolution rest-frame optical imaging from {\it HST} to enable accurate measurements of morphology of the sample. Because compactness is known to be the strongest predictor of quiescence \citep{Franx2008,Bell2012, Teimoorinia2016,Whitaker2017} this requirement enables an assessment of a possible additional correlation with gas content. Our selected galaxies are structurally representative for their redshift, spanning a large range of half-light radius ($R_e$) and stellar densities ($\Sigma_{\star}\propto M_{\star}$/Re$^{2}$) among quiescent galaxies (Figure \ref{fig:props}).
We process all available {\it HST} imaging covering the ALMA sources with the {\sc grizli} software\footnote{https://github.com/gbrammer/grizli} (Brammer, in prep.). These include WFC3/F160W imaging from programs 12167, 14114, 12440, and ACS F814W imaging from programs 10092 and 9822. Briefly, we first group all exposures into associations defined as exposures taken with a single combination of instrument, bandpass filter, and guide-star acquisition (i.e., a ``visit'' in the standard {\it Hubble} nomenclature). We align all individual exposures in an association to each other allowing small shifts to the original astrometry from the files downloaded from the MAST archive at STScI. For the global astrometry, we generate a reference astrometric catalog from sources in the ultra-deep optical catalog of the entire COSMOS field provided by the Hyper Suprime-Cam Subaru Strategic Program \citep[DR2;][]{Aihara2019},
which we have verified is well aligned to the GAIA DR2 reference frame \citep{GAIA2016,GAIA2018}.
We align the {\it HST} association exposures as a group to this reference catalog, allowing for corrections in shift, scale and rotation, resulting in a final global astrometric precision of $\sim$30\,milli-arcseconds. Finally, we combine exposures in a given filter (from one or more associations) using {\sc DrizzlePac} / {\sc AstroDrizzle} \citep{Gonzaga2012}.
\subsection{ALMA observations}
The ALMA observations of our target galaxies were carried out in project 2018.1.01739.S (PI: Williams) in separate observing sessions from December 18, 2018 to January 17, 2019 using the Band 3 (3\,mm) receivers. We combine the results from this program with ALMA data for one similar galaxy from a previous pilot program \citep[2015.1.00853.S; see][for details]{Bezanson2019}.
The correlator was configured to center the CO(2--1) line within a spectral window of 1.875\,GHz width, which provides $\sim$5500\,km\,s$^{-1}$ of bandwidth centered on the expected frequency of the CO line, $\approx$89.3--102.9\,GHz. Three additional spectral windows were used for continuum observations. Targets 20866, 22260, and 307881 were observed for a total of $\sim$90--100\,min on-source, while 34265 and 34879 were each observed for about twice as long. The array was in a compact configuration yielding synthesized beam sizes $\sim$1.5--2.5'' so as not to spatially resolve the target galaxies. Bandpass and flux calibrations were performed using J1058+0133 and gain calibration using J0948+0022. The data were reduced using the standard ALMA pipeline and the reductions checked manually. Our cleaning procedure involved first masking regions with clearly-detected emission (S/N $>$ 5) and then we used a stopping criterion of 3$\times$ the image rms.
Images of both the continuum and line emission were created using natural weighting of the visibilities to maximize sensitivity, with pixel sizes chosen to yield 5--10 pixels across the synthesized beam. The spectral cubes have a typical noise of 50-65$\mu$Jy/beam in a 400km/s channel measured near the rest-frequency of the CO(2--1) line. The continuum data combined all available spectral windows, and reach a typical sensitivity of $\sim5-9\mu$Jy/beam, calculated as the rms of the non-primary-beam corrected image. All target galaxies are undetected in the continuum. However, the continuum imaging yielded several serendipitous 3-mm sources in these deep data. Two of these continuum sources were previously unknown galaxies and are presented in \citet{Williams2019}.
\begin{figure*}[]
\includegraphics[scale=0.6]{fig2.pdf}
\caption{ Left panel: ALMA CO(2--1) spectra in 200 km/s channels for each of our galaxies. Spectra are extracted at the position of the blue cross in the right panel. Middle panel: The ALMA CO(2--1) integrated image in 400 km/s channels centered on CO(2--1) of the target galaxy (except 22260, which shows the integrated image in a 500 km/s channel, where we find a 4$\sigma$ detection; we assume the flux
originates in our source). The ALMA beam is indicated by the white ellipse. Right panel: the {\it HST}/WFC3 F160W image for each of our targets. For 22260 we show a zoomed inset of the ACS/F814W imaging, where a secondary stellar component is more visible than in F160W. We show the CO(2--1) contours in red (where detected). For 22260 we show 50, 60 and 80 mJy/beam km/s contours. For 34879 we show 100, 120 and 130 mJy/beam km/s contours in a 200 km/s channel, offset in velocity from our target galaxy by dv$=-$600 km/s to show the emission of the companion galaxy. 34879 itself is not detected in CO(2--1).
}\label{fig:spectra}
\end{figure*}
\subsection{Molecular gas measurements}\label{sec:molgas}
To extract CO(2--1) spectra for each source, we used the \texttt{uvmultifit} package \citep{MartiVidal2014} to fit pointlike sources to the visibilities, averaging together channels in order to produce a number of resulting spectra with channel widths ranging from 50--800\,km\,s$^{-1}$. Given the low spatial resolution of the data and the compact galaxy sizes as measured in the available \textit{HST} imaging, the point-like source approximation is likely valid. For most sources, we fixed the position of the point source component to the phase center of the ALMA data, with two exceptions detailed below, leaving only the flux density at each channel as a free parameter. The spectra of each target are shown in Figure~\ref{fig:spectra}.
In source 22260, we detect a weak emission line at the correct frequency for the galaxy's redshift, but offset $\sim$1.2$\pm$0.3'' from the expected position of the target galaxy, a marginally-significant offset given the signal-to-noise of these 2'' resolution data. It is not clear if this offset is spurious, due to an astrometric offset with respect to the \textit{HST} imaging (although unlikely given our careful registration to GAIA), or reflective of a more complex physical scenario with a gas-rich region within this galaxy or a very nearby secondary galaxy as has been seen in high-redshift quiescent galaxies \citep{Schreiber2018JH}.
For this source, we fit two point sources to the visibilities, fixing the position of one to the phase center (where we find no detection) and the other to the position of the slightly offset source, which is shown in Figure~\ref{fig:spectra}.
We subsequently treat this as a real detection of CO(2--1) from our target. As can be seen in the {\it HST} imaging shown in Figure \ref{fig:spectra} this galaxy has a secondary optical/near-IR component (seen most prominently at {\it HST}/ACS 814W shown as inset),
possibly indicating a recent minor merger.
Deeper high-resolution ALMA data would be necessary to conclusively determine if the origin of the CO(2--1) emission is the secondary optical-IR component.
\begin{deluxetable*}{lcccccc}[!th]
\tablecaption{Molecular gas properties }
\tablecolumns{8}
\tablewidth{0pt}
\tablehead{
\colhead{ID} &
\colhead{S$_{\nu}^{a,b}$ } &
\colhead{S$_{\nu}$dv$^b$ } & \colhead{L'CO(2--1)$^b$ } &
\colhead{$\ensuremath{M_{\rm{H_{2}}}}\xspace$$^{c}$} & \colhead{$\ensuremath{f_{\rm{H_{2}}}}\xspace$$^{c}$} \\
\colhead{} & \colhead{$\mu$Jy} & \colhead{mJy kms$^{-1}$} & \colhead{10$^{8}$ K km s$^{-1}$ pc$^2$} & \colhead{10$^{9}$ Msun} & \colhead{\%}
}
\startdata
22260$^d$ & 180 $\pm$ 38 & 90$\pm$ 19 & 19 $\pm$ 4 & 10.5 $\pm$ 2.2 & 3.2 $\pm$ 0.7 \\
20866 & 47.4 & 23.7 & 7.5 & 12.3 & $<$ 4.3 \\
34879$^d$ & 27.5 & 13.8 & 3.3 & 5.5 & $<$ 2.6 \\
34265$^d$ & 35.1 & 17.6 & 5.9 & 9.8 & $<$ 3.0 \\
21434 & 69.6 & 34.8 & 8.0 & 13.7 & $<$ 5.5 \\
307881 & 37.8 & 18.9 & 5.3 & 8.8 & $<$ 2.1 \\
\hline
Stack & - & 10.3$^e$ & 2.89 & 4.7 & $<$1.6 \\
\enddata
\tablenotetext{a}{Line flux is measured in a 500km/s channel. }
\tablenotetext{b}{1$\sigma$ upper limits }
\tablenotetext{c}{3$\sigma$ upper limits, assuming r$_{21}$ = 0.8 in temperature units and \ensuremath{\alpha_{\rm{CO}}}\xspace = 4.4. Molecular gas masses can be rescaled under different assumptions as \ensuremath{M_{\rm{H_{2}}}}\xspace$\times$(0.8/r$_{21}$)(\ensuremath{\alpha_{\rm{CO}}}\xspace/4.4) }
\tablenotetext{d}{OII$\lambda$3727 detected in emission with Keck.}
\tablenotetext{e}{Assumes 400 km/s bin. Can be scaled to width 500 km/s by multiplying by $\sqrt{500/400}$.}
\label{tab:mol}
\end{deluxetable*}
34879 is a similar scenario, although in that case the line emitter is brighter, offset in velocity from the redshift of our target ($\Delta v\sim$600 km/s), and clearly identifiable with a nearby galaxy in the \textit{HST} imaging (Fig.~\ref{fig:spectra}). We again extract spectra by fitting multiple point sources to the visibility data for this field, fixing the positions of the sources to the phase center and the observed position of the line emitter, respectively. After this procedure, the spectra extracted at the phase centers of each field show no evidence of CO emission, although we note that the channel fluxes in these spectra are now slightly correlated with the spectra of the offset sources due to the small sky separations compared to the synthesized beam sizes.
For the detected source, 22260, we measure the line flux by fitting a simple Gaussian to the CO(2--1) spectrum. For galaxies that are not detected in CO(2--1), we set upper limits to the line flux. The undetected galaxies have velocity dispersions, $\sigma$, measured from the rest-frame optical stellar absorption features in the range $\sigma\sim200-370$ km s$^{-1}$ \citep{Bezanson2013, Belli2014a}, with the exception of 307881, for which it was not measured \citep{Onodera2012}. To measure upper limits to the integrated CO(2--1) line flux of undetected galaxies, we assume that any molecular gas has line widths similar to the stars, and adopt typical values for the FWHM of the CO(2--1) of 2.355$\times\sigma\sim$500-600 km s$^{-1}$. We use these line widths and the channel noise to set upper limits on the integrated line fluxes of each target. We note that large linewidths are conservative, and that these upper limits scale with velocity interval $\Delta v$ as $\sqrt{\Delta v}$. Assuming a smaller linewidth would decrease our limiting integrated flux. The CO(2--1) line luminosity for our detected galaxy 22260, and the 1$\sigma$ upper limits in the case of non-detections, are reported in Table \ref{tab:mol}.
To convert our measurements of CO(2--1) line luminosities into molecular gas mass (\ensuremath{M_{\rm{H_{2}}}}\xspace), we make the following standard assumptions about the molecular gas conditions.
We first assume a CO excitation (namely the luminosity ratio between the CO(2--1) and CO(1-0) transitions, r$_{21}$) following observations from the local Universe. Although local galaxies with low sSFR are observed to exhibit a range of values r$_{21}=0.7-1$ \citep[e.g.][]{Saintonge2017}, bulges and the central nuclei of galaxies that are thought to be similar to high-redshift compact quiescent galaxies exhibit near-thermalized excitation, r$_{21}=1$ \citep[e.g.][]{Leroy2009}. For our analysis we assume r$_{21}=0.8$ following \citet{Spilker2018}, which results in more conservative (higher) molecular gas mass measurements and limits than the assumption of thermalized emission. For comparisons to other measurements in the literature of $z>1$ passive galaxies we therefore rescale other values to this excitation as \ensuremath{M_{\rm{H_{2}}}}\xspace$\times$(0.8/r21) (including the object from \cite{Bezanson2019} which we convert to our value of r$_{21}=0.8$). Assuming a larger value for r$_{21}$ (e.g. 1, as is done for other studies of passive systems across redshifts) does not significantly change our results, and instead would imply even lower molecular gas fractions that further strengthen our conclusions.
This assumption has a minimal impact on our \ensuremath{M_{\rm{H_{2}}}}\xspace\ uncertainty budget (10-20\%) compared to e.g. a factor of $\gtrsim$2 uncertainty due to the assumed value of the CO-H$_{2}$ conversion factor to translate the measured CO luminosity to \ensuremath{M_{\rm{H_{2}}}}\xspace.
In this work we assume a Milky Way like value of \ensuremath{\alpha_{\rm{CO}}}\xspace = 4.4 \ensuremath{\rm{M}_\odot}\xspace (K km s$^{-1}$pc$^{2}$)$^{-1}$, which is a reasonable assumption for massive galaxies with presumably high metallicities (e.g. \citealt{Narayanan2012}; see also the review by \citealt{Bolatto2013}).
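As a compact illustration of these conversions (a minimal sketch under the stated assumptions, not the code used for our measurements), an integrated CO(2--1) flux can be translated into \ensuremath{M_{\rm{H_{2}}}}\xspace\ using the standard line-luminosity relation and the adopted cosmology:
\begin{verbatim}
# Sketch: integrated CO(2-1) flux [Jy km/s] -> L'_CO and M_H2, assuming
# r_21 = 0.8, alpha_CO = 4.4 and the adopted cosmology (H0=70, Om=0.3).
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
NU_REST_CO21 = 230.538                                # GHz

def mol_gas_mass(sdv_jykms, z, r21=0.8, alpha_co=4.4):
    d_l = cosmo.luminosity_distance(z).value          # Mpc
    nu_obs = NU_REST_CO21 / (1.0 + z)                 # GHz
    l_co21 = 3.25e7 * sdv_jykms * d_l**2 / ((1.0 + z)**3 * nu_obs**2)
    m_h2 = alpha_co * l_co21 / r21                    # correct to CO(1-0)
    return l_co21, m_h2                               # [K km/s pc^2], [Msun]

# e.g. the detected source 22260 (S dv ~ 0.09 Jy km/s at z = 1.240) gives
# L'_CO ~ 1.8e9 K km/s pc^2 and M_H2 ~ 1e10 Msun, close to Table 2.
print(mol_gas_mass(0.090, 1.240))
\end{verbatim}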
\subsection{Stack of non-detections in CO(2--1)}
With five out of six galaxies undetected in CO(2--1) \citep[including 21434;][]{Bezanson2019}, we perform a stacking analysis of the five non-detected galaxies. We calculate the weighted average (mean) to account for the slight differences in map rms, and use the non-primary beam corrected maps, which have Gaussian noise properties.
Since the nearby companion of galaxy 34879 has significantly detected CO(2--1) emission offset by 600 km/s, but with a width of roughly 200 km/s, we restrict our exploration of stacked CO(2--1) emission to image cubes with velocity resolution $\lesssim$400 km/s to prevent the flux from the companion from entering the stack, given the companion's location within 1.5'' of the target galaxies in the stack. We construct image cubes at 400 km/s velocity resolution centered at the rest-frequency of CO(2--1) of each galaxy, and stack the velocity bin that contains the CO(2--1) line.
We do not detect any CO(2--1) from the stack of individually undetected sources, with an rms noise limit of 25.6$\mu$Jy/beam for the 400\,km/s channel width of the stack. We use the mean redshift of the non-detected galaxies ($<z>$=1.476) to put a 1$\sigma$ upper limit to the average CO luminosity of \ensuremath{\rm{L}_{\rm{CO}}'}\xspace$_{(2-1)}<2.9\times10^{8}$ K km s$^{-1}$ pc$^{2}$. We make the same assumptions listed in Section \ref{sec:molgas} to convert this measurement to a molecular gas mass and find \ensuremath{M_{\rm{H_{2}}}}\xspace$<4.7\times10^{9}$\ensuremath{\rm{M}_\odot}\xspace (3$\sigma$). Using the average stellar mass of our undetected sample of log$_{10}\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace\sim11.5$, this puts a 3$\sigma$ upper limit on the molecular gas fraction of 1.6\%. The stacked sample has an average specific star formation rate of $6\times10^{-11}$ yr$^{-1}$ (likely an upper limit, as explained in Section \ref{sec:sfr}) and the properties derived from the stack are summarized in Table \ref{tab:mol}.
\section{Results}\label{sec:results}
Our new ALMA observations indicate that our sample of massive (log$_{10}\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace > 11.3$) and quiescent (log$_{10}$ sSFR$\lesssim -10$ yr$^{-1}$) galaxies at $z>1$ have low molecular gas masses (\ensuremath{M_{\rm{H_{2}}}}\xspace$\lesssim5-10\times10^{9}$\ensuremath{\rm{M}_\odot}\xspace), translating to molecular gas fractions (\ensuremath{f_{\rm{H_{2}}}}\xspace = \ensuremath{M_{\rm{H_{2}}}}\xspace/\ensuremath{M_{\rm{*}}}\xspace) between $\sim$2-6\%. To provide context for these measurements, we compile measurements of molecular gas using CO as a tracer from the literature across redshifts. We include surveys that target low-J transitions (J$_{up}\lesssim2$) to minimize uncertainties from variations in the methods to correct for CO excitation. The majority of these surveys targeted star forming galaxies outside the local Universe, including PHIBSS \citep{Tacconi2013}, PHIBSS2 \citep{Tacconi2018, Freundlich2019}, as well as smaller programs targeting Milky-Way progenitors \citep{Papovich2016}, extended disk galaxies \citep{Daddi2010}, compact star-forming galaxies \citep{Spilker2016}, and galaxies from overdense regions \citep{Hayashi2018,Rudnick2017}.
To these targeted samples, we add CO-detected sources from the blind ASPECS Survey \citep[][]{Decarli2016, Aravena2019}.
We also include the few studies that have targeted quiescent or post-starburst galaxies outside the local universe at $z<1$ \citep{Suess2017,Spilker2018}. Finally we include the large surveys at $z\sim0$ that have enabled an exploration of molecular gas in similarly massive galaxies to our sample, at similarly low sSFRs \citep[albeit at late cosmic times;][]{Young2011,Saintonge2012,Saintonge2017,Davis2016}.
To date, molecular gas measurements using CO exist for only two confirmed quiescent galaxies above $z>1$; these are upper limits (3$\sigma$) on a massive quiescent galaxy published by
\citet[][$\ensuremath{f_{\rm{H_{2}}}}\xspace\lesssim13$\%, converted from Salpeter IMF]{Sargent2015}
and the pilot galaxy for this survey
($\ensuremath{f_{\rm{H_{2}}}}\xspace\lesssim5.5\%$; \citealt[][ using our derived stellar mass and velocity width, to be consistent with the rest of the sample]{Bezanson2019}). Both measurements are rescaled to our assumption r$_{21}$=0.8. Both galaxies are spectroscopically confirmed, enabling a robust upper limit to their molecular gas content.
\begin{figure}[th]
\includegraphics[scale=0.85, trim=15 9 3 10,clip]{fig3.pdf}
\caption{ Comparison of our CO measurements to those of quiescent galaxies at $z\sim0$ from the COLDGASS and MASSIVE surveys \citep{Saintonge2017, Davis2016}.
Large symbols indicate quiescent galaxies at $z>1$ \citep[this work;][]{Sargent2015, Bezanson2019} and the far-infrared based stack of \citet{Gobat2018}.
Our sample has low \ensuremath{f_{\rm{H_{2}}}}\xspace $<2-6 \%$, comparable to galaxies at $z=0$ with similarly low sSFR. }\label{fig:fgas_vs_z0}
\end{figure}
The most comprehensive constraint on the average molecular gas in quiescent galaxies at $z>1$ to date is a far-infrared stack of 977 photometrically selected quiescent galaxies \citep{Gobat2018}, where the molecular gas content is inferred from the average dust emission \citep{Magdis2012}.
We add this measurement to the CO constraints from the literature because it uses the largest sample of quiescent galaxies at $z>1$.
Figures~\ref{fig:fgas_vs_z0} and \ref{fig:fgas_vs_m} show \ensuremath{f_{\rm{H_{2}}}}\xspace\ for our sample as a function of sSFR and \ensuremath{M_{\rm{*}}}\xspace. We plot our
ALMA measurements as stars, along with the measurements from the literature (small translucent symbols). Quiescent galaxies at $z>1$ are shown as large bold symbols. We additionally include the stacked measurement from the five non-detected galaxies (diamond). Figure \ref{fig:fgas_vs_z0} shows that based on our deep limits, massive and quiescent galaxies at $z>1$ have gas fractions comparably low to those of galaxies at $z=0$ with similar sSFRs, and that our \ensuremath{f_{\rm{H_{2}}}}\xspace\ limits reach depths comparable to those of local surveys.
Figure \ref{fig:fgas_vs_m} shows that our upper limits on \ensuremath{f_{\rm{H_{2}}}}\xspace\ are the lowest CO-derived constraints on molecular gas content of any galaxy population at $z>1$.
The left panel of Figure \ref{fig:fgas_vs_m} shows \ensuremath{f_{\rm{H_{2}}}}\xspace\ vs sSFR at $z>0$, with galaxies color coded by redshift.
The gas fraction measurement/limits for our sample are about an order of magnitude deeper than the limit set by \citet[][]{Sargent2015}, and an order of magnitude lower than that inferred from dust emission by \citet[][]{Gobat2018}. We discuss this discrepancy in quiescent galaxy \ensuremath{f_{\rm{H_{2}}}}\xspace\ between their average detection and our deep limits further in Section \ref{sec:scatter}.
In the right panel, we plot \ensuremath{f_{\rm{H_{2}}}}\xspace\ vs stellar mass, where galaxies are again color coded by redshift. Our measurements are in line with observations that the gas fraction in galaxies decreases with increasing stellar mass at all redshifts, although the mass dependence is weak compared to the stronger dependencies on redshift and sSFR \citep[e.g.][]{Tacconi2018}. Our study doubles the number of constraints on molecular gas mass at z $>$1 at the massive (log$_{10}$\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace $>$ 11.3) end.
\begin{figure*}[]
\includegraphics[scale=0.85, trim=10 9 35 10,clip]{fig4a.pdf}
\includegraphics[scale=0.85, trim=10 9 3 10,clip]{fig4b.pdf}
\caption{
Comparison of our measurements to measurements based on CO in the literature at $z>0.5$.
Large symbols indicate quiescent galaxies at $z>1$ \citep[this work;][]{Sargent2015, Bezanson2019} and the far-infrared based stack of \citet{Gobat2018}.
Small symbols (defined in right panel legend) indicate comparison literature measurements.
Our sample has low molecular gas fractions ($<2-6 \%$), 1-2 orders of magnitude lower than the few coeval star-forming galaxies at similar stellar mass. }\label{fig:fgas_vs_m}
\end{figure*}
In Figure \ref{fig:fgas_vs_z} we plot \ensuremath{f_{\rm{H_{2}}}}\xspace\ as a function of redshift, where galaxies are color coded by sSFR. A number of well-known scaling relations are apparent in Figures \ref{fig:fgas_vs_z0}, \ref{fig:fgas_vs_m}, and \ref{fig:fgas_vs_z}, including that, overall, the molecular gas fractions in galaxies decrease with decreasing redshift, decreasing sSFR, and increasing stellar mass. Our data contribute new data points to the poorly explored parameter space of low sSFR and high mass at high redshift.
\section{Discussion}\label{sec:discussion}
In this paper, we have placed constraints on the molecular gas content in the first sample of massive quiescent galaxies at $z>1$ ($<z>=1.45$).
Our low \ensuremath{f_{\rm{H_{2}}}}\xspace\ measurements indicate that the exhaustion or destruction of molecular gas in massive quiescent galaxies is efficient and complete, consistent with the finding for our pilot galaxy \citep{Bezanson2019}.
That massive quiescent galaxies at $z>1$ are gas poor suggests high star-formation efficiency and rapid depletion times during their evolution.
While our sample is not complete in stellar mass, we do not find evidence within our sample that \ensuremath{f_{\rm{H_{2}}}}\xspace\ varies with either galaxy size or surface density $\Sigma_*$. Among quiescent galaxies, these structural properties are known to correlate with stellar age \citep[e.g.][]{Williams2017}, formation redshift \citep[e.g.][]{EstradaCarpenter2020} and quenching timescale \citep[e.g.][]{Belli2019}, and therefore plausibly trace timescales for gas consumption. We measure \ensuremath{f_{\rm{H_{2}}}}\xspace$<2-6\%$, values that are universally low despite the large dynamic range of structure we probe among quiescent galaxies at $z>1$ ($R_e=0.9-7$ kpc; log$_{10}\Sigma_* = 8.9-10.8$ \ensuremath{\rm{M}_\odot}\xspace kpc$^{-2}$; Figure \ref{fig:props}).\footnote{We note that quiescent galaxies generally have higher $\Sigma_*$ than star forming galaxies, which have larger gas reservoirs.}
Our sample suggests that massive galaxies that cease star formation at the peak epoch of quenching do not retain large reservoirs of gas. These findings are in contrast with observations of recently quenched galaxies at $z<1$, some of which contain significant molecular gas reservoirs (\ensuremath{f_{\rm{H_{2}}}}\xspace$\sim20-30$\%), suggesting that their low SFRs are due to decreased star-formation efficiency (e.g. suppressed dynamically) rather than a lack of fuel for star-formation \citep{Rowlands2015, French2015, Suess2017, Smercina2018, Li2019}. Furthermore, \citet[][]{Spilker2018} find \ensuremath{f_{\rm{H_{2}}}}\xspace$\lesssim1-15$\% in quiescent galaxies at intermediate redshifts ($z\sim0.7$), additional evidence for heterogeneity among galaxies below the main sequence. Our new results, collectively with those at $z<1$, highlight a diversity in molecular gas properties among quenching galaxies across cosmic time, possibly indicating that the primary drivers of quenching evolve. These new observations of the variation in gas reservoirs of non-star-forming galaxies are therefore important constraints for our theoretical formulations of quenching processes, and of the time evolution of gas reservoirs. In the following sections, we explore the implications of our new low gas fraction measurements in this context.
\subsection{The distribution (intrinsic scatter) of cold gas content among $z>1$ quiescent galaxies}\label{sec:scatter}
Although this is the first systematic study using CO to measure molecular gas in quiescent galaxies at $z>1$,
the recent observation of average far-IR properties of 977 quiescent galaxies at $z>1$ found significant dust continuum emission, implying a relatively large molecular gas content \citep[\ensuremath{f_{\rm{H_{2}}}}\xspace$\sim16$\% when converted to Chabrier IMF;][]{Gobat2018}. The individual measurements of molecular gas in our quiescent sample range from \ensuremath{f_{\rm{H_{2}}}}\xspace$\lesssim2-6$\%, and are inconsistent with the average \ensuremath{f_{\rm{H_{2}}}}\xspace\ measurement by \citet[][]{Gobat2018}. While a primary uncertainty in our \ensuremath{f_{\rm{H_{2}}}}\xspace\ measurement is the assumed value of \ensuremath{\alpha_{\rm{CO}}}\xspace, extreme values only observed in low mass and low metallicity systems \citep[\ensuremath{\alpha_{\rm{CO}}}\xspace$\gtrsim15$;][]{Bolatto2013, Narayanan2012} would be required to bring our measurements into agreement.
A direct comparison to the Gobat et al. result is difficult owing in part to our differing methodologies, each of which is subject to its own systematic uncertainties.
And, as with any photometric selection of quiescent galaxies, there is always some risk of contamination from dusty star-forming galaxies. The contamination may enter the stack either through misidentification because of the age-dust degeneracy of colors (even if only a few bright objects), or due to neighboring dusty galaxies given the low spatial resolution of the far-IR data ($\sim$10-30''). Neighbors can contaminate either through poor source subtraction, or as hidden dusty galaxies that do not appear in optical/near-IR selection but may remain within the far-IR photometric beam \citep[e.g.][]{Simpson2017, Schreiber2018JH, Williams2019}. Further, both our sample and dusty star forming galaxies are massive and may be strongly clustered \citep[e.g.][although see also \citealt{Williams2011}]{Hickox2012}.
In this section we ignore any such possible contamination, and discuss several physical explanations for this disagreement.
First, the relatively large \ensuremath{f_{\rm{H_{2}}}}\xspace\ observed by \citet[][]{Gobat2018} could reflect a heterogeneity of molecular gas properties among the passive galaxy population at $z>1$, as is observed at $z<1$. Our sample represents some of the most massive and oldest passive galaxies known at $1<z<1.5$, while the \citet{Gobat2018} sample is dominated by objects less massive than our sample ($<$log$_{10}$\ensuremath{M_{\rm{*}}}\xspace$> =$ 10.8). Perhaps lower mass and/or younger additions to the red sequence still have leftover molecular gas, contributing to the far-IR emission observed on average.
However, we note that because the stack is an average, and our measurements are $>10\times$ lower in \ensuremath{f_{\rm{H_{2}}}}\xspace, a heterogeneous sample would imply even larger $\ensuremath{f_{\rm{H_{2}}}}\xspace>16\%$ in any subset of gas-rich quiescent galaxies.
More surveys that span a larger range of parameter space for individual quiescent galaxies (e.g. lower mass) are required to investigate this explanation further (Whitaker et al. in prep, Caliendo et al. in prep).
Alternatively, the calibration to convert the far-IR emission into a measurement of \ensuremath{M_{\rm{H_{2}}}}\xspace\ might not be universal.
These conversions are typically based on assumed dust to gas ratios and/or dust temperatures, calibrated using primarily star-forming galaxies \citep[e.g.][]{Magdis2012, Scoville2016}. In theory this relies on an intrinsic relationship between dust and gas content that has been shown to accurately describe star forming galaxies \citep{Kaasinen2019}, and for the most part, also holds for quiescent galaxies in the local Universe, albeit with large scatter \citep[e.g.][]{Lianou2016}. In principle, dust traces both atomic (HI) and molecular (H$_{2}$) gas phases, and so this could still hold if the HI/H$_{2}$ ratio is high in quiescent galaxies, while the dust to H$_{2}$ ratio is very low. We note that \citet{Spilker2018} stacked the 2mm dust continuum emission to compare to \ensuremath{M_{\rm{H_{2}}}}\xspace\ measured from CO, finding consistent values between \ensuremath{M_{\rm{H_{2}}}}\xspace\ observed via CO and dust, lending support for the idea that dust to H$_{2}$ conversions hold for massive quiescent galaxies at high redshift. However, \citet{Gobat2018} make the simplifying assumption that all gas traced by dust is molecular, although HI/H$_{2}$ mass ratios in local quiescent galaxies can be large \citep{Zhang2019} and diverse \citep{Welch2010,Young2014,Boselli2014, Calette2018}. It is therefore a possibility that the significant dust emission detected by \citet[][]{Gobat2018} is not in conflict with our low \ensuremath{f_{\rm{H_{2}}}}\xspace\ measurements, and instead is primarily tracing atomic HI rather than H$_{2}$.
Nevertheless, other factors may affect these conversions, warranting further exploration. For example, the dust to gas ratio can also vary with metallicity, as explored in simulations, although the extent to which this disrupts scaling relations is not clear
\citep[i.e. gas/dust may plateau above solar metallicity, applicable to most massive galaxies;][]{Privon2018, Li2019}. Future samples of quiescent galaxies with observations of both CO and dust continuum emission would reveal if the dust to \ensuremath{M_{\rm{H_{2}}}}\xspace\ calibrations apply across galaxy populations at high redshift, as done locally \citep{Smith2012}.
The comparison between our work presented here and that presented in \citet{Gobat2018} thus highlights several avenues of future investigation to understand the intrinsic scatter in molecular gas properties of quiescent galaxies, which will help us understand the diversity of pathways that passive galaxies may take to quiescence.
\subsection{Timescales for gas consumption or destruction}\label{sec:timescale}
Accretion is now considered to be a primary driver of galaxy growth in the early Universe \citep[for a review see][]{Tacconi2020}. While observations support
this picture, it remains unclear what disrupts the growth in massive galaxies that become quiescent. Explanations include the destruction or expulsion of gas due to feedback, the suppression of gas accretion (e.g. by virial shocks once log$_{10}$\ensuremath{M_{\rm{halo}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace$>$12), or the suppression of gas collapse due to the development of a stellar bulge. Our observations of molecular gas in quenched galaxies can help discriminate between the different processes.
In particular, we explore here the timescales for gas expulsion or consumption that are consistent with the low gas fractions we measure. Unfortunately, with mostly upper limits on \ensuremath{M_{\rm{H_{2}}}}\xspace, and likely only upper limits on the SFR, our dataset precludes a robust measurement of (current) depletion times (\ensuremath{t_{\rm{dep}}}\xspace = \ensuremath{M_{\rm{H_{2}}}}\xspace / SFR) and we instead explore the allowable range of \ensuremath{t_{\rm{dep}}}\xspace\ given low \ensuremath{f_{\rm{H_{2}}}}\xspace\ and old mass-weighted stellar age.
\begin{figure*}[]
\begin{center}
\includegraphics[scale=0.88, trim=10 9 70 10,clip]{fig5a.pdf}
\includegraphics[scale=0.88, trim=55 9 10 10,clip]{fig5b.pdf}
\caption{\ensuremath{f_{\rm{H_{2}}}}\xspace vs redshift for galaxies in our sample (large stars) and literature measurements. All galaxies are color-coded by their sSFR. Symbols are represented as in Figure \ref{fig:fgas_vs_m}. For clarity we omit $z=0$ measurements below \ensuremath{f_{\rm{H_{2}}}}\xspace$<10^{-3}$ from the Atlas3D or MASSIVE surveys, and two measurements above \ensuremath{f_{\rm{H_{2}}}}\xspace$>$5 at $z\sim2$ from \citet{Hayashi2018}. The black line indicates the \ensuremath{f_{\rm{H_{2}}}}\xspace\ on the main sequence for star forming galaxies with log$_{10}$\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace$=11$.
Lines show the gas depletion according to our toy models outlined in Section \ref{sec:timescale}: blue curves indicate models with constant \ensuremath{t_{\rm{dep}}}\xspace=0.3, 0.5, 0.6 Gyr where accretion halts at $z=2,3,4$, respectively. Yellow curves indicate models where the value of \ensuremath{t_{\rm{dep}}}\xspace\ changes according to scaling relations measured by \citet[][]{Tacconi2018}.
Our low gas fractions require rapid \ensuremath{t_{\rm{dep}}}\xspace, inconsistent with \citet{Tacconi2018}, and have better agreement with relations that have faster depletion times at high redshift, low sSFR, and high mass \citep[][magenta curves]{Liu2019}.
}\label{fig:fgas_vs_z}
\end{center}
\end{figure*}
\subsubsection{Closed-box toy model: constant \ensuremath{t_{\rm{dep}}}\xspace}
To provide qualitative insight into
the timescales required to achieve the low \ensuremath{f_{\rm{H_{2}}}}\xspace\ we observe,
we construct a closed-box toy model for a log$_{10}\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace\sim11$ main sequence galaxy that stops gas accretion, and then depletes its existing gas reservoir at specified depletion times. The SFR is decreased accordingly as gas is consumed. This model is qualitatively similar to that used in \citet[][]{Spilker2018} to investigate if their measured depletion times for quiescent galaxies at $z\sim$0.7 are consistent with depleting to levels observed in quiescent galaxies at $z=0$.
We first assume a toy model with a constant \ensuremath{t_{\rm{dep}}}\xspace\ that does not vary with time or SFR, and calculate how the gas fraction declines if gas accretion is halted while the galaxy is on the main sequence at $z=2,3,4$. We additionally assume that as the galaxy consumes its gas through star formation, stellar mass loss will return $\sim30\%$ of that mass back to the interstellar medium \citep[ISM; for a Chabrier IMF; e.g.][]{LeitnerKravtsov2011, Scoville2017}. While this is a physically motivated assumption, it is also conservative: if the true fraction of gas returned to the ISM is lower, the gas reservoir will be depleted even faster, strengthening our conclusions.
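A minimal numerical sketch of this constant-\ensuremath{t_{\rm{dep}}}\xspace\ closed-box model is given below (Python); the initial main-sequence gas fraction \texttt{f0}, the cosmology, and the other parameter values are illustrative assumptions, not fits to our data.
\begin{verbatim}
# Sketch of the constant-t_dep closed-box toy model (illustrative assumptions:
# initial gas fraction f0 on the main sequence, H0 = 70, Om0 = 0.3)
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def closed_box(z_quench, t_dep_gyr, f0=0.5, logmstar0=11.0,
               return_frac=0.3, z_end=1.0, n_steps=1000):
    """Gas fraction vs time after accretion halts at z_quench.

    return_frac: fraction of consumed gas returned to the ISM by stellar mass loss
    """
    m_star = 10.0**logmstar0
    m_gas  = f0 * m_star
    times  = np.linspace(cosmo.age(z_quench).value, cosmo.age(z_end).value, n_steps)
    dt     = times[1] - times[0]
    f_h2   = np.empty(n_steps)
    for i in range(n_steps):
        f_h2[i] = m_gas / m_star
        sfr = m_gas / t_dep_gyr                   # SFR follows the shrinking reservoir
        m_gas  -= (1.0 - return_frac) * sfr * dt  # net gas consumed
        m_star += (1.0 - return_frac) * sfr * dt  # locked into long-lived stars
    return times, f_h2

# e.g. accretion halted at z = 3 with t_dep = 0.5 Gyr:
t, f = closed_box(z_quench=3.0, t_dep_gyr=0.5)
print(f"f_H2 at z~1.5: {f[np.argmin(np.abs(t - cosmo.age(1.5).value))]:.3f}")
\end{verbatim}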
In the left panel of Figure \ref{fig:fgas_vs_z}, the blue curves show the evolution of these constant \ensuremath{t_{\rm{dep}}}\xspace\ closed box models at $z=2,3,4$ for \ensuremath{t_{\rm{dep}}}\xspace$=0.3, 0.5, 0.6$ Gyr, respectively.
The higher the redshift at which accretion is halted, the longer the limiting \ensuremath{t_{\rm{dep}}}\xspace\ that remains consistent with our low gas fractions; longer depletion times would flatten the blue curves and are inconsistent with our measurements. These curves indicate that rapid depletion times are required for a main sequence galaxy to use up its existing reservoir once accretion is halted. However, we note that for mass-weighted ages of $\sim$1-3 Gyr (all galaxies except 22260\footnote{We note the possibility that 22260 received its gas later through a minor merger from its secondary component, and therefore its older age does not disagree with this picture.}), the majority of star-formation occurred before $z=3.5$, indicating a limiting \ensuremath{t_{\rm{dep}}}\xspace$<0.6$ Gyr. This is also roughly the typical depletion time for massive galaxies on the main sequence at these redshifts \citep[$\sim$0.4-0.6 Gyr;][]{Tacconi2018,Liu2019}.
Our data are consistent with this simple picture where galaxies truncate accretion and then consume the existing gas at typical main sequence \ensuremath{t_{\rm{dep}}}\xspace\ rates, or faster.
\subsubsection{Closed-box toy model: varying \ensuremath{t_{\rm{dep}}}\xspace}
While the constant \ensuremath{t_{\rm{dep}}}\xspace\ toy model is useful for providing the qualitative intuition that long depletion timescales (\ensuremath{t_{\rm{dep}}}\xspace$ > 0.6$ Gyr) are inconsistent with our data, observations have shown that in reality, \ensuremath{t_{\rm{dep}}}\xspace\ is not constant as galaxies evolve. \ensuremath{t_{\rm{dep}}}\xspace\ is known to vary as a function of redshift and sSFR (i.e. distance from the main sequence), with weaker dependences on \ensuremath{M_{\rm{*}}}\xspace\ and galaxy size \citep[][]{Tacconi2013, Santini2014, Genzel2015, Tacconi2018, Liu2019, Tacconi2020}. Therefore, we also explore a closed box model where \ensuremath{t_{\rm{dep}}}\xspace\ smoothly evolves according to scaling relations as galaxies leave the main sequence. These scaling relations imply an increase in \ensuremath{t_{\rm{dep}}}\xspace\ as galaxies move below the main sequence, which slows the rate at which \ensuremath{f_{\rm{H_{2}}}}\xspace\ decreases with time. For this set of toy models we make the conservative, albeit unrealistic, assumption that no mass is returned by stars to the ISM as galaxies move below the main sequence. This is the more conservative comparison in this case, because any mass loss to the ISM during this phase would increase the time required for the toy model to reach the low \ensuremath{f_{\rm{H_{2}}}}\xspace\ we observe.
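The varying-\ensuremath{t_{\rm{dep}}}\xspace\ variant can be sketched analogously, with \ensuremath{t_{\rm{dep}}}\xspace\ re-evaluated at each step from a scaling relation in redshift and main-sequence offset. The power-law forms and coefficients below are placeholders for illustration only; the published \citet{Tacconi2018} or \citet{Liu2019} calibrations should be substituted to reproduce the curves in Figure \ref{fig:fgas_vs_z}.
\begin{verbatim}
# Sketch of the varying-t_dep variant; the scaling functions are placeholders,
# NOT the published Tacconi et al. (2018) or Liu et al. (2019) fits.
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def t_dep_scaling(z, dms, t0=0.8, a=-0.6, b=-0.4):
    """Placeholder t_dep [Gyr] vs redshift and MS offset dms = sSFR / sSFR_MS."""
    return t0 * (1.0 + z)**a * max(dms, 1e-3)**b

def ssfr_ms(z):
    """Placeholder main-sequence sSFR [Gyr^-1] at log M* ~ 11."""
    return 0.1 * (1.0 + z)**2.5

def evolve_varying_tdep(z_quench, logmstar0=11.0, f0=0.5, z_end=1.0, n_steps=500):
    zs    = np.linspace(z_quench, z_end, n_steps)
    times = cosmo.age(zs).value                    # Gyr
    m_star, m_gas = 10.0**logmstar0, f0 * 10.0**logmstar0
    t_dep = t_dep_scaling(z_quench, 1.0)           # start on the main sequence
    f_h2  = np.empty(n_steps)
    f_h2[0] = m_gas / m_star
    for i in range(1, n_steps):
        dt  = times[i] - times[i - 1]
        sfr = m_gas / t_dep                        # no gas return (conservative)
        m_gas  -= sfr * dt
        m_star += sfr * dt
        f_h2[i] = m_gas / m_star
        dms   = (sfr / m_star) / ssfr_ms(zs[i])    # current offset below the MS
        t_dep = t_dep_scaling(zs[i], dms)          # t_dep lengthens below the MS
    return zs, f_h2
\end{verbatim}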
The right panel of Figure \ref{fig:fgas_vs_z} shows the result of this toy model calculation for two example scaling relations, that of \citet[][]{Tacconi2018} in yellow and of \citet[][]{Liu2019} in magenta.
In the case of \citet{Tacconi2018}, the simple consumption of gas does not reach low enough gas fractions quickly enough to match our observations. This is due to the relatively long depletion times below the main sequence implied by this particular scaling relation. This is not necessarily surprising, as primarily star-forming galaxies are used to calibrate these relations outside the local Universe; understanding the evolution of the star-forming population was the primary goal of these analyses. Scaling relations measured in \cite{Genzel2015,Tacconi2020} result in similar behavior.
\begin{figure*}[]
\begin{center}
\includegraphics[scale=1.3, trim=10 9 10 10,clip]{fig6.pdf}
\caption{ Same as Figure \ref{fig:fgas_vs_z} but with \ensuremath{f_{\rm{H_{2}}}}\xspace(z) predictions from analytical equilibrium ``bathtub models" that balance gas inflow, outflow, and star-formation.
Curves represent halos with masses at $z=0$ of \ensuremath{M_{\rm{halo}}}\xspace$=$10$^{11}$ (black), 10$^{12}$ (magenta), 10$^{13}$ (orange) and 10$^{14}$
(yellow) \ensuremath{\rm{M}_\odot}\xspace\ published in \citet[][]{Dave2012}. Solid lines indicate a model where the mass loading factor for outflowing gas is similar to momentum driven feedback.
For halos with final mass $>$10$^{13}$ we plot two other feedback prescriptions (dotted and dashed lines, see text) but our conclusions are independent of the stellar feedback prescription.
Our data are most consistent with massive halos (\ensuremath{M_{\rm{halo}}}\xspace=10$^{14}$\ensuremath{\rm{M}_\odot}\xspace at z=0), which reached the critical halo mass \ensuremath{M_{\rm{halo}}}\xspace=10$^{12}$\ensuremath{\rm{M}_\odot}\xspace\ the earliest (at $z\sim4$), slowing the accretion of baryons due to shock heating at the virial radius.
}\label{fig:fgas_dave}
\end{center}
\end{figure*}
Taken at face value, the
\citet{Tacconi2018} relation implies that \ensuremath{t_{\rm{dep}}}\xspace=0.7, 0.6, 0.5 Gyr for a log$_{10}$\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace=11 galaxy leaving the main sequence at z=2,3,4, as explored in Figure \ref{fig:fgas_vs_z}. Extrapolating the relation to the average mass, sSFR and redshifts of our ALMA targets would imply \ensuremath{t_{\rm{dep}}}\xspace$\sim$1.6 Gyr and \ensuremath{f_{\rm{H_{2}}}}\xspace$\sim$10\%. For the ALMA galaxies individually, the relation implies \ensuremath{f_{\rm{H_{2}}}}\xspace\ values that are $>2\times$ larger than our conservatively measured 3$\sigma$ upper limits. Our data safely rule out these extrapolations.
In contrast, the closed box model based on scaling relations measured by \citet[][]{Liu2019} reaches substantially lower \ensuremath{f_{\rm{H_{2}}}}\xspace. This is primarily due to a more rapid \ensuremath{t_{\rm{dep}}}\xspace\ near but below the main sequence at high redshift in their calibration, compared to the behavior of \ensuremath{t_{\rm{dep}}}\xspace\ measured by \citet{Tacconi2018}. As is apparent, the faster \ensuremath{t_{\rm{dep}}}\xspace\ near but below the main sequence has a substantial impact on the behavior of our toy model. Therefore, we cannot rule out that our low \ensuremath{f_{\rm{H_{2}}}}\xspace\ are consistent with simple gas consumption with behavior similar to \citet[][]{Liu2019} if accretion onto galaxies is halted. However, we note that including physically motivated gas recycling from stellar mass loss as in the last section would drastically increase the time required to reach our measured \ensuremath{f_{\rm{H_{2}}}}\xspace, increasing the tension with our observations (with the 30\% gas recycling assumed in the previous section, the $z=4$ magenta curve would be consistent with only our two highest \ensuremath{f_{\rm{H_{2}}}}\xspace\ constraints, too high to explain all our measurements, and the $z=2-3$ curves would be inconsistent with all of our data).
At face value, the
\citet{Liu2019} relation implies that \ensuremath{t_{\rm{dep}}}\xspace=0.5, 0.4, 0.3 Gyr for a log$_{10}$\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace=11 galaxy leaving the main sequence at z=2,3,4, as explored in Figure \ref{fig:fgas_vs_z}. Extrapolating to the properties of our ALMA targets would imply longer \ensuremath{t_{\rm{dep}}}\xspace$\sim$2.6 Gyr and lower \ensuremath{f_{\rm{H_{2}}}}\xspace$\sim$6\%, closer to our observations but still $\sim4\times$ larger than our stacked result.
We note that our 3$\sigma$ upper limits and our assumption about r$_{21}$ are conservative, and thus the real gas fractions are likely much lower than the figure suggests. Therefore, we speculate that \ensuremath{t_{\rm{dep}}}\xspace\ must remain rapid, in disagreement with extrapolations from scaling relations, as galaxies move below the main sequence.
Unfortunately, our toy model is highly sensitive to the form of scaling relations at high masses, high redshifts, and below the main sequence, which is poorly explored parameter space.
This highlights the need for further exploration of gas reservoirs in galaxies below the main sequence in the early Universe.
Our finding that scaling relations do not describe galaxies below the main sequence (at least outside of the local Universe) is in agreement with findings by \citet[][]{Spilker2018}.
Half of their sample (the half with higher log$_{10}$sSFR $>-1.2$ Gyr$^{-1}$ and lower mass log$_{10}\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace\lesssim11$) was detected in CO with \ensuremath{f_{\rm{H_{2}}}}\xspace$\sim7-15$\%, in agreement with scaling relations.
However, the \ensuremath{f_{\rm{H_{2}}}}\xspace\ limits measured in their non-detected sample (with similar sSFR and \ensuremath{M_{\rm{*}}}\xspace\ to our sample) were significantly lower than the expectations based on scaling relations. Both our data and those of \citet{Spilker2018} indicate that scaling relations for the star-forming population do not extrapolate to populations with lower sSFR, and break down around $3-5$ times below the main sequence \citep{Spilker2018}. Rather, \ensuremath{t_{\rm{dep}}}\xspace\ likely remains short below the main sequence until gas is used up or destroyed, and what little is left cannot be efficiently converted into stars, thereby increasing \ensuremath{t_{\rm{dep}}}\xspace.
Despite the uncertainty in behavior of scaling relations below the main sequence at high-redshift, these comparisons are useful because they qualitatively indicate that \ensuremath{t_{\rm{dep}}}\xspace\ must be rapid when galaxies are shutting off their star-formation. These conclusions are the same whether we assume the galaxy originates on the main sequence or in a starburst phase \citep[with even faster typical \ensuremath{t_{\rm{dep}}}\xspace; e.g.][]{Silverman2015, Silverman2018}. Smoothly evolving models of departure from the main sequence where star-formation efficiency is decreased and \ensuremath{t_{\rm{dep}}}\xspace\ is increased (i.e. reservoirs of gas exist but do not form stars) are inconsistent with our observations.
Finally, we note the possibility that \ensuremath{t_{\rm{dep}}}\xspace\ evolution is not smooth, and an initial rapid drop in gas fraction due to, e.g. increased star formation efficiency or feedback as galaxies go below the main sequence, is followed by an extended period of low \ensuremath{f_{\rm{H_{2}}}}\xspace\ and long \ensuremath{t_{\rm{dep}}}\xspace. That long depletion times kick in after most gas is gone is also consistent with simulations presented by \citet{Gensior2020} that indicate that suppression of star formation efficiency (i.e. lengthening of \ensuremath{t_{\rm{dep}}}\xspace) due to dynamical stabilization by growth of a bulge in galaxies below the main sequence has an impact only at low \ensuremath{f_{\rm{H_{2}}}}\xspace \citep[$<$5\%; see also][]{Martig2009, Martig2013}. Such a scenario implies an even faster initial depletion of gas than we model here. Therefore, the \ensuremath{t_{\rm{dep}}}\xspace\ values derived for our sample from these toy models should be considered upper limits.
\begin{figure*}[]
\begin{center}
\includegraphics[scale=1., trim=10 0 10 10,clip]{fig7.pdf}
\caption{ Comparison of our observations to extrapolated \ensuremath{f_{\rm{H_{2}}}}\xspace\ from scaling relations \citep{Tacconi2013,Scoville2017, Tacconi2018, Liu2019} as a function of sSFR at fixed $z=1.5$, for $10.8<$log$_{10}$\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace$<11.6$ (indicated by shaded region) to match the range of observations plotted \citep[symbols with black edges; this work and][]{Gobat2018}. \textsc{simba} quiescent galaxies that meet our selection criteria are circles (those with no ongoing SFR have a floor set to log$_{10}$ sSFR = -13 yr$^{-1}$). Our observed limits are in agreement with the low gas fractions predicted by \textsc{simba} simulations, and both have significantly lower \ensuremath{f_{\rm{H_{2}}}}\xspace than expected from scaling relations.
}\label{fig:fgas_simba}
\end{center}
\end{figure*}
\subsection{Comparison to analytic bathtub models}
Further insight is possible by comparing to analytical ``bathtub'' models, where the gas content of galaxies is an equilibrium of gas inflow, outflow, and consumption by star formation \citep[e.g.][]{Dave2012, Finlator2008, Bouche2010, Lilly2013, PengMaiolino2014, RathausSternberg2016}. This self-regulation,
to first order, appears to describe the behavior of \ensuremath{f_{\rm{H_{2}}}}\xspace\ across star forming galaxy populations remarkably well \citep{Tacconi2020}.
However, as halos grow above \ensuremath{M_{\rm{halo}}}\xspace$>10^{12}$ \ensuremath{\rm{M}_\odot}\xspace, the accretion of baryons is slowed down due to shock heating at the virial radius \citep[e.g.][]{DekelBirnboim2006}. More massive halos reach this critical mass at higher redshifts, spending a longer fraction of cosmic time without accreting new fuel for star formation.
In this section, we compare our observed gas fractions to those predicted using the simple analytic equilibrium model for \ensuremath{f_{\rm{H_{2}}}}\xspace(z, \ensuremath{M_{\rm{halo}}}\xspace) outlined in \citet[][]{Dave2012}. For a given halo mass and formation redshift, gas in the galaxy is computed from cosmological accretion as a function of \ensuremath{M_{\rm{halo}}}\xspace\ \citep{Dekel2009}, simple stellar and preventative feedback prescriptions that remove gas or keep it hot in the halo, and consumption from the star formation rate. Although this is not a self-consistent model for gas evolution, because the gas fraction is computed from the star formation rate with an assumed star formation efficiency, the comparison is nonetheless a simple, intuitive tool to qualitatively compare the relative impact of the competing processes that affect gas fraction evolution in galaxies.
In Figure \ref{fig:fgas_dave} we show a series of these models in comparison to our observations. We show \ensuremath{f_{\rm{H_{2}}}}\xspace\ for galaxies in halos that reach masses at $z=0$ of \ensuremath{M_{\rm{halo}}}\xspace$=$10$^{11}$ (black), 10$^{12}$ (magenta), 10$^{13}$ (orange) and 10$^{14}$ (yellow) \ensuremath{\rm{M}_\odot}\xspace\ published in \citet[][]{Dave2012}. Observed galaxies are color-coded by their inferred halo mass at the redshift of observation, using the stellar mass to halo mass relation of \citet{Behroozi2010} as implemented in \textsc{halotools} \citep[][assuming no scatter, therefore the uncertainties are likely large]{Hearin2017}.
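For reference, a rough version of this halo-mass assignment (ignoring scatter, and assuming the \textsc{halotools} \texttt{Behroozi10SmHm} interface, so the exact call signature should be treated as illustrative) looks like:
\begin{verbatim}
# Sketch of the halo-mass assignment (no scatter); assumes halotools'
# Behroozi10SmHm interface, so treat the exact call signature as illustrative.
from halotools.empirical_models import Behroozi10SmHm

model = Behroozi10SmHm()

# Invert the mean stellar-to-halo-mass relation for a log10 M* = 11.5 galaxy
log_mhalo = model.mean_log_halo_mass(log_stellar_mass=11.5, redshift=1.45)
print(f"log10 M_halo ~ {log_mhalo:.2f}")   # of order 13.5-14 for our targets
\end{verbatim}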
This model predicts that only halos that reach 10$^{14}$\ensuremath{\rm{M}_\odot}\xspace\ by $z=0$ halt accretion early enough in cosmic time to allow gas consumption to reach the low \ensuremath{f_{\rm{H_{2}}}}\xspace\ we observe in our sample. A halo with 10$^{14}$\ensuremath{\rm{M}_\odot}\xspace\ at $z=0$ reaches this critical halo mass of 10$^{12}$\ensuremath{\rm{M}_\odot}\xspace\ at $z\sim4$, and exceeds the quenching threshold (which evolves slightly with z) around $z\sim3$ \citep{Dekel2009, Dave2012}. For \ensuremath{M_{\rm{halo}}}\xspace $\lesssim$10$^{13}$\ensuremath{\rm{M}_\odot}\xspace\ there is not enough time to consume the gas already accreted, and other effects would be required (e.g. gas destruction from feedback) to match our low gas fractions.
For \ensuremath{M_{\rm{halo}}}\xspace$>10^{13}$\ensuremath{\rm{M}_\odot}\xspace\ we also plot additional mathematical forms to describe the outflow term in the equilibrium model (variations to the stellar feedback prescription, which vary the star formation efficiency). The dotted line indicates a mass loading factor that lowers efficiency at low masses, and the dashed line indicates an additional dependence on metallicity, which decreases gas consumption at low metallicity. These variations mostly impact growth and gas fraction at low galaxy mass and improve agreement with observations at low masses, but for our case the differences are small and do not impact this result.
Based on these models we speculate that a plausible explanation of our observations is that our galaxies reside in massive halos (10$^{14}$\ensuremath{\rm{M}_\odot}\xspace\ by $z=0$) that grew above the critical mass of 10$^{12}$\ensuremath{\rm{M}_\odot}\xspace, slowing gas accretion early enough in cosmic time ($z\sim4$) to reach low gas fractions by $z\sim1.5$. This scenario is qualitatively similar to the idea of cosmological starvation explored in \citet{FeldmannMayer2015}.
Estimated halo masses for the ALMA sample are consistent with this picture. The stellar mass to halo mass relation predicts typical log$_{10}$\ensuremath{M_{\rm{halo}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace\ for our sample of $13.5-14$ at their respective redshifts \citep{Behroozi2010}, in general agreement with inferred halo masses from clustering of quiescent galaxies at $z>1$ \citep[e.g.][]{Ji2018}.
Furthermore, the relative number density of our ALMA sample from integrating the observed stellar mass function \citep[$\sim$10$^{-5}$Mpc$^{-3}$;][]{Tomczak2014} is similar to that of log$_{10}$\ensuremath{M_{\rm{halo}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace$>$13.5 at our typical redshift $z\sim1.4$ \citep[halos that will reach 10$^{14}$\ensuremath{\rm{M}_\odot}\xspace\ by z=0; calculated using the halo mass function calculator \textsc{hmf} published by][assuming the halo mass function of \citealt{Behroozi2013}]{Murray2013}.
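This order-of-magnitude comparison can be sketched with the \textsc{hmf} calculator as follows; the fitting-function name and the unit conventions shown are assumptions based on the package defaults and should be checked against its documentation.
\begin{verbatim}
# Sketch: cumulative number density of massive halos at z ~ 1.4 with hmf.
# The "Behroozi" fitting-function name and the Msun/h unit conventions are
# assumptions based on the package defaults; adjust as needed.
import numpy as np
from hmf import MassFunction

h  = 0.7                                             # assumed little-h
mf = MassFunction(z=1.4, Mmin=12.0, Mmax=15.5, hmf_model="Behroozi")

target = 13.5 + np.log10(h)                          # log10(M [Msun/h]) for 10^13.5 Msun
n_gt   = np.interp(target, np.log10(mf.m), mf.ngtm)  # cumulative n(>M) in (Mpc/h)^-3
print(f"n(>10^13.5 Msun, z=1.4) ~ {n_gt * h**3:.1e} Mpc^-3")  # compare to ~1e-5 Mpc^-3
\end{verbatim}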
Were our sample too numerous compared to halos of the requisite mass, it would require that some fraction of lower-mass halos have their gas destroyed more rapidly than implied by the equilibrium model (e.g. via stronger AGN feedback). We note that these ballpark estimates are uncertain owing to scatter in the stellar mass to halo mass relation as well as uncertainties in linking progenitor populations through cumulative number density evolution \citep{Wellons2017, Torrey2017}.
Unfortunately, the simplicity of this analytical model and the significant intrinsic scatter in the stellar mass to halo mass relation preclude a rigorous test of the idea that reaching high halo mass and stopping accretion at early times is the primary driver of low gas fractions. We can only speculate here that this could be a contributing factor. With recent improvements, cosmological simulations may provide more realistic and self-consistent comparisons to observables like \ensuremath{f_{\rm{H_{2}}}}\xspace. We explore these comparisons in the next section.
\subsection{Comparison to cosmological simulations}
Historically, cosmological simulations have been challenged to match massive galaxies in their abundances over cosmic time, as well as to prevent continued star formation in massive quiescent galaxies \citep[for a review see][]{SomervilleDave2015}. Recent advances in feedback prescriptions have enabled progress on both of these fronts \citep{Vogelsberger2014, Schaye2015, Dave2019}, and now face a new challenge to match the ISM properties such as the cold gas reservoirs we study here \citep[e.g.][]{Narayanan2012b, Lagos2014, Lagos2015a, Lagos2015b}. Analysis of recent cosmological hydrodynamical simulations indicate that modern implementations of feedback prescriptions for massive galaxies are able to qualitatively reproduce the global scaling relations for star forming galaxies across cosmic time \citep[e.g.][]{Scoville2017, Tacconi2018, Liu2019} as well as the low \ensuremath{f_{\rm{H_{2}}}}\xspace\ that are observed in massive and quiescent galaxies by $z\sim0$ \citep[e.g.][]{Young2011,Saintonge2017, Davis2016}. With our new observations of \ensuremath{f_{\rm{H_{2}}}}\xspace\ presented here we can now extend these comparisons to massive quiescent galaxies at $z\sim1.5$.
We compare to the predictions for molecular gas reservoirs in the \textsc{simba} simulation \citep{Dave2019}. \textsc{simba} quenches galaxies primarily via its implementation of jet AGN feedback, in which $\sim10^4$ km/s jets are ejected bipolarly from low-Eddington ratio black holes. The jets are explicitly decoupled from the ISM, thus presumably the quenching owes to heating and/or removal of halo gas. \textsc{simba}'s X-ray feedback is important for removing H$_2$ from the central regions \citep[$<0.5R_{e}$;][]{Appleby2020}, which may also contribute to lowering the global molecular content, in general agreement with evidence for inside out quenching observed in molecular gas reservoirs \citep{Spilker2019}.
We select quiescent galaxies from a snapshot at $z\sim1.5$ to match our ALMA target selection criteria: log$_{10}$\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace$>$11.3 and log$_{10}$ sSFR$<-10$yr$^{-1}$.
The comparison of \ensuremath{f_{\rm{H_{2}}}}\xspace\ in \textsc{simba} galaxies to our ALMA observations is shown in Figure \ref{fig:fgas_simba}. Remarkably, \textsc{simba} predicts low \ensuremath{f_{\rm{H_{2}}}}\xspace\ in quiescent galaxies, consistent with our observational limits. Our ALMA limits on \ensuremath{f_{\rm{H_{2}}}}\xspace\ lie at the upper envelope of \ensuremath{f_{\rm{H_{2}}}}\xspace\ predicted for \textsc{simba} galaxies of similar mass and sSFR, with the majority of \textsc{simba} galaxies containing \ensuremath{f_{\rm{H_{2}}}}\xspace$<$3\%.
Of the \textsc{simba} galaxies similar to our sample, 90\% reside in halos with log$_{10}$\ensuremath{M_{\rm{halo}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace$>$13, and likely truncated accretion of new gas at earlier times. Better observational constraints on \ensuremath{t_{\rm{dep}}}\xspace, the time evolution of gas reservoirs, and the precision of stellar age diagnostics would be required to link this success directly to the gas destruction from feedback model, and/or to the truncation of new gas accretion explored in the previous section.
\textsc{simba} produces a comparable population of ``slow quenchers" and ``fast quenchers" \citep{RodriguezMontero2019} at these redshifts, and in the future we will examine whether the galaxies consistent with our ALMA limits are preferentially in either category, and measure the associated gas depletion times.
Also in Figure \ref{fig:fgas_simba} we show the scaling relations based on star forming galaxies across redshifts and quiescent galaxies at $z\sim0$ \citep{Scoville2017, Tacconi2018, Liu2019}. The shaded regions correspond to the scaling relations at z$=1.5$ for log$_{10}$\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace=10.8 (upper bound set by the average mass of the sample studied in \citealt{Gobat2018}) and log$_{10}$\ensuremath{M_{\rm{*}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace=11.6 (lower bound set by the mass of our most massive galaxy). Both our ALMA limits, as well as the \textsc{simba} predictions, lie well below scaling relations for $\ensuremath{f_{\rm{H_{2}}}}\xspace(\ensuremath{M_{\rm{*}}}\xspace,z,sSFR)$. This is consistent with the results of Section \ref{sec:timescale}, indicating that the simulations also disagree with extrapolations of current scaling relations.
Improvements to future scaling relations should include data from surveys such as this one, in the poorly explored parameter space of high redshift and low sSFR.
\section{Conclusions}
We have conducted the first molecular gas survey of massive quiescent galaxies at $z>1$, using CO(2--1) measured with ALMA. We summarize the findings of our survey as follows:
1. We measure very low \ensuremath{f_{\rm{H_{2}}}}\xspace\ ($<2-6$\%) for massive quiescent galaxies at $z\sim1.5$. The sample uniformly displays \ensuremath{f_{\rm{H_{2}}}}\xspace$<6\%$, and we do not observe any variation with size or stellar density across the large dynamic range of structural properties within our sample.
2. Depletion times must be rapid as galaxies leave the star forming sequence in order to match our constraints of very low \ensuremath{f_{\rm{H_{2}}}}\xspace. We estimate an upper limit to the typical depletion time of \ensuremath{t_{\rm{dep}}}\xspace$<0.6$ Gyr, much shorter than expected from extrapolating current scaling relations to low sSFR.
3. Our low \ensuremath{f_{\rm{H_{2}}}}\xspace\ limits are generally consistent with the predictions of an analytical ``bathtub" model, for galaxies in massive halos that reach log$_{10}$\ensuremath{M_{\rm{halo}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace=14 by z=0.
We speculate that ``cosmological starvation" after reaching a critical mass of log$_{10}$\ensuremath{M_{\rm{halo}}}\xspace/\ensuremath{\rm{M}_\odot}\xspace=12 ($z\sim4$ for these halos), contributed to the rapid decline in \ensuremath{f_{\rm{H_{2}}}}\xspace\ required by our observations.
4. Our low \ensuremath{f_{\rm{H_{2}}}}\xspace\ limits are consistent with predictions from the recent \textsc{simba} cosmological simulations with realistic AGN feedback, highlighting another success for state-of-the-art models describing the properties of massive quiescent galaxies. This consistency, like the bathtub model, may also point to the simple truncation and consumption picture. However, with our data we cannot rule out that low gas fractions result from gas destruction from feedback or an increase in the efficiency of gas consumption.
Although it may be observationally expensive, concrete tests of current and future galaxy formation models will rely on building larger datasets that probe the molecular gas properties of galaxies with little on-going star formation. Building statistical samples will be challenging, and there are a number of approaches one could take. Real progress will be made by increasing sample sizes alone. Another possibility would be to combine information about the star formation histories (SFHs) with depletion time tracks to follow individual objects back in time. The extraction of these histories from quiescent galaxies at cosmic noon will soon be enabled by the unparalleled capabilities of the {\it James Webb Space Telescope} ({\it JWST}). Deep photometric and spectroscopic surveys are planned for Cycle 1 \citep{Williams2018, Rieke2019} that will be capable of identifying quiescent galaxies even at $z>4$ and reconstructing their star formation histories with unprecedented detail. These will make ideal targets for future ALMA CO surveys to build our understanding of molecular gas in galaxies that have ceased star formation.
\acknowledgments
This work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. We acknowledge valuable discussions with Ivo Labbe, Wren Suess, Sandro Tacchella, Sirio Belli. CCW acknowledges support from the National Science Foundation Astronomy and Astrophysics Fellowship grant AST-1701546. JSS is supported by NASA Hubble Fellowship grant \#HF2-51446 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. K.E.W. wishes to acknowledge funding from the Alfred P. Sloan Foundation. CAW is supported by the National Science Foundation through the Graduate Research Fellowship Program funded by Grant Award No. DGE-1746060. This paper makes use of the following ALMA data: ADS/JAO.ALMA \#2018.1.01739.S, ADS/JAO.ALMA \#2015.1.00853.S. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The Cosmic Dawn Center is funded by the Danish National Research Foundation.
\bibliographystyle{aasjournal}
|
{
"timestamp": "2020-12-04T02:00:22",
"yymm": "2012",
"arxiv_id": "2012.01433",
"language": "en",
"url": "https://arxiv.org/abs/2012.01433"
}
|
\section{Introduction}
\subfile{sections/introduction}
\section{Methods}
\subfile{sections/methods}
\section{Results and Discussion}
\label{sec:results}
\subfile{sections/results}
\begin{acknowledgement}
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was supported by the LLNL-LDRD Program under Project No. 19-SI-001. This work is reviewed and released under LLNL-JRNL-816936. The authors thank Brian Gallagher for providing valuable discussions and information.
\end{acknowledgement}
\section{Author Information}
\subsection{Corresponding Authors}
\begin{itemize}
\item Jize Zhang, Center for Applied Scientific Computing, Computing Directorate, Lawrence Livermore National Laboratory, Livermore, California 94550, United States, Email: zhang64@llnl.gov
\item T. Yong-Jin Han, Materials Science Division, Physical and Life Sciences Directorate, Lawrence Livermore National Laboratory, Livermore, California 94550, United States, Email: han5@llnl.gov
\end{itemize}
\subsection{Authors}
\begin{itemize}
\item Bhavya Kailkhura, Center for Applied Scientific Computing, Computing Directorate, Lawrence Livermore National Laboratory, Livermore, California 94550, United States; Email: kailkhura1@llnl.gov
\end{itemize}
\subsection{Performance Evaluation Result}
{\color {black} On the test dataset}, \autoref{metrics} shows that Deep Ensembles outperforms the rest of the methods in Accuracy and ECE. Dropout achieves the best NLL and also a much better ECE than the Softmax baseline. Intriguingly, the two uncertainty quality metrics rank Deep Ensembles differently (best ECE and worst NLL). This might be attributed to the well-known pitfall that NLL over-penalizes samples with very low prediction probabilities for their true classes\cite{ovadia2019can}. Thus, we recommend ECE as the default uncertainty quality metric to avoid this pitfall and also for its better interpretability (accuracy vs.\ confidence). Overall, these results demonstrate the effectiveness of uncertainty-aware DNN approaches over the Softmax baseline. It is important to remember that their performance gain over the baseline is not free. In the case of Deep Ensembles, additional cost must be spent on training and inference: for example, we trained an ensemble of 16 classifiers independently, so the computational cost is 16 times higher than the baseline. For Dropout, although the training cost is comparable to the baseline, the inference cost is comparable to that of Deep Ensembles.
The success of Deep Ensembles is anticipated. After all, ensembling multiple machine learning models is a well-known technique for reducing the error of a single model, which has been explained theoretically \cite{dietterich2000ensemble,shi2018crowd} and empirically \cite{fernandez2014we,cortes2018deep}. Compared to Dropout, the other ensemble-like technique, we hypothesize that Deep Ensembles learns multiple models whose predictions are much more diverse (weakly correlated) given the high-dimensionality and nonconvexity of DNN parameter spaces, which is crucial for improving both the classification accuracy and the uncertainty quantification quality \cite{kendall2017uncertainties}.
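For concreteness, a minimal sketch of the ECE computation used above is given below (variable names are illustrative):
\begin{verbatim}
# Sketch of the expected calibration error (ECE): bin test predictions by
# confidence and average the |accuracy - confidence| gap, weighted by bin size.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """probs: (N, C) predictive probabilities; labels: (N,) integer classes."""
    confidences = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean()
                                       - confidences[in_bin].mean())
    return ece

# For Deep Ensembles, probs is the member-averaged softmax; for Dropout, the
# average over stochastic forward passes.
\end{verbatim}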
\begin{table}[!t]
\caption{Accuracy and uncertainty quality of different methods.}
\begin{tabular}{llll}
\hline
Approaches & Accuracy ($\uparrow$) & NLL ($\downarrow$) & ECE ($\downarrow$) \\
\hline
Baseline & 92.3\% & 0.919 & 5.38\% \\
Dropout & 92.1\% & \textbf{0.903} & 2.79\% \\
Deep Ensembles & \textbf{95.3\%} & 0.920 & \textbf{1.52\%} \\
\hline
\end{tabular}
\label{metrics}
\end{table}
\subsection{DL Trustworthiness Case Studies}
In this section, we show how uncertainty scores can be leveraged to address the aforementioned problems and make Materials Discovery workflows dependable.
\subsubsection{Case Study 1: How much data is required to train a DNN?}
The need for large amounts of labelled training data is often the bottleneck to the successful deployment of machine learning models. This is especially crucial for DNNs due to their over-parameterized nature \cite{krizhevsky2012imagenet,lecun2015deep}. Yet, in scientific applications, obtaining high-fidelity labels can be expensive due to the associated costly and time consuming experiments \cite{schmidt2019recent}. Therefore, an important issue is to decide how much training data is required to achieve the desired accuracy level, which allows the user to prioritize experimental plans accordingly.
Conventionally, this task is done by generating the learning curve \cite{cho2015much}, which approximately represents the relationship between the amount of training data and the validation accuracy on a set of labelled data unused in training \cite{krizhevsky2012imagenet}. One can further predict the training data size needed to achieve the required accuracy by extrapolating the learning curves \cite{domhan2015speeding,kolachina2012prediction}. However, a drawback of this conventional validation-based learning curve approach is its reliance on a large amount of labelled data unused in training to accurately evaluate the validation accuracy, which can be expensive or even infeasible in many applications. Here, we ask whether we can leverage the predictive uncertainty information to solve this use case with access only to unlabeled validation data (i.e., SEM images without material class labels). Specifically, we test whether the average prediction confidence (which can be computed without any label information) on the unlabeled dataset can be used as a surrogate to assess the validation accuracy and approximately generate the learning curve. The logic behind this approach is that, as discussed in the previous sub-section, for DNN models with well-calibrated uncertainties (low ECE) the average confidence should closely match the accuracy.
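A minimal sketch of this confidence-based surrogate is given below, where \texttt{train\_and\_predict} is a stand-in for the actual training loop (names are illustrative):
\begin{verbatim}
# Sketch of the confidence-based learning-curve surrogate: for each training
# fraction, the validation accuracy is estimated by the mean confidence on an
# unlabeled pool (no labels required). train_and_predict is a stand-in for the
# actual training loop and returns (N, C) predictive probabilities.
import numpy as np

def predicted_accuracy(probs_unlabeled):
    """Mean max-probability confidence over the unlabeled pool."""
    return probs_unlabeled.max(axis=1).mean()

def confidence_learning_curve(fractions, train_and_predict, unlabeled_images):
    return np.array([
        predicted_accuracy(train_and_predict(train_fraction=f,
                                             images=unlabeled_images))
        for f in fractions
    ])

# e.g. fractions = np.arange(0.1, 1.01, 0.1); for a well-calibrated model the
# resulting curve should track the (unavailable) labeled validation accuracy.
\end{verbatim}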
To examine the feasibility, we train DNN classifiers with varying amounts of training data (ranging from $10\%$, $20\%$ up to $100\%$ of the maximum available training dataset size). We monitor the average validation accuracy as well as the predicted accuracy based on confidence, and plot the corresponding learning curves in \autoref{curve}. We see a significant difference between the curves for Softmax: it over-estimates the validation accuracy and will result in under-estimating the needed amount of training data. For Dropout, the two curves consistently stay close, but appear to be weakly correlated, since the average confidence can sometimes decrease while the validation accuracy keeps improving. This weak correlation can be harmful in certain scenarios. For example, the users might want to determine whether the DNN performance continues to improve as the training data grow, and the Dropout-predicted learning curves may lead them to wrong decisions. For Deep Ensembles, the predicted learning curve not only closely matches the actual one, but also shows nearly identical trends.
\textbf{{To summarize, these results show that uncertainty scores from both Dropout and Deep Ensembles can be leveraged to predict the required amount of training data to achieve a certain validation accuracy without having access to labelled validation data.}}
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{images/lc.png}
\caption{\textbf{Uncertainty-guided learning curves.} The predicted (confidence-based) and actual learning curves using different approaches.}
\label{curve}
\end{figure}
\subsubsection{Case Study 2: How to equip DNNs with a reject option?}
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\textwidth]{images/selective.png}
\caption{\textbf{Uncertainty-guided decision referral.} Risk coverage curves for different DNN approaches.}
\label{select}
\end{figure}
Next, we test another practical use of predictive uncertainties: identifying confusing samples so that the DNN can refrain from making a prediction. This reduces the risk of erroneous decisions by declining to trust the classifier on certain instances and referring difficult material samples for further testing and evaluation. This idea, formally referred to as \emph{selective classification} \cite{chow1957optimum,el2010foundations}, has been introduced recently in the context of DNN classifiers \cite{geifman2017selective}.
In this case study, we design the reject mechanism based on the predictive entropy, where the user is allowed to reject the prediction if the entropy of a DNN prediction exceeds a certain threshold. The quality of the uncertainty is then reflected by the effectiveness of the reject mechanism. To measure the performance, we adopt the \emph{risk-coverage trade-off curve} \cite{el2010foundations,geifman2017selective} of selective classification. Here, the \emph{coverage} refers to the fraction of data points for which the classifier is confident enough (i.e., the predictive uncertainty is lower than a given threshold), while the \emph{risk} is the classification error among such sufficiently-confident points. The ideal goal would be to minimize the risk while maximizing the coverage.
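A minimal sketch of the entropy-based reject mechanism and the resulting risk-coverage curve is given below (names are illustrative):
\begin{verbatim}
# Sketch of the entropy-based reject mechanism and the risk-coverage curve:
# sort test points by predictive entropy and, at each coverage level, compute
# the error among the retained (most confident) points.
import numpy as np

def predictive_entropy(probs):
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def risk_coverage_curve(probs, labels):
    order  = np.argsort(predictive_entropy(probs))           # most confident first
    errors = (probs.argmax(axis=1) != labels).astype(float)[order]
    n = len(labels)
    coverage = np.arange(1, n + 1) / n
    risk = np.cumsum(errors) / np.arange(1, n + 1)
    return coverage, risk

# e.g. the risk at 90% coverage is the error rate after rejecting the 10% of
# samples with the highest predictive entropy.
\end{verbatim}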
We compare the different approaches for selective classification based on the risk-coverage trade-off curves in \autoref{select}. We see that the trade-off curves based on the baseline Softmax and Dropout uncertainties are nearly identical, while Deep Ensembles performs much better on this task. For example, at the same 90\% coverage level (i.e., the classifier rejects the 10\% of instances it is most uncertain about), Deep Ensembles has around 1.5\% classification error, much lower than the other two approaches (3.5\%). This further verifies the superior uncertainty quality of Deep Ensembles, and presents another practical benefit of well-calibrated prediction uncertainties.
\textbf{To summarize, our results show that Deep Ensemble uncertainty guided decision referral can dramatically improve the classification accuracy on the non-referred material samples while maintaining a minimal fraction of referred (rejected) material samples.}
\subsubsection{Case Study 3: How to make DNNs recognize Out-of-Distribution examples?}
In the real-world setting, DNNs often encounter data collected under different conditions from those used in the DNN training process. This can occur because of (a) changes in the image acquisition conditions, (b) changes in the synthesis conditions (e.g., discovery of a new material), or (c) entirely unrelated data samples.
In such cases, it is crucial to have a detection mechanism to flag such \emph{out-of-distribution} (OOD) data points that are far away from the training data's distribution.
In this section, we test the potential use of predictive uncertainties for detecting OOD data points, with the underlying logic being that DNN models with well-calibrated uncertainties should assign higher predictive uncertainties to the OOD instances. We formulate the OOD detection problem as a binary classification problem based on the predictive entropy, and quantify the performance of the corresponding OOD classifiers. The OOD data are regarded as the positive class and the in-distribution data as the negative class, and the OOD classifiers make decisions solely based on the values of the prediction entropy. We adopt the evaluation metric in ref.~\citenum{hendrycks2018deep}, and measure the classification performance using the \emph{Receiver Operating Characteristic curve} (ROC) and the \emph{Area Under the Curve} (AUC). The ROC curve plots the True Positive Rate (the probability of OOD data being classified as OOD) against the False Positive Rate (the probability of in-distribution data being classified as OOD) for different thresholds on the entropy. Therefore, the closer the ROC curve is to the upper left corner \textcolor{black}{($0.0, 1.0$)}, the better the OOD classifier is \cite{zweig1993receiver}, while a totally uninformative classifier would exhibit a diagonal ROC curve. The AUC provides a quantitative summary of the ROC curve by measuring the area under it. A higher AUC value (closer to 1) is better, while an uninformative classifier has an AUC of 0.5.
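A minimal sketch of this entropy-based OOD detector and its ROC/AUC evaluation (using scikit-learn) is given below; \texttt{probs\_in} and \texttt{probs\_ood} stand for the predictive probabilities on in-distribution and OOD images:
\begin{verbatim}
# Sketch of the entropy-based OOD detector evaluated with ROC/AUC: OOD images
# are the positive class, in-distribution images the negative class, and the
# detection score is the predictive entropy.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def entropy(probs):
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def ood_roc(probs_in, probs_ood):
    scores = np.concatenate([entropy(probs_in), entropy(probs_ood)])
    y_true = np.concatenate([np.zeros(len(probs_in)), np.ones(len(probs_ood))])
    fpr, tpr, _ = roc_curve(y_true, scores)
    return fpr, tpr, roc_auc_score(y_true, scores)

# probs_in / probs_ood: (N, C) predictive probabilities for in-distribution and
# OOD SEM images (averaged over ensemble members or Dropout passes).
\end{verbatim}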
\paragraph{Detecting Changes in Image Acquisition Conditions.}
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.8\textwidth}
\centering
\includegraphics[height=1.2in]{images/at_original.png}
\caption{Representative SEM images with the original filament}
\end{subfigure}%
\\
\begin{subfigure}[t]{0.8\textwidth}
\centering
\includegraphics[height=1.15in]{images/at_newnewnew.png}
\caption{{\color {black} Representative SEM images with the new filament}}
\end{subfigure}
\caption{The effect of changing filaments on the SEM images while maintaining fixed brightness and contrast acquisition settings from the same \emph{lot}.}
\label{change}
\end{figure}
\begin{table}[!t]
\caption{In-distribution and OOD classification accuracy for the \emph{AT} lot}
\label{tbl:example}
\begin{tabular}{l|ccc}
\hline
Test Accuracy & Softmax & Dropout & Deep Ensembles \\
\hline
In-distribution & 83.5\% & 92.1\% & 89.5\% \\
OOD & 70.4\% & 73.1\% & 70.9\% \\
\hline
Drop due to OOD & 13.1\% & 19.0\% & 18.6\% \\
\hline
\end{tabular}
\label{at_metrics}
\end{table}
To facilitate a more concrete understanding of the necessity of such an OOD detection mechanism, let us first examine how our DNN classifiers perform on some real-world OOD SEM images. This particular OOD phenomenon is caused by replacing the SEM filament. As a result, while the brightness and contrast settings of the SEM images were held constant before and after the filament change, the newly collected images differ from the data we used in training and testing (see \autoref{change}) for the same lots. This is particularly relevant in an automated image collection workflow, where image collection conditions are usually set to static, constant values without human intervention. This type of OOD is commonly denoted as \emph{covariate shift} in the Machine Learning community \cite{sugiyama2007covariate}. We record the DNN classification accuracy on the in-distribution (original filament) and the OOD (replaced filament) SEM images from the same material lot in \autoref{at_metrics}. The gaps between in-distribution and OOD accuracy are substantial, ranging from 13\% to 19\%. This highlights the risk of being misled by the DNN's erroneous predictions when encountering such real-world OOD data.
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/at_softmax.png}
\caption{Softmax}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/at_drop.png}
\caption{Dropout}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/at_de.png}
\caption{Deep Ensembles}
\end{subfigure}
\caption{ROC curves and AUC for detecting \textbf{covariate-shifted SEM material images} based on the predictive uncertainties from different approaches.}
\label{shift}
\end{figure}
We now turn our attention to the problem of detecting data generated under different image acquisition conditions than the training data.
In this experiment, we use 1000 SEM images from the existing SEM image dataset as the in-distribution data, and obtain 1000 covariate-shifted (replaced filament) images of the same material as the OOD data. The ROC curves and AUC are shown in \autoref{shift}. The classifiers based on Softmax and Dropout uncertainties both perform poorly (flat ROC curves and low AUC values), indicating that OOD detection based on such uncertainties will not work. This is because the differences between in- and out-of-distribution images are subtle in this experiment, making the detection task very challenging. On the other hand, the OOD classifier based on Deep Ensembles performs much better (visually from the ROC curve and quantitatively from the AUC). From a practical point of view, it might be the only OOD detector capable of identifying a large number of replaced-filament images without triggering a high volume of false-positive alarms. The superiority of Deep Ensembles is not completely surprising; it aligns with prior research \cite{ovadia2019can} that also identified Deep Ensembles as the best performer on covariate-shifted data.
\paragraph{Detecting Changes in Synthesis Conditions.}
In this experiment, we push the OOD dataset further away from the training data distribution. Specifically, the OOD data points are SEM images from \emph{unseen classes} of the TATB crystal material, {\color {black} i.e., they do not belong to the 30 classes in the training dataset, owing to different manufacturing techniques or post-processing (e.g., grinding) that produce very different looking TATB crystals}. OOD data from unseen classes will be frequently encountered in realistic applications such as materials discovery, due to changes in synthesis conditions. As seen from \autoref{novel}, all examined approaches achieve acceptable performance (AUC higher than 0.7), meaning that they should be applicable for distinguishing the SEM images of novel material classes.
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/novelclass_softmax.png}
\caption{Softmax}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/novelclass_drop.png}
\caption{Dropout}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/novelclass_de.png}
\caption{Deep Ensembles}
\end{subfigure}
\caption{ROC curves and AUC for detecting \textbf{SEM material images of unseen classes} based on the predictive uncertainties from different approaches.}
\label{novel}
\end{figure}
\paragraph{Detecting Unrelated Data.}
Finally, we examine an extreme case of OOD detection, where the OOD samples are truly far away from (or unrelated to) the distribution of material SEM images. For this case, we obtained OOD images from the CIFAR-10 natural image dataset (comprising 10 categories of images, such as cats, dogs and birds) \cite{krizhevsky2009learning}. As the CIFAR-10 images are RGB color images at a lower (32 by 32) resolution, a grayscale transformation and upsampling were applied to convert them into the format of the SEM images (64 by 64 grayscale). The detection results are shown in \autoref{cifar}. Deep Ensembles achieves a near-perfect detection result (AUC close to 1). Interestingly, the simple Softmax baseline also performs better than Dropout, as the latter only achieves an AUC of 0.42 (worse than random guessing). Although CIFAR-10 images are visually very different from material SEM images, the results show that it is still non-trivial to obtain a good uncertainty-based OOD detector. The near-perfect performance of Deep Ensembles is very impressive and validates the superiority of its uncertainty estimates over Softmax and Dropout.
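As an illustration, the conversion described above might look like the following sketch; the exact resampling choices are assumptions rather than the precise pipeline used here.
\begin{verbatim}
import numpy as np
from PIL import Image

def cifar_to_sem_format(rgb_image_32x32):
    """Convert a 32x32 RGB CIFAR image (uint8 array of shape (32, 32, 3))
    into the 64x64 grayscale format used for the SEM inputs."""
    img = Image.fromarray(rgb_image_32x32).convert("L")   # RGB -> grayscale
    img = img.resize((64, 64), resample=Image.BILINEAR)   # upsample to 64x64
    return np.asarray(img, dtype=np.float32) / 255.0      # normalize to [0, 1]
\end{verbatim}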
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/cifar_softmax.png}
\caption{Softmax}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/cifar_drop.png}
\caption{Dropout}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/cifar_de.png}
\caption{Deep Ensembles}
\end{subfigure}
\caption{ROC curves and AUC for detecting \textbf{unrelated CIFAR-10 images} based on the predictive uncertainties from different approaches.}
\label{cifar}
\end{figure}
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/entropy_at_softmax.png}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/entropy_novelclass_softmax.png}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/entropy_cifar_softmax.png}
\end{subfigure}
\caption{\textbf{Softmax:} histogram comparisons of the predictive entropy for the in-distribution and out-of-distributions from various datasets.}
\label{histo_at}
\end{figure}
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/entropy_at_dropout.png}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/entropy_novelclass_dropout.png}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/entropy_cifar_dropout.png}
\end{subfigure}
\caption{\textbf{Dropout:} histogram comparisons of the predictive entropy for the in-distribution and out-of-distributions from various datasets.}
\label{histo_dropout}
\end{figure}
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/entropy_at_de.png}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/entropy_novelclass_de.png}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=2.0in]{images/entropy_cifar_de.png}
\end{subfigure}
\caption{\textbf{Deep Ensembles:} histogram comparisons of the predictive entropy for the in-distribution and out-of-distributions from various datasets.}
\label{histo_de}
\end{figure}
\paragraph{Can we leverage uncertainties to identify different types of shifts?}
In this section, we ask whether uncertainty-guided OOD detection approaches can differentiate among different sources of distribution shift. This is an important capability, as it might inform users on what they should do with the OOD data: for example, whether to use the OOD data to augment the existing training data (the case of changed image acquisition conditions), conduct further testing (the case of changed synthesis conditions), or simply discard the data (the case of unrelated data).
To answer this question, we characterize the distribution of the predictive entropy for in-distribution and OOD data using histograms in \autoref{histo_at} to \autoref{histo_de}. Intuitively, we expect the predictive entropy of OOD data to always be higher (i.e., more uncertain) than that of in-distribution data. Furthermore, this discrepancy should become more noticeable as the OOD data shifts further away from the training data distribution. However, from \autoref{histo_at} and \autoref{histo_dropout}, we observe that both Softmax and Dropout are very confident (assigning low entropy values) in their predictions for the covariate-shift and CIFAR OOD data, although they both perform reasonably well for the unseen-class data.
On the other hand, as seen in \autoref{histo_de}, Deep Ensembles always produce higher predictive entropy for all examined OOD datasets, and the gap between in-distribution and OOD samples' predictive entropy indeed becomes more apparent with an increase in the amount of shift. In other words, Deep Ensembles can differentiate among different sources of shifts.
\textbf{To summarize, our results show that uncertainties from Deep Ensembles can be used to detect out-of-distribution samples. Further, these uncertainties are able to differentiate between the sources of distribution shift and hint at what to do with the OOD data, e.g., using OOD data with changed image acquisition conditions for data augmentation, conducting new mechanical testing after detecting OOD data from unseen classes of materials, or simply discarding unrelated OOD data.}
\section{Conclusion}
\label{sec:discussion}
In this work, we demonstrated the benefits, applicability and limitations of uncertainty-aware deep learning methods for making materials discovery workflows more dependable. Specifically, we showed how uncertainty-guided methods can serve as a unified approach to address several important issues in {\color {black} the examined material classification problem}. There are still issues to be resolved for a successful application of machine learning in Materials Discovery workflows, but leveraging uncertainties in DL models is a first step toward the dependable deployment of DL models in materials applications.
\end{document}
\subsection{Main Contributions}
Deep Learning based solutions for Materials Informatics applications have so far sometimes been suggested without considering the important questions listed in the previous section. Yet, techniques to ensure the dependability of automated decisions are crucial for integrating DL in Materials Discovery workflows. The main contribution of this paper is to show that uncertainty-aware DL is a unified solution capable of answering all of these questions by leveraging the predictive uncertainty of DNNs.
We demonstrate the applicability of our technique by using DL models to classify the microstructural differences of a material based on their corresponding SEM images.
A summary of our findings is as follows.
\begin{itemize}
\item First, we show that by leveraging predictive uncertainty one can estimate classification accuracy at a given training sample size without relying on labelled data.
This serves as a general methodology for determining the training data set size necessary to achieve a certain target classification accuracy.
\item Next, we show that predictive uncertainty can be a guiding principle to decide which material samples should be referred to a material scientist for further testing and evaluation instead of making a DL based prediction. We find that this uncertainty guided decision referral can dramatically improve the classification accuracy on the remaining (i.e., non-referred) examples.
\item Finally, we show that predictive uncertainty can be used to detect distributional changes in the test data. We find that this scheme is accurate enough to detect a wide range of real-world shifts, e.g., due to changes in the imaging instrument or changes in the synthesis conditions.
\end{itemize}
Although we focus on a specific materials application in this paper, the proposed methodology is quite generic and can be used to make the application of DL to a wide range of scientific domains dependable and trustworthy.
\end{document}
\subsection{Data Sets}
Our main dataset comprises SEM images of 30 different lots of 2,4,6-triamino-1,3,5-trinitrobenzene (TATB). Here, a lot refers to TATB crystals produced under a specified synthesis/processing condition. TATB is an insensitive high explosive compound of interest to both the Department of Energy and the Department of Defense \cite{willey2006changes}. After each lot has been synthesized (under different synthesis conditions), it is analyzed with a Zeiss Sigma HD VP SEM to produce high-resolution scanned images, while the image acquisition conditions (e.g., brightness and contrast settings) are held fixed across all lots/images. Each image tile consists of 1000$\times$1000 pixels, with a corresponding field of view of 256.19 $\mu$m$\times$ 256.19 $\mu$m. The combined images captured for the 30 lots result in 59,690 greyscale SEM images. Thus, the labelled data of interest for the DNNs consists of 59,690 greyscale SEM images, each labeled with a unique designator per class (30 in total). Example SEM images of TATB for some classes are provided in \autoref{tatb}. One can notice strong visual discrepancies across SEM images from different lots (or classes).
\begin{figure}[!t]
\centering
\includegraphics[width=.5\textwidth]{images/brian_fig.jpg}
\caption{ Representative SEM images to illustrate the typical microstructural variability for different TATB lots. The varying particle size, porosity, polydispersity, and facetness can be clearly observed. Images have been processed to normalize image contrast and brightness levels. {\color {black} Reprinted with permission from Ref.~\citenum{gallagher2020predicting} (CC BY 4.0)}.}
\label{tatb}
\end{figure}
\subsection{Deep Learning Models}
Let $\mathcal{X}$ denote the SEM images and $\mathcal{Y}=\{1,\ldots,K\}$ represent the $K$ classes of materials. We use $\mathcal{D}=\{\mathbf{x}^{(i)},y^{(i)}\}_{i=1}^{N}$ to represent the $N$ training data points (pairs of images and labels). {\color {black} Before being fed into the model, the SEM images are down-sampled to a resolution of 64 by 64, and the greyscale value of each pixel is normalized to the range $[0, 1]$.}
Given the training and validation datasets, our goal is then to learn a classifier that predicts the quality of material samples in an unseen test dataset.
We trained the following vanilla and uncertainty-aware models:
\begin{itemize}
\item \textit{Vanilla Softmax}: Our uncertainty-unaware baseline simply regards the softmax outputs of the DNN as the predictive probabilities. Unfortunately, high-confidence softmax predictions can be woefully incorrect and may fail to indicate when they are likely mistaken \cite{guo2017calibration} or to detect OOD data \cite{Bulusu20}.
\item \textit{Dropout}: We use Dropout \cite{gal2016dropout}, a variational-inference-based Bayesian uncertainty quantification approach. During training, the Dropout DNN is trained to minimize the divergence between an approximating distribution (a product of Bernoulli distributions across the DNN parameters) and the Bayesian posterior over the DNN parameters. At inference time, Dropout predicts by Monte-Carlo sampling the network with randomly dropped-out units and averaging the outputs, which approximates integrating the predictive likelihood over the posterior distribution. The simple and computationally lightweight nature of Dropout may, however, yield approximate posteriors that are inaccurate in some scenarios \cite{louizos2017multiplicative,kuleshov2018accurate,cortes2019reliable}.
\item \textit{Deep Ensemble}: Finally, we include Deep Ensembles \cite{lakshminarayanan2017simple}, a practical, scalable and non-Bayesian uncertainty quantification alternative for DNNs. As the name suggests, the core idea is to train the DNN classifier in an identical manner (i.e., same model architecture, training data and training procedure) multiple times, each time with a different random initialization of the model parameters. With $T$ DNN classifiers (parameterized by $\theta^t,t=1,\ldots,T$) included in the ensemble, the prediction probability vector of Deep Ensembles is the average of the softmax vectors of the individual DNNs (a short sketch of these inference-time predictions is given after this list).
\end{itemize}
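As an illustration of how the three predictive distributions are obtained at inference time, the following PyTorch-style sketch shows the averaging steps; \texttt{model}, \texttt{models} and the input batch \texttt{x} are placeholders, and the exact inference code used in our experiments may differ.
\begin{verbatim}
import torch
import torch.nn.functional as F

@torch.no_grad()
def softmax_probs(model, x):
    """Vanilla Softmax: a single deterministic forward pass."""
    model.eval()
    return F.softmax(model(x), dim=-1)

@torch.no_grad()
def mc_dropout_probs(model, x, T=16):
    """Dropout: keep only the dropout layers stochastic at test time
    and average T forward passes."""
    model.eval()
    for m in model.modules():              # re-enable only the dropout layers
        if isinstance(m, torch.nn.Dropout):
            m.train()
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(T)])
    return probs.mean(0)

@torch.no_grad()
def ensemble_probs(models, x):
    """Deep Ensembles: average the softmax outputs of independently trained models."""
    return torch.stack([softmax_probs(m, x) for m in models]).mean(0)
\end{verbatim}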
We use a Wide Residual Network (WRN) \cite{zagoruyko2016wide} architecture due to its strong performance on benchmark computer vision datasets \cite{rawat2017deep}.
{\color {black} We use a depth of 16 and a widening factor of 2; a summary of the network structure is given in \autoref{wrn}.} Given the WRN architecture, we train these DNN models with the cross-entropy loss function and the Adam optimizer \cite{kingma2015adam}. The DNN model is trained from scratch (i.e., no pre-training). We set the learning rate to 0.001 and decay it by half every 50 epochs. We use a minibatch size of 64 and a weight decay factor of $5\times10^{-4}$. {\color {black} Hyperparameters (including the learning rate, the weight decay factor, and the network depth and width) were determined with the HpBandster toolbox, an efficient tool for hyperparameter optimization \cite{falkner2018bohb}. }
We trained for up to 200 epochs with an early stopping mechanism that terminates training when the validation performance does not improve for 100 epochs. For Dropout, we used a dropout rate of $p=0.3$ and $T=16$ dropout samples for inference. For Deep Ensembles, we trained $T=16$ models. All other hyperparameters were kept the same as in the standard baseline case.
No additional image pre-processing was adopted in the training process. Horizontal flips were used for training data augmentation. We randomly divide the labelled SEM image dataset into 80\%, 10\%, and 10\% splits for training, validation and testing, respectively. {\color {black}The code can be found online \footnote{Codebase: \url{https://github.com/zhang64-llnl/Uncertainty-DL-Material-Discovery}.}.}
\begin{figure}[!t]
\centering
\includegraphics[width=.75\textwidth]{images/WRN_ACS.pdf}
\caption{ A Wide ResNet architecture with a depth of 16 and a width of 2. The notation (k$\times$k, n) in the convolutional block and residual blocks denotes a filter of size k with n channels. The dimensionality of the outputs from each block is also annotated. The detailed structure of the residual block is shown in the dashed-line box. Note that batch normalization and ReLU precede the convolution layers and the fully connected layer but are omitted from the figure for clarity.}
\label{wrn}
\end{figure}
\subsection{Performance Evaluation Metrics}
We evaluate the performance of DNN models in terms of their (a) predictive performance, and (b) predictive uncertainty quality {\color {black} on the testing dataset}.
The predictive performance is measured using the \textbf{Classification Accuracy} metric (i.e., the percentage of correct predictions among all data points).
On the other hand, following ref.~\citenum{gal2015thesis}, we use the Shannon entropy \cite{shannon1948mathematical} as the metric to quantify the uncertainty in the prediction probability vector $p(y|\mathbf{x},\mathcal{D})=[p(y=1|\mathbf{x},\mathcal{D}),\ldots,p(y=K|\mathbf{x},\mathcal{D})]$:
\begin{equation*}
\mathbb{H}(y|\mathbf{x},\mathcal{D}) := -\sum_{k=1}^{K} p(y=k|\mathbf{x},\mathcal{D})\log p(y=k|\mathbf{x},\mathcal{D}).
\end{equation*}
Intuitively, it captures the average amount of information contained in the prediction: $\mathbb{H}$ attains its maximum value when the classifier prediction is purely uninformative (assigning all classes equal probability $1/K$), and attains its minimum value of zero when the classifier is absolutely certain about its prediction (assigning zero probability to all but one class).
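For completeness, the entropy of a (batch of) predictive probability vector(s) can be computed as in the following sketch:
\begin{verbatim}
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Shannon entropy of predictive probability vectors.
    probs: array of shape (..., K) whose last axis sums to 1."""
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1.0)  # avoid log(0)
    return -(probs * np.log(probs)).sum(axis=-1)
\end{verbatim}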
The quality of the predictive uncertainties is quantified using the following metrics:
\begin{itemize}
\item \textit{Negative Log-likelihood (NLL):} This is a standard metric of uncertainty quality, obtained from the negative log of the joint probability of the predictions on all test samples \cite{gneiting2007strictly}:
\begin{equation*}
NLL = \frac{1}{N}\sum_{i=1}^N -\log p(y=y^{(i)}|\mathbf{x}^{(i)},\mathcal{D})
\end{equation*}
Lower NLL indicates better uncertainty quality.
\item \textit{Expected calibration error (ECE):} Calibration accounts for the degree of consistency between the predictive probabilities and the empirical accuracy. We adopt the popular calibration metric ECE \cite{naeini2015obtaining}, which measures the average absolute discrepancy between the prediction confidence and the accuracy:
\begin{equation*}
ECE= \sum_{j=1}^{N_b} \frac{|B_j|}{N} |acc(B_j)-conf(B_j)|
\end{equation*}
where we sort the test data points according to their confidence (the prediction probability of the most likely label, i.e., $\max_y p(y|\mathbf{x})$) and bin them into $N_b$ equally spaced confidence bins. Here $acc(B_j)$ and $conf(B_j)$ are the average accuracy and confidence of the points in the $j$th bin $B_j$, and $|B_j|$ is the number of data points in that bin. We use 20 equally spaced bins to measure ECE in this paper. A lower ECE is more favorable (a sketch of both metrics is given after this list).
\end{itemize}
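A minimal NumPy sketch of both metrics, mirroring the descriptions above (array names are placeholders, and the binning follows the 20 equally spaced bins used here):
\begin{verbatim}
import numpy as np

def negative_log_likelihood(probs, labels, eps=1e-12):
    """Average negative log probability assigned to the true labels.
    probs: (N, K) predictive probabilities; labels: (N,) integer class labels."""
    p_true = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    return -np.log(p_true).mean()

def expected_calibration_error(probs, labels, n_bins=20):
    """ECE with equally spaced confidence bins on [0, 1]."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(labels)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += (in_bin.sum() / n) * gap
    return ece
\end{verbatim}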
\end{document}
|
{
"timestamp": "2021-04-26T02:05:53",
"yymm": "2012",
"arxiv_id": "2012.01478",
"language": "en",
"url": "https://arxiv.org/abs/2012.01478"
}
|
\section{Introduction}
Runtime, and its theoretical counterpart, time complexity, are imperative to understanding the speed and continual efficiency of all algorithms \cite{Nasar}\cite{Aho}, particularly because runtime information allows for thorough comparisons between the performance of competing approaches. Due to the varying environments in which algorithms are executed, time complexity is expressed as a function of the input arguments \cite{Sipser} rather than accounting for the situational execution time \cite{Pusch}; this removes the need to address every extraneous factor that affects the speed of such algorithms \cite{Dean}. There are countless methods for determining the formulaic runtime complexity \cite{Qi}\cite{Guzman}, particularly because, from a theoretical perspective, the true runtime can never be determined without thoroughly examining the algorithm itself \cite{ullman}; however, this does not mean that the process cannot be expedited, simplified, or made easier.
The goal is to produce a function $\mathcal{O}(T(n))$ that can model the time complexity of any given algorithm \cite{Mohr}, primarily one whose runtime is defined as a function of more than a single variable. We define $E(foo(args))$, where $foo(args)$ is any given algorithm and $E$ denotes its execution in a controlled environment. The following method can be used to determine the runtime with respect to several variables (not just element size) by evaluating CPU time with respect to input size. Confounding variables such as CPU type, computing power, and/or programming language are bypassed, as they remain controlled during testing. The constructed polynomial series, which is a piece-wise function of segmented quadratics, then exhibits the same asymptotic behavior as the true time complexity $\mathcal{O}(T(n))$, which can then be determined independently through the correlation with the respective parent functions. In addition, the methods developed for computing such runtimes have broader mathematical implications for representing various non-polynomial functions as differentiable quadratic segments, similar, but not identical, to the outcome of evaluating Taylor series \cite{Jumarie}\cite{Corliss}. In short, we do this by using reference points of any given non-polynomial and developing a quadratic (using polynomial interpolation \cite{Boor}) over a particular segment that accurately matches the true functional behavior.
\section{Methods}
Our primary condition is the following:
$$
\exists_{x\in\mathbb{R}} \left[{f(x+c) - f(x) = \int_{x}^{x+c}\frac{\partial f}{\partial x}\,dx}\right]
$$
Additionally,
$$ \forall(n \in \mathbb{R}: n > 0)\exists\frac{\partial}{\partial n}\mathcal{O}(T(n))$$
This ensures that the targeted time complexity function must be constructed of only real numbers and be differentiable throughout, except at segmented bounds. It is important to note that $F(x) \neq \mathcal{O}(T(n)) \vee F(x)\not\approx \mathcal{O}(T(n))$. We also define $E(foo(args)) = k\mathcal{O}(T(n))$, where $k$ is a constant of proportionality that converts the predicted time complexity into execution time, or vice versa.
\subsection{Lagrangian Polynomial Principles}
We first construct a single line of intersection between every consecutive ordered pair of segment indexes and their respective computing times (or any alternative performance-modeling metric). We use the standard point-slope formula to do so:
\begin{equation}
y = \frac{y_i - y_{i-1}}{x_{i} - x_{i-1}}(x-x_{i}) + y_i
\end{equation}
The polynomial of any given segment can be constructed using the explicit formula below \cite{sauer}\cite{Rashed}; here the first three indexes within a data set are used, but the same applies to any 3-point segment within the data set, defined as $\forall(x \in (x_j, x_k)) | (k = j + 2)$. Note: the proofs for the following formulas are given in Section 2.4.
\begin{equation}
\forall(x \in (x_0, x_2)): f(x) = y_{0}{\frac {(x-x_{1})(x-x_{2})}{(x_{0}-x_{1})(x_{0}-x_{2})}} + y_{1}{\frac {(x-x_{0})(x-x_{2})}{(x_{1}-x_{0})(x_{1}-x_{2})}} + y_{2}{\frac {(x-x_{0})(x-x_{1})}{(x_{2}-x_{0})(x_{2}-x_{1})}}
\end{equation}
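For concreteness, the three-point formula above can be transcribed directly into code; this is an illustrative sketch, with the point list \texttt{pts} as a placeholder for measured (index, time) pairs.
\begin{verbatim}
def lagrange_quadratic(x, pts):
    """Evaluate the quadratic through pts = [(x0, y0), (x1, y1), (x2, y2)]
    at location x, using the explicit Lagrange form."""
    (x0, y0), (x1, y1), (x2, y2) = pts
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
\end{verbatim}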
We then factor in the polynomial model above and the respective secant line equation, to construct the explicit average form of the initial 3 point segment such that each point is equivalent to the difference between the secant line and the original polynomial.
\begin{equation}
\forall(x \in (x_0, x_2)): f(x) =
y_{0}{\frac {(x-x_{1})(x-x_{2})}{(x_{0}-x_{1})(x_{0}-x_{2})}} + y_{1}{\frac {(x-x_{0})(x-x_{2})}{(x_{1}-x_{0})(x_{1}-x_{2})}} + y_{2}{\frac {(x-x_{0})(x-x_{1})}{(x_{2}-x_{0})(x_{2}-x_{1})}} +
\left[\frac{y_i - y_{i-1}}{x_{i} - x_{i-1}}(x-x_{i}) + y_i\right]| (i = (1 \vee 2))
\end{equation}
Before we implement this method, we must account for any given segment, and to do so, we must simplify the method of polynomial construction. First, we define $F(x)$ to be dependent on our $f_j$ outputs.
\begin{equation}
F(x):=\sum _{j=0}^{k}y_{j}f _{j}(x)\end{equation}
These outputs are determined accordingly (Note: $k = 2$ in our case, since each segment uses three points indexed $0, 1, 2$; however, the model would work for any value of $k$):
\begin{equation}
f _{j}(x):=\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}}={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{k})}{(x_{j}-x_{k})}} \end{equation}
Such that,
\begin{equation}
{\displaystyle \forall ({j\neq i}):f_{j}(x_{i})=\prod _{m\neq j}{\frac {x_{i}-x_{m}}{x_{j}-x_{m}}}=0}
\end{equation}
\subsection{Estimating $\mathcal{O}(T(n))$ as a Function of Quadratic Segments}
We can then average this with the constructed Lagrangian polynomial to get our model for any given 3-point segment. Note: $\because (\lim_{x \to x_k^{-}} F(x) = \lim_{x \to x_k^{+}} F(x)) \wedge (\lim_{x \to x_k^{-}} \frac{\partial F}{\partial x} \neq \lim_{x \to x_k^{+}} \frac{\partial F}{\partial x}) \therefore \nexists (\frac{\partial F}{\partial x}|_{x = x_k})$. We can simplify the given expression to \cite{Berrut}:
\begin{equation}
\forall(x \in (x_j, x_k)): F(x) = \frac{1}{2}\sum _{j}^{k=j+3}y_{j}\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}} + \frac{1}{2}\left[ \frac{y_{k} - y_{k-1}}{x_{k} - x_{k-1}}(x-x_{k}) + y_k \right]
\end{equation}
Such that,
\begin{equation}
\left.\frac{\partial}{\partial x}\left[\sum _{j}^{k=j+2}y_{j}\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}} + \left[ \frac{y_{k} - y_{k-1}}{x_{k} - x_{k-1}}(x-x_{k}) + y_k \right]\right]\right|_{x = x_{j+1}}\approx \left.\frac{\partial}{\partial x} (\frac{1}{k})E(foo(x_{j+1}))\right|_{x = x_{j+1.5}}\end{equation}
as well as the segmented average\cite{Comenetz},
\begin{equation}
\frac{1}{2x_{k} - 2x_{j+1}}\int_{x_j}^{x_k}\left[\sum _{j}^{k=j+2}y_{j}\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}} + \left[ \frac{y_{k} - y_{k-1}}{x_{k} - x_{k-1}}(x-x_{k}) + y_k \right]\right] \approx \int_{x_{j+1}}^{x_k}(\frac{1}{k})E(foo(n))\end{equation}
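Continuing the sketch above, the averaging of the interpolated quadratic with the secant through the last two points of the segment can be written as follows (again an illustration; \texttt{lagrange\_quadratic} is the function from the previous sketch).
\begin{verbatim}
def secant_line(x, p_a, p_b):
    """Line through p_a = (xa, ya) and p_b = (xb, yb), evaluated at x."""
    (xa, ya), (xb, yb) = p_a, p_b
    return (yb - ya) / (xb - xa) * (x - xb) + yb

def segment_model(x, pts):
    """Average of the 3-point Lagrange quadratic and the secant through
    the last two points of the segment."""
    return 0.5 * lagrange_quadratic(x, pts) + 0.5 * secant_line(x, pts[1], pts[2])
\end{verbatim}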
We then implement the proposed method on each selected segment to construct the function for every iteration of natural numbers by redefining $F(x)$ from a single constructed polynomial to a multi-layered, piece-wise construction of the primary segments of such polynomials.
\begin{equation}
\forall(x \in \mathbb{R} : x > 0): F(x) = \begin{cases}
\frac{1}{2}\sum _{0}^{2}y_{j}\prod _{\begin{smallmatrix}0\leq m\leq 2\\m\neq 0\end{smallmatrix}}{\frac {x-x_{m}}{x_{0}-x_{m}}} + \frac{1}{2}\left[\frac{y_2 - y_{1}}{x_{2} - x_{1}}(x-x_{2}) + y_2\right] & x_1 \leq x \leq x_2 \\
\frac{1}{2}\sum _{1}^{3}y_{j}\prod _{\begin{smallmatrix}1\leq m\leq 3\\m\neq 1\end{smallmatrix}}{\frac {x-x_{m}}{x_{1}-x_{m}}} + \frac{1}{2}\left[\frac{y_3 - y_{2}}{x_{3} - x_{2}}(x-x_{3}) + y_3\right] & x_2 \leq x \leq x_3 \\
\cdots & \cdots \\
\frac{1}{2}\sum _{n-2}^{n}y_{j}\prod _{\begin{smallmatrix}n-2\leq m\leq n\\m\neq n-2\end{smallmatrix}}{\frac {x-x_{m}}{x_{n-2}-x_{m}}} + \frac{1}{2}\left[\frac{y_n - y_{n-1}}{x_{n} - x_{n-1}}(x-x_{n}) + y_n\right] & x_{n-1} \leq x \leq x_n \\
\end{cases}
\end{equation}
In order to retrieve the complexity of the algorithm at a particular index $i$, we can now simply compute $F(i)$. Note: $ \nexists\left.\frac{\partial F}{\partial x}\right|_{x = {x_{j}}\vee {x_{k}}}$, but $\forall(x \in (x_j, x_k))\,\exists\left.\frac{\partial F}{\partial x}\right|_{x}$. Additionally, the proposed method, when graphed, constructs a continuous function, making it straightforward to determine the true runtime of the algorithm as $\mathcal{O}(T(n))$.
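A sketch of the resulting piecewise construction, which stitches consecutive three-point segments together and evaluates $F$ at a requested index (reusing \texttt{segment\_model} from the sketch above; the sampling of measurement points is an illustrative assumption):
\begin{verbatim}
def piecewise_model(xs, ys):
    """Build F from overlapping 3-point segments (x_j, x_{j+1}, x_{j+2}),
    each used on the interval [x_{j+1}, x_{j+2}]."""
    segments = []
    for j in range(len(xs) - 2):
        pts = list(zip(xs[j:j + 3], ys[j:j + 3]))
        segments.append((xs[j + 1], xs[j + 2], pts))

    def F(x):
        for lo, hi, pts in segments:
            if lo <= x <= hi:
                return segment_model(x, pts)
        raise ValueError("x outside the modelled range")
    return F
\end{verbatim}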
\subsection{Evaluating Quadratic Segments of Multi-variable Algorithms}
\subsubsection{Run Times with Non-Composite Operations}
The following method suffices if, and only if, the arguments are not directly correlated through any mathematical operation other than addition, subtraction, or another non-composite operation. For example, suppose the unknown time complexity of algorithm $foo(x, b)$ is $\mathcal{O}(\log_2(x) + b)$. We must first evaluate the execution time with respect to a single variable. We use $E(foo())$ to denote the execution time of the given function; this can be determined by implementing a computing timer into the algorithm. In this case we evaluate the algorithm accordingly:
\begin{equation}
Y_0 = E(foo(x, b))|\{(x \in \mathbb{N}: x > 0)\wedge(b = 0)\}
\end{equation}
Such that,
\begin{equation}
Y_0 = {y_{0_{0}}\vee{}E(foo(x_0, 0)), y_{0_{1}}\vee{}E(foo(x_1, 0)), \cdots, y_{0_n}\vee{}E(foo(x_n, 0))}\end{equation}
And the same for the other argument:
\begin{equation}
X_0 = E(foo(x, b))|\{(b \in \mathbb{N}: b > 0)\wedge(x = 0)\}
\end{equation}
Such that,
\begin{equation}
X_0 = {\chi_{0_{0}}\vee{}E(foo(0, b_0)), \chi_{0_{1}}\vee{}E(foo(0, b_1)), \cdots, \chi_{0_n}\vee{}E(foo(0, b_n))}\end{equation}
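As an illustration of how such execution-time samples might be collected in practice, the following sketch times $foo$ over a grid of input sizes while holding the other argument fixed; the algorithm \texttt{foo}, the grid \texttt{xs} and the repeat count are placeholders.
\begin{verbatim}
import time

def timed_call(foo, *args):
    """Wall-clock time of a single call, in seconds."""
    start = time.perf_counter()
    foo(*args)
    return time.perf_counter() - start

def measure_execution(foo, xs, fixed_b=0, repeats=3):
    """Record E(foo(x, b)) for each x in xs with b held fixed, taking the
    best of a few repeats to reduce timing noise."""
    return [min(timed_call(foo, x, fixed_b) for _ in range(repeats)) for x in xs]
\end{verbatim}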
In this particular case, we first isolate the $F(x, b)$ in terms of x. To do so we must first ensure $x$ and $b$ are independent of each other. Since, in our sample scenario,
\begin{equation}
E(foo(x, b)) = \log_2(x) + b
\end{equation}
Now, we can conclude that,
\begin{equation}
E(foo(x, 0)) = \log_2(x) + 0 = \log_2(x)
\end{equation}
And,
\begin{equation}
E(foo(0\vee(\forall \in\mathbb{R} > 0), b)) = \log_2(0\vee(\forall \in\mathbb{R} > 0)) + b
\end{equation}
Now, we can evaluate the $E(foo(x, b))$ over a set of fixed data points. First with respect to x:
\begin{equation}
F(x, 0)\vee F_x(x, b) = \frac{1}{2}\sum _{j}^{k=j+3}y_{0_{j}}\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}} + \frac{1}{2}\left[ \frac{y_{0_{k}} - y_{0_{k-1}}}{x_{k} - x_{k-1}}(x-x_{k}) + y_{0_{k}} \right]
\end{equation}
Then with respect to b:
\begin{equation}
F(0, b)\vee F_b(x, b) = \frac{1}{2}\sum _{j}^{k=j+3}\chi_{0_{j}}\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {b-b_{m}}{b_{j}-b_{m}}} + \frac{1}{2}\left[ \frac{\chi_{0_{k}} - \chi_{0_{k-1}}}{b_{k} - b_{k-1}}(b-b_{k}) + \chi_{0_{k}} \right]
\end{equation}
Once we have computed our segmented quadratics with respect to a particular index group, we can construct our piece-wise function of $E(foo(x,b))= \log_2(x) + b$ as two independent graphical representations.
\begin{equation}
F(x,b) = \begin{cases}
\forall(x > 0):F_x= \begin{cases}
\frac{1}{2}\sum _{0}^{2}y_{j}\prod _{\begin{smallmatrix}0\leq m\leq 2\\m\neq 0\end{smallmatrix}}{\frac {x-x_{m}}{x_{0}-x_{m}}} + \frac{1}{2}\left[\frac{y_2 - y_{1}}{x_{2} - x_{1}}(x-x_{2}) + y_2\right] & x_1 \leq x \leq x_2 \\
\cdots & \cdots \\
\frac{1}{2}\sum _{n-2}^{n}y_{j}\prod _{\begin{smallmatrix}n-2\leq m\leq n\\m\neq n-2\end{smallmatrix}}{\frac {x-x_{m}}{x_{n-2}-x_{m}}} + \frac{1}{2}\left[\frac{y_n - y_{n-1}}{x_{n} - x_{n-1}}(x-x_{n}) + y_n\right] & x_{n-1} \leq x \leq x_n \\
\end{cases}\\
\forall(b > 0):F_b= \begin{cases}
\frac{1}{2}\sum _{0}^{2}\chi_{j}\prod _{\begin{smallmatrix}0\leq m\leq 2\\m\neq 0\end{smallmatrix}}{\frac {b-b_{m}}{b_{0}-b_{m}}} + \frac{1}{2}\left[\frac{\chi_2 - \chi_{1}}{b_{2} - b_{1}}(b-b_{2}) + \chi_2\right] & b_1 \leq b \leq b_2 \\
\cdots & \cdots \\
\frac{1}{2}\sum _{n-2}^{n}\chi_{j}\prod _{\begin{smallmatrix}n-2\leq m\leq n\\m\neq n-2\end{smallmatrix}}{\frac {b-b_{m}}{b_{n-2}-b_{m}}} + \frac{1}{2}\left[\frac{\chi_n - \chi_{n-1}}{b_{n} - b_{n-1}}(b-b_{n}) + \chi_n\right] & b_{n-1} \leq b \leq b_n \\
\end{cases}
\end{cases}
\end{equation}
Although our method produces non-differentiable points at segmented bounds, we can still compute partial derivatives at points $\forall(x\in \mathbb{R}: x > 0)$ such as:
\begin{equation}
\left.\frac{\partial}{\partial x}\left[\frac{1}{2}\sum _{j}^{k=j+3}y_{0_{j}}\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}} + \frac{1}{2}\left( \frac{y_{0_{k}} - y_{0_{k-1}}}{x_{k} - x_{k-1}}(x-x_{k}) + y_{0_{k}} \right)\right]\right|_{x = x_j + 1}\approx \left.\frac{\partial}{\partial x}(\log_2 x + b)\right|_{x = x_j + 1}
\end{equation}
We can justify the accuracy of $F_x$ with $E(foo(x))$ assuming only one inputted argument accordingly:
\begin{equation}
\because \nexists (\frac{\partial F}{\partial x}|_{x = x_j}) \wedge \nexists (\frac{\partial F}{\partial x}|_{x = x_k}) \wedge (\lim_{x \to x_k^{-}} F_x(x) = \lim_{x \to x_k^{+}} F_x(x) = F_x(x_k))\end{equation}
We can conclude that:
\begin{equation}
\frac{1}{n}\sum_{\forall(x \in \mathbb{N}: x > x_{k})}^{n}E(foo(x)) \approx\frac{1}{x_{2} - x_{1}}\int_{x_{1}}^{x_{2}}F_x(x) + \frac{1}{x_{3} - x_{2}}\int_{x_{2}}^{x_{3}}F_x(x) + \cdots + \frac{1}{x_{n} - x_{n-1}}\int_{x_{n-1}}^{x_{n}}F_x(x)
\end{equation}
Alternatively,
\begin{equation}
\frac{1}{n}\sum_{\forall(x \in \mathbb{N}: x > x_{k})}^{n}E(foo(x)) \approx\sum_{\forall(x \in \mathbb{N}: x > x_{k})}^{n} \frac{1}{x_{k} - x_{j}}\int_{x_{j}}^{x_{k}}F_x(x)
\end{equation}
\subsubsection{Run times with Composite Operations}
In order to explain the approach used in cases with unknown runtime functions that consist of composite operations, we must implement the following proof.
\begin{theorem}
If the unknown run time function consists of composite operations, such as in $M(x,b) = \frac{1}{b}\log_2(x)$, this can be instantly determined if the functional difference across a set of input values is not just a graphical translation.
\end{theorem}
\begin{proof}[Proof of Theorem 2.1]
If,
\begin{equation}G(x, b) = \log_2(x) + b \wedge M(x, b) = \frac{1}{b}\log_2(x)\end{equation}
Then,
\begin{equation}G(x, 0) = \log_2(x) + 0 = \log_2(x) = G_x(x, b) \vee G(x)\end{equation}
Additionally,
\begin{equation}G(x, 0) = G_x(x, b)\vee G(x)\end{equation}
But,
\begin{equation}M(x, 0) \neq M_x(x, b)\vee M(x)\end{equation}
Due to the non-composite operations of $G$, the value of $b$ does not directly impact the value of $x$, but rather only the output of the multivariable function. The same can be done conversely with other variables; however, if they are directly correlated, as in $M(x,b)$, this prevents the difference from being just a translation.
\begin{equation}M(x, 0\vee(\forall(b \in \mathbb{R}: b > 0))) \neq \log_2(x)\end{equation}
Above, it is clear that both independent variables cannot be determined through inputting a constant of 0, causing a non-linear intervariable relationship.
\end{proof}
In order to construct the primary segmented function with behavior equivalent to the multivariable runtime $E(foo(x,b))$, or one with any number of arguments, we must run execution tests with respect to each variable while the remaining ones are treated as constants. If, as in the example stated earlier, the unknown runtime function is $\frac{1}{b}\log_2 x$, then when graphically modeled for $n$ tests in terms of $x$, a set of skewed logarithmic curves is constructed, whereas with respect to $b$, a set of hyperbolic functions is produced. By treating each temporary, non-functional, constant argument as $k_n$, the graphical differences can be factored in when creating the single time complexity formula.
Although there are several $k$ values that can be used, to keep the methodology consistent we take, as the selected constant, the input value that produces the average functional value over a given segment. Although the true function is unknown, we can use the constructed segmented quadratic to do so.
\begin{equation}
\frac{1}{\delta_k - \delta_j}\int_{\delta_j}^{\delta_k}F_\delta(\delta_x, k_b, \cdots, k_z)\partial \delta = a\delta^2 + b\delta + c
\end{equation}
Then we can simply solve for the value of $\delta$ accordingly\cite{Irving}, such that $(\delta\in\mathbb{R})\wedge(\delta > 0)$:
\begin{equation}
\delta = \frac{-b\pm\sqrt{b^2 - 4a(c - \frac{1}{\delta_k - \delta_j}\int_{\delta_j}^{\delta_k}F_\delta(\delta_x, k_b, \cdots, k_z)\partial \delta)}}{2a}
\end{equation}
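A sketch of this computation, given fitted segment coefficients $(a, b, c)$, the segment's average value, and the segment bounds (all placeholders for values produced by the fitting step):
\begin{verbatim}
import math

def representative_input(a, b, c, mean_value, lo, hi):
    """Solve a*d**2 + b*d + c = mean_value for the root lying inside the
    segment bounds [lo, hi]; assumes a != 0 (a genuinely quadratic segment)."""
    disc = b * b - 4.0 * a * (c - mean_value)
    if disc < 0:
        raise ValueError("no real solution on this segment")
    roots = ((-b + math.sqrt(disc)) / (2.0 * a),
             (-b - math.sqrt(disc)) / (2.0 * a))
    for r in roots:
        if lo <= r <= hi:
            return r
    raise ValueError("no root inside the segment bounds")
\end{verbatim}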
While this process is still comprehensible, once we start introducing functions with more than 2 arguments, we must test their values in planes with multiple dimensional layers, rather than just one or two dimensions. To do so, we determine the intervariable relationships between every potential pair of arguments and construct a potential runtime formula accordingly. Note: the higher the dimensional order of the function, the more convoluted the formulaic determination becomes.
Suppose the intervariable runtime function is $E(foo(x,b,c, \cdots, z))$ with corresponding segmented quadratic function $F(x,b,c, \cdots, z)$. We would evaluate the unknown $E(foo(x,b,\cdots, z))$ with respect to a single variable while the remaining ones are treated as constants. Using the example above, we would first plug constant values into $x$ and $b$, while graphically modeling the rate of change of $c$ as an independent function. We would then adjust $b$ in increments of $i$ to determine their respective transformational relationship. We repeat this process for every potential pair $(x,b)$, $(x,c)$, $(b,c)$, and so forth.
\begin{equation}
F_{x, b}(x, b, \cdots, z) = F_{x}(x, \forall(b\in\mathbb{R}: b > 0: b=b+i), \cdots, k_z)
\end{equation}
\begin{equation}
F_{b, c}(x, b, \cdots, z) = F_{b}(k_x, b,\forall(c\in\mathbb{R}: c > 0: c=c+i), \cdots, k_z)
\end{equation}
$$
\cdots
$$
\begin{equation}
F_{c, z}(x, b, \cdots, z) = F_{b}(k_x, k_b, c, \forall(z\in\mathbb{R}: z > 0: z=z+i), \cdots, k_z)
\end{equation}
From there we can use the graphical model to help deduce the formulaic runtime with respect to all variables.
Similar to the analysis method with respect to a single variable, we can justify the accuracy by approximately equating the average integrated value of each independent segment with its true, algorithmic counterpart. Note: we define $l$ as the total number of input arguments.
$$
\frac{1}{(n)(l)}\sum_{\forall(x \in \mathbb{N}: x > x_{k})}^{(n)(l)}E(foo(x, b, \cdots, z)) \approx\frac{1}{x_{2} - x_{1}}\int_{x_{1}}^{x_{2}}F_x(x, b, \cdots, z)\partial x + \cdots + \frac{1}{x_{n} - x_{n-1}}\int_{x_{n-1}}^{x_{n}}F_x(x, b, \cdots, z)\partial x
$$
$$
+ \frac{1}{b_{2} - b_{1}}\int_{b_{1}}^{b_{2}}F_b(x, b, \cdots, z)\partial b + \cdots + \frac{1}{b_{n} - b_{n-1}}\int_{b_{n-1}}^{b_{n}}F_b(x, b, \cdots, z)\partial b
$$
\begin{equation}
+ \cdots + \frac{1}{z_{2} - z_{1}}\int_{z_{1}}^{z_{2}}F_z(x, b, \cdots, z)\partial z + \cdots + \frac{1}{z_{n} - z_{n-1}}\int_{z_{n-1}}^{z_{n}}F_z(x, b, \cdots, z)\partial z
\end{equation}
This method can be simplified accordingly:
\begin{equation}
\frac{1}{(n)(l)}\sum_{\forall(x \in \mathbb{N}: x > x_{k})}^{(n)(l)}E(foo(x, b, \cdots, z)) \approx\sum_{x_0}^{z_n}\sum_{\forall(\delta \in \mathbb{R}: x > \delta_{k})}^{n} \frac{1}{\delta_{k} - \delta_{j}}\int_{\delta_{j}}^{\delta_{k}}F(x, b, \cdots, z) \partial \delta
\end{equation}
\subsection{Segmented Quadratics to Construct Non-Polynomials}
The following subsection will discuss the mathematical applications of this method, and will focus on the proofs behind constructing non-polynomials as piece-wise functions built upon segmented quadratics.
\begin{lemma}
Given $n$ values of $(x \in \mathbb{R})$ with corresponding $n$ values of $(y \in \mathbb{R})$ a representative polynomial $P$ can be constructed such that $\deg(P) < n \wedge P(x_k) = y_k$
\end{lemma}
\begin{proof}[Proof of Lemma 2.2]
Let,
\begin{equation}
P_1(x) = \frac{(x - x_2)(x - x_3)\cdots(x - x_n)}{(x_1 - x_2)(x_1 - x_3)\cdots(x_1 - x_n)}
\end{equation}
Therefore, \begin{equation}
P_1(x_1) = 1 \wedge P_1(x_2) = P_1(x_3) = \cdots = P_1(x_n) = 0\end{equation}
Then evaluate, \begin{equation}
P_2, P_3, \cdots, P_n | P_j(x_j) = 1 \wedge P_j(x_i) = 0 \;\forall(i \neq j)\end{equation}
Therefore, $P(x) = \sum_{}^{}y_iP_i(x)$ is a constructed polynomial such that $\forall(x_i\in\mathbb{R}: \exists P(x_i)) \wedge \forall(i \in \mathbb{N}: i < n)$. It is built upon subsidiary polynomials of degree $n-1 \therefore \deg(P) < n$
\end{proof}
\begin{theorem}
Referencing Lemma 1, given any real non-polynomial, an approximate quadratic piece-wise function $F(x)$ can be constructed using $\frac{n}{2}$ segments produced by 3 values, defined over 2 values, of $x \in \mathbb{R}$ and their corresponding outputs such that $F(x)$ is continuous at all x values including respective transition points, but not necessarily differentiable at such values.
\end{theorem}
\begin{proof}[Proof of Theorem 2.3]
Since the initial portion of the polynomial is based upon Lemma 1, it is clear that 3 base points will construct a quadratic polynomial, unless their respective derivatives are equivalent which would produce a sloped line. The following method is shown:
\begin{equation}
F(x) = \begin{cases}
\frac{1}{2}\sum _{0}^{2}y_{j}\prod _{\begin{smallmatrix}0\leq m\leq 2\\m\neq 0\end{smallmatrix}}{\frac {x-x_{m}}{x_{0}-x_{m}}} + \frac{1}{2}\left[\frac{y_2 - y_{1}}{x_{2} - x_{1}}(x-x_{2}) + y_2\right] & x_1 \leq x \leq x_2 \\
\frac{1}{2}\sum _{1}^{3}y_{j}\prod _{\begin{smallmatrix}1\leq m\leq 3\\m\neq 1\end{smallmatrix}}{\frac {x-x_{m}}{x_{1}-x_{m}}} + \frac{1}{2}\left[\frac{y_3 - y_{2}}{x_{3} - x_{2}}(x-x_{3}) + y_3\right] & x_2 \leq x \leq x_3 \\
\cdots & \cdots \\
\frac{1}{2}\sum _{n-2}^{n}y_{j}\prod _{\begin{smallmatrix}n-2\leq m\leq n\\m\neq n-2\end{smallmatrix}}{\frac {x-x_{m}}{x_{n-2}-x_{m}}} + \frac{1}{2}\left[\frac{y_n - y_{n-1}}{x_{n} - x_{n-1}}(x-x_{n}) + y_n\right] & x_{n-1} \leq x \leq x_n \\
\end{cases}
\end{equation}
When simplified, the function would be defined accordingly:
\begin{equation}
F(x) = \begin{cases}
a_0x^2+b_0x+c_0 & x_1 \leq x \leq x_2 \\
a_1x^2+b_1x+c_1 & x_2 \leq x \leq x_3 \\
\cdots & \cdots\\
a_2x^2+b_2x+c_2 & x_{n-1} \leq x \leq x_n \\
\end{cases}
\end{equation}
By definition any polynomial is continuous throughout its designated bounds \cite{Cucker}; therefore, for all values within each segment, $F(x)$ is continuous. And, since the bounded values of each segment are equivalent, we can conclude that the produced function is continuous everywhere. Formally, $\lim_{x \to t^{-}} F(x) = \lim_{x \to t^{+}} F(x) = F(t)$, where $t$ is any bounded point. However, this does not guarantee that $\lim_{x \to t^{-}} \frac{\partial F(x)}{\partial x} = \lim_{x \to t^{+}} \frac{\partial F(x)}{\partial x}$; therefore its derivative at the bounded point is likely undefined.
\end{proof}
\begin{theorem}
Given any segmented quadratic $F(x) = ax^2 + bx + c$ constructed by averaging the Lagrangian interpolation with its respective secant, the graphical concavity can be determined from the sign of the coefficient $a$, such that $(a \in \mathbb{R}) \wedge (b \in \mathbb{R}) \wedge (c \in \mathbb{R})$.
\end{theorem}
\begin{proof}[Proof of Theorem 2.4]
Since $F(x)$ is constructed from three base points, the only polynomial functions produced are segmented quadratics. Upward concavity exists $\forall(x \in (x_j, x_k))|\frac{\partial^2 F}{\partial x^2} > 0$, while downward concavity exists $\forall(x \in (x_j, x_k))|\frac{\partial^2 F}{\partial x^2} < 0$. However, this process can be expedited without the need to compute second derivatives.
\newline
\newline
Since our segmented polynomial is constructed using only three points we can conclude that,
\begin{equation}
\forall(x \in (x_j, x_k)): \{F_x(x) = ax^2 + bx + c \} | \{(a \in \mathbb{R}) \wedge (b \in \mathbb{R}) \wedge (c \in \mathbb{R})\}
\end{equation}
Therefore,
\begin{equation}
\left.\frac{\partial^2 F}{\partial x^2}\right|_{\forall(x \in (x_j, x_k))} = 2a \therefore \iint{2a}\partial x\partial x = F_x(x) \therefore \int_{x_j}^{x_k}\frac{\partial^2 F}{\partial x^2} = \frac{a}{|a|}\left|\int_{x_j}^{x_k}\frac{\partial^2 F}{\partial x^2}\right|
\end{equation}
Thus, the sign of $a$ is the only value needed to determine the concavity of the segment $F_x(x)$. Determining the functional concavity of the Lagrangian construction is important because, for certain functions, primarily those with upward concavity, it may not be necessary to compute secant-line averages.
\end{proof}
When testing the mathematical accuracy $A$ of our approach, we use the segmented average value and compare it to that of the original function $G(x)$. In cases where $\int_{a}^{b}G(x) > \int_{a}^{b}F(x)$ we simply compute the reciprocal of the following function. We use $a$ and $b$ as placeholder variables to represent the segmented bounds.
\begin{equation}
A = \frac{\sum_{0}^n\int_{a}^{b}G(x)}{\sum_{0}^{n}\int_{a}^{b}F(x)}
\end{equation}
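A sketch of this accuracy computation using numerical integration (SciPy's \texttt{quad} is used here purely for illustration; \texttt{G} and \texttt{F} stand for the true function and the constructed model):
\begin{verbatim}
from scipy.integrate import quad

def accuracy_score(G, F, bounds):
    """Ratio of the integrals of the true function G and the constructed
    model F over the listed segment bounds [(a1, b1), (a2, b2), ...];
    the ratio is inverted when it exceeds 1 so the score is always <= 1."""
    total_G = sum(quad(G, a, b)[0] for a, b in bounds)
    total_F = sum(quad(F, a, b)[0] for a, b in bounds)
    ratio = total_G / total_F
    return ratio if ratio <= 1.0 else 1.0 / ratio
\end{verbatim}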
\section{Results}
We divide the results section into the primary perspective (algorithmic implications) and the secondary perspective (pure mathematical implications).
\subsection{Algorithmic Test Results}
We tested our method on four algorithms (two single-variable functions and two multivariable functions) and compared the produced time complexity formulas to the true, known complexities to see how accurate our formulations were and how large any deviations were. The first algorithm (single variable) was a binary search over $x$ elements, with a complexity of $\mathcal{O}(\log x)$. The second (single variable) was a sort of $x$ elements, with a complexity of $\mathcal{O}(x\log x)$. The third (multivariable) was a combined search-sort algorithm of $n$ unsorted elements and $b$ select sorted elements, with a complexity of $\mathcal{O}(b + \log x)$. The fourth (multivariable) was a custom algorithm of $n$ elements, with a complexity of $\mathcal{O}(mx + \log\log x)$. Although coefficients and additional constants appear in the predicted complexity, because time is the only output variable, the only relevant component is the contents of the big-$O$, as it represents the asymptotic behavior of the algorithmic runtime regardless of any confounding variables. For multivariable algorithms, as stated in the Methods, runtime complexities were computed with respect to each variable and combined to form the final predicted complexity.
\begin{table}[H]
\caption{Predicted Runtime Functions}
\centering
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{cccc}
\toprule
\textbf{Complexity} & \textbf{Constructed Polynomial} & \textbf{Predicted Complexity}\\
\midrule
$\mathcal{O}(\log{x})$ & $ F(x) = \begin{cases}
-\frac{17}{2400000}x^2+\frac{203}{120000}x+\frac{1947}{6400} & 45 \leq x \leq 95 \\
-\frac{3}{2200000}x^2+\frac{149}{220000}x+\frac{6133}{17600} & 95 \leq x \leq 205 \\
-\frac{151}{574200000}x^2+\frac{15289}{57420000}x+\frac{270373}{696000} & 205 \leq x \leq 385 \\
\end{cases}$ & $\frac{1}{11}\mathcal{O}(\log{x}) + 0.22$\\\\
$\mathcal{O}(x\log{x})$ & $ F(x) = \begin{cases}
\frac{19}{625000}x^2+\frac{4}{3125}x+\frac{69}{500} & 50 \leq x \leq 100 \\
\frac{3}{312500}x^2+\frac{123}{25000}x-\frac{9}{500} & 100 \leq x \leq 125 \\
\frac{19}{625000}x^2+\frac{4}{3125}x+\frac{69}{500} & 125 \leq x \leq 150 \\
\end{cases}$ & $\frac{1}{350}\mathcal{O}(x\log{}x)$\\\\
$\mathcal{O}(mx\log{x})$ & $\begin{cases} F_x(x, m) = \begin{cases}
\frac{19}{625000}x^2+\frac{4}{3125}x+\frac{69}{500} & 50 \leq x \leq 100 \\
\frac{3}{312500}x^2+\frac{123}{25000}x-\frac{9}{500} & 100 \leq x \leq 125 \\
\frac{19}{625000}x^2+\frac{4}{3125}x+\frac{69}{500} & 125 \leq x \leq 150 \\
\end{cases}\\ F_m(x, m) = \begin{cases}
1892m & 0 \leq m \leq 10 \\
1837m - 33461 & 10 \leq m \leq 20 \\
2066m - 56184 & 20 \leq m \leq 30 \\
\end{cases} \end{cases}$ & $\frac{1159}{3750}\mathcal{O}(mx\log{x})$ \\\\
$\mathcal{O}(mx+\log\log{b})$ & $\begin{cases} F_x(x, m, b) = \begin{cases}
\frac{19}{625000}x^2+\frac{4}{3125}x+\frac{69}{500} & 50 \leq x \leq 100 \\
\frac{3}{312500}x^2+\frac{123}{25000}x-\frac{9}{500} & 100 \leq x \leq 125 \\
\frac{19}{625000}x^2+\frac{4}{3125}x+\frac{69}{500} & 125 \leq x \leq 150 \\
\end{cases} \\ F_m(x, m, b) = \begin{cases}
1892m & 0 \leq m \leq 10 \\
1837m - 33461 & 10 \leq m \leq 20 \\
2066m - 56184 & 20 \leq m \leq 30 \\
\end{cases}\\F_b(x, m, b) = \begin{cases}
-\frac{13}{12000000}x^2 + \frac{863}{1200000}x + \frac{138949}{160000} & 5 \leq x \leq 305 \\
-\frac{3}{8000000}x^2 + \frac{363}{800000}x + \frac{283677}{320000} & 305 \leq x \leq 505 \\
-\frac{1}{15000000}x^2 + \frac{51}{250000}x + \frac{560389}{600000} & 505 \leq x \leq 705 \\
\end{cases} \end{cases}$ & $\mathcal{O}(mx\log{}x+\log\log{b}) + 0.06$\\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\subsection{Mathematical Test Results}
We tested our method against various non-polynomial functions and selected one example of each common type of non-polynomial to be representative of the accuracy within its functional family. We made sure to remove any non-composite components, such as additive constants, as they would only shift the function by a graphical translation. The functions used were $\log_2 x$ (logarithm family), $\frac{x-1}{x}$ (rational family), $2^x$ (exponential family), and $\cos(\pi x)$ (trigonometric family); although we could have chosen more convoluted functions, we wanted to showcase performance for functions close to their parent functions to obtain a holistic perspective. In most cases, the Lagrangian constructions tend to exceed the vertical level of their respective non-polynomials, making secant-line averages most useful with downward concavity. We defined our piece-wise function up to some relatively close, arbitrary whole value that keeps the numbers simple, to stay consistent; however, the accuracy is still indicative of potential performance regardless of the final bound, due to the natural progression of such functions.
\begin{table}[H]
\caption{Accuracy of Constructed Polynomials Compared to Base Function}
\centering
\begin{tabular}{ccc}
\toprule
\textbf{Function} & \textbf{Constructed Polynomial} & \textbf{Calculated Accuracy}\\
\midrule
$\log_2x$ & $ F(x) = \begin{cases}
-\frac{1}{196}x^2+\frac{1}{4}x+\frac{4}{3} & 8 \leq x \leq 16 \\
-\frac{1}{768}x^2+\frac{1}{8}x+\frac{7}{3} & 16 \leq x \leq 32 \\
-\frac{1}{3072}x^2+\frac{1}{16}x+\frac{10}{3} & 32 \leq x \leq 64 \\
\end{cases}$ & $99.964\%$\\\\
$\cos(\pi{}x)$ & $ F(x) = \begin{cases}
-\frac{414}{125}x^2-\frac{43}{125}x+1 & 0 \leq x \leq 0.5 \\
\frac{414}{125}x^2-\frac{871}{125}x+ \frac{332}{125} & 0.5 \leq x \leq 1 \\
\frac{414}{125}x^2-\frac{157}{25}x+ \frac{246}{125} & 1 \leq x \leq 1.5 \\
\end{cases}$ & $99.79\%$\\\\
$2^x$ & $ F(x) = \begin{cases}
2x^2-6x+8 & 3 \leq x \leq 4 \\
4x^2-20x+32 & 4 \leq x \leq 5 \\
8x^2-56x+112 & 5 \leq x \leq 6 \\
\end{cases}$ & $98.92\%$\\\\
$\frac{x-1}{x}$ & $F(x) = \begin{cases}
-\frac{1}{16}x^2+\frac{1}{2}x+\frac{1}{4} & 2 \leq x \leq 4 \\
-\frac{1}{128}x^2+\frac{1}{8}x+\frac{3}{8} & 4 \leq x \leq 8 \\
-\frac{1}{1024}x^2+\frac{1}{32}x+\frac{11}{16} & 8 \leq x \leq 16 \\
\end{cases}$ & $99.34\%$\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\subfloat[Without Partitions]{\includegraphics[width=0.32\textwidth]{LogFunction1.jpg}\label{fig:f1}}
\hfill
\centering
\subfloat[With Partitions]{\includegraphics[width=0.32\textwidth]{LogFunction2.jpg}\label{fig:f2}}
\caption{Graphical Representations of $\log_2x$ and $F(x)$}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[Without Partitions]{\includegraphics[width=0.32\textwidth]{trigFunction1.jpg}\label{fig:f1}}
\hfill
\centering
\subfloat[With Partitions]{\includegraphics[width=0.32\textwidth]{trigFunction2.jpg}\label{fig:f2}}
\caption{Graphical Representations of $\cos(\pi{}x)$ and $F(x)$}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[Without Partitions]{\includegraphics[width=0.32\textwidth]{expFunc1.jpg}\label{fig:f1}}
\hfill
\centering
\subfloat[With Partitions]{\includegraphics[width=0.32\textwidth]{expFunc2.jpg}\label{fig:f2}}
\caption{Graphical Representations of $2^x$ and $F(x)$}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[Without Partitions]{\includegraphics[width=0.32\textwidth]{rationalFunc1.jpg}\label{fig:f1}}
\hfill
\centering
\subfloat[With Partitions]{\includegraphics[width=0.32\textwidth]{rationalFunc2.jpg}\label{fig:f2}}
\caption{Graphical Representations of $\frac{x-1}{x}$ and $F(x)$}
\end{figure}
\section{Discussion and Implications}
\subsection{Algorithmic Discussion}
After testing the proposed approach against several known algorithms, we were able to swiftly determine the runtime functions that correspond to their true time complexities. We tested the approach on two single-variable algorithms and two multivariable algorithms so that we could compare the produced complexity behaviour with the true, known complexity. In practice, this method will be used on algorithms whose complexities are unknown, to help determine their runtime functions; experimentally, however, we needed to know the true complexity beforehand to deduce the comparative accuracy. Regardless, in all cases our method was able to produce the correct big-$O$ runtime function. This was determined through the automated construction of segmented polynomial models given a set of input data. By treating each variable independently and graphing their grouped correlation, it was easy to deduce the respective time complexity. To reiterate, any external coefficients and constants are a result of the particular test environment and of time being the output value. The only relevant component in determining the accuracy of the method is the contents of the big-$O$ function. Most of the predicted runtime complexity functions followed the format $k\mathcal{O}(T(n)) + C$, where $k$ is the constant of proportionality between execution time and standard time complexity and $C$ is a translation factor that matches the produced graphical curve/line with its true counterpart. While these values help us overlay our constructions on their parent functions, they are not important in determining the accuracy of our approach, as the asymptotic behavior of our construction is the same regardless. We are confident that the proposed method can significantly expedite the process of determining functional time complexities in all cases, including both single- and multivariable algorithms.
\subsection{Mathematical Discussion}
After reviewing the results, we were able to confirm the accuracy of the proposed approach in constructing matching segmented, differentiable quadratics for non-polynomial functions, including logarithmic, exponential, trigonometric, and rational functions. To quantify the accuracy for the selected functions, we calculated the average value of the formulated function over each segment; doing so, and reviewing the formulaic relationships between the computed segments, yielded a collective functional resemblance score greater than $99\%$ and revealed striking mathematical structure. After fitting only a few data points, we can state a rule that constructs the next consecutive segmented polynomial from the functional patterns that surface. For $\log_2x$, for example, every consecutive segment was equivalent to $\frac{a}{4}x^2 + \frac{b}{2}x + (c+1)$, where $a$, $b$, and $c$ are the coefficients of the previous segment, and the accuracy of any additional segments remains essentially unchanged. Beyond providing accurate polynomial replications, the sheer simplicity of the construction avoids flaws found in leading methods for the same task (primarily with non-sinusoidal functions), most notably Taylor series. Using these methods, mathematicians and scientists can construct accurate, differentiable functions to represent patterned data, non-polynomial functions, and functions found in higher theoretical dimensions. A similar approach could also be used to model the natural progression of recurrent systems such as natural disasters, planetary orbits, or pandemic-related death tolls, leading to a better understanding of their nature, since in principle their physical attributes and properties are built upon recurring natural functions.
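To make the construction concrete, the sketch below builds one quadratic per doubling interval of $\log_2 x$ by Lagrange interpolation through three nodes per segment. The node placement used here (the two segment endpoints plus the midpoint) is an assumption made for illustration; other choices of interpolation points yield different coefficients, so the output need not reproduce the exact coefficients listed in the table above.
\begin{verbatim}
import numpy as np

def lagrange_quadratic(xs, ys):
    """Coefficients (a, b, c) of the quadratic a*x^2 + b*x + c through three points."""
    A = np.vander(np.asarray(xs, dtype=float), 3)   # columns: x^2, x, 1
    a, b, c = np.linalg.solve(A, np.asarray(ys, dtype=float))
    return a, b, c

def segmented_fit(f, edges):
    """One quadratic per segment [edges[i], edges[i+1]], interpolating f at the
    segment endpoints and midpoint (node placement is an illustrative assumption)."""
    segments = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        xs = [lo, 0.5 * (lo + hi), hi]
        segments.append(((lo, hi), lagrange_quadratic(xs, [f(x) for x in xs])))
    return segments

# Example: piecewise quadratics for log2(x) on the doubling intervals [2,4], [4,8], [8,16].
for (lo, hi), (a, b, c) in segmented_fit(np.log2, [2, 4, 8, 16]):
    print(f"[{lo:>2}, {hi:>2}]:  {a:+.6f} x^2  {b:+.6f} x  {c:+.6f}")
\end{verbatim}
Once a few such segments are in hand, a recurrence between consecutive coefficient triples, such as the one stated above for $\log_2x$, can be read off directly and used to extend the construction without further interpolation.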
\subsection{Conclusion}
In this paper we proposed an approach that uses segmented quadratic construction, based on the principles of Lagrange interpolation, to help determine algorithmic runtimes and to model non-polynomial functions, with foreseeable applications in pure mathematics and in pattern modeling and recognition across science and nature. We hope to build upon this approach by refining it and by finding new ways to apply this research across computational and mathematical fields.
\section*{Acknowledgments}
I would like to thank Professor Jeffery Ullman, Mr. Sudhir Kamath, Mr. Robert Gendron, Mr. Phillip Nho, and Ms. Katie MacDougall for their continual support with my research work.
|
{
"timestamp": "2020-12-04T02:00:12",
"yymm": "2012",
"arxiv_id": "2012.01420",
"language": "en",
"url": "https://arxiv.org/abs/2012.01420"
}